Add tests for all RV32I instructions
This commit is contained in:
parent
0e0d4b6e42
commit
c073411bd2
README.md
@@ -5,6 +5,14 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen
## Dependencies

- [nMigen](https://github.com/m-labs/nmigen)

Note that:

1. Even though this project includes a copy of nMigen, it is highly recommended to remove that copy and install the latest version of nMigen on your system before building this repo
1. The nMigen package that comes with the [Nix](https://nixos.org/features.html) package manager may not contain the latest changes and therefore may not work with this repo
1. If you do choose to keep the copy of nMigen provided in this repo anyway, you may need to separately install its dependencies for the build to work:
   - [setuptools](https://pypi.org/project/setuptools/)
   - [pyvcd](https://pypi.org/project/pyvcd/)
- [Yosys](https://github.com/YosysHQ/yosys)
- [SymbiYosys](https://github.com/YosysHQ/SymbiYosys)
@@ -12,9 +20,10 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen

| Directory | Description |
| --- | --- |
| `nmigen` | [nMigen](https://github.com/m-labs/nmigen) |
| `rvfi` | RISC-V Formal Verification Framework (nMigen port) |
| `rvfi/insns` | Supported RISC-V instructions and ISAs |
| `rvfi/cores` | Example cores to be integrated with riscv-formal-nmigen (WIP) |
| `rvfi/cores` | Example cores for verification with riscv-formal-nmigen |
| `rvfi/cores/minerva` | The [Minerva](https://github.com/lambdaconcept/minerva) core |

## Build
@@ -27,6 +36,8 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen
$ python -m rvfi.cores.minerva.verify
```

You may see some warning messages about unused `Elaboratable`s and deprecated use of `Simulation`, which can be safely ignored.

## Scope

Support for the RV32I base ISA and the RV32M extension is planned and well underway. Support for other ISAs in the original riscv-formal, such as RV32C, and their 64-bit counterparts may also be added in the future as time permits.
@@ -0,0 +1,29 @@
|
|||
Copyright (C) 2011-2020 M-Labs Limited
|
||||
Copyright (C) 2020 whitequark
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification,
|
||||
are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
|
||||
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
|
||||
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
|
||||
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
|
||||
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
|
||||
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
|
||||
Other authors retain ownership of their contributions. If a submission can
|
||||
reasonably be considered independently copyrightable, it's yours and we
|
||||
encourage you to claim it with appropriate copyright notices. This submission
|
||||
then falls under the "otherwise noted" category. All submissions are strongly
|
||||
encouraged to use the two-clause BSD license reproduced above.
|
|
@@ -0,0 +1,84 @@
|
|||
# nMigen
|
||||
|
||||
## A refreshed Python toolbox for building complex digital hardware
|
||||
|
||||
**Although nMigen is incomplete and in active development, it can already be used for real-world designs. The nMigen language (`nmigen.hdl.ast`, `nmigen.hdl.dsl`) will not undergo incompatible changes. The nMigen standard library (`nmigen.lib`) and build system (`nmigen.build`) will undergo minimal changes before their design is finalized.**
|
||||
|
||||
Despite being faster than schematics entry, hardware design with Verilog and VHDL remains tedious and inefficient for several reasons. The event-driven model introduces issues and manual coding that are unnecessary for synchronous circuits, which represent the lion's share of today's logic designs. Counterintuitive arithmetic rules result in steeper learning curves and provide a fertile ground for subtle bugs in designs. Finally, support for procedural generation of logic (metaprogramming) through "generate" statements is very limited and restricts the ways code can be made generic, reused and organized.
|
||||
|
||||
To address those issues, we have developed the *nMigen FHDL*, a library that replaces the event-driven paradigm with the notions of combinatorial and synchronous statements, has arithmetic rules that make integers always behave like mathematical integers, and most importantly allows the design's logic to be constructed by a Python program. This last point enables hardware designers to take advantage of the richness of the Python language—object oriented programming, function parameters, generators, operator overloading, libraries, etc.—to build well organized, reusable and elegant designs.
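To make the preceding paragraph concrete, here is roughly what a small nMigen design looks like (an illustrative sketch, not part of this repository; the `Counter` module and its signal names are made up):

```python
from nmigen import Elaboratable, Module, Signal

class Counter(Elaboratable):
    """A free-running counter with a combinatorial overflow flag."""
    def __init__(self, width=8):
        self.count = Signal(width)
        self.ovf   = Signal()

    def elaborate(self, platform):
        m = Module()
        # Synchronous statement: registered in the default "sync" clock domain.
        m.d.sync += self.count.eq(self.count + 1)
        # Combinatorial statement: plain logic, no storage.
        m.d.comb += self.ovf.eq(self.count == 2**len(self.count) - 1)
        return m
```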
|
||||
|
||||
Other nMigen libraries are built on FHDL and provide various tools and logic cores. nMigen also contains a simulator that allows test benches to be written in Python.
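A matching test bench can then drive the `Counter` sketched above from ordinary Python code (again a sketch, assuming the `nmigen.back.pysim` simulator module shipped with the version vendored here):

```python
from nmigen.back.pysim import Simulator

dut = Counter(width=4)
sim = Simulator(dut)
sim.add_clock(1e-6)  # drive the "sync" domain with a 1 MHz clock

def process():
    saw_ovf = False
    for _ in range(20):
        yield                   # advance the simulation by one clock cycle
        if (yield dut.ovf):     # read a signal value out of the simulation
            saw_ovf = True
    assert saw_ovf              # a 4-bit counter must overflow within 20 cycles

sim.add_sync_process(process)
with sim.write_vcd("counter.vcd"):
    sim.run()
```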
|
||||
|
||||
See the [doc/](doc/) folder for more technical information.
|
||||
|
||||
nMigen is based on [Migen][], a hardware description language developed by [M-Labs][]. Although Migen works very well in production, its design could be improved in many fundamental ways, and nMigen reimplements Migen concepts from scratch to do so. nMigen also provides an extensive [compatibility layer](#migration-from-migen) that makes it possible to build and simulate most Migen designs unmodified, as well as integrate modules written for Migen and nMigen.
|
||||
|
||||
The development of nMigen has been supported by [M-Labs][] and [LambdaConcept][].
|
||||
|
||||
[migen]: https://m-labs.hk/migen
|
||||
[yosys]: http://www.clifford.at/yosys/
|
||||
[m-labs]: https://m-labs.hk
|
||||
[lambdaconcept]: http://lambdaconcept.com/
|
||||
|
||||
### HLS?
|
||||
|
||||
nMigen is *not* a "Python-to-FPGA" conventional high level synthesis (HLS) tool. It will *not* take a Python program as input and generate a hardware implementation of it. In nMigen, the Python program is executed by a regular Python interpreter, and it emits explicit statements in the FHDL domain-specific language. Writing a conventional HLS tool that uses nMigen as an internal component might be a good idea, on the other hand :)
|
||||
|
||||
### Installation
|
||||
|
||||
nMigen requires Python 3.6 (or newer), [Yosys][] 0.9 (or newer), as well as a device-specific toolchain.
|
||||
|
||||
pip install git+https://github.com/m-labs/nmigen.git
|
||||
pip install git+https://github.com/m-labs/nmigen-boards.git
|
||||
|
||||
### Introduction
|
||||
|
||||
TBD
|
||||
|
||||
### Supported devices
|
||||
|
||||
nMigen can be used to target any FPGA or ASIC process that accepts behavioral Verilog-2001 as input. It also offers extended support for many FPGA families, providing toolchain integration, abstractions for device-specific primitives, and more. Specifically:
|
||||
|
||||
* Lattice iCE40 (toolchains: **Yosys+nextpnr**, LSE-iCECube2, Synplify-iCECube2);
|
||||
* Lattice MachXO2 (toolchains: Diamond);
|
||||
* Lattice ECP5 (toolchains: **Yosys+nextpnr**, Diamond);
|
||||
* Xilinx Spartan 3A (toolchains: ISE);
|
||||
* Xilinx Spartan 6 (toolchains: ISE);
|
||||
* Xilinx 7-series (toolchains: Vivado);
|
||||
* Xilinx UltraScale (toolchains: Vivado);
|
||||
* Intel (toolchains: Quartus).
|
||||
|
||||
FOSS toolchains are listed in **bold**.
|
||||
|
||||
### Migration from [Migen][]
|
||||
|
||||
If you are already familiar with [Migen][], the good news is that nMigen provides a comprehensive Migen compatibility layer! An existing Migen design can be synthesized and simulated with nMigen in three steps:
|
||||
|
||||
1. Replace all `from migen import <...>` statements with `from nmigen.compat import <...>`.
|
||||
2. Replace every explicit mention of the default `sys` clock domain with the new default `sync` clock domain. E.g. `ClockSignal("sys")` is changed to `ClockSignal("sync")`.
|
||||
3. Migrate from Migen build/platform system to nMigen build/platform system. nMigen does not provide a build/platform compatibility layer because both the board definition files and the platform abstraction differ too much.
|
||||
|
||||
Note that nMigen will **not** produce the exact same RTL as Migen did. nMigen has been built to allow you to take advantage of the new and improved functionality it has (such as producing hierarchical RTL) while making migration as painless as possible.
|
||||
|
||||
Once your design passes verification with nMigen, you can migrate it to the nMigen syntax one module at a time. Migen modules can be added to nMigen modules and vice versa, so there is no restriction on the order of migration, either.
|
||||
|
||||
### Community
|
||||
|
||||
nMigen discussions take place on the M-Labs IRC channel, [#m-labs at freenode.net](https://webchat.freenode.net/?channels=m-labs). Feel free to join to ask questions about using nMigen or discuss ongoing development of nMigen and its related projects.
|
||||
|
||||
### License
|
||||
|
||||
nMigen is released under the very permissive two-clause BSD license. Under the terms of this license, you are authorized to use nMigen for closed-source proprietary designs.
|
||||
|
||||
Even though we do not require you to do so, these things are awesome, so please do them if possible:
|
||||
* tell us that you are using nMigen
|
||||
* put the [nMigen logo](doc/nmigen_logo.svg) on the page of a product using it, with a link to https://nmigen.org
|
||||
* cite nMigen in publications related to research it has helped
|
||||
* send us feedback and suggestions for improvements
|
||||
* send us bug reports when something goes wrong
|
||||
* send us the modifications and improvements you have done to nMigen as pull requests on GitHub
|
||||
|
||||
See LICENSE file for full copyright and license info.
|
||||
|
||||
"Electricity! It's like magic!"
|
|
@@ -0,0 +1,20 @@
|
|||
import pkg_resources
|
||||
try:
|
||||
__version__ = pkg_resources.get_distribution(__name__).version
|
||||
except pkg_resources.DistributionNotFound:
|
||||
pass
|
||||
|
||||
|
||||
from .hdl import *
|
||||
|
||||
|
||||
__all__ = [
|
||||
"Shape", "unsigned", "signed",
|
||||
"Value", "Const", "C", "Mux", "Cat", "Repl", "Array", "Signal", "ClockSignal", "ResetSignal",
|
||||
"Module",
|
||||
"ClockDomain",
|
||||
"Elaboratable", "Fragment", "Instance",
|
||||
"Memory",
|
||||
"Record",
|
||||
"DomainRenamer", "ResetInserter", "EnableInserter",
|
||||
]
|
@@ -0,0 +1,37 @@
|
|||
import os
|
||||
import shutil
|
||||
|
||||
|
||||
__all__ = ["ToolNotFound", "tool_env_var", "has_tool", "require_tool"]
|
||||
|
||||
|
||||
class ToolNotFound(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def tool_env_var(name):
|
||||
return name.upper().replace("-", "_")
|
||||
|
||||
|
||||
def _get_tool(name):
|
||||
return os.environ.get(tool_env_var(name), name)
|
||||
|
||||
|
||||
def has_tool(name):
|
||||
return shutil.which(_get_tool(name)) is not None
|
||||
|
||||
|
||||
def require_tool(name):
|
||||
env_var = tool_env_var(name)
|
||||
path = _get_tool(name)
|
||||
if shutil.which(path) is None:
|
||||
if env_var in os.environ:
|
||||
raise ToolNotFound("Could not find required tool {} in {} as "
|
||||
"specified via the {} environment variable".
|
||||
format(name, path, env_var))
|
||||
else:
|
||||
raise ToolNotFound("Could not find required tool {} in PATH. Place "
|
||||
"it directly in PATH or specify path explicitly "
|
||||
"via the {} environment variable".
|
||||
format(name, env_var))
|
||||
return path
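# Usage sketch (illustrative, not part of the original file):
#
#   yosys = require_tool("yosys")      # resolved via the YOSYS env var, else PATH
#   if has_tool("nextpnr-ice40"):      # likewise checked via NEXTPNR_ICE40, else PATH
#       ...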
|
|
@@ -0,0 +1,45 @@
|
|||
import sys
|
||||
import warnings
|
||||
|
||||
from ._utils import get_linter_option
|
||||
|
||||
|
||||
__all__ = ["UnusedMustUse", "MustUse"]
|
||||
|
||||
|
||||
class UnusedMustUse(Warning):
|
||||
pass
|
||||
|
||||
|
||||
class MustUse:
|
||||
_MustUse__silence = False
|
||||
_MustUse__warning = UnusedMustUse
|
||||
|
||||
def __new__(cls, *args, src_loc_at=0, **kwargs):
|
||||
frame = sys._getframe(1 + src_loc_at)
|
||||
self = super().__new__(cls)
|
||||
self._MustUse__used = False
|
||||
self._MustUse__context = dict(
|
||||
filename=frame.f_code.co_filename,
|
||||
lineno=frame.f_lineno,
|
||||
source=self)
|
||||
return self
|
||||
|
||||
def __del__(self):
|
||||
if self._MustUse__silence:
|
||||
return
|
||||
if hasattr(self, "_MustUse__used") and not self._MustUse__used:
|
||||
if get_linter_option(self._MustUse__context["filename"],
|
||||
self._MustUse__warning.__name__, bool, True):
|
||||
warnings.warn_explicit(
|
||||
"{!r} created but never used".format(self), self._MustUse__warning,
|
||||
**self._MustUse__context)
|
||||
|
||||
|
||||
_old_excepthook = sys.excepthook
|
||||
def _silence_elaboratable(type, value, traceback):
|
||||
# Don't show anything if the interpreter crashed; that'd just obscure the exception
|
||||
# traceback instead of helping.
|
||||
MustUse._MustUse__silence = True
|
||||
_old_excepthook(type, value, traceback)
|
||||
sys.excepthook = _silence_elaboratable
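# Usage sketch (illustrative, not part of the original file): a class opts into the
# "created but never used" warning by subclassing MustUse and choosing a warning
# category; nMigen's Elaboratable follows this pattern.
#
#   class UnusedThing(UnusedMustUse):
#       pass
#
#   class Thing(MustUse):
#       _MustUse__warning = UnusedThing
#
# A Thing instance that is garbage-collected without _MustUse__used being set to True
# emits an UnusedThing warning pointing at the line where the instance was created.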
|
|
@@ -0,0 +1,116 @@
|
|||
import contextlib
|
||||
import functools
|
||||
import warnings
|
||||
import linecache
|
||||
import re
|
||||
from collections import OrderedDict
|
||||
from collections.abc import Iterable
|
||||
from contextlib import contextmanager
|
||||
|
||||
from .utils import *
|
||||
|
||||
|
||||
__all__ = ["flatten", "union" , "log2_int", "bits_for", "memoize", "final", "deprecated",
|
||||
"get_linter_options", "get_linter_option"]
|
||||
|
||||
|
||||
def flatten(i):
|
||||
for e in i:
|
||||
if isinstance(e, Iterable):
|
||||
yield from flatten(e)
|
||||
else:
|
||||
yield e
|
||||
|
||||
|
||||
def union(i, start=None):
|
||||
r = start
|
||||
for e in i:
|
||||
if r is None:
|
||||
r = e
|
||||
else:
|
||||
r |= e
|
||||
return r
|
||||
|
||||
|
||||
def memoize(f):
|
||||
memo = OrderedDict()
|
||||
@functools.wraps(f)
|
||||
def g(*args):
|
||||
if args not in memo:
|
||||
memo[args] = f(*args)
|
||||
return memo[args]
|
||||
return g
|
||||
|
||||
|
||||
def final(cls):
|
||||
def init_subclass():
|
||||
raise TypeError("Subclassing {}.{} is not supported"
|
||||
.format(cls.__module__, cls.__name__))
|
||||
cls.__init_subclass__ = init_subclass
|
||||
return cls
|
||||
|
||||
|
||||
def deprecated(message, stacklevel=2):
|
||||
def decorator(f):
|
||||
@functools.wraps(f)
|
||||
def wrapper(*args, **kwargs):
|
||||
warnings.warn(message, DeprecationWarning, stacklevel=stacklevel)
|
||||
return f(*args, **kwargs)
|
||||
return wrapper
|
||||
return decorator
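# Usage sketches (illustrative, not part of the original file):
#
#   @memoize
#   def popcount_table(width):          # hypothetical helper; results cached per argument
#       return [bin(i).count("1") for i in range(2 ** width)]
#
#   @deprecated("instead of `old_api()`, use `new_api()`")
#   def old_api():                      # calling this now emits a DeprecationWarning
#       ...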
|
||||
|
||||
|
||||
def _ignore_deprecated(f=None):
|
||||
if f is None:
|
||||
@contextlib.contextmanager
|
||||
def context_like():
|
||||
with warnings.catch_warnings():
|
||||
warnings.filterwarnings(action="ignore", category=DeprecationWarning)
|
||||
yield
|
||||
return context_like()
|
||||
else:
|
||||
@functools.wraps(f)
|
||||
def decorator_like(*args, **kwargs):
|
||||
with warnings.catch_warnings():
|
||||
warnings.filterwarnings(action="ignore", category=DeprecationWarning)
|
||||
f(*args, **kwargs)
|
||||
return decorator_like
|
||||
|
||||
|
||||
def extend(cls):
|
||||
def decorator(f):
|
||||
if isinstance(f, property):
|
||||
name = f.fget.__name__
|
||||
else:
|
||||
name = f.__name__
|
||||
setattr(cls, name, f)
|
||||
return decorator
|
||||
|
||||
|
||||
def get_linter_options(filename):
|
||||
first_line = linecache.getline(filename, 1)
|
||||
if first_line:
|
||||
match = re.match(r"^#\s*nmigen:\s*((?:\w+=\w+\s*)(?:,\s*\w+=\w+\s*)*)\n$", first_line)
|
||||
if match:
|
||||
return dict(map(lambda s: s.strip().split("=", 2), match.group(1).split(",")))
|
||||
return dict()
|
||||
|
||||
|
||||
def get_linter_option(filename, name, type, default):
|
||||
options = get_linter_options(filename)
|
||||
if name not in options:
|
||||
return default
|
||||
|
||||
option = options[name]
|
||||
if type is bool:
|
||||
if option in ("1", "yes", "enable"):
|
||||
return True
|
||||
if option in ("0", "no", "disable"):
|
||||
return False
|
||||
return default
|
||||
if type is int:
|
||||
try:
|
||||
return int(option, 0)
|
||||
except ValueError:
|
||||
return default
|
||||
assert False
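# Usage sketch (illustrative, not part of the original file): linter options are read
# from a magic comment on the very first line of a source file, e.g.
#
#   # nmigen: UnusedMustUse=no
#
# after which get_linter_option(filename, "UnusedMustUse", bool, True) returns False,
# silencing the warning that MustUse.__del__ would otherwise emit for that file.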
|
|
@@ -0,0 +1,2 @@
|
|||
from .hdl.ast import AnyConst, AnySeq, Assert, Assume, Cover
|
||||
from .hdl.ast import Past, Stable, Rose, Fell, Initial
|
@@ -0,0 +1,77 @@
|
|||
import os
|
||||
import re
|
||||
import subprocess
|
||||
|
||||
from .._toolchain import *
|
||||
from . import rtlil
|
||||
|
||||
|
||||
__all__ = ["YosysError", "convert", "convert_fragment"]
|
||||
|
||||
|
||||
class YosysError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def _yosys_version():
|
||||
yosys_path = require_tool("yosys")
|
||||
version = subprocess.check_output([yosys_path, "-V"], encoding="utf-8")
|
||||
m = re.match(r"^Yosys ([\d.]+)(?:\+(\d+))?", version)
|
||||
tag, offset = m[1], m[2] or 0
|
||||
return tuple(map(int, tag.split("."))), offset
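# For example (illustrative): "Yosys 0.9+2406 (git sha1 ...)" parses to ((0, 9), "2406"),
# while a release build reporting just "Yosys 0.9" parses to ((0, 9), 0).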
|
||||
|
||||
|
||||
def _convert_rtlil_text(rtlil_text, *, strip_internal_attrs=False, write_verilog_opts=()):
|
||||
version, offset = _yosys_version()
|
||||
if version < (0, 9):
|
||||
raise YosysError("Yosys {}.{} is not supported".format(*version))
|
||||
|
||||
attr_map = []
|
||||
if strip_internal_attrs:
|
||||
attr_map.append("-remove generator")
|
||||
attr_map.append("-remove top")
|
||||
attr_map.append("-remove src")
|
||||
attr_map.append("-remove nmigen.hierarchy")
|
||||
attr_map.append("-remove nmigen.decoding")
|
||||
|
||||
script = """
|
||||
# Convert nMigen's RTLIL to readable Verilog.
|
||||
read_ilang <<rtlil
|
||||
{}
|
||||
rtlil
|
||||
{prune}delete w:$verilog_initial_trigger
|
||||
{prune}proc_prune
|
||||
proc_init
|
||||
proc_arst
|
||||
proc_dff
|
||||
proc_clean
|
||||
memory_collect
|
||||
attrmap {attr_map}
|
||||
attrmap -modattr {attr_map}
|
||||
write_verilog -norename {write_verilog_opts}
|
||||
""".format(rtlil_text,
|
||||
prune="# " if version == (0, 9) and offset == 0 else "",
|
||||
attr_map=" ".join(attr_map),
|
||||
write_verilog_opts=" ".join(write_verilog_opts),
|
||||
)
|
||||
|
||||
popen = subprocess.Popen([require_tool("yosys"), "-q", "-"],
|
||||
stdin=subprocess.PIPE,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE,
|
||||
encoding="utf-8")
|
||||
verilog_text, error = popen.communicate(script)
|
||||
if popen.returncode:
|
||||
raise YosysError(error.strip())
|
||||
else:
|
||||
return verilog_text
|
||||
|
||||
|
||||
def convert_fragment(*args, strip_internal_attrs=False, **kwargs):
|
||||
rtlil_text, name_map = rtlil.convert_fragment(*args, **kwargs)
|
||||
return _convert_rtlil_text(rtlil_text, strip_internal_attrs=strip_internal_attrs), name_map
|
||||
|
||||
|
||||
def convert(*args, strip_internal_attrs=False, **kwargs):
|
||||
rtlil_text = rtlil.convert(*args, **kwargs)
|
||||
return _convert_rtlil_text(rtlil_text, strip_internal_attrs=strip_internal_attrs)
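# Usage sketch (illustrative, not part of the original file):
#
#   from nmigen.back import verilog
#   design = Blinker()                                  # hypothetical elaboratable
#   print(verilog.convert(design, ports=[design.led]))  # emit Verilog-2001 text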
|
|
@@ -0,0 +1,3 @@
|
|||
from .dsl import *
|
||||
from .res import ResourceError
|
||||
from .plat import *
|
|
@@ -0,0 +1,260 @@
|
|||
from collections import OrderedDict
|
||||
|
||||
|
||||
__all__ = ["Pins", "PinsN", "DiffPairs", "DiffPairsN",
|
||||
"Attrs", "Clock", "Subsignal", "Resource", "Connector"]
|
||||
|
||||
|
||||
class Pins:
|
||||
def __init__(self, names, *, dir="io", invert=False, conn=None, assert_width=None):
|
||||
if not isinstance(names, str):
|
||||
raise TypeError("Names must be a whitespace-separated string, not {!r}"
|
||||
.format(names))
|
||||
names = names.split()
|
||||
|
||||
if conn is not None:
|
||||
conn_name, conn_number = conn
|
||||
if not (isinstance(conn_name, str) and isinstance(conn_number, (int, str))):
|
||||
raise TypeError("Connector must be None or a pair of string (connector name) and "
|
||||
"integer/string (connector number), not {!r}"
|
||||
.format(conn))
|
||||
names = ["{}_{}:{}".format(conn_name, conn_number, name) for name in names]
|
||||
|
||||
if dir not in ("i", "o", "io", "oe"):
|
||||
raise TypeError("Direction must be one of \"i\", \"o\", \"oe\", or \"io\", not {!r}"
|
||||
.format(dir))
|
||||
|
||||
if assert_width is not None and len(names) != assert_width:
|
||||
raise AssertionError("{} names are specified ({}), but {} names are expected"
|
||||
.format(len(names), " ".join(names), assert_width))
|
||||
|
||||
self.names = names
|
||||
self.dir = dir
|
||||
self.invert = bool(invert)
|
||||
|
||||
def __len__(self):
|
||||
return len(self.names)
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.names)
|
||||
|
||||
def map_names(self, mapping, resource):
|
||||
mapped_names = []
|
||||
for name in self.names:
|
||||
while ":" in name:
|
||||
if name not in mapping:
|
||||
raise NameError("Resource {!r} refers to nonexistent connector pin {}"
|
||||
.format(resource, name))
|
||||
name = mapping[name]
|
||||
mapped_names.append(name)
|
||||
return mapped_names
|
||||
|
||||
def __repr__(self):
|
||||
return "(pins{} {} {})".format("-n" if self.invert else "",
|
||||
self.dir, " ".join(self.names))
|
||||
|
||||
|
||||
def PinsN(*args, **kwargs):
|
||||
pins = Pins(*args, **kwargs)
|
||||
pins.invert = True
|
||||
return pins
|
||||
|
||||
|
||||
class DiffPairs:
|
||||
def __init__(self, p, n, *, dir="io", conn=None, assert_width=None):
|
||||
self.p = Pins(p, dir=dir, conn=conn, assert_width=assert_width)
|
||||
self.n = Pins(n, dir=dir, conn=conn, assert_width=assert_width)
|
||||
|
||||
if len(self.p.names) != len(self.n.names):
|
||||
raise TypeError("Positive and negative pins must have the same width, but {!r} "
|
||||
"and {!r} do not"
|
||||
.format(self.p, self.n))
|
||||
|
||||
self.dir = dir
|
||||
self.invert = False
|
||||
|
||||
def __len__(self):
|
||||
return len(self.p.names)
|
||||
|
||||
def __iter__(self):
|
||||
return zip(self.p.names, self.n.names)
|
||||
|
||||
def __repr__(self):
|
||||
return "(diffpairs{} {} (p {}) (n {}))".format("-n" if self.invert else "",
|
||||
self.dir, " ".join(self.p.names), " ".join(self.n.names))
|
||||
|
||||
|
||||
def DiffPairsN(*args, **kwargs):
|
||||
diff_pairs = DiffPairs(*args, **kwargs)
|
||||
diff_pairs.invert = True
|
||||
return diff_pairs
|
||||
|
||||
|
||||
class Attrs(OrderedDict):
|
||||
def __init__(self, **attrs):
|
||||
for key, value in attrs.items():
|
||||
if not (value is None or isinstance(value, (str, int)) or hasattr(value, "__call__")):
|
||||
raise TypeError("Value of attribute {} must be None, int, str, or callable, "
|
||||
"not {!r}"
|
||||
.format(key, value))
|
||||
|
||||
super().__init__(**attrs)
|
||||
|
||||
def __repr__(self):
|
||||
items = []
|
||||
for key, value in self.items():
|
||||
if value is None:
|
||||
items.append("!" + key)
|
||||
else:
|
||||
items.append(key + "=" + repr(value))
|
||||
return "(attrs {})".format(" ".join(items))
|
||||
|
||||
|
||||
class Clock:
|
||||
def __init__(self, frequency):
|
||||
if not isinstance(frequency, (float, int)):
|
||||
raise TypeError("Clock frequency must be a number")
|
||||
|
||||
self.frequency = float(frequency)
|
||||
|
||||
@property
|
||||
def period(self):
|
||||
return 1 / self.frequency
|
||||
|
||||
def __repr__(self):
|
||||
return "(clock {})".format(self.frequency)
|
||||
|
||||
|
||||
class Subsignal:
|
||||
def __init__(self, name, *args):
|
||||
self.name = name
|
||||
self.ios = []
|
||||
self.attrs = Attrs()
|
||||
self.clock = None
|
||||
|
||||
if not args:
|
||||
raise ValueError("Missing I/O constraints")
|
||||
for arg in args:
|
||||
if isinstance(arg, (Pins, DiffPairs)):
|
||||
if not self.ios:
|
||||
self.ios.append(arg)
|
||||
else:
|
||||
raise TypeError("Pins and DiffPairs are incompatible with other location or "
|
||||
"subsignal constraints, but {!r} appears after {!r}"
|
||||
.format(arg, self.ios[-1]))
|
||||
elif isinstance(arg, Subsignal):
|
||||
if not self.ios or isinstance(self.ios[-1], Subsignal):
|
||||
self.ios.append(arg)
|
||||
else:
|
||||
raise TypeError("Subsignal is incompatible with location constraints, but "
|
||||
"{!r} appears after {!r}"
|
||||
.format(arg, self.ios[-1]))
|
||||
elif isinstance(arg, Attrs):
|
||||
self.attrs.update(arg)
|
||||
elif isinstance(arg, Clock):
|
||||
if self.ios and isinstance(self.ios[-1], (Pins, DiffPairs)):
|
||||
if self.clock is None:
|
||||
self.clock = arg
|
||||
else:
|
||||
raise ValueError("Clock constraint can be applied only once")
|
||||
else:
|
||||
raise TypeError("Clock constraint can only be applied to Pins or DiffPairs, "
|
||||
"not {!r}"
|
||||
.format(self.ios[-1]))
|
||||
else:
|
||||
raise TypeError("Constraint must be one of Pins, DiffPairs, Subsignal, Attrs, "
|
||||
"or Clock, not {!r}"
|
||||
.format(arg))
|
||||
|
||||
def _content_repr(self):
|
||||
parts = []
|
||||
for io in self.ios:
|
||||
parts.append(repr(io))
|
||||
if self.clock is not None:
|
||||
parts.append(repr(self.clock))
|
||||
if self.attrs:
|
||||
parts.append(repr(self.attrs))
|
||||
return " ".join(parts)
|
||||
|
||||
def __repr__(self):
|
||||
return "(subsignal {} {})".format(self.name, self._content_repr())
|
||||
|
||||
|
||||
class Resource(Subsignal):
|
||||
@classmethod
|
||||
def family(cls, name_or_number, number=None, *, ios, default_name, name_suffix=""):
|
||||
# This constructor accepts two different forms:
|
||||
# 1. Number-only form:
|
||||
# Resource.family(0, default_name="name", ios=[Pins("A0 A1")])
|
||||
# 2. Name-and-number (name override) form:
|
||||
# Resource.family("override", 0, default_name="name", ios=...)
|
||||
# This makes it easier to build abstractions for resources, e.g. an SPIResource abstraction
|
||||
# could simply delegate to `Resource.family(*args, default_name="spi", ios=ios)`.
|
||||
# The name_suffix argument is meant to support creating resources with
|
||||
# similar names, such as spi_flash, spi_flash_2x, etc.
|
||||
if name_suffix: # Only add "_" if we actually have a suffix.
|
||||
name_suffix = "_" + name_suffix
|
||||
|
||||
if number is None: # name_or_number is number
|
||||
return cls(default_name + name_suffix, name_or_number, *ios)
|
||||
else: # name_or_number is name
|
||||
return cls(name_or_number + name_suffix, number, *ios)
|
||||
|
||||
def __init__(self, name, number, *args):
|
||||
super().__init__(name, *args)
|
||||
|
||||
self.number = number
|
||||
|
||||
def __repr__(self):
|
||||
return "(resource {} {} {})".format(self.name, self.number, self._content_repr())
|
||||
|
||||
|
||||
class Connector:
|
||||
def __init__(self, name, number, io, *, conn=None):
|
||||
self.name = name
|
||||
self.number = number
|
||||
mapping = OrderedDict()
|
||||
|
||||
if isinstance(io, dict):
|
||||
for conn_pin, plat_pin in io.items():
|
||||
if not isinstance(conn_pin, str):
|
||||
raise TypeError("Connector pin name must be a string, not {!r}"
|
||||
.format(conn_pin))
|
||||
if not isinstance(plat_pin, str):
|
||||
raise TypeError("Platform pin name must be a string, not {!r}"
|
||||
.format(plat_pin))
|
||||
mapping[conn_pin] = plat_pin
|
||||
|
||||
elif isinstance(io, str):
|
||||
for conn_pin, plat_pin in enumerate(io.split(), start=1):
|
||||
if plat_pin == "-":
|
||||
continue
|
||||
|
||||
mapping[str(conn_pin)] = plat_pin
|
||||
else:
|
||||
raise TypeError("Connector I/Os must be a dictionary or a string, not {!r}"
|
||||
.format(io))
|
||||
|
||||
if conn is not None:
|
||||
conn_name, conn_number = conn
|
||||
if not (isinstance(conn_name, str) and isinstance(conn_number, (int, str))):
|
||||
raise TypeError("Connector must be None or a pair of string (connector name) and "
|
||||
"integer/string (connector number), not {!r}"
|
||||
.format(conn))
|
||||
|
||||
for conn_pin, plat_pin in mapping.items():
|
||||
mapping[conn_pin] = "{}_{}:{}".format(conn_name, conn_number, plat_pin)
|
||||
|
||||
self.mapping = mapping
|
||||
|
||||
def __repr__(self):
|
||||
return "(connector {} {} {})".format(self.name, self.number,
|
||||
" ".join("{}=>{}".format(conn, plat)
|
||||
for conn, plat in self.mapping.items()))
|
||||
|
||||
def __len__(self):
|
||||
return len(self.mapping)
|
||||
|
||||
def __iter__(self):
|
||||
for conn_pin, plat_pin in self.mapping.items():
|
||||
yield "{}_{}:{}".format(self.name, self.number, conn_pin), plat_pin
|
|
@@ -0,0 +1,404 @@
|
|||
from collections import OrderedDict
|
||||
from abc import ABCMeta, abstractmethod, abstractproperty
|
||||
import os
|
||||
import textwrap
|
||||
import re
|
||||
import jinja2
|
||||
|
||||
from .. import __version__
|
||||
from .._toolchain import *
|
||||
from ..hdl import *
|
||||
from ..hdl.xfrm import SampleLowerer, DomainLowerer
|
||||
from ..lib.cdc import ResetSynchronizer
|
||||
from ..back import rtlil, verilog
|
||||
from .res import *
|
||||
from .run import *
|
||||
|
||||
|
||||
__all__ = ["Platform", "TemplatedPlatform"]
|
||||
|
||||
|
||||
class Platform(ResourceManager, metaclass=ABCMeta):
|
||||
resources = abstractproperty()
|
||||
connectors = abstractproperty()
|
||||
default_clk = None
|
||||
default_rst = None
|
||||
required_tools = abstractproperty()
|
||||
|
||||
def __init__(self):
|
||||
super().__init__(self.resources, self.connectors)
|
||||
|
||||
self.extra_files = OrderedDict()
|
||||
|
||||
self._prepared = False
|
||||
|
||||
@property
|
||||
def default_clk_constraint(self):
|
||||
if self.default_clk is None:
|
||||
raise AttributeError("Platform '{}' does not define a default clock"
|
||||
.format(type(self).__name__))
|
||||
return self.lookup(self.default_clk).clock
|
||||
|
||||
@property
|
||||
def default_clk_frequency(self):
|
||||
constraint = self.default_clk_constraint
|
||||
if constraint is None:
|
||||
raise AttributeError("Platform '{}' does not constrain its default clock"
|
||||
.format(type(self).__name__))
|
||||
return constraint.frequency
|
||||
|
||||
def add_file(self, filename, content):
|
||||
if not isinstance(filename, str):
|
||||
raise TypeError("File name must be a string, not {!r}"
|
||||
.format(filename))
|
||||
if hasattr(content, "read"):
|
||||
content = content.read()
|
||||
elif not isinstance(content, (str, bytes)):
|
||||
raise TypeError("File contents must be str, bytes, or a file-like object, not {!r}"
|
||||
.format(content))
|
||||
if filename in self.extra_files:
|
||||
if self.extra_files[filename] != content:
|
||||
raise ValueError("File {!r} already exists"
|
||||
.format(filename))
|
||||
else:
|
||||
self.extra_files[filename] = content
|
||||
|
||||
@property
|
||||
def _toolchain_env_var(self):
|
||||
return f"NMIGEN_ENV_{self.toolchain}"
|
||||
|
||||
def build(self, elaboratable, name="top",
|
||||
build_dir="build", do_build=True,
|
||||
program_opts=None, do_program=False,
|
||||
**kwargs):
|
||||
if self._toolchain_env_var not in os.environ:
|
||||
for tool in self.required_tools:
|
||||
require_tool(tool)
|
||||
|
||||
plan = self.prepare(elaboratable, name, **kwargs)
|
||||
if not do_build:
|
||||
return plan
|
||||
|
||||
products = plan.execute_local(build_dir)
|
||||
if not do_program:
|
||||
return products
|
||||
|
||||
self.toolchain_program(products, name, **(program_opts or {}))
|
||||
|
||||
def has_required_tools(self):
|
||||
if self._toolchain_env_var in os.environ:
|
||||
return True
|
||||
return all(has_tool(name) for name in self.required_tools)
|
||||
|
||||
def create_missing_domain(self, name):
|
||||
# Simple instantiation of a clock domain driven directly by the board clock and reset.
|
||||
# This implementation uses a single ResetSynchronizer to ensure that:
|
||||
# * an external reset is definitely synchronized to the system clock;
|
||||
# * release of power-on reset, which is inherently asynchronous, is synchronized to
|
||||
# the system clock.
|
||||
# Many device families provide advanced primitives for tackling reset. If these exist,
|
||||
# they should be used instead.
|
||||
if name == "sync" and self.default_clk is not None:
|
||||
clk_i = self.request(self.default_clk).i
|
||||
if self.default_rst is not None:
|
||||
rst_i = self.request(self.default_rst).i
|
||||
else:
|
||||
rst_i = Const(0)
|
||||
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
m.d.comb += ClockSignal("sync").eq(clk_i)
|
||||
m.submodules.reset_sync = ResetSynchronizer(rst_i, domain="sync")
|
||||
return m
|
||||
|
||||
def prepare(self, elaboratable, name="top", **kwargs):
|
||||
assert not self._prepared
|
||||
self._prepared = True
|
||||
|
||||
fragment = Fragment.get(elaboratable, self)
|
||||
fragment = SampleLowerer()(fragment)
|
||||
fragment._propagate_domains(self.create_missing_domain, platform=self)
|
||||
fragment = DomainLowerer()(fragment)
|
||||
|
||||
def add_pin_fragment(pin, pin_fragment):
|
||||
pin_fragment = Fragment.get(pin_fragment, self)
|
||||
if not isinstance(pin_fragment, Instance):
|
||||
pin_fragment.flatten = True
|
||||
fragment.add_subfragment(pin_fragment, name="pin_{}".format(pin.name))
|
||||
|
||||
for pin, port, attrs, invert in self.iter_single_ended_pins():
|
||||
if pin.dir == "i":
|
||||
add_pin_fragment(pin, self.get_input(pin, port, attrs, invert))
|
||||
if pin.dir == "o":
|
||||
add_pin_fragment(pin, self.get_output(pin, port, attrs, invert))
|
||||
if pin.dir == "oe":
|
||||
add_pin_fragment(pin, self.get_tristate(pin, port, attrs, invert))
|
||||
if pin.dir == "io":
|
||||
add_pin_fragment(pin, self.get_input_output(pin, port, attrs, invert))
|
||||
|
||||
for pin, p_port, n_port, attrs, invert in self.iter_differential_pins():
|
||||
if pin.dir == "i":
|
||||
add_pin_fragment(pin, self.get_diff_input(pin, p_port, n_port, attrs, invert))
|
||||
if pin.dir == "o":
|
||||
add_pin_fragment(pin, self.get_diff_output(pin, p_port, n_port, attrs, invert))
|
||||
if pin.dir == "oe":
|
||||
add_pin_fragment(pin, self.get_diff_tristate(pin, p_port, n_port, attrs, invert))
|
||||
if pin.dir == "io":
|
||||
add_pin_fragment(pin,
|
||||
self.get_diff_input_output(pin, p_port, n_port, attrs, invert))
|
||||
|
||||
fragment._propagate_ports(ports=self.iter_ports(), all_undef_as_ports=False)
|
||||
return self.toolchain_prepare(fragment, name, **kwargs)
|
||||
|
||||
@abstractmethod
|
||||
def toolchain_prepare(self, fragment, name, **kwargs):
|
||||
"""
|
||||
Convert the ``fragment`` and constraints recorded in this :class:`Platform` into
|
||||
a :class:`BuildPlan`.
|
||||
"""
|
||||
raise NotImplementedError # :nocov:
|
||||
|
||||
def toolchain_program(self, products, name, **kwargs):
|
||||
"""
|
||||
Extract bitstream for fragment ``name`` from ``products`` and download it to a target.
|
||||
"""
|
||||
raise NotImplementedError("Platform '{}' does not support programming"
|
||||
.format(type(self).__name__))
|
||||
|
||||
def _check_feature(self, feature, pin, attrs, valid_xdrs, valid_attrs):
|
||||
if not valid_xdrs:
|
||||
raise NotImplementedError("Platform '{}' does not support {}"
|
||||
.format(type(self).__name__, feature))
|
||||
elif pin.xdr not in valid_xdrs:
|
||||
raise NotImplementedError("Platform '{}' does not support {} for XDR {}"
|
||||
.format(type(self).__name__, feature, pin.xdr))
|
||||
|
||||
if not valid_attrs and attrs:
|
||||
raise NotImplementedError("Platform '{}' does not support attributes for {}"
|
||||
.format(type(self).__name__, feature))
|
||||
|
||||
@staticmethod
|
||||
def _invert_if(invert, value):
|
||||
if invert:
|
||||
return ~value
|
||||
else:
|
||||
return value
|
||||
|
||||
def get_input(self, pin, port, attrs, invert):
|
||||
self._check_feature("single-ended input", pin, attrs,
|
||||
valid_xdrs=(0,), valid_attrs=None)
|
||||
|
||||
m = Module()
|
||||
m.d.comb += pin.i.eq(self._invert_if(invert, port))
|
||||
return m
|
||||
|
||||
def get_output(self, pin, port, attrs, invert):
|
||||
self._check_feature("single-ended output", pin, attrs,
|
||||
valid_xdrs=(0,), valid_attrs=None)
|
||||
|
||||
m = Module()
|
||||
m.d.comb += port.eq(self._invert_if(invert, pin.o))
|
||||
return m
|
||||
|
||||
def get_tristate(self, pin, port, attrs, invert):
|
||||
self._check_feature("single-ended tristate", pin, attrs,
|
||||
valid_xdrs=(0,), valid_attrs=None)
|
||||
|
||||
m = Module()
|
||||
m.submodules += Instance("$tribuf",
|
||||
p_WIDTH=pin.width,
|
||||
i_EN=pin.oe,
|
||||
i_A=self._invert_if(invert, pin.o),
|
||||
o_Y=port,
|
||||
)
|
||||
return m
|
||||
|
||||
def get_input_output(self, pin, port, attrs, invert):
|
||||
self._check_feature("single-ended input/output", pin, attrs,
|
||||
valid_xdrs=(0,), valid_attrs=None)
|
||||
|
||||
m = Module()
|
||||
m.submodules += Instance("$tribuf",
|
||||
p_WIDTH=pin.width,
|
||||
i_EN=pin.oe,
|
||||
i_A=self._invert_if(invert, pin.o),
|
||||
o_Y=port,
|
||||
)
|
||||
m.d.comb += pin.i.eq(self._invert_if(invert, port))
|
||||
return m
|
||||
|
||||
def get_diff_input(self, pin, p_port, n_port, attrs, invert):
|
||||
self._check_feature("differential input", pin, attrs,
|
||||
valid_xdrs=(), valid_attrs=None)
|
||||
|
||||
def get_diff_output(self, pin, p_port, n_port, attrs, invert):
|
||||
self._check_feature("differential output", pin, attrs,
|
||||
valid_xdrs=(), valid_attrs=None)
|
||||
|
||||
def get_diff_tristate(self, pin, p_port, n_port, attrs, invert):
|
||||
self._check_feature("differential tristate", pin, attrs,
|
||||
valid_xdrs=(), valid_attrs=None)
|
||||
|
||||
def get_diff_input_output(self, pin, p_port, n_port, attrs, invert):
|
||||
self._check_feature("differential input/output", pin, attrs,
|
||||
valid_xdrs=(), valid_attrs=None)
|
||||
|
||||
|
||||
class TemplatedPlatform(Platform):
|
||||
toolchain = abstractproperty()
|
||||
file_templates = abstractproperty()
|
||||
command_templates = abstractproperty()
|
||||
|
||||
build_script_templates = {
|
||||
"build_{{name}}.sh": """
|
||||
# {{autogenerated}}
|
||||
set -e{{verbose("x")}}
|
||||
[ -n "${{platform._toolchain_env_var}}" ] && . "${{platform._toolchain_env_var}}"
|
||||
{{emit_commands("sh")}}
|
||||
""",
|
||||
"build_{{name}}.bat": """
|
||||
@rem {{autogenerated}}
|
||||
{{quiet("@echo off")}}
|
||||
if defined {{platform._toolchain_env_var}} call %{{platform._toolchain_env_var}}%
|
||||
{{emit_commands("bat")}}
|
||||
""",
|
||||
}
|
||||
|
||||
def toolchain_prepare(self, fragment, name, **kwargs):
|
||||
# Restrict the name of the design to a strict alphanumeric character set. Platforms will
|
||||
# interpolate the name of the design in many different contexts: filesystem paths, Python
|
||||
# scripts, Tcl scripts, ad-hoc constraint files, and so on. It is not practical to add
|
||||
# escaping code that handles every one of their edge cases, so make sure we never hit them
|
||||
# in the first place.
|
||||
invalid_char = re.match(r"[^A-Za-z0-9_]", name)
|
||||
if invalid_char:
|
||||
raise ValueError("Design name {!r} contains invalid character {!r}; only alphanumeric "
|
||||
"characters are valid in design names"
|
||||
.format(name, invalid_char.group(0)))
|
||||
|
||||
# This notice serves a dual purpose: to explain that the file is autogenerated,
|
||||
# and to incorporate the nMigen version into generated code.
|
||||
autogenerated = "Automatically generated by nMigen {}. Do not edit.".format(__version__)
|
||||
|
||||
rtlil_text, name_map = rtlil.convert_fragment(fragment, name=name)
|
||||
|
||||
def emit_rtlil():
|
||||
return rtlil_text
|
||||
|
||||
def emit_verilog(opts=()):
|
||||
return verilog._convert_rtlil_text(rtlil_text,
|
||||
strip_internal_attrs=True, write_verilog_opts=opts)
|
||||
|
||||
def emit_debug_verilog(opts=()):
|
||||
return verilog._convert_rtlil_text(rtlil_text,
|
||||
strip_internal_attrs=False, write_verilog_opts=opts)
|
||||
|
||||
def emit_commands(syntax):
|
||||
commands = []
|
||||
|
||||
for name in self.required_tools:
|
||||
env_var = tool_env_var(name)
|
||||
if syntax == "sh":
|
||||
template = ": ${{{env_var}:={name}}}"
|
||||
elif syntax == "bat":
|
||||
template = \
|
||||
"if [%{env_var}%] equ [\"\"] set {env_var}=\n" \
|
||||
"if [%{env_var}%] equ [] set {env_var}={name}"
|
||||
else:
|
||||
assert False
|
||||
commands.append(template.format(env_var=env_var, name=name))
|
||||
|
||||
for index, command_tpl in enumerate(self.command_templates):
|
||||
command = render(command_tpl, origin="<command#{}>".format(index + 1),
|
||||
syntax=syntax)
|
||||
command = re.sub(r"\s+", " ", command)
|
||||
if syntax == "sh":
|
||||
commands.append(command)
|
||||
elif syntax == "bat":
|
||||
commands.append(command + " || exit /b")
|
||||
else:
|
||||
assert False
|
||||
|
||||
return "\n".join(commands)
|
||||
|
||||
def get_override(var):
|
||||
var_env = "NMIGEN_{}".format(var)
|
||||
if var_env in os.environ:
|
||||
# On Windows, there is no way to define an "empty but set" variable; it is tempting
|
||||
# to use a quoted empty string, but it doesn't do what one would expect. Recognize
|
||||
# this as a useful pattern anyway, and treat `set VAR=""` on Windows the same way
|
||||
# `export VAR=` is treated on Linux.
|
||||
return re.sub(r'^\"\"$', "", os.environ[var_env])
|
||||
elif var in kwargs:
|
||||
if isinstance(kwargs[var], str):
|
||||
return textwrap.dedent(kwargs[var]).strip()
|
||||
else:
|
||||
return kwargs[var]
|
||||
else:
|
||||
return jinja2.Undefined(name=var)
|
||||
|
||||
@jinja2.contextfunction
|
||||
def invoke_tool(context, name):
|
||||
env_var = tool_env_var(name)
|
||||
if context.parent["syntax"] == "sh":
|
||||
return "\"${}\"".format(env_var)
|
||||
elif context.parent["syntax"] == "bat":
|
||||
return "%{}%".format(env_var)
|
||||
else:
|
||||
assert False
|
||||
|
||||
def options(opts):
|
||||
if isinstance(opts, str):
|
||||
return opts
|
||||
else:
|
||||
return " ".join(opts)
|
||||
|
||||
def hierarchy(signal, separator):
|
||||
return separator.join(name_map[signal][1:])
|
||||
|
||||
def verbose(arg):
|
||||
if "NMIGEN_verbose" in os.environ:
|
||||
return arg
|
||||
else:
|
||||
return jinja2.Undefined(name="quiet")
|
||||
|
||||
def quiet(arg):
|
||||
if "NMIGEN_verbose" in os.environ:
|
||||
return jinja2.Undefined(name="quiet")
|
||||
else:
|
||||
return arg
|
||||
|
||||
def render(source, origin, syntax=None):
|
||||
try:
|
||||
source = textwrap.dedent(source).strip()
|
||||
compiled = jinja2.Template(source, trim_blocks=True, lstrip_blocks=True)
|
||||
compiled.environment.filters["options"] = options
|
||||
compiled.environment.filters["hierarchy"] = hierarchy
|
||||
except jinja2.TemplateSyntaxError as e:
|
||||
e.args = ("{} (at {}:{})".format(e.message, origin, e.lineno),)
|
||||
raise
|
||||
return compiled.render({
|
||||
"name": name,
|
||||
"platform": self,
|
||||
"emit_rtlil": emit_rtlil,
|
||||
"emit_verilog": emit_verilog,
|
||||
"emit_debug_verilog": emit_debug_verilog,
|
||||
"emit_commands": emit_commands,
|
||||
"syntax": syntax,
|
||||
"invoke_tool": invoke_tool,
|
||||
"get_override": get_override,
|
||||
"verbose": verbose,
|
||||
"quiet": quiet,
|
||||
"autogenerated": autogenerated,
|
||||
})
|
||||
|
||||
plan = BuildPlan(script="build_{}".format(name))
|
||||
for filename_tpl, content_tpl in self.file_templates.items():
|
||||
plan.add_file(render(filename_tpl, origin=filename_tpl),
|
||||
render(content_tpl, origin=content_tpl))
|
||||
for filename, content in self.extra_files.items():
|
||||
plan.add_file(filename, content)
|
||||
return plan
|
||||
|
||||
def iter_extra_files(self, *endswith):
|
||||
return (f for f in self.extra_files if f.endswith(endswith))
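# Usage sketch (illustrative, not part of the original file):
#
#   plat = SomeBoardPlatform()              # hypothetical TemplatedPlatform subclass
#   plat.build(Blinker(), do_program=True)  # elaborate, run the toolchain, then program
#
# Concrete board platforms subclass TemplatedPlatform and supply `resources`,
# `connectors`, `required_tools`, `toolchain`, `file_templates` and `command_templates`.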
|
|
@@ -0,0 +1,250 @@
|
|||
from collections import OrderedDict
|
||||
|
||||
from ..hdl.ast import *
|
||||
from ..hdl.rec import *
|
||||
from ..lib.io import *
|
||||
|
||||
from .dsl import *
|
||||
|
||||
|
||||
__all__ = ["ResourceError", "ResourceManager"]
|
||||
|
||||
|
||||
class ResourceError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ResourceManager:
|
||||
def __init__(self, resources, connectors):
|
||||
self.resources = OrderedDict()
|
||||
self._requested = OrderedDict()
|
||||
self._phys_reqd = OrderedDict()
|
||||
|
||||
self.connectors = OrderedDict()
|
||||
self._conn_pins = OrderedDict()
|
||||
|
||||
# Constraint lists
|
||||
self._ports = []
|
||||
self._clocks = SignalDict()
|
||||
|
||||
self.add_resources(resources)
|
||||
self.add_connectors(connectors)
|
||||
|
||||
def add_resources(self, resources):
|
||||
for res in resources:
|
||||
if not isinstance(res, Resource):
|
||||
raise TypeError("Object {!r} is not a Resource".format(res))
|
||||
if (res.name, res.number) in self.resources:
|
||||
raise NameError("Trying to add {!r}, but {!r} has the same name and number"
|
||||
.format(res, self.resources[res.name, res.number]))
|
||||
self.resources[res.name, res.number] = res
|
||||
|
||||
def add_connectors(self, connectors):
|
||||
for conn in connectors:
|
||||
if not isinstance(conn, Connector):
|
||||
raise TypeError("Object {!r} is not a Connector".format(conn))
|
||||
if (conn.name, conn.number) in self.connectors:
|
||||
raise NameError("Trying to add {!r}, but {!r} has the same name and number"
|
||||
.format(conn, self.connectors[conn.name, conn.number]))
|
||||
self.connectors[conn.name, conn.number] = conn
|
||||
|
||||
for conn_pin, plat_pin in conn:
|
||||
assert conn_pin not in self._conn_pins
|
||||
self._conn_pins[conn_pin] = plat_pin
|
||||
|
||||
def lookup(self, name, number=0):
|
||||
if (name, number) not in self.resources:
|
||||
raise ResourceError("Resource {}#{} does not exist"
|
||||
.format(name, number))
|
||||
return self.resources[name, number]
|
||||
|
||||
def request(self, name, number=0, *, dir=None, xdr=None):
|
||||
resource = self.lookup(name, number)
|
||||
if (resource.name, resource.number) in self._requested:
|
||||
raise ResourceError("Resource {}#{} has already been requested"
|
||||
.format(name, number))
|
||||
|
||||
def merge_options(subsignal, dir, xdr):
|
||||
if isinstance(subsignal.ios[0], Subsignal):
|
||||
if dir is None:
|
||||
dir = dict()
|
||||
if xdr is None:
|
||||
xdr = dict()
|
||||
if not isinstance(dir, dict):
|
||||
raise TypeError("Directions must be a dict, not {!r}, because {!r} "
|
||||
"has subsignals"
|
||||
.format(dir, subsignal))
|
||||
if not isinstance(xdr, dict):
|
||||
raise TypeError("Data rate must be a dict, not {!r}, because {!r} "
|
||||
"has subsignals"
|
||||
.format(xdr, subsignal))
|
||||
for sub in subsignal.ios:
|
||||
sub_dir = dir.get(sub.name, None)
|
||||
sub_xdr = xdr.get(sub.name, None)
|
||||
dir[sub.name], xdr[sub.name] = merge_options(sub, sub_dir, sub_xdr)
|
||||
else:
|
||||
if dir is None:
|
||||
dir = subsignal.ios[0].dir
|
||||
if xdr is None:
|
||||
xdr = 0
|
||||
if dir not in ("i", "o", "oe", "io", "-"):
|
||||
raise TypeError("Direction must be one of \"i\", \"o\", \"oe\", \"io\", "
|
||||
"or \"-\", not {!r}"
|
||||
.format(dir))
|
||||
if dir != subsignal.ios[0].dir and \
|
||||
not (subsignal.ios[0].dir == "io" or dir == "-"):
|
||||
raise ValueError("Direction of {!r} cannot be changed from \"{}\" to \"{}\"; "
|
||||
"direction can be changed from \"io\" to \"i\", \"o\", or "
|
||||
"\"oe\", or from anything to \"-\""
|
||||
.format(subsignal.ios[0], subsignal.ios[0].dir, dir))
|
||||
if not isinstance(xdr, int) or xdr < 0:
|
||||
raise ValueError("Data rate of {!r} must be a non-negative integer, not {!r}"
|
||||
.format(subsignal.ios[0], xdr))
|
||||
return dir, xdr
|
||||
|
||||
def resolve(resource, dir, xdr, name, attrs):
|
||||
for attr_key, attr_value in attrs.items():
|
||||
if hasattr(attr_value, "__call__"):
|
||||
attr_value = attr_value(self)
|
||||
assert attr_value is None or isinstance(attr_value, str)
|
||||
if attr_value is None:
|
||||
del attrs[attr_key]
|
||||
else:
|
||||
attrs[attr_key] = attr_value
|
||||
|
||||
if isinstance(resource.ios[0], Subsignal):
|
||||
fields = OrderedDict()
|
||||
for sub in resource.ios:
|
||||
fields[sub.name] = resolve(sub, dir[sub.name], xdr[sub.name],
|
||||
name="{}__{}".format(name, sub.name),
|
||||
attrs={**attrs, **sub.attrs})
|
||||
return Record([
|
||||
(f_name, f.layout) for (f_name, f) in fields.items()
|
||||
], fields=fields, name=name)
|
||||
|
||||
elif isinstance(resource.ios[0], (Pins, DiffPairs)):
|
||||
phys = resource.ios[0]
|
||||
if isinstance(phys, Pins):
|
||||
phys_names = phys.names
|
||||
port = Record([("io", len(phys))], name=name)
|
||||
if isinstance(phys, DiffPairs):
|
||||
phys_names = phys.p.names + phys.n.names
|
||||
port = Record([("p", len(phys)),
|
||||
("n", len(phys))], name=name)
|
||||
if dir == "-":
|
||||
pin = None
|
||||
else:
|
||||
pin = Pin(len(phys), dir, xdr=xdr, name=name)
|
||||
|
||||
for phys_name in phys_names:
|
||||
if phys_name in self._phys_reqd:
|
||||
raise ResourceError("Resource component {} uses physical pin {}, but it "
|
||||
"is already used by resource component {} that was "
|
||||
"requested earlier"
|
||||
.format(name, phys_name, self._phys_reqd[phys_name]))
|
||||
self._phys_reqd[phys_name] = name
|
||||
|
||||
self._ports.append((resource, pin, port, attrs))
|
||||
|
||||
if pin is not None and resource.clock is not None:
|
||||
self.add_clock_constraint(pin.i, resource.clock.frequency)
|
||||
|
||||
return pin if pin is not None else port
|
||||
|
||||
else:
|
||||
assert False # :nocov:
|
||||
|
||||
value = resolve(resource,
|
||||
*merge_options(resource, dir, xdr),
|
||||
name="{}_{}".format(resource.name, resource.number),
|
||||
attrs=resource.attrs)
|
||||
self._requested[resource.name, resource.number] = value
|
||||
return value
|
||||
|
||||
def iter_single_ended_pins(self):
|
||||
for res, pin, port, attrs in self._ports:
|
||||
if pin is None:
|
||||
continue
|
||||
if isinstance(res.ios[0], Pins):
|
||||
yield pin, port.io, attrs, res.ios[0].invert
|
||||
|
||||
def iter_differential_pins(self):
|
||||
for res, pin, port, attrs in self._ports:
|
||||
if pin is None:
|
||||
continue
|
||||
if isinstance(res.ios[0], DiffPairs):
|
||||
yield pin, port.p, port.n, attrs, res.ios[0].invert
|
||||
|
||||
def should_skip_port_component(self, port, attrs, component):
|
||||
return False
|
||||
|
||||
def iter_ports(self):
|
||||
for res, pin, port, attrs in self._ports:
|
||||
if isinstance(res.ios[0], Pins):
|
||||
if not self.should_skip_port_component(port, attrs, "io"):
|
||||
yield port.io
|
||||
elif isinstance(res.ios[0], DiffPairs):
|
||||
if not self.should_skip_port_component(port, attrs, "p"):
|
||||
yield port.p
|
||||
if not self.should_skip_port_component(port, attrs, "n"):
|
||||
yield port.n
|
||||
else:
|
||||
assert False
|
||||
|
||||
def iter_port_constraints(self):
|
||||
for res, pin, port, attrs in self._ports:
|
||||
if isinstance(res.ios[0], Pins):
|
||||
if not self.should_skip_port_component(port, attrs, "io"):
|
||||
yield port.io.name, res.ios[0].map_names(self._conn_pins, res), attrs
|
||||
elif isinstance(res.ios[0], DiffPairs):
|
||||
if not self.should_skip_port_component(port, attrs, "p"):
|
||||
yield port.p.name, res.ios[0].p.map_names(self._conn_pins, res), attrs
|
||||
if not self.should_skip_port_component(port, attrs, "n"):
|
||||
yield port.n.name, res.ios[0].n.map_names(self._conn_pins, res), attrs
|
||||
else:
|
||||
assert False
|
||||
|
||||
def iter_port_constraints_bits(self):
|
||||
for port_name, pin_names, attrs in self.iter_port_constraints():
|
||||
if len(pin_names) == 1:
|
||||
yield port_name, pin_names[0], attrs
|
||||
else:
|
||||
for bit, pin_name in enumerate(pin_names):
|
||||
yield "{}[{}]".format(port_name, bit), pin_name, attrs
|
||||
|
||||
def add_clock_constraint(self, clock, frequency):
|
||||
if not isinstance(clock, Signal):
|
||||
raise TypeError("Object {!r} is not a Signal".format(clock))
|
||||
if not isinstance(frequency, (int, float)):
|
||||
raise TypeError("Frequency must be a number, not {!r}".format(frequency))
|
||||
|
||||
if clock in self._clocks:
|
||||
raise ValueError("Cannot add clock constraint on {!r}, which is already constrained "
|
||||
"to {} Hz"
|
||||
.format(clock, self._clocks[clock]))
|
||||
else:
|
||||
self._clocks[clock] = float(frequency)
|
||||
|
||||
def iter_clock_constraints(self):
|
||||
# Back-propagate constraints through the input buffer. For clock constraints on pins
|
||||
# (the majority of cases), toolchains work better if the constraint is defined on the pin
|
||||
# and not on the buffered internal net; and if the toolchain is advanced enough that
|
||||
# it considers clock phase and delay of the input buffer, it is *necessary* to define
|
||||
# the constraint on the pin to match the designer's expectation of phase being referenced
|
||||
# to the pin.
|
||||
#
|
||||
# Constraints on nets with no corresponding input pin (e.g. PLL or SERDES outputs) are not
|
||||
# affected.
|
||||
pin_i_to_port = SignalDict()
|
||||
for res, pin, port, attrs in self._ports:
|
||||
if hasattr(pin, "i"):
|
||||
if isinstance(res.ios[0], Pins):
|
||||
pin_i_to_port[pin.i] = port.io
|
||||
elif isinstance(res.ios[0], DiffPairs):
|
||||
pin_i_to_port[pin.i] = port.p
|
||||
else:
|
||||
assert False
|
||||
|
||||
for net_signal, frequency in self._clocks.items():
|
||||
port_signal = pin_i_to_port.get(net_signal)
|
||||
yield net_signal, port_signal, frequency
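# Usage sketch (illustrative, not part of the original file; resource names are made up):
#
#   uart = platform.request("uart", 0)             # Record of Pin objects, I/O buffers inserted
#   clk  = platform.request("clk100", 0, dir="-")  # raw port Record, no Pin and no buffer
#   ddr  = platform.request("ddr3", 0, xdr={"dq": 2, "dqs": 2})  # per-subsignal gearing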
|
|
@@ -0,0 +1,166 @@
|
|||
from collections import OrderedDict
|
||||
from contextlib import contextmanager
|
||||
from abc import ABCMeta, abstractmethod
|
||||
import os
|
||||
import sys
|
||||
import subprocess
|
||||
import tempfile
|
||||
import zipfile
|
||||
import hashlib
|
||||
|
||||
|
||||
__all__ = ["BuildPlan", "BuildProducts", "LocalBuildProducts"]
|
||||
|
||||
|
||||
class BuildPlan:
|
||||
def __init__(self, script):
|
||||
"""A build plan.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
script : str
|
||||
The base name (without extension) of the script that will be executed.
|
||||
"""
|
||||
self.script = script
|
||||
self.files = OrderedDict()
|
||||
|
||||
def add_file(self, filename, content):
|
||||
"""
|
||||
Add ``content``, which can be a :class:`str` or :class:`bytes`, to the build plan
|
||||
as ``filename``. The file name can be a relative path with directories separated by
|
||||
forward slashes (``/``).
|
||||
"""
|
||||
assert isinstance(filename, str) and filename not in self.files
|
||||
self.files[filename] = content
|
||||
|
||||
def digest(self, size=64):
|
||||
"""
|
||||
Compute a `digest`, a short byte sequence deterministically and uniquely identifying
|
||||
this build plan.
|
||||
"""
|
||||
hasher = hashlib.blake2b(digest_size=size)
|
||||
for filename in sorted(self.files):
|
||||
hasher.update(filename.encode("utf-8"))
|
||||
content = self.files[filename]
|
||||
if isinstance(content, str):
|
||||
content = content.encode("utf-8")
|
||||
hasher.update(content)
|
||||
hasher.update(self.script.encode("utf-8"))
|
||||
return hasher.digest()
|
||||
|
||||
def archive(self, file):
|
||||
"""
|
||||
Archive files from the build plan into ``file``, which can be either a filename, or
|
||||
a file-like object. The produced archive is deterministic: exact same files will
|
||||
always produce exact same archive.
|
||||
"""
|
||||
with zipfile.ZipFile(file, "w") as archive:
|
||||
# Write archive members in deterministic order and with deterministic timestamp.
|
||||
for filename in sorted(self.files):
|
||||
archive.writestr(zipfile.ZipInfo(filename), self.files[filename])
|
||||
|
||||
def execute_local(self, root="build", *, run_script=True):
|
||||
"""
|
||||
Execute build plan using the local strategy. Files from the build plan are placed in
|
||||
the build root directory ``root``, and, if ``run_script`` is ``True``, the script
|
||||
appropriate for the platform (``{script}.bat`` on Windows, ``{script}.sh`` elsewhere) is
|
||||
executed in the build root.
|
||||
|
||||
Returns :class:`LocalBuildProducts`.
|
||||
"""
|
||||
os.makedirs(root, exist_ok=True)
|
||||
cwd = os.getcwd()
|
||||
try:
|
||||
os.chdir(root)
|
||||
|
||||
for filename, content in self.files.items():
|
||||
filename = os.path.normpath(filename)
|
||||
# Just to make sure we don't accidentally overwrite anything outside of build root.
|
||||
assert not filename.startswith("..")
|
||||
dirname = os.path.dirname(filename)
|
||||
if dirname:
|
||||
os.makedirs(dirname, exist_ok=True)
|
||||
|
||||
mode = "wt" if isinstance(content, str) else "wb"
|
||||
with open(filename, mode) as f:
|
||||
f.write(content)
|
||||
|
||||
if run_script:
|
||||
if sys.platform.startswith("win32"):
|
||||
# Without "call", "cmd /c {}.bat" will return 0 even if the script fails.
|
||||
# See https://stackoverflow.com/a/30736987 for a detailed
|
||||
# explanation of why, including disassembly/decompilation
|
||||
# of relevant code in cmd.exe.
|
||||
# Running the script manually from a command prompt is
|
||||
# unaffected, i.e. "call" is not required.
|
||||
subprocess.check_call(["cmd", "/c", "call {}.bat".format(self.script)])
|
||||
else:
|
||||
subprocess.check_call(["sh", "{}.sh".format(self.script)])
|
||||
|
||||
return LocalBuildProducts(os.getcwd())
|
||||
|
||||
finally:
|
||||
os.chdir(cwd)
|
||||
|
||||
def execute(self):
|
||||
"""
|
||||
Execute build plan using the default strategy. Use one of the ``execute_*`` methods
|
||||
explicitly to have more control over the strategy.
|
||||
"""
|
||||
return self.execute_local()
|
||||
|
||||
|
||||
class BuildProducts(metaclass=ABCMeta):
|
||||
@abstractmethod
|
||||
def get(self, filename, mode="b"):
|
||||
"""
|
||||
Extract ``filename`` from build products, and return it as a :class:`bytes` (if ``mode``
|
||||
is ``"b"``) or a :class:`str` (if ``mode`` is ``"t"``).
|
||||
"""
|
||||
assert mode in ("b", "t")
|
||||
|
||||
@contextmanager
|
||||
def extract(self, *filenames):
|
||||
"""
|
||||
Extract ``filenames`` from build products, place them in an OS-specific temporary file
|
||||
location, with the extension preserved, and delete them afterwards. This method is used
|
||||
as a context manager, e.g.: ::
|
||||
|
||||
with products.extract("bitstream.bin", "programmer.cfg") \
|
||||
as (bitstream_filename, config_filename):
|
||||
subprocess.check_call(["program", "-c", config_filename, bitstream_filename])
|
||||
"""
|
||||
files = []
|
||||
try:
|
||||
for filename in filenames:
|
||||
# On Windows, a named temporary file (as created by Python) is not accessible to
|
||||
# others if it's still open within the Python process, so we close it and delete
|
||||
# it manually.
|
||||
file = tempfile.NamedTemporaryFile(prefix="nmigen_", suffix="_" + filename,
|
||||
delete=False)
|
||||
files.append(file)
|
||||
file.write(self.get(filename))
|
||||
file.close()
|
||||
|
||||
if len(files) == 0:
|
||||
return (yield)
|
||||
elif len(files) == 1:
|
||||
return (yield files[0].name)
|
||||
else:
|
||||
return (yield [file.name for file in files])
|
||||
finally:
|
||||
for file in files:
|
||||
os.unlink(file.name)
|
||||
|
||||
|
||||
class LocalBuildProducts(BuildProducts):
|
||||
def __init__(self, root):
|
||||
# We provide no guarantees that files will be available on the local filesystem (i.e. in
|
||||
# any way other than through `products.get()`) in general, so downstream code must never
|
||||
# rely on this, even when we happen to use a local build most of the time.
|
||||
self.__root = root
|
||||
|
||||
def get(self, filename, mode="b"):
|
||||
super().get(filename, mode)
|
||||
with open(os.path.join(self.__root, filename), "r" + mode) as f:
|
||||
return f.read()
|
|
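Assuming this hunk adds nMigen's `BuildPlan`/`BuildProducts` helpers (the file path is not shown here), a minimal usage sketch, with invented file names and a POSIX shell script, might look like this:

```python
# Not part of the commit: a sketch of how BuildPlan and LocalBuildProducts fit together.
from nmigen.build.run import BuildPlan

plan = BuildPlan(script="build_top")                        # base name, no extension
plan.add_file("build_top.sh", "#!/bin/sh\necho hello > result.txt\n")
print(plan.digest().hex())                                  # deterministic fingerprint of the plan

products = plan.execute_local(root="build")                 # runs "sh build_top.sh" inside ./build
print(products.get("result.txt", mode="t"))                 # -> "hello\n"
```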
@@ -0,0 +1,74 @@
|
|||
import argparse
|
||||
|
||||
from .hdl.ir import Fragment
|
||||
from .back import rtlil, verilog, pysim
|
||||
|
||||
|
||||
__all__ = ["main"]
|
||||
|
||||
|
||||
def main_parser(parser=None):
|
||||
if parser is None:
|
||||
parser = argparse.ArgumentParser()
|
||||
|
||||
p_action = parser.add_subparsers(dest="action")
|
||||
|
||||
p_generate = p_action.add_parser("generate",
|
||||
help="generate RTLIL or Verilog from the design")
|
||||
p_generate.add_argument("-t", "--type", dest="generate_type",
|
||||
metavar="LANGUAGE", choices=["il", "v"],
|
||||
default="v",
|
||||
help="generate LANGUAGE (il for RTLIL, v for Verilog; default: %(default)s)")
|
||||
p_generate.add_argument("generate_file",
|
||||
metavar="FILE", type=argparse.FileType("w"), nargs="?",
|
||||
help="write generated code to FILE")
|
||||
|
||||
p_simulate = p_action.add_parser(
|
||||
"simulate", help="simulate the design")
|
||||
p_simulate.add_argument("-v", "--vcd-file",
|
||||
metavar="VCD-FILE", type=argparse.FileType("w"),
|
||||
help="write execution trace to VCD-FILE")
|
||||
p_simulate.add_argument("-w", "--gtkw-file",
|
||||
metavar="GTKW-FILE", type=argparse.FileType("w"),
|
||||
help="write GTKWave configuration to GTKW-FILE")
|
||||
p_simulate.add_argument("-p", "--period", dest="sync_period",
|
||||
metavar="TIME", type=float, default=1e-6,
|
||||
help="set 'sync' clock domain period to TIME (default: %(default)s)")
|
||||
p_simulate.add_argument("-c", "--clocks", dest="sync_clocks",
|
||||
metavar="COUNT", type=int, required=True,
|
||||
help="simulate for COUNT 'sync' clock periods")
|
||||
|
||||
return parser
|
||||
|
||||
|
||||
def main_runner(parser, args, design, platform=None, name="top", ports=()):
|
||||
if args.action == "generate":
|
||||
fragment = Fragment.get(design, platform)
|
||||
generate_type = args.generate_type
|
||||
if generate_type is None and args.generate_file:
|
||||
if args.generate_file.name.endswith(".v"):
|
||||
generate_type = "v"
|
||||
if args.generate_file.name.endswith(".il"):
|
||||
generate_type = "il"
|
||||
if generate_type is None:
|
||||
parser.error("specify file type explicitly with -t")
|
||||
if generate_type == "il":
|
||||
output = rtlil.convert(fragment, name=name, ports=ports)
|
||||
if generate_type == "v":
|
||||
output = verilog.convert(fragment, name=name, ports=ports)
|
||||
if args.generate_file:
|
||||
args.generate_file.write(output)
|
||||
else:
|
||||
print(output)
|
||||
|
||||
if args.action == "simulate":
|
||||
fragment = Fragment.get(design, platform)
|
||||
sim = pysim.Simulator(fragment)
|
||||
sim.add_clock(args.sync_period)
|
||||
with sim.write_vcd(vcd_file=args.vcd_file, gtkw_file=args.gtkw_file, traces=ports):
|
||||
sim.run_until(args.sync_period * args.sync_clocks, run_passive=True)
|
||||
|
||||
|
||||
def main(*args, **kwargs):
|
||||
parser = main_parser()
|
||||
main_runner(parser, parser.parse_args(), *args, **kwargs)
|
|
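Assuming this hunk is the `nmigen.cli` module, a design script would typically hand its top-level elaboratable to `main()`; the `Blinker` class below is invented for illustration:

```python
# Hypothetical blinker.py showing how main() is normally invoked.
from nmigen import Elaboratable, Module, Signal
from nmigen.cli import main

class Blinker(Elaboratable):
    def __init__(self):
        self.led = Signal()

    def elaborate(self, platform):
        m = Module()
        m.d.sync += self.led.eq(~self.led)   # toggle on every 'sync' clock
        return m

if __name__ == "__main__":
    top = Blinker()
    # e.g. "python blinker.py generate -t v out.v"
    #  or  "python blinker.py simulate -c 100 -v out.vcd"
    main(top, ports=[top.led])
```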
@@ -0,0 +1,11 @@
|
|||
from .fhdl.structure import *
|
||||
from .fhdl.module import *
|
||||
from .fhdl.specials import *
|
||||
from .fhdl.bitcontainer import *
|
||||
from .fhdl.decorators import *
|
||||
# from .fhdl.simplify import *
|
||||
|
||||
from .sim import *
|
||||
|
||||
from .genlib.record import *
|
||||
from .genlib.fsm import *
|
|
@@ -0,0 +1,21 @@
|
|||
from ... import utils
|
||||
from ...hdl import ast
|
||||
from ..._utils import deprecated
|
||||
|
||||
|
||||
__all__ = ["log2_int", "bits_for", "value_bits_sign"]
|
||||
|
||||
|
||||
@deprecated("instead of `log2_int`, use `nmigen.utils.log2_int`")
|
||||
def log2_int(n, need_pow2=True):
|
||||
return utils.log2_int(n, need_pow2)
|
||||
|
||||
|
||||
@deprecated("instead of `bits_for`, use `nmigen.utils.bits_for`")
|
||||
def bits_for(n, require_sign_bit=False):
|
||||
return utils.bits_for(n, require_sign_bit)
|
||||
|
||||
|
||||
@deprecated("instead of `value_bits_sign(v)`, use `v.shape()`")
|
||||
def value_bits_sign(v):
|
||||
return ast.Value.cast(v).shape()
|
|
@@ -0,0 +1,35 @@
|
|||
from operator import itemgetter
|
||||
|
||||
|
||||
class ConvOutput:
|
||||
def __init__(self):
|
||||
self.main_source = ""
|
||||
self.data_files = dict()
|
||||
|
||||
def set_main_source(self, src):
|
||||
self.main_source = src
|
||||
|
||||
def add_data_file(self, filename_base, content):
|
||||
filename = filename_base
|
||||
i = 1
|
||||
while filename in self.data_files:
|
||||
parts = filename_base.split(".", maxsplit=1)
|
||||
parts[0] += "_" + str(i)
|
||||
filename = ".".join(parts)
|
||||
i += 1
|
||||
self.data_files[filename] = content
|
||||
return filename
|
||||
|
||||
def __str__(self):
|
||||
r = self.main_source + "\n"
|
||||
for filename, content in sorted(self.data_files.items(),
|
||||
key=itemgetter(0)):
|
||||
r += filename + ":\n" + content
|
||||
return r
|
||||
|
||||
def write(self, main_filename):
|
||||
with open(main_filename, "w") as f:
|
||||
f.write(self.main_source)
|
||||
for filename, content in self.data_files.items():
|
||||
with open(filename, "w") as f:
|
||||
f.write(content)
|
|
@@ -0,0 +1,55 @@
|
|||
from ...hdl.ast import *
|
||||
from ...hdl.xfrm import ResetInserter as NativeResetInserter
|
||||
from ...hdl.xfrm import EnableInserter as NativeEnableInserter
|
||||
from ...hdl.xfrm import DomainRenamer as NativeDomainRenamer
|
||||
from ..._utils import deprecated
|
||||
|
||||
|
||||
__all__ = ["ResetInserter", "CEInserter", "ClockDomainsRenamer"]
|
||||
|
||||
|
||||
class _CompatControlInserter:
|
||||
_control_name = None
|
||||
_native_inserter = None
|
||||
|
||||
def __init__(self, clock_domains=None):
|
||||
self.clock_domains = clock_domains
|
||||
|
||||
def __call__(self, module):
|
||||
if self.clock_domains is None:
|
||||
signals = {self._control_name: ("sync", Signal(name=self._control_name))}
|
||||
else:
|
||||
def name(cd):
|
||||
return self._control_name + "_" + cd
|
||||
signals = {name(cd): (cd, Signal(name=name(cd))) for cd in self.clock_domains}
|
||||
for name, (cd, signal) in signals.items():
|
||||
setattr(module, name, signal)
|
||||
return self._native_inserter(dict(signals.values()))(module)
|
||||
|
||||
|
||||
@deprecated("instead of `migen.fhdl.decorators.ResetInserter`, "
|
||||
"use `nmigen.hdl.xfrm.ResetInserter`; note that nMigen ResetInserter accepts "
|
||||
"a dict of reset signals (or a single reset signal) as an argument, not "
|
||||
"a set of clock domain names (or a single clock domain name)")
|
||||
class CompatResetInserter(_CompatControlInserter):
|
||||
_control_name = "reset"
|
||||
_native_inserter = NativeResetInserter
|
||||
|
||||
|
||||
@deprecated("instead of `migen.fhdl.decorators.CEInserter`, "
|
||||
"use `nmigen.hdl.xfrm.EnableInserter`; note that nMigen EnableInserter accepts "
|
||||
"a dict of enable signals (or a single enable signal) as an argument, not "
|
||||
"a set of clock domain names (or a single clock domain name)")
|
||||
class CompatCEInserter(_CompatControlInserter):
|
||||
_control_name = "ce"
|
||||
_native_inserter = NativeEnableInserter
|
||||
|
||||
|
||||
class CompatClockDomainsRenamer(NativeDomainRenamer):
|
||||
def __init__(self, cd_remapping):
|
||||
super().__init__(cd_remapping)
|
||||
|
||||
|
||||
ResetInserter = CompatResetInserter
|
||||
CEInserter = CompatCEInserter
|
||||
ClockDomainsRenamer = CompatClockDomainsRenamer
|
|
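The deprecation messages above describe the key difference between the shims and the native transforms: the Migen-style inserters take clock-domain names, while the nMigen ones take the control signals themselves. A hedged sketch of the migration (the `Counter` design and `ext_rst` signal are invented):

```python
# Sketch of moving from the compat ResetInserter to the native one.
from nmigen import Elaboratable, Module, Signal
from nmigen.hdl.xfrm import ResetInserter as NativeResetInserter

class Counter(Elaboratable):
    def __init__(self):
        self.count = Signal(8)

    def elaborate(self, platform):
        m = Module()
        m.d.sync += self.count.eq(self.count + 1)
        return m

ext_rst = Signal()
# Compat (Migen) style via the shim above: ResetInserter()(dut) synthesizes a
# `reset` control signal for the 'sync' domain behind your back.
# Native style: you pass the reset signal for each domain explicitly.
resettable = NativeResetInserter({"sync": ext_rst})(Counter())
```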
@@ -0,0 +1,163 @@
|
|||
from collections.abc import Iterable
|
||||
|
||||
from ..._utils import flatten, deprecated
|
||||
from ...hdl import dsl, ir
|
||||
|
||||
|
||||
__all__ = ["Module", "FinalizeError"]
|
||||
|
||||
|
||||
def _flat_list(e):
|
||||
if isinstance(e, Iterable):
|
||||
return list(flatten(e))
|
||||
else:
|
||||
return [e]
|
||||
|
||||
|
||||
class CompatFinalizeError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
FinalizeError = CompatFinalizeError
|
||||
|
||||
|
||||
class _CompatModuleProxy:
|
||||
def __init__(self, cm):
|
||||
object.__setattr__(self, "_cm", cm)
|
||||
|
||||
|
||||
class _CompatModuleComb(_CompatModuleProxy):
|
||||
@deprecated("instead of `self.comb +=`, use `m.d.comb +=`")
|
||||
def __iadd__(self, assigns):
|
||||
self._cm._module._add_statement(assigns, domain=None, depth=0, compat_mode=True)
|
||||
return self
|
||||
|
||||
|
||||
class _CompatModuleSyncCD:
|
||||
def __init__(self, cm, cd):
|
||||
self._cm = cm
|
||||
self._cd = cd
|
||||
|
||||
@deprecated("instead of `self.sync.<domain> +=`, use `m.d.<domain> +=`")
|
||||
def __iadd__(self, assigns):
|
||||
self._cm._module._add_statement(assigns, domain=self._cd, depth=0, compat_mode=True)
|
||||
return self
|
||||
|
||||
|
||||
class _CompatModuleSync(_CompatModuleProxy):
|
||||
@deprecated("instead of `self.sync +=`, use `m.d.sync +=`")
|
||||
def __iadd__(self, assigns):
|
||||
self._cm._module._add_statement(assigns, domain="sync", depth=0, compat_mode=True)
|
||||
return self
|
||||
|
||||
def __getattr__(self, name):
|
||||
return _CompatModuleSyncCD(self._cm, name)
|
||||
|
||||
def __setattr__(self, name, value):
|
||||
if not isinstance(value, _CompatModuleSyncCD):
|
||||
raise AttributeError("Attempted to assign sync property - use += instead")
|
||||
|
||||
|
||||
class _CompatModuleSpecials(_CompatModuleProxy):
|
||||
@deprecated("instead of `self.specials.<name> =`, use `m.submodules.<name> =`")
|
||||
def __setattr__(self, name, value):
|
||||
self._cm._submodules.append((name, value))
|
||||
setattr(self._cm, name, value)
|
||||
|
||||
@deprecated("instead of `self.specials +=`, use `m.submodules +=`")
|
||||
def __iadd__(self, other):
|
||||
self._cm._submodules += [(None, e) for e in _flat_list(other)]
|
||||
return self
|
||||
|
||||
|
||||
class _CompatModuleSubmodules(_CompatModuleProxy):
|
||||
@deprecated("instead of `self.submodules.<name> =`, use `m.submodules.<name> =`")
|
||||
def __setattr__(self, name, value):
|
||||
self._cm._submodules.append((name, value))
|
||||
setattr(self._cm, name, value)
|
||||
|
||||
@deprecated("instead of `self.submodules +=`, use `m.submodules +=`")
|
||||
def __iadd__(self, other):
|
||||
self._cm._submodules += [(None, e) for e in _flat_list(other)]
|
||||
return self
|
||||
|
||||
|
||||
class _CompatModuleClockDomains(_CompatModuleProxy):
|
||||
@deprecated("instead of `self.clock_domains.<name> =`, use `m.domains.<name> =`")
|
||||
def __setattr__(self, name, value):
|
||||
self.__iadd__(value)
|
||||
setattr(self._cm, name, value)
|
||||
|
||||
@deprecated("instead of `self.clock_domains +=`, use `m.domains +=`")
|
||||
def __iadd__(self, other):
|
||||
self._cm._module.domains += _flat_list(other)
|
||||
return self
|
||||
|
||||
|
||||
class CompatModule(ir.Elaboratable):
|
||||
_MustUse__silence = True
|
||||
|
||||
# Actually returns another nMigen Elaboratable (nmigen.dsl.Module), not a Fragment.
|
||||
def get_fragment(self):
|
||||
assert not self.get_fragment_called
|
||||
self.get_fragment_called = True
|
||||
self.finalize()
|
||||
return self._module
|
||||
|
||||
def elaborate(self, platform):
|
||||
if not self.get_fragment_called:
|
||||
self.get_fragment()
|
||||
return self._module
|
||||
|
||||
def __getattr__(self, name):
|
||||
if name == "comb":
|
||||
return _CompatModuleComb(self)
|
||||
elif name == "sync":
|
||||
return _CompatModuleSync(self)
|
||||
elif name == "specials":
|
||||
return _CompatModuleSpecials(self)
|
||||
elif name == "submodules":
|
||||
return _CompatModuleSubmodules(self)
|
||||
elif name == "clock_domains":
|
||||
return _CompatModuleClockDomains(self)
|
||||
elif name == "finalized":
|
||||
self.finalized = False
|
||||
return self.finalized
|
||||
elif name == "_module":
|
||||
self._module = dsl.Module()
|
||||
return self._module
|
||||
elif name == "_submodules":
|
||||
self._submodules = []
|
||||
return self._submodules
|
||||
elif name == "_clock_domains":
|
||||
self._clock_domains = []
|
||||
return self._clock_domains
|
||||
elif name == "get_fragment_called":
|
||||
self.get_fragment_called = False
|
||||
return self.get_fragment_called
|
||||
else:
|
||||
raise AttributeError("'{}' object has no attribute '{}'"
|
||||
.format(type(self).__name__, name))
|
||||
|
||||
def finalize(self, *args, **kwargs):
|
||||
def finalize_submodules():
|
||||
for name, submodule in self._submodules:
|
||||
if not hasattr(submodule, "finalize"):
|
||||
continue
|
||||
if submodule.finalized:
|
||||
continue
|
||||
submodule.finalize(*args, **kwargs)
|
||||
|
||||
if not self.finalized:
|
||||
self.finalized = True
|
||||
finalize_submodules()
|
||||
self.do_finalize(*args, **kwargs)
|
||||
finalize_submodules()
|
||||
for name, submodule in self._submodules:
|
||||
self._module._add_submodule(submodule, name)
|
||||
|
||||
def do_finalize(self):
|
||||
pass
|
||||
|
||||
|
||||
Module = CompatModule
|
|
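As a side-by-side sketch of the migration this compatibility `Module` enables (both classes are illustrative, not taken from the repository):

```python
from nmigen import Elaboratable, Module, Signal
from nmigen.compat import Module as CompatModule

class OldStyle(CompatModule):                  # Migen idiom, routed through the shim above
    def __init__(self):
        self.out = Signal(4)
        self.sync += self.out.eq(self.out + 1)     # emits a DeprecationWarning

class NewStyle(Elaboratable):                  # native nMigen idiom
    def __init__(self):
        self.out = Signal(4)

    def elaborate(self, platform):
        m = Module()
        m.d.sync += self.out.eq(self.out + 1)
        return m
```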
@@ -0,0 +1,140 @@
|
|||
import warnings
|
||||
|
||||
from ..._utils import deprecated, extend
|
||||
from ...hdl.ast import *
|
||||
from ...hdl.ir import Elaboratable
|
||||
from ...hdl.mem import Memory as NativeMemory
|
||||
from ...hdl.ir import Fragment, Instance
|
||||
from ...hdl.dsl import Module
|
||||
from .module import Module as CompatModule
|
||||
from .structure import Signal
|
||||
from ...lib.io import Pin
|
||||
|
||||
|
||||
__all__ = ["TSTriple", "Instance", "Memory", "READ_FIRST", "WRITE_FIRST", "NO_CHANGE"]
|
||||
|
||||
|
||||
class TSTriple:
|
||||
def __init__(self, bits_sign=None, min=None, max=None, reset_o=0, reset_oe=0, reset_i=0,
|
||||
name=None):
|
||||
self.o = Signal(bits_sign, min=min, max=max, reset=reset_o,
|
||||
name=None if name is None else name + "_o")
|
||||
self.oe = Signal(reset=reset_oe,
|
||||
name=None if name is None else name + "_oe")
|
||||
self.i = Signal(bits_sign, min=min, max=max, reset=reset_i,
|
||||
name=None if name is None else name + "_i")
|
||||
|
||||
def __len__(self):
|
||||
return len(self.o)
|
||||
|
||||
def get_tristate(self, io):
|
||||
return Tristate(io, self.o, self.oe, self.i)
|
||||
|
||||
|
||||
class Tristate(Elaboratable):
|
||||
def __init__(self, target, o, oe, i=None):
|
||||
self.target = target
|
||||
self.o = o
|
||||
self.oe = oe
|
||||
self.i = i if i is not None else None
|
||||
|
||||
def elaborate(self, platform):
|
||||
if hasattr(platform, "get_input_output"):
|
||||
pin = Pin(len(self.target), dir="oe" if self.i is None else "io")
|
||||
pin.o = self.o
|
||||
pin.oe = self.oe
|
||||
if self.i is not None:
|
||||
pin.i = self.i
|
||||
return platform.get_input_output(pin, self.target, attrs={}, invert=None)
|
||||
|
||||
m = Module()
|
||||
m.d.comb += self.i.eq(self.target)
|
||||
m.submodules += Instance("$tribuf",
|
||||
p_WIDTH=len(self.target),
|
||||
i_EN=self.oe,
|
||||
i_A=self.o,
|
||||
o_Y=self.target,
|
||||
)
|
||||
|
||||
f = m.elaborate(platform)
|
||||
f.flatten = True
|
||||
return f
|
||||
|
||||
|
||||
(READ_FIRST, WRITE_FIRST, NO_CHANGE) = range(3)
|
||||
|
||||
|
||||
class _MemoryPort(CompatModule):
|
||||
def __init__(self, adr, dat_r, we=None, dat_w=None, async_read=False, re=None,
|
||||
we_granularity=0, mode=WRITE_FIRST, clock_domain="sync"):
|
||||
self.adr = adr
|
||||
self.dat_r = dat_r
|
||||
self.we = we
|
||||
self.dat_w = dat_w
|
||||
self.async_read = async_read
|
||||
self.re = re
|
||||
self.we_granularity = we_granularity
|
||||
self.mode = mode
|
||||
self.clock = ClockSignal(clock_domain)
|
||||
|
||||
|
||||
@extend(NativeMemory)
|
||||
@deprecated("it is not necessary or permitted to add Memory as a special or submodule")
|
||||
def elaborate(self, platform):
|
||||
return Fragment()
|
||||
|
||||
|
||||
class CompatMemory(NativeMemory, Elaboratable):
|
||||
def __init__(self, width, depth, init=None, name=None):
|
||||
super().__init__(width=width, depth=depth, init=init, name=name)
|
||||
|
||||
@deprecated("instead of `get_port()`, use `read_port()` and `write_port()`")
|
||||
def get_port(self, write_capable=False, async_read=False, has_re=False, we_granularity=0,
|
||||
mode=WRITE_FIRST, clock_domain="sync"):
|
||||
if we_granularity >= self.width:
|
||||
warnings.warn("do not specify `we_granularity` greater than memory width, as it "
|
||||
"is a hard error in non-compatibility mode",
|
||||
DeprecationWarning, stacklevel=1)
|
||||
we_granularity = 0
|
||||
if we_granularity == 0:
|
||||
warnings.warn("instead of `we_granularity=0`, use `we_granularity=None` or avoid "
|
||||
"specifying it at all, as it is a hard error in non-compatibility mode",
|
||||
DeprecationWarning, stacklevel=1)
|
||||
we_granularity = None
|
||||
assert mode != NO_CHANGE
|
||||
rdport = self.read_port(domain="comb" if async_read else clock_domain,
|
||||
transparent=mode == WRITE_FIRST)
|
||||
rdport.addr.name = "{}_addr".format(self.name)
|
||||
adr = rdport.addr
|
||||
dat_r = rdport.data
|
||||
if write_capable:
|
||||
wrport = self.write_port(domain=clock_domain, granularity=we_granularity)
|
||||
wrport.addr = rdport.addr
|
||||
we = wrport.en
|
||||
dat_w = wrport.data
|
||||
else:
|
||||
we = None
|
||||
dat_w = None
|
||||
if has_re:
|
||||
if mode == READ_FIRST:
|
||||
re = rdport.en
|
||||
else:
|
||||
warnings.warn("the combination of `has_re=True` and `mode=WRITE_FIRST` has "
|
||||
"surprising behavior: keeping `re` low would merely latch "
|
||||
"the address, while the data will change with changing memory "
|
||||
"contents; avoid using `re` with transparent ports as it is a hard "
|
||||
"error in non-compatibility mode",
|
||||
DeprecationWarning, stacklevel=1)
|
||||
re = Signal()
|
||||
else:
|
||||
re = None
|
||||
mp = _MemoryPort(adr, dat_r, we, dat_w,
|
||||
async_read, re, we_granularity, mode,
|
||||
clock_domain)
|
||||
mp.submodules.rdport = rdport
|
||||
if write_capable:
|
||||
mp.submodules.wrport = wrport
|
||||
return mp
|
||||
|
||||
|
||||
Memory = CompatMemory
|
|
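The deprecated `get_port()` above bundles what native nMigen expresses as explicit read and write ports. A brief sketch with invented sizes and wiring:

```python
# Native-style memory ports; not taken from the commit.
from nmigen import Module, Memory

m = Module()
mem = Memory(width=8, depth=16, init=[0x55, 0xAA])

rdport = mem.read_port(domain="sync", transparent=True)   # replaces mode=WRITE_FIRST
wrport = mem.write_port(domain="sync")
m.submodules.rdport = rdport
m.submodules.wrport = wrport
m.d.comb += rdport.addr.eq(wrport.addr)                   # read back the written location
```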
@@ -0,0 +1,185 @@
|
|||
import builtins
|
||||
import warnings
|
||||
from collections import OrderedDict
|
||||
|
||||
from ...utils import bits_for
|
||||
from ..._utils import deprecated, extend
|
||||
from ...hdl import ast
|
||||
from ...hdl.ast import (DUID,
|
||||
Shape, signed, unsigned,
|
||||
Value, Const, C, Mux, Slice as _Slice, Part, Cat, Repl,
|
||||
Signal as NativeSignal,
|
||||
ClockSignal, ResetSignal,
|
||||
Array, ArrayProxy as _ArrayProxy)
|
||||
from ...hdl.cd import ClockDomain
|
||||
|
||||
|
||||
__all__ = ["DUID", "wrap", "Mux", "Cat", "Replicate", "Constant", "C", "Signal", "ClockSignal",
|
||||
"ResetSignal", "If", "Case", "Array", "ClockDomain"]
|
||||
|
||||
|
||||
@deprecated("instead of `wrap`, use `Value.cast`")
|
||||
def wrap(v):
|
||||
return Value.cast(v)
|
||||
|
||||
|
||||
class CompatSignal(NativeSignal):
|
||||
def __init__(self, bits_sign=None, name=None, variable=False, reset=0,
|
||||
reset_less=False, name_override=None, min=None, max=None,
|
||||
related=None, attr=None, src_loc_at=0, **kwargs):
|
||||
if min is not None or max is not None:
|
||||
warnings.warn("instead of `Signal(min={min}, max={max})`, "
|
||||
"use `Signal(range({min}, {max}))`"
|
||||
.format(min=min or 0, max=max or 2),
|
||||
DeprecationWarning, stacklevel=2 + src_loc_at)
|
||||
|
||||
if bits_sign is None:
|
||||
if min is None:
|
||||
min = 0
|
||||
if max is None:
|
||||
max = 2
|
||||
max -= 1 # make both bounds inclusive
|
||||
if min > max:
|
||||
raise ValueError("Lower bound {} should be less or equal to higher bound {}"
|
||||
.format(min, max + 1))
|
||||
sign = min < 0 or max < 0
|
||||
if min == max:
|
||||
bits = 0
|
||||
else:
|
||||
bits = builtins.max(bits_for(min, sign), bits_for(max, sign))
|
||||
shape = signed(bits) if sign else unsigned(bits)
|
||||
else:
|
||||
if not (min is None and max is None):
|
||||
raise ValueError("Only one of bits/signedness or bounds may be specified")
|
||||
shape = bits_sign
|
||||
|
||||
super().__init__(shape=shape, name=name_override or name,
|
||||
reset=reset, reset_less=reset_less,
|
||||
attrs=attr, src_loc_at=1 + src_loc_at, **kwargs)
|
||||
|
||||
|
||||
Signal = CompatSignal
|
||||
|
||||
|
||||
@deprecated("instead of `Constant`, use `Const`")
|
||||
def Constant(value, bits_sign=None):
|
||||
return Const(value, bits_sign)
|
||||
|
||||
|
||||
@deprecated("instead of `Replicate`, use `Repl`")
|
||||
def Replicate(v, n):
|
||||
return Repl(v, n)
|
||||
|
||||
|
||||
@extend(Const)
|
||||
@property
|
||||
@deprecated("instead of `.nbits`, use `.width`")
|
||||
def nbits(self):
|
||||
return self.width
|
||||
|
||||
|
||||
@extend(NativeSignal)
|
||||
@property
|
||||
@deprecated("instead of `.nbits`, use `.width`")
|
||||
def nbits(self):
|
||||
return self.width
|
||||
|
||||
|
||||
@extend(NativeSignal)
|
||||
@NativeSignal.nbits.setter
|
||||
@deprecated("instead of `.nbits = x`, use `.width = x`")
|
||||
def nbits(self, value):
|
||||
self.width = value
|
||||
|
||||
|
||||
@extend(NativeSignal)
|
||||
@deprecated("instead of `.part`, use `.bit_select`")
|
||||
def part(self, offset, width):
|
||||
return Part(self, offset, width, src_loc_at=2)
|
||||
|
||||
|
||||
@extend(Cat)
|
||||
@property
|
||||
@deprecated("instead of `.l`, use `.parts`")
|
||||
def l(self):
|
||||
return self.parts
|
||||
|
||||
|
||||
@extend(ast.Operator)
|
||||
@property
|
||||
@deprecated("instead of `.op`, use `.operator`")
|
||||
def op(self):
|
||||
return self.operator
|
||||
|
||||
|
||||
@extend(_ArrayProxy)
|
||||
@property
|
||||
@deprecated("instead `_ArrayProxy.choices`, use `ArrayProxy.elems`")
|
||||
def choices(self):
|
||||
return self.elems
|
||||
|
||||
|
||||
class If(ast.Switch):
|
||||
@deprecated("instead of `If(cond, ...)`, use `with m.If(cond): ...`")
|
||||
def __init__(self, cond, *stmts):
|
||||
cond = Value.cast(cond)
|
||||
if len(cond) != 1:
|
||||
cond = cond.bool()
|
||||
super().__init__(cond, {("1",): ast.Statement.cast(stmts)})
|
||||
|
||||
@deprecated("instead of `.Elif(cond, ...)`, use `with m.Elif(cond): ...`")
|
||||
def Elif(self, cond, *stmts):
|
||||
cond = Value.cast(cond)
|
||||
if len(cond) != 1:
|
||||
cond = cond.bool()
|
||||
self.cases = OrderedDict((("-" + k,), v) for (k,), v in self.cases.items())
|
||||
self.cases[("1" + "-" * len(self.test),)] = ast.Statement.cast(stmts)
|
||||
self.test = Cat(self.test, cond)
|
||||
return self
|
||||
|
||||
@deprecated("instead of `.Else(...)`, use `with m.Else(): ...`")
|
||||
def Else(self, *stmts):
|
||||
self.cases[()] = ast.Statement.cast(stmts)
|
||||
return self
|
||||
|
||||
|
||||
class Case(ast.Switch):
|
||||
@deprecated("instead of `Case(test, { value: stmts })`, use `with m.Switch(test):` and "
|
||||
"`with m.Case(value): stmts`; instead of `\"default\": stmts`, use "
|
||||
"`with m.Case(): stmts`")
|
||||
def __init__(self, test, cases):
|
||||
new_cases = []
|
||||
default = None
|
||||
for k, v in cases.items():
|
||||
if isinstance(k, (bool, int)):
|
||||
k = Const(k)
|
||||
if (not isinstance(k, Const)
|
||||
and not (isinstance(k, str) and k == "default")):
|
||||
raise TypeError("Case object is not a Migen constant")
|
||||
if isinstance(k, str) and k == "default":
|
||||
default = v
|
||||
continue
|
||||
else:
|
||||
k = k.value
|
||||
new_cases.append((k, v))
|
||||
if default is not None:
|
||||
new_cases.append((None, default))
|
||||
super().__init__(test, OrderedDict(new_cases))
|
||||
|
||||
@deprecated("instead of `Case(...).makedefault()`, use an explicit default case: "
|
||||
"`with m.Case(): ...`")
|
||||
def makedefault(self, key=None):
|
||||
if key is None:
|
||||
for choice in self.cases.keys():
|
||||
if (key is None
|
||||
or (isinstance(choice, str) and choice == "default")
|
||||
or choice > key):
|
||||
key = choice
|
||||
elif isinstance(key, str) and key == "default":
|
||||
key = ()
|
||||
else:
|
||||
key = ("{:0{}b}".format(ast.Value.cast(key).value, len(self.test)),)
|
||||
stmts = self.cases[key]
|
||||
del self.cases[key]
|
||||
self.cases[()] = stmts
|
||||
return self
|
|
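The `If` and `Case` shims above map onto nMigen's context-manager syntax. A short sketch with invented signals:

```python
from nmigen import Module, Signal

m = Module()
sel = Signal(2)
out = Signal(8)
count = Signal(8)

with m.Switch(sel):                 # replaces compat Case(sel, {...})
    with m.Case(0):
        m.d.comb += out.eq(1)
    with m.Case(1):
        m.d.comb += out.eq(2)
    with m.Case():                  # replaces the "default" key
        m.d.comb += out.eq(0)

with m.If(sel == 3):                # replaces compat If(sel == 3, ...)
    m.d.sync += count.eq(count + 1)
```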
@@ -0,0 +1,31 @@
|
|||
import warnings
|
||||
|
||||
from ...hdl.ir import Fragment
|
||||
from ...hdl.cd import ClockDomain
|
||||
from ...back import verilog
|
||||
from .conv_output import ConvOutput
|
||||
|
||||
|
||||
def convert(fi, ios=None, name="top", special_overrides=dict(),
|
||||
attr_translate=None, create_clock_domains=True,
|
||||
display_run=False):
|
||||
if display_run:
|
||||
warnings.warn("`display_run=True` support has been removed",
|
||||
DeprecationWarning, stacklevel=1)
|
||||
if special_overrides:
|
||||
warnings.warn("`special_overrides` support as well as `Special` has been removed",
|
||||
DeprecationWarning, stacklevel=1)
|
||||
# TODO: attr_translate
|
||||
|
||||
def missing_domain(name):
|
||||
if create_clock_domains:
|
||||
return ClockDomain(name)
|
||||
v_output = verilog.convert(
|
||||
elaboratable=fi.get_fragment(),
|
||||
name=name,
|
||||
ports=ios or (),
|
||||
missing_domain=missing_domain
|
||||
)
|
||||
output = ConvOutput()
|
||||
output.set_main_source(v_output)
|
||||
return output
|
|
@@ -0,0 +1,74 @@
|
|||
import warnings
|
||||
|
||||
from ..._utils import deprecated
|
||||
from ...lib.cdc import FFSynchronizer as NativeFFSynchronizer
|
||||
from ...lib.cdc import PulseSynchronizer as NativePulseSynchronizer
|
||||
from ...hdl.ast import *
|
||||
from ..fhdl.module import CompatModule
|
||||
from ..fhdl.structure import If
|
||||
|
||||
|
||||
__all__ = ["MultiReg", "PulseSynchronizer", "GrayCounter", "GrayDecoder"]
|
||||
|
||||
|
||||
class MultiReg(NativeFFSynchronizer):
|
||||
def __init__(self, i, o, odomain="sync", n=2, reset=0):
|
||||
old_opts = []
|
||||
new_opts = []
|
||||
if odomain != "sync":
|
||||
old_opts.append(", odomain={!r}".format(odomain))
|
||||
new_opts.append(", o_domain={!r}".format(odomain))
|
||||
if n != 2:
|
||||
old_opts.append(", n={!r}".format(n))
|
||||
new_opts.append(", stages={!r}".format(n))
|
||||
warnings.warn("instead of `MultiReg(...{})`, use `FFSynchronizer(...{})`"
|
||||
.format("".join(old_opts), "".join(new_opts)),
|
||||
DeprecationWarning, stacklevel=2)
|
||||
super().__init__(i, o, o_domain=odomain, stages=n, reset=reset)
|
||||
self.odomain = odomain
|
||||
|
||||
|
||||
@deprecated("instead of `migen.genlib.cdc.PulseSynchronizer`, use `nmigen.lib.cdc.PulseSynchronizer`")
|
||||
class PulseSynchronizer(NativePulseSynchronizer):
|
||||
def __init__(self, idomain, odomain):
|
||||
super().__init__(i_domain=idomain, o_domain=odomain)
|
||||
|
||||
|
||||
@deprecated("instead of `migen.genlib.cdc.GrayCounter`, use `nmigen.lib.coding.GrayEncoder`")
|
||||
class GrayCounter(CompatModule):
|
||||
def __init__(self, width):
|
||||
self.ce = Signal()
|
||||
self.q = Signal(width)
|
||||
self.q_next = Signal(width)
|
||||
self.q_binary = Signal(width)
|
||||
self.q_next_binary = Signal(width)
|
||||
|
||||
###
|
||||
|
||||
self.comb += [
|
||||
If(self.ce,
|
||||
self.q_next_binary.eq(self.q_binary + 1)
|
||||
).Else(
|
||||
self.q_next_binary.eq(self.q_binary)
|
||||
),
|
||||
self.q_next.eq(self.q_next_binary ^ self.q_next_binary[1:])
|
||||
]
|
||||
self.sync += [
|
||||
self.q_binary.eq(self.q_next_binary),
|
||||
self.q.eq(self.q_next)
|
||||
]
|
||||
|
||||
|
||||
@deprecated("instead of `migen.genlib.cdc.GrayDecoder`, use `nmigen.lib.coding.GrayDecoder`")
|
||||
class GrayDecoder(CompatModule):
|
||||
def __init__(self, width):
|
||||
self.i = Signal(width)
|
||||
self.o = Signal(width, reset_less=True)
|
||||
|
||||
# # #
|
||||
|
||||
o_comb = Signal(width)
|
||||
self.comb += o_comb[-1].eq(self.i[-1])
|
||||
for i in reversed(range(width-1)):
|
||||
self.comb += o_comb[i].eq(o_comb[i+1] ^ self.i[i])
|
||||
self.sync += self.o.eq(o_comb)
|
|
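The `MultiReg` warning above spells out the rename of its arguments; a minimal sketch of the native equivalent (signal and domain names are invented):

```python
from nmigen import Module, Signal
from nmigen.lib.cdc import FFSynchronizer

m = Module()
async_in = Signal()
synced = Signal()

# Old: MultiReg(async_in, synced, odomain="fast", n=3)
m.submodules += FFSynchronizer(async_in, synced, o_domain="fast", stages=3)
```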
@@ -0,0 +1,4 @@
|
|||
from ...lib.coding import *
|
||||
|
||||
|
||||
__all__ = ["Encoder", "PriorityEncoder", "Decoder", "PriorityDecoder"]
|
|
@@ -0,0 +1,147 @@
|
|||
from ..._utils import deprecated, extend
|
||||
from ...lib.fifo import (FIFOInterface as NativeFIFOInterface,
|
||||
SyncFIFO as NativeSyncFIFO, SyncFIFOBuffered as NativeSyncFIFOBuffered,
|
||||
AsyncFIFO as NativeAsyncFIFO, AsyncFIFOBuffered as NativeAsyncFIFOBuffered)
|
||||
|
||||
|
||||
__all__ = ["_FIFOInterface", "SyncFIFO", "SyncFIFOBuffered", "AsyncFIFO", "AsyncFIFOBuffered"]
|
||||
|
||||
|
||||
class CompatFIFOInterface(NativeFIFOInterface):
|
||||
@deprecated("attribute `fwft` must be provided to FIFOInterface constructor")
|
||||
def __init__(self, width, depth):
|
||||
super().__init__(width=width, depth=depth, fwft=False)
|
||||
del self.fwft
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.din`, use `fifo.w_data`")
|
||||
def din(self):
|
||||
return self.w_data
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.din.setter
|
||||
@deprecated("instead of `fifo.din = x`, use `fifo.w_data = x`")
|
||||
def din(self, w_data):
|
||||
self.w_data = w_data
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.writable`, use `fifo.w_rdy`")
|
||||
def writable(self):
|
||||
return self.w_rdy
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.writable.setter
|
||||
@deprecated("instead of `fifo.writable = x`, use `fifo.w_rdy = x`")
|
||||
def writable(self, w_rdy):
|
||||
self.w_rdy = w_rdy
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.we`, use `fifo.w_en`")
|
||||
def we(self):
|
||||
return self.w_en
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.we.setter
|
||||
@deprecated("instead of `fifo.we = x`, use `fifo.w_en = x`")
|
||||
def we(self, w_en):
|
||||
self.w_en = w_en
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.dout`, use `fifo.r_data`")
|
||||
def dout(self):
|
||||
return self.r_data
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.dout.setter
|
||||
@deprecated("instead of `fifo.dout = x`, use `fifo.r_data = x`")
|
||||
def dout(self, r_data):
|
||||
self.r_data = r_data
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.readable`, use `fifo.r_rdy`")
|
||||
def readable(self):
|
||||
return self.r_rdy
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.readable.setter
|
||||
@deprecated("instead of `fifo.readable = x`, use `fifo.r_rdy = x`")
|
||||
def readable(self, r_rdy):
|
||||
self.r_rdy = r_rdy
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@property
|
||||
@deprecated("instead of `fifo.re`, use `fifo.r_en`")
|
||||
def re(self):
|
||||
return self.r_en
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
@NativeFIFOInterface.re.setter
|
||||
@deprecated("instead of `fifo.re = x`, use `fifo.r_en = x`")
|
||||
def re(self, r_en):
|
||||
self.r_en = r_en
|
||||
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
def read(self):
|
||||
"""Read method for simulation."""
|
||||
assert (yield self.r_rdy)
|
||||
value = (yield self.r_data)
|
||||
yield self.r_en.eq(1)
|
||||
yield
|
||||
yield self.r_en.eq(0)
|
||||
yield
|
||||
return value
|
||||
|
||||
@extend(NativeFIFOInterface)
|
||||
def write(self, data):
|
||||
"""Write method for simulation."""
|
||||
assert (yield self.w_rdy)
|
||||
yield self.w_data.eq(data)
|
||||
yield self.w_en.eq(1)
|
||||
yield
|
||||
yield self.w_en.eq(0)
|
||||
yield
|
||||
|
||||
|
||||
class CompatSyncFIFO(NativeSyncFIFO):
|
||||
def __init__(self, width, depth, fwft=True):
|
||||
super().__init__(width=width, depth=depth, fwft=fwft)
|
||||
|
||||
|
||||
class CompatSyncFIFOBuffered(NativeSyncFIFOBuffered):
|
||||
def __init__(self, width, depth):
|
||||
super().__init__(width=width, depth=depth)
|
||||
|
||||
|
||||
class CompatAsyncFIFO(NativeAsyncFIFO):
|
||||
def __init__(self, width, depth):
|
||||
super().__init__(width=width, depth=depth)
|
||||
|
||||
|
||||
class CompatAsyncFIFOBuffered(NativeAsyncFIFOBuffered):
|
||||
def __init__(self, width, depth):
|
||||
super().__init__(width=width, depth=depth)
|
||||
|
||||
|
||||
_FIFOInterface = CompatFIFOInterface
|
||||
SyncFIFO = CompatSyncFIFO
|
||||
SyncFIFOBuffered = CompatSyncFIFOBuffered
|
||||
AsyncFIFO = CompatAsyncFIFO
|
||||
AsyncFIFOBuffered = CompatAsyncFIFOBuffered
|
|
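The property shims above cover a straight rename of the FIFO interface; summarized as a sketch (the producer process is illustrative and would be run under a simulator):

```python
from nmigen.lib.fifo import SyncFIFO

fifo = SyncFIFO(width=8, depth=4)

def producer():
    # Old Migen names              ->  native nMigen names
    #   fifo.din / we / writable   ->  fifo.w_data / w_en / w_rdy
    #   fifo.dout / re / readable  ->  fifo.r_data / r_en / r_rdy
    yield fifo.w_data.eq(0x42)
    yield fifo.w_en.eq(1)
    yield
    yield fifo.w_en.eq(0)
```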
@@ -0,0 +1,193 @@
|
|||
from collections import OrderedDict
|
||||
|
||||
from ..._utils import deprecated, _ignore_deprecated
|
||||
from ...hdl.xfrm import ValueTransformer, StatementTransformer
|
||||
from ...hdl.ast import *
|
||||
from ...hdl.ast import Signal as NativeSignal
|
||||
from ..fhdl.module import CompatModule, CompatFinalizeError
|
||||
from ..fhdl.structure import Signal, If, Case
|
||||
|
||||
|
||||
__all__ = ["AnonymousState", "NextState", "NextValue", "FSM"]
|
||||
|
||||
|
||||
class AnonymousState:
|
||||
pass
|
||||
|
||||
|
||||
class NextState(Statement):
|
||||
def __init__(self, state):
|
||||
super().__init__()
|
||||
self.state = state
|
||||
|
||||
|
||||
class NextValue(Statement):
|
||||
def __init__(self, target, value):
|
||||
super().__init__()
|
||||
self.target = target
|
||||
self.value = value
|
||||
|
||||
|
||||
def _target_eq(a, b):
|
||||
if type(a) != type(b):
|
||||
return False
|
||||
ty = type(a)
|
||||
if ty == Const:
|
||||
return a.value == b.value
|
||||
elif ty == NativeSignal or ty == Signal:
|
||||
return a is b
|
||||
elif ty == Cat:
|
||||
return all(_target_eq(x, y) for x, y in zip(a.l, b.l))
|
||||
elif ty == Slice:
|
||||
return (_target_eq(a.value, b.value)
|
||||
and a.start == b.start
|
||||
and a.stop == b.stop)
|
||||
elif ty == Part:
|
||||
return (_target_eq(a.value, b.value)
|
||||
and a.offset == b.offset
|
||||
and a.width == b.width)
|
||||
elif ty == ArrayProxy:
|
||||
return (all(_target_eq(x, y) for x, y in zip(a.choices, b.choices))
|
||||
and _target_eq(a.key, b.key))
|
||||
else:
|
||||
raise ValueError("NextValue cannot be used with target type '{}'"
|
||||
.format(ty))
|
||||
|
||||
|
||||
class _LowerNext(ValueTransformer, StatementTransformer):
|
||||
def __init__(self, next_state_signal, encoding, aliases):
|
||||
self.next_state_signal = next_state_signal
|
||||
self.encoding = encoding
|
||||
self.aliases = aliases
|
||||
# (target, next_value_ce, next_value)
|
||||
self.registers = []
|
||||
|
||||
def _get_register_control(self, target):
|
||||
for x in self.registers:
|
||||
if _target_eq(target, x[0]):
|
||||
return x[1], x[2]
|
||||
raise KeyError
|
||||
|
||||
def on_unknown_statement(self, node):
|
||||
if isinstance(node, NextState):
|
||||
try:
|
||||
actual_state = self.aliases[node.state]
|
||||
except KeyError:
|
||||
actual_state = node.state
|
||||
return self.next_state_signal.eq(self.encoding[actual_state])
|
||||
elif isinstance(node, NextValue):
|
||||
try:
|
||||
next_value_ce, next_value = self._get_register_control(node.target)
|
||||
except KeyError:
|
||||
related = node.target if isinstance(node.target, Signal) else None
|
||||
next_value = Signal(node.target.shape(),
|
||||
name=None if related is None else "{}_fsm_next".format(related.name))
|
||||
next_value_ce = Signal(
|
||||
name=None if related is None else "{}_fsm_next_ce".format(related.name))
|
||||
self.registers.append((node.target, next_value_ce, next_value))
|
||||
return next_value.eq(node.value), next_value_ce.eq(1)
|
||||
else:
|
||||
return node
|
||||
|
||||
|
||||
@deprecated("instead of `migen.genlib.fsm.FSM()`, use `with m.FSM():`; note that there is no "
|
||||
"replacement for `{before,after}_{entering,leaving}` and `delayed_enter` methods")
|
||||
class FSM(CompatModule):
|
||||
def __init__(self, reset_state=None):
|
||||
self.actions = OrderedDict()
|
||||
self.state_aliases = dict()
|
||||
self.reset_state = reset_state
|
||||
|
||||
self.before_entering_signals = OrderedDict()
|
||||
self.before_leaving_signals = OrderedDict()
|
||||
self.after_entering_signals = OrderedDict()
|
||||
self.after_leaving_signals = OrderedDict()
|
||||
|
||||
def act(self, state, *statements):
|
||||
if self.finalized:
|
||||
raise CompatFinalizeError
|
||||
if self.reset_state is None:
|
||||
self.reset_state = state
|
||||
if state not in self.actions:
|
||||
self.actions[state] = []
|
||||
self.actions[state] += statements
|
||||
|
||||
def delayed_enter(self, name, target, delay):
|
||||
if self.finalized:
|
||||
raise CompatFinalizeError
|
||||
if delay > 0:
|
||||
state = name
|
||||
for i in range(delay):
|
||||
if i == delay - 1:
|
||||
next_state = target
|
||||
else:
|
||||
next_state = AnonymousState()
|
||||
self.act(state, NextState(next_state))
|
||||
state = next_state
|
||||
else:
|
||||
self.state_aliases[name] = target
|
||||
|
||||
def ongoing(self, state):
|
||||
is_ongoing = Signal()
|
||||
self.act(state, is_ongoing.eq(1))
|
||||
return is_ongoing
|
||||
|
||||
def _get_signal(self, d, state):
|
||||
if state not in self.actions:
|
||||
self.actions[state] = []
|
||||
try:
|
||||
return d[state]
|
||||
except KeyError:
|
||||
is_el = Signal()
|
||||
d[state] = is_el
|
||||
return is_el
|
||||
|
||||
def before_entering(self, state):
|
||||
return self._get_signal(self.before_entering_signals, state)
|
||||
|
||||
def before_leaving(self, state):
|
||||
return self._get_signal(self.before_leaving_signals, state)
|
||||
|
||||
def after_entering(self, state):
|
||||
signal = self._get_signal(self.after_entering_signals, state)
|
||||
self.sync += signal.eq(self.before_entering(state))
|
||||
return signal
|
||||
|
||||
def after_leaving(self, state):
|
||||
signal = self._get_signal(self.after_leaving_signals, state)
|
||||
self.sync += signal.eq(self.before_leaving(state))
|
||||
return signal
|
||||
|
||||
@_ignore_deprecated
|
||||
def do_finalize(self):
|
||||
nstates = len(self.actions)
|
||||
self.encoding = dict((s, n) for n, s in enumerate(self.actions.keys()))
|
||||
self.decoding = {n: s for s, n in self.encoding.items()}
|
||||
|
||||
decoder = lambda n: "{}/{}".format(self.decoding[n], n)
|
||||
self.state = Signal(range(nstates), reset=self.encoding[self.reset_state], decoder=decoder)
|
||||
self.next_state = Signal.like(self.state)
|
||||
|
||||
for state, signal in self.before_leaving_signals.items():
|
||||
encoded = self.encoding[state]
|
||||
self.comb += signal.eq((self.state == encoded) & ~(self.next_state == encoded))
|
||||
if self.reset_state in self.after_entering_signals:
|
||||
self.after_entering_signals[self.reset_state].reset = 1
|
||||
for state, signal in self.before_entering_signals.items():
|
||||
encoded = self.encoding[state]
|
||||
self.comb += signal.eq(~(self.state == encoded) & (self.next_state == encoded))
|
||||
|
||||
self._finalize_sync(self._lower_controls())
|
||||
|
||||
def _lower_controls(self):
|
||||
return _LowerNext(self.next_state, self.encoding, self.state_aliases)
|
||||
|
||||
def _finalize_sync(self, ls):
|
||||
cases = dict((self.encoding[k], ls.on_statement(v)) for k, v in self.actions.items() if v)
|
||||
self.comb += [
|
||||
self.next_state.eq(self.state),
|
||||
Case(self.state, cases).makedefault(self.encoding[self.reset_state])
|
||||
]
|
||||
self.sync += self.state.eq(self.next_state)
|
||||
for register, next_value_ce, next_value in ls.registers:
|
||||
self.sync += If(next_value_ce, register.eq(next_value))
|
|
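The deprecation message above points at `with m.FSM():` as the replacement; a hedged sketch of that style (states and signals are invented, and `NextValue` has no exact one-liner equivalent, so the mapping is approximate):

```python
from nmigen import Module, Signal

m = Module()
req = Signal()
ack = Signal()

with m.FSM(reset="IDLE"):
    with m.State("IDLE"):
        with m.If(req):
            m.next = "BUSY"            # replaces NextState("BUSY")
    with m.State("BUSY"):
        m.d.sync += ack.eq(1)          # roughly replaces NextValue(ack, 1)
        m.next = "IDLE"
```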
@@ -0,0 +1,195 @@
|
|||
from ...tracer import *
|
||||
from ..fhdl.structure import *
|
||||
|
||||
from functools import reduce
|
||||
from operator import or_
|
||||
|
||||
|
||||
(DIR_NONE, DIR_S_TO_M, DIR_M_TO_S) = range(3)
|
||||
|
||||
# Possible layout elements:
|
||||
# 1. (name, size)
|
||||
# 2. (name, size, direction)
|
||||
# 3. (name, sublayout)
|
||||
# size can be an int, or a (int, bool) tuple for signed numbers
|
||||
# sublayout must be a list
|
||||
|
||||
|
||||
def set_layout_parameters(layout, **layout_dict):
|
||||
def resolve(p):
|
||||
if isinstance(p, str):
|
||||
try:
|
||||
return layout_dict[p]
|
||||
except KeyError:
|
||||
return p
|
||||
else:
|
||||
return p
|
||||
|
||||
r = []
|
||||
for f in layout:
|
||||
if isinstance(f[1], (int, tuple, str)): # cases 1/2
|
||||
if len(f) == 3:
|
||||
r.append((f[0], resolve(f[1]), f[2]))
|
||||
else:
|
||||
r.append((f[0], resolve(f[1])))
|
||||
elif isinstance(f[1], list): # case 3
|
||||
r.append((f[0], set_layout_parameters(f[1], **layout_dict)))
|
||||
else:
|
||||
raise TypeError
|
||||
return r
|
||||
|
||||
|
||||
def layout_len(layout):
|
||||
r = 0
|
||||
for f in layout:
|
||||
if isinstance(f[1], (int, tuple)): # cases 1/2
|
||||
if len(f) == 3:
|
||||
fname, fsize, fdirection = f
|
||||
else:
|
||||
fname, fsize = f
|
||||
elif isinstance(f[1], list): # case 3
|
||||
fname, fsublayout = f
|
||||
fsize = layout_len(fsublayout)
|
||||
else:
|
||||
raise TypeError
|
||||
if isinstance(fsize, tuple):
|
||||
r += fsize[0]
|
||||
else:
|
||||
r += fsize
|
||||
return r
|
||||
|
||||
|
||||
def layout_get(layout, name):
|
||||
for f in layout:
|
||||
if f[0] == name:
|
||||
return f
|
||||
raise KeyError(name)
|
||||
|
||||
|
||||
def layout_partial(layout, *elements):
|
||||
r = []
|
||||
for path in elements:
|
||||
path_s = path.split("/")
|
||||
last = path_s.pop()
|
||||
copy_ref = layout
|
||||
insert_ref = r
|
||||
for hop in path_s:
|
||||
name, copy_ref = layout_get(copy_ref, hop)
|
||||
try:
|
||||
name, insert_ref = layout_get(insert_ref, hop)
|
||||
except KeyError:
|
||||
new_insert_ref = []
|
||||
insert_ref.append((hop, new_insert_ref))
|
||||
insert_ref = new_insert_ref
|
||||
insert_ref.append(layout_get(copy_ref, last))
|
||||
return r
|
||||
|
||||
|
||||
class Record:
|
||||
def __init__(self, layout, name=None, **kwargs):
|
||||
try:
|
||||
self.name = get_var_name()
|
||||
except NameNotFound:
|
||||
self.name = ""
|
||||
self.layout = layout
|
||||
|
||||
if self.name:
|
||||
prefix = self.name + "_"
|
||||
else:
|
||||
prefix = ""
|
||||
for f in self.layout:
|
||||
if isinstance(f[1], (int, tuple)): # cases 1/2
|
||||
if len(f) == 3:
|
||||
fname, fsize, fdirection = f
|
||||
else:
|
||||
fname, fsize = f
|
||||
finst = Signal(fsize, name=prefix + fname, **kwargs)
|
||||
elif isinstance(f[1], list): # case 3
|
||||
fname, fsublayout = f
|
||||
finst = Record(fsublayout, prefix + fname, **kwargs)
|
||||
else:
|
||||
raise TypeError
|
||||
setattr(self, fname, finst)
|
||||
|
||||
def eq(self, other):
|
||||
return [getattr(self, f[0]).eq(getattr(other, f[0]))
|
||||
for f in self.layout if hasattr(other, f[0])]
|
||||
|
||||
def iter_flat(self):
|
||||
for f in self.layout:
|
||||
e = getattr(self, f[0])
|
||||
if isinstance(e, Signal):
|
||||
if len(f) == 3:
|
||||
yield e, f[2]
|
||||
else:
|
||||
yield e, DIR_NONE
|
||||
elif isinstance(e, Record):
|
||||
yield from e.iter_flat()
|
||||
else:
|
||||
raise TypeError
|
||||
|
||||
def flatten(self):
|
||||
return [signal for signal, direction in self.iter_flat()]
|
||||
|
||||
def raw_bits(self):
|
||||
return Cat(*self.flatten())
|
||||
|
||||
def connect(self, *slaves, keep=None, omit=None):
|
||||
if keep is None:
|
||||
_keep = set([f[0] for f in self.layout])
|
||||
elif isinstance(keep, list):
|
||||
_keep = set(keep)
|
||||
else:
|
||||
_keep = keep
|
||||
if omit is None:
|
||||
_omit = set()
|
||||
elif isinstance(omit, list):
|
||||
_omit = set(omit)
|
||||
else:
|
||||
_omit = omit
|
||||
|
||||
_keep = _keep - _omit
|
||||
|
||||
r = []
|
||||
for f in self.layout:
|
||||
field = f[0]
|
||||
self_e = getattr(self, field)
|
||||
if isinstance(self_e, Signal):
|
||||
if field in _keep:
|
||||
direction = f[2]
|
||||
if direction == DIR_M_TO_S:
|
||||
r += [getattr(slave, field).eq(self_e) for slave in slaves]
|
||||
elif direction == DIR_S_TO_M:
|
||||
r.append(self_e.eq(reduce(or_, [getattr(slave, field) for slave in slaves])))
|
||||
else:
|
||||
raise TypeError
|
||||
else:
|
||||
for slave in slaves:
|
||||
r += self_e.connect(getattr(slave, field), keep=keep, omit=omit)
|
||||
return r
|
||||
|
||||
def connect_flat(self, *slaves):
|
||||
r = []
|
||||
iter_slaves = [slave.iter_flat() for slave in slaves]
|
||||
for m_signal, m_direction in self.iter_flat():
|
||||
if m_direction == DIR_M_TO_S:
|
||||
for iter_slave in iter_slaves:
|
||||
s_signal, s_direction = next(iter_slave)
|
||||
assert(s_direction == DIR_M_TO_S)
|
||||
r.append(s_signal.eq(m_signal))
|
||||
elif m_direction == DIR_S_TO_M:
|
||||
s_signals = []
|
||||
for iter_slave in iter_slaves:
|
||||
s_signal, s_direction = next(iter_slave)
|
||||
assert(s_direction == DIR_S_TO_M)
|
||||
s_signals.append(s_signal)
|
||||
r.append(m_signal.eq(reduce(or_, s_signals)))
|
||||
else:
|
||||
raise TypeError
|
||||
return r
|
||||
|
||||
def __len__(self):
|
||||
return layout_len(self.layout)
|
||||
|
||||
def __repr__(self):
|
||||
return "<Record " + ":".join(f[0] for f in self.layout) + " at " + hex(id(self)) + ">"
|
|
@@ -0,0 +1,16 @@
|
|||
from ..._utils import deprecated
|
||||
from ...lib.cdc import ResetSynchronizer as NativeResetSynchronizer
|
||||
|
||||
|
||||
__all__ = ["AsyncResetSynchronizer"]
|
||||
|
||||
|
||||
@deprecated("instead of `migen.genlib.resetsync.AsyncResetSynchronizer`, "
|
||||
"use `nmigen.lib.cdc.ResetSynchronizer`; note that ResetSynchronizer accepts "
|
||||
"a clock domain name as an argument, not a clock domain object")
|
||||
class CompatResetSynchronizer(NativeResetSynchronizer):
|
||||
def __init__(self, cd, async_reset):
|
||||
super().__init__(async_reset, domain=cd.name)
|
||||
|
||||
|
||||
AsyncResetSynchronizer = CompatResetSynchronizer
|
|
@@ -0,0 +1,50 @@
|
|||
import functools
|
||||
import inspect
|
||||
from collections.abc import Iterable
|
||||
from ...hdl.cd import ClockDomain
|
||||
from ...back.pysim import *
|
||||
|
||||
|
||||
__all__ = ["run_simulation", "passive"]
|
||||
|
||||
|
||||
def run_simulation(fragment_or_module, generators, clocks={"sync": 10}, vcd_name=None,
|
||||
special_overrides={}):
|
||||
assert not special_overrides
|
||||
|
||||
if hasattr(fragment_or_module, "get_fragment"):
|
||||
fragment = fragment_or_module.get_fragment()
|
||||
else:
|
||||
fragment = fragment_or_module
|
||||
|
||||
if not isinstance(generators, dict):
|
||||
generators = {"sync": generators}
|
||||
fragment.domains += ClockDomain("sync")
|
||||
|
||||
sim = Simulator(fragment)
|
||||
for domain, period in clocks.items():
|
||||
sim.add_clock(period / 1e9, domain=domain)
|
||||
for domain, processes in generators.items():
|
||||
def wrap(process):
|
||||
def wrapper():
|
||||
yield from process
|
||||
return wrapper
|
||||
if isinstance(processes, Iterable) and not inspect.isgenerator(processes):
|
||||
for process in processes:
|
||||
sim.add_sync_process(wrap(process), domain=domain)
|
||||
else:
|
||||
sim.add_sync_process(wrap(processes), domain=domain)
|
||||
|
||||
if vcd_name is not None:
|
||||
with sim.write_vcd(vcd_name):
|
||||
sim.run()
|
||||
else:
|
||||
sim.run()
|
||||
|
||||
|
||||
def passive(generator):
|
||||
@functools.wraps(generator)
|
||||
def wrapper(*args, **kwargs):
|
||||
yield Passive()
|
||||
yield from generator(*args, **kwargs)
|
||||
return wrapper
|
|
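A small, self-contained sketch of `run_simulation` driving a compat-style design (everything here is illustrative):

```python
from nmigen.compat import Module as CompatModule, Signal, run_simulation

class Counter(CompatModule):
    def __init__(self):
        self.count = Signal(8)
        self.sync += self.count.eq(self.count + 1)

def testbench(dut):
    for _ in range(5):
        value = yield dut.count
        print("count =", value)
        yield                           # advance one 'sync' clock

dut = Counter()
run_simulation(dut, testbench(dut), vcd_name="counter.vcd")
```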
@@ -0,0 +1,20 @@
|
|||
from .ast import Shape, unsigned, signed
|
||||
from .ast import Value, Const, C, Mux, Cat, Repl, Array, Signal, ClockSignal, ResetSignal
|
||||
from .dsl import Module
|
||||
from .cd import ClockDomain
|
||||
from .ir import Elaboratable, Fragment, Instance
|
||||
from .mem import Memory
|
||||
from .rec import Record
|
||||
from .xfrm import DomainRenamer, ResetInserter, EnableInserter
|
||||
|
||||
|
||||
__all__ = [
|
||||
"Shape", "unsigned", "signed",
|
||||
"Value", "Const", "C", "Mux", "Cat", "Repl", "Array", "Signal", "ClockSignal", "ResetSignal",
|
||||
"Module",
|
||||
"ClockDomain",
|
||||
"Elaboratable", "Fragment", "Instance",
|
||||
"Memory",
|
||||
"Record",
|
||||
"DomainRenamer", "ResetInserter", "EnableInserter",
|
||||
]
|
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
File diff suppressed because it is too large
|
@@ -0,0 +1,84 @@
|
|||
from .. import tracer
|
||||
from .ast import Signal
|
||||
|
||||
|
||||
__all__ = ["ClockDomain", "DomainError"]
|
||||
|
||||
|
||||
class DomainError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ClockDomain:
|
||||
"""Synchronous domain.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
name : str or None
|
||||
Domain name. If ``None`` (the default) the name is inferred from the variable name this
|
||||
``ClockDomain`` is assigned to (stripping any `"cd_"` prefix).
|
||||
reset_less : bool
|
||||
If ``True``, the domain does not use a reset signal. Registers within this domain are
|
||||
still all initialized to their reset state once, e.g. through Verilog `"initial"`
|
||||
statements.
|
||||
clock_edge : str
|
||||
The edge of the clock signal on which signals are sampled. Must be one of "pos" or "neg".
|
||||
async_reset : bool
|
||||
If ``True``, the domain uses an asynchronous reset, and registers within this domain
|
||||
are initialized to their reset state when reset level changes. Otherwise, registers
|
||||
are initialized to reset state at the next clock cycle when reset is asserted.
|
||||
local : bool
|
||||
If ``True``, the domain will propagate only downwards in the design hierarchy. Otherwise,
|
||||
the domain will propagate everywhere.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
clk : Signal, inout
|
||||
The clock for this domain. Can be driven or used to drive other signals (preferably
|
||||
in combinatorial context).
|
||||
rst : Signal or None, inout
|
||||
Reset signal for this domain. Can be driven or used to drive.
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
def _name_for(domain_name, signal_name):
|
||||
if domain_name == "sync":
|
||||
return signal_name
|
||||
else:
|
||||
return "{}_{}".format(domain_name, signal_name)
|
||||
|
||||
def __init__(self, name=None, *, clk_edge="pos", reset_less=False, async_reset=False,
|
||||
local=False):
|
||||
if name is None:
|
||||
try:
|
||||
name = tracer.get_var_name()
|
||||
except tracer.NameNotFound:
|
||||
raise ValueError("Clock domain name must be specified explicitly")
|
||||
if name.startswith("cd_"):
|
||||
name = name[3:]
|
||||
if name == "comb":
|
||||
raise ValueError("Domain '{}' may not be clocked".format(name))
|
||||
|
||||
if clk_edge not in ("pos", "neg"):
|
||||
raise ValueError("Domain clock edge must be one of 'pos' or 'neg', not {!r}"
|
||||
.format(clk_edge))
|
||||
|
||||
self.name = name
|
||||
|
||||
self.clk = Signal(name=self._name_for(name, "clk"), src_loc_at=1)
|
||||
self.clk_edge = clk_edge
|
||||
|
||||
if reset_less:
|
||||
self.rst = None
|
||||
else:
|
||||
self.rst = Signal(name=self._name_for(name, "rst"), src_loc_at=1)
|
||||
|
||||
self.async_reset = async_reset
|
||||
|
||||
self.local = local
|
||||
|
||||
def rename(self, new_name):
|
||||
self.name = new_name
|
||||
self.clk.name = self._name_for(new_name, "clk")
|
||||
if self.rst is not None:
|
||||
self.rst.name = self._name_for(new_name, "rst")
|
|
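A short sketch of creating and driving a user-defined domain with the `ClockDomain` class documented above (the external clock and reset signals are invented):

```python
from nmigen import Module, Signal, ClockDomain

m = Module()
ext_clk = Signal()
ext_rst = Signal()

cd_fast = ClockDomain("fast", async_reset=True)   # name given explicitly, not inferred
m.domains += cd_fast
m.d.comb += [
    cd_fast.clk.eq(ext_clk),   # drive the domain clock
    cd_fast.rst.eq(ext_rst),   # and its reset
]

counter = Signal(8)
m.d.fast += counter.eq(counter + 1)               # logic clocked by the new domain
```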
@@ -0,0 +1,546 @@
|
|||
from collections import OrderedDict, namedtuple
|
||||
from collections.abc import Iterable
|
||||
from contextlib import contextmanager, _GeneratorContextManager
|
||||
from functools import wraps
|
||||
from enum import Enum
|
||||
import warnings
|
||||
|
||||
from .._utils import flatten, bits_for, deprecated
|
||||
from .. import tracer
|
||||
from .ast import *
|
||||
from .ir import *
|
||||
from .cd import *
|
||||
from .xfrm import *
|
||||
|
||||
|
||||
__all__ = ["SyntaxError", "SyntaxWarning", "Module"]
|
||||
|
||||
|
||||
class SyntaxError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class SyntaxWarning(Warning):
|
||||
pass
|
||||
|
||||
|
||||
class _ModuleBuilderProxy:
|
||||
def __init__(self, builder, depth):
|
||||
object.__setattr__(self, "_builder", builder)
|
||||
object.__setattr__(self, "_depth", depth)
|
||||
|
||||
|
||||
class _ModuleBuilderDomain(_ModuleBuilderProxy):
|
||||
def __init__(self, builder, depth, domain):
|
||||
super().__init__(builder, depth)
|
||||
self._domain = domain
|
||||
|
||||
def __iadd__(self, assigns):
|
||||
self._builder._add_statement(assigns, domain=self._domain, depth=self._depth)
|
||||
return self
|
||||
|
||||
|
||||
class _ModuleBuilderDomains(_ModuleBuilderProxy):
|
||||
def __getattr__(self, name):
|
||||
if name == "submodules":
|
||||
warnings.warn("Using '<module>.d.{}' would add statements to clock domain {!r}; "
|
||||
"did you mean <module>.{} instead?"
|
||||
.format(name, name, name),
|
||||
SyntaxWarning, stacklevel=2)
|
||||
if name == "comb":
|
||||
domain = None
|
||||
else:
|
||||
domain = name
|
||||
return _ModuleBuilderDomain(self._builder, self._depth, domain)
|
||||
|
||||
def __getitem__(self, name):
|
||||
return self.__getattr__(name)
|
||||
|
||||
def __setattr__(self, name, value):
|
||||
if name == "_depth":
|
||||
object.__setattr__(self, name, value)
|
||||
elif not isinstance(value, _ModuleBuilderDomain):
|
||||
raise AttributeError("Cannot assign 'd.{}' attribute; did you mean 'd.{} +='?"
|
||||
.format(name, name))
|
||||
|
||||
def __setitem__(self, name, value):
|
||||
return self.__setattr__(name, value)
|
||||
|
||||
|
||||
class _ModuleBuilderRoot:
|
||||
def __init__(self, builder, depth):
|
||||
self._builder = builder
|
||||
self.domain = self.d = _ModuleBuilderDomains(builder, depth)
|
||||
|
||||
def __getattr__(self, name):
|
||||
if name in ("comb", "sync"):
|
||||
raise AttributeError("'{}' object has no attribute '{}'; did you mean 'd.{}'?"
|
||||
.format(type(self).__name__, name, name))
|
||||
raise AttributeError("'{}' object has no attribute '{}'"
|
||||
.format(type(self).__name__, name))
|
||||
|
||||
|
||||
class _ModuleBuilderSubmodules:
|
||||
def __init__(self, builder):
|
||||
object.__setattr__(self, "_builder", builder)
|
||||
|
||||
def __iadd__(self, modules):
|
||||
for module in flatten([modules]):
|
||||
self._builder._add_submodule(module)
|
||||
return self
|
||||
|
||||
def __setattr__(self, name, submodule):
|
||||
self._builder._add_submodule(submodule, name)
|
||||
|
||||
def __setitem__(self, name, value):
|
||||
return self.__setattr__(name, value)
|
||||
|
||||
def __getattr__(self, name):
|
||||
return self._builder._get_submodule(name)
|
||||
|
||||
def __getitem__(self, name):
|
||||
return self.__getattr__(name)
|
||||
|
||||
|
||||
class _ModuleBuilderDomainSet:
|
||||
def __init__(self, builder):
|
||||
object.__setattr__(self, "_builder", builder)
|
||||
|
||||
def __iadd__(self, domains):
|
||||
for domain in flatten([domains]):
|
||||
if not isinstance(domain, ClockDomain):
|
||||
raise TypeError("Only clock domains may be added to `m.domains`, not {!r}"
|
||||
.format(domain))
|
||||
self._builder._add_domain(domain)
|
||||
return self
|
||||
|
||||
def __setattr__(self, name, domain):
|
||||
if not isinstance(domain, ClockDomain):
|
||||
raise TypeError("Only clock domains may be added to `m.domains`, not {!r}"
|
||||
.format(domain))
|
||||
if domain.name != name:
|
||||
raise NameError("Clock domain name {!r} must match name in `m.domains.{} += ...` "
|
||||
"syntax"
|
||||
.format(domain.name, name))
|
||||
self._builder._add_domain(domain)
|
||||
|
||||
|
||||
# It's not particularly clean to depend on an internal interface, but, unfortunately, __bool__
|
||||
# must be defined on a class to be called during implicit conversion.
|
||||
class _GuardedContextManager(_GeneratorContextManager):
|
||||
def __init__(self, keyword, func, args, kwds):
|
||||
self.keyword = keyword
|
||||
return super().__init__(func, args, kwds)
|
||||
|
||||
def __bool__(self):
|
||||
raise SyntaxError("`if m.{kw}(...):` does not work; use `with m.{kw}(...)`"
|
||||
.format(kw=self.keyword))
|
||||
|
||||
|
||||
def _guardedcontextmanager(keyword):
|
||||
def decorator(func):
|
||||
@wraps(func)
|
||||
def helper(*args, **kwds):
|
||||
return _GuardedContextManager(keyword, func, args, kwds)
|
||||
return helper
|
||||
return decorator
|
||||
|
||||
|
||||
class FSM:
|
||||
def __init__(self, state, encoding, decoding):
|
||||
self.state = state
|
||||
self.encoding = encoding
|
||||
self.decoding = decoding
|
||||
|
||||
def ongoing(self, name):
|
||||
if name not in self.encoding:
|
||||
self.encoding[name] = len(self.encoding)
|
||||
return Operator("==", [self.state, self.encoding[name]], src_loc_at=0)
|
||||
|
||||
|
||||
class Module(_ModuleBuilderRoot, Elaboratable):
|
||||
@classmethod
|
||||
def __init_subclass__(cls):
|
||||
raise SyntaxError("Instead of inheriting from `Module`, inherit from `Elaboratable` "
|
||||
"and return a `Module` from the `elaborate(self, platform)` method")
|
||||
|
||||
def __init__(self):
|
||||
_ModuleBuilderRoot.__init__(self, self, depth=0)
|
||||
self.submodules = _ModuleBuilderSubmodules(self)
|
||||
self.domains = _ModuleBuilderDomainSet(self)
|
||||
|
||||
self._statements = Statement.cast([])
|
||||
self._ctrl_context = None
|
||||
self._ctrl_stack = []
|
||||
|
||||
self._driving = SignalDict()
|
||||
self._named_submodules = {}
|
||||
self._anon_submodules = []
|
||||
self._domains = []
|
||||
self._generated = {}
|
||||
|
||||
def _check_context(self, construct, context):
|
||||
if self._ctrl_context != context:
|
||||
if self._ctrl_context is None:
|
||||
raise SyntaxError("{} is not permitted outside of {}"
|
||||
.format(construct, context))
|
||||
else:
|
||||
if self._ctrl_context == "Switch":
|
||||
secondary_context = "Case"
|
||||
if self._ctrl_context == "FSM":
|
||||
secondary_context = "State"
|
||||
raise SyntaxError("{} is not permitted directly inside of {}; it is permitted "
|
||||
"inside of {} {}"
|
||||
.format(construct, self._ctrl_context,
|
||||
self._ctrl_context, secondary_context))
|
||||
|
||||
def _get_ctrl(self, name):
|
||||
if self._ctrl_stack:
|
||||
top_name, top_data = self._ctrl_stack[-1]
|
||||
if top_name == name:
|
||||
return top_data
|
||||
|
||||
def _flush_ctrl(self):
|
||||
while len(self._ctrl_stack) > self.domain._depth:
|
||||
self._pop_ctrl()
|
||||
|
||||
def _set_ctrl(self, name, data):
|
||||
self._flush_ctrl()
|
||||
self._ctrl_stack.append((name, data))
|
||||
return data
|
||||
|
||||
def _check_signed_cond(self, cond):
|
||||
cond = Value.cast(cond)
|
||||
width, signed = cond.shape()
|
||||
if signed:
|
||||
warnings.warn("Signed values in If/Elif conditions usually result from inverting "
|
||||
"Python booleans with ~, which leads to unexpected results: ~True is "
|
||||
"-2, which is truthful. Replace `~flag` with `not flag`. (If this is "
|
||||
"a false positive, silence this warning with "
|
||||
"`m.If(x)` → `m.If(x.bool())`.)",
|
||||
SyntaxWarning, stacklevel=4)
|
||||
return cond
|
||||
|
||||
@_guardedcontextmanager("If")
|
||||
def If(self, cond):
|
||||
self._check_context("If", context=None)
|
||||
cond = self._check_signed_cond(cond)
|
||||
src_loc = tracer.get_src_loc(src_loc_at=1)
|
||||
if_data = self._set_ctrl("If", {
|
||||
"tests": [],
|
||||
"bodies": [],
|
||||
"src_loc": src_loc,
|
||||
"src_locs": [],
|
||||
})
|
||||
try:
|
||||
_outer_case, self._statements = self._statements, []
|
||||
self.domain._depth += 1
|
||||
yield
|
||||
self._flush_ctrl()
|
||||
if_data["tests"].append(cond)
|
||||
if_data["bodies"].append(self._statements)
|
||||
if_data["src_locs"].append(src_loc)
|
||||
finally:
|
||||
self.domain._depth -= 1
|
||||
self._statements = _outer_case
|
||||
|
||||
@_guardedcontextmanager("Elif")
|
||||
def Elif(self, cond):
|
||||
self._check_context("Elif", context=None)
|
||||
cond = self._check_signed_cond(cond)
|
||||
src_loc = tracer.get_src_loc(src_loc_at=1)
|
||||
if_data = self._get_ctrl("If")
|
||||
if if_data is None:
|
||||
raise SyntaxError("Elif without preceding If")
|
||||
try:
|
||||
_outer_case, self._statements = self._statements, []
|
||||
self.domain._depth += 1
|
||||
yield
|
||||
self._flush_ctrl()
|
||||
if_data["tests"].append(cond)
|
||||
if_data["bodies"].append(self._statements)
|
||||
if_data["src_locs"].append(src_loc)
|
||||
finally:
|
||||
self.domain._depth -= 1
|
||||
self._statements = _outer_case
|
||||
|
||||
@_guardedcontextmanager("Else")
|
||||
def Else(self):
|
||||
self._check_context("Else", context=None)
|
||||
src_loc = tracer.get_src_loc(src_loc_at=1)
|
||||
if_data = self._get_ctrl("If")
|
||||
if if_data is None:
|
||||
raise SyntaxError("Else without preceding If/Elif")
|
||||
try:
|
||||
_outer_case, self._statements = self._statements, []
|
||||
self.domain._depth += 1
|
||||
yield
|
||||
self._flush_ctrl()
|
||||
if_data["bodies"].append(self._statements)
|
||||
if_data["src_locs"].append(src_loc)
|
||||
finally:
|
||||
self.domain._depth -= 1
|
||||
self._statements = _outer_case
|
||||
self._pop_ctrl()
|
||||
|
||||
@contextmanager
|
||||
def Switch(self, test):
|
||||
self._check_context("Switch", context=None)
|
||||
switch_data = self._set_ctrl("Switch", {
|
||||
"test": Value.cast(test),
|
||||
"cases": OrderedDict(),
|
||||
"src_loc": tracer.get_src_loc(src_loc_at=1),
|
||||
"case_src_locs": {},
|
||||
})
|
||||
try:
|
||||
self._ctrl_context = "Switch"
|
||||
self.domain._depth += 1
|
||||
yield
|
||||
finally:
|
||||
self.domain._depth -= 1
|
||||
self._ctrl_context = None
|
||||
self._pop_ctrl()
|
||||
|
||||
@contextmanager
|
||||
def Case(self, *patterns):
|
||||
self._check_context("Case", context="Switch")
|
||||
src_loc = tracer.get_src_loc(src_loc_at=1)
|
||||
switch_data = self._get_ctrl("Switch")
|
||||
new_patterns = ()
|
||||
for pattern in patterns:
|
||||
if not isinstance(pattern, (int, str, Enum)):
|
||||
raise SyntaxError("Case pattern must be an integer, a string, or an enumeration, "
|
||||
"not {!r}"
|
||||
.format(pattern))
|
||||
if isinstance(pattern, str) and any(bit not in "01- \t" for bit in pattern):
|
||||
raise SyntaxError("Case pattern '{}' must consist of 0, 1, and - (don't care) "
|
||||
"bits, and may include whitespace"
|
||||
.format(pattern))
|
||||
if (isinstance(pattern, str) and
|
||||
len("".join(pattern.split())) != len(switch_data["test"])):
|
||||
raise SyntaxError("Case pattern '{}' must have the same width as switch value "
|
||||
"(which is {})"
|
||||
.format(pattern, len(switch_data["test"])))
|
||||
if isinstance(pattern, int) and bits_for(pattern) > len(switch_data["test"]):
|
||||
warnings.warn("Case pattern '{:b}' is wider than switch value "
|
||||
"(which has width {}); comparison will never be true"
|
||||
.format(pattern, len(switch_data["test"])),
|
||||
SyntaxWarning, stacklevel=3)
|
||||
continue
|
||||
if isinstance(pattern, Enum) and bits_for(pattern.value) > len(switch_data["test"]):
|
||||
warnings.warn("Case pattern '{:b}' ({}.{}) is wider than switch value "
|
||||
"(which has width {}); comparison will never be true"
|
||||
.format(pattern.value, pattern.__class__.__name__, pattern.name,
|
||||
len(switch_data["test"])),
|
||||
SyntaxWarning, stacklevel=3)
|
||||
continue
|
||||
new_patterns = (*new_patterns, pattern)
|
||||
try:
|
||||
_outer_case, self._statements = self._statements, []
|
||||
self._ctrl_context = None
|
||||
yield
|
||||
self._flush_ctrl()
|
||||
# If none of the provided cases can possibly be true, omit this branch completely.
|
||||
# This needs to be differentiated from no cases being provided in the first place,
|
||||
# which means the branch will always match.
|
||||
if not (patterns and not new_patterns):
|
||||
switch_data["cases"][new_patterns] = self._statements
|
||||
switch_data["case_src_locs"][new_patterns] = src_loc
|
||||
finally:
|
||||
self._ctrl_context = "Switch"
|
||||
self._statements = _outer_case
|
||||
|
||||
def Default(self):
|
||||
return self.Case()
|
||||
|
||||
@contextmanager
|
||||
def FSM(self, reset=None, domain="sync", name="fsm"):
|
||||
self._check_context("FSM", context=None)
|
||||
if domain == "comb":
|
||||
raise ValueError("FSM may not be driven by the '{}' domain".format(domain))
|
||||
fsm_data = self._set_ctrl("FSM", {
|
||||
"name": name,
|
||||
"signal": Signal(name="{}_state".format(name), src_loc_at=2),
|
||||
"reset": reset,
|
||||
"domain": domain,
|
||||
"encoding": OrderedDict(),
|
||||
"decoding": OrderedDict(),
|
||||
"states": OrderedDict(),
|
||||
"src_loc": tracer.get_src_loc(src_loc_at=1),
|
||||
"state_src_locs": {},
|
||||
})
|
||||
self._generated[name] = fsm = \
|
||||
FSM(fsm_data["signal"], fsm_data["encoding"], fsm_data["decoding"])
|
||||
try:
|
||||
self._ctrl_context = "FSM"
|
||||
self.domain._depth += 1
|
||||
yield fsm
|
||||
for state_name in fsm_data["encoding"]:
|
||||
if state_name not in fsm_data["states"]:
|
||||
raise NameError("FSM state '{}' is referenced but not defined"
|
||||
.format(state_name))
|
||||
finally:
|
||||
self.domain._depth -= 1
|
||||
self._ctrl_context = None
|
||||
self._pop_ctrl()
|
||||
|
||||
@contextmanager
|
||||
def State(self, name):
|
||||
self._check_context("FSM State", context="FSM")
|
||||
src_loc = tracer.get_src_loc(src_loc_at=1)
|
||||
fsm_data = self._get_ctrl("FSM")
|
||||
if name in fsm_data["states"]:
|
||||
raise NameError("FSM state '{}' is already defined".format(name))
|
||||
if name not in fsm_data["encoding"]:
|
||||
fsm_data["encoding"][name] = len(fsm_data["encoding"])
|
||||
try:
|
||||
_outer_case, self._statements = self._statements, []
|
||||
self._ctrl_context = None
|
||||
yield
|
||||
self._flush_ctrl()
|
||||
fsm_data["states"][name] = self._statements
|
||||
fsm_data["state_src_locs"][name] = src_loc
|
||||
finally:
|
||||
self._ctrl_context = "FSM"
|
||||
self._statements = _outer_case
|
||||
|
||||
@property
|
||||
def next(self):
|
||||
raise SyntaxError("Only assignment to `m.next` is permitted")
|
||||
|
||||
@next.setter
|
||||
def next(self, name):
|
||||
if self._ctrl_context != "FSM":
|
||||
for level, (ctrl_name, ctrl_data) in enumerate(reversed(self._ctrl_stack)):
|
||||
if ctrl_name == "FSM":
|
||||
if name not in ctrl_data["encoding"]:
|
||||
ctrl_data["encoding"][name] = len(ctrl_data["encoding"])
|
||||
self._add_statement(
|
||||
assigns=[ctrl_data["signal"].eq(ctrl_data["encoding"][name])],
|
||||
domain=ctrl_data["domain"],
|
||||
depth=len(self._ctrl_stack))
|
||||
return
|
||||
|
||||
raise SyntaxError("`m.next = <...>` is only permitted inside an FSM state")
|
||||
|
||||
def _pop_ctrl(self):
|
||||
name, data = self._ctrl_stack.pop()
|
||||
src_loc = data["src_loc"]
|
||||
|
||||
if name == "If":
|
||||
if_tests, if_bodies = data["tests"], data["bodies"]
|
||||
if_src_locs = data["src_locs"]
|
||||
|
||||
tests, cases = [], OrderedDict()
|
||||
for if_test, if_case in zip(if_tests + [None], if_bodies):
|
||||
if if_test is not None:
|
||||
if_test = Value.cast(if_test)
|
||||
if len(if_test) != 1:
|
||||
if_test = if_test.bool()
|
||||
tests.append(if_test)
|
||||
|
||||
if if_test is not None:
|
||||
match = ("1" + "-" * (len(tests) - 1)).rjust(len(if_tests), "-")
|
||||
else:
|
||||
match = None
|
||||
cases[match] = if_case
|
||||
|
||||
self._statements.append(Switch(Cat(tests), cases,
|
||||
src_loc=src_loc, case_src_locs=dict(zip(cases, if_src_locs))))
|
||||
|
||||
if name == "Switch":
|
||||
switch_test, switch_cases = data["test"], data["cases"]
|
||||
switch_case_src_locs = data["case_src_locs"]
|
||||
|
||||
self._statements.append(Switch(switch_test, switch_cases,
|
||||
src_loc=src_loc, case_src_locs=switch_case_src_locs))
|
||||
|
||||
if name == "FSM":
|
||||
fsm_signal, fsm_reset, fsm_encoding, fsm_decoding, fsm_states = \
|
||||
data["signal"], data["reset"], data["encoding"], data["decoding"], data["states"]
|
||||
fsm_state_src_locs = data["state_src_locs"]
|
||||
if not fsm_states:
|
||||
return
|
||||
fsm_signal.width = bits_for(len(fsm_encoding) - 1)
|
||||
if fsm_reset is None:
|
||||
fsm_signal.reset = fsm_encoding[next(iter(fsm_states))]
|
||||
else:
|
||||
fsm_signal.reset = fsm_encoding[fsm_reset]
|
||||
# The FSM is encoded such that the state with encoding 0 is always the reset state.
|
||||
fsm_decoding.update((n, s) for s, n in fsm_encoding.items())
|
||||
fsm_signal.decoder = lambda n: "{}/{}".format(fsm_decoding[n], n)
|
||||
self._statements.append(Switch(fsm_signal,
|
||||
OrderedDict((fsm_encoding[name], stmts) for name, stmts in fsm_states.items()),
|
||||
src_loc=src_loc, case_src_locs={fsm_encoding[name]: fsm_state_src_locs[name]
|
||||
for name in fsm_states}))
|
||||
|
||||
def _add_statement(self, assigns, domain, depth, compat_mode=False):
|
||||
def domain_name(domain):
|
||||
if domain is None:
|
||||
return "comb"
|
||||
else:
|
||||
return domain
|
||||
|
||||
while len(self._ctrl_stack) > self.domain._depth:
|
||||
self._pop_ctrl()
|
||||
|
||||
for stmt in Statement.cast(assigns):
|
||||
if not compat_mode and not isinstance(stmt, (Assign, Assert, Assume, Cover)):
|
||||
raise SyntaxError(
|
||||
"Only assignments and property checks may be appended to d.{}"
|
||||
.format(domain_name(domain)))
|
||||
|
||||
stmt._MustUse__used = True
|
||||
stmt = SampleDomainInjector(domain)(stmt)
|
||||
|
||||
for signal in stmt._lhs_signals():
|
||||
if signal not in self._driving:
|
||||
self._driving[signal] = domain
|
||||
elif self._driving[signal] != domain:
|
||||
cd_curr = self._driving[signal]
|
||||
raise SyntaxError(
|
||||
"Driver-driver conflict: trying to drive {!r} from d.{}, but it is "
|
||||
"already driven from d.{}"
|
||||
.format(signal, domain_name(domain), domain_name(cd_curr)))
|
||||
|
||||
self._statements.append(stmt)
|
||||
|
||||
def _add_submodule(self, submodule, name=None):
|
||||
if not hasattr(submodule, "elaborate"):
|
||||
raise TypeError("Trying to add {!r}, which does not implement .elaborate(), as "
|
||||
"a submodule".format(submodule))
|
||||
if name is None:
|
||||
self._anon_submodules.append(submodule)
|
||||
else:
|
||||
if name in self._named_submodules:
|
||||
raise NameError("Submodule named '{}' already exists".format(name))
|
||||
self._named_submodules[name] = submodule
|
||||
|
||||
def _get_submodule(self, name):
|
||||
if name in self._named_submodules:
|
||||
return self._named_submodules[name]
|
||||
else:
|
||||
raise AttributeError("No submodule named '{}' exists".format(name))
|
||||
|
||||
def _add_domain(self, cd):
|
||||
self._domains.append(cd)
|
||||
|
||||
def _flush(self):
|
||||
while self._ctrl_stack:
|
||||
self._pop_ctrl()
|
||||
|
||||
def elaborate(self, platform):
|
||||
self._flush()
|
||||
|
||||
fragment = Fragment()
|
||||
for name in self._named_submodules:
|
||||
fragment.add_subfragment(Fragment.get(self._named_submodules[name], platform), name)
|
||||
for submodule in self._anon_submodules:
|
||||
fragment.add_subfragment(Fragment.get(submodule, platform), None)
|
||||
statements = SampleDomainInjector("sync")(self._statements)
|
||||
fragment.add_statements(statements)
|
||||
for signal, domain in self._driving.items():
|
||||
fragment.add_driver(signal, domain)
|
||||
fragment.add_domains(self._domains)
|
||||
fragment.generated.update(self._generated)
|
||||
return fragment
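# Illustrative sketch (not part of the vendored nMigen sources): a small Elaboratable using
# the control-flow constructs defined above (m.If/m.Else and m.FSM/m.State). All names here
# are hypothetical.
from nmigen.hdl.ast import Signal
from nmigen.hdl.dsl import Module
from nmigen.hdl.ir import Elaboratable

class PulseStretcher(Elaboratable):
    def __init__(self, length=16):
        self.length = length
        self.trigger = Signal()
        self.out = Signal()

    def elaborate(self, platform):
        m = Module()
        timer = Signal(range(self.length))
        with m.FSM():
            with m.State("IDLE"):          # the first state defined is the reset state
                with m.If(self.trigger):
                    m.d.sync += timer.eq(self.length - 1)
                    m.next = "ACTIVE"
            with m.State("ACTIVE"):
                m.d.comb += self.out.eq(1)
                with m.If(timer == 0):
                    m.next = "IDLE"
                with m.Else():
                    m.d.sync += timer.eq(timer - 1)
        return m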
@@ -0,0 +1,588 @@
from abc import ABCMeta, abstractmethod
|
||||
from collections import defaultdict, OrderedDict
|
||||
from functools import reduce
|
||||
import warnings
|
||||
import traceback
|
||||
import sys
|
||||
|
||||
from .._utils import *
|
||||
from .._unused import *
|
||||
from .ast import *
|
||||
from .cd import *
|
||||
|
||||
|
||||
__all__ = ["UnusedElaboratable", "Elaboratable", "DriverConflict", "Fragment", "Instance"]
|
||||
|
||||
|
||||
class UnusedElaboratable(UnusedMustUse):
|
||||
pass
|
||||
|
||||
|
||||
class Elaboratable(MustUse, metaclass=ABCMeta):
|
||||
_MustUse__warning = UnusedElaboratable
|
||||
|
||||
|
||||
class DriverConflict(UserWarning):
|
||||
pass
|
||||
|
||||
|
||||
class Fragment:
|
||||
@staticmethod
|
||||
def get(obj, platform):
|
||||
code = None
|
||||
while True:
|
||||
if isinstance(obj, Fragment):
|
||||
return obj
|
||||
elif isinstance(obj, Elaboratable):
|
||||
code = obj.elaborate.__code__
|
||||
obj._MustUse__used = True
|
||||
obj = obj.elaborate(platform)
|
||||
elif hasattr(obj, "elaborate"):
|
||||
warnings.warn(
|
||||
message="Class {!r} is an elaboratable that does not explicitly inherit from "
|
||||
"Elaboratable; doing so would improve diagnostics"
|
||||
.format(type(obj)),
|
||||
category=RuntimeWarning,
|
||||
stacklevel=2)
|
||||
code = obj.elaborate.__code__
|
||||
obj = obj.elaborate(platform)
|
||||
else:
|
||||
raise AttributeError("Object {!r} cannot be elaborated".format(obj))
|
||||
if obj is None and code is not None:
|
||||
warnings.warn_explicit(
|
||||
message=".elaborate() returned None; missing return statement?",
|
||||
category=UserWarning,
|
||||
filename=code.co_filename,
|
||||
lineno=code.co_firstlineno)
|
||||
|
||||
def __init__(self):
|
||||
self.ports = SignalDict()
|
||||
self.drivers = OrderedDict()
|
||||
self.statements = []
|
||||
self.domains = OrderedDict()
|
||||
self.subfragments = []
|
||||
self.attrs = OrderedDict()
|
||||
self.generated = OrderedDict()
|
||||
self.flatten = False
|
||||
|
||||
def add_ports(self, *ports, dir):
|
||||
assert dir in ("i", "o", "io")
|
||||
for port in flatten(ports):
|
||||
self.ports[port] = dir
|
||||
|
||||
def iter_ports(self, dir=None):
|
||||
if dir is None:
|
||||
yield from self.ports
|
||||
else:
|
||||
for port, port_dir in self.ports.items():
|
||||
if port_dir == dir:
|
||||
yield port
|
||||
|
||||
def add_driver(self, signal, domain=None):
|
||||
if domain not in self.drivers:
|
||||
self.drivers[domain] = SignalSet()
|
||||
self.drivers[domain].add(signal)
|
||||
|
||||
def iter_drivers(self):
|
||||
for domain, signals in self.drivers.items():
|
||||
for signal in signals:
|
||||
yield domain, signal
|
||||
|
||||
def iter_comb(self):
|
||||
if None in self.drivers:
|
||||
yield from self.drivers[None]
|
||||
|
||||
def iter_sync(self):
|
||||
for domain, signals in self.drivers.items():
|
||||
if domain is None:
|
||||
continue
|
||||
for signal in signals:
|
||||
yield domain, signal
|
||||
|
||||
def iter_signals(self):
|
||||
signals = SignalSet()
|
||||
signals |= self.ports.keys()
|
||||
for domain, domain_signals in self.drivers.items():
|
||||
if domain is not None:
|
||||
cd = self.domains[domain]
|
||||
signals.add(cd.clk)
|
||||
if cd.rst is not None:
|
||||
signals.add(cd.rst)
|
||||
signals |= domain_signals
|
||||
return signals
|
||||
|
||||
def add_domains(self, *domains):
|
||||
for domain in flatten(domains):
|
||||
assert isinstance(domain, ClockDomain)
|
||||
assert domain.name not in self.domains
|
||||
self.domains[domain.name] = domain
|
||||
|
||||
def iter_domains(self):
|
||||
yield from self.domains
|
||||
|
||||
def add_statements(self, *stmts):
|
||||
for stmt in Statement.cast(stmts):
|
||||
stmt._MustUse__used = True
|
||||
self.statements.append(stmt)
|
||||
|
||||
def add_subfragment(self, subfragment, name=None):
|
||||
assert isinstance(subfragment, Fragment)
|
||||
self.subfragments.append((subfragment, name))
|
||||
|
||||
def find_subfragment(self, name_or_index):
|
||||
if isinstance(name_or_index, int):
|
||||
if name_or_index < len(self.subfragments):
|
||||
subfragment, name = self.subfragments[name_or_index]
|
||||
return subfragment
|
||||
raise NameError("No subfragment at index #{}".format(name_or_index))
|
||||
else:
|
||||
for subfragment, name in self.subfragments:
|
||||
if name == name_or_index:
|
||||
return subfragment
|
||||
raise NameError("No subfragment with name '{}'".format(name_or_index))
|
||||
|
||||
def find_generated(self, *path):
|
||||
if len(path) > 1:
|
||||
path_component, *path = path
|
||||
return self.find_subfragment(path_component).find_generated(*path)
|
||||
else:
|
||||
item, = path
|
||||
return self.generated[item]
|
||||
|
||||
def elaborate(self, platform):
|
||||
return self
|
||||
|
||||
def _merge_subfragment(self, subfragment):
|
||||
# Merge everything from the subfragment, except clock domains, into this fragment.
|
||||
# Flattening is done after clock domain propagation, so we can assume the domains
|
||||
# are already the same in every involved fragment in the first place.
|
||||
self.ports.update(subfragment.ports)
|
||||
for domain, signal in subfragment.iter_drivers():
|
||||
self.add_driver(signal, domain)
|
||||
self.statements += subfragment.statements
|
||||
self.subfragments += subfragment.subfragments
|
||||
|
||||
# Remove the merged subfragment.
|
||||
found = False
|
||||
for i, (check_subfrag, check_name) in enumerate(self.subfragments): # :nobr:
|
||||
if subfragment == check_subfrag:
|
||||
del self.subfragments[i]
|
||||
found = True
|
||||
break
|
||||
assert found
|
||||
|
||||
def _resolve_hierarchy_conflicts(self, hierarchy=("top",), mode="warn"):
|
||||
assert mode in ("silent", "warn", "error")
|
||||
|
||||
driver_subfrags = SignalDict()
|
||||
memory_subfrags = OrderedDict()
|
||||
def add_subfrag(registry, entity, entry):
|
||||
# Because of missing domain insertion, at the point when this code runs, we have
|
||||
# a mixture of bound and unbound {Clock,Reset}Signals. Map the bound ones to
|
||||
# the actual signals (because the signal itself can be driven as well); but leave
|
||||
# the unbound ones as they are, because there's no concrete signal for them yet anyway.
|
||||
if isinstance(entity, ClockSignal) and entity.domain in self.domains:
|
||||
entity = self.domains[entity.domain].clk
|
||||
elif isinstance(entity, ResetSignal) and entity.domain in self.domains:
|
||||
entity = self.domains[entity.domain].rst
|
||||
|
||||
if entity not in registry:
|
||||
registry[entity] = set()
|
||||
registry[entity].add(entry)
|
||||
|
||||
# For each signal driven by this fragment and/or its subfragments, determine which
|
||||
# subfragments also drive it.
|
||||
for domain, signal in self.iter_drivers():
|
||||
add_subfrag(driver_subfrags, signal, (None, hierarchy))
|
||||
|
||||
flatten_subfrags = set()
|
||||
for i, (subfrag, name) in enumerate(self.subfragments):
|
||||
if name is None:
|
||||
name = "<unnamed #{}>".format(i)
|
||||
subfrag_hierarchy = hierarchy + (name,)
|
||||
|
||||
if subfrag.flatten:
|
||||
# Always flatten subfragments that explicitly request it.
|
||||
flatten_subfrags.add((subfrag, subfrag_hierarchy))
|
||||
|
||||
if isinstance(subfrag, Instance):
|
||||
# For memories (which are subfragments, but semantically a part of the superfragment),
|
||||
# record that this fragment is driving it.
|
||||
if subfrag.type in ("$memrd", "$memwr"):
|
||||
memory = subfrag.parameters["MEMID"]
|
||||
add_subfrag(memory_subfrags, memory, (None, hierarchy))
|
||||
|
||||
# Never flatten instances.
|
||||
continue
|
||||
|
||||
# First, recurse into subfragments and let them detect driver conflicts as well.
|
||||
subfrag_drivers, subfrag_memories = \
|
||||
subfrag._resolve_hierarchy_conflicts(subfrag_hierarchy, mode)
|
||||
|
||||
# Second, classify subfragments by signals they drive and memories they use.
|
||||
for signal in subfrag_drivers:
|
||||
add_subfrag(driver_subfrags, signal, (subfrag, subfrag_hierarchy))
|
||||
for memory in subfrag_memories:
|
||||
add_subfrag(memory_subfrags, memory, (subfrag, subfrag_hierarchy))
|
||||
|
||||
# Find out the set of subfragments that needs to be flattened into this fragment
|
||||
# to resolve driver-driver conflicts.
|
||||
def flatten_subfrags_if_needed(subfrags):
|
||||
if len(subfrags) == 1:
|
||||
return []
|
||||
flatten_subfrags.update((f, h) for f, h in subfrags if f is not None)
|
||||
return list(sorted(".".join(h) for f, h in subfrags))
|
||||
|
||||
for signal, subfrags in driver_subfrags.items():
|
||||
subfrag_names = flatten_subfrags_if_needed(subfrags)
|
||||
if not subfrag_names:
|
||||
continue
|
||||
|
||||
# While we're at it, show a message.
|
||||
message = ("Signal '{}' is driven from multiple fragments: {}"
|
||||
.format(signal, ", ".join(subfrag_names)))
|
||||
if mode == "error":
|
||||
raise DriverConflict(message)
|
||||
elif mode == "warn":
|
||||
message += "; hierarchy will be flattened"
|
||||
warnings.warn_explicit(message, DriverConflict, *signal.src_loc)
|
||||
|
||||
for memory, subfrags in memory_subfrags.items():
|
||||
subfrag_names = flatten_subfrags_if_needed(subfrags)
|
||||
if not subfrag_names:
|
||||
continue
|
||||
|
||||
# While we're at it, show a message.
|
||||
message = ("Memory '{}' is accessed from multiple fragments: {}"
|
||||
.format(memory.name, ", ".join(subfrag_names)))
|
||||
if mode == "error":
|
||||
raise DriverConflict(message)
|
||||
elif mode == "warn":
|
||||
message += "; hierarchy will be flattened"
|
||||
warnings.warn_explicit(message, DriverConflict, *memory.src_loc)
|
||||
|
||||
# Flatten hierarchy.
|
||||
for subfrag, subfrag_hierarchy in sorted(flatten_subfrags, key=lambda x: x[1]):
|
||||
self._merge_subfragment(subfrag)
|
||||
|
||||
# If we flattened anything, we might be in a situation where we have a driver conflict
|
||||
# again, e.g. if we had a tree of fragments like A --- B --- C where only fragments
|
||||
# A and C were driving a signal S. In that case, since B is not driving S itself,
|
||||
# processing B will not result in any flattening, but since B is transitively driving S,
|
||||
# processing A will flatten B into it. Afterwards, we have a tree like AB --- C, which
|
||||
# has another conflict.
|
||||
if any(flatten_subfrags):
|
||||
# Try flattening again.
|
||||
return self._resolve_hierarchy_conflicts(hierarchy, mode)
|
||||
|
||||
# Nothing was flattened, we're done!
|
||||
return (SignalSet(driver_subfrags.keys()),
|
||||
set(memory_subfrags.keys()))
|
||||
|
||||
def _propagate_domains_up(self, hierarchy=("top",)):
|
||||
from .xfrm import DomainRenamer
|
||||
|
||||
domain_subfrags = defaultdict(lambda: set())
|
||||
|
||||
# For each domain defined by a subfragment, determine which subfragments define it.
|
||||
for i, (subfrag, name) in enumerate(self.subfragments):
|
||||
# First, recurse into subfragments and let them propagate domains up as well.
|
||||
hier_name = name
|
||||
if hier_name is None:
|
||||
hier_name = "<unnamed #{}>".format(i)
|
||||
subfrag._propagate_domains_up(hierarchy + (hier_name,))
|
||||
|
||||
# Second, classify subfragments by domains they define.
|
||||
for domain_name, domain in subfrag.domains.items():
|
||||
if domain.local:
|
||||
continue
|
||||
domain_subfrags[domain_name].add((subfrag, name, i))
|
||||
|
||||
# For each domain defined by more than one subfragment, rename the domain in each
|
||||
# of the subfragments such that they no longer conflict.
|
||||
for domain_name, subfrags in domain_subfrags.items():
|
||||
if len(subfrags) == 1:
|
||||
continue
|
||||
|
||||
names = [n for f, n, i in subfrags]
|
||||
if not all(names):
|
||||
names = sorted("<unnamed #{}>".format(i) if n is None else "'{}'".format(n)
|
||||
for f, n, i in subfrags)
|
||||
raise DomainError("Domain '{}' is defined by subfragments {} of fragment '{}'; "
|
||||
"it is necessary to either rename subfragment domains "
|
||||
"explicitly, or give names to subfragments"
|
||||
.format(domain_name, ", ".join(names), ".".join(hierarchy)))
|
||||
|
||||
if len(names) != len(set(names)):
|
||||
names = sorted("#{}".format(i) for f, n, i in subfrags)
|
||||
raise DomainError("Domain '{}' is defined by subfragments {} of fragment '{}', "
|
||||
"some of which have identical names; it is necessary to either "
|
||||
"rename subfragment domains explicitly, or give distinct names "
|
||||
"to subfragments"
|
||||
.format(domain_name, ", ".join(names), ".".join(hierarchy)))
|
||||
|
||||
for subfrag, name, i in subfrags:
|
||||
domain_name_map = {domain_name: "{}_{}".format(name, domain_name)}
|
||||
self.subfragments[i] = (DomainRenamer(domain_name_map)(subfrag), name)
|
||||
|
||||
# Finally, collect the (now unique) subfragment domains, and merge them into our domains.
|
||||
for subfrag, name in self.subfragments:
|
||||
for domain_name, domain in subfrag.domains.items():
|
||||
if domain.local:
|
||||
continue
|
||||
self.add_domains(domain)
|
||||
|
||||
def _propagate_domains_down(self):
|
||||
# For each domain defined in this fragment, ensure it also exists in all subfragments.
|
||||
for subfrag, name in self.subfragments:
|
||||
for domain in self.iter_domains():
|
||||
if domain in subfrag.domains:
|
||||
assert self.domains[domain] is subfrag.domains[domain]
|
||||
else:
|
||||
subfrag.add_domains(self.domains[domain])
|
||||
|
||||
subfrag._propagate_domains_down()
|
||||
|
||||
def _create_missing_domains(self, missing_domain, *, platform=None):
|
||||
from .xfrm import DomainCollector
|
||||
|
||||
collector = DomainCollector()
|
||||
collector(self)
|
||||
|
||||
new_domains = []
|
||||
for domain_name in collector.used_domains - collector.defined_domains:
|
||||
if domain_name is None:
|
||||
continue
|
||||
value = missing_domain(domain_name)
|
||||
if value is None:
|
||||
raise DomainError("Domain '{}' is used but not defined".format(domain_name))
|
||||
if type(value) is ClockDomain:
|
||||
self.add_domains(value)
|
||||
# And expose ports on the newly added clock domain, since it is added directly
|
||||
# and there was no chance to add any logic driving it.
|
||||
new_domains.append(value)
|
||||
else:
|
||||
new_fragment = Fragment.get(value, platform=platform)
|
||||
if domain_name not in new_fragment.domains:
|
||||
defined = new_fragment.domains.keys()
|
||||
raise DomainError(
|
||||
"Fragment returned by missing domain callback does not define "
|
||||
"requested domain '{}' (defines {})."
|
||||
.format(domain_name, ", ".join("'{}'".format(n) for n in defined)))
|
||||
self.add_subfragment(new_fragment, "cd_{}".format(domain_name))
|
||||
self.add_domains(new_fragment.domains.values())
|
||||
return new_domains
|
||||
|
||||
def _propagate_domains(self, missing_domain, *, platform=None):
|
||||
self._propagate_domains_up()
|
||||
self._propagate_domains_down()
|
||||
self._resolve_hierarchy_conflicts()
|
||||
new_domains = self._create_missing_domains(missing_domain, platform=platform)
|
||||
self._propagate_domains_down()
|
||||
return new_domains
|
||||
|
||||
def _prepare_use_def_graph(self, parent, level, uses, defs, ios, top):
|
||||
def add_uses(*sigs, self=self):
|
||||
for sig in flatten(sigs):
|
||||
if sig not in uses:
|
||||
uses[sig] = set()
|
||||
uses[sig].add(self)
|
||||
|
||||
def add_defs(*sigs):
|
||||
for sig in flatten(sigs):
|
||||
if sig not in defs:
|
||||
defs[sig] = self
|
||||
else:
|
||||
assert defs[sig] is self
|
||||
|
||||
def add_io(*sigs):
|
||||
for sig in flatten(sigs):
|
||||
if sig not in ios:
|
||||
ios[sig] = self
|
||||
else:
|
||||
assert ios[sig] is self
|
||||
|
||||
# Collect all signals we're driving (on LHS of statements), and signals we're using
|
||||
# (on RHS of statements, or in clock domains).
|
||||
for stmt in self.statements:
|
||||
add_uses(stmt._rhs_signals())
|
||||
add_defs(stmt._lhs_signals())
|
||||
|
||||
for domain, _ in self.iter_sync():
|
||||
cd = self.domains[domain]
|
||||
add_uses(cd.clk)
|
||||
if cd.rst is not None:
|
||||
add_uses(cd.rst)
|
||||
|
||||
# Repeat for subfragments.
|
||||
for subfrag, name in self.subfragments:
|
||||
if isinstance(subfrag, Instance):
|
||||
for port_name, (value, dir) in subfrag.named_ports.items():
|
||||
if dir == "i":
|
||||
# Prioritize defs over uses.
|
||||
rhs_without_outputs = value._rhs_signals() - subfrag.iter_ports(dir="o")
|
||||
subfrag.add_ports(rhs_without_outputs, dir=dir)
|
||||
add_uses(value._rhs_signals())
|
||||
if dir == "o":
|
||||
subfrag.add_ports(value._lhs_signals(), dir=dir)
|
||||
add_defs(value._lhs_signals())
|
||||
if dir == "io":
|
||||
subfrag.add_ports(value._lhs_signals(), dir=dir)
|
||||
add_io(value._lhs_signals())
|
||||
else:
|
||||
parent[subfrag] = self
|
||||
level [subfrag] = level[self] + 1
|
||||
|
||||
subfrag._prepare_use_def_graph(parent, level, uses, defs, ios, top)
|
||||
|
||||
def _propagate_ports(self, ports, all_undef_as_ports):
|
||||
# Take this fragment graph:
|
||||
#
|
||||
# __ B (def: q, use: p r)
|
||||
# /
|
||||
# A (def: p, use: q r)
|
||||
# \
|
||||
# \_ C (def: r, use: p q)
|
||||
#
|
||||
# We need to consider three cases.
|
||||
# 1. Signal p requires an input port in B;
|
||||
# 2. Signal r requires an output port in C;
|
||||
# 3. Signal r requires an output port in C and an input port in B.
|
||||
#
|
||||
# Adding these ports can be in general done in three steps for each signal:
|
||||
# 1. Find the least common ancestor of all uses and defs.
|
||||
# 2. Going upwards from the single def, add output ports.
|
||||
# 3. Going upwards from all uses, add input ports.
|
||||
|
||||
parent = {self: None}
|
||||
level = {self: 0}
|
||||
uses = SignalDict()
|
||||
defs = SignalDict()
|
||||
ios = SignalDict()
|
||||
self._prepare_use_def_graph(parent, level, uses, defs, ios, self)
|
||||
|
||||
ports = SignalSet(ports)
|
||||
if all_undef_as_ports:
|
||||
for sig in uses:
|
||||
if sig in defs:
|
||||
continue
|
||||
ports.add(sig)
|
||||
for sig in ports:
|
||||
if sig not in uses:
|
||||
uses[sig] = set()
|
||||
uses[sig].add(self)
|
||||
|
||||
@memoize
|
||||
def lca_of(fragu, fragv):
|
||||
# Normalize fragu to be deeper than fragv.
|
||||
if level[fragu] < level[fragv]:
|
||||
fragu, fragv = fragv, fragu
|
||||
# Find ancestor of fragu on the same level as fragv.
|
||||
for _ in range(level[fragu] - level[fragv]):
|
||||
fragu = parent[fragu]
|
||||
# If fragv was the ancestor of fragu, we're done.
|
||||
if fragu == fragv:
|
||||
return fragu
|
||||
# Otherwise, they are at the same level but in different branches. Step both fragu
|
||||
# and fragv until we find the common ancestor.
|
||||
while parent[fragu] != parent[fragv]:
|
||||
fragu = parent[fragu]
|
||||
fragv = parent[fragv]
|
||||
return parent[fragu]
|
||||
|
||||
for sig in uses:
|
||||
if sig in defs:
|
||||
lca = reduce(lca_of, uses[sig], defs[sig])
|
||||
else:
|
||||
lca = reduce(lca_of, uses[sig])
|
||||
|
||||
for frag in uses[sig]:
|
||||
if sig in defs and frag is defs[sig]:
|
||||
continue
|
||||
while frag != lca:
|
||||
frag.add_ports(sig, dir="i")
|
||||
frag = parent[frag]
|
||||
|
||||
if sig in defs:
|
||||
frag = defs[sig]
|
||||
while frag != lca:
|
||||
frag.add_ports(sig, dir="o")
|
||||
frag = parent[frag]
|
||||
|
||||
for sig in ios:
|
||||
frag = ios[sig]
|
||||
while frag is not None:
|
||||
frag.add_ports(sig, dir="io")
|
||||
frag = parent[frag]
|
||||
|
||||
for sig in ports:
|
||||
if sig in ios:
|
||||
continue
|
||||
if sig in defs:
|
||||
self.add_ports(sig, dir="o")
|
||||
else:
|
||||
self.add_ports(sig, dir="i")
|
||||
|
||||
def prepare(self, ports=None, missing_domain=lambda name: ClockDomain(name)):
|
||||
from .xfrm import SampleLowerer, DomainLowerer
|
||||
|
||||
fragment = SampleLowerer()(self)
|
||||
new_domains = fragment._propagate_domains(missing_domain)
|
||||
fragment = DomainLowerer()(fragment)
|
||||
if ports is None:
|
||||
fragment._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
else:
|
||||
mapped_ports = []
|
||||
# Lower late bound signals like ClockSignal() to ports.
|
||||
port_lowerer = DomainLowerer(fragment.domains)
|
||||
for port in ports:
|
||||
if not isinstance(port, (Signal, ClockSignal, ResetSignal)):
|
||||
raise TypeError("Only signals may be added as ports, not {!r}"
|
||||
.format(port))
|
||||
mapped_ports.append(port_lowerer.on_value(port))
|
||||
# Add ports for all newly created missing clock domains, since not doing so defeats
|
||||
# the purpose of domain auto-creation. (It's possible to refer to these ports before
|
||||
# the domain actually exists through late binding, but it's inconvenient.)
|
||||
for cd in new_domains:
|
||||
mapped_ports.append(cd.clk)
|
||||
if cd.rst is not None:
|
||||
mapped_ports.append(cd.rst)
|
||||
fragment._propagate_ports(ports=mapped_ports, all_undef_as_ports=False)
|
||||
return fragment
|
||||
|
||||
|
||||
class Instance(Fragment):
|
||||
def __init__(self, type, *args, **kwargs):
|
||||
super().__init__()
|
||||
|
||||
self.type = type
|
||||
self.parameters = OrderedDict()
|
||||
self.named_ports = OrderedDict()
|
||||
|
||||
for (kind, name, value) in args:
|
||||
if kind == "a":
|
||||
self.attrs[name] = value
|
||||
elif kind == "p":
|
||||
self.parameters[name] = value
|
||||
elif kind in ("i", "o", "io"):
|
||||
self.named_ports[name] = (Value.cast(value), kind)
|
||||
else:
|
||||
raise NameError("Instance argument {!r} should be a tuple (kind, name, value) "
|
||||
"where kind is one of \"p\", \"i\", \"o\", or \"io\""
|
||||
.format((kind, name, value)))
|
||||
|
||||
for kw, arg in kwargs.items():
|
||||
if kw.startswith("a_"):
|
||||
self.attrs[kw[2:]] = arg
|
||||
elif kw.startswith("p_"):
|
||||
self.parameters[kw[2:]] = arg
|
||||
elif kw.startswith("i_"):
|
||||
self.named_ports[kw[2:]] = (Value.cast(arg), "i")
|
||||
elif kw.startswith("o_"):
|
||||
self.named_ports[kw[2:]] = (Value.cast(arg), "o")
|
||||
elif kw.startswith("io_"):
|
||||
self.named_ports[kw[3:]] = (Value.cast(arg), "io")
|
||||
else:
|
||||
raise NameError("Instance keyword argument {}={!r} does not start with one of "
|
||||
"\"p_\", \"i_\", \"o_\", or \"io_\""
|
||||
.format(kw, arg))
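# Illustrative sketch (not part of the vendored nMigen sources): wrapping a vendor primitive
# with Instance and lowering a design to a Fragment the way a backend would. The iCE40 cell
# and port names ("SB_GB", ...) are assumptions made for this example.
from nmigen.hdl.ast import Signal, ClockSignal
from nmigen.hdl.dsl import Module
from nmigen.hdl.ir import Fragment, Instance

def global_buffer_example():
    raw_clk = Signal()
    m = Module()
    # Named ports use the i_/o_/io_ keyword convention handled by Instance.__init__ above.
    m.submodules.gbuf = Instance("SB_GB",
        i_USER_SIGNAL_TO_GLOBAL_BUFFER=raw_clk,
        o_GLOBAL_BUFFER_OUTPUT=ClockSignal("sync"),
    )
    # Elaborate recursively and resolve domains and ports, as a backend does before emitting
    # RTLIL or Verilog.
    return Fragment.get(m, platform=None).prepare(ports=[raw_clk])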
@@ -0,0 +1,234 @@
import operator
|
||||
from collections import OrderedDict
|
||||
|
||||
from .. import tracer
|
||||
from .ast import *
|
||||
from .ir import Elaboratable, Instance
|
||||
|
||||
|
||||
__all__ = ["Memory", "ReadPort", "WritePort", "DummyPort"]
|
||||
|
||||
|
||||
class Memory:
|
||||
"""A word addressable storage.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Access granularity. Each storage element of this memory is ``width`` bits in size.
|
||||
depth : int
|
||||
Word count. This memory contains ``depth`` storage elements.
|
||||
init : list of int
|
||||
Initial values. At power on, each storage element in this memory is initialized to
|
||||
the corresponding element of ``init``, if any, or to zero otherwise.
|
||||
Uninitialized memories are not currently supported.
|
||||
name : str
|
||||
Name hint for this memory. If ``None`` (default) the name is inferred from the variable
|
||||
name this ``Memory`` is assigned to.
|
||||
attrs : dict
|
||||
Dictionary of synthesis attributes.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
width : int
|
||||
depth : int
|
||||
init : list of int
|
||||
attrs : dict
|
||||
"""
|
||||
def __init__(self, *, width, depth, init=None, name=None, attrs=None, simulate=True):
|
||||
if not isinstance(width, int) or width < 0:
|
||||
raise TypeError("Memory width must be a non-negative integer, not {!r}"
|
||||
.format(width))
|
||||
if not isinstance(depth, int) or depth < 0:
|
||||
raise TypeError("Memory depth must be a non-negative integer, not {!r}"
|
||||
.format(depth))
|
||||
|
||||
self.name = name or tracer.get_var_name(depth=2, default="$memory")
|
||||
self.src_loc = tracer.get_src_loc()
|
||||
|
||||
self.width = width
|
||||
self.depth = depth
|
||||
self.attrs = OrderedDict(() if attrs is None else attrs)
|
||||
|
||||
# Array of signals for simulation.
|
||||
self._array = Array()
|
||||
if simulate:
|
||||
for addr in range(self.depth):
|
||||
self._array.append(Signal(self.width, name="{}({})"
|
||||
.format(name or "memory", addr)))
|
||||
|
||||
self.init = init
|
||||
|
||||
@property
|
||||
def init(self):
|
||||
return self._init
|
||||
|
||||
@init.setter
|
||||
def init(self, new_init):
|
||||
self._init = [] if new_init is None else list(new_init)
|
||||
if len(self.init) > self.depth:
|
||||
raise ValueError("Memory initialization value count exceed memory depth ({} > {})"
|
||||
.format(len(self.init), self.depth))
|
||||
|
||||
try:
|
||||
for addr in range(len(self._array)):
|
||||
if addr < len(self._init):
|
||||
self._array[addr].reset = operator.index(self._init[addr])
|
||||
else:
|
||||
self._array[addr].reset = 0
|
||||
except TypeError as e:
|
||||
raise TypeError("Memory initialization value at address {:x}: {}"
|
||||
.format(addr, e)) from None
|
||||
|
||||
def read_port(self, *, src_loc_at=0, **kwargs):
|
||||
return ReadPort(self, src_loc_at=1 + src_loc_at, **kwargs)
|
||||
|
||||
def write_port(self, *, src_loc_at=0, **kwargs):
|
||||
return WritePort(self, src_loc_at=1 + src_loc_at, **kwargs)
|
||||
|
||||
def __getitem__(self, index):
|
||||
"""Simulation only."""
|
||||
return self._array[index]
|
||||
|
||||
|
||||
class ReadPort(Elaboratable):
|
||||
def __init__(self, memory, *, domain="sync", transparent=True, src_loc_at=0):
|
||||
if domain == "comb" and not transparent:
|
||||
raise ValueError("Read port cannot be simultaneously asynchronous and non-transparent")
|
||||
|
||||
self.memory = memory
|
||||
self.domain = domain
|
||||
self.transparent = transparent
|
||||
|
||||
self.addr = Signal(range(memory.depth),
|
||||
name="{}_r_addr".format(memory.name), src_loc_at=1 + src_loc_at)
|
||||
self.data = Signal(memory.width,
|
||||
name="{}_r_data".format(memory.name), src_loc_at=1 + src_loc_at)
|
||||
if self.domain != "comb" and not transparent:
|
||||
self.en = Signal(name="{}_r_en".format(memory.name), reset=1,
|
||||
src_loc_at=2 + src_loc_at)
|
||||
else:
|
||||
self.en = Const(1)
|
||||
|
||||
def elaborate(self, platform):
|
||||
f = Instance("$memrd",
|
||||
p_MEMID=self.memory,
|
||||
p_ABITS=self.addr.width,
|
||||
p_WIDTH=self.data.width,
|
||||
p_CLK_ENABLE=self.domain != "comb",
|
||||
p_CLK_POLARITY=1,
|
||||
p_TRANSPARENT=self.transparent,
|
||||
i_CLK=ClockSignal(self.domain) if self.domain != "comb" else Const(0),
|
||||
i_EN=self.en,
|
||||
i_ADDR=self.addr,
|
||||
o_DATA=self.data,
|
||||
)
|
||||
if self.domain == "comb":
|
||||
# Asynchronous port
|
||||
f.add_statements(self.data.eq(self.memory._array[self.addr]))
|
||||
f.add_driver(self.data)
|
||||
elif not self.transparent:
|
||||
# Synchronous, read-before-write port
|
||||
f.add_statements(
|
||||
Switch(self.en, {
|
||||
1: self.data.eq(self.memory._array[self.addr])
|
||||
})
|
||||
)
|
||||
f.add_driver(self.data, self.domain)
|
||||
else:
|
||||
# Synchronous, write-through port
|
||||
# This model is a bit unconventional. We model transparent ports as asynchronous ports
|
||||
# that are latched when the clock is high. This isn't exactly correct, but it is very
|
||||
# close to the correct behavior of a transparent port, and the difference should only
|
||||
# be observable in pathological cases of clock gating. A register is injected to
|
||||
# the address input to achieve the correct address-to-data latency. Also, the reset
|
||||
# value of the data output is forcibly set to the 0th initial value, if any--note that
|
||||
# many FPGAs do not guarantee this behavior!
|
||||
if len(self.memory.init) > 0:
|
||||
self.data.reset = self.memory.init[0]
|
||||
latch_addr = Signal.like(self.addr)
|
||||
f.add_statements(
|
||||
latch_addr.eq(self.addr),
|
||||
Switch(ClockSignal(self.domain), {
|
||||
0: self.data.eq(self.data),
|
||||
1: self.data.eq(self.memory._array[latch_addr]),
|
||||
}),
|
||||
)
|
||||
f.add_driver(latch_addr, self.domain)
|
||||
f.add_driver(self.data)
|
||||
return f
|
||||
|
||||
|
||||
class WritePort(Elaboratable):
|
||||
def __init__(self, memory, *, domain="sync", granularity=None, src_loc_at=0):
|
||||
if granularity is None:
|
||||
granularity = memory.width
|
||||
if not isinstance(granularity, int) or granularity < 0:
|
||||
raise TypeError("Write port granularity must be a non-negative integer, not {!r}"
|
||||
.format(granularity))
|
||||
if granularity > memory.width:
|
||||
raise ValueError("Write port granularity must not be greater than memory width "
|
||||
"({} > {})"
|
||||
.format(granularity, memory.width))
|
||||
if memory.width // granularity * granularity != memory.width:
|
||||
raise ValueError("Write port granularity must divide memory width evenly")
|
||||
|
||||
self.memory = memory
|
||||
self.domain = domain
|
||||
self.granularity = granularity
|
||||
|
||||
self.addr = Signal(range(memory.depth),
|
||||
name="{}_w_addr".format(memory.name), src_loc_at=1 + src_loc_at)
|
||||
self.data = Signal(memory.width,
|
||||
name="{}_w_data".format(memory.name), src_loc_at=1 + src_loc_at)
|
||||
self.en = Signal(memory.width // granularity,
|
||||
name="{}_w_en".format(memory.name), src_loc_at=1 + src_loc_at)
|
||||
|
||||
def elaborate(self, platform):
|
||||
f = Instance("$memwr",
|
||||
p_MEMID=self.memory,
|
||||
p_ABITS=self.addr.width,
|
||||
p_WIDTH=self.data.width,
|
||||
p_CLK_ENABLE=1,
|
||||
p_CLK_POLARITY=1,
|
||||
p_PRIORITY=0,
|
||||
i_CLK=ClockSignal(self.domain),
|
||||
i_EN=Cat(Repl(en_bit, self.granularity) for en_bit in self.en),
|
||||
i_ADDR=self.addr,
|
||||
i_DATA=self.data,
|
||||
)
|
||||
if len(self.en) > 1:
|
||||
for index, en_bit in enumerate(self.en):
|
||||
offset = index * self.granularity
|
||||
bits = slice(offset, offset + self.granularity)
|
||||
write_data = self.memory._array[self.addr][bits].eq(self.data[bits])
|
||||
f.add_statements(Switch(en_bit, { 1: write_data }))
|
||||
else:
|
||||
write_data = self.memory._array[self.addr].eq(self.data)
|
||||
f.add_statements(Switch(self.en, { 1: write_data }))
|
||||
for signal in self.memory._array:
|
||||
f.add_driver(signal, self.domain)
|
||||
return f
|
||||
|
||||
|
||||
class DummyPort:
|
||||
"""Dummy memory port.
|
||||
|
||||
This port can be used in place of either a read or a write port for testing and verification.
|
||||
It does not include any read/write port specific attributes, i.e. none besides ``"domain"``;
|
||||
any such attributes may be set manually.
|
||||
"""
|
||||
def __init__(self, *, data_width, addr_width, domain="sync", name=None, granularity=None):
|
||||
self.domain = domain
|
||||
|
||||
if granularity is None:
|
||||
granularity = data_width
|
||||
if name is None:
|
||||
name = tracer.get_var_name(depth=2, default="dummy")
|
||||
|
||||
self.addr = Signal(addr_width,
|
||||
name="{}_addr".format(name), src_loc_at=1)
|
||||
self.data = Signal(data_width,
|
||||
name="{}_data".format(name), src_loc_at=1)
|
||||
self.en = Signal(data_width // granularity,
|
||||
name="{}_en".format(name), src_loc_at=1)
@@ -0,0 +1,228 @@
from enum import Enum
|
||||
from collections import OrderedDict
|
||||
from functools import reduce
|
||||
|
||||
from .. import tracer
|
||||
from .._utils import union, deprecated
|
||||
from .ast import *
|
||||
|
||||
|
||||
__all__ = ["Direction", "DIR_NONE", "DIR_FANOUT", "DIR_FANIN", "Layout", "Record"]
|
||||
|
||||
|
||||
Direction = Enum('Direction', ('NONE', 'FANOUT', 'FANIN'))
|
||||
|
||||
DIR_NONE = Direction.NONE
|
||||
DIR_FANOUT = Direction.FANOUT
|
||||
DIR_FANIN = Direction.FANIN
|
||||
|
||||
|
||||
class Layout:
|
||||
@staticmethod
|
||||
def cast(obj, *, src_loc_at=0):
|
||||
if isinstance(obj, Layout):
|
||||
return obj
|
||||
return Layout(obj, src_loc_at=1 + src_loc_at)
|
||||
|
||||
def __init__(self, fields, *, src_loc_at=0):
|
||||
self.fields = OrderedDict()
|
||||
for field in fields:
|
||||
if not isinstance(field, tuple) or len(field) not in (2, 3):
|
||||
raise TypeError("Field {!r} has invalid layout: should be either "
|
||||
"(name, shape) or (name, shape, direction)"
|
||||
.format(field))
|
||||
if len(field) == 2:
|
||||
name, shape = field
|
||||
direction = DIR_NONE
|
||||
if isinstance(shape, list):
|
||||
shape = Layout.cast(shape)
|
||||
else:
|
||||
name, shape, direction = field
|
||||
if not isinstance(direction, Direction):
|
||||
raise TypeError("Field {!r} has invalid direction: should be a Direction "
|
||||
"instance like DIR_FANIN"
|
||||
.format(field))
|
||||
if not isinstance(name, str):
|
||||
raise TypeError("Field {!r} has invalid name: should be a string"
|
||||
.format(field))
|
||||
if not isinstance(shape, Layout):
|
||||
try:
|
||||
shape = Shape.cast(shape, src_loc_at=1 + src_loc_at)
|
||||
except Exception as error:
|
||||
raise TypeError("Field {!r} has invalid shape: should be castable to Shape "
|
||||
"or a list of fields of a nested record"
|
||||
.format(field))
|
||||
if name in self.fields:
|
||||
raise NameError("Field {!r} has a name that is already present in the layout"
|
||||
.format(field))
|
||||
self.fields[name] = (shape, direction)
|
||||
|
||||
def __getitem__(self, item):
|
||||
if isinstance(item, tuple):
|
||||
return Layout([
|
||||
(name, shape, dir)
|
||||
for (name, (shape, dir)) in self.fields.items()
|
||||
if name in item
|
||||
])
|
||||
|
||||
return self.fields[item]
|
||||
|
||||
def __iter__(self):
|
||||
for name, (shape, dir) in self.fields.items():
|
||||
yield (name, shape, dir)
|
||||
|
||||
def __eq__(self, other):
|
||||
return self.fields == other.fields
|
||||
|
||||
|
||||
# Unlike most Values, Record *can* be subclassed.
|
||||
class Record(Value):
|
||||
@staticmethod
|
||||
def like(other, *, name=None, name_suffix=None, src_loc_at=0):
|
||||
if name is not None:
|
||||
new_name = str(name)
|
||||
elif name_suffix is not None:
|
||||
new_name = other.name + str(name_suffix)
|
||||
else:
|
||||
new_name = tracer.get_var_name(depth=2 + src_loc_at, default=None)
|
||||
|
||||
def concat(a, b):
|
||||
if a is None:
|
||||
return b
|
||||
return "{}__{}".format(a, b)
|
||||
|
||||
fields = {}
|
||||
for field_name in other.fields:
|
||||
field = other[field_name]
|
||||
if isinstance(field, Record):
|
||||
fields[field_name] = Record.like(field, name=concat(new_name, field_name),
|
||||
src_loc_at=1 + src_loc_at)
|
||||
else:
|
||||
fields[field_name] = Signal.like(field, name=concat(new_name, field_name),
|
||||
src_loc_at=1 + src_loc_at)
|
||||
|
||||
return Record(other.layout, name=new_name, fields=fields, src_loc_at=1)
|
||||
|
||||
def __init__(self, layout, *, name=None, fields=None, src_loc_at=0):
|
||||
if name is None:
|
||||
name = tracer.get_var_name(depth=2 + src_loc_at, default=None)
|
||||
|
||||
self.name = name
|
||||
self.src_loc = tracer.get_src_loc(src_loc_at)
|
||||
|
||||
def concat(a, b):
|
||||
if a is None:
|
||||
return b
|
||||
return "{}__{}".format(a, b)
|
||||
|
||||
self.layout = Layout.cast(layout, src_loc_at=1 + src_loc_at)
|
||||
self.fields = OrderedDict()
|
||||
for field_name, field_shape, field_dir in self.layout:
|
||||
if fields is not None and field_name in fields:
|
||||
field = fields[field_name]
|
||||
if isinstance(field_shape, Layout):
|
||||
assert isinstance(field, Record) and field_shape == field.layout
|
||||
else:
|
||||
assert isinstance(field, Signal) and field_shape == field.shape()
|
||||
self.fields[field_name] = field
|
||||
else:
|
||||
if isinstance(field_shape, Layout):
|
||||
self.fields[field_name] = Record(field_shape, name=concat(name, field_name),
|
||||
src_loc_at=1 + src_loc_at)
|
||||
else:
|
||||
self.fields[field_name] = Signal(field_shape, name=concat(name, field_name),
|
||||
src_loc_at=1 + src_loc_at)
|
||||
|
||||
def __getattr__(self, name):
|
||||
return self[name]
|
||||
|
||||
def __getitem__(self, item):
|
||||
if isinstance(item, str):
|
||||
try:
|
||||
return self.fields[item]
|
||||
except KeyError:
|
||||
if self.name is None:
|
||||
reference = "Unnamed record"
|
||||
else:
|
||||
reference = "Record '{}'".format(self.name)
|
||||
raise AttributeError("{} does not have a field '{}'. Did you mean one of: {}?"
|
||||
.format(reference, item, ", ".join(self.fields))) from None
|
||||
elif isinstance(item, tuple):
|
||||
return Record(self.layout[item], fields={
|
||||
field_name: field_value
|
||||
for field_name, field_value in self.fields.items()
|
||||
if field_name in item
|
||||
})
|
||||
else:
|
||||
return super().__getitem__(item)
|
||||
|
||||
def shape(self):
|
||||
return Shape(sum(len(f) for f in self.fields.values()))
|
||||
|
||||
def _lhs_signals(self):
|
||||
return union((f._lhs_signals() for f in self.fields.values()), start=SignalSet())
|
||||
|
||||
def _rhs_signals(self):
|
||||
return union((f._rhs_signals() for f in self.fields.values()), start=SignalSet())
|
||||
|
||||
def __repr__(self):
|
||||
fields = []
|
||||
for field_name, field in self.fields.items():
|
||||
if isinstance(field, Signal):
|
||||
fields.append(field_name)
|
||||
else:
|
||||
fields.append(repr(field))
|
||||
name = self.name
|
||||
if name is None:
|
||||
name = "<unnamed>"
|
||||
return "(rec {} {})".format(name, " ".join(fields))
|
||||
|
||||
def connect(self, *subordinates, include=None, exclude=None):
|
||||
def rec_name(record):
|
||||
if record.name is None:
|
||||
return "unnamed record"
|
||||
else:
|
||||
return "record '{}'".format(record.name)
|
||||
|
||||
for field in include or {}:
|
||||
if field not in self.fields:
|
||||
raise AttributeError("Cannot include field '{}' because it is not present in {}"
|
||||
.format(field, rec_name(self)))
|
||||
for field in exclude or {}:
|
||||
if field not in self.fields:
|
||||
raise AttributeError("Cannot exclude field '{}' because it is not present in {}"
|
||||
.format(field, rec_name(self)))
|
||||
|
||||
stmts = []
|
||||
for field in self.fields:
|
||||
if include is not None and field not in include:
|
||||
continue
|
||||
if exclude is not None and field in exclude:
|
||||
continue
|
||||
|
||||
shape, direction = self.layout[field]
|
||||
if not isinstance(shape, Layout) and direction == DIR_NONE:
|
||||
raise TypeError("Cannot connect field '{}' of {} because it does not have "
|
||||
"a direction"
|
||||
.format(field, rec_name(self)))
|
||||
|
||||
item = self.fields[field]
|
||||
subord_items = []
|
||||
for subord in subordinates:
|
||||
if field not in subord.fields:
|
||||
raise AttributeError("Cannot connect field '{}' of {} to subordinate {} "
|
||||
"because the subordinate record does not have this field"
|
||||
.format(field, rec_name(self), rec_name(subord)))
|
||||
subord_items.append(subord.fields[field])
|
||||
|
||||
if isinstance(shape, Layout):
|
||||
sub_include = include[field] if include and field in include else None
|
||||
sub_exclude = exclude[field] if exclude and field in exclude else None
|
||||
stmts += item.connect(*subord_items, include=sub_include, exclude=sub_exclude)
|
||||
else:
|
||||
if direction == DIR_FANOUT:
|
||||
stmts += [sub_item.eq(item) for sub_item in subord_items]
|
||||
if direction == DIR_FANIN:
|
||||
stmts += [item.eq(reduce(lambda a, b: a | b, subord_items))]
|
||||
|
||||
return stmts
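# --- Illustrative usage sketch (not part of nMigen): a directed record layout and
# --- Record.connect(). Field names and widths are made up; shown as a standalone
# --- snippet rather than code belonging to this module.
from nmigen.hdl.rec import Record, Layout, DIR_FANOUT, DIR_FANIN

bus_layout = Layout([
    ("data",  8, DIR_FANOUT),   # driven by the initiator, fanned out to subordinates
    ("valid", 1, DIR_FANOUT),
    ("ready", 1, DIR_FANIN),    # driven by subordinates, OR-reduced on the way back
])
initiator = Record(bus_layout)
target    = Record(bus_layout)
# connect() returns a list of assignments, typically added with m.d.comb +=.
connection = initiator.connect(target)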
|
|
@ -0,0 +1,754 @@
|
|||
from abc import ABCMeta, abstractmethod
|
||||
from collections import OrderedDict
|
||||
from collections.abc import Iterable
|
||||
|
||||
from .._utils import flatten, deprecated
|
||||
from .. import tracer
|
||||
from .ast import *
|
||||
from .ast import _StatementList
|
||||
from .cd import *
|
||||
from .ir import *
|
||||
from .rec import *
|
||||
|
||||
|
||||
__all__ = ["ValueVisitor", "ValueTransformer",
|
||||
"StatementVisitor", "StatementTransformer",
|
||||
"FragmentTransformer",
|
||||
"TransformedElaboratable",
|
||||
"DomainCollector", "DomainRenamer", "DomainLowerer",
|
||||
"SampleDomainInjector", "SampleLowerer",
|
||||
"SwitchCleaner", "LHSGroupAnalyzer", "LHSGroupFilter",
|
||||
"ResetInserter", "EnableInserter"]
|
||||
|
||||
|
||||
class ValueVisitor(metaclass=ABCMeta):
|
||||
@abstractmethod
|
||||
def on_Const(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_AnyConst(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_AnySeq(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Signal(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Record(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_ClockSignal(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_ResetSignal(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Operator(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Slice(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Part(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Cat(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Repl(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_ArrayProxy(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Sample(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Initial(self, value):
|
||||
pass # :nocov:
|
||||
|
||||
def on_unknown_value(self, value):
|
||||
raise TypeError("Cannot transform value {!r}".format(value)) # :nocov:
|
||||
|
||||
def replace_value_src_loc(self, value, new_value):
|
||||
return True
|
||||
|
||||
def on_value(self, value):
|
||||
if type(value) is Const:
|
||||
new_value = self.on_Const(value)
|
||||
elif type(value) is AnyConst:
|
||||
new_value = self.on_AnyConst(value)
|
||||
elif type(value) is AnySeq:
|
||||
new_value = self.on_AnySeq(value)
|
||||
elif isinstance(value, Signal):
|
||||
# Uses `isinstance()` and not `type() is` because nmigen.compat requires it.
|
||||
new_value = self.on_Signal(value)
|
||||
elif isinstance(value, Record):
|
||||
# Uses `isinstance()` and not `type() is` to allow inheriting from Record.
|
||||
new_value = self.on_Record(value)
|
||||
elif type(value) is ClockSignal:
|
||||
new_value = self.on_ClockSignal(value)
|
||||
elif type(value) is ResetSignal:
|
||||
new_value = self.on_ResetSignal(value)
|
||||
elif type(value) is Operator:
|
||||
new_value = self.on_Operator(value)
|
||||
elif type(value) is Slice:
|
||||
new_value = self.on_Slice(value)
|
||||
elif type(value) is Part:
|
||||
new_value = self.on_Part(value)
|
||||
elif type(value) is Cat:
|
||||
new_value = self.on_Cat(value)
|
||||
elif type(value) is Repl:
|
||||
new_value = self.on_Repl(value)
|
||||
elif type(value) is ArrayProxy:
|
||||
new_value = self.on_ArrayProxy(value)
|
||||
elif type(value) is Sample:
|
||||
new_value = self.on_Sample(value)
|
||||
elif type(value) is Initial:
|
||||
new_value = self.on_Initial(value)
|
||||
elif isinstance(value, UserValue):
|
||||
# Uses `isinstance()` and not `type() is` to allow inheriting.
|
||||
new_value = self.on_value(value._lazy_lower())
|
||||
else:
|
||||
new_value = self.on_unknown_value(value)
|
||||
if isinstance(new_value, Value) and self.replace_value_src_loc(value, new_value):
|
||||
new_value.src_loc = value.src_loc
|
||||
return new_value
|
||||
|
||||
def __call__(self, value):
|
||||
return self.on_value(value)
|
||||
|
||||
|
||||
class ValueTransformer(ValueVisitor):
|
||||
def on_Const(self, value):
|
||||
return value
|
||||
|
||||
def on_AnyConst(self, value):
|
||||
return value
|
||||
|
||||
def on_AnySeq(self, value):
|
||||
return value
|
||||
|
||||
def on_Signal(self, value):
|
||||
return value
|
||||
|
||||
def on_Record(self, value):
|
||||
return value
|
||||
|
||||
def on_ClockSignal(self, value):
|
||||
return value
|
||||
|
||||
def on_ResetSignal(self, value):
|
||||
return value
|
||||
|
||||
def on_Operator(self, value):
|
||||
return Operator(value.operator, [self.on_value(o) for o in value.operands])
|
||||
|
||||
def on_Slice(self, value):
|
||||
return Slice(self.on_value(value.value), value.start, value.stop)
|
||||
|
||||
def on_Part(self, value):
|
||||
return Part(self.on_value(value.value), self.on_value(value.offset),
|
||||
value.width, value.stride)
|
||||
|
||||
def on_Cat(self, value):
|
||||
return Cat(self.on_value(o) for o in value.parts)
|
||||
|
||||
def on_Repl(self, value):
|
||||
return Repl(self.on_value(value.value), value.count)
|
||||
|
||||
def on_ArrayProxy(self, value):
|
||||
return ArrayProxy([self.on_value(elem) for elem in value._iter_as_values()],
|
||||
self.on_value(value.index))
|
||||
|
||||
def on_Sample(self, value):
|
||||
return Sample(self.on_value(value.value), value.clocks, value.domain)
|
||||
|
||||
def on_Initial(self, value):
|
||||
return value
|
||||
|
||||
|
||||
class StatementVisitor(metaclass=ABCMeta):
|
||||
@abstractmethod
|
||||
def on_Assign(self, stmt):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Assert(self, stmt):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Assume(self, stmt):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Cover(self, stmt):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_Switch(self, stmt):
|
||||
pass # :nocov:
|
||||
|
||||
@abstractmethod
|
||||
def on_statements(self, stmts):
|
||||
pass # :nocov:
|
||||
|
||||
def on_unknown_statement(self, stmt):
|
||||
raise TypeError("Cannot transform statement {!r}".format(stmt)) # :nocov:
|
||||
|
||||
def replace_statement_src_loc(self, stmt, new_stmt):
|
||||
return True
|
||||
|
||||
def on_statement(self, stmt):
|
||||
if type(stmt) is Assign:
|
||||
new_stmt = self.on_Assign(stmt)
|
||||
elif type(stmt) is Assert:
|
||||
new_stmt = self.on_Assert(stmt)
|
||||
elif type(stmt) is Assume:
|
||||
new_stmt = self.on_Assume(stmt)
|
||||
elif type(stmt) is Cover:
|
||||
new_stmt = self.on_Cover(stmt)
|
||||
elif isinstance(stmt, Switch):
|
||||
# Uses `isinstance()` and not `type() is` because nmigen.compat requires it.
|
||||
new_stmt = self.on_Switch(stmt)
|
||||
elif isinstance(stmt, Iterable):
|
||||
new_stmt = self.on_statements(stmt)
|
||||
else:
|
||||
new_stmt = self.on_unknown_statement(stmt)
|
||||
if isinstance(new_stmt, Statement) and self.replace_statement_src_loc(stmt, new_stmt):
|
||||
new_stmt.src_loc = stmt.src_loc
|
||||
if isinstance(new_stmt, Switch) and isinstance(stmt, Switch):
|
||||
new_stmt.case_src_locs = stmt.case_src_locs
|
||||
if isinstance(new_stmt, Property):
|
||||
new_stmt._MustUse__used = True
|
||||
return new_stmt
|
||||
|
||||
def __call__(self, stmt):
|
||||
return self.on_statement(stmt)
|
||||
|
||||
|
||||
class StatementTransformer(StatementVisitor):
|
||||
def on_value(self, value):
|
||||
return value
|
||||
|
||||
def on_Assign(self, stmt):
|
||||
return Assign(self.on_value(stmt.lhs), self.on_value(stmt.rhs))
|
||||
|
||||
def on_Assert(self, stmt):
|
||||
return Assert(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en)
|
||||
|
||||
def on_Assume(self, stmt):
|
||||
return Assume(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en)
|
||||
|
||||
def on_Cover(self, stmt):
|
||||
return Cover(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en)
|
||||
|
||||
def on_Switch(self, stmt):
|
||||
cases = OrderedDict((k, self.on_statement(s)) for k, s in stmt.cases.items())
|
||||
return Switch(self.on_value(stmt.test), cases)
|
||||
|
||||
def on_statements(self, stmts):
|
||||
return _StatementList(flatten(self.on_statement(stmt) for stmt in stmts))
|
||||
|
||||
|
||||
class FragmentTransformer:
|
||||
def map_subfragments(self, fragment, new_fragment):
|
||||
for subfragment, name in fragment.subfragments:
|
||||
new_fragment.add_subfragment(self(subfragment), name)
|
||||
|
||||
def map_ports(self, fragment, new_fragment):
|
||||
for port, dir in fragment.ports.items():
|
||||
new_fragment.add_ports(port, dir=dir)
|
||||
|
||||
def map_named_ports(self, fragment, new_fragment):
|
||||
if hasattr(self, "on_value"):
|
||||
for name, (value, dir) in fragment.named_ports.items():
|
||||
new_fragment.named_ports[name] = self.on_value(value), dir
|
||||
else:
|
||||
new_fragment.named_ports = OrderedDict(fragment.named_ports.items())
|
||||
|
||||
def map_domains(self, fragment, new_fragment):
|
||||
for domain in fragment.iter_domains():
|
||||
new_fragment.add_domains(fragment.domains[domain])
|
||||
|
||||
def map_statements(self, fragment, new_fragment):
|
||||
if hasattr(self, "on_statement"):
|
||||
new_fragment.add_statements(map(self.on_statement, fragment.statements))
|
||||
else:
|
||||
new_fragment.add_statements(fragment.statements)
|
||||
|
||||
def map_drivers(self, fragment, new_fragment):
|
||||
for domain, signal in fragment.iter_drivers():
|
||||
new_fragment.add_driver(signal, domain)
|
||||
|
||||
def on_fragment(self, fragment):
|
||||
if isinstance(fragment, Instance):
|
||||
new_fragment = Instance(fragment.type)
|
||||
new_fragment.parameters = OrderedDict(fragment.parameters)
|
||||
self.map_named_ports(fragment, new_fragment)
|
||||
else:
|
||||
new_fragment = Fragment()
|
||||
new_fragment.flatten = fragment.flatten
|
||||
new_fragment.attrs = OrderedDict(fragment.attrs)
|
||||
self.map_ports(fragment, new_fragment)
|
||||
self.map_subfragments(fragment, new_fragment)
|
||||
self.map_domains(fragment, new_fragment)
|
||||
self.map_statements(fragment, new_fragment)
|
||||
self.map_drivers(fragment, new_fragment)
|
||||
return new_fragment
|
||||
|
||||
def __call__(self, value, *, src_loc_at=0):
|
||||
if isinstance(value, Fragment):
|
||||
return self.on_fragment(value)
|
||||
elif isinstance(value, TransformedElaboratable):
|
||||
value._transforms_.append(self)
|
||||
return value
|
||||
elif hasattr(value, "elaborate"):
|
||||
value = TransformedElaboratable(value, src_loc_at=1 + src_loc_at)
|
||||
value._transforms_.append(self)
|
||||
return value
|
||||
else:
|
||||
raise AttributeError("Object {!r} cannot be elaborated".format(value))
|
||||
|
||||
|
||||
class TransformedElaboratable(Elaboratable):
|
||||
def __init__(self, elaboratable, *, src_loc_at=0):
|
||||
assert hasattr(elaboratable, "elaborate")
|
||||
|
||||
# Fields prefixed and suffixed with underscore to avoid as many conflicts with the inner
|
||||
# object as possible, since we're forwarding attribute requests to it.
|
||||
self._elaboratable_ = elaboratable
|
||||
self._transforms_ = []
|
||||
|
||||
def __getattr__(self, attr):
|
||||
return getattr(self._elaboratable_, attr)
|
||||
|
||||
def elaborate(self, platform):
|
||||
fragment = Fragment.get(self._elaboratable_, platform)
|
||||
for transform in self._transforms_:
|
||||
fragment = transform(fragment)
|
||||
return fragment
|
||||
|
||||
|
||||
class DomainCollector(ValueVisitor, StatementVisitor):
|
||||
def __init__(self):
|
||||
self.used_domains = set()
|
||||
self.defined_domains = set()
|
||||
self._local_domains = set()
|
||||
|
||||
def _add_used_domain(self, domain_name):
|
||||
if domain_name is None:
|
||||
return
|
||||
if domain_name in self._local_domains:
|
||||
return
|
||||
self.used_domains.add(domain_name)
|
||||
|
||||
def on_ignore(self, value):
|
||||
pass
|
||||
|
||||
on_Const = on_ignore
|
||||
on_AnyConst = on_ignore
|
||||
on_AnySeq = on_ignore
|
||||
on_Signal = on_ignore
|
||||
|
||||
def on_ClockSignal(self, value):
|
||||
self._add_used_domain(value.domain)
|
||||
|
||||
def on_ResetSignal(self, value):
|
||||
self._add_used_domain(value.domain)
|
||||
|
||||
on_Record = on_ignore
|
||||
|
||||
def on_Operator(self, value):
|
||||
for o in value.operands:
|
||||
self.on_value(o)
|
||||
|
||||
def on_Slice(self, value):
|
||||
self.on_value(value.value)
|
||||
|
||||
def on_Part(self, value):
|
||||
self.on_value(value.value)
|
||||
self.on_value(value.offset)
|
||||
|
||||
def on_Cat(self, value):
|
||||
for o in value.parts:
|
||||
self.on_value(o)
|
||||
|
||||
def on_Repl(self, value):
|
||||
self.on_value(value.value)
|
||||
|
||||
def on_ArrayProxy(self, value):
|
||||
for elem in value._iter_as_values():
|
||||
self.on_value(elem)
|
||||
self.on_value(value.index)
|
||||
|
||||
def on_Sample(self, value):
|
||||
self.on_value(value.value)
|
||||
|
||||
def on_Initial(self, value):
|
||||
pass
|
||||
|
||||
def on_Assign(self, stmt):
|
||||
self.on_value(stmt.lhs)
|
||||
self.on_value(stmt.rhs)
|
||||
|
||||
def on_property(self, stmt):
|
||||
self.on_value(stmt.test)
|
||||
|
||||
on_Assert = on_property
|
||||
on_Assume = on_property
|
||||
on_Cover = on_property
|
||||
|
||||
def on_Switch(self, stmt):
|
||||
self.on_value(stmt.test)
|
||||
for stmts in stmt.cases.values():
|
||||
self.on_statement(stmts)
|
||||
|
||||
def on_statements(self, stmts):
|
||||
for stmt in stmts:
|
||||
self.on_statement(stmt)
|
||||
|
||||
def on_fragment(self, fragment):
|
||||
if isinstance(fragment, Instance):
|
||||
for name, (value, dir) in fragment.named_ports.items():
|
||||
self.on_value(value)
|
||||
|
||||
old_local_domains, self._local_domains = self._local_domains, set(self._local_domains)
|
||||
for domain_name, domain in fragment.domains.items():
|
||||
if domain.local:
|
||||
self._local_domains.add(domain_name)
|
||||
else:
|
||||
self.defined_domains.add(domain_name)
|
||||
|
||||
self.on_statements(fragment.statements)
|
||||
for domain_name in fragment.drivers:
|
||||
self._add_used_domain(domain_name)
|
||||
for subfragment, name in fragment.subfragments:
|
||||
self.on_fragment(subfragment)
|
||||
|
||||
self._local_domains = old_local_domains
|
||||
|
||||
def __call__(self, fragment):
|
||||
self.on_fragment(fragment)
|
||||
|
||||
|
||||
class DomainRenamer(FragmentTransformer, ValueTransformer, StatementTransformer):
|
||||
def __init__(self, domain_map):
|
||||
if isinstance(domain_map, str):
|
||||
domain_map = {"sync": domain_map}
|
||||
for src, dst in domain_map.items():
|
||||
if src == "comb":
|
||||
raise ValueError("Domain '{}' may not be renamed".format(src))
|
||||
if dst == "comb":
|
||||
raise ValueError("Domain '{}' may not be renamed to '{}'".format(src, dst))
|
||||
self.domain_map = OrderedDict(domain_map)
|
||||
|
||||
def on_ClockSignal(self, value):
|
||||
if value.domain in self.domain_map:
|
||||
return ClockSignal(self.domain_map[value.domain])
|
||||
return value
|
||||
|
||||
def on_ResetSignal(self, value):
|
||||
if value.domain in self.domain_map:
|
||||
return ResetSignal(self.domain_map[value.domain])
|
||||
return value
|
||||
|
||||
def map_domains(self, fragment, new_fragment):
|
||||
for domain in fragment.iter_domains():
|
||||
cd = fragment.domains[domain]
|
||||
if domain in self.domain_map:
|
||||
if cd.name == domain:
|
||||
# Rename the actual ClockDomain object.
|
||||
cd.rename(self.domain_map[domain])
|
||||
else:
|
||||
assert cd.name == self.domain_map[domain]
|
||||
new_fragment.add_domains(cd)
|
||||
|
||||
def map_drivers(self, fragment, new_fragment):
|
||||
for domain, signals in fragment.drivers.items():
|
||||
if domain in self.domain_map:
|
||||
domain = self.domain_map[domain]
|
||||
for signal in signals:
|
||||
new_fragment.add_driver(self.on_value(signal), domain)
|
||||
|
||||
|
||||
class DomainLowerer(FragmentTransformer, ValueTransformer, StatementTransformer):
|
||||
def __init__(self, domains=None):
|
||||
self.domains = domains
|
||||
|
||||
def _resolve(self, domain, context):
|
||||
if domain not in self.domains:
|
||||
raise DomainError("Signal {!r} refers to nonexistent domain '{}'"
|
||||
.format(context, domain))
|
||||
return self.domains[domain]
|
||||
|
||||
def map_drivers(self, fragment, new_fragment):
|
||||
for domain, signal in fragment.iter_drivers():
|
||||
new_fragment.add_driver(self.on_value(signal), domain)
|
||||
|
||||
def replace_value_src_loc(self, value, new_value):
|
||||
return not isinstance(value, (ClockSignal, ResetSignal))
|
||||
|
||||
def on_ClockSignal(self, value):
|
||||
domain = self._resolve(value.domain, value)
|
||||
return domain.clk
|
||||
|
||||
def on_ResetSignal(self, value):
|
||||
domain = self._resolve(value.domain, value)
|
||||
if domain.rst is None:
|
||||
if value.allow_reset_less:
|
||||
return Const(0)
|
||||
else:
|
||||
raise DomainError("Signal {!r} refers to reset of reset-less domain '{}'"
|
||||
.format(value, value.domain))
|
||||
return domain.rst
|
||||
|
||||
def _insert_resets(self, fragment):
|
||||
for domain_name, signals in fragment.drivers.items():
|
||||
if domain_name is None:
|
||||
continue
|
||||
domain = fragment.domains[domain_name]
|
||||
if domain.rst is None:
|
||||
continue
|
||||
stmts = [signal.eq(Const(signal.reset, signal.width))
|
||||
for signal in signals if not signal.reset_less]
|
||||
fragment.add_statements(Switch(domain.rst, {1: stmts}))
|
||||
|
||||
def on_fragment(self, fragment):
|
||||
self.domains = fragment.domains
|
||||
new_fragment = super().on_fragment(fragment)
|
||||
self._insert_resets(new_fragment)
|
||||
return new_fragment
|
||||
|
||||
|
||||
class SampleDomainInjector(ValueTransformer, StatementTransformer):
|
||||
def __init__(self, domain):
|
||||
self.domain = domain
|
||||
|
||||
def on_Sample(self, value):
|
||||
if value.domain is not None:
|
||||
return value
|
||||
return Sample(value.value, value.clocks, self.domain)
|
||||
|
||||
def __call__(self, stmts):
|
||||
return self.on_statement(stmts)
|
||||
|
||||
|
||||
class SampleLowerer(FragmentTransformer, ValueTransformer, StatementTransformer):
|
||||
def __init__(self):
|
||||
self.initial = None
|
||||
self.sample_cache = None
|
||||
self.sample_stmts = None
|
||||
|
||||
def _name_reset(self, value):
|
||||
if isinstance(value, Const):
|
||||
return "c${}".format(value.value), value.value
|
||||
elif isinstance(value, Signal):
|
||||
return "s${}".format(value.name), value.reset
|
||||
elif isinstance(value, ClockSignal):
|
||||
return "clk", 0
|
||||
elif isinstance(value, ResetSignal):
|
||||
return "rst", 1
|
||||
elif isinstance(value, Initial):
|
||||
return "init", 0 # Past(Initial()) produces 0, 1, 0, 0, ...
|
||||
else:
|
||||
raise NotImplementedError # :nocov:
|
||||
|
||||
def on_Sample(self, value):
|
||||
if value in self.sample_cache:
|
||||
return self.sample_cache[value]
|
||||
|
||||
sampled_value = self.on_value(value.value)
|
||||
if value.clocks == 0:
|
||||
sample = sampled_value
|
||||
else:
|
||||
assert value.domain is not None
|
||||
sampled_name, sampled_reset = self._name_reset(value.value)
|
||||
name = "$sample${}${}${}".format(sampled_name, value.domain, value.clocks)
|
||||
sample = Signal.like(value.value, name=name, reset_less=True, reset=sampled_reset)
|
||||
sample.attrs["nmigen.sample_reg"] = True
|
||||
|
||||
prev_sample = self.on_Sample(Sample(sampled_value, value.clocks - 1, value.domain))
|
||||
if value.domain not in self.sample_stmts:
|
||||
self.sample_stmts[value.domain] = []
|
||||
self.sample_stmts[value.domain].append(sample.eq(prev_sample))
|
||||
|
||||
self.sample_cache[value] = sample
|
||||
return sample
|
||||
|
||||
def on_Initial(self, value):
|
||||
if self.initial is None:
|
||||
self.initial = Signal(name="init")
|
||||
return self.initial
|
||||
|
||||
def map_statements(self, fragment, new_fragment):
|
||||
self.initial = None
|
||||
self.sample_cache = ValueDict()
|
||||
self.sample_stmts = OrderedDict()
|
||||
new_fragment.add_statements(map(self.on_statement, fragment.statements))
|
||||
for domain, stmts in self.sample_stmts.items():
|
||||
new_fragment.add_statements(stmts)
|
||||
for stmt in stmts:
|
||||
new_fragment.add_driver(stmt.lhs, domain)
|
||||
if self.initial is not None:
|
||||
new_fragment.add_subfragment(Instance("$initstate", o_Y=self.initial))
|
||||
|
||||
|
||||
class SwitchCleaner(StatementVisitor):
|
||||
def on_ignore(self, stmt):
|
||||
return stmt
|
||||
|
||||
on_Assign = on_ignore
|
||||
on_Assert = on_ignore
|
||||
on_Assume = on_ignore
|
||||
on_Cover = on_ignore
|
||||
|
||||
def on_Switch(self, stmt):
|
||||
cases = OrderedDict((k, self.on_statement(s)) for k, s in stmt.cases.items())
|
||||
if any(len(s) for s in cases.values()):
|
||||
return Switch(stmt.test, cases)
|
||||
|
||||
def on_statements(self, stmts):
|
||||
stmts = flatten(self.on_statement(stmt) for stmt in stmts)
|
||||
return _StatementList(stmt for stmt in stmts if stmt is not None)
|
||||
|
||||
|
||||
class LHSGroupAnalyzer(StatementVisitor):
|
||||
def __init__(self):
|
||||
self.signals = SignalDict()
|
||||
self.unions = OrderedDict()
|
||||
|
||||
def find(self, signal):
|
||||
if signal not in self.signals:
|
||||
self.signals[signal] = len(self.signals)
|
||||
group = self.signals[signal]
|
||||
while group in self.unions:
|
||||
group = self.unions[group]
|
||||
self.signals[signal] = group
|
||||
return group
|
||||
|
||||
def unify(self, root, *leaves):
|
||||
root_group = self.find(root)
|
||||
for leaf in leaves:
|
||||
leaf_group = self.find(leaf)
|
||||
if root_group == leaf_group:
|
||||
continue
|
||||
self.unions[leaf_group] = root_group
|
||||
|
||||
def groups(self):
|
||||
groups = OrderedDict()
|
||||
for signal in self.signals:
|
||||
group = self.find(signal)
|
||||
if group not in groups:
|
||||
groups[group] = SignalSet()
|
||||
groups[group].add(signal)
|
||||
return groups
|
||||
|
||||
def on_Assign(self, stmt):
|
||||
lhs_signals = stmt._lhs_signals()
|
||||
if lhs_signals:
|
||||
self.unify(*stmt._lhs_signals())
|
||||
|
||||
def on_property(self, stmt):
|
||||
lhs_signals = stmt._lhs_signals()
|
||||
if lhs_signals:
|
||||
self.unify(*stmt._lhs_signals())
|
||||
|
||||
on_Assert = on_property
|
||||
on_Assume = on_property
|
||||
on_Cover = on_property
|
||||
|
||||
def on_Switch(self, stmt):
|
||||
for case_stmts in stmt.cases.values():
|
||||
self.on_statements(case_stmts)
|
||||
|
||||
def on_statements(self, stmts):
|
||||
for stmt in stmts:
|
||||
self.on_statement(stmt)
|
||||
|
||||
def __call__(self, stmts):
|
||||
self.on_statements(stmts)
|
||||
return self.groups()
|
||||
|
||||
|
||||
class LHSGroupFilter(SwitchCleaner):
|
||||
def __init__(self, signals):
|
||||
self.signals = signals
|
||||
|
||||
def on_Assign(self, stmt):
|
||||
# The invariant provided by LHSGroupAnalyzer is that all signals that ever appear together
|
||||
# on LHS are a part of the same group, so it is sufficient to check any of them.
|
||||
lhs_signals = stmt.lhs._lhs_signals()
|
||||
if lhs_signals:
|
||||
any_lhs_signal = next(iter(lhs_signals))
|
||||
if any_lhs_signal in self.signals:
|
||||
return stmt
|
||||
|
||||
def on_property(self, stmt):
|
||||
any_lhs_signal = next(iter(stmt._lhs_signals()))
|
||||
if any_lhs_signal in self.signals:
|
||||
return stmt
|
||||
|
||||
on_Assert = on_property
|
||||
on_Assume = on_property
|
||||
on_Cover = on_property
|
||||
|
||||
|
||||
class _ControlInserter(FragmentTransformer):
|
||||
def __init__(self, controls):
|
||||
self.src_loc = None
|
||||
if isinstance(controls, Value):
|
||||
controls = {"sync": controls}
|
||||
self.controls = OrderedDict(controls)
|
||||
|
||||
def on_fragment(self, fragment):
|
||||
new_fragment = super().on_fragment(fragment)
|
||||
for domain, signals in fragment.drivers.items():
|
||||
if domain is None or domain not in self.controls:
|
||||
continue
|
||||
self._insert_control(new_fragment, domain, signals)
|
||||
return new_fragment
|
||||
|
||||
def _insert_control(self, fragment, domain, signals):
|
||||
raise NotImplementedError # :nocov:
|
||||
|
||||
def __call__(self, value, *, src_loc_at=0):
|
||||
self.src_loc = tracer.get_src_loc(src_loc_at=src_loc_at)
|
||||
return super().__call__(value, src_loc_at=1 + src_loc_at)
|
||||
|
||||
|
||||
class ResetInserter(_ControlInserter):
|
||||
def _insert_control(self, fragment, domain, signals):
|
||||
stmts = [s.eq(Const(s.reset, s.width)) for s in signals if not s.reset_less]
|
||||
fragment.add_statements(Switch(self.controls[domain], {1: stmts}, src_loc=self.src_loc))
|
||||
|
||||
|
||||
class EnableInserter(_ControlInserter):
|
||||
def _insert_control(self, fragment, domain, signals):
|
||||
stmts = [s.eq(s) for s in signals]
|
||||
fragment.add_statements(Switch(self.controls[domain], {0: stmts}, src_loc=self.src_loc))
|
||||
|
||||
def on_fragment(self, fragment):
|
||||
new_fragment = super().on_fragment(fragment)
|
||||
if isinstance(new_fragment, Instance) and new_fragment.type in ("$memrd", "$memwr"):
|
||||
clk_port, clk_dir = new_fragment.named_ports["CLK"]
|
||||
if isinstance(clk_port, ClockSignal) and clk_port.domain in self.controls:
|
||||
en_port, en_dir = new_fragment.named_ports["EN"]
|
||||
en_port = Mux(self.controls[clk_port.domain], en_port, Const(0, len(en_port)))
|
||||
new_fragment.named_ports["EN"] = en_port, en_dir
|
||||
return new_fragment
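# --- Illustrative usage sketch (not part of nMigen): the public transformers above
# --- are normally applied by calling them on an elaboratable. MyCounter below is a
# --- minimal hypothetical design used only for the example.
from nmigen import Elaboratable, Module, Signal
from nmigen.hdl.xfrm import DomainRenamer, ResetInserter, EnableInserter

class MyCounter(Elaboratable):                     # stand-in for a user design
    def elaborate(self, platform):
        m = Module()
        count = Signal(8)
        m.d.sync += count.eq(count + 1)
        return m

rst = Signal()
en  = Signal()
counter = MyCounter()
counter = DomainRenamer("pix")(counter)            # rename its "sync" domain to "pix"
counter = ResetInserter({"pix": rst})(counter)     # add a synchronous reset control
counter = EnableInserter({"pix": en})(counter)     # add a clock-enable control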
|
|
@ -0,0 +1,252 @@
|
|||
from .._utils import deprecated
|
||||
from .. import *
|
||||
|
||||
|
||||
__all__ = ["FFSynchronizer", "AsyncFFSynchronizer", "ResetSynchronizer", "PulseSynchronizer"]
|
||||
|
||||
|
||||
def _check_stages(stages):
|
||||
if not isinstance(stages, int) or stages < 1:
|
||||
raise TypeError("Synchronization stage count must be a positive integer, not {!r}"
|
||||
.format(stages))
|
||||
if stages < 2:
|
||||
raise ValueError("Synchronization stage count may not safely be less than 2")
|
||||
|
||||
|
||||
class FFSynchronizer(Elaboratable):
|
||||
"""Resynchronise a signal to a different clock domain.
|
||||
|
||||
Consists of a chain of flip-flops. Eliminates metastabilities at the output, but provides
|
||||
no other guarantee as to the safe domain-crossing of a signal.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
i : Signal(n), in
|
||||
Signal to be resynchronised.
|
||||
o : Signal(n), out
|
||||
Signal connected to synchroniser output.
|
||||
o_domain : str
|
||||
Name of output clock domain.
|
||||
reset : int
|
||||
Reset value of the flip-flops. On FPGAs, even if ``reset_less`` is True,
|
||||
the :class:`FFSynchronizer` is still set to this value during initialization.
|
||||
reset_less : bool
|
||||
If ``True`` (the default), this :class:`FFSynchronizer` is unaffected by ``o_domain``
|
||||
reset. See "Note on Reset" below.
|
||||
stages : int
|
||||
Number of synchronization stages between input and output. The lowest safe number is 2,
|
||||
with higher numbers reducing MTBF further, at the cost of increased latency.
|
||||
max_input_delay : None or float
|
||||
Maximum delay from the input signal's clock to the first synchronization stage, in seconds.
|
||||
If specified and the platform does not support it, elaboration will fail.
|
||||
|
||||
Platform override
|
||||
-----------------
|
||||
Define the ``get_ff_sync`` platform method to override the implementation of
|
||||
:class:`FFSynchronizer`, e.g. to instantiate library cells directly.
|
||||
|
||||
Note on Reset
|
||||
-------------
|
||||
:class:`FFSynchronizer` is non-resettable by default. Usually this is the safest option;
|
||||
on FPGAs the :class:`FFSynchronizer` will still be initialized to its ``reset`` value when
|
||||
the FPGA loads its configuration.
|
||||
|
||||
However, in designs where the value of the :class:`FFSynchronizer` must be valid immediately
|
||||
after reset, consider setting ``reset_less`` to False if any of the following is true:
|
||||
|
||||
- You are targeting an ASIC, or an FPGA that does not allow arbitrary initial flip-flop states;
|
||||
- Your design features warm (non-power-on) resets of ``o_domain``, so the one-time
|
||||
initialization at power on is insufficient;
|
||||
- Your design features a sequenced reset, and the :class:`FFSynchronizer` must maintain
|
||||
its reset value until ``o_domain`` reset specifically is deasserted.
|
||||
|
||||
:class:`FFSynchronizer` is reset by the ``o_domain`` reset only.
|
||||
"""
|
||||
def __init__(self, i, o, *, o_domain="sync", reset=0, reset_less=True, stages=2,
|
||||
max_input_delay=None):
|
||||
_check_stages(stages)
|
||||
|
||||
self.i = i
|
||||
self.o = o
|
||||
|
||||
self._reset = reset
|
||||
self._reset_less = reset_less
|
||||
self._o_domain = o_domain
|
||||
self._stages = stages
|
||||
|
||||
self._max_input_delay = max_input_delay
|
||||
|
||||
def elaborate(self, platform):
|
||||
if hasattr(platform, "get_ff_sync"):
|
||||
return platform.get_ff_sync(self)
|
||||
|
||||
if self._max_input_delay is not None:
|
||||
raise NotImplementedError("Platform '{}' does not support constraining input delay "
|
||||
"for FFSynchronizer"
|
||||
.format(type(platform).__name__))
|
||||
|
||||
m = Module()
|
||||
flops = [Signal(self.i.shape(), name="stage{}".format(index),
|
||||
reset=self._reset, reset_less=self._reset_less)
|
||||
for index in range(self._stages)]
|
||||
for i, o in zip((self.i, *flops), flops):
|
||||
m.d[self._o_domain] += o.eq(i)
|
||||
m.d.comb += self.o.eq(flops[-1])
|
||||
return m
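# --- Illustrative usage sketch (not part of nMigen): synchronizing an asynchronous
# --- input into the "sync" domain. Signal names are made up; shown as a standalone
# --- snippet.
from nmigen import Module, Signal
from nmigen.lib.cdc import FFSynchronizer

m = Module()
async_in = Signal()   # produced in another (or no) clock domain
sync_out = Signal()   # safe to consume in the "sync" domain
m.submodules.in_sync = FFSynchronizer(async_in, sync_out, o_domain="sync", stages=2)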
|
||||
|
||||
|
||||
class AsyncFFSynchronizer(Elaboratable):
|
||||
"""Synchronize deassertion of an asynchronous signal.
|
||||
|
||||
The signal driven by the :class:`AsyncFFSynchronizer` is asserted asynchronously and deasserted
|
||||
synchronously, eliminating metastability during deassertion.
|
||||
|
||||
This synchronizer is primarily useful for resets and reset-like signals.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
i : Signal(1), in
|
||||
Asynchronous input signal, to be synchronized.
|
||||
o : Signal(1), out
|
||||
Synchronously released output signal.
|
||||
domain : str
|
||||
Name of clock domain to reset.
|
||||
stages : int, >=2
|
||||
Number of synchronization stages between input and output. The lowest safe number is 2,
|
||||
with higher numbers reducing MTBF further, at the cost of increased deassertion latency.
|
||||
async_edge : str
|
||||
The edge of the input signal which causes the output to be set. Must be one of "pos" or "neg".
|
||||
"""
|
||||
def __init__(self, i, o, *, domain="sync", stages=2, async_edge="pos", max_input_delay=None):
|
||||
_check_stages(stages)
|
||||
|
||||
self.i = i
|
||||
self.o = o
|
||||
|
||||
self._domain = domain
|
||||
self._stages = stages
|
||||
|
||||
if async_edge not in ("pos", "neg"):
|
||||
raise ValueError("AsyncFFSynchronizer async edge must be one of 'pos' or 'neg', "
|
||||
"not {!r}"
|
||||
.format(async_edge))
|
||||
self._edge = async_edge
|
||||
|
||||
self._max_input_delay = max_input_delay
|
||||
|
||||
def elaborate(self, platform):
|
||||
if hasattr(platform, "get_async_ff_sync"):
|
||||
return platform.get_async_ff_sync(self)
|
||||
|
||||
if self._max_input_delay is not None:
|
||||
raise NotImplementedError("Platform '{}' does not support constraining input delay "
|
||||
"for AsyncFFSynchronizer"
|
||||
.format(type(platform).__name__))
|
||||
|
||||
m = Module()
|
||||
m.domains += ClockDomain("async_ff", async_reset=True, local=True)
|
||||
flops = [Signal(1, name="stage{}".format(index), reset=1)
|
||||
for index in range(self._stages)]
|
||||
for i, o in zip((0, *flops), flops):
|
||||
m.d.async_ff += o.eq(i)
|
||||
|
||||
if self._edge == "pos":
|
||||
m.d.comb += ResetSignal("async_ff").eq(self.i)
|
||||
else:
|
||||
m.d.comb += ResetSignal("async_ff").eq(~self.i)
|
||||
|
||||
m.d.comb += [
|
||||
ClockSignal("async_ff").eq(ClockSignal(self._domain)),
|
||||
self.o.eq(flops[-1])
|
||||
]
|
||||
|
||||
return m
|
||||
|
||||
|
||||
class ResetSynchronizer(Elaboratable):
|
||||
"""Synchronize deassertion of a clock domain reset.
|
||||
|
||||
The reset of the clock domain driven by the :class:`ResetSynchronizer` is asserted
|
||||
asynchronously and deasserted synchronously, eliminating metastability during deassertion.
|
||||
|
||||
The driven clock domain could use a reset that is asserted either synchronously or
|
||||
asynchronously; a reset is always deasserted synchronously. A domain with an asynchronously
|
||||
asserted reset is useful if the clock of the domain may be gated, yet the domain still
|
||||
needs to be reset promptly; otherwise, synchronously asserted reset (the default) should
|
||||
be used.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
arst : Signal(1), in
|
||||
Asynchronous reset signal, to be synchronized.
|
||||
domain : str
|
||||
Name of clock domain to reset.
|
||||
stages : int, >=2
|
||||
Number of synchronization stages between input and output. The lowest safe number is 2,
|
||||
with higher numbers reducing MTBF further, at the cost of increased deassertion latency.
|
||||
max_input_delay : None or float
|
||||
Maximum delay from the input signal's clock to the first synchronization stage, in seconds.
|
||||
If specified and the platform does not support it, elaboration will fail.
|
||||
|
||||
Platform override
|
||||
-----------------
|
||||
Define the ``get_reset_sync`` platform method to override the implementation of
|
||||
:class:`ResetSynchronizer`, e.g. to instantiate library cells directly.
|
||||
"""
|
||||
def __init__(self, arst, *, domain="sync", stages=2, max_input_delay=None):
|
||||
_check_stages(stages)
|
||||
|
||||
self.arst = arst
|
||||
|
||||
self._domain = domain
|
||||
self._stages = stages
|
||||
|
||||
self._max_input_delay = max_input_delay
|
||||
|
||||
def elaborate(self, platform):
|
||||
return AsyncFFSynchronizer(self.arst, ResetSignal(self._domain), domain=self._domain,
|
||||
stages=self._stages, max_input_delay=self._max_input_delay)
|
||||
|
||||
|
||||
class PulseSynchronizer(Elaboratable):
|
||||
"""A one-clock pulse on the input produces a one-clock pulse on the output.
|
||||
|
||||
If the output clock is faster than the input clock, then the input may be safely asserted at
|
||||
100% duty cycle. Otherwise, if the clock ratio is n : 1, the input may be asserted at most once
|
||||
in every n input clocks, else pulses may be dropped.
|
||||
Other than this, there is no constraint on the ratio of input and output clock frequencies.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
i_domain : str
|
||||
Name of input clock domain.
|
||||
o_domain : str
|
||||
Name of output clock domain.
|
||||
sync_stages : int
|
||||
Number of synchronisation flops between the two clock domains. 2 is the default, and
|
||||
minimum safe value. High-frequency designs may choose to increase this.
|
||||
"""
|
||||
def __init__(self, i_domain, o_domain, sync_stages=2):
|
||||
if not isinstance(sync_stages, int) or sync_stages < 1:
|
||||
raise TypeError("sync_stages must be a positive integer, not '{!r}'".format(sync_stages))
|
||||
|
||||
self.i = Signal()
|
||||
self.o = Signal()
|
||||
self.i_domain = i_domain
|
||||
self.o_domain = o_domain
|
||||
self.sync_stages = sync_stages
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
|
||||
itoggle = Signal()
|
||||
otoggle = Signal()
|
||||
ff_sync = m.submodules.ff_sync = \
|
||||
FFSynchronizer(itoggle, otoggle, o_domain=self.o_domain, stages=self.sync_stages)
|
||||
otoggle_prev = Signal()
|
||||
|
||||
m.d[self.i_domain] += itoggle.eq(itoggle ^ self.i)
|
||||
m.d[self.o_domain] += otoggle_prev.eq(otoggle)
|
||||
m.d.comb += self.o.eq(otoggle ^ otoggle_prev)
|
||||
|
||||
return m
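# --- Illustrative usage sketch (not part of nMigen): carrying a one-cycle strobe
# --- from a "sys" domain into a "fast" domain. Domain and signal names are made up.
from nmigen import Module, Signal
from nmigen.lib.cdc import PulseSynchronizer

m = Module()
trigger_sys  = Signal()   # one-cycle pulse generated in the "sys" domain
trigger_fast = Signal()   # corresponding one-cycle pulse in the "fast" domain

ps = m.submodules.ps = PulseSynchronizer("sys", "fast")
m.d.comb += [
    ps.i.eq(trigger_sys),
    trigger_fast.eq(ps.o),
]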
|
|
@ -0,0 +1,186 @@
|
|||
"""Encoders and decoders between binary and one-hot representation."""
|
||||
|
||||
from .. import *
|
||||
|
||||
|
||||
__all__ = [
|
||||
"Encoder", "Decoder",
|
||||
"PriorityEncoder", "PriorityDecoder",
|
||||
"GrayEncoder", "GrayDecoder",
|
||||
]
|
||||
|
||||
|
||||
class Encoder(Elaboratable):
|
||||
"""Encode one-hot to binary.
|
||||
|
||||
If one bit in ``i`` is asserted, ``n`` is low and ``o`` indicates the asserted bit.
|
||||
Otherwise, ``n`` is high and ``o`` is ``0``.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width of the input
|
||||
|
||||
Attributes
|
||||
----------
|
||||
i : Signal(width), in
|
||||
One-hot input.
|
||||
o : Signal(range(width)), out
|
||||
Encoded binary.
|
||||
n : Signal, out
|
||||
Invalid: either none or multiple input bits are asserted.
|
||||
"""
|
||||
def __init__(self, width):
|
||||
self.width = width
|
||||
|
||||
self.i = Signal(width)
|
||||
self.o = Signal(range(width))
|
||||
self.n = Signal()
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
with m.Switch(self.i):
|
||||
for j in range(self.width):
|
||||
with m.Case(1 << j):
|
||||
m.d.comb += self.o.eq(j)
|
||||
with m.Case():
|
||||
m.d.comb += self.n.eq(1)
|
||||
return m
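# --- Illustrative behaviour sketch (not part of nMigen): with exactly one input bit
# --- set, Encoder reports its index; otherwise it flags the input as invalid.
from nmigen import Module
from nmigen.lib.coding import Encoder

m = Module()
enc = m.submodules.enc = Encoder(8)
m.d.comb += enc.i.eq(0b00010000)   # one-hot input: enc.o == 4 and enc.n == 0
# With zero or several bits set, enc.n would be 1 and enc.o would read as 0.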
|
||||
|
||||
|
||||
class PriorityEncoder(Elaboratable):
|
||||
"""Priority encode requests to binary.
|
||||
|
||||
If any bit in ``i`` is asserted, ``n`` is low and ``o`` indicates the least significant
|
||||
asserted bit.
|
||||
Otherwise, ``n`` is high and ``o`` is ``0``.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width of the input.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
i : Signal(width), in
|
||||
Input requests.
|
||||
o : Signal(range(width)), out
|
||||
Encoded binary.
|
||||
n : Signal, out
|
||||
Invalid: no input bits are asserted.
|
||||
"""
|
||||
def __init__(self, width):
|
||||
self.width = width
|
||||
|
||||
self.i = Signal(width)
|
||||
self.o = Signal(range(width))
|
||||
self.n = Signal()
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
for j in reversed(range(self.width)):
|
||||
with m.If(self.i[j]):
|
||||
m.d.comb += self.o.eq(j)
|
||||
m.d.comb += self.n.eq(self.i == 0)
|
||||
return m
|
||||
|
||||
|
||||
class Decoder(Elaboratable):
|
||||
"""Decode binary to one-hot.
|
||||
|
||||
If ``n`` is low, only the ``i``th bit in ``o`` is asserted.
|
||||
If ``n`` is high, ``o`` is ``0``.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width of the output.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
i : Signal(range(width)), in
|
||||
Input binary.
|
||||
o : Signal(width), out
|
||||
Decoded one-hot.
|
||||
n : Signal, in
|
||||
Invalid: no output bits are to be asserted.
|
||||
"""
|
||||
def __init__(self, width):
|
||||
self.width = width
|
||||
|
||||
self.i = Signal(range(width))
|
||||
self.n = Signal()
|
||||
self.o = Signal(width)
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
with m.Switch(self.i):
|
||||
for j in range(len(self.o)):
|
||||
with m.Case(j):
|
||||
m.d.comb += self.o.eq(1 << j)
|
||||
with m.If(self.n):
|
||||
m.d.comb += self.o.eq(0)
|
||||
return m
|
||||
|
||||
|
||||
class PriorityDecoder(Decoder):
|
||||
"""Decode binary to priority request.
|
||||
|
||||
Identical to :class:`Decoder`.
|
||||
"""
|
||||
|
||||
|
||||
class GrayEncoder(Elaboratable):
|
||||
"""Encode binary to Gray code.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
i : Signal(width), in
|
||||
Input natural binary.
|
||||
o : Signal(width), out
|
||||
Encoded Gray code.
|
||||
"""
|
||||
def __init__(self, width):
|
||||
self.width = width
|
||||
|
||||
self.i = Signal(width)
|
||||
self.o = Signal(width)
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
m.d.comb += self.o.eq(self.i ^ self.i[1:])
|
||||
return m
|
||||
|
||||
|
||||
class GrayDecoder(Elaboratable):
|
||||
"""Decode Gray code to binary.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width.
|
||||
|
||||
Attributes
|
||||
----------
|
||||
i : Signal(width), in
|
||||
Input Gray code.
|
||||
o : Signal(width), out
|
||||
Decoded natural binary.
|
||||
"""
|
||||
def __init__(self, width):
|
||||
self.width = width
|
||||
|
||||
self.i = Signal(width)
|
||||
self.o = Signal(width)
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
m.d.comb += self.o[-1].eq(self.i[-1])
|
||||
for i in reversed(range(self.width - 1)):
|
||||
m.d.comb += self.o[i].eq(self.o[i + 1] ^ self.i[i])
|
||||
return m
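# --- Illustrative usage sketch (not part of nMigen): a Gray-code round trip, the same
# --- pattern nmigen.lib.fifo uses for its asynchronous FIFO pointers. Width is made up.
from nmigen import Module, Signal
from nmigen.lib.coding import GrayEncoder, GrayDecoder

m = Module()
binary = Signal(4)

enc = m.submodules.enc = GrayEncoder(4)
dec = m.submodules.dec = GrayDecoder(4)
m.d.comb += [
    enc.i.eq(binary),   # natural binary in; enc.o == binary ^ (binary >> 1)
    dec.i.eq(enc.o),    # decoding recovers the original value on dec.o
]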
|
|
@ -0,0 +1,450 @@
|
|||
"""First-in first-out queues."""
|
||||
|
||||
from .. import *
|
||||
from ..asserts import *
|
||||
from .._utils import log2_int, deprecated
|
||||
from .coding import GrayEncoder
|
||||
from .cdc import FFSynchronizer
|
||||
|
||||
|
||||
__all__ = ["FIFOInterface", "SyncFIFO", "SyncFIFOBuffered", "AsyncFIFO", "AsyncFIFOBuffered"]
|
||||
|
||||
|
||||
class FIFOInterface:
|
||||
_doc_template = """
|
||||
{description}
|
||||
|
||||
Parameters
|
||||
----------
|
||||
width : int
|
||||
Bit width of data entries.
|
||||
depth : int
|
||||
Depth of the queue. If zero, the FIFO cannot be read from or written to.
|
||||
{parameters}
|
||||
|
||||
Attributes
|
||||
----------
|
||||
{attributes}
|
||||
w_data : in, width
|
||||
Input data.
|
||||
w_rdy : out
|
||||
Asserted if there is space in the queue, i.e. ``w_en`` can be asserted to write
|
||||
a new entry.
|
||||
w_en : in
|
||||
Write strobe. Latches ``w_data`` into the queue. Does nothing if ``w_rdy`` is not asserted.
|
||||
{w_attributes}
|
||||
r_data : out, width
|
||||
Output data. {r_data_valid}
|
||||
r_rdy : out
|
||||
Asserted if there is an entry in the queue, i.e. ``r_en`` can be asserted to read
|
||||
an existing entry.
|
||||
r_en : in
|
||||
Read strobe. Makes the next entry (if any) available on ``r_data`` at the next cycle.
|
||||
Does nothing if ``r_rdy`` is not asserted.
|
||||
{r_attributes}
|
||||
"""
|
||||
|
||||
__doc__ = _doc_template.format(description="""
|
||||
Data written to the input interface (``w_data``, ``w_rdy``, ``w_en``) is buffered and can be
|
||||
read at the output interface (``r_data``, ``r_rdy``, ``r_en``). The data entry written first
|
||||
to the input also appears first on the output.
|
||||
""",
|
||||
parameters="",
|
||||
r_data_valid="The conditions in which ``r_data`` is valid depends on the type of the queue.",
|
||||
attributes="""
|
||||
fwft : bool
|
||||
First-word fallthrough. If set, when ``r_rdy`` rises, the first entry is already
|
||||
available, i.e. ``r_data`` is valid. Otherwise, after ``r_rdy`` rises, it is necessary
|
||||
to strobe ``r_en`` for ``r_data`` to become valid.
|
||||
""".strip(),
|
||||
w_attributes="",
|
||||
r_attributes="")
|
||||
|
||||
def __init__(self, *, width, depth, fwft):
|
||||
if not isinstance(width, int) or width < 0:
|
||||
raise TypeError("FIFO width must be a non-negative integer, not {!r}"
|
||||
.format(width))
|
||||
if not isinstance(depth, int) or depth < 0:
|
||||
raise TypeError("FIFO depth must be a non-negative integer, not {!r}"
|
||||
.format(depth))
|
||||
self.width = width
|
||||
self.depth = depth
|
||||
self.fwft = fwft
|
||||
|
||||
self.w_data = Signal(width, reset_less=True)
|
||||
self.w_rdy = Signal() # writable; not full
|
||||
self.w_en = Signal()
|
||||
|
||||
self.r_data = Signal(width, reset_less=True)
|
||||
self.r_rdy = Signal() # readable; not empty
|
||||
self.r_en = Signal()
|
||||
|
||||
|
||||
def _incr(signal, modulo):
|
||||
if modulo == 2 ** len(signal):
|
||||
return signal + 1
|
||||
else:
|
||||
return Mux(signal == modulo - 1, 0, signal + 1)
|
||||
|
||||
|
||||
class SyncFIFO(Elaboratable, FIFOInterface):
|
||||
__doc__ = FIFOInterface._doc_template.format(
|
||||
description="""
|
||||
Synchronous first in, first out queue.
|
||||
|
||||
Read and write interfaces are accessed from the same clock domain. If different clock domains
|
||||
are needed, use :class:`AsyncFIFO`.
|
||||
""".strip(),
|
||||
parameters="""
|
||||
fwft : bool
|
||||
First-word fallthrough. If set, when the queue is empty and an entry is written into it,
|
||||
that entry becomes available on the output on the same clock cycle. Otherwise, it is
|
||||
necessary to assert ``r_en`` for ``r_data`` to become valid.
|
||||
""".strip(),
|
||||
r_data_valid="""
|
||||
For FWFT queues, valid if ``r_rdy`` is asserted. For non-FWFT queues, valid on the next
|
||||
cycle after ``r_rdy`` and ``r_en`` have been asserted.
|
||||
""".strip(),
|
||||
attributes="",
|
||||
r_attributes="""
|
||||
level : out
|
||||
Number of unread entries.
|
||||
""".strip(),
|
||||
w_attributes="")
|
||||
|
||||
def __init__(self, *, width, depth, fwft=True):
|
||||
super().__init__(width=width, depth=depth, fwft=fwft)
|
||||
|
||||
self.level = Signal(range(depth + 1))
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
if self.depth == 0:
|
||||
m.d.comb += [
|
||||
self.w_rdy.eq(0),
|
||||
self.r_rdy.eq(0),
|
||||
]
|
||||
return m
|
||||
|
||||
m.d.comb += [
|
||||
self.w_rdy.eq(self.level != self.depth),
|
||||
self.r_rdy.eq(self.level != 0)
|
||||
]
|
||||
|
||||
do_read = self.r_rdy & self.r_en
|
||||
do_write = self.w_rdy & self.w_en
|
||||
|
||||
storage = Memory(width=self.width, depth=self.depth)
|
||||
w_port = m.submodules.w_port = storage.write_port()
|
||||
r_port = m.submodules.r_port = storage.read_port(
|
||||
domain="comb" if self.fwft else "sync", transparent=self.fwft)
|
||||
produce = Signal(range(self.depth))
|
||||
consume = Signal(range(self.depth))
|
||||
|
||||
m.d.comb += [
|
||||
w_port.addr.eq(produce),
|
||||
w_port.data.eq(self.w_data),
|
||||
w_port.en.eq(self.w_en & self.w_rdy)
|
||||
]
|
||||
with m.If(do_write):
|
||||
m.d.sync += produce.eq(_incr(produce, self.depth))
|
||||
|
||||
m.d.comb += [
|
||||
r_port.addr.eq(consume),
|
||||
self.r_data.eq(r_port.data),
|
||||
]
|
||||
if not self.fwft:
|
||||
m.d.comb += r_port.en.eq(self.r_en)
|
||||
with m.If(do_read):
|
||||
m.d.sync += consume.eq(_incr(consume, self.depth))
|
||||
|
||||
with m.If(do_write & ~do_read):
|
||||
m.d.sync += self.level.eq(self.level + 1)
|
||||
with m.If(do_read & ~do_write):
|
||||
m.d.sync += self.level.eq(self.level - 1)
|
||||
|
||||
if platform == "formal":
|
||||
# TODO: move this logic to SymbiYosys
|
||||
with m.If(Initial()):
|
||||
m.d.comb += [
|
||||
Assume(produce < self.depth),
|
||||
Assume(consume < self.depth),
|
||||
]
|
||||
with m.If(produce == consume):
|
||||
m.d.comb += Assume((self.level == 0) | (self.level == self.depth))
|
||||
with m.If(produce > consume):
|
||||
m.d.comb += Assume(self.level == (produce - consume))
|
||||
with m.If(produce < consume):
|
||||
m.d.comb += Assume(self.level == (self.depth + produce - consume))
|
||||
with m.Else():
|
||||
m.d.comb += [
|
||||
Assert(produce < self.depth),
|
||||
Assert(consume < self.depth),
|
||||
]
|
||||
with m.If(produce == consume):
|
||||
m.d.comb += Assert((self.level == 0) | (self.level == self.depth))
|
||||
with m.If(produce > consume):
|
||||
m.d.comb += Assert(self.level == (produce - consume))
|
||||
with m.If(produce < consume):
|
||||
m.d.comb += Assert(self.level == (self.depth + produce - consume))
|
||||
|
||||
return m
|
||||
|
||||
|
||||
class SyncFIFOBuffered(Elaboratable, FIFOInterface):
|
||||
__doc__ = FIFOInterface._doc_template.format(
|
||||
description="""
|
||||
Buffered synchronous first in, first out queue.
|
||||
|
||||
This queue's interface is identical to :class:`SyncFIFO` configured as ``fwft=True``, but it
|
||||
does not use asynchronous memory reads, which are incompatible with FPGA block RAMs.
|
||||
|
||||
In exchange, the latency between an entry being written to an empty queue and that entry
|
||||
becoming available on the output is increased by one cycle compared to :class:`SyncFIFO`.
|
||||
""".strip(),
|
||||
parameters="""
|
||||
fwft : bool
|
||||
Always set.
|
||||
""".strip(),
|
||||
attributes="",
|
||||
r_data_valid="Valid if ``r_rdy`` is asserted.",
|
||||
r_attributes="""
|
||||
level : out
|
||||
Number of unread entries.
|
||||
""".strip(),
|
||||
w_attributes="")
|
||||
|
||||
def __init__(self, *, width, depth):
|
||||
super().__init__(width=width, depth=depth, fwft=True)
|
||||
|
||||
self.level = Signal(range(depth + 1))
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
if self.depth == 0:
|
||||
m.d.comb += [
|
||||
self.w_rdy.eq(0),
|
||||
self.r_rdy.eq(0),
|
||||
]
|
||||
return m
|
||||
|
||||
# Effectively, this queue treats the output register of the non-FWFT inner queue as
|
||||
# an additional storage element.
|
||||
m.submodules.unbuffered = fifo = SyncFIFO(width=self.width, depth=self.depth - 1,
|
||||
fwft=False)
|
||||
|
||||
m.d.comb += [
|
||||
fifo.w_data.eq(self.w_data),
|
||||
fifo.w_en.eq(self.w_en),
|
||||
self.w_rdy.eq(fifo.w_rdy),
|
||||
]
|
||||
|
||||
m.d.comb += [
|
||||
self.r_data.eq(fifo.r_data),
|
||||
fifo.r_en.eq(fifo.r_rdy & (~self.r_rdy | self.r_en)),
|
||||
]
|
||||
with m.If(fifo.r_en):
|
||||
m.d.sync += self.r_rdy.eq(1)
|
||||
with m.Elif(self.r_en):
|
||||
m.d.sync += self.r_rdy.eq(0)
|
||||
|
||||
m.d.comb += self.level.eq(fifo.level + self.r_rdy)
|
||||
|
||||
return m
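# --- Illustrative usage sketch (not part of nMigen): a buffered synchronous queue
# --- between a producer and a consumer in the same clock domain. Names are made up.
from nmigen import Module, Signal
from nmigen.lib.fifo import SyncFIFOBuffered

m = Module()
fifo = m.submodules.fifo = SyncFIFOBuffered(width=8, depth=16)

in_data  = Signal(8)
out_data = Signal(8)
m.d.comb += [
    fifo.w_data.eq(in_data),
    fifo.w_en.eq(fifo.w_rdy),    # write whenever there is space
    out_data.eq(fifo.r_data),    # valid whenever r_rdy is asserted (FWFT)
    fifo.r_en.eq(fifo.r_rdy),    # pop an entry whenever one is available
]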
|
||||
|
||||
|
||||
class AsyncFIFO(Elaboratable, FIFOInterface):
|
||||
__doc__ = FIFOInterface._doc_template.format(
|
||||
description="""
|
||||
Asynchronous first in, first out queue.
|
||||
|
||||
Read and write interfaces are accessed from different clock domains, which can be set when
|
||||
constructing the FIFO.
|
||||
|
||||
:class:`AsyncFIFO` only supports power of 2 depths. Unless ``exact_depth`` is specified,
|
||||
the ``depth`` parameter is rounded up to the next power of 2.
|
||||
""".strip(),
|
||||
parameters="""
|
||||
r_domain : str
|
||||
Read clock domain.
|
||||
w_domain : str
|
||||
Write clock domain.
|
||||
""".strip(),
|
||||
attributes="""
|
||||
fwft : bool
|
||||
Always set.
|
||||
""".strip(),
|
||||
r_data_valid="Valid if ``r_rdy`` is asserted.",
|
||||
r_attributes="",
|
||||
w_attributes="")
|
||||
|
||||
def __init__(self, *, width, depth, r_domain="read", w_domain="write", exact_depth=False):
|
||||
if depth != 0:
|
||||
try:
|
||||
depth_bits = log2_int(depth, need_pow2=exact_depth)
|
||||
depth = 1 << depth_bits
|
||||
except ValueError as e:
|
||||
raise ValueError("AsyncFIFO only supports depths that are powers of 2; requested "
|
||||
"exact depth {} is not"
|
||||
.format(depth)) from None
|
||||
else:
|
||||
depth_bits = 0
|
||||
super().__init__(width=width, depth=depth, fwft=True)
|
||||
|
||||
self._r_domain = r_domain
|
||||
self._w_domain = w_domain
|
||||
self._ctr_bits = depth_bits + 1
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
if self.depth == 0:
|
||||
m.d.comb += [
|
||||
self.w_rdy.eq(0),
|
||||
self.r_rdy.eq(0),
|
||||
]
|
||||
return m
|
||||
|
||||
# The design of this queue is the "style #2" from Clifford E. Cummings' paper "Simulation
|
||||
# and Synthesis Techniques for Asynchronous FIFO Design":
|
||||
# http://www.sunburst-design.com/papers/CummingsSNUG2002SJ_FIFO1.pdf
|
||||
|
||||
do_write = self.w_rdy & self.w_en
|
||||
do_read = self.r_rdy & self.r_en
|
||||
|
||||
# TODO: extract this pattern into lib.cdc.GrayCounter
|
||||
produce_w_bin = Signal(self._ctr_bits)
|
||||
produce_w_nxt = Signal(self._ctr_bits)
|
||||
m.d.comb += produce_w_nxt.eq(produce_w_bin + do_write)
|
||||
m.d[self._w_domain] += produce_w_bin.eq(produce_w_nxt)
|
||||
|
||||
consume_r_bin = Signal(self._ctr_bits)
|
||||
consume_r_nxt = Signal(self._ctr_bits)
|
||||
m.d.comb += consume_r_nxt.eq(consume_r_bin + do_read)
|
||||
m.d[self._r_domain] += consume_r_bin.eq(consume_r_nxt)
|
||||
|
||||
produce_w_gry = Signal(self._ctr_bits)
|
||||
produce_r_gry = Signal(self._ctr_bits)
|
||||
produce_enc = m.submodules.produce_enc = \
|
||||
GrayEncoder(self._ctr_bits)
|
||||
produce_cdc = m.submodules.produce_cdc = \
|
||||
FFSynchronizer(produce_w_gry, produce_r_gry, o_domain=self._r_domain)
|
||||
m.d.comb += produce_enc.i.eq(produce_w_nxt)
|
||||
m.d[self._w_domain] += produce_w_gry.eq(produce_enc.o)
|
||||
|
||||
consume_r_gry = Signal(self._ctr_bits)
|
||||
consume_w_gry = Signal(self._ctr_bits)
|
||||
consume_enc = m.submodules.consume_enc = \
|
||||
GrayEncoder(self._ctr_bits)
|
||||
consume_cdc = m.submodules.consume_cdc = \
|
||||
FFSynchronizer(consume_r_gry, consume_w_gry, o_domain=self._w_domain)
|
||||
m.d.comb += consume_enc.i.eq(consume_r_nxt)
|
||||
m.d[self._r_domain] += consume_r_gry.eq(consume_enc.o)
|
||||
|
||||
w_full = Signal()
|
||||
r_empty = Signal()
|
||||
m.d.comb += [
|
||||
w_full.eq((produce_w_gry[-1] != consume_w_gry[-1]) &
|
||||
(produce_w_gry[-2] != consume_w_gry[-2]) &
|
||||
(produce_w_gry[:-2] == consume_w_gry[:-2])),
|
||||
r_empty.eq(consume_r_gry == produce_r_gry),
|
||||
]
|
||||
|
||||
storage = Memory(width=self.width, depth=self.depth)
|
||||
w_port = m.submodules.w_port = storage.write_port(domain=self._w_domain)
|
||||
r_port = m.submodules.r_port = storage.read_port (domain=self._r_domain,
|
||||
transparent=False)
|
||||
m.d.comb += [
|
||||
w_port.addr.eq(produce_w_bin[:-1]),
|
||||
w_port.data.eq(self.w_data),
|
||||
w_port.en.eq(do_write),
|
||||
self.w_rdy.eq(~w_full),
|
||||
]
|
||||
m.d.comb += [
|
||||
r_port.addr.eq(consume_r_nxt[:-1]),
|
||||
self.r_data.eq(r_port.data),
|
||||
r_port.en.eq(1),
|
||||
self.r_rdy.eq(~r_empty),
|
||||
]
|
||||
|
||||
if platform == "formal":
|
||||
with m.If(Initial()):
|
||||
m.d.comb += Assume(produce_w_gry == (produce_w_bin ^ produce_w_bin[1:]))
|
||||
m.d.comb += Assume(consume_r_gry == (consume_r_bin ^ consume_r_bin[1:]))
|
||||
|
||||
return m
|
||||
|
||||
|
||||
class AsyncFIFOBuffered(Elaboratable, FIFOInterface):
|
||||
__doc__ = FIFOInterface._doc_template.format(
|
||||
description="""
|
||||
Buffered asynchronous first in, first out queue.
|
||||
|
||||
Read and write interfaces are accessed from different clock domains, which can be set when
|
||||
constructing the FIFO.
|
||||
|
||||
:class:`AsyncFIFOBuffered` only supports power of 2 plus one depths. Unless ``exact_depth``
|
||||
is specified, the ``depth`` parameter is rounded up to the next power of 2 plus one.
|
||||
(The output buffer acts as an additional queue element.)
|
||||
|
||||
This queue's interface is identical to :class:`AsyncFIFO`, but it has an additional register
|
||||
on the output, improving timing in case of block RAM that has large clock-to-output delay.
|
||||
|
||||
In exchange, the latency between an entry being written to an empty queue and that entry
|
||||
becoming available on the output is increased by one cycle compared to :class:`AsyncFIFO`.
|
||||
""".strip(),
|
||||
parameters="""
|
||||
r_domain : str
|
||||
Read clock domain.
|
||||
w_domain : str
|
||||
Write clock domain.
|
||||
""".strip(),
|
||||
attributes="""
|
||||
fwft : bool
|
||||
Always set.
|
||||
""".strip(),
|
||||
r_data_valid="Valid if ``r_rdy`` is asserted.",
|
||||
r_attributes="",
|
||||
w_attributes="")
|
||||
|
||||
def __init__(self, *, width, depth, r_domain="read", w_domain="write", exact_depth=False):
|
||||
if depth != 0:
|
||||
try:
|
||||
depth_bits = log2_int(max(0, depth - 1), need_pow2=exact_depth)
|
||||
depth = (1 << depth_bits) + 1
|
||||
except ValueError as e:
|
||||
raise ValueError("AsyncFIFOBuffered only supports depths that are one higher "
|
||||
"than powers of 2; requested exact depth {} is not"
|
||||
.format(depth)) from None
|
||||
super().__init__(width=width, depth=depth, fwft=True)
|
||||
|
||||
self._r_domain = r_domain
|
||||
self._w_domain = w_domain
|
||||
|
||||
def elaborate(self, platform):
|
||||
m = Module()
|
||||
if self.depth == 0:
|
||||
m.d.comb += [
|
||||
self.w_rdy.eq(0),
|
||||
self.r_rdy.eq(0),
|
||||
]
|
||||
return m
|
||||
|
||||
m.submodules.unbuffered = fifo = AsyncFIFO(width=self.width, depth=self.depth - 1,
|
||||
r_domain=self._r_domain, w_domain=self._w_domain)
|
||||
|
||||
m.d.comb += [
|
||||
fifo.w_data.eq(self.w_data),
|
||||
self.w_rdy.eq(fifo.w_rdy),
|
||||
fifo.w_en.eq(self.w_en),
|
||||
]
|
||||
|
||||
with m.If(self.r_en | ~self.r_rdy):
|
||||
m.d[self._r_domain] += [
|
||||
self.r_data.eq(fifo.r_data),
|
||||
self.r_rdy.eq(fifo.r_rdy),
|
||||
]
|
||||
m.d.comb += [
|
||||
fifo.r_en.eq(1)
|
||||
]
|
||||
|
||||
return m
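# --- Illustrative usage sketch (not part of nMigen): moving bytes from a "usb" write
# --- domain into the "sync" read domain. Domain names are assumptions; the requested
# --- depth is rounded up to a power of 2 (here 16 stays 16).
from nmigen import Module
from nmigen.lib.fifo import AsyncFIFO

m = Module()
fifo = m.submodules.fifo = AsyncFIFO(width=8, depth=16, w_domain="usb", r_domain="sync")
# Producer (in "usb"): drive fifo.w_data and fifo.w_en while fifo.w_rdy is high.
# Consumer (in "sync"): assert fifo.r_en while fifo.r_rdy is high and read fifo.r_data.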
|
|
@ -0,0 +1,106 @@
|
|||
from .. import *
from ..hdl.rec import *


__all__ = ["pin_layout", "Pin"]


def pin_layout(width, dir, xdr=0):
    """
    Layout of the platform interface of a pin or several pins, which may be used inside
    user-defined records.

    See :class:`Pin` for details.
    """
    if not isinstance(width, int) or width < 1:
        raise TypeError("Width must be a positive integer, not {!r}"
                        .format(width))
    if dir not in ("i", "o", "oe", "io"):
        raise TypeError("Direction must be one of \"i\", \"o\", \"io\", or \"oe\", not {!r}"
                        .format(dir))
    if not isinstance(xdr, int) or xdr < 0:
        raise TypeError("Gearing ratio must be a non-negative integer, not {!r}"
                        .format(xdr))

    fields = []
    if dir in ("i", "io"):
        if xdr > 0:
            fields.append(("i_clk", 1))
        if xdr in (0, 1):
            fields.append(("i", width))
        else:
            for n in range(xdr):
                fields.append(("i{}".format(n), width))
    if dir in ("o", "oe", "io"):
        if xdr > 0:
            fields.append(("o_clk", 1))
        if xdr in (0, 1):
            fields.append(("o", width))
        else:
            for n in range(xdr):
                fields.append(("o{}".format(n), width))
    if dir in ("oe", "io"):
        fields.append(("oe", 1))
    return Layout(fields)


class Pin(Record):
    """
    An interface to an I/O buffer or a group of them that provides uniform access to input, output,
    or tristate buffers that may include a 1:n gearbox. (A 1:2 gearbox is typically called "DDR".)

    A :class:`Pin` is identical to a :class:`Record` that uses the corresponding :meth:`pin_layout`
    except that it allows accessing the parameters like ``width`` as attributes. It is legal to use
    a plain :class:`Record` anywhere a :class:`Pin` is used, provided that these attributes are
    not necessary.

    Parameters
    ----------
    width : int
        Width of the ``i``/``iN`` and ``o``/``oN`` signals.
    dir : ``"i"``, ``"o"``, ``"io"``, ``"oe"``
        Direction of the buffers. If ``"i"`` is specified, only the ``i``/``iN`` signals are
        present. If ``"o"`` is specified, only the ``o``/``oN`` signals are present. If ``"oe"`` is
        specified, the ``o``/``oN`` signals are present, and an ``oe`` signal is present.
        If ``"io"`` is specified, both the ``i``/``iN`` and ``o``/``oN`` signals are present, and
        an ``oe`` signal is present.
    xdr : int
        Gearbox ratio. If equal to 0, the I/O buffer is combinatorial, and only ``i``/``o``
        signals are present. If equal to 1, the I/O buffer is SDR, and only ``i``/``o`` signals are
        present. If greater than 1, the I/O buffer includes a gearbox, and ``iN``/``oN`` signals
        are present instead, where ``N in range(0, xdr)``. For example, if ``xdr=2``, the I/O buffer
        is DDR; the signal ``i0`` reflects the value at the rising edge, and the signal ``i1``
        reflects the value at the falling edge.
    name : str
        Name of the underlying record.

    Attributes
    ----------
    i_clk:
        I/O buffer input clock. Synchronizes `i*`. Present if ``xdr`` is nonzero.
    i : Signal, out
        I/O buffer input, without gearing. Present if ``dir="i"`` or ``dir="io"``, and ``xdr`` is
        equal to 0 or 1.
    i0, i1, ... : Signal, out
        I/O buffer inputs, with gearing. Present if ``dir="i"`` or ``dir="io"``, and ``xdr`` is
        greater than 1.
    o_clk:
        I/O buffer output clock. Synchronizes `o*`, including `oe`. Present if ``xdr`` is nonzero.
    o : Signal, in
        I/O buffer output, without gearing. Present if ``dir="o"`` or ``dir="io"``, and ``xdr`` is
        equal to 0 or 1.
    o0, o1, ... : Signal, in
        I/O buffer outputs, with gearing. Present if ``dir="o"`` or ``dir="io"``, and ``xdr`` is
        greater than 1.
    oe : Signal, in
        I/O buffer output enable. Present if ``dir="io"`` or ``dir="oe"``. Buffers generally
        cannot change direction more than once per cycle, so at most one output enable signal
        is present.
    """
    def __init__(self, width, dir, *, xdr=0, name=None, src_loc_at=0):
        self.width = width
        self.dir   = dir
        self.xdr   = xdr

        super().__init__(pin_layout(self.width, self.dir, self.xdr),
                         name=name, src_loc_at=src_loc_at + 1)
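Note (illustrative, not part of this diff): the field names produced by pin_layout for two common cases, assuming the upstream import path nmigen.lib.io and that Record exposes its fields mapping as .fields.

    from nmigen.lib.io import Pin

    ddr_in = Pin(4, "i", xdr=2)     # 1:2 gearbox ("DDR") input interface
    assert list(ddr_in.fields) == ["i_clk", "i0", "i1"]
    assert len(ddr_in.i0) == 4      # each geared input is 4 bits wide

    bidir = Pin(1, "io")            # combinatorial bidirectional interface
    assert list(bidir.fields) == ["i", "o", "oe"]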

@@ -0,0 +1,111 @@
import sys
import json
import argparse
import importlib

from .hdl import Signal, Record, Elaboratable
from .back import rtlil


__all__ = ["main"]


def _collect_modules(names):
    modules = {}
    for name in names:
        py_module_name, py_class_name = name.rsplit(".", 1)
        py_module = importlib.import_module(py_module_name)
        if py_class_name == "*":
            for py_class_name in py_module.__all__:
                py_class = py_module.__dict__[py_class_name]
                if not issubclass(py_class, Elaboratable):
                    continue
                modules["{}.{}".format(py_module_name, py_class_name)] = py_class
        else:
            py_class = py_module.__dict__[py_class_name]
            if not isinstance(py_class, type) or not issubclass(py_class, Elaboratable):
                raise TypeError("{}.{} is not a class inheriting from Elaboratable"
                                .format(py_module_name, py_class_name))
            modules[name] = py_class
    return modules


def _serve_yosys(modules):
    while True:
        request_json = sys.stdin.readline()
        if not request_json: break
        request = json.loads(request_json)

        if request["method"] == "modules":
            response = {"modules": list(modules.keys())}

        elif request["method"] == "derive":
            module_name = request["module"]

            args, kwargs = [], {}
            for parameter_name, parameter in request["parameters"].items():
                if parameter["type"] == "unsigned":
                    parameter_value = int(parameter["value"], 2)
                elif parameter["type"] == "signed":
                    width = len(parameter["value"])
                    parameter_value = int(parameter["value"], 2)
                    if parameter_value & (1 << (width - 1)):
                        parameter_value = -((1 << width) - parameter_value)
                elif parameter["type"] == "string":
                    parameter_value = parameter["value"]
                elif parameter["type"] == "real":
                    parameter_value = float(parameter["value"])
                else:
                    raise NotImplementedError("Unrecognized parameter type {}"
                                              .format(parameter_name))
                if parameter_name.startswith("$"):
                    index = int(parameter_name[1:])
                    while len(args) < index:
                        args.append(None)
                    args[index] = parameter_value
                if parameter_name.startswith("\\"):
                    kwargs[parameter_name[1:]] = parameter_value

            try:
                elaboratable = modules[module_name](*args, **kwargs)
                ports = []
                # By convention, any public attribute that is a Signal or a Record is
                # considered a port.
                for port_name, port in vars(elaboratable).items():
                    if not port_name.startswith("_") and isinstance(port, (Signal, Record)):
                        ports += port._lhs_signals()
                rtlil_text = rtlil.convert(elaboratable, name=module_name, ports=ports)
                response = {"frontend": "ilang", "source": rtlil_text}
            except Exception as error:
                response = {"error": "{}: {}".format(type(error).__name__, str(error))}

        else:
            return {"error": "Unrecognized method {!r}".format(request["method"])}

        sys.stdout.write(json.dumps(response))
        sys.stdout.write("\n")
        sys.stdout.flush()


def main():
    parser = argparse.ArgumentParser(description=r"""
    The nMigen RPC server allows a HDL synthesis program to request an nMigen module to
    be elaborated on demand using the parameters it provides. For example, using Yosys together
    with the nMigen RPC server allows instantiating parametric nMigen modules directly
    from Verilog.
    """)
    def add_modules_arg(parser):
        parser.add_argument("modules", metavar="MODULE", type=str, nargs="+",
                            help="import and provide MODULES")
    protocols = parser.add_subparsers(metavar="PROTOCOL", dest="protocol", required=True)
    protocol_yosys = protocols.add_parser("yosys", help="use Yosys JSON-based RPC protocol")
    add_modules_arg(protocol_yosys)

    args = parser.parse_args()
    modules = _collect_modules(args.modules)
    if args.protocol == "yosys":
        _serve_yosys(modules)


if __name__ == "__main__":
    main()
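Note (illustrative, not part of this diff): a sketch of driving the JSON line protocol implemented by _serve_yosys() over a pipe. The module path nmigen.rpc and the command-line form are assumptions based on main() and the argparse setup above.

    import json, subprocess, sys

    # Serve a single elaboratable over the Yosys JSON RPC protocol.
    proc = subprocess.Popen(
        [sys.executable, "-m", "nmigen.rpc", "yosys", "nmigen.lib.fifo.SyncFIFO"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    # The "modules" method lists everything _collect_modules() gathered.
    proc.stdin.write(json.dumps({"method": "modules"}) + "\n")
    proc.stdin.flush()
    print(json.loads(proc.stdout.readline()))
    # -> {'modules': ['nmigen.lib.fifo.SyncFIFO']}
    proc.stdin.close()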
Binary file not shown.
Binary file not shown.
|
@@ -0,0 +1,16 @@
|
|||
from ..._utils import _ignore_deprecated
|
||||
from ...compat import *
|
||||
from ...compat.fhdl import verilog
|
||||
|
||||
|
||||
class SimCase:
|
||||
def setUp(self, *args, **kwargs):
|
||||
with _ignore_deprecated():
|
||||
self.tb = self.TestBench(*args, **kwargs)
|
||||
|
||||
def test_to_verilog(self):
|
||||
verilog.convert(self.tb)
|
||||
|
||||
def run_with(self, generator):
|
||||
with _ignore_deprecated():
|
||||
run_simulation(self.tb, generator)
|
|
@@ -0,0 +1,116 @@
|
|||
# nmigen: UnusedElaboratable=no
|
||||
|
||||
import unittest
|
||||
|
||||
from ...compat import *
|
||||
from ...compat.genlib.coding import *
|
||||
|
||||
from .support import SimCase
|
||||
|
||||
|
||||
class EncCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.submodules.dut = Encoder(8)
|
||||
|
||||
def test_sizes(self):
|
||||
self.assertEqual(len(self.tb.dut.i), 8)
|
||||
self.assertEqual(len(self.tb.dut.o), 3)
|
||||
self.assertEqual(len(self.tb.dut.n), 1)
|
||||
|
||||
def test_run_sequence(self):
|
||||
seq = list(range(1<<8))
|
||||
def gen():
|
||||
for _ in range(256):
|
||||
if seq:
|
||||
yield self.tb.dut.i.eq(seq.pop(0))
|
||||
yield
|
||||
if (yield self.tb.dut.n):
|
||||
self.assertNotIn((yield self.tb.dut.i), [1<<i for i in range(8)])
|
||||
else:
|
||||
self.assertEqual((yield self.tb.dut.i), 1<<(yield self.tb.dut.o))
|
||||
self.run_with(gen())
|
||||
|
||||
|
||||
class PrioEncCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.submodules.dut = PriorityEncoder(8)
|
||||
|
||||
def test_sizes(self):
|
||||
self.assertEqual(len(self.tb.dut.i), 8)
|
||||
self.assertEqual(len(self.tb.dut.o), 3)
|
||||
self.assertEqual(len(self.tb.dut.n), 1)
|
||||
|
||||
def test_run_sequence(self):
|
||||
seq = list(range(1<<8))
|
||||
def gen():
|
||||
for _ in range(256):
|
||||
if seq:
|
||||
yield self.tb.dut.i.eq(seq.pop(0))
|
||||
yield
|
||||
i = yield self.tb.dut.i
|
||||
if (yield self.tb.dut.n):
|
||||
self.assertEqual(i, 0)
|
||||
else:
|
||||
o = yield self.tb.dut.o
|
||||
if o > 0:
|
||||
self.assertEqual(i & 1<<(o - 1), 0)
|
||||
self.assertGreaterEqual(i, 1<<o)
|
||||
self.run_with(gen())
|
||||
|
||||
|
||||
class DecCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.submodules.dut = Decoder(8)
|
||||
|
||||
def test_sizes(self):
|
||||
self.assertEqual(len(self.tb.dut.i), 3)
|
||||
self.assertEqual(len(self.tb.dut.o), 8)
|
||||
self.assertEqual(len(self.tb.dut.n), 1)
|
||||
|
||||
def test_run_sequence(self):
|
||||
seq = list(range(8*2))
|
||||
def gen():
|
||||
for _ in range(256):
|
||||
if seq:
|
||||
i = seq.pop()
|
||||
yield self.tb.dut.i.eq(i//2)
|
||||
yield self.tb.dut.n.eq(i%2)
|
||||
yield
|
||||
i = yield self.tb.dut.i
|
||||
o = yield self.tb.dut.o
|
||||
if (yield self.tb.dut.n):
|
||||
self.assertEqual(o, 0)
|
||||
else:
|
||||
self.assertEqual(o, 1<<i)
|
||||
self.run_with(gen())
|
||||
|
||||
|
||||
class SmallPrioEncCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.submodules.dut = PriorityEncoder(1)
|
||||
|
||||
def test_sizes(self):
|
||||
self.assertEqual(len(self.tb.dut.i), 1)
|
||||
self.assertEqual(len(self.tb.dut.o), 1)
|
||||
self.assertEqual(len(self.tb.dut.n), 1)
|
||||
|
||||
def test_run_sequence(self):
|
||||
seq = list(range(1))
|
||||
def gen():
|
||||
for _ in range(5):
|
||||
if seq:
|
||||
yield self.tb.dut.i.eq(seq.pop(0))
|
||||
yield
|
||||
i = yield self.tb.dut.i
|
||||
if (yield self.tb.dut.n):
|
||||
self.assertEqual(i, 0)
|
||||
else:
|
||||
o = yield self.tb.dut.o
|
||||
if o > 0:
|
||||
self.assertEqual(i & 1<<(o - 1), 0)
|
||||
self.assertGreaterEqual(i, 1<<o)
|
||||
self.run_with(gen())
|
|
@@ -0,0 +1,29 @@
|
|||
import unittest
|
||||
|
||||
from ...compat import *
|
||||
from .support import SimCase
|
||||
|
||||
|
||||
class ConstantCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.sigs = [
|
||||
(Signal(3), Constant(0), 0),
|
||||
(Signal(3), Constant(5), 5),
|
||||
(Signal(3), Constant(1, 2), 1),
|
||||
(Signal(3), Constant(-1, 7), 7),
|
||||
(Signal(3), Constant(0b10101)[:3], 0b101),
|
||||
(Signal(3), Constant(0b10101)[1:4], 0b10),
|
||||
(Signal(4), Constant(0b1100)[::-1], 0b0011),
|
||||
]
|
||||
self.comb += [a.eq(b) for a, b, c in self.sigs]
|
||||
|
||||
def test_comparisons(self):
|
||||
def gen():
|
||||
for s, l, v in self.tb.sigs:
|
||||
s = yield s
|
||||
self.assertEqual(
|
||||
s, int(v),
|
||||
"got {}, want {} from literal {}".format(
|
||||
s, v, l))
|
||||
self.run_with(gen())
|
|
@@ -0,0 +1,38 @@
|
|||
import unittest
|
||||
from itertools import count
|
||||
|
||||
from ...compat import *
|
||||
from ...compat.genlib.fifo import SyncFIFO
|
||||
|
||||
from .support import SimCase
|
||||
|
||||
|
||||
class SyncFIFOCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.submodules.dut = SyncFIFO(64, 2)
|
||||
|
||||
self.sync += [
|
||||
If(self.dut.we & self.dut.writable,
|
||||
self.dut.din[:32].eq(self.dut.din[:32] + 1),
|
||||
self.dut.din[32:].eq(self.dut.din[32:] + 2)
|
||||
)
|
||||
]
|
||||
|
||||
def test_run_sequence(self):
|
||||
seq = list(range(20))
|
||||
def gen():
|
||||
for cycle in count():
|
||||
# fire re and we at "random"
|
||||
yield self.tb.dut.we.eq(cycle % 2 == 0)
|
||||
yield self.tb.dut.re.eq(cycle % 3 == 0)
|
||||
# the output if valid must be correct
|
||||
if (yield self.tb.dut.readable) and (yield self.tb.dut.re):
|
||||
try:
|
||||
i = seq.pop(0)
|
||||
except IndexError:
|
||||
break
|
||||
self.assertEqual((yield self.tb.dut.dout[:32]), i)
|
||||
self.assertEqual((yield self.tb.dut.dout[32:]), i*2)
|
||||
yield
|
||||
self.run_with(gen())
|
|
@@ -0,0 +1,87 @@
|
|||
import unittest
|
||||
from itertools import count
|
||||
|
||||
from ...compat import *
|
||||
from ...compat.genlib.fsm import FSM
|
||||
|
||||
from .support import SimCase
|
||||
|
||||
|
||||
class FSMCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.ctrl = Signal()
|
||||
self.data = Signal()
|
||||
self.status = Signal(8)
|
||||
|
||||
self.submodules.dut = FSM()
|
||||
self.dut.act("IDLE",
|
||||
If(self.ctrl,
|
||||
NextState("START")
|
||||
)
|
||||
)
|
||||
self.dut.act("START",
|
||||
If(self.data,
|
||||
NextState("SET-STATUS-LOW")
|
||||
).Else(
|
||||
NextState("SET-STATUS")
|
||||
)
|
||||
)
|
||||
self.dut.act("SET-STATUS",
|
||||
NextValue(self.status, 0xaa),
|
||||
NextState("IDLE")
|
||||
)
|
||||
self.dut.act("SET-STATUS-LOW",
|
||||
NextValue(self.status[:4], 0xb),
|
||||
NextState("IDLE")
|
||||
)
|
||||
|
||||
def assertState(self, fsm, state):
|
||||
self.assertEqual(fsm.decoding[(yield fsm.state)], state)
|
||||
|
||||
def test_next_state(self):
|
||||
def gen():
|
||||
yield from self.assertState(self.tb.dut, "IDLE")
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "IDLE")
|
||||
yield self.tb.ctrl.eq(1)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "IDLE")
|
||||
yield self.tb.ctrl.eq(0)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "START")
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "SET-STATUS")
|
||||
yield self.tb.ctrl.eq(1)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "IDLE")
|
||||
yield self.tb.ctrl.eq(0)
|
||||
yield self.tb.data.eq(1)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "START")
|
||||
yield self.tb.data.eq(0)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "SET-STATUS-LOW")
|
||||
self.run_with(gen())
|
||||
|
||||
def test_next_value(self):
|
||||
def gen():
|
||||
self.assertEqual((yield self.tb.status), 0x00)
|
||||
yield self.tb.ctrl.eq(1)
|
||||
yield
|
||||
yield self.tb.ctrl.eq(0)
|
||||
yield
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "SET-STATUS")
|
||||
yield self.tb.ctrl.eq(1)
|
||||
yield
|
||||
self.assertEqual((yield self.tb.status), 0xaa)
|
||||
yield self.tb.ctrl.eq(0)
|
||||
yield self.tb.data.eq(1)
|
||||
yield
|
||||
yield self.tb.data.eq(0)
|
||||
yield
|
||||
yield from self.assertState(self.tb.dut, "SET-STATUS-LOW")
|
||||
yield
|
||||
self.assertEqual((yield self.tb.status), 0xab)
|
||||
self.run_with(gen())
|
|
@@ -0,0 +1,23 @@
|
|||
import unittest
|
||||
|
||||
from ...compat import *
|
||||
|
||||
|
||||
class PassiveCase(unittest.TestCase):
|
||||
def test_terminates_correctly(self):
|
||||
n = 5
|
||||
|
||||
count = 0
|
||||
@passive
|
||||
def counter():
|
||||
nonlocal count
|
||||
while True:
|
||||
yield
|
||||
count += 1
|
||||
|
||||
def terminator():
|
||||
for i in range(n):
|
||||
yield
|
||||
|
||||
run_simulation(Module(), [counter(), terminator()])
|
||||
self.assertEqual(count, n)
|
|
@@ -0,0 +1,42 @@
|
|||
import unittest
|
||||
|
||||
from ...compat import *
|
||||
from .support import SimCase
|
||||
|
||||
|
||||
class SignedCase(SimCase, unittest.TestCase):
|
||||
class TestBench(Module):
|
||||
def __init__(self):
|
||||
self.a = Signal((3, True))
|
||||
self.b = Signal((4, True))
|
||||
comps = [
|
||||
lambda p, q: p > q,
|
||||
lambda p, q: p >= q,
|
||||
lambda p, q: p < q,
|
||||
lambda p, q: p <= q,
|
||||
lambda p, q: p == q,
|
||||
lambda p, q: p != q,
|
||||
]
|
||||
self.vals = []
|
||||
for asign in 1, -1:
|
||||
for bsign in 1, -1:
|
||||
for f in comps:
|
||||
r = Signal()
|
||||
r0 = f(asign*self.a, bsign*self.b)
|
||||
self.comb += r.eq(r0)
|
||||
self.vals.append((asign, bsign, f, r, r0.op))
|
||||
|
||||
def test_comparisons(self):
|
||||
def gen():
|
||||
for i in range(-4, 4):
|
||||
yield self.tb.a.eq(i)
|
||||
yield self.tb.b.eq(i)
|
||||
yield
|
||||
a = yield self.tb.a
|
||||
b = yield self.tb.b
|
||||
for asign, bsign, f, r, op in self.tb.vals:
|
||||
r, r0 = (yield r), f(asign*a, bsign*b)
|
||||
self.assertEqual(r, int(r0),
|
||||
"got {}, want {}*{} {} {}*{} = {}".format(
|
||||
r, asign, a, op, bsign, b, r0))
|
||||
self.run_with(gen())
|
|
@@ -0,0 +1,21 @@
|
|||
import unittest
|
||||
|
||||
from ..._utils import _ignore_deprecated
|
||||
from ...compat import *
|
||||
|
||||
|
||||
def _same_slices(a, b):
|
||||
return a.value is b.value and a.start == b.start and a.stop == b.stop
|
||||
|
||||
|
||||
class SignalSizeCase(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.i = C(0xaa)
|
||||
self.j = C(-127)
|
||||
with _ignore_deprecated():
|
||||
self.s = Signal((13, True))
|
||||
|
||||
def test_len(self):
|
||||
self.assertEqual(len(self.s), 13)
|
||||
self.assertEqual(len(self.i), 8)
|
||||
self.assertEqual(len(self.j), 8)
|
|
@@ -0,0 +1,328 @@
|
|||
from collections import OrderedDict
|
||||
|
||||
from ..build.dsl import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class PinsTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
p = Pins("A0 A1 A2")
|
||||
self.assertEqual(repr(p), "(pins io A0 A1 A2)")
|
||||
self.assertEqual(len(p.names), 3)
|
||||
self.assertEqual(p.dir, "io")
|
||||
self.assertEqual(p.invert, False)
|
||||
self.assertEqual(list(p), ["A0", "A1", "A2"])
|
||||
|
||||
def test_invert(self):
|
||||
p = PinsN("A0")
|
||||
self.assertEqual(repr(p), "(pins-n io A0)")
|
||||
self.assertEqual(p.invert, True)
|
||||
|
||||
def test_invert_arg(self):
|
||||
p = Pins("A0", invert=True)
|
||||
self.assertEqual(p.invert, True)
|
||||
|
||||
def test_conn(self):
|
||||
p = Pins("0 1 2", conn=("pmod", 0))
|
||||
self.assertEqual(list(p), ["pmod_0:0", "pmod_0:1", "pmod_0:2"])
|
||||
p = Pins("0 1 2", conn=("pmod", "a"))
|
||||
self.assertEqual(list(p), ["pmod_a:0", "pmod_a:1", "pmod_a:2"])
|
||||
|
||||
def test_map_names(self):
|
||||
p = Pins("0 1 2", conn=("pmod", 0))
|
||||
mapping = {
|
||||
"pmod_0:0": "A0",
|
||||
"pmod_0:1": "A1",
|
||||
"pmod_0:2": "A2",
|
||||
}
|
||||
self.assertEqual(p.map_names(mapping, p), ["A0", "A1", "A2"])
|
||||
|
||||
def test_map_names_recur(self):
|
||||
p = Pins("0", conn=("pmod", 0))
|
||||
mapping = {
|
||||
"pmod_0:0": "ext_0:1",
|
||||
"ext_0:1": "A1",
|
||||
}
|
||||
self.assertEqual(p.map_names(mapping, p), ["A1"])
|
||||
|
||||
def test_wrong_names(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Names must be a whitespace-separated string, not ['A0', 'A1', 'A2']"):
|
||||
p = Pins(["A0", "A1", "A2"])
|
||||
|
||||
def test_wrong_dir(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Direction must be one of \"i\", \"o\", \"oe\", or \"io\", not 'wrong'"):
|
||||
p = Pins("A0 A1", dir="wrong")
|
||||
|
||||
def test_wrong_conn(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Connector must be None or a pair of string (connector name) and "
|
||||
"integer/string (connector number), not ('foo', None)"):
|
||||
p = Pins("A0 A1", conn=("foo", None))
|
||||
|
||||
def test_wrong_map_names(self):
|
||||
p = Pins("0 1 2", conn=("pmod", 0))
|
||||
mapping = {
|
||||
"pmod_0:0": "A0",
|
||||
}
|
||||
with self.assertRaises(NameError,
|
||||
msg="Resource (pins io pmod_0:0 pmod_0:1 pmod_0:2) refers to nonexistent "
|
||||
"connector pin pmod_0:1"):
|
||||
p.map_names(mapping, p)
|
||||
|
||||
def test_wrong_assert_width(self):
|
||||
with self.assertRaises(AssertionError,
|
||||
msg="3 names are specified (0 1 2), but 4 names are expected"):
|
||||
Pins("0 1 2", assert_width=4)
|
||||
|
||||
|
||||
class DiffPairsTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
dp = DiffPairs(p="A0 A1", n="B0 B1")
|
||||
self.assertEqual(repr(dp), "(diffpairs io (p A0 A1) (n B0 B1))")
|
||||
self.assertEqual(dp.p.names, ["A0", "A1"])
|
||||
self.assertEqual(dp.n.names, ["B0", "B1"])
|
||||
self.assertEqual(dp.dir, "io")
|
||||
self.assertEqual(list(dp), [("A0", "B0"), ("A1", "B1")])
|
||||
|
||||
def test_invert(self):
|
||||
dp = DiffPairsN(p="A0", n="B0")
|
||||
self.assertEqual(repr(dp), "(diffpairs-n io (p A0) (n B0))")
|
||||
self.assertEqual(dp.p.names, ["A0"])
|
||||
self.assertEqual(dp.n.names, ["B0"])
|
||||
self.assertEqual(dp.invert, True)
|
||||
|
||||
def test_conn(self):
|
||||
dp = DiffPairs(p="0 1 2", n="3 4 5", conn=("pmod", 0))
|
||||
self.assertEqual(list(dp), [
|
||||
("pmod_0:0", "pmod_0:3"),
|
||||
("pmod_0:1", "pmod_0:4"),
|
||||
("pmod_0:2", "pmod_0:5"),
|
||||
])
|
||||
|
||||
def test_dir(self):
|
||||
dp = DiffPairs("A0", "B0", dir="o")
|
||||
self.assertEqual(dp.dir, "o")
|
||||
self.assertEqual(dp.p.dir, "o")
|
||||
self.assertEqual(dp.n.dir, "o")
|
||||
|
||||
def test_wrong_width(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Positive and negative pins must have the same width, but (pins io A0) "
|
||||
"and (pins io B0 B1) do not"):
|
||||
dp = DiffPairs("A0", "B0 B1")
|
||||
|
||||
def test_wrong_assert_width(self):
|
||||
with self.assertRaises(AssertionError,
|
||||
msg="3 names are specified (0 1 2), but 4 names are expected"):
|
||||
DiffPairs("0 1 2", "3 4 5", assert_width=4)
|
||||
|
||||
|
||||
class AttrsTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
a = Attrs(IO_STANDARD="LVCMOS33", PULLUP=1)
|
||||
self.assertEqual(a["IO_STANDARD"], "LVCMOS33")
|
||||
self.assertEqual(repr(a), "(attrs IO_STANDARD='LVCMOS33' PULLUP=1)")
|
||||
|
||||
def test_remove(self):
|
||||
a = Attrs(FOO=None)
|
||||
self.assertEqual(a["FOO"], None)
|
||||
self.assertEqual(repr(a), "(attrs !FOO)")
|
||||
|
||||
def test_callable(self):
|
||||
fn = lambda self: "FOO"
|
||||
a = Attrs(FOO=fn)
|
||||
self.assertEqual(a["FOO"], fn)
|
||||
self.assertEqual(repr(a), "(attrs FOO={!r})".format(fn))
|
||||
|
||||
def test_wrong_value(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Value of attribute FOO must be None, int, str, or callable, not 1.0"):
|
||||
a = Attrs(FOO=1.0)
|
||||
|
||||
|
||||
class ClockTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
c = Clock(1_000_000)
|
||||
self.assertEqual(c.frequency, 1e6)
|
||||
self.assertEqual(c.period, 1e-6)
|
||||
self.assertEqual(repr(c), "(clock 1000000.0)")
|
||||
|
||||
|
||||
class SubsignalTestCase(FHDLTestCase):
|
||||
def test_basic_pins(self):
|
||||
s = Subsignal("a", Pins("A0"), Attrs(IOSTANDARD="LVCMOS33"))
|
||||
self.assertEqual(repr(s),
|
||||
"(subsignal a (pins io A0) (attrs IOSTANDARD='LVCMOS33'))")
|
||||
|
||||
def test_basic_diffpairs(self):
|
||||
s = Subsignal("a", DiffPairs("A0", "B0"))
|
||||
self.assertEqual(repr(s),
|
||||
"(subsignal a (diffpairs io (p A0) (n B0)))")
|
||||
|
||||
def test_basic_subsignals(self):
|
||||
s = Subsignal("a",
|
||||
Subsignal("b", Pins("A0")),
|
||||
Subsignal("c", Pins("A1")))
|
||||
self.assertEqual(repr(s),
|
||||
"(subsignal a (subsignal b (pins io A0)) "
|
||||
"(subsignal c (pins io A1)))")
|
||||
|
||||
def test_attrs(self):
|
||||
s = Subsignal("a",
|
||||
Subsignal("b", Pins("A0")),
|
||||
Subsignal("c", Pins("A0"), Attrs(SLEW="FAST")),
|
||||
Attrs(IOSTANDARD="LVCMOS33"))
|
||||
self.assertEqual(s.attrs, {"IOSTANDARD": "LVCMOS33"})
|
||||
self.assertEqual(s.ios[0].attrs, {})
|
||||
self.assertEqual(s.ios[1].attrs, {"SLEW": "FAST"})
|
||||
|
||||
def test_attrs_many(self):
|
||||
s = Subsignal("a", Pins("A0"), Attrs(SLEW="FAST"), Attrs(PULLUP="1"))
|
||||
self.assertEqual(s.attrs, {"SLEW": "FAST", "PULLUP": "1"})
|
||||
|
||||
def test_clock(self):
|
||||
s = Subsignal("a", Pins("A0"), Clock(1e6))
|
||||
self.assertEqual(s.clock.frequency, 1e6)
|
||||
|
||||
def test_wrong_empty_io(self):
|
||||
with self.assertRaises(ValueError, msg="Missing I/O constraints"):
|
||||
s = Subsignal("a")
|
||||
|
||||
def test_wrong_io(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Constraint must be one of Pins, DiffPairs, Subsignal, Attrs, or Clock, "
|
||||
"not 'wrong'"):
|
||||
s = Subsignal("a", "wrong")
|
||||
|
||||
def test_wrong_pins(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Pins and DiffPairs are incompatible with other location or subsignal "
|
||||
"constraints, but (pins io A1) appears after (pins io A0)"):
|
||||
s = Subsignal("a", Pins("A0"), Pins("A1"))
|
||||
|
||||
def test_wrong_diffpairs(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Pins and DiffPairs are incompatible with other location or subsignal "
|
||||
"constraints, but (pins io A1) appears after (diffpairs io (p A0) (n B0))"):
|
||||
s = Subsignal("a", DiffPairs("A0", "B0"), Pins("A1"))
|
||||
|
||||
def test_wrong_subsignals(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Pins and DiffPairs are incompatible with other location or subsignal "
|
||||
"constraints, but (pins io B0) appears after (subsignal b (pins io A0))"):
|
||||
s = Subsignal("a", Subsignal("b", Pins("A0")), Pins("B0"))
|
||||
|
||||
def test_wrong_clock(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Clock constraint can only be applied to Pins or DiffPairs, not "
|
||||
"(subsignal b (pins io A0))"):
|
||||
s = Subsignal("a", Subsignal("b", Pins("A0")), Clock(1e6))
|
||||
|
||||
def test_wrong_clock_many(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Clock constraint can be applied only once"):
|
||||
s = Subsignal("a", Pins("A0"), Clock(1e6), Clock(1e7))
|
||||
|
||||
|
||||
class ResourceTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
r = Resource("serial", 0,
|
||||
Subsignal("tx", Pins("A0", dir="o")),
|
||||
Subsignal("rx", Pins("A1", dir="i")),
|
||||
Attrs(IOSTANDARD="LVCMOS33"))
|
||||
self.assertEqual(repr(r), "(resource serial 0"
|
||||
" (subsignal tx (pins o A0))"
|
||||
" (subsignal rx (pins i A1))"
|
||||
" (attrs IOSTANDARD='LVCMOS33'))")
|
||||
|
||||
def test_family(self):
|
||||
ios = [Subsignal("clk", Pins("A0", dir="o"))]
|
||||
r1 = Resource.family(0, default_name="spi", ios=ios)
|
||||
r2 = Resource.family("spi_flash", 0, default_name="spi", ios=ios)
|
||||
r3 = Resource.family("spi_flash", 0, default_name="spi", ios=ios, name_suffix="4x")
|
||||
r4 = Resource.family(0, default_name="spi", ios=ios, name_suffix="2x")
|
||||
self.assertEqual(r1.name, "spi")
|
||||
self.assertEqual(r1.ios, ios)
|
||||
self.assertEqual(r2.name, "spi_flash")
|
||||
self.assertEqual(r2.ios, ios)
|
||||
self.assertEqual(r3.name, "spi_flash_4x")
|
||||
self.assertEqual(r3.ios, ios)
|
||||
self.assertEqual(r4.name, "spi_2x")
|
||||
self.assertEqual(r4.ios, ios)
|
||||
|
||||
|
||||
class ConnectorTestCase(FHDLTestCase):
|
||||
def test_string(self):
|
||||
c = Connector("pmod", 0, "A0 A1 A2 A3 - - A4 A5 A6 A7 - -")
|
||||
self.assertEqual(c.name, "pmod")
|
||||
self.assertEqual(c.number, 0)
|
||||
self.assertEqual(c.mapping, OrderedDict([
|
||||
("1", "A0"),
|
||||
("2", "A1"),
|
||||
("3", "A2"),
|
||||
("4", "A3"),
|
||||
("7", "A4"),
|
||||
("8", "A5"),
|
||||
("9", "A6"),
|
||||
("10", "A7"),
|
||||
]))
|
||||
self.assertEqual(list(c), [
|
||||
("pmod_0:1", "A0"),
|
||||
("pmod_0:2", "A1"),
|
||||
("pmod_0:3", "A2"),
|
||||
("pmod_0:4", "A3"),
|
||||
("pmod_0:7", "A4"),
|
||||
("pmod_0:8", "A5"),
|
||||
("pmod_0:9", "A6"),
|
||||
("pmod_0:10", "A7"),
|
||||
])
|
||||
self.assertEqual(repr(c),
|
||||
"(connector pmod 0 1=>A0 2=>A1 3=>A2 4=>A3 7=>A4 8=>A5 9=>A6 10=>A7)")
|
||||
|
||||
def test_dict(self):
|
||||
c = Connector("ext", 1, {"DP0": "A0", "DP1": "A1"})
|
||||
self.assertEqual(c.name, "ext")
|
||||
self.assertEqual(c.number, 1)
|
||||
self.assertEqual(c.mapping, OrderedDict([
|
||||
("DP0", "A0"),
|
||||
("DP1", "A1"),
|
||||
]))
|
||||
|
||||
def test_conn(self):
|
||||
c = Connector("pmod", 0, "0 1 2 3 - - 4 5 6 7 - -", conn=("expansion", 0))
|
||||
self.assertEqual(c.mapping, OrderedDict([
|
||||
("1", "expansion_0:0"),
|
||||
("2", "expansion_0:1"),
|
||||
("3", "expansion_0:2"),
|
||||
("4", "expansion_0:3"),
|
||||
("7", "expansion_0:4"),
|
||||
("8", "expansion_0:5"),
|
||||
("9", "expansion_0:6"),
|
||||
("10", "expansion_0:7"),
|
||||
]))
|
||||
|
||||
def test_str_name(self):
|
||||
c = Connector("ext", "A", "0 1 2")
|
||||
self.assertEqual(c.name, "ext")
|
||||
self.assertEqual(c.number, "A")
|
||||
|
||||
def test_conn_wrong_name(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Connector must be None or a pair of string (connector name) and "
|
||||
"integer/string (connector number), not ('foo', None)"):
|
||||
Connector("ext", "A", "0 1 2", conn=("foo", None))
|
||||
|
||||
def test_wrong_io(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Connector I/Os must be a dictionary or a string, not []"):
|
||||
Connector("pmod", 0, [])
|
||||
|
||||
def test_wrong_dict_key_value(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Connector pin name must be a string, not 0"):
|
||||
Connector("pmod", 0, {0: "A"})
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Platform pin name must be a string, not 0"):
|
||||
Connector("pmod", 0, {"A": 0})
|
|
@@ -0,0 +1,52 @@
|
|||
from .. import *
|
||||
from ..build.plat import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class MockPlatform(Platform):
|
||||
resources = []
|
||||
connectors = []
|
||||
|
||||
required_tools = []
|
||||
|
||||
def toolchain_prepare(self, fragment, name, **kwargs):
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
class PlatformTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.platform = MockPlatform()
|
||||
|
||||
def test_add_file_str(self):
|
||||
self.platform.add_file("x.txt", "foo")
|
||||
self.assertEqual(self.platform.extra_files["x.txt"], "foo")
|
||||
|
||||
def test_add_file_bytes(self):
|
||||
self.platform.add_file("x.txt", b"foo")
|
||||
self.assertEqual(self.platform.extra_files["x.txt"], b"foo")
|
||||
|
||||
def test_add_file_exact_duplicate(self):
|
||||
self.platform.add_file("x.txt", b"foo")
|
||||
self.platform.add_file("x.txt", b"foo")
|
||||
|
||||
def test_add_file_io(self):
|
||||
with open(__file__) as f:
|
||||
self.platform.add_file("x.txt", f)
|
||||
with open(__file__) as f:
|
||||
self.assertEqual(self.platform.extra_files["x.txt"], f.read())
|
||||
|
||||
def test_add_file_wrong_filename(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="File name must be a string, not 1"):
|
||||
self.platform.add_file(1, "")
|
||||
|
||||
def test_add_file_wrong_contents(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="File contents must be str, bytes, or a file-like object, not 1"):
|
||||
self.platform.add_file("foo", 1)
|
||||
|
||||
def test_add_file_wrong_duplicate(self):
|
||||
self.platform.add_file("foo", "")
|
||||
with self.assertRaises(ValueError,
|
||||
msg="File 'foo' already exists"):
|
||||
self.platform.add_file("foo", "bar")
|
|
@@ -0,0 +1,298 @@
|
|||
# nmigen: UnusedElaboratable=no
|
||||
|
||||
from .. import *
|
||||
from ..hdl.rec import *
|
||||
from ..lib.io import *
|
||||
from ..build.dsl import *
|
||||
from ..build.res import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class ResourceManagerTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.resources = [
|
||||
Resource("clk100", 0, DiffPairs("H1", "H2", dir="i"), Clock(100e6)),
|
||||
Resource("clk50", 0, Pins("K1"), Clock(50e6)),
|
||||
Resource("user_led", 0, Pins("A0", dir="o")),
|
||||
Resource("i2c", 0,
|
||||
Subsignal("scl", Pins("N10", dir="o")),
|
||||
Subsignal("sda", Pins("N11"))
|
||||
)
|
||||
]
|
||||
self.connectors = [
|
||||
Connector("pmod", 0, "B0 B1 B2 B3 - -"),
|
||||
]
|
||||
self.cm = ResourceManager(self.resources, self.connectors)
|
||||
|
||||
def test_basic(self):
|
||||
self.cm = ResourceManager(self.resources, self.connectors)
|
||||
self.assertEqual(self.cm.resources, {
|
||||
("clk100", 0): self.resources[0],
|
||||
("clk50", 0): self.resources[1],
|
||||
("user_led", 0): self.resources[2],
|
||||
("i2c", 0): self.resources[3]
|
||||
})
|
||||
self.assertEqual(self.cm.connectors, {
|
||||
("pmod", 0): self.connectors[0],
|
||||
})
|
||||
|
||||
def test_add_resources(self):
|
||||
new_resources = [
|
||||
Resource("user_led", 1, Pins("A1", dir="o"))
|
||||
]
|
||||
self.cm.add_resources(new_resources)
|
||||
self.assertEqual(self.cm.resources, {
|
||||
("clk100", 0): self.resources[0],
|
||||
("clk50", 0): self.resources[1],
|
||||
("user_led", 0): self.resources[2],
|
||||
("i2c", 0): self.resources[3],
|
||||
("user_led", 1): new_resources[0]
|
||||
})
|
||||
|
||||
def test_lookup(self):
|
||||
r = self.cm.lookup("user_led", 0)
|
||||
self.assertIs(r, self.cm.resources["user_led", 0])
|
||||
|
||||
def test_request_basic(self):
|
||||
r = self.cm.lookup("user_led", 0)
|
||||
user_led = self.cm.request("user_led", 0)
|
||||
|
||||
self.assertIsInstance(user_led, Pin)
|
||||
self.assertEqual(user_led.name, "user_led_0")
|
||||
self.assertEqual(user_led.width, 1)
|
||||
self.assertEqual(user_led.dir, "o")
|
||||
|
||||
ports = list(self.cm.iter_ports())
|
||||
self.assertEqual(len(ports), 1)
|
||||
|
||||
self.assertEqual(list(self.cm.iter_port_constraints()), [
|
||||
("user_led_0__io", ["A0"], {})
|
||||
])
|
||||
|
||||
def test_request_with_dir(self):
|
||||
i2c = self.cm.request("i2c", 0, dir={"sda": "o"})
|
||||
self.assertIsInstance(i2c, Record)
|
||||
self.assertIsInstance(i2c.sda, Pin)
|
||||
self.assertEqual(i2c.sda.dir, "o")
|
||||
|
||||
def test_request_tristate(self):
|
||||
i2c = self.cm.request("i2c", 0)
|
||||
self.assertEqual(i2c.sda.dir, "io")
|
||||
|
||||
ports = list(self.cm.iter_ports())
|
||||
self.assertEqual(len(ports), 2)
|
||||
scl, sda = ports
|
||||
self.assertEqual(ports[1].name, "i2c_0__sda__io")
|
||||
self.assertEqual(ports[1].width, 1)
|
||||
|
||||
self.assertEqual(list(self.cm.iter_single_ended_pins()), [
|
||||
(i2c.scl, scl, {}, False),
|
||||
(i2c.sda, sda, {}, False),
|
||||
])
|
||||
self.assertEqual(list(self.cm.iter_port_constraints()), [
|
||||
("i2c_0__scl__io", ["N10"], {}),
|
||||
("i2c_0__sda__io", ["N11"], {})
|
||||
])
|
||||
|
||||
def test_request_diffpairs(self):
|
||||
clk100 = self.cm.request("clk100", 0)
|
||||
self.assertIsInstance(clk100, Pin)
|
||||
self.assertEqual(clk100.dir, "i")
|
||||
self.assertEqual(clk100.width, 1)
|
||||
|
||||
ports = list(self.cm.iter_ports())
|
||||
self.assertEqual(len(ports), 2)
|
||||
p, n = ports
|
||||
self.assertEqual(p.name, "clk100_0__p")
|
||||
self.assertEqual(p.width, clk100.width)
|
||||
self.assertEqual(n.name, "clk100_0__n")
|
||||
self.assertEqual(n.width, clk100.width)
|
||||
|
||||
self.assertEqual(list(self.cm.iter_differential_pins()), [
|
||||
(clk100, p, n, {}, False),
|
||||
])
|
||||
self.assertEqual(list(self.cm.iter_port_constraints()), [
|
||||
("clk100_0__p", ["H1"], {}),
|
||||
("clk100_0__n", ["H2"], {}),
|
||||
])
|
||||
|
||||
def test_request_inverted(self):
|
||||
new_resources = [
|
||||
Resource("cs", 0, PinsN("X0")),
|
||||
Resource("clk", 0, DiffPairsN("Y0", "Y1")),
|
||||
]
|
||||
self.cm.add_resources(new_resources)
|
||||
|
||||
sig_cs = self.cm.request("cs")
|
||||
sig_clk = self.cm.request("clk")
|
||||
port_cs, port_clk_p, port_clk_n = self.cm.iter_ports()
|
||||
self.assertEqual(list(self.cm.iter_single_ended_pins()), [
|
||||
(sig_cs, port_cs, {}, True),
|
||||
])
|
||||
self.assertEqual(list(self.cm.iter_differential_pins()), [
|
||||
(sig_clk, port_clk_p, port_clk_n, {}, True),
|
||||
])
|
||||
|
||||
def test_request_raw(self):
|
||||
clk50 = self.cm.request("clk50", 0, dir="-")
|
||||
self.assertIsInstance(clk50, Record)
|
||||
self.assertIsInstance(clk50.io, Signal)
|
||||
|
||||
ports = list(self.cm.iter_ports())
|
||||
self.assertEqual(len(ports), 1)
|
||||
self.assertIs(ports[0], clk50.io)
|
||||
|
||||
def test_request_raw_diffpairs(self):
|
||||
clk100 = self.cm.request("clk100", 0, dir="-")
|
||||
self.assertIsInstance(clk100, Record)
|
||||
self.assertIsInstance(clk100.p, Signal)
|
||||
self.assertIsInstance(clk100.n, Signal)
|
||||
|
||||
ports = list(self.cm.iter_ports())
|
||||
self.assertEqual(len(ports), 2)
|
||||
self.assertIs(ports[0], clk100.p)
|
||||
self.assertIs(ports[1], clk100.n)
|
||||
|
||||
def test_request_via_connector(self):
|
||||
self.cm.add_resources([
|
||||
Resource("spi", 0,
|
||||
Subsignal("ss", Pins("1", conn=("pmod", 0))),
|
||||
Subsignal("clk", Pins("2", conn=("pmod", 0))),
|
||||
Subsignal("miso", Pins("3", conn=("pmod", 0))),
|
||||
Subsignal("mosi", Pins("4", conn=("pmod", 0))),
|
||||
)
|
||||
])
|
||||
spi0 = self.cm.request("spi", 0)
|
||||
self.assertEqual(list(self.cm.iter_port_constraints()), [
|
||||
("spi_0__ss__io", ["B0"], {}),
|
||||
("spi_0__clk__io", ["B1"], {}),
|
||||
("spi_0__miso__io", ["B2"], {}),
|
||||
("spi_0__mosi__io", ["B3"], {}),
|
||||
])
|
||||
|
||||
def test_request_via_nested_connector(self):
|
||||
new_connectors = [
|
||||
Connector("pmod_extension", 0, "1 2 3 4 - -", conn=("pmod", 0)),
|
||||
]
|
||||
self.cm.add_connectors(new_connectors)
|
||||
self.cm.add_resources([
|
||||
Resource("spi", 0,
|
||||
Subsignal("ss", Pins("1", conn=("pmod_extension", 0))),
|
||||
Subsignal("clk", Pins("2", conn=("pmod_extension", 0))),
|
||||
Subsignal("miso", Pins("3", conn=("pmod_extension", 0))),
|
||||
Subsignal("mosi", Pins("4", conn=("pmod_extension", 0))),
|
||||
)
|
||||
])
|
||||
spi0 = self.cm.request("spi", 0)
|
||||
self.assertEqual(list(self.cm.iter_port_constraints()), [
|
||||
("spi_0__ss__io", ["B0"], {}),
|
||||
("spi_0__clk__io", ["B1"], {}),
|
||||
("spi_0__miso__io", ["B2"], {}),
|
||||
("spi_0__mosi__io", ["B3"], {}),
|
||||
])
|
||||
|
||||
def test_request_clock(self):
|
||||
clk100 = self.cm.request("clk100", 0)
|
||||
clk50 = self.cm.request("clk50", 0, dir="i")
|
||||
clk100_port_p, clk100_port_n, clk50_port = self.cm.iter_ports()
|
||||
self.assertEqual(list(self.cm.iter_clock_constraints()), [
|
||||
(clk100.i, clk100_port_p, 100e6),
|
||||
(clk50.i, clk50_port, 50e6)
|
||||
])
|
||||
|
||||
def test_add_clock(self):
|
||||
i2c = self.cm.request("i2c")
|
||||
self.cm.add_clock_constraint(i2c.scl.o, 100e3)
|
||||
self.assertEqual(list(self.cm.iter_clock_constraints()), [
|
||||
(i2c.scl.o, None, 100e3)
|
||||
])
|
||||
|
||||
def test_wrong_resources(self):
|
||||
with self.assertRaises(TypeError, msg="Object 'wrong' is not a Resource"):
|
||||
self.cm.add_resources(['wrong'])
|
||||
|
||||
def test_wrong_resources_duplicate(self):
|
||||
with self.assertRaises(NameError,
|
||||
msg="Trying to add (resource user_led 0 (pins o A1)), but "
|
||||
"(resource user_led 0 (pins o A0)) has the same name and number"):
|
||||
self.cm.add_resources([Resource("user_led", 0, Pins("A1", dir="o"))])
|
||||
|
||||
def test_wrong_connectors(self):
|
||||
with self.assertRaises(TypeError, msg="Object 'wrong' is not a Connector"):
|
||||
self.cm.add_connectors(['wrong'])
|
||||
|
||||
def test_wrong_connectors_duplicate(self):
|
||||
with self.assertRaises(NameError,
|
||||
msg="Trying to add (connector pmod 0 1=>1 2=>2), but "
|
||||
"(connector pmod 0 1=>B0 2=>B1 3=>B2 4=>B3) has the same name and number"):
|
||||
self.cm.add_connectors([Connector("pmod", 0, "1 2")])
|
||||
|
||||
def test_wrong_lookup(self):
|
||||
with self.assertRaises(ResourceError,
|
||||
msg="Resource user_led#1 does not exist"):
|
||||
r = self.cm.lookup("user_led", 1)
|
||||
|
||||
def test_wrong_clock_signal(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Object None is not a Signal"):
|
||||
self.cm.add_clock_constraint(None, 10e6)
|
||||
|
||||
def test_wrong_clock_frequency(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Frequency must be a number, not None"):
|
||||
self.cm.add_clock_constraint(Signal(), None)
|
||||
|
||||
def test_wrong_request_duplicate(self):
|
||||
with self.assertRaises(ResourceError,
|
||||
msg="Resource user_led#0 has already been requested"):
|
||||
self.cm.request("user_led", 0)
|
||||
self.cm.request("user_led", 0)
|
||||
|
||||
def test_wrong_request_duplicate_physical(self):
|
||||
self.cm.add_resources([
|
||||
Resource("clk20", 0, Pins("H1", dir="i")),
|
||||
])
|
||||
self.cm.request("clk100", 0)
|
||||
with self.assertRaises(ResourceError,
|
||||
msg="Resource component clk20_0 uses physical pin H1, but it is already "
|
||||
"used by resource component clk100_0 that was requested earlier"):
|
||||
self.cm.request("clk20", 0)
|
||||
|
||||
def test_wrong_request_with_dir(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Direction must be one of \"i\", \"o\", \"oe\", \"io\", or \"-\", "
|
||||
"not 'wrong'"):
|
||||
user_led = self.cm.request("user_led", 0, dir="wrong")
|
||||
|
||||
def test_wrong_request_with_dir_io(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Direction of (pins o A0) cannot be changed from \"o\" to \"i\"; direction "
|
||||
"can be changed from \"io\" to \"i\", \"o\", or \"oe\", or from anything "
|
||||
"to \"-\""):
|
||||
user_led = self.cm.request("user_led", 0, dir="i")
|
||||
|
||||
def test_wrong_request_with_dir_dict(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Directions must be a dict, not 'i', because (resource i2c 0 (subsignal scl "
|
||||
"(pins o N10)) (subsignal sda (pins io N11))) "
|
||||
"has subsignals"):
|
||||
i2c = self.cm.request("i2c", 0, dir="i")
|
||||
|
||||
def test_wrong_request_with_wrong_xdr(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Data rate of (pins o A0) must be a non-negative integer, not -1"):
|
||||
user_led = self.cm.request("user_led", 0, xdr=-1)
|
||||
|
||||
def test_wrong_request_with_xdr_dict(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Data rate must be a dict, not 2, because (resource i2c 0 (subsignal scl "
|
||||
"(pins o N10)) (subsignal sda (pins io N11))) "
|
||||
"has subsignals"):
|
||||
i2c = self.cm.request("i2c", 0, xdr=2)
|
||||
|
||||
def test_wrong_clock_constraint_twice(self):
|
||||
clk100 = self.cm.request("clk100")
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Cannot add clock constraint on (sig clk100_0__i), which is already "
|
||||
"constrained to 100000000.0 Hz"):
|
||||
self.cm.add_clock_constraint(clk100.i, 1e6)
|
|
@@ -0,0 +1,9 @@
|
|||
from ..hdl.ir import Fragment
|
||||
from ..compat import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class CompatTestCase(FHDLTestCase):
|
||||
def test_fragment_get(self):
|
||||
m = Module()
|
||||
f = Fragment.get(m, platform=None)
|
|
@@ -0,0 +1,28 @@
|
|||
import sys
|
||||
import subprocess
|
||||
from pathlib import Path
|
||||
|
||||
from .utils import *
|
||||
|
||||
|
||||
def example_test(name):
|
||||
path = (Path(__file__).parent / ".." / ".." / "examples" / name).resolve()
|
||||
def test_function(self):
|
||||
subprocess.check_call([sys.executable, str(path), "generate"], stdout=subprocess.DEVNULL)
|
||||
return test_function
|
||||
|
||||
|
||||
class ExamplesTestCase(FHDLTestCase):
|
||||
test_alu = example_test("basic/alu.py")
|
||||
test_alu_hier = example_test("basic/alu_hier.py")
|
||||
test_arst = example_test("basic/arst.py")
|
||||
test_cdc = example_test("basic/cdc.py")
|
||||
test_ctr = example_test("basic/ctr.py")
|
||||
test_ctr_en = example_test("basic/ctr_en.py")
|
||||
test_fsm = example_test("basic/fsm.py")
|
||||
test_gpio = example_test("basic/gpio.py")
|
||||
test_inst = example_test("basic/inst.py")
|
||||
test_mem = example_test("basic/mem.py")
|
||||
test_pmux = example_test("basic/pmux.py")
|
||||
test_por = example_test("basic/por.py")
|
||||
test_uart = example_test("basic/uart.py")
|
|
@@ -0,0 +1,926 @@
|
|||
import warnings
|
||||
from enum import Enum
|
||||
|
||||
from ..hdl.ast import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class UnsignedEnum(Enum):
|
||||
FOO = 1
|
||||
BAR = 2
|
||||
BAZ = 3
|
||||
|
||||
|
||||
class SignedEnum(Enum):
|
||||
FOO = -1
|
||||
BAR = 0
|
||||
BAZ = +1
|
||||
|
||||
|
||||
class StringEnum(Enum):
|
||||
FOO = "a"
|
||||
BAR = "b"
|
||||
|
||||
|
||||
class ShapeTestCase(FHDLTestCase):
|
||||
def test_make(self):
|
||||
s1 = Shape()
|
||||
self.assertEqual(s1.width, 1)
|
||||
self.assertEqual(s1.signed, False)
|
||||
s2 = Shape(signed=True)
|
||||
self.assertEqual(s2.width, 1)
|
||||
self.assertEqual(s2.signed, True)
|
||||
s3 = Shape(3, True)
|
||||
self.assertEqual(s3.width, 3)
|
||||
self.assertEqual(s3.signed, True)
|
||||
|
||||
def test_make_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Width must be a non-negative integer, not -1"):
|
||||
Shape(-1)
|
||||
|
||||
def test_tuple(self):
|
||||
width, signed = Shape()
|
||||
self.assertEqual(width, 1)
|
||||
self.assertEqual(signed, False)
|
||||
|
||||
def test_unsigned(self):
|
||||
s1 = unsigned(2)
|
||||
self.assertIsInstance(s1, Shape)
|
||||
self.assertEqual(s1.width, 2)
|
||||
self.assertEqual(s1.signed, False)
|
||||
|
||||
def test_signed(self):
|
||||
s1 = signed(2)
|
||||
self.assertIsInstance(s1, Shape)
|
||||
self.assertEqual(s1.width, 2)
|
||||
self.assertEqual(s1.signed, True)
|
||||
|
||||
def test_cast_shape(self):
|
||||
s1 = Shape.cast(unsigned(1))
|
||||
self.assertEqual(s1.width, 1)
|
||||
self.assertEqual(s1.signed, False)
|
||||
s2 = Shape.cast(signed(3))
|
||||
self.assertEqual(s2.width, 3)
|
||||
self.assertEqual(s2.signed, True)
|
||||
|
||||
def test_cast_int(self):
|
||||
s1 = Shape.cast(2)
|
||||
self.assertEqual(s1.width, 2)
|
||||
self.assertEqual(s1.signed, False)
|
||||
|
||||
def test_cast_int_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Width must be a non-negative integer, not -1"):
|
||||
Shape.cast(-1)
|
||||
|
||||
def test_cast_tuple(self):
|
||||
with warnings.catch_warnings():
|
||||
warnings.filterwarnings(action="ignore", category=DeprecationWarning)
|
||||
s1 = Shape.cast((1, True))
|
||||
self.assertEqual(s1.width, 1)
|
||||
self.assertEqual(s1.signed, True)
|
||||
|
||||
def test_cast_tuple_wrong(self):
|
||||
with warnings.catch_warnings():
|
||||
warnings.filterwarnings(action="ignore", category=DeprecationWarning)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Width must be a non-negative integer, not -1"):
|
||||
Shape.cast((-1, True))
|
||||
|
||||
def test_cast_range(self):
|
||||
s1 = Shape.cast(range(0, 8))
|
||||
self.assertEqual(s1.width, 3)
|
||||
self.assertEqual(s1.signed, False)
|
||||
s2 = Shape.cast(range(0, 9))
|
||||
self.assertEqual(s2.width, 4)
|
||||
self.assertEqual(s2.signed, False)
|
||||
s3 = Shape.cast(range(-7, 8))
|
||||
self.assertEqual(s3.width, 4)
|
||||
self.assertEqual(s3.signed, True)
|
||||
s4 = Shape.cast(range(0, 1))
|
||||
self.assertEqual(s4.width, 1)
|
||||
self.assertEqual(s4.signed, False)
|
||||
s5 = Shape.cast(range(-1, 0))
|
||||
self.assertEqual(s5.width, 1)
|
||||
self.assertEqual(s5.signed, True)
|
||||
s6 = Shape.cast(range(0, 0))
|
||||
self.assertEqual(s6.width, 0)
|
||||
self.assertEqual(s6.signed, False)
|
||||
s7 = Shape.cast(range(-1, -1))
|
||||
self.assertEqual(s7.width, 0)
|
||||
self.assertEqual(s7.signed, True)
|
||||
|
||||
def test_cast_enum(self):
|
||||
s1 = Shape.cast(UnsignedEnum)
|
||||
self.assertEqual(s1.width, 2)
|
||||
self.assertEqual(s1.signed, False)
|
||||
s2 = Shape.cast(SignedEnum)
|
||||
self.assertEqual(s2.width, 2)
|
||||
self.assertEqual(s2.signed, True)
|
||||
|
||||
def test_cast_enum_bad(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Only enumerations with integer values can be used as value shapes"):
|
||||
Shape.cast(StringEnum)
|
||||
|
||||
def test_cast_bad(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Object 'foo' cannot be used as value shape"):
|
||||
Shape.cast("foo")
|
||||
|
||||
|
||||
class ValueTestCase(FHDLTestCase):
|
||||
def test_cast(self):
|
||||
self.assertIsInstance(Value.cast(0), Const)
|
||||
self.assertIsInstance(Value.cast(True), Const)
|
||||
c = Const(0)
|
||||
self.assertIs(Value.cast(c), c)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Object 'str' cannot be converted to an nMigen value"):
|
||||
Value.cast("str")
|
||||
|
||||
def test_cast_enum(self):
|
||||
e1 = Value.cast(UnsignedEnum.FOO)
|
||||
self.assertIsInstance(e1, Const)
|
||||
self.assertEqual(e1.shape(), unsigned(2))
|
||||
e2 = Value.cast(SignedEnum.FOO)
|
||||
self.assertIsInstance(e2, Const)
|
||||
self.assertEqual(e2.shape(), signed(2))
|
||||
|
||||
def test_cast_enum_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Only enumerations with integer values can be used as value shapes"):
|
||||
Value.cast(StringEnum.FOO)
|
||||
|
||||
def test_bool(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Attempted to convert nMigen value to boolean"):
|
||||
if Const(0):
|
||||
pass
|
||||
|
||||
def test_len(self):
|
||||
self.assertEqual(len(Const(10)), 4)
|
||||
|
||||
def test_getitem_int(self):
|
||||
s1 = Const(10)[0]
|
||||
self.assertIsInstance(s1, Slice)
|
||||
self.assertEqual(s1.start, 0)
|
||||
self.assertEqual(s1.stop, 1)
|
||||
s2 = Const(10)[-1]
|
||||
self.assertIsInstance(s2, Slice)
|
||||
self.assertEqual(s2.start, 3)
|
||||
self.assertEqual(s2.stop, 4)
|
||||
with self.assertRaises(IndexError,
|
||||
msg="Cannot index 5 bits into 4-bit value"):
|
||||
Const(10)[5]
|
||||
|
||||
def test_getitem_slice(self):
|
||||
s1 = Const(10)[1:3]
|
||||
self.assertIsInstance(s1, Slice)
|
||||
self.assertEqual(s1.start, 1)
|
||||
self.assertEqual(s1.stop, 3)
|
||||
s2 = Const(10)[1:-2]
|
||||
self.assertIsInstance(s2, Slice)
|
||||
self.assertEqual(s2.start, 1)
|
||||
self.assertEqual(s2.stop, 2)
|
||||
s3 = Const(31)[::2]
|
||||
self.assertIsInstance(s3, Cat)
|
||||
self.assertIsInstance(s3.parts[0], Slice)
|
||||
self.assertEqual(s3.parts[0].start, 0)
|
||||
self.assertEqual(s3.parts[0].stop, 1)
|
||||
self.assertIsInstance(s3.parts[1], Slice)
|
||||
self.assertEqual(s3.parts[1].start, 2)
|
||||
self.assertEqual(s3.parts[1].stop, 3)
|
||||
self.assertIsInstance(s3.parts[2], Slice)
|
||||
self.assertEqual(s3.parts[2].start, 4)
|
||||
self.assertEqual(s3.parts[2].stop, 5)
|
||||
|
||||
def test_getitem_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Cannot index value with 'str'"):
|
||||
Const(31)["str"]
|
||||
|
||||
|
||||
class ConstTestCase(FHDLTestCase):
|
||||
def test_shape(self):
|
||||
self.assertEqual(Const(0).shape(), unsigned(1))
|
||||
self.assertIsInstance(Const(0).shape(), Shape)
|
||||
self.assertEqual(Const(1).shape(), unsigned(1))
|
||||
self.assertEqual(Const(10).shape(), unsigned(4))
|
||||
self.assertEqual(Const(-10).shape(), signed(5))
|
||||
|
||||
self.assertEqual(Const(1, 4).shape(), unsigned(4))
|
||||
self.assertEqual(Const(-1, 4).shape(), signed(4))
|
||||
self.assertEqual(Const(1, signed(4)).shape(), signed(4))
|
||||
self.assertEqual(Const(0, unsigned(0)).shape(), unsigned(0))
|
||||
|
||||
def test_shape_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Width must be a non-negative integer, not -1"):
|
||||
Const(1, -1)
|
||||
|
||||
def test_normalization(self):
|
||||
self.assertEqual(Const(0b10110, signed(5)).value, -10)
|
||||
|
||||
def test_value(self):
|
||||
self.assertEqual(Const(10).value, 10)
|
||||
|
||||
def test_repr(self):
|
||||
self.assertEqual(repr(Const(10)), "(const 4'd10)")
|
||||
self.assertEqual(repr(Const(-10)), "(const 5'sd-10)")
|
||||
|
||||
def test_hash(self):
|
||||
with self.assertRaises(TypeError):
|
||||
hash(Const(0))
|
||||
|
||||
|
||||
class OperatorTestCase(FHDLTestCase):
|
||||
def test_bool(self):
|
||||
v = Const(0, 4).bool()
|
||||
self.assertEqual(repr(v), "(b (const 4'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_invert(self):
|
||||
v = ~Const(0, 4)
|
||||
self.assertEqual(repr(v), "(~ (const 4'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(4))
|
||||
|
||||
def test_as_unsigned(self):
|
||||
v = Const(-1, signed(4)).as_unsigned()
|
||||
self.assertEqual(repr(v), "(u (const 4'sd-1))")
|
||||
self.assertEqual(v.shape(), unsigned(4))
|
||||
|
||||
def test_as_signed(self):
|
||||
v = Const(1, unsigned(4)).as_signed()
|
||||
self.assertEqual(repr(v), "(s (const 4'd1))")
|
||||
self.assertEqual(v.shape(), signed(4))
|
||||
|
||||
def test_neg(self):
|
||||
v1 = -Const(0, unsigned(4))
|
||||
self.assertEqual(repr(v1), "(- (const 4'd0))")
|
||||
self.assertEqual(v1.shape(), signed(5))
|
||||
v2 = -Const(0, signed(4))
|
||||
self.assertEqual(repr(v2), "(- (const 4'sd0))")
|
||||
self.assertEqual(v2.shape(), signed(5))
|
||||
|
||||
def test_add(self):
|
||||
v1 = Const(0, unsigned(4)) + Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(+ (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(7))
|
||||
v2 = Const(0, signed(4)) + Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(7))
|
||||
v3 = Const(0, signed(4)) + Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(6))
|
||||
v4 = Const(0, unsigned(4)) + Const(0, signed(4))
|
||||
self.assertEqual(v4.shape(), signed(6))
|
||||
v5 = 10 + Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(5))
|
||||
|
||||
def test_sub(self):
|
||||
v1 = Const(0, unsigned(4)) - Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(- (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(7))
|
||||
v2 = Const(0, signed(4)) - Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(7))
|
||||
v3 = Const(0, signed(4)) - Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(6))
|
||||
v4 = Const(0, unsigned(4)) - Const(0, signed(4))
|
||||
self.assertEqual(v4.shape(), signed(6))
|
||||
v5 = 10 - Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(5))
|
||||
|
||||
def test_mul(self):
|
||||
v1 = Const(0, unsigned(4)) * Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(* (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(10))
|
||||
v2 = Const(0, signed(4)) * Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(10))
|
||||
v3 = Const(0, signed(4)) * Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(8))
|
||||
v5 = 10 * Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(8))
|
||||
|
||||
def test_mod(self):
|
||||
v1 = Const(0, unsigned(4)) % Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(% (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(4))
|
||||
v3 = Const(0, signed(4)) % Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(4))
|
||||
v5 = 10 % Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(4))
|
||||
|
||||
def test_mod_wrong(self):
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Division by a signed value is not supported"):
|
||||
Const(0, signed(4)) % Const(0, signed(6))
|
||||
|
||||
def test_floordiv(self):
|
||||
v1 = Const(0, unsigned(4)) // Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(// (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(4))
|
||||
v3 = Const(0, signed(4)) // Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(4))
|
||||
v5 = 10 // Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(4))
|
||||
|
||||
def test_floordiv_wrong(self):
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Division by a signed value is not supported"):
|
||||
Const(0, signed(4)) // Const(0, signed(6))
|
||||
|
||||
def test_and(self):
|
||||
v1 = Const(0, unsigned(4)) & Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(& (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(6))
|
||||
v2 = Const(0, signed(4)) & Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(6))
|
||||
v3 = Const(0, signed(4)) & Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(5))
|
||||
v4 = Const(0, unsigned(4)) & Const(0, signed(4))
|
||||
self.assertEqual(v4.shape(), signed(5))
|
||||
v5 = 10 & Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(4))
|
||||
|
||||
def test_or(self):
|
||||
v1 = Const(0, unsigned(4)) | Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(| (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(6))
|
||||
v2 = Const(0, signed(4)) | Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(6))
|
||||
v3 = Const(0, signed(4)) | Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(5))
|
||||
v4 = Const(0, unsigned(4)) | Const(0, signed(4))
|
||||
self.assertEqual(v4.shape(), signed(5))
|
||||
v5 = 10 | Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(4))
|
||||
|
||||
def test_xor(self):
|
||||
v1 = Const(0, unsigned(4)) ^ Const(0, unsigned(6))
|
||||
self.assertEqual(repr(v1), "(^ (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(6))
|
||||
v2 = Const(0, signed(4)) ^ Const(0, signed(6))
|
||||
self.assertEqual(v2.shape(), signed(6))
|
||||
v3 = Const(0, signed(4)) ^ Const(0, unsigned(4))
|
||||
self.assertEqual(v3.shape(), signed(5))
|
||||
v4 = Const(0, unsigned(4)) ^ Const(0, signed(4))
|
||||
self.assertEqual(v4.shape(), signed(5))
|
||||
v5 = 10 ^ Const(0, 4)
|
||||
self.assertEqual(v5.shape(), unsigned(4))
|
||||
|
||||
def test_shl(self):
|
||||
v1 = Const(1, 4) << Const(4)
|
||||
self.assertEqual(repr(v1), "(<< (const 4'd1) (const 3'd4))")
|
||||
self.assertEqual(v1.shape(), unsigned(11))
|
||||
|
||||
def test_shl_wrong(self):
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Shift by a signed value is not supported"):
|
||||
1 << Const(0, signed(6))
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Shift by a signed value is not supported"):
|
||||
Const(1, unsigned(4)) << -1
|
||||
|
||||
def test_shr(self):
|
||||
v1 = Const(1, 4) >> Const(4)
|
||||
self.assertEqual(repr(v1), "(>> (const 4'd1) (const 3'd4))")
|
||||
self.assertEqual(v1.shape(), unsigned(4))
|
||||
|
||||
def test_shr_wrong(self):
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Shift by a signed value is not supported"):
|
||||
1 << Const(0, signed(6))
|
||||
with self.assertRaises(NotImplementedError,
|
||||
msg="Shift by a signed value is not supported"):
|
||||
Const(1, unsigned(4)) << -1
|
||||
|
||||
def test_lt(self):
|
||||
v = Const(0, 4) < Const(0, 6)
|
||||
self.assertEqual(repr(v), "(< (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_le(self):
|
||||
v = Const(0, 4) <= Const(0, 6)
|
||||
self.assertEqual(repr(v), "(<= (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_gt(self):
|
||||
v = Const(0, 4) > Const(0, 6)
|
||||
self.assertEqual(repr(v), "(> (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_ge(self):
|
||||
v = Const(0, 4) >= Const(0, 6)
|
||||
self.assertEqual(repr(v), "(>= (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_eq(self):
|
||||
v = Const(0, 4) == Const(0, 6)
|
||||
self.assertEqual(repr(v), "(== (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_ne(self):
|
||||
v = Const(0, 4) != Const(0, 6)
|
||||
self.assertEqual(repr(v), "(!= (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_mux(self):
|
||||
s = Const(0)
|
||||
v1 = Mux(s, Const(0, unsigned(4)), Const(0, unsigned(6)))
|
||||
self.assertEqual(repr(v1), "(m (const 1'd0) (const 4'd0) (const 6'd0))")
|
||||
self.assertEqual(v1.shape(), unsigned(6))
|
||||
v2 = Mux(s, Const(0, signed(4)), Const(0, signed(6)))
|
||||
self.assertEqual(v2.shape(), signed(6))
|
||||
v3 = Mux(s, Const(0, signed(4)), Const(0, unsigned(4)))
|
||||
self.assertEqual(v3.shape(), signed(5))
|
||||
v4 = Mux(s, Const(0, unsigned(4)), Const(0, signed(4)))
|
||||
self.assertEqual(v4.shape(), signed(5))
|
||||
|
||||
def test_mux_wide(self):
|
||||
s = Const(0b100)
|
||||
v = Mux(s, Const(0, unsigned(4)), Const(0, unsigned(6)))
|
||||
self.assertEqual(repr(v), "(m (b (const 3'd4)) (const 4'd0) (const 6'd0))")
|
||||
|
||||
def test_mux_bool(self):
|
||||
v = Mux(True, Const(0), Const(0))
|
||||
self.assertEqual(repr(v), "(m (const 1'd1) (const 1'd0) (const 1'd0))")
|
||||
|
||||
def test_bool(self):
|
||||
v = Const(0).bool()
|
||||
self.assertEqual(repr(v), "(b (const 1'd0))")
|
||||
self.assertEqual(v.shape(), unsigned(1))
|
||||
|
||||
def test_any(self):
|
||||
v = Const(0b101).any()
|
||||
self.assertEqual(repr(v), "(r| (const 3'd5))")
|
||||
|
||||
def test_all(self):
|
||||
v = Const(0b101).all()
|
||||
self.assertEqual(repr(v), "(r& (const 3'd5))")
|
||||
|
||||
def test_xor(self):
|
||||
v = Const(0b101).xor()
|
||||
self.assertEqual(repr(v), "(r^ (const 3'd5))")
|
||||
|
||||
def test_matches(self):
|
||||
s = Signal(4)
|
||||
self.assertRepr(s.matches(), "(const 1'd0)")
|
||||
self.assertRepr(s.matches(1), """
|
||||
(== (sig s) (const 1'd1))
|
||||
""")
|
||||
self.assertRepr(s.matches(0, 1), """
|
||||
(r| (cat (== (sig s) (const 1'd0)) (== (sig s) (const 1'd1))))
|
||||
""")
|
||||
self.assertRepr(s.matches("10--"), """
|
||||
(== (& (sig s) (const 4'd12)) (const 4'd8))
|
||||
""")
|
||||
self.assertRepr(s.matches("1 0--"), """
|
||||
(== (& (sig s) (const 4'd12)) (const 4'd8))
|
||||
""")
|
||||
|
||||
def test_matches_enum(self):
|
||||
s = Signal(SignedEnum)
|
||||
self.assertRepr(s.matches(SignedEnum.FOO), """
|
||||
(== (sig s) (const 1'sd-1))
|
||||
""")
|
||||
|
||||
def test_matches_width_wrong(self):
|
||||
s = Signal(4)
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Match pattern '--' must have the same width as match value (which is 4)"):
|
||||
s.matches("--")
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Match pattern '10110' is wider than match value (which has width 4); "
|
||||
"comparison will never be true"):
|
||||
s.matches(0b10110)
|
||||
|
||||
def test_matches_bits_wrong(self):
|
||||
s = Signal(4)
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Match pattern 'abc' must consist of 0, 1, and - (don't care) bits, "
|
||||
"and may include whitespace"):
|
||||
s.matches("abc")
|
||||
|
||||
def test_matches_pattern_wrong(self):
|
||||
s = Signal(4)
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Match pattern must be an integer, a string, or an enumeration, not 1.0"):
|
||||
s.matches(1.0)
|
||||
|
||||
def test_hash(self):
|
||||
with self.assertRaises(TypeError):
|
||||
hash(Const(0) + Const(0))
|
||||
|
||||
|
||||
class SliceTestCase(FHDLTestCase):
|
||||
def test_shape(self):
|
||||
s1 = Const(10)[2]
|
||||
self.assertEqual(s1.shape(), unsigned(1))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
s2 = Const(-10)[0:2]
|
||||
self.assertEqual(s2.shape(), unsigned(2))
|
||||
|
||||
def test_start_end_negative(self):
|
||||
c = Const(0, 8)
|
||||
s1 = Slice(c, 0, -1)
|
||||
self.assertEqual((s1.start, s1.stop), (0, 7))
|
||||
s1 = Slice(c, -4, -1)
|
||||
self.assertEqual((s1.start, s1.stop), (4, 7))
|
||||
|
||||
def test_start_end_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Slice start must be an integer, not 'x'"):
|
||||
Slice(0, "x", 1)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Slice stop must be an integer, not 'x'"):
|
||||
Slice(0, 1, "x")
|
||||
|
||||
def test_start_end_out_of_range(self):
|
||||
c = Const(0, 8)
|
||||
with self.assertRaises(IndexError,
|
||||
msg="Cannot start slice 10 bits into 8-bit value"):
|
||||
Slice(c, 10, 12)
|
||||
with self.assertRaises(IndexError,
|
||||
msg="Cannot stop slice 12 bits into 8-bit value"):
|
||||
Slice(c, 0, 12)
|
||||
with self.assertRaises(IndexError,
|
||||
msg="Slice start 4 must be less than slice stop 2"):
|
||||
Slice(c, 4, 2)
|
||||
|
||||
def test_repr(self):
|
||||
s1 = Const(10)[2]
|
||||
self.assertEqual(repr(s1), "(slice (const 4'd10) 2:3)")
|
||||
|
||||
|
||||
class BitSelectTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.c = Const(0, 8)
|
||||
self.s = Signal(range(self.c.width))
|
||||
|
||||
def test_shape(self):
|
||||
s1 = self.c.bit_select(self.s, 2)
|
||||
self.assertIsInstance(s1, Part)
|
||||
self.assertEqual(s1.shape(), unsigned(2))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
s2 = self.c.bit_select(self.s, 0)
|
||||
self.assertIsInstance(s2, Part)
|
||||
self.assertEqual(s2.shape(), unsigned(0))
|
||||
|
||||
def test_stride(self):
|
||||
s1 = self.c.bit_select(self.s, 2)
|
||||
self.assertIsInstance(s1, Part)
|
||||
self.assertEqual(s1.stride, 1)
|
||||
|
||||
def test_const(self):
|
||||
s1 = self.c.bit_select(1, 2)
|
||||
self.assertIsInstance(s1, Slice)
|
||||
self.assertRepr(s1, """(slice (const 8'd0) 1:3)""")
|
||||
|
||||
def test_width_wrong(self):
|
||||
with self.assertRaises(TypeError):
|
||||
self.c.bit_select(self.s, -1)
|
||||
|
||||
def test_repr(self):
|
||||
s = self.c.bit_select(self.s, 2)
|
||||
self.assertEqual(repr(s), "(part (const 8'd0) (sig s) 2 1)")
|
||||
|
||||
|
||||
class WordSelectTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.c = Const(0, 8)
|
||||
self.s = Signal(range(self.c.width))
|
||||
|
||||
def test_shape(self):
|
||||
s1 = self.c.word_select(self.s, 2)
|
||||
self.assertIsInstance(s1, Part)
|
||||
self.assertEqual(s1.shape(), unsigned(2))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
|
||||
def test_stride(self):
|
||||
s1 = self.c.word_select(self.s, 2)
|
||||
self.assertIsInstance(s1, Part)
|
||||
self.assertEqual(s1.stride, 2)
|
||||
|
||||
def test_const(self):
|
||||
s1 = self.c.word_select(1, 2)
|
||||
self.assertIsInstance(s1, Slice)
|
||||
self.assertRepr(s1, """(slice (const 8'd0) 2:4)""")
|
||||
|
||||
def test_width_wrong(self):
|
||||
with self.assertRaises(TypeError):
|
||||
self.c.word_select(self.s, 0)
|
||||
with self.assertRaises(TypeError):
|
||||
self.c.word_select(self.s, -1)
|
||||
|
||||
def test_repr(self):
|
||||
s = self.c.word_select(self.s, 2)
|
||||
self.assertEqual(repr(s), "(part (const 8'd0) (sig s) 2 2)")
|
||||
|
||||
|
||||
class CatTestCase(FHDLTestCase):
|
||||
def test_shape(self):
|
||||
c0 = Cat()
|
||||
self.assertEqual(c0.shape(), unsigned(0))
|
||||
self.assertIsInstance(c0.shape(), Shape)
|
||||
c1 = Cat(Const(10))
|
||||
self.assertEqual(c1.shape(), unsigned(4))
|
||||
c2 = Cat(Const(10), Const(1))
|
||||
self.assertEqual(c2.shape(), unsigned(5))
|
||||
c3 = Cat(Const(10), Const(1), Const(0))
|
||||
self.assertEqual(c3.shape(), unsigned(6))
|
||||
|
||||
def test_repr(self):
|
||||
c1 = Cat(Const(10), Const(1))
|
||||
self.assertEqual(repr(c1), "(cat (const 4'd10) (const 1'd1))")
|
||||
|
||||
|
||||
class ReplTestCase(FHDLTestCase):
|
||||
def test_shape(self):
|
||||
s1 = Repl(Const(10), 3)
|
||||
self.assertEqual(s1.shape(), unsigned(12))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
s2 = Repl(Const(10), 0)
|
||||
self.assertEqual(s2.shape(), unsigned(0))
|
||||
|
||||
def test_count_wrong(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Repl(Const(10), -1)
|
||||
with self.assertRaises(TypeError):
|
||||
Repl(Const(10), "str")
|
||||
|
||||
def test_repr(self):
|
||||
s = Repl(Const(10), 3)
|
||||
self.assertEqual(repr(s), "(repl (const 4'd10) 3)")
|
||||
|
||||
|
||||
class ArrayTestCase(FHDLTestCase):
|
||||
def test_acts_like_array(self):
|
||||
a = Array([1,2,3])
|
||||
self.assertSequenceEqual(a, [1,2,3])
|
||||
self.assertEqual(a[1], 2)
|
||||
a[1] = 4
|
||||
self.assertSequenceEqual(a, [1,4,3])
|
||||
del a[1]
|
||||
self.assertSequenceEqual(a, [1,3])
|
||||
a.insert(1, 2)
|
||||
self.assertSequenceEqual(a, [1,2,3])
|
||||
|
||||
def test_becomes_immutable(self):
|
||||
a = Array([1,2,3])
|
||||
s1 = Signal(range(len(a)))
|
||||
s2 = Signal(range(len(a)))
|
||||
v1 = a[s1]
|
||||
v2 = a[s2]
|
||||
with self.assertRaisesRegex(ValueError,
|
||||
regex=r"^Array can no longer be mutated after it was indexed with a value at "):
|
||||
a[1] = 2
|
||||
with self.assertRaisesRegex(ValueError,
|
||||
regex=r"^Array can no longer be mutated after it was indexed with a value at "):
|
||||
del a[1]
|
||||
with self.assertRaisesRegex(ValueError,
|
||||
regex=r"^Array can no longer be mutated after it was indexed with a value at "):
|
||||
a.insert(1, 2)
|
||||
|
||||
def test_repr(self):
|
||||
a = Array([1,2,3])
|
||||
self.assertEqual(repr(a), "(array mutable [1, 2, 3])")
|
||||
s = Signal(range(len(a)))
|
||||
v = a[s]
|
||||
self.assertEqual(repr(a), "(array [1, 2, 3])")
|
||||
|
||||
|
||||
class ArrayProxyTestCase(FHDLTestCase):
|
||||
def test_index_shape(self):
|
||||
m = Array(Array(x * y for y in range(1, 4)) for x in range(1, 4))
|
||||
a = Signal(range(3))
|
||||
b = Signal(range(3))
|
||||
v = m[a][b]
|
||||
self.assertEqual(v.shape(), unsigned(4))
|
||||
|
||||
def test_attr_shape(self):
|
||||
from collections import namedtuple
|
||||
pair = namedtuple("pair", ("p", "n"))
|
||||
a = Array(pair(i, -i) for i in range(10))
|
||||
s = Signal(range(len(a)))
|
||||
v = a[s]
|
||||
self.assertEqual(v.p.shape(), unsigned(4))
|
||||
self.assertEqual(v.n.shape(), signed(6))
|
||||
|
||||
def test_repr(self):
|
||||
a = Array([1, 2, 3])
|
||||
s = Signal(range(3))
|
||||
v = a[s]
|
||||
self.assertEqual(repr(v), "(proxy (array [1, 2, 3]) (sig s))")
|
||||
|
||||
|
||||
class SignalTestCase(FHDLTestCase):
|
||||
def test_shape(self):
|
||||
s1 = Signal()
|
||||
self.assertEqual(s1.shape(), unsigned(1))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
s2 = Signal(2)
|
||||
self.assertEqual(s2.shape(), unsigned(2))
|
||||
s3 = Signal(unsigned(2))
|
||||
self.assertEqual(s3.shape(), unsigned(2))
|
||||
s4 = Signal(signed(2))
|
||||
self.assertEqual(s4.shape(), signed(2))
|
||||
s5 = Signal(0)
|
||||
self.assertEqual(s5.shape(), unsigned(0))
|
||||
s6 = Signal(range(16))
|
||||
self.assertEqual(s6.shape(), unsigned(4))
|
||||
s7 = Signal(range(4, 16))
|
||||
self.assertEqual(s7.shape(), unsigned(4))
|
||||
s8 = Signal(range(-4, 16))
|
||||
self.assertEqual(s8.shape(), signed(5))
|
||||
s9 = Signal(range(-20, 16))
|
||||
self.assertEqual(s9.shape(), signed(6))
|
||||
s10 = Signal(range(0))
|
||||
self.assertEqual(s10.shape(), unsigned(0))
|
||||
s11 = Signal(range(1))
|
||||
self.assertEqual(s11.shape(), unsigned(1))
|
||||
|
||||
def test_shape_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Width must be a non-negative integer, not -10"):
|
||||
Signal(-10)
|
||||
|
||||
def test_name(self):
|
||||
s1 = Signal()
|
||||
self.assertEqual(s1.name, "s1")
|
||||
s2 = Signal(name="sig")
|
||||
self.assertEqual(s2.name, "sig")
|
||||
|
||||
def test_reset(self):
|
||||
s1 = Signal(4, reset=0b111, reset_less=True)
|
||||
self.assertEqual(s1.reset, 0b111)
|
||||
self.assertEqual(s1.reset_less, True)
|
||||
|
||||
def test_reset_enum(self):
|
||||
s1 = Signal(2, reset=UnsignedEnum.BAR)
|
||||
self.assertEqual(s1.reset, 2)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Reset value has to be an int or an integral Enum"
|
||||
):
|
||||
Signal(1, reset=StringEnum.FOO)
|
||||
|
||||
def test_reset_narrow(self):
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Reset value 8 requires 4 bits to represent, but the signal only has 3 bits"):
|
||||
Signal(3, reset=8)
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Reset value 4 requires 4 bits to represent, but the signal only has 3 bits"):
|
||||
Signal(signed(3), reset=4)
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Reset value -5 requires 4 bits to represent, but the signal only has 3 bits"):
|
||||
Signal(signed(3), reset=-5)
|
||||
|
||||
def test_attrs(self):
|
||||
s1 = Signal()
|
||||
self.assertEqual(s1.attrs, {})
|
||||
s2 = Signal(attrs={"no_retiming": True})
|
||||
self.assertEqual(s2.attrs, {"no_retiming": True})
|
||||
|
||||
def test_repr(self):
|
||||
s1 = Signal()
|
||||
self.assertEqual(repr(s1), "(sig s1)")
|
||||
|
||||
def test_like(self):
|
||||
s1 = Signal.like(Signal(4))
|
||||
self.assertEqual(s1.shape(), unsigned(4))
|
||||
s2 = Signal.like(Signal(range(-15, 1)))
|
||||
self.assertEqual(s2.shape(), signed(5))
|
||||
s3 = Signal.like(Signal(4, reset=0b111, reset_less=True))
|
||||
self.assertEqual(s3.reset, 0b111)
|
||||
self.assertEqual(s3.reset_less, True)
|
||||
s4 = Signal.like(Signal(attrs={"no_retiming": True}))
|
||||
self.assertEqual(s4.attrs, {"no_retiming": True})
|
||||
s5 = Signal.like(Signal(decoder=str))
|
||||
self.assertEqual(s5.decoder, str)
|
||||
s6 = Signal.like(10)
|
||||
self.assertEqual(s6.shape(), unsigned(4))
|
||||
s7 = [Signal.like(Signal(4))][0]
|
||||
self.assertEqual(s7.name, "$like")
|
||||
s8 = Signal.like(s1, name_suffix="_ff")
|
||||
self.assertEqual(s8.name, "s1_ff")
|
||||
|
||||
def test_decoder(self):
|
||||
class Color(Enum):
|
||||
RED = 1
|
||||
BLUE = 2
|
||||
s = Signal(decoder=Color)
|
||||
self.assertEqual(s.decoder(1), "RED/1")
|
||||
self.assertEqual(s.decoder(3), "3")
|
||||
|
||||
def test_enum(self):
|
||||
s1 = Signal(UnsignedEnum)
|
||||
self.assertEqual(s1.shape(), unsigned(2))
|
||||
s2 = Signal(SignedEnum)
|
||||
self.assertEqual(s2.shape(), signed(2))
|
||||
self.assertEqual(s2.decoder(SignedEnum.FOO), "FOO/-1")
|
||||
|
||||
|
||||
class ClockSignalTestCase(FHDLTestCase):
|
||||
def test_domain(self):
|
||||
s1 = ClockSignal()
|
||||
self.assertEqual(s1.domain, "sync")
|
||||
s2 = ClockSignal("pix")
|
||||
self.assertEqual(s2.domain, "pix")
|
||||
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Clock domain name must be a string, not 1"):
|
||||
ClockSignal(1)
|
||||
|
||||
def test_shape(self):
|
||||
s1 = ClockSignal()
|
||||
self.assertEqual(s1.shape(), unsigned(1))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
|
||||
def test_repr(self):
|
||||
s1 = ClockSignal()
|
||||
self.assertEqual(repr(s1), "(clk sync)")
|
||||
|
||||
def test_wrong_name_comb(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Domain 'comb' does not have a clock"):
|
||||
ClockSignal("comb")
|
||||
|
||||
|
||||
class ResetSignalTestCase(FHDLTestCase):
|
||||
def test_domain(self):
|
||||
s1 = ResetSignal()
|
||||
self.assertEqual(s1.domain, "sync")
|
||||
s2 = ResetSignal("pix")
|
||||
self.assertEqual(s2.domain, "pix")
|
||||
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Clock domain name must be a string, not 1"):
|
||||
ResetSignal(1)
|
||||
|
||||
def test_shape(self):
|
||||
s1 = ResetSignal()
|
||||
self.assertEqual(s1.shape(), unsigned(1))
|
||||
self.assertIsInstance(s1.shape(), Shape)
|
||||
|
||||
def test_repr(self):
|
||||
s1 = ResetSignal()
|
||||
self.assertEqual(repr(s1), "(rst sync)")
|
||||
|
||||
def test_wrong_name_comb(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Domain 'comb' does not have a reset"):
|
||||
ResetSignal("comb")


class MockUserValue(UserValue):
    def __init__(self, lowered):
        super().__init__()
        self.lower_count = 0
        self.lowered = lowered

    def lower(self):
        self.lower_count += 1
        return self.lowered


class UserValueTestCase(FHDLTestCase):
    def test_shape(self):
        uv = MockUserValue(1)
        self.assertEqual(uv.shape(), unsigned(1))
        self.assertIsInstance(uv.shape(), Shape)
        uv.lowered = 2
        self.assertEqual(uv.shape(), unsigned(1))
        self.assertEqual(uv.lower_count, 1)


class SampleTestCase(FHDLTestCase):
    def test_const(self):
        s = Sample(1, 1, "sync")
        self.assertEqual(s.shape(), unsigned(1))

    def test_signal(self):
        s1 = Sample(Signal(2), 1, "sync")
        self.assertEqual(s1.shape(), unsigned(2))
        s2 = Sample(ClockSignal(), 1, "sync")
        s3 = Sample(ResetSignal(), 1, "sync")

    def test_wrong_value_operator(self):
        with self.assertRaises(TypeError,
                "Sampled value must be a signal or a constant, not "
                "(+ (sig $signal) (const 1'd1))"):
            Sample(Signal() + 1, 1, "sync")

    def test_wrong_clocks_neg(self):
        with self.assertRaises(ValueError,
                "Cannot sample a value 1 cycles in the future"):
            Sample(Signal(), -1, "sync")

    def test_wrong_domain(self):
        with self.assertRaises(TypeError,
                "Domain name must be a string or None, not 0"):
            Sample(Signal(), 1, 0)


class InitialTestCase(FHDLTestCase):
    def test_initial(self):
        i = Initial()
        self.assertEqual(i.shape(), unsigned(1))

@@ -0,0 +1,78 @@
from ..hdl.cd import *
from .utils import *


class ClockDomainTestCase(FHDLTestCase):
    def test_name(self):
        sync = ClockDomain()
        self.assertEqual(sync.name, "sync")
        self.assertEqual(sync.clk.name, "clk")
        self.assertEqual(sync.rst.name, "rst")
        self.assertEqual(sync.local, False)
        pix = ClockDomain()
        self.assertEqual(pix.name, "pix")
        self.assertEqual(pix.clk.name, "pix_clk")
        self.assertEqual(pix.rst.name, "pix_rst")
        cd_pix = ClockDomain()
        self.assertEqual(pix.name, "pix")
        dom = [ClockDomain("foo")][0]
        self.assertEqual(dom.name, "foo")
        with self.assertRaises(ValueError,
                msg="Clock domain name must be specified explicitly"):
            ClockDomain()
        cd_reset = ClockDomain(local=True)
        self.assertEqual(cd_reset.local, True)

    def test_edge(self):
        sync = ClockDomain()
        self.assertEqual(sync.clk_edge, "pos")
        sync = ClockDomain(clk_edge="pos")
        self.assertEqual(sync.clk_edge, "pos")
        sync = ClockDomain(clk_edge="neg")
        self.assertEqual(sync.clk_edge, "neg")

    def test_edge_wrong(self):
        with self.assertRaises(ValueError,
                msg="Domain clock edge must be one of 'pos' or 'neg', not 'xxx'"):
            ClockDomain("sync", clk_edge="xxx")

    def test_with_reset(self):
        pix = ClockDomain()
        self.assertIsNotNone(pix.clk)
        self.assertIsNotNone(pix.rst)
        self.assertFalse(pix.async_reset)

    def test_without_reset(self):
        pix = ClockDomain(reset_less=True)
        self.assertIsNotNone(pix.clk)
        self.assertIsNone(pix.rst)
        self.assertFalse(pix.async_reset)

    def test_async_reset(self):
        pix = ClockDomain(async_reset=True)
        self.assertIsNotNone(pix.clk)
        self.assertIsNotNone(pix.rst)
        self.assertTrue(pix.async_reset)

    def test_rename(self):
        sync = ClockDomain()
        self.assertEqual(sync.name, "sync")
        self.assertEqual(sync.clk.name, "clk")
        self.assertEqual(sync.rst.name, "rst")
        sync.rename("pix")
        self.assertEqual(sync.name, "pix")
        self.assertEqual(sync.clk.name, "pix_clk")
        self.assertEqual(sync.rst.name, "pix_rst")

    def test_rename_reset_less(self):
        sync = ClockDomain(reset_less=True)
        self.assertEqual(sync.name, "sync")
        self.assertEqual(sync.clk.name, "clk")
        sync.rename("pix")
        self.assertEqual(sync.name, "pix")
        self.assertEqual(sync.clk.name, "pix_clk")

    def test_wrong_name_comb(self):
        with self.assertRaises(ValueError,
                msg="Domain 'comb' may not be clocked"):
            comb = ClockDomain()

@@ -0,0 +1,762 @@
# nmigen: UnusedElaboratable=no

from collections import OrderedDict
from enum import Enum

from ..hdl.ast import *
from ..hdl.cd import *
from ..hdl.dsl import *
from .utils import *


class DSLTestCase(FHDLTestCase):
    def setUp(self):
        self.s1 = Signal()
        self.s2 = Signal()
        self.s3 = Signal()
        self.c1 = Signal()
        self.c2 = Signal()
        self.c3 = Signal()
        self.w1 = Signal(4)

def test_cant_inherit(self):
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Instead of inheriting from `Module`, inherit from `Elaboratable` and "
|
||||
"return a `Module` from the `elaborate(self, platform)` method"):
|
||||
class ORGate(Module):
|
||||
pass
|
||||
|
||||
def test_d_comb(self):
|
||||
m = Module()
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertEqual(m._driving[self.c1], None)
|
||||
self.assertRepr(m._statements, """(
|
||||
(eq (sig c1) (const 1'd1))
|
||||
)""")
|
||||
|
||||
def test_d_sync(self):
|
||||
m = Module()
|
||||
m.d.sync += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertEqual(m._driving[self.c1], "sync")
|
||||
self.assertRepr(m._statements, """(
|
||||
(eq (sig c1) (const 1'd1))
|
||||
)""")
|
||||
|
||||
def test_d_pix(self):
|
||||
m = Module()
|
||||
m.d.pix += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertEqual(m._driving[self.c1], "pix")
|
||||
self.assertRepr(m._statements, """(
|
||||
(eq (sig c1) (const 1'd1))
|
||||
)""")
|
||||
|
||||
def test_d_index(self):
|
||||
m = Module()
|
||||
m.d["pix"] += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertEqual(m._driving[self.c1], "pix")
|
||||
self.assertRepr(m._statements, """(
|
||||
(eq (sig c1) (const 1'd1))
|
||||
)""")
|
||||
|
||||
def test_d_no_conflict(self):
|
||||
m = Module()
|
||||
m.d.comb += self.w1[0].eq(1)
|
||||
m.d.comb += self.w1[1].eq(1)
|
||||
|
||||
def test_d_conflict(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Driver-driver conflict: trying to drive (sig c1) from d.sync, but it "
|
||||
"is already driven from d.comb"):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m.d.sync += self.c1.eq(1)
|
||||
|
||||
def test_d_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Cannot assign 'd.pix' attribute; did you mean 'd.pix +='?"):
|
||||
m.d.pix = None
|
||||
|
||||
def test_d_asgn_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Only assignments and property checks may be appended to d.sync"):
|
||||
m.d.sync += Switch(self.s1, {})
|
||||
|
||||
def test_comb_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="'Module' object has no attribute 'comb'; did you mean 'd.comb'?"):
|
||||
m.comb += self.c1.eq(1)
|
||||
|
||||
def test_sync_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="'Module' object has no attribute 'sync'; did you mean 'd.sync'?"):
|
||||
m.sync += self.c1.eq(1)
|
||||
|
||||
def test_attr_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="'Module' object has no attribute 'nonexistentattr'"):
|
||||
m.nonexistentattr
|
||||
|
||||
def test_d_suspicious(self):
|
||||
m = Module()
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Using '<module>.d.submodules' would add statements to clock domain "
|
||||
"'submodules'; did you mean <module>.submodules instead?"):
|
||||
m.d.submodules += []
|
||||
|
||||
def test_clock_signal(self):
|
||||
m = Module()
|
||||
m.d.comb += ClockSignal("pix").eq(ClockSignal())
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(eq (clk pix) (clk sync))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_reset_signal(self):
|
||||
m = Module()
|
||||
m.d.comb += ResetSignal("pix").eq(1)
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(eq (rst pix) (const 1'd1))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_sample_domain(self):
|
||||
m = Module()
|
||||
i = Signal()
|
||||
o1 = Signal()
|
||||
o2 = Signal()
|
||||
o3 = Signal()
|
||||
m.d.sync += o1.eq(Past(i))
|
||||
m.d.pix += o2.eq(Past(i))
|
||||
m.d.pix += o3.eq(Past(i, domain="sync"))
|
||||
f = m.elaborate(platform=None)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig o1) (sample (sig i) @ sync[1]))
|
||||
(eq (sig o2) (sample (sig i) @ pix[1]))
|
||||
(eq (sig o3) (sample (sig i) @ sync[1]))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1))
|
||||
(case 1 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_Elif(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.Elif(self.s2):
|
||||
m.d.sync += self.c2.eq(0)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1) (sig s2))
|
||||
(case -1 (eq (sig c1) (const 1'd1)))
|
||||
(case 1- (eq (sig c2) (const 1'd0)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_Elif_Else(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.Elif(self.s2):
|
||||
m.d.sync += self.c2.eq(0)
|
||||
with m.Else():
|
||||
m.d.comb += self.c3.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1) (sig s2))
|
||||
(case -1 (eq (sig c1) (const 1'd1)))
|
||||
(case 1- (eq (sig c2) (const 1'd0)))
|
||||
(default (eq (sig c3) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_If(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.If(self.s2):
|
||||
m.d.comb += self.c2.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1))
|
||||
(case 1 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
(switch (cat (sig s2))
|
||||
(case 1 (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_nested_If(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.If(self.s2):
|
||||
m.d.comb += self.c2.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1))
|
||||
(case 1 (eq (sig c1) (const 1'd1))
|
||||
(switch (cat (sig s2))
|
||||
(case 1 (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_dangling_Else(self):
|
||||
m = Module()
|
||||
with m.If(self.s1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.If(self.s2):
|
||||
m.d.comb += self.c2.eq(1)
|
||||
with m.Else():
|
||||
m.d.comb += self.c3.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (sig s1))
|
||||
(case 1
|
||||
(eq (sig c1) (const 1'd1))
|
||||
(switch (cat (sig s2))
|
||||
(case 1 (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
(default
|
||||
(eq (sig c3) (const 1'd1))
|
||||
)
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Elif_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Elif without preceding If"):
|
||||
with m.Elif(self.s2):
|
||||
pass
|
||||
|
||||
def test_Else_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Else without preceding If/Elif"):
|
||||
with m.Else():
|
||||
pass
|
||||
|
||||
def test_If_wide(self):
|
||||
m = Module()
|
||||
with m.If(self.w1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (b (sig w1)))
|
||||
(case 1 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_If_signed_suspicious(self):
|
||||
m = Module()
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Signed values in If/Elif conditions usually result from inverting Python "
|
||||
"booleans with ~, which leads to unexpected results: ~True is -2, which is "
|
||||
"truthful. Replace `~flag` with `not flag`. (If this is a false positive, "
|
||||
"silence this warning with `m.If(x)` → `m.If(x.bool())`.)"):
|
||||
with m.If(~True):
|
||||
pass
|
||||
|
||||
def test_Elif_signed_suspicious(self):
|
||||
m = Module()
|
||||
with m.If(0):
|
||||
pass
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Signed values in If/Elif conditions usually result from inverting Python "
|
||||
"booleans with ~, which leads to unexpected results: ~True is -2, which is "
|
||||
"truthful. Replace `~flag` with `not flag`. (If this is a false positive, "
|
||||
"silence this warning with `m.If(x)` → `m.If(x.bool())`.)"):
|
||||
with m.Elif(~True):
|
||||
pass
|
||||
|
||||
def test_if_If_Elif_Else(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="`if m.If(...):` does not work; use `with m.If(...)`"):
|
||||
if m.If(0):
|
||||
pass
|
||||
with m.If(0):
|
||||
pass
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="`if m.Elif(...):` does not work; use `with m.Elif(...)`"):
|
||||
if m.Elif(0):
|
||||
pass
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="`if m.Else(...):` does not work; use `with m.Else(...)`"):
|
||||
if m.Else():
|
||||
pass
|
||||
|
||||
def test_Switch(self):
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with m.Case(3):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.Case("11--"):
|
||||
m.d.comb += self.c2.eq(1)
|
||||
with m.Case("1 0--"):
|
||||
m.d.comb += self.c2.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig w1)
|
||||
(case 0011 (eq (sig c1) (const 1'd1)))
|
||||
(case 11-- (eq (sig c2) (const 1'd1)))
|
||||
(case 10-- (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Switch_default_Case(self):
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with m.Case(3):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.Case():
|
||||
m.d.comb += self.c2.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig w1)
|
||||
(case 0011 (eq (sig c1) (const 1'd1)))
|
||||
(default (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Switch_default_Default(self):
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with m.Case(3):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
with m.Default():
|
||||
m.d.comb += self.c2.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig w1)
|
||||
(case 0011 (eq (sig c1) (const 1'd1)))
|
||||
(default (eq (sig c2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Switch_const_test(self):
|
||||
m = Module()
|
||||
with m.Switch(1):
|
||||
with m.Case(1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (const 1'd1)
|
||||
(case 1 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Switch_enum(self):
|
||||
class Color(Enum):
|
||||
RED = 1
|
||||
BLUE = 2
|
||||
m = Module()
|
||||
se = Signal(Color)
|
||||
with m.Switch(se):
|
||||
with m.Case(Color.RED):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig se)
|
||||
(case 01 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Case_width_wrong(self):
|
||||
class Color(Enum):
|
||||
RED = 0b10101010
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Case pattern '--' must have the same width as switch value (which is 4)"):
|
||||
with m.Case("--"):
|
||||
pass
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Case pattern '10110' is wider than switch value (which has width 4); "
|
||||
"comparison will never be true"):
|
||||
with m.Case(0b10110):
|
||||
pass
|
||||
with self.assertWarns(SyntaxWarning,
|
||||
msg="Case pattern '10101010' (Color.RED) is wider than switch value "
|
||||
"(which has width 4); comparison will never be true"):
|
||||
with m.Case(Color.RED):
|
||||
pass
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig w1) )
|
||||
)
|
||||
""")
|
||||
|
||||
def test_Case_bits_wrong(self):
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Case pattern 'abc' must consist of 0, 1, and - (don't care) bits, "
|
||||
"and may include whitespace"):
|
||||
with m.Case("abc"):
|
||||
pass
|
||||
|
||||
def test_Case_pattern_wrong(self):
|
||||
m = Module()
|
||||
with m.Switch(self.w1):
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Case pattern must be an integer, a string, or an enumeration, not 1.0"):
|
||||
with m.Case(1.0):
|
||||
pass
|
||||
|
||||
def test_Case_outside_Switch_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Case is not permitted outside of Switch"):
|
||||
with m.Case():
|
||||
pass
|
||||
|
||||
def test_If_inside_Switch_wrong(self):
|
||||
m = Module()
|
||||
with m.Switch(self.s1):
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="If is not permitted directly inside of Switch; "
|
||||
"it is permitted inside of Switch Case"):
|
||||
with m.If(self.s2):
|
||||
pass
|
||||
|
||||
def test_FSM_basic(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
c = Signal()
|
||||
m = Module()
|
||||
with m.FSM():
|
||||
with m.State("FIRST"):
|
||||
m.d.comb += a.eq(1)
|
||||
m.next = "SECOND"
|
||||
with m.State("SECOND"):
|
||||
m.d.sync += b.eq(~b)
|
||||
with m.If(c):
|
||||
m.next = "FIRST"
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig fsm_state)
|
||||
(case 0
|
||||
(eq (sig a) (const 1'd1))
|
||||
(eq (sig fsm_state) (const 1'd1))
|
||||
)
|
||||
(case 1
|
||||
(eq (sig b) (~ (sig b)))
|
||||
(switch (cat (sig c))
|
||||
(case 1
|
||||
(eq (sig fsm_state) (const 1'd0)))
|
||||
)
|
||||
)
|
||||
)
|
||||
)
|
||||
""")
|
||||
self.assertEqual({repr(k): v for k, v in m._driving.items()}, {
|
||||
"(sig a)": None,
|
||||
"(sig fsm_state)": "sync",
|
||||
"(sig b)": "sync",
|
||||
})
|
||||
|
||||
frag = m.elaborate(platform=None)
|
||||
fsm = frag.find_generated("fsm")
|
||||
self.assertIsInstance(fsm.state, Signal)
|
||||
self.assertEqual(fsm.encoding, OrderedDict({
|
||||
"FIRST": 0,
|
||||
"SECOND": 1,
|
||||
}))
|
||||
self.assertEqual(fsm.decoding, OrderedDict({
|
||||
0: "FIRST",
|
||||
1: "SECOND"
|
||||
}))
|
||||
|
||||
def test_FSM_reset(self):
|
||||
a = Signal()
|
||||
m = Module()
|
||||
with m.FSM(reset="SECOND"):
|
||||
with m.State("FIRST"):
|
||||
m.d.comb += a.eq(0)
|
||||
m.next = "SECOND"
|
||||
with m.State("SECOND"):
|
||||
m.next = "FIRST"
|
||||
m._flush()
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (sig fsm_state)
|
||||
(case 0
|
||||
(eq (sig a) (const 1'd0))
|
||||
(eq (sig fsm_state) (const 1'd1))
|
||||
)
|
||||
(case 1
|
||||
(eq (sig fsm_state) (const 1'd0))
|
||||
)
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_FSM_ongoing(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
m = Module()
|
||||
with m.FSM() as fsm:
|
||||
m.d.comb += b.eq(fsm.ongoing("SECOND"))
|
||||
with m.State("FIRST"):
|
||||
pass
|
||||
m.d.comb += a.eq(fsm.ongoing("FIRST"))
|
||||
with m.State("SECOND"):
|
||||
pass
|
||||
m._flush()
|
||||
self.assertEqual(m._generated["fsm"].state.reset, 1)
|
||||
self.maxDiff = 10000
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(eq (sig b) (== (sig fsm_state) (const 1'd0)))
|
||||
(eq (sig a) (== (sig fsm_state) (const 1'd1)))
|
||||
(switch (sig fsm_state)
|
||||
(case 1
|
||||
)
|
||||
(case 0
|
||||
)
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_FSM_empty(self):
|
||||
m = Module()
|
||||
with m.FSM():
|
||||
pass
|
||||
self.assertRepr(m._statements, """
|
||||
()
|
||||
""")
|
||||
|
||||
def test_FSM_wrong_domain(self):
|
||||
m = Module()
|
||||
with self.assertRaises(ValueError,
|
||||
msg="FSM may not be driven by the 'comb' domain"):
|
||||
with m.FSM(domain="comb"):
|
||||
pass
|
||||
|
||||
def test_FSM_wrong_undefined(self):
|
||||
m = Module()
|
||||
with self.assertRaises(NameError,
|
||||
msg="FSM state 'FOO' is referenced but not defined"):
|
||||
with m.FSM() as fsm:
|
||||
fsm.ongoing("FOO")
|
||||
|
||||
def test_FSM_wrong_redefined(self):
|
||||
m = Module()
|
||||
with m.FSM():
|
||||
with m.State("FOO"):
|
||||
pass
|
||||
with self.assertRaises(NameError,
|
||||
msg="FSM state 'FOO' is already defined"):
|
||||
with m.State("FOO"):
|
||||
pass
|
||||
|
||||
def test_FSM_wrong_next(self):
|
||||
m = Module()
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="Only assignment to `m.next` is permitted"):
|
||||
m.next
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="`m.next = <...>` is only permitted inside an FSM state"):
|
||||
m.next = "FOO"
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="`m.next = <...>` is only permitted inside an FSM state"):
|
||||
with m.FSM():
|
||||
m.next = "FOO"
|
||||
|
||||
def test_If_inside_FSM_wrong(self):
|
||||
m = Module()
|
||||
with m.FSM():
|
||||
with m.State("FOO"):
|
||||
pass
|
||||
with self.assertRaises(SyntaxError,
|
||||
msg="If is not permitted directly inside of FSM; "
|
||||
"it is permitted inside of FSM State"):
|
||||
with m.If(self.s2):
|
||||
pass
|
||||
|
||||
def test_auto_pop_ctrl(self):
|
||||
m = Module()
|
||||
with m.If(self.w1):
|
||||
m.d.comb += self.c1.eq(1)
|
||||
m.d.comb += self.c2.eq(1)
|
||||
self.assertRepr(m._statements, """
|
||||
(
|
||||
(switch (cat (b (sig w1)))
|
||||
(case 1 (eq (sig c1) (const 1'd1)))
|
||||
)
|
||||
(eq (sig c2) (const 1'd1))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_submodule_anon(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules += m2
|
||||
self.assertEqual(m1._anon_submodules, [m2])
|
||||
self.assertEqual(m1._named_submodules, {})
|
||||
|
||||
def test_submodule_anon_multi(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m3 = Module()
|
||||
m1.submodules += m2, m3
|
||||
self.assertEqual(m1._anon_submodules, [m2, m3])
|
||||
self.assertEqual(m1._named_submodules, {})
|
||||
|
||||
def test_submodule_named(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules.foo = m2
|
||||
self.assertEqual(m1._anon_submodules, [])
|
||||
self.assertEqual(m1._named_submodules, {"foo": m2})
|
||||
|
||||
def test_submodule_named_index(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules["foo"] = m2
|
||||
self.assertEqual(m1._anon_submodules, [])
|
||||
self.assertEqual(m1._named_submodules, {"foo": m2})
|
||||
|
||||
def test_submodule_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Trying to add 1, which does not implement .elaborate(), as a submodule"):
|
||||
m.submodules.foo = 1
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Trying to add 1, which does not implement .elaborate(), as a submodule"):
|
||||
m.submodules += 1
|
||||
|
||||
def test_submodule_named_conflict(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules.foo = m2
|
||||
with self.assertRaises(NameError, msg="Submodule named 'foo' already exists"):
|
||||
m1.submodules.foo = m2
|
||||
|
||||
def test_submodule_get(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules.foo = m2
|
||||
m3 = m1.submodules.foo
|
||||
self.assertEqual(m2, m3)
|
||||
|
||||
def test_submodule_get_index(self):
|
||||
m1 = Module()
|
||||
m2 = Module()
|
||||
m1.submodules["foo"] = m2
|
||||
m3 = m1.submodules["foo"]
|
||||
self.assertEqual(m2, m3)
|
||||
|
||||
def test_submodule_get_unset(self):
|
||||
m1 = Module()
|
||||
with self.assertRaises(AttributeError, msg="No submodule named 'foo' exists"):
|
||||
m2 = m1.submodules.foo
|
||||
with self.assertRaises(AttributeError, msg="No submodule named 'foo' exists"):
|
||||
m2 = m1.submodules["foo"]
|
||||
|
||||
def test_domain_named_implicit(self):
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
self.assertEqual(len(m._domains), 1)
|
||||
|
||||
def test_domain_named_explicit(self):
|
||||
m = Module()
|
||||
m.domains.foo = ClockDomain()
|
||||
self.assertEqual(len(m._domains), 1)
|
||||
self.assertEqual(m._domains[0].name, "foo")
|
||||
|
||||
def test_domain_add_wrong(self):
|
||||
m = Module()
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Only clock domains may be added to `m.domains`, not 1"):
|
||||
m.domains.foo = 1
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Only clock domains may be added to `m.domains`, not 1"):
|
||||
m.domains += 1
|
||||
|
||||
def test_domain_add_wrong_name(self):
|
||||
m = Module()
|
||||
with self.assertRaises(NameError,
|
||||
msg="Clock domain name 'bar' must match name in `m.domains.foo += ...` syntax"):
|
||||
m.domains.foo = ClockDomain("bar")
|
||||
|
||||
def test_lower(self):
|
||||
m1 = Module()
|
||||
m1.d.comb += self.c1.eq(self.s1)
|
||||
m2 = Module()
|
||||
m2.d.comb += self.c2.eq(self.s2)
|
||||
m2.d.sync += self.c3.eq(self.s3)
|
||||
m1.submodules.foo = m2
|
||||
|
||||
f1 = m1.elaborate(platform=None)
|
||||
self.assertRepr(f1.statements, """
|
||||
(
|
||||
(eq (sig c1) (sig s1))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(f1.drivers, {
|
||||
None: SignalSet((self.c1,))
|
||||
})
|
||||
self.assertEqual(len(f1.subfragments), 1)
|
||||
(f2, f2_name), = f1.subfragments
|
||||
self.assertEqual(f2_name, "foo")
|
||||
self.assertRepr(f2.statements, """
|
||||
(
|
||||
(eq (sig c2) (sig s2))
|
||||
(eq (sig c3) (sig s3))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(f2.drivers, {
|
||||
None: SignalSet((self.c2,)),
|
||||
"sync": SignalSet((self.c3,))
|
||||
})
|
||||
self.assertEqual(len(f2.subfragments), 0)

@@ -0,0 +1,852 @@
# nmigen: UnusedElaboratable=no

from collections import OrderedDict

from ..hdl.ast import *
from ..hdl.cd import *
from ..hdl.ir import *
from ..hdl.mem import *
from .utils import *


class BadElaboratable(Elaboratable):
    def elaborate(self, platform):
        return


class FragmentGetTestCase(FHDLTestCase):
    def test_get_wrong(self):
        with self.assertRaises(AttributeError,
                msg="Object None cannot be elaborated"):
            Fragment.get(None, platform=None)

        with self.assertWarns(UserWarning,
                msg=".elaborate() returned None; missing return statement?"):
            with self.assertRaises(AttributeError,
                    msg="Object None cannot be elaborated"):
                Fragment.get(BadElaboratable(), platform=None)


class FragmentGeneratedTestCase(FHDLTestCase):
    def test_find_subfragment(self):
        f1 = Fragment()
        f2 = Fragment()
        f1.add_subfragment(f2, "f2")

        self.assertEqual(f1.find_subfragment(0), f2)
        self.assertEqual(f1.find_subfragment("f2"), f2)

def test_find_subfragment_wrong(self):
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f1.add_subfragment(f2, "f2")
|
||||
|
||||
with self.assertRaises(NameError,
|
||||
msg="No subfragment at index #1"):
|
||||
f1.find_subfragment(1)
|
||||
with self.assertRaises(NameError,
|
||||
msg="No subfragment with name 'fx'"):
|
||||
f1.find_subfragment("fx")
|
||||
|
||||
def test_find_generated(self):
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f2.generated["sig"] = sig = Signal()
|
||||
f1.add_subfragment(f2, "f2")
|
||||
|
||||
self.assertEqual(SignalKey(f1.find_generated("f2", "sig")),
|
||||
SignalKey(sig))
|
||||
|
||||
|
||||
class FragmentDriversTestCase(FHDLTestCase):
|
||||
def test_empty(self):
|
||||
f = Fragment()
|
||||
self.assertEqual(list(f.iter_comb()), [])
|
||||
self.assertEqual(list(f.iter_sync()), [])
|
||||
|
||||
|
||||
class FragmentPortsTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s1 = Signal()
|
||||
self.s2 = Signal()
|
||||
self.s3 = Signal()
|
||||
self.c1 = Signal()
|
||||
self.c2 = Signal()
|
||||
self.c3 = Signal()
|
||||
|
||||
def test_empty(self):
|
||||
f = Fragment()
|
||||
self.assertEqual(list(f.iter_ports()), [])
|
||||
|
||||
f._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([]))
|
||||
|
||||
def test_iter_signals(self):
|
||||
f = Fragment()
|
||||
f.add_ports(self.s1, self.s2, dir="io")
|
||||
self.assertEqual(SignalSet((self.s1, self.s2)), f.iter_signals())
|
||||
|
||||
def test_self_contained(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.c1.eq(self.s1),
|
||||
self.s1.eq(self.c1)
|
||||
)
|
||||
|
||||
f._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([]))
|
||||
|
||||
def test_infer_input(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
|
||||
f._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(self.s1, "i")
|
||||
]))
|
||||
|
||||
def test_request_output(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
|
||||
f._propagate_ports(ports=(self.c1,), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(self.s1, "i"),
|
||||
(self.c1, "o")
|
||||
]))
|
||||
|
||||
def test_input_in_subfragment(self):
|
||||
f1 = Fragment()
|
||||
f1.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.s1.eq(0)
|
||||
)
|
||||
f1.add_subfragment(f2)
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict())
|
||||
self.assertEqual(f2.ports, SignalDict([
|
||||
(self.s1, "o"),
|
||||
]))
|
||||
|
||||
def test_input_only_in_subfragment(self):
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
f1.add_subfragment(f2)
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict([
|
||||
(self.s1, "i"),
|
||||
]))
|
||||
self.assertEqual(f2.ports, SignalDict([
|
||||
(self.s1, "i"),
|
||||
]))
|
||||
|
||||
def test_output_from_subfragment(self):
|
||||
f1 = Fragment()
|
||||
f1.add_statements(
|
||||
self.c1.eq(0)
|
||||
)
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.c2.eq(1)
|
||||
)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1._propagate_ports(ports=(self.c2,), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict([
|
||||
(self.c2, "o"),
|
||||
]))
|
||||
self.assertEqual(f2.ports, SignalDict([
|
||||
(self.c2, "o"),
|
||||
]))
|
||||
|
||||
def test_output_from_subfragment_2(self):
|
||||
f1 = Fragment()
|
||||
f1.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.c2.eq(self.s1)
|
||||
)
|
||||
f1.add_subfragment(f2)
|
||||
f3 = Fragment()
|
||||
f3.add_statements(
|
||||
self.s1.eq(0)
|
||||
)
|
||||
f2.add_subfragment(f3)
|
||||
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f2.ports, SignalDict([
|
||||
(self.s1, "o"),
|
||||
]))
|
||||
|
||||
def test_input_output_sibling(self):
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.c1.eq(self.c2)
|
||||
)
|
||||
f1.add_subfragment(f2)
|
||||
f3 = Fragment()
|
||||
f3.add_statements(
|
||||
self.c2.eq(0)
|
||||
)
|
||||
f3.add_driver(self.c2)
|
||||
f1.add_subfragment(f3)
|
||||
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict())
|
||||
|
||||
def test_output_input_sibling(self):
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.c2.eq(0)
|
||||
)
|
||||
f2.add_driver(self.c2)
|
||||
f1.add_subfragment(f2)
|
||||
f3 = Fragment()
|
||||
f3.add_statements(
|
||||
self.c1.eq(self.c2)
|
||||
)
|
||||
f1.add_subfragment(f3)
|
||||
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict())
|
||||
|
||||
def test_input_cd(self):
|
||||
sync = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
f.add_domains(sync)
|
||||
f.add_driver(self.c1, "sync")
|
||||
|
||||
f._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(self.s1, "i"),
|
||||
(sync.clk, "i"),
|
||||
(sync.rst, "i"),
|
||||
]))
|
||||
|
||||
def test_input_cd_reset_less(self):
|
||||
sync = ClockDomain(reset_less=True)
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.c1.eq(self.s1)
|
||||
)
|
||||
f.add_domains(sync)
|
||||
f.add_driver(self.c1, "sync")
|
||||
|
||||
f._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(self.s1, "i"),
|
||||
(sync.clk, "i"),
|
||||
]))
|
||||
|
||||
def test_inout(self):
|
||||
s = Signal()
|
||||
f1 = Fragment()
|
||||
f2 = Instance("foo", io_x=s)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict([
|
||||
(s, "io")
|
||||
]))
|
||||
|
||||
def test_in_out_same_signal(self):
|
||||
s = Signal()
|
||||
|
||||
f1 = Instance("foo", i_x=s, o_y=s)
|
||||
f2 = Fragment()
|
||||
f2.add_subfragment(f1)
|
||||
|
||||
f2._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f1.ports, SignalDict([
|
||||
(s, "o")
|
||||
]))
|
||||
|
||||
f3 = Instance("foo", o_y=s, i_x=s)
|
||||
f4 = Fragment()
|
||||
f4.add_subfragment(f3)
|
||||
|
||||
f4._propagate_ports(ports=(), all_undef_as_ports=True)
|
||||
self.assertEqual(f3.ports, SignalDict([
|
||||
(s, "o")
|
||||
]))
|
||||
|
||||
def test_clk_rst(self):
|
||||
sync = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
|
||||
f = f.prepare(ports=(ClockSignal("sync"), ResetSignal("sync")))
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(sync.clk, "i"),
|
||||
(sync.rst, "i"),
|
||||
]))
|
||||
|
||||
def test_port_wrong(self):
|
||||
f = Fragment()
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Only signals may be added as ports, not (const 1'd1)"):
|
||||
f.prepare(ports=(Const(1),))
|
||||
|
||||
class FragmentDomainsTestCase(FHDLTestCase):
|
||||
def test_iter_signals(self):
|
||||
cd1 = ClockDomain()
|
||||
cd2 = ClockDomain(reset_less=True)
|
||||
s1 = Signal()
|
||||
s2 = Signal()
|
||||
|
||||
f = Fragment()
|
||||
f.add_domains(cd1, cd2)
|
||||
f.add_driver(s1, "cd1")
|
||||
self.assertEqual(SignalSet((cd1.clk, cd1.rst, s1)), f.iter_signals())
|
||||
f.add_driver(s2, "cd2")
|
||||
self.assertEqual(SignalSet((cd1.clk, cd1.rst, cd2.clk, s1, s2)), f.iter_signals())
|
||||
|
||||
def test_propagate_up(self):
|
||||
cd = ClockDomain()
|
||||
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f1.add_subfragment(f2)
|
||||
f2.add_domains(cd)
|
||||
|
||||
f1._propagate_domains_up()
|
||||
self.assertEqual(f1.domains, {"cd": cd})
|
||||
|
||||
def test_propagate_up_local(self):
|
||||
cd = ClockDomain(local=True)
|
||||
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f1.add_subfragment(f2)
|
||||
f2.add_domains(cd)
|
||||
|
||||
f1._propagate_domains_up()
|
||||
self.assertEqual(f1.domains, {})
|
||||
|
||||
def test_domain_conflict(self):
|
||||
cda = ClockDomain("sync")
|
||||
cdb = ClockDomain("sync")
|
||||
|
||||
fa = Fragment()
|
||||
fa.add_domains(cda)
|
||||
fb = Fragment()
|
||||
fb.add_domains(cdb)
|
||||
f = Fragment()
|
||||
f.add_subfragment(fa, "a")
|
||||
f.add_subfragment(fb, "b")
|
||||
|
||||
f._propagate_domains_up()
|
||||
self.assertEqual(f.domains, {"a_sync": cda, "b_sync": cdb})
|
||||
(fa, _), (fb, _) = f.subfragments
|
||||
self.assertEqual(fa.domains, {"a_sync": cda})
|
||||
self.assertEqual(fb.domains, {"b_sync": cdb})
|
||||
|
||||
def test_domain_conflict_anon(self):
|
||||
cda = ClockDomain("sync")
|
||||
cdb = ClockDomain("sync")
|
||||
|
||||
fa = Fragment()
|
||||
fa.add_domains(cda)
|
||||
fb = Fragment()
|
||||
fb.add_domains(cdb)
|
||||
f = Fragment()
|
||||
f.add_subfragment(fa, "a")
|
||||
f.add_subfragment(fb)
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Domain 'sync' is defined by subfragments 'a', <unnamed #1> of fragment "
|
||||
"'top'; it is necessary to either rename subfragment domains explicitly, "
|
||||
"or give names to subfragments"):
|
||||
f._propagate_domains_up()
|
||||
|
||||
def test_domain_conflict_name(self):
|
||||
cda = ClockDomain("sync")
|
||||
cdb = ClockDomain("sync")
|
||||
|
||||
fa = Fragment()
|
||||
fa.add_domains(cda)
|
||||
fb = Fragment()
|
||||
fb.add_domains(cdb)
|
||||
f = Fragment()
|
||||
f.add_subfragment(fa, "x")
|
||||
f.add_subfragment(fb, "x")
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Domain 'sync' is defined by subfragments #0, #1 of fragment 'top', some "
|
||||
"of which have identical names; it is necessary to either rename subfragment "
|
||||
"domains explicitly, or give distinct names to subfragments"):
|
||||
f._propagate_domains_up()
|
||||
|
||||
def test_domain_conflict_rename_drivers(self):
|
||||
cda = ClockDomain("sync")
|
||||
cdb = ClockDomain("sync")
|
||||
|
||||
fa = Fragment()
|
||||
fa.add_domains(cda)
|
||||
fb = Fragment()
|
||||
fb.add_domains(cdb)
|
||||
fb.add_driver(ResetSignal("sync"), None)
|
||||
f = Fragment()
|
||||
f.add_subfragment(fa, "a")
|
||||
f.add_subfragment(fb, "b")
|
||||
|
||||
f._propagate_domains_up()
|
||||
fb_new, _ = f.subfragments[1]
|
||||
self.assertEqual(fb_new.drivers, OrderedDict({
|
||||
None: SignalSet((ResetSignal("b_sync"),))
|
||||
}))
|
||||
|
||||
def test_domain_conflict_rename_drivers(self):
|
||||
cda = ClockDomain("sync")
|
||||
cdb = ClockDomain("sync")
|
||||
s = Signal()
|
||||
|
||||
fa = Fragment()
|
||||
fa.add_domains(cda)
|
||||
fb = Fragment()
|
||||
fb.add_domains(cdb)
|
||||
f = Fragment()
|
||||
f.add_subfragment(fa, "a")
|
||||
f.add_subfragment(fb, "b")
|
||||
f.add_driver(s, "b_sync")
|
||||
|
||||
f._propagate_domains(lambda name: ClockDomain(name))
|
||||
|
||||
def test_propagate_down(self):
|
||||
cd = ClockDomain()
|
||||
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f1.add_domains(cd)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1._propagate_domains_down()
|
||||
self.assertEqual(f2.domains, {"cd": cd})
|
||||
|
||||
def test_propagate_down_idempotent(self):
|
||||
cd = ClockDomain()
|
||||
|
||||
f1 = Fragment()
|
||||
f1.add_domains(cd)
|
||||
f2 = Fragment()
|
||||
f2.add_domains(cd)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1._propagate_domains_down()
|
||||
self.assertEqual(f1.domains, {"cd": cd})
|
||||
self.assertEqual(f2.domains, {"cd": cd})
|
||||
|
||||
def test_propagate(self):
|
||||
cd = ClockDomain()
|
||||
|
||||
f1 = Fragment()
|
||||
f2 = Fragment()
|
||||
f1.add_domains(cd)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
new_domains = f1._propagate_domains(missing_domain=lambda name: None)
|
||||
self.assertEqual(f1.domains, {"cd": cd})
|
||||
self.assertEqual(f2.domains, {"cd": cd})
|
||||
self.assertEqual(new_domains, [])
|
||||
|
||||
def test_propagate_missing(self):
|
||||
s1 = Signal()
|
||||
f1 = Fragment()
|
||||
f1.add_driver(s1, "sync")
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Domain 'sync' is used but not defined"):
|
||||
f1._propagate_domains(missing_domain=lambda name: None)
|
||||
|
||||
def test_propagate_create_missing(self):
|
||||
s1 = Signal()
|
||||
f1 = Fragment()
|
||||
f1.add_driver(s1, "sync")
|
||||
f2 = Fragment()
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
new_domains = f1._propagate_domains(missing_domain=lambda name: ClockDomain(name))
|
||||
self.assertEqual(f1.domains.keys(), {"sync"})
|
||||
self.assertEqual(f2.domains.keys(), {"sync"})
|
||||
self.assertEqual(f1.domains["sync"], f2.domains["sync"])
|
||||
self.assertEqual(new_domains, [f1.domains["sync"]])
|
||||
|
||||
def test_propagate_create_missing_fragment(self):
|
||||
s1 = Signal()
|
||||
f1 = Fragment()
|
||||
f1.add_driver(s1, "sync")
|
||||
|
||||
cd = ClockDomain("sync")
|
||||
f2 = Fragment()
|
||||
f2.add_domains(cd)
|
||||
|
||||
new_domains = f1._propagate_domains(missing_domain=lambda name: f2)
|
||||
self.assertEqual(f1.domains.keys(), {"sync"})
|
||||
self.assertEqual(f1.domains["sync"], f2.domains["sync"])
|
||||
self.assertEqual(new_domains, [])
|
||||
self.assertEqual(f1.subfragments, [
|
||||
(f2, "cd_sync")
|
||||
])
|
||||
|
||||
def test_propagate_create_missing_fragment_many_domains(self):
|
||||
s1 = Signal()
|
||||
f1 = Fragment()
|
||||
f1.add_driver(s1, "sync")
|
||||
|
||||
cd_por = ClockDomain("por")
|
||||
cd_sync = ClockDomain("sync")
|
||||
f2 = Fragment()
|
||||
f2.add_domains(cd_por, cd_sync)
|
||||
|
||||
new_domains = f1._propagate_domains(missing_domain=lambda name: f2)
|
||||
self.assertEqual(f1.domains.keys(), {"sync", "por"})
|
||||
self.assertEqual(f2.domains.keys(), {"sync", "por"})
|
||||
self.assertEqual(f1.domains["sync"], f2.domains["sync"])
|
||||
self.assertEqual(new_domains, [])
|
||||
self.assertEqual(f1.subfragments, [
|
||||
(f2, "cd_sync")
|
||||
])
|
||||
|
||||
def test_propagate_create_missing_fragment_wrong(self):
|
||||
s1 = Signal()
|
||||
f1 = Fragment()
|
||||
f1.add_driver(s1, "sync")
|
||||
|
||||
f2 = Fragment()
|
||||
f2.add_domains(ClockDomain("foo"))
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Fragment returned by missing domain callback does not define requested "
|
||||
"domain 'sync' (defines 'foo')."):
|
||||
f1._propagate_domains(missing_domain=lambda name: f2)
|
||||
|
||||
|
||||
class FragmentHierarchyConflictTestCase(FHDLTestCase):
|
||||
def setUp_self_sub(self):
|
||||
self.s1 = Signal()
|
||||
self.c1 = Signal()
|
||||
self.c2 = Signal()
|
||||
|
||||
self.f1 = Fragment()
|
||||
self.f1.add_statements(self.c1.eq(0))
|
||||
self.f1.add_driver(self.s1)
|
||||
self.f1.add_driver(self.c1, "sync")
|
||||
|
||||
self.f1a = Fragment()
|
||||
self.f1.add_subfragment(self.f1a, "f1a")
|
||||
|
||||
self.f2 = Fragment()
|
||||
self.f2.add_statements(self.c2.eq(1))
|
||||
self.f2.add_driver(self.s1)
|
||||
self.f2.add_driver(self.c2, "sync")
|
||||
self.f1.add_subfragment(self.f2)
|
||||
|
||||
self.f1b = Fragment()
|
||||
self.f1.add_subfragment(self.f1b, "f1b")
|
||||
|
||||
self.f2a = Fragment()
|
||||
self.f2.add_subfragment(self.f2a, "f2a")
|
||||
|
||||
def test_conflict_self_sub(self):
|
||||
self.setUp_self_sub()
|
||||
|
||||
self.f1._resolve_hierarchy_conflicts(mode="silent")
|
||||
self.assertEqual(self.f1.subfragments, [
|
||||
(self.f1a, "f1a"),
|
||||
(self.f1b, "f1b"),
|
||||
(self.f2a, "f2a"),
|
||||
])
|
||||
self.assertRepr(self.f1.statements, """
|
||||
(
|
||||
(eq (sig c1) (const 1'd0))
|
||||
(eq (sig c2) (const 1'd1))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(self.f1.drivers, {
|
||||
None: SignalSet((self.s1,)),
|
||||
"sync": SignalSet((self.c1, self.c2)),
|
||||
})
|
||||
|
||||
def test_conflict_self_sub_error(self):
|
||||
self.setUp_self_sub()
|
||||
|
||||
with self.assertRaises(DriverConflict,
|
||||
msg="Signal '(sig s1)' is driven from multiple fragments: top, top.<unnamed #1>"):
|
||||
self.f1._resolve_hierarchy_conflicts(mode="error")
|
||||
|
||||
def test_conflict_self_sub_warning(self):
|
||||
self.setUp_self_sub()
|
||||
|
||||
with self.assertWarns(DriverConflict,
|
||||
msg="Signal '(sig s1)' is driven from multiple fragments: top, top.<unnamed #1>; "
|
||||
"hierarchy will be flattened"):
|
||||
self.f1._resolve_hierarchy_conflicts(mode="warn")
|
||||
|
||||
def setUp_sub_sub(self):
|
||||
self.s1 = Signal()
|
||||
self.c1 = Signal()
|
||||
self.c2 = Signal()
|
||||
|
||||
self.f1 = Fragment()
|
||||
|
||||
self.f2 = Fragment()
|
||||
self.f2.add_driver(self.s1)
|
||||
self.f2.add_statements(self.c1.eq(0))
|
||||
self.f1.add_subfragment(self.f2)
|
||||
|
||||
self.f3 = Fragment()
|
||||
self.f3.add_driver(self.s1)
|
||||
self.f3.add_statements(self.c2.eq(1))
|
||||
self.f1.add_subfragment(self.f3)
|
||||
|
||||
def test_conflict_sub_sub(self):
|
||||
self.setUp_sub_sub()
|
||||
|
||||
self.f1._resolve_hierarchy_conflicts(mode="silent")
|
||||
self.assertEqual(self.f1.subfragments, [])
|
||||
self.assertRepr(self.f1.statements, """
|
||||
(
|
||||
(eq (sig c1) (const 1'd0))
|
||||
(eq (sig c2) (const 1'd1))
|
||||
)
|
||||
""")
|
||||
|
||||
def setUp_self_subsub(self):
|
||||
self.s1 = Signal()
|
||||
self.c1 = Signal()
|
||||
self.c2 = Signal()
|
||||
|
||||
self.f1 = Fragment()
|
||||
self.f1.add_driver(self.s1)
|
||||
|
||||
self.f2 = Fragment()
|
||||
self.f2.add_statements(self.c1.eq(0))
|
||||
self.f1.add_subfragment(self.f2)
|
||||
|
||||
self.f3 = Fragment()
|
||||
self.f3.add_driver(self.s1)
|
||||
self.f3.add_statements(self.c2.eq(1))
|
||||
self.f2.add_subfragment(self.f3)
|
||||
|
||||
def test_conflict_self_subsub(self):
|
||||
self.setUp_self_subsub()
|
||||
|
||||
self.f1._resolve_hierarchy_conflicts(mode="silent")
|
||||
self.assertEqual(self.f1.subfragments, [])
|
||||
self.assertRepr(self.f1.statements, """
|
||||
(
|
||||
(eq (sig c1) (const 1'd0))
|
||||
(eq (sig c2) (const 1'd1))
|
||||
)
|
||||
""")
|
||||
|
||||
def setUp_memory(self):
|
||||
self.m = Memory(width=8, depth=4)
|
||||
self.fr = self.m.read_port().elaborate(platform=None)
|
||||
self.fw = self.m.write_port().elaborate(platform=None)
|
||||
self.f1 = Fragment()
|
||||
self.f2 = Fragment()
|
||||
self.f2.add_subfragment(self.fr)
|
||||
self.f1.add_subfragment(self.f2)
|
||||
self.f3 = Fragment()
|
||||
self.f3.add_subfragment(self.fw)
|
||||
self.f1.add_subfragment(self.f3)
|
||||
|
||||
def test_conflict_memory(self):
|
||||
self.setUp_memory()
|
||||
|
||||
self.f1._resolve_hierarchy_conflicts(mode="silent")
|
||||
self.assertEqual(self.f1.subfragments, [
|
||||
(self.fr, None),
|
||||
(self.fw, None),
|
||||
])
|
||||
|
||||
def test_conflict_memory_error(self):
|
||||
self.setUp_memory()
|
||||
|
||||
with self.assertRaises(DriverConflict,
|
||||
msg="Memory 'm' is accessed from multiple fragments: top.<unnamed #0>, "
|
||||
"top.<unnamed #1>"):
|
||||
self.f1._resolve_hierarchy_conflicts(mode="error")
|
||||
|
||||
def test_conflict_memory_warning(self):
|
||||
self.setUp_memory()
|
||||
|
||||
with self.assertWarns(DriverConflict,
|
||||
msg="Memory 'm' is accessed from multiple fragments: top.<unnamed #0>, "
|
||||
"top.<unnamed #1>; hierarchy will be flattened"):
|
||||
self.f1._resolve_hierarchy_conflicts(mode="warn")
|
||||
|
||||
def test_explicit_flatten(self):
|
||||
self.f1 = Fragment()
|
||||
self.f2 = Fragment()
|
||||
self.f2.flatten = True
|
||||
self.f1.add_subfragment(self.f2)
|
||||
|
||||
self.f1._resolve_hierarchy_conflicts(mode="silent")
|
||||
self.assertEqual(self.f1.subfragments, [])
|
||||
|
||||
def test_no_conflict_local_domains(self):
|
||||
f1 = Fragment()
|
||||
cd1 = ClockDomain("d", local=True)
|
||||
f1.add_domains(cd1)
|
||||
f1.add_driver(ClockSignal("d"))
|
||||
f2 = Fragment()
|
||||
cd2 = ClockDomain("d", local=True)
|
||||
f2.add_domains(cd2)
|
||||
f2.add_driver(ClockSignal("d"))
|
||||
f3 = Fragment()
|
||||
f3.add_subfragment(f1)
|
||||
f3.add_subfragment(f2)
|
||||
f3.prepare()
|
||||
|
||||
|
||||
class InstanceTestCase(FHDLTestCase):
|
||||
def test_construct(self):
|
||||
s1 = Signal()
|
||||
s2 = Signal()
|
||||
s3 = Signal()
|
||||
s4 = Signal()
|
||||
s5 = Signal()
|
||||
s6 = Signal()
|
||||
inst = Instance("foo",
|
||||
("a", "ATTR1", 1),
|
||||
("p", "PARAM1", 0x1234),
|
||||
("i", "s1", s1),
|
||||
("o", "s2", s2),
|
||||
("io", "s3", s3),
|
||||
a_ATTR2=2,
|
||||
p_PARAM2=0x5678,
|
||||
i_s4=s4,
|
||||
o_s5=s5,
|
||||
io_s6=s6,
|
||||
)
|
||||
self.assertEqual(inst.attrs, OrderedDict([
|
||||
("ATTR1", 1),
|
||||
("ATTR2", 2),
|
||||
]))
|
||||
self.assertEqual(inst.parameters, OrderedDict([
|
||||
("PARAM1", 0x1234),
|
||||
("PARAM2", 0x5678),
|
||||
]))
|
||||
self.assertEqual(inst.named_ports, OrderedDict([
|
||||
("s1", (s1, "i")),
|
||||
("s2", (s2, "o")),
|
||||
("s3", (s3, "io")),
|
||||
("s4", (s4, "i")),
|
||||
("s5", (s5, "o")),
|
||||
("s6", (s6, "io")),
|
||||
]))
|
||||
|
||||
def test_cast_ports(self):
|
||||
inst = Instance("foo",
|
||||
("i", "s1", 1),
|
||||
("o", "s2", 2),
|
||||
("io", "s3", 3),
|
||||
i_s4=4,
|
||||
o_s5=5,
|
||||
io_s6=6,
|
||||
)
|
||||
self.assertRepr(inst.named_ports["s1"][0], "(const 1'd1)")
|
||||
self.assertRepr(inst.named_ports["s2"][0], "(const 2'd2)")
|
||||
self.assertRepr(inst.named_ports["s3"][0], "(const 2'd3)")
|
||||
self.assertRepr(inst.named_ports["s4"][0], "(const 3'd4)")
|
||||
self.assertRepr(inst.named_ports["s5"][0], "(const 3'd5)")
|
||||
self.assertRepr(inst.named_ports["s6"][0], "(const 3'd6)")
|
||||
|
||||
def test_wrong_construct_arg(self):
|
||||
s = Signal()
|
||||
with self.assertRaises(NameError,
|
||||
msg="Instance argument ('', 's1', (sig s)) should be a tuple "
|
||||
"(kind, name, value) where kind is one of \"p\", \"i\", \"o\", or \"io\""):
|
||||
Instance("foo", ("", "s1", s))
|
||||
|
||||
def test_wrong_construct_kwarg(self):
|
||||
s = Signal()
|
||||
with self.assertRaises(NameError,
|
||||
msg="Instance keyword argument x_s1=(sig s) does not start with one of "
|
||||
"\"p_\", \"i_\", \"o_\", or \"io_\""):
|
||||
Instance("foo", x_s1=s)
|
||||
|
||||
def setUp_cpu(self):
|
||||
self.rst = Signal()
|
||||
self.stb = Signal()
|
||||
self.pins = Signal(8)
|
||||
self.datal = Signal(4)
|
||||
self.datah = Signal(4)
|
||||
self.inst = Instance("cpu",
|
||||
p_RESET=0x1234,
|
||||
i_clk=ClockSignal(),
|
||||
i_rst=self.rst,
|
||||
o_stb=self.stb,
|
||||
o_data=Cat(self.datal, self.datah),
|
||||
io_pins=self.pins[:]
|
||||
)
|
||||
self.wrap = Fragment()
|
||||
self.wrap.add_subfragment(self.inst)
|
||||
|
||||
def test_init(self):
|
||||
self.setUp_cpu()
|
||||
f = self.inst
|
||||
self.assertEqual(f.type, "cpu")
|
||||
self.assertEqual(f.parameters, OrderedDict([("RESET", 0x1234)]))
|
||||
self.assertEqual(list(f.named_ports.keys()), ["clk", "rst", "stb", "data", "pins"])
|
||||
self.assertEqual(f.ports, SignalDict([]))
|
||||
|
||||
def test_prepare(self):
|
||||
self.setUp_cpu()
|
||||
f = self.wrap.prepare()
|
||||
sync_clk = f.domains["sync"].clk
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(sync_clk, "i"),
|
||||
(self.rst, "i"),
|
||||
(self.pins, "io"),
|
||||
]))
|
||||
|
||||
def test_prepare_explicit_ports(self):
|
||||
self.setUp_cpu()
|
||||
f = self.wrap.prepare(ports=[self.rst, self.stb])
|
||||
sync_clk = f.domains["sync"].clk
|
||||
sync_rst = f.domains["sync"].rst
|
||||
self.assertEqual(f.ports, SignalDict([
|
||||
(sync_clk, "i"),
|
||||
(sync_rst, "i"),
|
||||
(self.rst, "i"),
|
||||
(self.stb, "o"),
|
||||
(self.pins, "io"),
|
||||
]))
|
||||
|
||||
def test_prepare_slice_in_port(self):
|
||||
s = Signal(2)
|
||||
f = Fragment()
|
||||
f.add_subfragment(Instance("foo", o_O=s[0]))
|
||||
f.add_subfragment(Instance("foo", o_O=s[1]))
|
||||
fp = f.prepare(ports=[s], missing_domain=lambda name: None)
|
||||
self.assertEqual(fp.ports, SignalDict([
|
||||
(s, "o"),
|
||||
]))
|
||||
|
||||
def test_prepare_attrs(self):
|
||||
self.setUp_cpu()
|
||||
self.inst.attrs["ATTR"] = 1
|
||||
f = self.inst.prepare()
|
||||
self.assertEqual(f.attrs, OrderedDict([
|
||||
("ATTR", 1),
|
||||
]))
|
|
@ -0,0 +1,137 @@
|
|||
# nmigen: UnusedElaboratable=no
|
||||
|
||||
from ..hdl.ast import *
|
||||
from ..hdl.mem import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class MemoryTestCase(FHDLTestCase):
|
||||
def test_name(self):
|
||||
m1 = Memory(width=8, depth=4)
|
||||
self.assertEqual(m1.name, "m1")
|
||||
m2 = [Memory(width=8, depth=4)][0]
|
||||
self.assertEqual(m2.name, "$memory")
|
||||
m3 = Memory(width=8, depth=4, name="foo")
|
||||
self.assertEqual(m3.name, "foo")
|
||||
|
||||
def test_geometry(self):
|
||||
m = Memory(width=8, depth=4)
|
||||
self.assertEqual(m.width, 8)
|
||||
self.assertEqual(m.depth, 4)
|
||||
|
||||
def test_geometry_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Memory width must be a non-negative integer, not -1"):
|
||||
m = Memory(width=-1, depth=4)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Memory depth must be a non-negative integer, not -1"):
|
||||
m = Memory(width=8, depth=-1)
|
||||
|
||||
def test_init(self):
|
||||
m = Memory(width=8, depth=4, init=range(4))
|
||||
self.assertEqual(m.init, [0, 1, 2, 3])
|
||||
|
||||
def test_init_wrong_count(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Memory initialization value count exceed memory depth (8 > 4)"):
|
||||
m = Memory(width=8, depth=4, init=range(8))
|
||||
|
||||
def test_init_wrong_type(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Memory initialization value at address 1: "
|
||||
"'str' object cannot be interpreted as an integer"):
|
||||
m = Memory(width=8, depth=4, init=[1, "0"])
|
||||
|
||||
def test_attrs(self):
|
||||
m1 = Memory(width=8, depth=4)
|
||||
self.assertEqual(m1.attrs, {})
|
||||
m2 = Memory(width=8, depth=4, attrs={"ram_block": True})
|
||||
self.assertEqual(m2.attrs, {"ram_block": True})
|
||||
|
||||
def test_read_port_transparent(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
rdport = mem.read_port()
|
||||
self.assertEqual(rdport.memory, mem)
|
||||
self.assertEqual(rdport.domain, "sync")
|
||||
self.assertEqual(rdport.transparent, True)
|
||||
self.assertEqual(len(rdport.addr), 2)
|
||||
self.assertEqual(len(rdport.data), 8)
|
||||
self.assertEqual(len(rdport.en), 1)
|
||||
self.assertIsInstance(rdport.en, Const)
|
||||
self.assertEqual(rdport.en.value, 1)
|
||||
|
||||
def test_read_port_non_transparent(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
rdport = mem.read_port(transparent=False)
|
||||
self.assertEqual(rdport.memory, mem)
|
||||
self.assertEqual(rdport.domain, "sync")
|
||||
self.assertEqual(rdport.transparent, False)
|
||||
self.assertEqual(len(rdport.en), 1)
|
||||
self.assertIsInstance(rdport.en, Signal)
|
||||
self.assertEqual(rdport.en.reset, 1)
|
||||
|
||||
def test_read_port_asynchronous(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
rdport = mem.read_port(domain="comb")
|
||||
self.assertEqual(rdport.memory, mem)
|
||||
self.assertEqual(rdport.domain, "comb")
|
||||
self.assertEqual(rdport.transparent, True)
|
||||
self.assertEqual(len(rdport.en), 1)
|
||||
self.assertIsInstance(rdport.en, Const)
|
||||
self.assertEqual(rdport.en.value, 1)
|
||||
|
||||
def test_read_port_wrong(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Read port cannot be simultaneously asynchronous and non-transparent"):
|
||||
mem.read_port(domain="comb", transparent=False)
|
||||
|
||||
def test_write_port(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
wrport = mem.write_port()
|
||||
self.assertEqual(wrport.memory, mem)
|
||||
self.assertEqual(wrport.domain, "sync")
|
||||
self.assertEqual(wrport.granularity, 8)
|
||||
self.assertEqual(len(wrport.addr), 2)
|
||||
self.assertEqual(len(wrport.data), 8)
|
||||
self.assertEqual(len(wrport.en), 1)
|
||||
|
||||
def test_write_port_granularity(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
wrport = mem.write_port(granularity=2)
|
||||
self.assertEqual(wrport.memory, mem)
|
||||
self.assertEqual(wrport.domain, "sync")
|
||||
self.assertEqual(wrport.granularity, 2)
|
||||
self.assertEqual(len(wrport.addr), 2)
|
||||
self.assertEqual(len(wrport.data), 8)
|
||||
self.assertEqual(len(wrport.en), 4)
|
||||
|
||||
def test_write_port_granularity_wrong(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Write port granularity must be a non-negative integer, not -1"):
|
||||
mem.write_port(granularity=-1)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Write port granularity must not be greater than memory width (10 > 8)"):
|
||||
mem.write_port(granularity=10)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Write port granularity must divide memory width evenly"):
|
||||
mem.write_port(granularity=3)
|
||||
|
||||
|
||||
class DummyPortTestCase(FHDLTestCase):
|
||||
def test_name(self):
|
||||
p1 = DummyPort(data_width=8, addr_width=2)
|
||||
self.assertEqual(p1.addr.name, "p1_addr")
|
||||
p2 = [DummyPort(data_width=8, addr_width=2)][0]
|
||||
self.assertEqual(p2.addr.name, "dummy_addr")
|
||||
p3 = DummyPort(data_width=8, addr_width=2, name="foo")
|
||||
self.assertEqual(p3.addr.name, "foo_addr")
|
||||
|
||||
def test_sizes(self):
|
||||
p1 = DummyPort(data_width=8, addr_width=2)
|
||||
self.assertEqual(p1.addr.width, 2)
|
||||
self.assertEqual(p1.data.width, 8)
|
||||
self.assertEqual(p1.en.width, 1)
|
||||
p2 = DummyPort(data_width=8, addr_width=2, granularity=2)
|
||||
self.assertEqual(p2.en.width, 4)
|
|
@ -0,0 +1,313 @@
|
|||
from enum import Enum
|
||||
|
||||
from ..hdl.ast import *
|
||||
from ..hdl.rec import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class UnsignedEnum(Enum):
|
||||
FOO = 1
|
||||
BAR = 2
|
||||
BAZ = 3
|
||||
|
||||
|
||||
class LayoutTestCase(FHDLTestCase):
|
||||
def test_fields(self):
|
||||
layout = Layout.cast([
|
||||
("cyc", 1),
|
||||
("data", signed(32)),
|
||||
("stb", 1, DIR_FANOUT),
|
||||
("ack", 1, DIR_FANIN),
|
||||
("info", [
|
||||
("a", 1),
|
||||
("b", 1),
|
||||
])
|
||||
])
|
||||
|
||||
self.assertEqual(layout["cyc"], ((1, False), DIR_NONE))
|
||||
self.assertEqual(layout["data"], ((32, True), DIR_NONE))
|
||||
self.assertEqual(layout["stb"], ((1, False), DIR_FANOUT))
|
||||
self.assertEqual(layout["ack"], ((1, False), DIR_FANIN))
|
||||
sublayout = layout["info"][0]
|
||||
self.assertEqual(layout["info"][1], DIR_NONE)
|
||||
self.assertEqual(sublayout["a"], ((1, False), DIR_NONE))
|
||||
self.assertEqual(sublayout["b"], ((1, False), DIR_NONE))
|
||||
|
||||
def test_enum_field(self):
|
||||
layout = Layout.cast([
|
||||
("enum", UnsignedEnum),
|
||||
("enum_dir", UnsignedEnum, DIR_FANOUT),
|
||||
])
|
||||
self.assertEqual(layout["enum"], ((2, False), DIR_NONE))
|
||||
self.assertEqual(layout["enum_dir"], ((2, False), DIR_FANOUT))
|
||||
|
||||
def test_range_field(self):
|
||||
layout = Layout.cast([
|
||||
("range", range(0, 7)),
|
||||
])
|
||||
self.assertEqual(layout["range"], ((3, False), DIR_NONE))
|
||||
|
||||
def test_slice_tuple(self):
|
||||
layout = Layout.cast([
|
||||
("a", 1),
|
||||
("b", 2),
|
||||
("c", 3)
|
||||
])
|
||||
expect = Layout.cast([
|
||||
("a", 1),
|
||||
("c", 3)
|
||||
])
|
||||
self.assertEqual(layout["a", "c"], expect)
|
||||
|
||||
def test_wrong_field(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Field (1,) has invalid layout: should be either (name, shape) or "
|
||||
"(name, shape, direction)"):
|
||||
Layout.cast([(1,)])
|
||||
|
||||
def test_wrong_name(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Field (1, 1) has invalid name: should be a string"):
|
||||
Layout.cast([(1, 1)])
|
||||
|
||||
def test_wrong_name_duplicate(self):
|
||||
with self.assertRaises(NameError,
|
||||
msg="Field ('a', 2) has a name that is already present in the layout"):
|
||||
Layout.cast([("a", 1), ("a", 2)])
|
||||
|
||||
def test_wrong_direction(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Field ('a', 1, 0) has invalid direction: should be a Direction "
|
||||
"instance like DIR_FANIN"):
|
||||
Layout.cast([("a", 1, 0)])
|
||||
|
||||
def test_wrong_shape(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Field ('a', 'x') has invalid shape: should be castable to Shape or "
|
||||
"a list of fields of a nested record"):
|
||||
Layout.cast([("a", "x")])
|
||||
|
||||
|
||||
class RecordTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
r = Record([
|
||||
("stb", 1),
|
||||
("data", 32),
|
||||
("info", [
|
||||
("a", 1),
|
||||
("b", 1),
|
||||
])
|
||||
])
|
||||
|
||||
self.assertEqual(repr(r), "(rec r stb data (rec r__info a b))")
|
||||
self.assertEqual(len(r), 35)
|
||||
self.assertIsInstance(r.stb, Signal)
|
||||
self.assertEqual(r.stb.name, "r__stb")
|
||||
self.assertEqual(r["stb"].name, "r__stb")
|
||||
|
||||
self.assertTrue(hasattr(r, "stb"))
|
||||
self.assertFalse(hasattr(r, "xxx"))
|
||||
|
||||
def test_unnamed(self):
|
||||
r = [Record([
|
||||
("stb", 1)
|
||||
])][0]
|
||||
|
||||
self.assertEqual(repr(r), "(rec <unnamed> stb)")
|
||||
self.assertEqual(r.stb.name, "stb")
|
||||
|
||||
def test_iter(self):
|
||||
r = Record([
|
||||
("data", 4),
|
||||
("stb", 1),
|
||||
])
|
||||
|
||||
self.assertEqual(repr(r[0]), "(slice (rec r data stb) 0:1)")
|
||||
self.assertEqual(repr(r[0:3]), "(slice (rec r data stb) 0:3)")
|
||||
|
||||
def test_wrong_field(self):
|
||||
r = Record([
|
||||
("stb", 1),
|
||||
("ack", 1),
|
||||
])
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Record 'r' does not have a field 'en'. Did you mean one of: stb, ack?"):
|
||||
r["en"]
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Record 'r' does not have a field 'en'. Did you mean one of: stb, ack?"):
|
||||
r.en
|
||||
|
||||
def test_wrong_field_unnamed(self):
|
||||
r = [Record([
|
||||
("stb", 1),
|
||||
("ack", 1),
|
||||
])][0]
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Unnamed record does not have a field 'en'. Did you mean one of: stb, ack?"):
|
||||
r.en
|
||||
|
||||
def test_construct_with_fields(self):
|
||||
ns = Signal(1)
|
||||
nr = Record([
|
||||
("burst", 1)
|
||||
])
|
||||
r = Record([
|
||||
("stb", 1),
|
||||
("info", [
|
||||
("burst", 1)
|
||||
])
|
||||
], fields={
|
||||
"stb": ns,
|
||||
"info": nr
|
||||
})
|
||||
self.assertIs(r.stb, ns)
|
||||
self.assertIs(r.info, nr)
|
||||
|
||||
def test_like(self):
|
||||
r1 = Record([("a", 1), ("b", 2)])
|
||||
r2 = Record.like(r1)
|
||||
self.assertEqual(r1.layout, r2.layout)
|
||||
self.assertEqual(r2.name, "r2")
|
||||
r3 = Record.like(r1, name="foo")
|
||||
self.assertEqual(r3.name, "foo")
|
||||
r4 = Record.like(r1, name_suffix="foo")
|
||||
self.assertEqual(r4.name, "r1foo")
|
||||
|
||||
def test_like_modifications(self):
|
||||
r1 = Record([("a", 1), ("b", [("s", 1)])])
|
||||
self.assertEqual(r1.a.name, "r1__a")
|
||||
self.assertEqual(r1.b.name, "r1__b")
|
||||
self.assertEqual(r1.b.s.name, "r1__b__s")
|
||||
r1.a.reset = 1
|
||||
r1.b.s.reset = 1
|
||||
r2 = Record.like(r1)
|
||||
self.assertEqual(r2.a.reset, 1)
|
||||
self.assertEqual(r2.b.s.reset, 1)
|
||||
self.assertEqual(r2.a.name, "r2__a")
|
||||
self.assertEqual(r2.b.name, "r2__b")
|
||||
self.assertEqual(r2.b.s.name, "r2__b__s")
|
||||
|
||||
def test_slice_tuple(self):
|
||||
r1 = Record([("a", 1), ("b", 2), ("c", 3)])
|
||||
r2 = r1["a", "c"]
|
||||
self.assertEqual(r2.layout, Layout([("a", 1), ("c", 3)]))
|
||||
self.assertIs(r2.a, r1.a)
|
||||
self.assertIs(r2.c, r1.c)
|
||||
|
||||
|
||||
class ConnectTestCase(FHDLTestCase):
|
||||
def setUp_flat(self):
|
||||
self.core_layout = [
|
||||
("addr", 32, DIR_FANOUT),
|
||||
("data_r", 32, DIR_FANIN),
|
||||
("data_w", 32, DIR_FANIN),
|
||||
]
|
||||
self.periph_layout = [
|
||||
("addr", 32, DIR_FANOUT),
|
||||
("data_r", 32, DIR_FANIN),
|
||||
("data_w", 32, DIR_FANIN),
|
||||
]
|
||||
|
||||
def setUp_nested(self):
|
||||
self.core_layout = [
|
||||
("addr", 32, DIR_FANOUT),
|
||||
("data", [
|
||||
("r", 32, DIR_FANIN),
|
||||
("w", 32, DIR_FANIN),
|
||||
]),
|
||||
]
|
||||
self.periph_layout = [
|
||||
("addr", 32, DIR_FANOUT),
|
||||
("data", [
|
||||
("r", 32, DIR_FANIN),
|
||||
("w", 32, DIR_FANIN),
|
||||
]),
|
||||
]
|
||||
|
||||
def test_flat(self):
|
||||
self.setUp_flat()
|
||||
|
||||
core = Record(self.core_layout)
|
||||
periph1 = Record(self.periph_layout)
|
||||
periph2 = Record(self.periph_layout)
|
||||
|
||||
stmts = core.connect(periph1, periph2)
|
||||
self.assertRepr(stmts, """(
|
||||
(eq (sig periph1__addr) (sig core__addr))
|
||||
(eq (sig periph2__addr) (sig core__addr))
|
||||
(eq (sig core__data_r) (| (sig periph1__data_r) (sig periph2__data_r)))
|
||||
(eq (sig core__data_w) (| (sig periph1__data_w) (sig periph2__data_w)))
|
||||
)""")
|
||||
|
||||
def test_flat_include(self):
|
||||
self.setUp_flat()
|
||||
|
||||
core = Record(self.core_layout)
|
||||
periph1 = Record(self.periph_layout)
|
||||
periph2 = Record(self.periph_layout)
|
||||
|
||||
stmts = core.connect(periph1, periph2, include={"addr": True})
|
||||
self.assertRepr(stmts, """(
|
||||
(eq (sig periph1__addr) (sig core__addr))
|
||||
(eq (sig periph2__addr) (sig core__addr))
|
||||
)""")
|
||||
|
||||
def test_flat_exclude(self):
|
||||
self.setUp_flat()
|
||||
|
||||
core = Record(self.core_layout)
|
||||
periph1 = Record(self.periph_layout)
|
||||
periph2 = Record(self.periph_layout)
|
||||
|
||||
stmts = core.connect(periph1, periph2, exclude={"addr": True})
|
||||
self.assertRepr(stmts, """(
|
||||
(eq (sig core__data_r) (| (sig periph1__data_r) (sig periph2__data_r)))
|
||||
(eq (sig core__data_w) (| (sig periph1__data_w) (sig periph2__data_w)))
|
||||
)""")
|
||||
|
||||
def test_nested(self):
|
||||
self.setUp_nested()
|
||||
|
||||
core = Record(self.core_layout)
|
||||
periph1 = Record(self.periph_layout)
|
||||
periph2 = Record(self.periph_layout)
|
||||
|
||||
stmts = core.connect(periph1, periph2)
|
||||
self.maxDiff = None
|
||||
self.assertRepr(stmts, """(
|
||||
(eq (sig periph1__addr) (sig core__addr))
|
||||
(eq (sig periph2__addr) (sig core__addr))
|
||||
(eq (sig core__data__r) (| (sig periph1__data__r) (sig periph2__data__r)))
|
||||
(eq (sig core__data__w) (| (sig periph1__data__w) (sig periph2__data__w)))
|
||||
)""")
|
||||
|
||||
def test_wrong_include_exclude(self):
|
||||
self.setUp_flat()
|
||||
|
||||
core = Record(self.core_layout)
|
||||
periph = Record(self.periph_layout)
|
||||
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Cannot include field 'foo' because it is not present in record 'core'"):
|
||||
core.connect(periph, include={"foo": True})
|
||||
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Cannot exclude field 'foo' because it is not present in record 'core'"):
|
||||
core.connect(periph, exclude={"foo": True})
|
||||
|
||||
def test_wrong_direction(self):
|
||||
recs = [Record([("x", 1)]) for _ in range(2)]
|
||||
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Cannot connect field 'x' of unnamed record because it does not have "
|
||||
"a direction"):
|
||||
recs[0].connect(recs[1])
|
||||
|
||||
def test_wrong_missing_field(self):
|
||||
core = Record([("addr", 32, DIR_FANOUT)])
|
||||
periph = Record([])
|
||||
|
||||
with self.assertRaises(AttributeError,
|
||||
msg="Cannot connect field 'addr' of record 'core' to subordinate record 'periph' "
|
||||
"because the subordinate record does not have this field"):
|
||||
core.connect(periph)
|
|
@ -0,0 +1,622 @@
|
|||
# nmigen: UnusedElaboratable=no
|
||||
|
||||
from ..hdl.ast import *
|
||||
from ..hdl.cd import *
|
||||
from ..hdl.ir import *
|
||||
from ..hdl.xfrm import *
|
||||
from ..hdl.mem import *
|
||||
from .utils import *
|
||||
|
||||
|
||||
class DomainRenamerTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s1 = Signal()
|
||||
self.s2 = Signal()
|
||||
self.s3 = Signal()
|
||||
self.s4 = Signal()
|
||||
self.s5 = Signal()
|
||||
self.c1 = Signal()
|
||||
|
||||
def test_rename_signals(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(ClockSignal()),
|
||||
ResetSignal().eq(self.s2),
|
||||
self.s3.eq(0),
|
||||
self.s4.eq(ClockSignal("other")),
|
||||
self.s5.eq(ResetSignal("other")),
|
||||
)
|
||||
f.add_driver(self.s1, None)
|
||||
f.add_driver(self.s2, None)
|
||||
f.add_driver(self.s3, "sync")
|
||||
|
||||
f = DomainRenamer("pix")(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (clk pix))
|
||||
(eq (rst pix) (sig s2))
|
||||
(eq (sig s3) (const 1'd0))
|
||||
(eq (sig s4) (clk other))
|
||||
(eq (sig s5) (rst other))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(f.drivers, {
|
||||
None: SignalSet((self.s1, self.s2)),
|
||||
"pix": SignalSet((self.s3,)),
|
||||
})
|
||||
|
||||
def test_rename_multi(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(ClockSignal()),
|
||||
self.s2.eq(ResetSignal("other")),
|
||||
)
|
||||
|
||||
f = DomainRenamer({"sync": "pix", "other": "pix2"})(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (clk pix))
|
||||
(eq (sig s2) (rst pix2))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_rename_cd(self):
|
||||
cd_sync = ClockDomain()
|
||||
cd_pix = ClockDomain()
|
||||
|
||||
f = Fragment()
|
||||
f.add_domains(cd_sync, cd_pix)
|
||||
|
||||
f = DomainRenamer("ext")(f)
|
||||
self.assertEqual(cd_sync.name, "ext")
|
||||
self.assertEqual(f.domains, {
|
||||
"ext": cd_sync,
|
||||
"pix": cd_pix,
|
||||
})
|
||||
|
||||
def test_rename_cd_subfragment(self):
|
||||
cd_sync = ClockDomain()
|
||||
cd_pix = ClockDomain()
|
||||
|
||||
f1 = Fragment()
|
||||
f1.add_domains(cd_sync, cd_pix)
|
||||
f2 = Fragment()
|
||||
f2.add_domains(cd_sync)
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1 = DomainRenamer("ext")(f1)
|
||||
self.assertEqual(cd_sync.name, "ext")
|
||||
self.assertEqual(f1.domains, {
|
||||
"ext": cd_sync,
|
||||
"pix": cd_pix,
|
||||
})
|
||||
|
||||
def test_rename_wrong_to_comb(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Domain 'sync' may not be renamed to 'comb'"):
|
||||
DomainRenamer("comb")
|
||||
|
||||
def test_rename_wrong_from_comb(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Domain 'comb' may not be renamed"):
|
||||
DomainRenamer({"comb": "sync"})
|
||||
|
||||
|
||||
class DomainLowererTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s = Signal()
|
||||
|
||||
def test_lower_clk(self):
|
||||
sync = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
f.add_statements(
|
||||
self.s.eq(ClockSignal("sync"))
|
||||
)
|
||||
|
||||
f = DomainLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s) (sig clk))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_lower_rst(self):
|
||||
sync = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
f.add_statements(
|
||||
self.s.eq(ResetSignal("sync"))
|
||||
)
|
||||
|
||||
f = DomainLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s) (sig rst))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_lower_rst_reset_less(self):
|
||||
sync = ClockDomain(reset_less=True)
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
f.add_statements(
|
||||
self.s.eq(ResetSignal("sync", allow_reset_less=True))
|
||||
)
|
||||
|
||||
f = DomainLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s) (const 1'd0))
|
||||
)
|
||||
""")
|
||||
|
||||
def test_lower_drivers(self):
|
||||
sync = ClockDomain()
|
||||
pix = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_domains(sync, pix)
|
||||
f.add_driver(ClockSignal("pix"), None)
|
||||
f.add_driver(ResetSignal("pix"), "sync")
|
||||
|
||||
f = DomainLowerer()(f)
|
||||
self.assertEqual(f.drivers, {
|
||||
None: SignalSet((pix.clk,)),
|
||||
"sync": SignalSet((pix.rst,))
|
||||
})
|
||||
|
||||
def test_lower_wrong_domain(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s.eq(ClockSignal("xxx"))
|
||||
)
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Signal (clk xxx) refers to nonexistent domain 'xxx'"):
|
||||
DomainLowerer()(f)
|
||||
|
||||
def test_lower_wrong_reset_less_domain(self):
|
||||
sync = ClockDomain(reset_less=True)
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
f.add_statements(
|
||||
self.s.eq(ResetSignal("sync"))
|
||||
)
|
||||
|
||||
with self.assertRaises(DomainError,
|
||||
msg="Signal (rst sync) refers to reset of reset-less domain 'sync'"):
|
||||
DomainLowerer()(f)
|
||||
|
||||
|
||||
class SampleLowererTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.i = Signal()
|
||||
self.o1 = Signal()
|
||||
self.o2 = Signal()
|
||||
self.o3 = Signal()
|
||||
|
||||
def test_lower_signal(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.o1.eq(Sample(self.i, 2, "sync")),
|
||||
self.o2.eq(Sample(self.i, 1, "sync")),
|
||||
self.o3.eq(Sample(self.i, 1, "pix")),
|
||||
)
|
||||
|
||||
f = SampleLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig o1) (sig $sample$s$i$sync$2))
|
||||
(eq (sig o2) (sig $sample$s$i$sync$1))
|
||||
(eq (sig o3) (sig $sample$s$i$pix$1))
|
||||
(eq (sig $sample$s$i$sync$1) (sig i))
|
||||
(eq (sig $sample$s$i$sync$2) (sig $sample$s$i$sync$1))
|
||||
(eq (sig $sample$s$i$pix$1) (sig i))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(len(f.drivers["sync"]), 2)
|
||||
self.assertEqual(len(f.drivers["pix"]), 1)
|
||||
|
||||
def test_lower_const(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.o1.eq(Sample(1, 2, "sync")),
|
||||
)
|
||||
|
||||
f = SampleLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig o1) (sig $sample$c$1$sync$2))
|
||||
(eq (sig $sample$c$1$sync$1) (const 1'd1))
|
||||
(eq (sig $sample$c$1$sync$2) (sig $sample$c$1$sync$1))
|
||||
)
|
||||
""")
|
||||
self.assertEqual(len(f.drivers["sync"]), 2)
|
||||
|
||||
|
||||
class SwitchCleanerTestCase(FHDLTestCase):
|
||||
def test_clean(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
c = Signal()
|
||||
stmts = [
|
||||
Switch(a, {
|
||||
1: a.eq(0),
|
||||
0: [
|
||||
b.eq(1),
|
||||
Switch(b, {1: [
|
||||
Switch(a|b, {})
|
||||
]})
|
||||
]
|
||||
})
|
||||
]
|
||||
|
||||
self.assertRepr(SwitchCleaner()(stmts), """
|
||||
(
|
||||
(switch (sig a)
|
||||
(case 1
|
||||
(eq (sig a) (const 1'd0)))
|
||||
(case 0
|
||||
(eq (sig b) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
|
||||
class LHSGroupAnalyzerTestCase(FHDLTestCase):
|
||||
def test_no_group_unrelated(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
stmts = [
|
||||
a.eq(0),
|
||||
b.eq(0),
|
||||
]
|
||||
|
||||
groups = LHSGroupAnalyzer()(stmts)
|
||||
self.assertEqual(list(groups.values()), [
|
||||
SignalSet((a,)),
|
||||
SignalSet((b,)),
|
||||
])
|
||||
|
||||
def test_group_related(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
stmts = [
|
||||
a.eq(0),
|
||||
Cat(a, b).eq(0),
|
||||
]
|
||||
|
||||
groups = LHSGroupAnalyzer()(stmts)
|
||||
self.assertEqual(list(groups.values()), [
|
||||
SignalSet((a, b)),
|
||||
])
|
||||
|
||||
def test_no_loops(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
stmts = [
|
||||
a.eq(0),
|
||||
Cat(a, b).eq(0),
|
||||
Cat(a, b).eq(0),
|
||||
]
|
||||
|
||||
groups = LHSGroupAnalyzer()(stmts)
|
||||
self.assertEqual(list(groups.values()), [
|
||||
SignalSet((a, b)),
|
||||
])
|
||||
|
||||
def test_switch(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
stmts = [
|
||||
a.eq(0),
|
||||
Switch(a, {
|
||||
1: b.eq(0),
|
||||
})
|
||||
]
|
||||
|
||||
groups = LHSGroupAnalyzer()(stmts)
|
||||
self.assertEqual(list(groups.values()), [
|
||||
SignalSet((a,)),
|
||||
SignalSet((b,)),
|
||||
])
|
||||
|
||||
def test_lhs_empty(self):
|
||||
stmts = [
|
||||
Cat().eq(0)
|
||||
]
|
||||
|
||||
groups = LHSGroupAnalyzer()(stmts)
|
||||
self.assertEqual(list(groups.values()), [
|
||||
])
|
||||
|
||||
|
||||
class LHSGroupFilterTestCase(FHDLTestCase):
|
||||
def test_filter(self):
|
||||
a = Signal()
|
||||
b = Signal()
|
||||
c = Signal()
|
||||
stmts = [
|
||||
Switch(a, {
|
||||
1: a.eq(0),
|
||||
0: [
|
||||
b.eq(1),
|
||||
Switch(b, {1: []})
|
||||
]
|
||||
})
|
||||
]
|
||||
|
||||
self.assertRepr(LHSGroupFilter(SignalSet((a,)))(stmts), """
|
||||
(
|
||||
(switch (sig a)
|
||||
(case 1
|
||||
(eq (sig a) (const 1'd0)))
|
||||
(case 0 )
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_lhs_empty(self):
|
||||
stmts = [
|
||||
Cat().eq(0)
|
||||
]
|
||||
|
||||
self.assertRepr(LHSGroupFilter(SignalSet())(stmts), "()")
|
||||
|
||||
|
||||
class ResetInserterTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s1 = Signal()
|
||||
self.s2 = Signal(reset=1)
|
||||
self.s3 = Signal(reset=1, reset_less=True)
|
||||
self.c1 = Signal()
|
||||
|
||||
def test_reset_default(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(1)
|
||||
)
|
||||
f.add_driver(self.s1, "sync")
|
||||
|
||||
f = ResetInserter(self.c1)(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(switch (sig c1)
|
||||
(case 1 (eq (sig s1) (const 1'd0)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_reset_cd(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(1),
|
||||
self.s2.eq(0),
|
||||
)
|
||||
f.add_domains(ClockDomain("sync"))
|
||||
f.add_driver(self.s1, "sync")
|
||||
f.add_driver(self.s2, "pix")
|
||||
|
||||
f = ResetInserter({"pix": self.c1})(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(eq (sig s2) (const 1'd0))
|
||||
(switch (sig c1)
|
||||
(case 1 (eq (sig s2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_reset_value(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s2.eq(0)
|
||||
)
|
||||
f.add_driver(self.s2, "sync")
|
||||
|
||||
f = ResetInserter(self.c1)(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s2) (const 1'd0))
|
||||
(switch (sig c1)
|
||||
(case 1 (eq (sig s2) (const 1'd1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_reset_less(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s3.eq(0)
|
||||
)
|
||||
f.add_driver(self.s3, "sync")
|
||||
|
||||
f = ResetInserter(self.c1)(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s3) (const 1'd0))
|
||||
(switch (sig c1)
|
||||
(case 1 )
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
|
||||
class EnableInserterTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s1 = Signal()
|
||||
self.s2 = Signal()
|
||||
self.s3 = Signal()
|
||||
self.c1 = Signal()
|
||||
|
||||
def test_enable_default(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(1)
|
||||
)
|
||||
f.add_driver(self.s1, "sync")
|
||||
|
||||
f = EnableInserter(self.c1)(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(switch (sig c1)
|
||||
(case 0 (eq (sig s1) (sig s1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_enable_cd(self):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(1),
|
||||
self.s2.eq(0),
|
||||
)
|
||||
f.add_driver(self.s1, "sync")
|
||||
f.add_driver(self.s2, "pix")
|
||||
|
||||
f = EnableInserter({"pix": self.c1})(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(eq (sig s2) (const 1'd0))
|
||||
(switch (sig c1)
|
||||
(case 0 (eq (sig s2) (sig s2)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_enable_subfragment(self):
|
||||
f1 = Fragment()
|
||||
f1.add_statements(
|
||||
self.s1.eq(1)
|
||||
)
|
||||
f1.add_driver(self.s1, "sync")
|
||||
|
||||
f2 = Fragment()
|
||||
f2.add_statements(
|
||||
self.s2.eq(1)
|
||||
)
|
||||
f2.add_driver(self.s2, "sync")
|
||||
f1.add_subfragment(f2)
|
||||
|
||||
f1 = EnableInserter(self.c1)(f1)
|
||||
(f2, _), = f1.subfragments
|
||||
self.assertRepr(f1.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(switch (sig c1)
|
||||
(case 0 (eq (sig s1) (sig s1)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
self.assertRepr(f2.statements, """
|
||||
(
|
||||
(eq (sig s2) (const 1'd1))
|
||||
(switch (sig c1)
|
||||
(case 0 (eq (sig s2) (sig s2)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
def test_enable_read_port(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
f = EnableInserter(self.c1)(mem.read_port(transparent=False)).elaborate(platform=None)
|
||||
self.assertRepr(f.named_ports["EN"][0], """
|
||||
(m (sig c1) (sig mem_r_en) (const 1'd0))
|
||||
""")
|
||||
|
||||
def test_enable_write_port(self):
|
||||
mem = Memory(width=8, depth=4)
|
||||
f = EnableInserter(self.c1)(mem.write_port()).elaborate(platform=None)
|
||||
self.assertRepr(f.named_ports["EN"][0], """
|
||||
(m (sig c1) (cat (repl (slice (sig mem_w_en) 0:1) 8)) (const 8'd0))
|
||||
""")
|
||||
|
||||
|
||||
class _MockElaboratable(Elaboratable):
|
||||
def __init__(self):
|
||||
self.s1 = Signal()
|
||||
|
||||
def elaborate(self, platform):
|
||||
f = Fragment()
|
||||
f.add_statements(
|
||||
self.s1.eq(1)
|
||||
)
|
||||
f.add_driver(self.s1, "sync")
|
||||
return f
|
||||
|
||||
|
||||
class TransformedElaboratableTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.c1 = Signal()
|
||||
self.c2 = Signal()
|
||||
|
||||
def test_getattr(self):
|
||||
e = _MockElaboratable()
|
||||
te = EnableInserter(self.c1)(e)
|
||||
|
||||
self.assertIs(te.s1, e.s1)
|
||||
|
||||
def test_composition(self):
|
||||
e = _MockElaboratable()
|
||||
te1 = EnableInserter(self.c1)(e)
|
||||
te2 = ResetInserter(self.c2)(te1)
|
||||
|
||||
self.assertIsInstance(te1, TransformedElaboratable)
|
||||
self.assertIs(te1, te2)
|
||||
|
||||
f = Fragment.get(te2, None)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s1) (const 1'd1))
|
||||
(switch (sig c1)
|
||||
(case 0 (eq (sig s1) (sig s1)))
|
||||
)
|
||||
(switch (sig c2)
|
||||
(case 1 (eq (sig s1) (const 1'd0)))
|
||||
)
|
||||
)
|
||||
""")
|
||||
|
||||
|
||||
class MockUserValue(UserValue):
|
||||
def __init__(self, lowered):
|
||||
super().__init__()
|
||||
self.lowered = lowered
|
||||
|
||||
def lower(self):
|
||||
return self.lowered
|
||||
|
||||
|
||||
class UserValueTestCase(FHDLTestCase):
|
||||
def setUp(self):
|
||||
self.s = Signal()
|
||||
self.c = Signal()
|
||||
self.uv = MockUserValue(self.s)
|
||||
|
||||
def test_lower(self):
|
||||
sync = ClockDomain()
|
||||
f = Fragment()
|
||||
f.add_domains(sync)
|
||||
f.add_statements(
|
||||
self.uv.eq(1)
|
||||
)
|
||||
for signal in self.uv._lhs_signals():
|
||||
f.add_driver(signal, "sync")
|
||||
|
||||
f = ResetInserter(self.c)(f)
|
||||
f = DomainLowerer()(f)
|
||||
self.assertRepr(f.statements, """
|
||||
(
|
||||
(eq (sig s) (const 1'd1))
|
||||
(switch (sig c)
|
||||
(case 1 (eq (sig s) (const 1'd0)))
|
||||
)
|
||||
(switch (sig rst)
|
||||
(case 1 (eq (sig s) (const 1'd0)))
|
||||
)
|
||||
)
|
||||
""")
|
|
@ -0,0 +1,229 @@
|
|||
# nmigen: UnusedElaboratable=no
|
||||
|
||||
from .utils import *
|
||||
from ..hdl import *
|
||||
from ..back.pysim import *
|
||||
from ..lib.cdc import *
|
||||
|
||||
|
||||
class FFSynchronizerTestCase(FHDLTestCase):
|
||||
def test_stages_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Synchronization stage count must be a positive integer, not 0"):
|
||||
FFSynchronizer(Signal(), Signal(), stages=0)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Synchronization stage count may not safely be less than 2"):
|
||||
FFSynchronizer(Signal(), Signal(), stages=1)
|
||||
|
||||
def test_basic(self):
|
||||
i = Signal()
|
||||
o = Signal()
|
||||
frag = FFSynchronizer(i, o)
|
||||
|
||||
sim = Simulator(frag)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
self.assertEqual((yield o), 0)
|
||||
yield i.eq(1)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 1)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
||||
|
||||
def test_reset_value(self):
|
||||
i = Signal(reset=1)
|
||||
o = Signal()
|
||||
frag = FFSynchronizer(i, o, reset=1)
|
||||
|
||||
sim = Simulator(frag)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
self.assertEqual((yield o), 1)
|
||||
yield i.eq(0)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick()
|
||||
self.assertEqual((yield o), 0)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
||||
|
||||
|
||||
class AsyncFFSynchronizerTestCase(FHDLTestCase):
|
||||
def test_stages_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Synchronization stage count must be a positive integer, not 0"):
|
||||
ResetSynchronizer(Signal(), stages=0)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Synchronization stage count may not safely be less than 2"):
|
||||
ResetSynchronizer(Signal(), stages=1)
|
||||
|
||||
def test_edge_wrong(self):
|
||||
with self.assertRaises(ValueError,
|
||||
msg="AsyncFFSynchronizer async edge must be one of 'pos' or 'neg', not 'xxx'"):
|
||||
AsyncFFSynchronizer(Signal(), Signal(), domain="sync", async_edge="xxx")
|
||||
|
||||
def test_pos_edge(self):
|
||||
i = Signal()
|
||||
o = Signal()
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
m.submodules += AsyncFFSynchronizer(i, o)
|
||||
|
||||
sim = Simulator(m)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
# initial reset
|
||||
self.assertEqual((yield i), 0)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
|
||||
yield i.eq(1)
|
||||
yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield i.eq(0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
sim.add_process(process)
|
||||
with sim.write_vcd("test.vcd"):
|
||||
sim.run()
|
||||
|
||||
def test_neg_edge(self):
|
||||
i = Signal(reset=1)
|
||||
o = Signal()
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
m.submodules += AsyncFFSynchronizer(i, o, async_edge="neg")
|
||||
|
||||
sim = Simulator(m)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
# initial reset
|
||||
self.assertEqual((yield i), 1)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
|
||||
yield i.eq(0)
|
||||
yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield i.eq(1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield o), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
sim.add_process(process)
|
||||
with sim.write_vcd("test.vcd"):
|
||||
sim.run()
|
||||
|
||||
|
||||
class ResetSynchronizerTestCase(FHDLTestCase):
|
||||
def test_stages_wrong(self):
|
||||
with self.assertRaises(TypeError,
|
||||
msg="Synchronization stage count must be a positive integer, not 0"):
|
||||
ResetSynchronizer(Signal(), stages=0)
|
||||
with self.assertRaises(ValueError,
|
||||
msg="Synchronization stage count may not safely be less than 2"):
|
||||
ResetSynchronizer(Signal(), stages=1)
|
||||
|
||||
def test_basic(self):
|
||||
arst = Signal()
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
m.submodules += ResetSynchronizer(arst)
|
||||
s = Signal(reset=1)
|
||||
m.d.sync += s.eq(0)
|
||||
|
||||
sim = Simulator(m)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
# initial reset
|
||||
self.assertEqual((yield s), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
|
||||
yield arst.eq(1)
|
||||
yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 1)
|
||||
yield arst.eq(0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 1)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
self.assertEqual((yield s), 0)
|
||||
yield Tick(); yield Delay(1e-8)
|
||||
sim.add_process(process)
|
||||
with sim.write_vcd("test.vcd"):
|
||||
sim.run()
|
||||
|
||||
|
||||
# TODO: test with distinct clocks
|
||||
class PulseSynchronizerTestCase(FHDLTestCase):
|
||||
def test_paramcheck(self):
|
||||
with self.assertRaises(TypeError):
|
||||
ps = PulseSynchronizer("w", "r", sync_stages=0)
|
||||
with self.assertRaises(TypeError):
|
||||
ps = PulseSynchronizer("w", "r", sync_stages="abc")
|
||||
ps = PulseSynchronizer("w", "r", sync_stages = 1)
|
||||
|
||||
def test_smoke(self):
|
||||
m = Module()
|
||||
m.domains += ClockDomain("sync")
|
||||
ps = m.submodules.dut = PulseSynchronizer("sync", "sync")
|
||||
|
||||
sim = Simulator(m)
|
||||
sim.add_clock(1e-6)
|
||||
def process():
|
||||
yield ps.i.eq(0)
|
||||
# TODO: think about reset
|
||||
for n in range(5):
|
||||
yield Tick()
|
||||
# Make sure no pulses are generated in quiescent state
|
||||
for n in range(3):
|
||||
yield Tick()
|
||||
self.assertEqual((yield ps.o), 0)
|
||||
# Check conservation of pulses
|
||||
accum = 0
|
||||
for n in range(10):
|
||||
yield ps.i.eq(1 if n < 4 else 0)
|
||||
yield Tick()
|
||||
accum += yield ps.o
|
||||
self.assertEqual(accum, 4)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
|
@ -0,0 +1,126 @@
|
|||
from .utils import *
|
||||
from ..hdl import *
|
||||
from ..asserts import *
|
||||
from ..back.pysim import *
|
||||
from ..lib.coding import *
|
||||
|
||||
|
||||
class EncoderTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
enc = Encoder(4)
|
||||
def process():
|
||||
self.assertEqual((yield enc.n), 1)
|
||||
self.assertEqual((yield enc.o), 0)
|
||||
|
||||
yield enc.i.eq(0b0001)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 0)
|
||||
self.assertEqual((yield enc.o), 0)
|
||||
|
||||
yield enc.i.eq(0b0100)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 0)
|
||||
self.assertEqual((yield enc.o), 2)
|
||||
|
||||
yield enc.i.eq(0b0110)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 1)
|
||||
self.assertEqual((yield enc.o), 0)
|
||||
|
||||
sim = Simulator(enc)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
||||
|
||||
|
||||
class PriorityEncoderTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
enc = PriorityEncoder(4)
|
||||
def process():
|
||||
self.assertEqual((yield enc.n), 1)
|
||||
self.assertEqual((yield enc.o), 0)
|
||||
|
||||
yield enc.i.eq(0b0001)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 0)
|
||||
self.assertEqual((yield enc.o), 0)
|
||||
|
||||
yield enc.i.eq(0b0100)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 0)
|
||||
self.assertEqual((yield enc.o), 2)
|
||||
|
||||
yield enc.i.eq(0b0110)
|
||||
yield Settle()
|
||||
self.assertEqual((yield enc.n), 0)
|
||||
self.assertEqual((yield enc.o), 1)
|
||||
|
||||
sim = Simulator(enc)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
||||
|
||||
|
||||
class DecoderTestCase(FHDLTestCase):
|
||||
def test_basic(self):
|
||||
dec = Decoder(4)
|
||||
def process():
|
||||
self.assertEqual((yield dec.o), 0b0001)
|
||||
|
||||
yield dec.i.eq(1)
|
||||
yield Settle()
|
||||
self.assertEqual((yield dec.o), 0b0010)
|
||||
|
||||
yield dec.i.eq(3)
|
||||
yield Settle()
|
||||
self.assertEqual((yield dec.o), 0b1000)
|
||||
|
||||
yield dec.n.eq(1)
|
||||
yield Settle()
|
||||
self.assertEqual((yield dec.o), 0b0000)
|
||||
|
||||
sim = Simulator(dec)
|
||||
sim.add_process(process)
|
||||
sim.run()
|
||||
|
||||
|
||||
class ReversibleSpec(Elaboratable):
    def __init__(self, encoder_cls, decoder_cls, args):
        self.encoder_cls = encoder_cls
        self.decoder_cls = decoder_cls
        self.coder_args = args

    def elaborate(self, platform):
        m = Module()
        enc, dec = self.encoder_cls(*self.coder_args), self.decoder_cls(*self.coder_args)
        m.submodules += enc, dec
        m.d.comb += [
            dec.i.eq(enc.o),
            Assert(enc.i == dec.o)
        ]
        return m


class HammingDistanceSpec(Elaboratable):
    def __init__(self, distance, encoder_cls, args):
        self.distance = distance
        self.encoder_cls = encoder_cls
        self.coder_args = args

    def elaborate(self, platform):
        m = Module()
        enc1, enc2 = self.encoder_cls(*self.coder_args), self.encoder_cls(*self.coder_args)
        m.submodules += enc1, enc2
        m.d.comb += [
            Assume(enc1.i + 1 == enc2.i),
            Assert(sum(enc1.o ^ enc2.o) == self.distance)
        ]
        return m


class GrayCoderTestCase(FHDLTestCase):
    def test_reversible(self):
        spec = ReversibleSpec(encoder_cls=GrayEncoder, decoder_cls=GrayDecoder, args=(16,))
        self.assertFormal(spec, mode="prove")

    def test_distance(self):
        spec = HammingDistanceSpec(distance=1, encoder_cls=GrayEncoder, args=(16,))
        self.assertFormal(spec, mode="prove")

@ -0,0 +1,272 @@
# nmigen: UnusedElaboratable=no

from .utils import *
from ..hdl import *
from ..asserts import *
from ..back.pysim import *
from ..lib.fifo import *


class FIFOTestCase(FHDLTestCase):
    def test_depth_wrong(self):
        with self.assertRaises(TypeError,
                msg="FIFO width must be a non-negative integer, not -1"):
            FIFOInterface(width=-1, depth=8, fwft=True)
        with self.assertRaises(TypeError,
                msg="FIFO depth must be a non-negative integer, not -1"):
            FIFOInterface(width=8, depth=-1, fwft=True)

    def test_sync_depth(self):
        self.assertEqual(SyncFIFO(width=8, depth=0).depth, 0)
        self.assertEqual(SyncFIFO(width=8, depth=1).depth, 1)
        self.assertEqual(SyncFIFO(width=8, depth=2).depth, 2)

    def test_sync_buffered_depth(self):
        self.assertEqual(SyncFIFOBuffered(width=8, depth=0).depth, 0)
        self.assertEqual(SyncFIFOBuffered(width=8, depth=1).depth, 1)
        self.assertEqual(SyncFIFOBuffered(width=8, depth=2).depth, 2)

    def test_async_depth(self):
        self.assertEqual(AsyncFIFO(width=8, depth=0 ).depth, 0)
        self.assertEqual(AsyncFIFO(width=8, depth=1 ).depth, 1)
        self.assertEqual(AsyncFIFO(width=8, depth=2 ).depth, 2)
        self.assertEqual(AsyncFIFO(width=8, depth=3 ).depth, 4)
        self.assertEqual(AsyncFIFO(width=8, depth=4 ).depth, 4)
        self.assertEqual(AsyncFIFO(width=8, depth=15).depth, 16)
        self.assertEqual(AsyncFIFO(width=8, depth=16).depth, 16)
        self.assertEqual(AsyncFIFO(width=8, depth=17).depth, 32)

    def test_async_depth_wrong(self):
        with self.assertRaises(ValueError,
                msg="AsyncFIFO only supports depths that are powers of 2; "
                    "requested exact depth 15 is not"):
            AsyncFIFO(width=8, depth=15, exact_depth=True)

    def test_async_buffered_depth(self):
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=0 ).depth, 0)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=1 ).depth, 2)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=2 ).depth, 2)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=3 ).depth, 3)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=4 ).depth, 5)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=15).depth, 17)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=16).depth, 17)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=17).depth, 17)
        self.assertEqual(AsyncFIFOBuffered(width=8, depth=18).depth, 33)

    def test_async_buffered_depth_wrong(self):
        with self.assertRaises(ValueError,
                msg="AsyncFIFOBuffered only supports depths that are one higher than powers of 2; "
                    "requested exact depth 16 is not"):
            AsyncFIFOBuffered(width=8, depth=16, exact_depth=True)
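
# NOTE (editorial, not part of the original commit): FIFOModel below is the "golden" reference
# used by the formal specs further down.  It keeps an exact element count in `level`, updated
# with the usual FIFO bookkeeping:
#
#     level' = level + (w_rdy & w_en) - (r_rdy & r_en)
#
# i.e. grow by one on an accepted write and shrink by one on an accepted read.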
class FIFOModel(Elaboratable, FIFOInterface):
    """
    Non-synthesizable first-in first-out queue, implemented naively as a chain of registers.
    """
    def __init__(self, *, width, depth, fwft, r_domain, w_domain):
        super().__init__(width=width, depth=depth, fwft=fwft)

        self.r_domain = r_domain
        self.w_domain = w_domain

        self.level = Signal(range(self.depth + 1))

    def elaborate(self, platform):
        m = Module()

        storage = Memory(width=self.width, depth=self.depth)
        w_port = m.submodules.w_port = storage.write_port(domain=self.w_domain)
        r_port = m.submodules.r_port = storage.read_port (domain="comb")

        produce = Signal(range(self.depth))
        consume = Signal(range(self.depth))

        m.d.comb += self.r_rdy.eq(self.level > 0)
        m.d.comb += r_port.addr.eq((consume + 1) % self.depth)
        if self.fwft:
            m.d.comb += self.r_data.eq(r_port.data)
        with m.If(self.r_en & self.r_rdy):
            if not self.fwft:
                m.d[self.r_domain] += self.r_data.eq(r_port.data)
            m.d[self.r_domain] += consume.eq(r_port.addr)

        m.d.comb += self.w_rdy.eq(self.level < self.depth)
        m.d.comb += w_port.data.eq(self.w_data)
        with m.If(self.w_en & self.w_rdy):
            m.d.comb += w_port.addr.eq((produce + 1) % self.depth)
            m.d.comb += w_port.en.eq(1)
            m.d[self.w_domain] += produce.eq(w_port.addr)

        with m.If(ResetSignal(self.r_domain) | ResetSignal(self.w_domain)):
            m.d.sync += self.level.eq(0)
        with m.Else():
            m.d.sync += self.level.eq(self.level
                + (self.w_rdy & self.w_en)
                - (self.r_rdy & self.r_en))

        m.d.comb += Assert(ResetSignal(self.r_domain) == ResetSignal(self.w_domain))

        return m
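
# NOTE (editorial, not part of the original commit): the equivalence spec below feeds the DUT
# and the golden FIFOModel the same write stream, forwards only the reads the DUT actually
# accepts (dut.r_rdy & dut.r_en), and asserts that whenever the DUT claims to be readable or
# writable the model agrees, and that the data read from both matches.  FIFOFormalCase at the
# bottom of this file checks it with bounded model checking (mode="bmc") to a depth of
# fifo.depth + 1 cycles.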
class FIFOModelEquivalenceSpec(Elaboratable):
    """
    The first-in first-out queue model equivalence specification: for any inputs and control
    signals, the behavior of the implementation under test exactly matches the ideal model,
    except for behavior not defined by the model.
    """
    def __init__(self, fifo, r_domain, w_domain):
        self.fifo = fifo

        self.r_domain = r_domain
        self.w_domain = w_domain

    def elaborate(self, platform):
        m = Module()
        m.submodules.dut = dut = self.fifo
        m.submodules.gold = gold = FIFOModel(width=dut.width, depth=dut.depth, fwft=dut.fwft,
                                             r_domain=self.r_domain, w_domain=self.w_domain)

        m.d.comb += [
            gold.r_en.eq(dut.r_rdy & dut.r_en),
            gold.w_en.eq(dut.w_en),
            gold.w_data.eq(dut.w_data),
        ]

        m.d.comb += Assert(dut.r_rdy.implies(gold.r_rdy))
        m.d.comb += Assert(dut.w_rdy.implies(gold.w_rdy))
        if hasattr(dut, "level"):
            m.d.comb += Assert(dut.level == gold.level)

        if dut.fwft:
            m.d.comb += Assert(dut.r_rdy
                               .implies(dut.r_data == gold.r_data))
        else:
            m.d.comb += Assert((Past(dut.r_rdy, domain=self.r_domain) &
                                Past(dut.r_en, domain=self.r_domain))
                               .implies(dut.r_data == gold.r_data))

        return m
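
# NOTE (editorial, not part of the original commit): the contract spec below uses two symbolic
# constants (AnyConst) and a pair of FSMs -- one writes entry_1 then entry_2 as soon as the
# FIFO accepts them, the other reads continuously and watches for the two values to come out
# back to back.  The final assertion requires the read FSM to have reached "DONE" within
# `bound` cycles of the initial state, so both ordering and eventual delivery are checked.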
class FIFOContractSpec(Elaboratable):
    """
    The first-in first-out queue contract specification: if two elements are written to the queue
    consecutively, they must be read out consecutively at some later point, no matter all other
    circumstances, with the exception of reset.
    """
    def __init__(self, fifo, *, r_domain, w_domain, bound):
        self.fifo = fifo
        self.r_domain = r_domain
        self.w_domain = w_domain
        self.bound = bound

    def elaborate(self, platform):
        m = Module()
        m.submodules.dut = fifo = self.fifo

        m.domains += ClockDomain("sync")
        m.d.comb += ResetSignal().eq(0)
        if self.w_domain != "sync":
            m.domains += ClockDomain(self.w_domain)
            m.d.comb += ResetSignal(self.w_domain).eq(0)
        if self.r_domain != "sync":
            m.domains += ClockDomain(self.r_domain)
            m.d.comb += ResetSignal(self.r_domain).eq(0)

        entry_1 = AnyConst(fifo.width)
        entry_2 = AnyConst(fifo.width)

        with m.FSM(domain=self.w_domain) as write_fsm:
            with m.State("WRITE-1"):
                with m.If(fifo.w_rdy):
                    m.d.comb += [
                        fifo.w_data.eq(entry_1),
                        fifo.w_en.eq(1)
                    ]
                    m.next = "WRITE-2"
            with m.State("WRITE-2"):
                with m.If(fifo.w_rdy):
                    m.d.comb += [
                        fifo.w_data.eq(entry_2),
                        fifo.w_en.eq(1)
                    ]
                    m.next = "DONE"
            with m.State("DONE"):
                pass

        with m.FSM(domain=self.r_domain) as read_fsm:
            read_1 = Signal(fifo.width)
            read_2 = Signal(fifo.width)
            with m.State("READ"):
                m.d.comb += fifo.r_en.eq(1)
                if fifo.fwft:
                    r_rdy = fifo.r_rdy
                else:
                    r_rdy = Past(fifo.r_rdy, domain=self.r_domain)
                with m.If(r_rdy):
                    m.d.sync += [
                        read_1.eq(read_2),
                        read_2.eq(fifo.r_data),
                    ]
                with m.If((read_1 == entry_1) & (read_2 == entry_2)):
                    m.next = "DONE"
            with m.State("DONE"):
                pass

        with m.If(Initial()):
            m.d.comb += Assume(write_fsm.ongoing("WRITE-1"))
            m.d.comb += Assume(read_fsm.ongoing("READ"))
        with m.If(Past(Initial(), self.bound - 1)):
            m.d.comb += Assert(read_fsm.ongoing("DONE"))

        if self.w_domain != "sync" or self.r_domain != "sync":
            m.d.comb += Assume(Rose(ClockSignal(self.w_domain)) |
                               Rose(ClockSignal(self.r_domain)))

        return m
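
# NOTE (editorial, not part of the original commit): the bounds passed below appear to be sized
# so the solver can observe a complete transaction -- the equivalence check runs BMC for
# depth + 1 cycles, while the contract check allows 2*depth + 1 cycles for a synchronous FIFO
# and 4*depth + 1 for the asynchronous ones, where writes and reads advance in separate
# clock domains.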
class FIFOFormalCase(FHDLTestCase):
    def check_sync_fifo(self, fifo):
        self.assertFormal(FIFOModelEquivalenceSpec(fifo, r_domain="sync", w_domain="sync"),
                          mode="bmc", depth=fifo.depth + 1)
        self.assertFormal(FIFOContractSpec(fifo, r_domain="sync", w_domain="sync",
                                           bound=fifo.depth * 2 + 1),
                          mode="hybrid", depth=fifo.depth * 2 + 1)

    def test_sync_fwft_pot(self):
        self.check_sync_fifo(SyncFIFO(width=8, depth=4, fwft=True))

    def test_sync_fwft_npot(self):
        self.check_sync_fifo(SyncFIFO(width=8, depth=5, fwft=True))

    def test_sync_not_fwft_pot(self):
        self.check_sync_fifo(SyncFIFO(width=8, depth=4, fwft=False))

    def test_sync_not_fwft_npot(self):
        self.check_sync_fifo(SyncFIFO(width=8, depth=5, fwft=False))

    def test_sync_buffered_pot(self):
        self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=4))

    def test_sync_buffered_potp1(self):
        self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=5))

    def test_sync_buffered_potm1(self):
        self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=3))

    def check_async_fifo(self, fifo):
        # TODO: properly doing model equivalence checking on this likely requires multiclock,
        # which is not really documented nor is it clear how to use it.
        # self.assertFormal(FIFOModelEquivalenceSpec(fifo, r_domain="read", w_domain="write"),
        #                   mode="bmc", depth=fifo.depth * 3 + 1)
        self.assertFormal(FIFOContractSpec(fifo, r_domain="read", w_domain="write",
                                           bound=fifo.depth * 4 + 1),
                          mode="hybrid", depth=fifo.depth * 4 + 1)

    def test_async(self):
        self.check_async_fifo(AsyncFIFO(width=8, depth=4))

    def test_async_buffered(self):
        self.check_async_fifo(AsyncFIFOBuffered(width=8, depth=4))

@ -0,0 +1,199 @@
from .utils import *
from ..hdl import *
from ..hdl.rec import *
from ..back.pysim import *
from ..lib.io import *
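
# NOTE (editorial, not part of the original commit): each expected `fields` entry below has the
# form "name": ((width, signed), direction).  With the default xdr=0 pin_layout() produces only
# plain i/o/oe fields; xdr=1 adds the registered variants with their clock fields (i_clk/o_clk),
# and xdr=2 produces the DDR variants (i0/i1, o0/o1).  A minimal usage sketch:
#
#     pin = Pin(2, dir="io", xdr=2)   # as exercised by PinTestCase at the end of this file
#     # the record then exposes pin.i0, pin.i1, pin.o0, pin.o1, pin.oe, pin.i_clk, pin.o_clk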
class PinLayoutCombTestCase(FHDLTestCase):
    def test_pin_layout_i(self):
        layout_1 = pin_layout(1, dir="i")
        self.assertEqual(layout_1.fields, {
            "i": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="i")
        self.assertEqual(layout_2.fields, {
            "i": ((2, False), DIR_NONE),
        })

    def test_pin_layout_o(self):
        layout_1 = pin_layout(1, dir="o")
        self.assertEqual(layout_1.fields, {
            "o": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="o")
        self.assertEqual(layout_2.fields, {
            "o": ((2, False), DIR_NONE),
        })

    def test_pin_layout_oe(self):
        layout_1 = pin_layout(1, dir="oe")
        self.assertEqual(layout_1.fields, {
            "o": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="oe")
        self.assertEqual(layout_2.fields, {
            "o": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

    def test_pin_layout_io(self):
        layout_1 = pin_layout(1, dir="io")
        self.assertEqual(layout_1.fields, {
            "i": ((1, False), DIR_NONE),
            "o": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="io")
        self.assertEqual(layout_2.fields, {
            "i": ((2, False), DIR_NONE),
            "o": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })


class PinLayoutSDRTestCase(FHDLTestCase):
    def test_pin_layout_i(self):
        layout_1 = pin_layout(1, dir="i", xdr=1)
        self.assertEqual(layout_1.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="i", xdr=1)
        self.assertEqual(layout_2.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i": ((2, False), DIR_NONE),
        })

    def test_pin_layout_o(self):
        layout_1 = pin_layout(1, dir="o", xdr=1)
        self.assertEqual(layout_1.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="o", xdr=1)
        self.assertEqual(layout_2.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o": ((2, False), DIR_NONE),
        })

    def test_pin_layout_oe(self):
        layout_1 = pin_layout(1, dir="oe", xdr=1)
        self.assertEqual(layout_1.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="oe", xdr=1)
        self.assertEqual(layout_2.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

    def test_pin_layout_io(self):
        layout_1 = pin_layout(1, dir="io", xdr=1)
        self.assertEqual(layout_1.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i": ((1, False), DIR_NONE),
            "o_clk": ((1, False), DIR_NONE),
            "o": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="io", xdr=1)
        self.assertEqual(layout_2.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i": ((2, False), DIR_NONE),
            "o_clk": ((1, False), DIR_NONE),
            "o": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })


class PinLayoutDDRTestCase(FHDLTestCase):
    def test_pin_layout_i(self):
        layout_1 = pin_layout(1, dir="i", xdr=2)
        self.assertEqual(layout_1.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i0": ((1, False), DIR_NONE),
            "i1": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="i", xdr=2)
        self.assertEqual(layout_2.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i0": ((2, False), DIR_NONE),
            "i1": ((2, False), DIR_NONE),
        })

    def test_pin_layout_o(self):
        layout_1 = pin_layout(1, dir="o", xdr=2)
        self.assertEqual(layout_1.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((1, False), DIR_NONE),
            "o1": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="o", xdr=2)
        self.assertEqual(layout_2.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((2, False), DIR_NONE),
            "o1": ((2, False), DIR_NONE),
        })

    def test_pin_layout_oe(self):
        layout_1 = pin_layout(1, dir="oe", xdr=2)
        self.assertEqual(layout_1.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((1, False), DIR_NONE),
            "o1": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="oe", xdr=2)
        self.assertEqual(layout_2.fields, {
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((2, False), DIR_NONE),
            "o1": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

    def test_pin_layout_io(self):
        layout_1 = pin_layout(1, dir="io", xdr=2)
        self.assertEqual(layout_1.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i0": ((1, False), DIR_NONE),
            "i1": ((1, False), DIR_NONE),
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((1, False), DIR_NONE),
            "o1": ((1, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })

        layout_2 = pin_layout(2, dir="io", xdr=2)
        self.assertEqual(layout_2.fields, {
            "i_clk": ((1, False), DIR_NONE),
            "i0": ((2, False), DIR_NONE),
            "i1": ((2, False), DIR_NONE),
            "o_clk": ((1, False), DIR_NONE),
            "o0": ((2, False), DIR_NONE),
            "o1": ((2, False), DIR_NONE),
            "oe": ((1, False), DIR_NONE),
        })


class PinTestCase(FHDLTestCase):
    def test_attributes(self):
        pin = Pin(2, dir="io", xdr=2)
        self.assertEqual(pin.width, 2)
        self.assertEqual(pin.dir, "io")
        self.assertEqual(pin.xdr, 2)

Some files were not shown because too many files have changed in this diff.