diff --git a/README.md b/README.md index 93bc8ca..8316610 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,14 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen ## Dependencies - [nMigen](https://github.com/m-labs/nmigen) + + Note that: + + 1. Even though this project includes a copy of nMigen, it is highly recommended to remove the copy and install the latest version of nMigen on your system before building this repo + 1. The nMigen package that comes with the [Nix](https://nixos.org/features.html) package manager may not contain the latest changes and therefore may not work with this repo + 1. If you do choose to keep the copy of nMigen provided in this repo, you may need to install its dependencies separately for the build to work: + - [setuptools](https://pypi.org/project/setuptools/) + - [pyvcd](https://pypi.org/project/pyvcd/) - [Yosys](https://github.com/YosysHQ/yosys) - [SymbiYosys](https://github.com/YosysHQ/SymbiYosys) @@ -12,9 +20,10 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen | Directory | Description | | --- | --- | +| `nmigen` | [nMigen](https://github.com/m-labs/nmigen) | | `rvfi` | RISC-V Formal Verification Framework (nMigen port) | | `rvfi/insns` | Supported RISC-V instructions and ISAs | -| `rvfi/cores` | Example cores to be integrated with riscv-formal-nmigen (WIP) | +| `rvfi/cores` | Example cores for verification with riscv-formal-nmigen | | `rvfi/cores/minerva` | The [Minerva](https://github.com/lambdaconcept/minerva) core | ## Build @@ -27,6 +36,8 @@ A port of [riscv-formal](https://github.com/SymbioticEDA/riscv-formal) to nMigen $ python -m rvfi.cores.minerva.verify ``` +You may see warning messages about unused `Elaboratable`s and deprecated use of `Simulation`; these can be safely ignored. + ## Scope Support for the RV32I base ISA and RV32M extension are planned and well underway. Support for other ISAs in the original riscv-formal such as RV32C and their 64-bit counterparts may also be added in the future as time permits. diff --git a/nmigen/LICENSE.txt b/nmigen/LICENSE.txt new file mode 100644 index 0000000..2ad6276 --- /dev/null +++ b/nmigen/LICENSE.txt @@ -0,0 +1,29 @@ +Copyright (C) 2011-2020 M-Labs Limited +Copyright (C) 2020 whitequark + +Redistribution and use in source and binary forms, with or without modification, +are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ + +Other authors retain ownership of their contributions. If a submission can +reasonably be considered independently copyrightable, it's yours and we +encourage you to claim it with appropriate copyright notices. This submission +then falls under the "otherwise noted" category. All submissions are strongly +encouraged to use the two-clause BSD license reproduced above. diff --git a/nmigen/README.md b/nmigen/README.md new file mode 100644 index 0000000..29e68cb --- /dev/null +++ b/nmigen/README.md @@ -0,0 +1,84 @@ +# nMigen + +## A refreshed Python toolbox for building complex digital hardware + +**Although nMigen is incomplete and in active development, it can already be used for real-world designs. The nMigen language (`nmigen.hdl.ast`, `nmigen.hdl.dsl`) will not undergo incompatible changes. The nMigen standard library (`nmigen.lib`) and build system (`nmigen.build`) will undergo minimal changes before their design is finalized.** + +Despite being faster than schematics entry, hardware design with Verilog and VHDL remains tedious and inefficient for several reasons. The event-driven model introduces issues and manual coding that are unnecessary for synchronous circuits, which represent the lion's share of today's logic designs. Counterintuitive arithmetic rules result in steeper learning curves and provide a fertile ground for subtle bugs in designs. Finally, support for procedural generation of logic (metaprogramming) through "generate" statements is very limited and restricts the ways code can be made generic, reused and organized. + +To address those issues, we have developed the *nMigen FHDL*, a library that replaces the event-driven paradigm with the notions of combinatorial and synchronous statements, has arithmetic rules that make integers always behave like mathematical integers, and most importantly allows the design's logic to be constructed by a Python program. This last point enables hardware designers to take advantage of the richness of the Python language—object oriented programming, function parameters, generators, operator overloading, libraries, etc.—to build well organized, reusable and elegant designs. + +Other nMigen libraries are built on FHDL and provide various tools and logic cores. nMigen also contains a simulator that allows test benches to be written in Python. + +See the [doc/](doc/) folder for more technical information. + +nMigen is based on [Migen][], a hardware description language developed by [M-Labs][]. Although Migen works very well in production, its design could be improved in many fundamental ways, and nMigen reimplements Migen concepts from scratch to do so. nMigen also provides an extensive [compatibility layer](#migration-from-migen) that makes it possible to build and simulate most Migen designs unmodified, as well as integrate modules written for Migen and nMigen. + +The development of nMigen has been supported by [M-Labs][] and [LambdaConcept][]. + +[migen]: https://m-labs.hk/migen +[yosys]: http://www.clifford.at/yosys/ +[m-labs]: https://m-labs.hk +[lambdaconcept]: http://lambdaconcept.com/ + +### HLS? + +nMigen is *not* a "Python-to-FPGA" conventional high level synthesis (HLS) tool. It will *not* take a Python program as input and generate a hardware implementation of it. In nMigen, the Python program is executed by a regular Python interpreter, and it emits explicit statements in the FHDL domain-specific language. 
Writing a conventional HLS tool that uses nMigen as an internal component might be a good idea, on the other hand :) + +### Installation + +nMigen requires Python 3.6 (or newer), [Yosys][] 0.9 (or newer), as well as a device-specific toolchain. + + pip install git+https://github.com/m-labs/nmigen.git + pip install git+https://github.com/m-labs/nmigen-boards.git + +### Introduction + +TBD + +### Supported devices + +nMigen can be used to target any FPGA or ASIC process that accepts behavioral Verilog-2001 as input. It also offers extended support for many FPGA families, providing toolchain integration, abstractions for device-specific primitives, and more. Specifically: + + * Lattice iCE40 (toolchains: **Yosys+nextpnr**, LSE-iCECube2, Synplify-iCECube2); + * Lattice MachXO2 (toolchains: Diamond); + * Lattice ECP5 (toolchains: **Yosys+nextpnr**, Diamond); + * Xilinx Spartan 3A (toolchains: ISE); + * Xilinx Spartan 6 (toolchains: ISE); + * Xilinx 7-series (toolchains: Vivado); + * Xilinx UltraScale (toolchains: Vivado); + * Intel (toolchains: Quartus). + +FOSS toolchains are listed in **bold**. + +### Migration from [Migen][] + +If you are already familiar with [Migen][], the good news is that nMigen provides a comprehensive Migen compatibility layer! An existing Migen design can be synthesized and simulated with nMigen in three steps: + + 1. Replace all `from migen import <...>` statements with `from nmigen.compat import <...>`. + 2. Replace every explicit mention of the default `sys` clock domain with the new default `sync` clock domain. E.g. `ClockSignal("sys")` is changed to `ClockSignal("sync")`. + 3. Migrate from Migen build/platform system to nMigen build/platform system. nMigen does not provide a build/platform compatibility layer because both the board definition files and the platform abstraction differ too much. + +Note that nMigen will **not** produce the exact same RTL as Migen did. nMigen has been built to allow you to take advantage of the new and improved functionality it has (such as producing hierarchical RTL) while making migration as painless as possible. + +Once your design passes verification with nMigen, you can migrate it to the nMigen syntax one module at a time. Migen modules can be added to nMigen modules and vice versa, so there is no restriction on the order of migration, either. + +### Community + +nMigen discussions take place on the M-Labs IRC channel, [#m-labs at freenode.net](https://webchat.freenode.net/?channels=m-labs). Feel free to join to ask questions about using nMigen or discuss ongoing development of nMigen and its related projects. + +### License + +nMigen is released under the very permissive two-clause BSD license. Under the terms of this license, you are authorized to use nMigen for closed-source proprietary designs. + +Even though we do not require you to do so, these things are awesome, so please do them if possible: + * tell us that you are using nMigen + * put the [nMigen logo](doc/nmigen_logo.svg) on the page of a product using it, with a link to https://nmigen.org + * cite nMigen in publications related to research it has helped + * send us feedback and suggestions for improvements + * send us bug reports when something goes wrong + * send us the modifications and improvements you have done to nMigen as pull requests on GitHub + +See LICENSE file for full copyright and license info. + + "Electricity! It's like magic!" 
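Since the Introduction section of the bundled nMigen README is still TBD, here is a minimal sketch of how a design and its Python test bench fit together. The `Counter` module, its width, and the `counter.vcd` file name are illustrative assumptions, not part of this repository; the simulator API it uses (`Simulator`, `add_clock`, `add_sync_process`, `write_vcd`) is the one defined in `nmigen/back/pysim.py` further down in this diff.

```python
from nmigen import Elaboratable, Module, Signal
from nmigen.back.pysim import Simulator


class Counter(Elaboratable):
    """Free-running counter; an illustrative example, not part of this repo."""
    def __init__(self, width):
        self.count = Signal(width)

    def elaborate(self, platform):
        m = Module()
        # A synchronous statement: `count` increments on every edge of the
        # default "sync" clock domain.
        m.d.sync += self.count.eq(self.count + 1)
        return m


dut = Counter(width=8)
sim = Simulator(dut)
sim.add_clock(1e-6)  # drive the "sync" domain with a 1 MHz clock

def bench():
    # In a sync process, a bare `yield` waits for the next clock edge,
    # and `yield signal` reads the current value of `signal`.
    for _ in range(4):
        yield
        print((yield dut.count))

sim.add_sync_process(bench)
with sim.write_vcd("counter.vcd"):
    sim.run()
```

Writing the waveform file requires pyvcd, which is why it is listed above as a dependency of the bundled copy.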
diff --git a/nmigen/__init__.py b/nmigen/__init__.py new file mode 100644 index 0000000..5646c6d --- /dev/null +++ b/nmigen/__init__.py @@ -0,0 +1,20 @@ +import pkg_resources +try: + __version__ = pkg_resources.get_distribution(__name__).version +except pkg_resources.DistributionNotFound: + pass + + +from .hdl import * + + +__all__ = [ + "Shape", "unsigned", "signed", + "Value", "Const", "C", "Mux", "Cat", "Repl", "Array", "Signal", "ClockSignal", "ResetSignal", + "Module", + "ClockDomain", + "Elaboratable", "Fragment", "Instance", + "Memory", + "Record", + "DomainRenamer", "ResetInserter", "EnableInserter", +] diff --git a/nmigen/__pycache__/__init__.cpython-37.pyc b/nmigen/__pycache__/__init__.cpython-37.pyc new file mode 100644 index 0000000..75c37a3 Binary files /dev/null and b/nmigen/__pycache__/__init__.cpython-37.pyc differ diff --git a/nmigen/__pycache__/__init__.cpython-39.pyc b/nmigen/__pycache__/__init__.cpython-39.pyc new file mode 100644 index 0000000..bb19314 Binary files /dev/null and b/nmigen/__pycache__/__init__.cpython-39.pyc differ diff --git a/nmigen/__pycache__/_toolchain.cpython-37.pyc b/nmigen/__pycache__/_toolchain.cpython-37.pyc new file mode 100644 index 0000000..1b0606b Binary files /dev/null and b/nmigen/__pycache__/_toolchain.cpython-37.pyc differ diff --git a/nmigen/__pycache__/_unused.cpython-37.pyc b/nmigen/__pycache__/_unused.cpython-37.pyc new file mode 100644 index 0000000..6307c3f Binary files /dev/null and b/nmigen/__pycache__/_unused.cpython-37.pyc differ diff --git a/nmigen/__pycache__/_utils.cpython-37.pyc b/nmigen/__pycache__/_utils.cpython-37.pyc new file mode 100644 index 0000000..23e688e Binary files /dev/null and b/nmigen/__pycache__/_utils.cpython-37.pyc differ diff --git a/nmigen/__pycache__/asserts.cpython-37.pyc b/nmigen/__pycache__/asserts.cpython-37.pyc new file mode 100644 index 0000000..7ed0d89 Binary files /dev/null and b/nmigen/__pycache__/asserts.cpython-37.pyc differ diff --git a/nmigen/__pycache__/tracer.cpython-37.pyc b/nmigen/__pycache__/tracer.cpython-37.pyc new file mode 100644 index 0000000..9e184e2 Binary files /dev/null and b/nmigen/__pycache__/tracer.cpython-37.pyc differ diff --git a/nmigen/__pycache__/utils.cpython-37.pyc b/nmigen/__pycache__/utils.cpython-37.pyc new file mode 100644 index 0000000..de6c672 Binary files /dev/null and b/nmigen/__pycache__/utils.cpython-37.pyc differ diff --git a/nmigen/_toolchain.py b/nmigen/_toolchain.py new file mode 100644 index 0000000..fca2bac --- /dev/null +++ b/nmigen/_toolchain.py @@ -0,0 +1,37 @@ +import os +import shutil + + +__all__ = ["ToolNotFound", "tool_env_var", "has_tool", "require_tool"] + + +class ToolNotFound(Exception): + pass + + +def tool_env_var(name): + return name.upper().replace("-", "_") + + +def _get_tool(name): + return os.environ.get(tool_env_var(name), name) + + +def has_tool(name): + return shutil.which(_get_tool(name)) is not None + + +def require_tool(name): + env_var = tool_env_var(name) + path = _get_tool(name) + if shutil.which(path) is None: + if env_var in os.environ: + raise ToolNotFound("Could not find required tool {} in {} as " + "specified via the {} environment variable". + format(name, path, env_var)) + else: + raise ToolNotFound("Could not find required tool {} in PATH. Place " + "it directly in PATH or specify path explicitly " + "via the {} environment variable". 
+ format(name, env_var)) + return path diff --git a/nmigen/_unused.py b/nmigen/_unused.py new file mode 100644 index 0000000..e0c67a7 --- /dev/null +++ b/nmigen/_unused.py @@ -0,0 +1,45 @@ +import sys +import warnings + +from ._utils import get_linter_option + + +__all__ = ["UnusedMustUse", "MustUse"] + + +class UnusedMustUse(Warning): + pass + + +class MustUse: + _MustUse__silence = False + _MustUse__warning = UnusedMustUse + + def __new__(cls, *args, src_loc_at=0, **kwargs): + frame = sys._getframe(1 + src_loc_at) + self = super().__new__(cls) + self._MustUse__used = False + self._MustUse__context = dict( + filename=frame.f_code.co_filename, + lineno=frame.f_lineno, + source=self) + return self + + def __del__(self): + if self._MustUse__silence: + return + if hasattr(self, "_MustUse__used") and not self._MustUse__used: + if get_linter_option(self._MustUse__context["filename"], + self._MustUse__warning.__name__, bool, True): + warnings.warn_explicit( + "{!r} created but never used".format(self), self._MustUse__warning, + **self._MustUse__context) + + +_old_excepthook = sys.excepthook +def _silence_elaboratable(type, value, traceback): + # Don't show anything if the interpreter crashed; that'd just obscure the exception + # traceback instead of helping. + MustUse._MustUse__silence = True + _old_excepthook(type, value, traceback) +sys.excepthook = _silence_elaboratable diff --git a/nmigen/_utils.py b/nmigen/_utils.py new file mode 100644 index 0000000..6168ea1 --- /dev/null +++ b/nmigen/_utils.py @@ -0,0 +1,116 @@ +import contextlib +import functools +import warnings +import linecache +import re +from collections import OrderedDict +from collections.abc import Iterable +from contextlib import contextmanager + +from .utils import * + + +__all__ = ["flatten", "union" , "log2_int", "bits_for", "memoize", "final", "deprecated", + "get_linter_options", "get_linter_option"] + + +def flatten(i): + for e in i: + if isinstance(e, Iterable): + yield from flatten(e) + else: + yield e + + +def union(i, start=None): + r = start + for e in i: + if r is None: + r = e + else: + r |= e + return r + + +def memoize(f): + memo = OrderedDict() + @functools.wraps(f) + def g(*args): + if args not in memo: + memo[args] = f(*args) + return memo[args] + return g + + +def final(cls): + def init_subclass(): + raise TypeError("Subclassing {}.{} is not supported" + .format(cls.__module__, cls.__name__)) + cls.__init_subclass__ = init_subclass + return cls + + +def deprecated(message, stacklevel=2): + def decorator(f): + @functools.wraps(f) + def wrapper(*args, **kwargs): + warnings.warn(message, DeprecationWarning, stacklevel=stacklevel) + return f(*args, **kwargs) + return wrapper + return decorator + + +def _ignore_deprecated(f=None): + if f is None: + @contextlib.contextmanager + def context_like(): + with warnings.catch_warnings(): + warnings.filterwarnings(action="ignore", category=DeprecationWarning) + yield + return context_like() + else: + @functools.wraps(f) + def decorator_like(*args, **kwargs): + with warnings.catch_warnings(): + warnings.filterwarnings(action="ignore", category=DeprecationWarning) + f(*args, **kwargs) + return decorator_like + + +def extend(cls): + def decorator(f): + if isinstance(f, property): + name = f.fget.__name__ + else: + name = f.__name__ + setattr(cls, name, f) + return decorator + + +def get_linter_options(filename): + first_line = linecache.getline(filename, 1) + if first_line: + match = re.match(r"^#\s*nmigen:\s*((?:\w+=\w+\s*)(?:,\s*\w+=\w+\s*)*)\n$", first_line) + if match: + 
return dict(map(lambda s: s.strip().split("=", 2), match.group(1).split(","))) + return dict() + + +def get_linter_option(filename, name, type, default): + options = get_linter_options(filename) + if name not in options: + return default + + option = options[name] + if type is bool: + if option in ("1", "yes", "enable"): + return True + if option in ("0", "no", "disable"): + return False + return default + if type is int: + try: + return int(option, 0) + except ValueError: + return default + assert False diff --git a/nmigen/asserts.py b/nmigen/asserts.py new file mode 100644 index 0000000..b0e97b9 --- /dev/null +++ b/nmigen/asserts.py @@ -0,0 +1,2 @@ +from .hdl.ast import AnyConst, AnySeq, Assert, Assume, Cover +from .hdl.ast import Past, Stable, Rose, Fell, Initial diff --git a/nmigen/back/__init__.py b/nmigen/back/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/back/__pycache__/__init__.cpython-37.pyc b/nmigen/back/__pycache__/__init__.cpython-37.pyc new file mode 100644 index 0000000..c48afb9 Binary files /dev/null and b/nmigen/back/__pycache__/__init__.cpython-37.pyc differ diff --git a/nmigen/back/__pycache__/pysim.cpython-37.pyc b/nmigen/back/__pycache__/pysim.cpython-37.pyc new file mode 100644 index 0000000..d2502a7 Binary files /dev/null and b/nmigen/back/__pycache__/pysim.cpython-37.pyc differ diff --git a/nmigen/back/__pycache__/rtlil.cpython-37.pyc b/nmigen/back/__pycache__/rtlil.cpython-37.pyc new file mode 100644 index 0000000..938064d Binary files /dev/null and b/nmigen/back/__pycache__/rtlil.cpython-37.pyc differ diff --git a/nmigen/back/pysim.py b/nmigen/back/pysim.py new file mode 100644 index 0000000..5d599a3 --- /dev/null +++ b/nmigen/back/pysim.py @@ -0,0 +1,1128 @@ +import os +import tempfile +import warnings +import inspect +from contextlib import contextmanager +import itertools +from vcd import VCDWriter +from vcd.gtkw import GTKWSave + +from .._utils import deprecated +from ..hdl.ast import * +from ..hdl.cd import * +from ..hdl.ir import * +from ..hdl.xfrm import ValueVisitor, StatementVisitor, LHSGroupFilter + + +class Command: + pass + + +class Settle(Command): + def __repr__(self): + return "(settle)" + + +class Delay(Command): + def __init__(self, interval=None): + self.interval = None if interval is None else float(interval) + + def __repr__(self): + if self.interval is None: + return "(delay ε)" + else: + return "(delay {:.3}us)".format(self.interval * 1e6) + + +class Tick(Command): + def __init__(self, domain="sync"): + if not isinstance(domain, (str, ClockDomain)): + raise TypeError("Domain must be a string or a ClockDomain instance, not {!r}" + .format(domain)) + assert domain != "comb" + self.domain = domain + + def __repr__(self): + return "(tick {})".format(self.domain) + + +class Passive(Command): + def __repr__(self): + return "(passive)" + + +class Active(Command): + def __repr__(self): + return "(active)" + + +class _WaveformWriter: + def update(self, timestamp, signal, value): + raise NotImplementedError # :nocov: + + def close(self, timestamp): + raise NotImplementedError # :nocov: + + +class _VCDWaveformWriter(_WaveformWriter): + @staticmethod + def timestamp_to_vcd(timestamp): + return timestamp * (10 ** 10) # 1/(100 ps) + + @staticmethod + def decode_to_vcd(signal, value): + return signal.decoder(value).expandtabs().replace(" ", "_") + + def __init__(self, signal_names, *, vcd_file, gtkw_file=None, traces=()): + if isinstance(vcd_file, str): + vcd_file = open(vcd_file, "wt") + if isinstance(gtkw_file, str): + 
gtkw_file = open(gtkw_file, "wt") + + self.vcd_vars = SignalDict() + self.vcd_file = vcd_file + self.vcd_writer = vcd_file and VCDWriter(self.vcd_file, + timescale="100 ps", comment="Generated by nMigen") + + self.gtkw_names = SignalDict() + self.gtkw_file = gtkw_file + self.gtkw_save = gtkw_file and GTKWSave(self.gtkw_file) + + self.traces = [] + + trace_names = SignalDict() + for trace in traces: + if trace not in signal_names: + trace_names[trace] = trace.name + self.traces.append(trace) + + if self.vcd_writer is None: + return + + for signal, names in itertools.chain(signal_names.items(), trace_names.items()): + if signal.decoder: + var_type = "string" + var_size = 1 + var_init = self.decode_to_vcd(signal, signal.reset) + else: + var_type = "wire" + var_size = signal.width + var_init = signal.reset + + for (*var_scope, var_name) in names: + suffix = None + while True: + try: + if suffix is None: + var_name_suffix = var_name + else: + var_name_suffix = "{}${}".format(var_name, suffix) + vcd_var = self.vcd_writer.register_var( + scope=var_scope, name=var_name_suffix, + var_type=var_type, size=var_size, init=var_init) + break + except KeyError: + suffix = (suffix or 0) + 1 + + if signal not in self.vcd_vars: + self.vcd_vars[signal] = set() + self.vcd_vars[signal].add(vcd_var) + + if signal not in self.gtkw_names: + self.gtkw_names[signal] = (*var_scope, var_name_suffix) + + def update(self, timestamp, signal, value): + if signal not in self.vcd_vars: + return + + vcd_timestamp = self.timestamp_to_vcd(timestamp) + if signal.decoder: + var_value = self.decode_to_vcd(signal, value) + else: + var_value = value + for vcd_var in self.vcd_vars[signal]: + self.vcd_writer.change(vcd_var, vcd_timestamp, var_value) + + def close(self, timestamp): + if self.vcd_writer is not None: + self.vcd_writer.close(self.timestamp_to_vcd(timestamp)) + + if self.gtkw_save is not None: + self.gtkw_save.dumpfile(self.vcd_file.name) + self.gtkw_save.dumpfile_size(self.vcd_file.tell()) + + self.gtkw_save.treeopen("top") + for signal in self.traces: + if len(signal) > 1 and not signal.decoder: + suffix = "[{}:0]".format(len(signal) - 1) + else: + suffix = "" + self.gtkw_save.trace(".".join(self.gtkw_names[signal]) + suffix) + + if self.vcd_file is not None: + self.vcd_file.close() + if self.gtkw_file is not None: + self.gtkw_file.close() + + +class _Process: + __slots__ = ("runnable", "passive") + + def reset(self): + raise NotImplementedError # :nocov: + + def run(self): + raise NotImplementedError # :nocov: + + @property + def name(self): + raise NotImplementedError # :nocov: + + +class _SignalState: + __slots__ = ("signal", "curr", "next", "waiters", "pending") + + def __init__(self, signal, pending): + self.signal = signal + self.pending = pending + self.waiters = dict() + self.reset() + + def reset(self): + self.curr = self.next = self.signal.reset + + def set(self, value): + if self.next == value: + return + self.next = value + self.pending.add(self) + + def wait(self, task, *, trigger=None): + assert task not in self.waiters + self.waiters[task] = trigger + + def commit(self): + if self.curr == self.next: + return False + self.curr = self.next + return True + + def wakeup(self): + awoken_any = False + for process, trigger in self.waiters.items(): + if trigger is None or trigger == self.curr: + process.runnable = awoken_any = True + return awoken_any + + +class _SimulatorState: + def __init__(self): + self.signals = SignalDict() + self.pending = set() + + self.timestamp = 0.0 + self.deadlines = dict() + + 
self.waveform_writer = None + + def reset(self): + for signal_state in self.signals.values(): + signal_state.reset() + self.pending.clear() + + self.timestamp = 0.0 + self.deadlines.clear() + + def for_signal(self, signal): + try: + return self.signals[signal] + except KeyError: + signal_state = _SignalState(signal, self.pending) + self.signals[signal] = signal_state + return signal_state + + def commit(self): + awoken_any = False + for signal_state in self.pending: + if signal_state.commit(): + if signal_state.wakeup(): + awoken_any = True + if self.waveform_writer is not None: + self.waveform_writer.update(self.timestamp, + signal_state.signal, signal_state.curr) + return awoken_any + + def advance(self): + nearest_processes = set() + nearest_deadline = None + for process, deadline in self.deadlines.items(): + if deadline is None: + if nearest_deadline is not None: + nearest_processes.clear() + nearest_processes.add(process) + nearest_deadline = self.timestamp + break + elif nearest_deadline is None or deadline <= nearest_deadline: + assert deadline >= self.timestamp + if nearest_deadline is not None and deadline < nearest_deadline: + nearest_processes.clear() + nearest_processes.add(process) + nearest_deadline = deadline + + if not nearest_processes: + return False + + for process in nearest_processes: + process.runnable = True + del self.deadlines[process] + self.timestamp = nearest_deadline + + return True + + def start_waveform(self, waveform_writer): + if self.timestamp != 0.0: + raise ValueError("Cannot start writing waveforms after advancing simulation time") + if self.waveform_writer is not None: + raise ValueError("Already writing waveforms to {!r}" + .format(self.waveform_writer)) + self.waveform_writer = waveform_writer + + def finish_waveform(self): + if self.waveform_writer is None: + return + self.waveform_writer.close(self.timestamp) + self.waveform_writer = None + + +class _EvalContext: + __slots__ = ("state", "indexes", "slots") + + def __init__(self, state): + self.state = state + self.indexes = SignalDict() + self.slots = [] + + def get_signal(self, signal): + try: + return self.indexes[signal] + except KeyError: + index = len(self.slots) + self.slots.append(self.state.for_signal(signal)) + self.indexes[signal] = index + return index + + def get_in_signal(self, signal, *, trigger=None): + index = self.get_signal(signal) + self.slots[index].waiters[self] = trigger + return index + + def get_out_signal(self, signal): + return self.get_signal(signal) + + +class _Emitter: + def __init__(self): + self._buffer = [] + self._suffix = 0 + self._level = 0 + + def append(self, code): + self._buffer.append(" " * self._level) + self._buffer.append(code) + self._buffer.append("\n") + + @contextmanager + def indent(self): + self._level += 1 + yield + self._level -= 1 + + def flush(self, indent=""): + code = "".join(self._buffer) + self._buffer.clear() + return code + + def gen_var(self, prefix): + name = f"{prefix}_{self._suffix}" + self._suffix += 1 + return name + + def def_var(self, prefix, value): + name = self.gen_var(prefix) + self.append(f"{name} = {value}") + return name + + +class _Compiler: + def __init__(self, context, emitter): + self.context = context + self.emitter = emitter + + +class _ValueCompiler(ValueVisitor, _Compiler): + helpers = { + "sign": lambda value, sign: value | sign if value & sign else value, + "zdiv": lambda lhs, rhs: 0 if rhs == 0 else lhs // rhs, + } + + def on_ClockSignal(self, value): + raise NotImplementedError # :nocov: + + def 
on_ResetSignal(self, value): + raise NotImplementedError # :nocov: + + def on_Record(self, value): + return self(Cat(value.fields.values())) + + def on_AnyConst(self, value): + raise NotImplementedError # :nocov: + + def on_AnySeq(self, value): + raise NotImplementedError # :nocov: + + def on_Sample(self, value): + raise NotImplementedError # :nocov: + + def on_Initial(self, value): + raise NotImplementedError # :nocov: + + +class _RHSValueCompiler(_ValueCompiler): + def __init__(self, context, emitter, *, mode, inputs=None): + super().__init__(context, emitter) + assert mode in ("curr", "next") + self.mode = mode + # If not None, `inputs` gets populated with RHS signals. + self.inputs = inputs + + def on_Const(self, value): + return f"{value.value}" + + def on_Signal(self, value): + if self.inputs is not None: + self.inputs.add(value) + + if self.mode == "curr": + return f"slots[{self.context.get_signal(value)}].{self.mode}" + else: + return f"next_{self.context.get_signal(value)}" + + def on_Operator(self, value): + def mask(value): + value_mask = (1 << len(value)) - 1 + return f"({self(value)} & {value_mask})" + + def sign(value): + if value.shape().signed: + return f"sign({mask(value)}, {-1 << (len(value) - 1)})" + else: # unsigned + return mask(value) + + if len(value.operands) == 1: + arg, = value.operands + if value.operator == "~": + return f"(~{self(arg)})" + if value.operator == "-": + return f"(-{self(arg)})" + if value.operator == "b": + return f"bool({mask(arg)})" + if value.operator == "r|": + return f"({mask(arg)} != 0)" + if value.operator == "r&": + return f"({mask(arg)} == {(1 << len(arg)) - 1})" + if value.operator == "r^": + # Believe it or not, this is the fastest way to compute a sideways XOR in Python. + return f"(format({mask(arg)}, 'b').count('1') % 2)" + if value.operator in ("u", "s"): + # These operators don't change the bit pattern, only its interpretation. 
+ return self(arg) + elif len(value.operands) == 2: + lhs, rhs = value.operands + lhs_mask = (1 << len(lhs)) - 1 + rhs_mask = (1 << len(rhs)) - 1 + if value.operator == "+": + return f"({sign(lhs)} + {sign(rhs)})" + if value.operator == "-": + return f"({sign(lhs)} - {sign(rhs)})" + if value.operator == "*": + return f"({sign(lhs)} * {sign(rhs)})" + if value.operator == "//": + return f"zdiv({sign(lhs)}, {sign(rhs)})" + if value.operator == "&": + return f"({self(lhs)} & {self(rhs)})" + if value.operator == "|": + return f"({self(lhs)} | {self(rhs)})" + if value.operator == "^": + return f"({self(lhs)} ^ {self(rhs)})" + if value.operator == "<<": + return f"({sign(lhs)} << {sign(rhs)})" + if value.operator == ">>": + return f"({sign(lhs)} >> {sign(rhs)})" + if value.operator == "==": + return f"({sign(lhs)} == {sign(rhs)})" + if value.operator == "!=": + return f"({sign(lhs)} != {sign(rhs)})" + if value.operator == "<": + return f"({sign(lhs)} < {sign(rhs)})" + if value.operator == "<=": + return f"({sign(lhs)} <= {sign(rhs)})" + if value.operator == ">": + return f"({sign(lhs)} > {sign(rhs)})" + if value.operator == ">=": + return f"({sign(lhs)} >= {sign(rhs)})" + elif len(value.operands) == 3: + if value.operator == "m": + sel, val1, val0 = value.operands + return f"({self(val1)} if {self(sel)} else {self(val0)})" + raise NotImplementedError("Operator '{}' not implemented".format(value.operator)) # :nocov: + + def on_Slice(self, value): + return f"(({self(value.value)} >> {value.start}) & {(1 << len(value)) - 1})" + + def on_Part(self, value): + offset_mask = (1 << len(value.offset)) - 1 + offset = f"(({self(value.offset)} & {offset_mask}) * {value.stride})" + return f"({self(value.value)} >> {offset} & " \ + f"{(1 << value.width) - 1})" + + def on_Cat(self, value): + gen_parts = [] + offset = 0 + for part in value.parts: + part_mask = (1 << len(part)) - 1 + gen_parts.append(f"(({self(part)} & {part_mask}) << {offset})") + offset += len(part) + if gen_parts: + return f"({' | '.join(gen_parts)})" + return f"0" + + def on_Repl(self, value): + part_mask = (1 << len(value.value)) - 1 + gen_part = self.emitter.def_var("repl", f"{self(value.value)} & {part_mask}") + gen_parts = [] + offset = 0 + for _ in range(value.count): + gen_parts.append(f"({gen_part} << {offset})") + offset += len(value.value) + if gen_parts: + return f"({' | '.join(gen_parts)})" + return f"0" + + def on_ArrayProxy(self, value): + index_mask = (1 << len(value.index)) - 1 + gen_index = self.emitter.def_var("rhs_index", f"{self(value.index)} & {index_mask}") + gen_value = self.emitter.gen_var("rhs_proxy") + if value.elems: + gen_elems = [] + for index, elem in enumerate(value.elems): + if index == 0: + self.emitter.append(f"if {gen_index} == {index}:") + else: + self.emitter.append(f"elif {gen_index} == {index}:") + with self.emitter.indent(): + self.emitter.append(f"{gen_value} = {self(elem)}") + self.emitter.append(f"else:") + with self.emitter.indent(): + self.emitter.append(f"{gen_value} = {self(value.elems[-1])}") + return gen_value + else: + return f"0" + + @classmethod + def compile(cls, context, value, *, mode, inputs=None): + emitter = _Emitter() + compiler = cls(context, emitter, mode=mode, inputs=inputs) + emitter.append(f"result = {compiler(value)}") + return emitter.flush() + + +class _LHSValueCompiler(_ValueCompiler): + def __init__(self, context, emitter, *, rhs, outputs=None): + super().__init__(context, emitter) + # `rrhs` is used to translate rvalues that are syntactically a part of an lvalue, e.g. 
+ # the offset of a Part. + self.rrhs = rhs + # `lrhs` is used to translate the read part of a read-modify-write cycle during partial + # update of an lvalue. + self.lrhs = _RHSValueCompiler(context, emitter, mode="next", inputs=None) + # If not None, `outputs` gets populated with signals on LHS. + self.outputs = outputs + + def on_Const(self, value): + raise TypeError # :nocov: + + def on_Signal(self, value): + if self.outputs is not None: + self.outputs.add(value) + + def gen(arg): + value_mask = (1 << len(value)) - 1 + if value.shape().signed: + value_sign = f"sign({arg} & {value_mask}, {-1 << (len(value) - 1)})" + else: # unsigned + value_sign = f"{arg} & {value_mask}" + self.emitter.append(f"next_{self.context.get_out_signal(value)} = {value_sign}") + return gen + + def on_Operator(self, value): + raise TypeError # :nocov: + + def on_Slice(self, value): + def gen(arg): + width_mask = (1 << (value.stop - value.start)) - 1 + self(value.value)(f"({self.lrhs(value.value)} & " \ + f"{~(width_mask << value.start)} | " \ + f"(({arg} & {width_mask}) << {value.start}))") + return gen + + def on_Part(self, value): + def gen(arg): + width_mask = (1 << value.width) - 1 + offset_mask = (1 << len(value.offset)) - 1 + offset = f"(({self.rrhs(value.offset)} & {offset_mask}) * {value.stride})" + self(value.value)(f"({self.lrhs(value.value)} & " \ + f"~({width_mask} << {offset}) | " \ + f"(({arg} & {width_mask}) << {offset}))") + return gen + + def on_Cat(self, value): + def gen(arg): + gen_arg = self.emitter.def_var("cat", arg) + gen_parts = [] + offset = 0 + for part in value.parts: + part_mask = (1 << len(part)) - 1 + self(part)(f"(({gen_arg} >> {offset}) & {part_mask})") + offset += len(part) + return gen + + def on_Repl(self, value): + raise TypeError # :nocov: + + def on_ArrayProxy(self, value): + def gen(arg): + index_mask = (1 << len(value.index)) - 1 + gen_index = self.emitter.def_var("index", f"{self.rrhs(value.index)} & {index_mask}") + if value.elems: + gen_elems = [] + for index, elem in enumerate(value.elems): + if index == 0: + self.emitter.append(f"if {gen_index} == {index}:") + else: + self.emitter.append(f"elif {gen_index} == {index}:") + with self.emitter.indent(): + self(elem)(arg) + self.emitter.append(f"else:") + with self.emitter.indent(): + self(value.elems[-1])(arg) + else: + self.emitter.append(f"pass") + return gen + + @classmethod + def compile(cls, context, stmt, *, inputs=None, outputs=None): + emitter = _Emitter() + compiler = cls(context, emitter, inputs=inputs, outputs=outputs) + compiler(stmt) + return emitter.flush() + + +class _StatementCompiler(StatementVisitor, _Compiler): + def __init__(self, context, emitter, *, inputs=None, outputs=None): + super().__init__(context, emitter) + self.rhs = _RHSValueCompiler(context, emitter, mode="curr", inputs=inputs) + self.lhs = _LHSValueCompiler(context, emitter, rhs=self.rhs, outputs=outputs) + + def on_statements(self, stmts): + for stmt in stmts: + self(stmt) + if not stmts: + self.emitter.append("pass") + + def on_Assign(self, stmt): + return self.lhs(stmt.lhs)(self.rhs(stmt.rhs)) + + def on_Switch(self, stmt): + gen_test = self.emitter.def_var("test", + f"{self.rhs(stmt.test)} & {(1 << len(stmt.test)) - 1}") + for index, (patterns, stmts) in enumerate(stmt.cases.items()): + gen_checks = [] + if not patterns: + gen_checks.append(f"True") + else: + for pattern in patterns: + if "-" in pattern: + mask = int("".join("0" if b == "-" else "1" for b in pattern), 2) + value = int("".join("0" if b == "-" else b for b in pattern), 
2) + gen_checks.append(f"({gen_test} & {mask}) == {value}") + else: + value = int(pattern, 2) + gen_checks.append(f"{gen_test} == {value}") + if index == 0: + self.emitter.append(f"if {' or '.join(gen_checks)}:") + else: + self.emitter.append(f"elif {' or '.join(gen_checks)}:") + with self.emitter.indent(): + self(stmts) + + def on_Assert(self, stmt): + raise NotImplementedError # :nocov: + + def on_Assume(self, stmt): + raise NotImplementedError # :nocov: + + def on_Cover(self, stmt): + raise NotImplementedError # :nocov: + + @classmethod + def compile(cls, context, stmt, *, inputs=None, outputs=None): + output_indexes = [context.get_signal(signal) for signal in stmt._lhs_signals()] + emitter = _Emitter() + for signal_index in output_indexes: + emitter.append(f"next_{signal_index} = slots[{signal_index}].next") + compiler = cls(context, emitter, inputs=inputs, outputs=outputs) + compiler(stmt) + for signal_index in output_indexes: + emitter.append(f"slots[{signal_index}].set(next_{signal_index})") + return emitter.flush() + + +class _CompiledProcess(_Process): + __slots__ = ("context", "comb", "name", "run") + + def __init__(self, state, *, comb, name): + self.context = _EvalContext(state) + self.comb = comb + self.name = name + self.run = None # set by _FragmentCompiler + self.reset() + + def reset(self): + self.runnable = self.comb + self.passive = True + + +class _FragmentCompiler: + def __init__(self, state, signal_names): + self.state = state + self.signal_names = signal_names + + def __call__(self, fragment, *, hierarchy=("top",)): + processes = set() + + def add_signal_name(signal): + hierarchical_signal_name = (*hierarchy, signal.name) + if signal not in self.signal_names: + self.signal_names[signal] = {hierarchical_signal_name} + else: + self.signal_names[signal].add(hierarchical_signal_name) + + for domain_name, domain_signals in fragment.drivers.items(): + domain_stmts = LHSGroupFilter(domain_signals)(fragment.statements) + domain_process = _CompiledProcess(self.state, comb=domain_name is None, + name=".".join((*hierarchy, "<{}>".format(domain_name or "comb")))) + + emitter = _Emitter() + emitter.append(f"def run():") + emitter._level += 1 + + if domain_name is None: + for signal in domain_signals: + signal_index = domain_process.context.get_signal(signal) + emitter.append(f"next_{signal_index} = {signal.reset}") + + inputs = SignalSet() + _StatementCompiler(domain_process.context, emitter, inputs=inputs)(domain_stmts) + + for input in inputs: + self.state.for_signal(input).wait(domain_process) + + else: + domain = fragment.domains[domain_name] + add_signal_name(domain.clk) + if domain.rst is not None: + add_signal_name(domain.rst) + + clk_trigger = 1 if domain.clk_edge == "pos" else 0 + self.state.for_signal(domain.clk).wait(domain_process, trigger=clk_trigger) + if domain.rst is not None and domain.async_reset: + rst_trigger = 1 + self.state.for_signal(domain.rst).wait(domain_process, trigger=rst_trigger) + + gen_asserts = [] + clk_index = domain_process.context.get_signal(domain.clk) + gen_asserts.append(f"slots[{clk_index}].curr == {clk_trigger}") + if domain.rst is not None and domain.async_reset: + rst_index = domain_process.context.get_signal(domain.rst) + gen_asserts.append(f"slots[{rst_index}].curr == {rst_trigger}") + emitter.append(f"assert {' or '.join(gen_asserts)}") + + for signal in domain_signals: + signal_index = domain_process.context.get_signal(signal) + emitter.append(f"next_{signal_index} = slots[{signal_index}].next") + + 
_StatementCompiler(domain_process.context, emitter)(domain_stmts) + + for signal in domain_signals: + signal_index = domain_process.context.get_signal(signal) + emitter.append(f"slots[{signal_index}].set(next_{signal_index})") + + # There shouldn't be any exceptions raised by the generated code, but if there are + # (almost certainly due to a bug in the code generator), use this environment variable + # to make backtraces useful. + code = emitter.flush() + if os.getenv("NMIGEN_pysim_dump"): + file = tempfile.NamedTemporaryFile("w", prefix="nmigen_pysim_", delete=False) + file.write(code) + filename = file.name + else: + filename = "" + + exec_locals = {"slots": domain_process.context.slots, **_ValueCompiler.helpers} + exec(compile(code, filename, "exec"), exec_locals) + domain_process.run = exec_locals["run"] + + processes.add(domain_process) + + for used_signal in domain_process.context.indexes: + add_signal_name(used_signal) + + for subfragment_index, (subfragment, subfragment_name) in enumerate(fragment.subfragments): + if subfragment_name is None: + subfragment_name = "U${}".format(subfragment_index) + processes.update(self(subfragment, hierarchy=(*hierarchy, subfragment_name))) + + return processes + + +class _CoroutineProcess(_Process): + def __init__(self, state, domains, constructor, *, default_cmd=None): + self.state = state + self.domains = domains + self.constructor = constructor + self.default_cmd = default_cmd + self.reset() + + def reset(self): + self.runnable = True + self.passive = False + self.coroutine = self.constructor() + self.eval_context = _EvalContext(self.state) + self.exec_locals = { + "slots": self.eval_context.slots, + "result": None, + **_ValueCompiler.helpers + } + self.waits_on = set() + + @property + def name(self): + coroutine = self.coroutine + while coroutine.gi_yieldfrom is not None: + coroutine = coroutine.gi_yieldfrom + if inspect.isgenerator(coroutine): + frame = coroutine.gi_frame + if inspect.iscoroutine(coroutine): + frame = coroutine.cr_frame + return "{}:{}".format(inspect.getfile(frame), inspect.getlineno(frame)) + + def get_in_signal(self, signal, *, trigger=None): + signal_state = self.state.for_signal(signal) + assert self not in signal_state.waiters + signal_state.waiters[self] = trigger + self.waits_on.add(signal_state) + return signal_state + + def run(self): + if self.coroutine is None: + return + + if self.waits_on: + for signal_state in self.waits_on: + del signal_state.waiters[self] + self.waits_on.clear() + + response = None + while True: + try: + command = self.coroutine.send(response) + if command is None: + command = self.default_cmd + response = None + + if isinstance(command, Value): + exec(_RHSValueCompiler.compile(self.eval_context, command, mode="curr"), + self.exec_locals) + response = Const.normalize(self.exec_locals["result"], command.shape()) + + elif isinstance(command, Statement): + exec(_StatementCompiler.compile(self.eval_context, command), + self.exec_locals) + + elif type(command) is Tick: + domain = command.domain + if isinstance(domain, ClockDomain): + pass + elif domain in self.domains: + domain = self.domains[domain] + else: + raise NameError("Received command {!r} that refers to a nonexistent " + "domain {!r} from process {!r}" + .format(command, command.domain, self.name)) + self.get_in_signal(domain.clk, trigger=1 if domain.clk_edge == "pos" else 0) + if domain.rst is not None and domain.async_reset: + self.get_in_signal(domain.rst, trigger=1) + return + + elif type(command) is Settle: + 
self.state.deadlines[self] = None + return + + elif type(command) is Delay: + if command.interval is None: + self.state.deadlines[self] = None + else: + self.state.deadlines[self] = self.state.timestamp + command.interval + return + + elif type(command) is Passive: + self.passive = True + + elif type(command) is Active: + self.passive = False + + elif command is None: # only possible if self.default_cmd is None + raise TypeError("Received default command from process {!r} that was added " + "with add_process(); did you mean to add this process with " + "add_sync_process() instead?" + .format(self.name)) + + else: + raise TypeError("Received unsupported command {!r} from process {!r}" + .format(command, self.name)) + + except StopIteration: + self.passive = True + self.coroutine = None + return + + except Exception as exn: + self.coroutine.throw(exn) + + +class _WaveformContextManager: + def __init__(self, state, waveform_writer): + self._state = state + self._waveform_writer = waveform_writer + + def __enter__(self): + try: + self._state.start_waveform(self._waveform_writer) + except: + self._waveform_writer.close(0) + raise + + def __exit__(self, *args): + self._state.finish_waveform() + + +class Simulator: + def __init__(self, fragment, **kwargs): + self._state = _SimulatorState() + self._signal_names = SignalDict() + self._fragment = Fragment.get(fragment, platform=None).prepare() + self._processes = _FragmentCompiler(self._state, self._signal_names)(self._fragment) + if kwargs: # :nocov: + # TODO(nmigen-0.3): remove + self._state.start_waveform(_VCDWaveformWriter(self._signal_names, **kwargs)) + self._clocked = set() + + def _check_process(self, process): + if not (inspect.isgeneratorfunction(process) or inspect.iscoroutinefunction(process)): + if inspect.isgenerator(process) or inspect.iscoroutine(process): + warnings.warn("instead of generators, use generator functions as processes; " + "this allows the simulator to be repeatedly reset", + DeprecationWarning, stacklevel=3) + def wrapper(): + yield from process + return wrapper + else: + raise TypeError("Cannot add a process {!r} because it is not a generator function" + .format(process)) + return process + + def _add_coroutine_process(self, process, *, default_cmd): + self._processes.add(_CoroutineProcess(self._state, self._fragment.domains, process, + default_cmd=default_cmd)) + + def add_process(self, process): + process = self._check_process(process) + def wrapper(): + # Only start a bench process after comb settling, so that the reset values are correct. + yield Settle() + yield from process() + self._add_coroutine_process(wrapper, default_cmd=None) + + def add_sync_process(self, process, *, domain="sync"): + process = self._check_process(process) + def wrapper(): + # Only start a sync process after the first clock edge (or reset edge, if the domain + # uses an asynchronous reset). This matches the behavior of synchronous FFs. + yield Tick(domain) + yield from process() + return self._add_coroutine_process(wrapper, default_cmd=Tick(domain)) + + def add_clock(self, period, *, phase=None, domain="sync", if_exists=False): + """Add a clock process. + + Adds a process that drives the clock signal of ``domain`` at a 50% duty cycle. + + Arguments + --------- + period : float + Clock period. The process will toggle the ``domain`` clock signal every ``period / 2`` + seconds. + phase : None or float + Clock phase. The process will wait ``phase`` seconds before the first clock transition. + If not specified, defaults to ``period / 2``. 
+ domain : str or ClockDomain + Driven clock domain. If specified as a string, the domain with that name is looked up + in the root fragment of the simulation. + if_exists : bool + If ``False`` (the default), raise an error if the driven domain is specified as + a string and the root fragment does not have such a domain. If ``True``, do nothing + in this case. + """ + if isinstance(domain, ClockDomain): + pass + elif domain in self._fragment.domains: + domain = self._fragment.domains[domain] + elif if_exists: + return + else: + raise ValueError("Domain {!r} is not present in simulation" + .format(domain)) + if domain in self._clocked: + raise ValueError("Domain {!r} already has a clock driving it" + .format(domain.name)) + + half_period = period / 2 + if phase is None: + # By default, delay the first edge by half period. This causes any synchronous activity + # to happen at a non-zero time, distinguishing it from the reset values in the waveform + # viewer. + phase = half_period + def clk_process(): + yield Passive() + yield Delay(phase) + # Behave correctly if the process is added after the clock signal is manipulated, or if + # its reset state is high. + initial = (yield domain.clk) + while True: + yield domain.clk.eq(~initial) + yield Delay(half_period) + yield domain.clk.eq(initial) + yield Delay(half_period) + self._add_coroutine_process(clk_process, default_cmd=None) + self._clocked.add(domain) + + def reset(self): + """Reset the simulation. + + Assign the reset value to every signal in the simulation, and restart every user process. + """ + self._state.reset() + for process in self._processes: + process.reset() + + def _delta(self): + """Perform a delta cycle. + + Performs the two phases of a delta cycle: + 1. run and suspend every non-waiting process once, queueing signal changes; + 2. commit every queued signal change, waking up any waiting process. + """ + for process in self._processes: + if process.runnable: + process.runnable = False + process.run() + + return self._state.commit() + + def _settle(self): + """Settle the simulation. + + Run every process and commit changes until a fixed point is reached. If there is + an unstable combinatorial loop, this function will never return. + """ + while self._delta(): + pass + + def step(self): + """Step the simulation. + + Run every process and commit changes until a fixed point is reached, then advance time + to the closest deadline (if any). If there is an unstable combinatorial loop, + this function will never return. + + Returns ``True`` if there are any active processes, ``False`` otherwise. + """ + self._settle() + self._state.advance() + return any(not process.passive for process in self._processes) + + def run(self): + """Run the simulation while any processes are active. + + Processes added with :meth:`add_process` and :meth:`add_sync_process` are initially active, + and may change their status using the ``yield Passive()`` and ``yield Active()`` commands. + Processes compiled from HDL and added with :meth:`add_clock` are always passive. + """ + while self.step(): + pass + + def run_until(self, deadline, *, run_passive=False): + """Run the simulation until it advances to ``deadline``. + + If ``run_passive`` is ``False``, the simulation also stops when there are no active + processes, similar to :meth:`run`. Otherwise, the simulation will stop only after it + advances to or past ``deadline``. + + If the simulation stops advancing, this function will never return. 
+ """ + assert self._state.timestamp <= deadline + while (self.step() or run_passive) and self._state.timestamp < deadline: + pass + + def write_vcd(self, vcd_file, gtkw_file=None, *, traces=()): + """Write waveforms to a Value Change Dump file, optionally populating a GTKWave save file. + + This method returns a context manager. It can be used as: :: + + sim = Simulator(frag) + sim.add_clock(1e-6) + with sim.write_vcd("dump.vcd", "dump.gtkw"): + sim.run_until(1e-3) + + Arguments + --------- + vcd_file : str or file-like object + Verilog Value Change Dump file or filename. + gtkw_file : str or file-like object + GTKWave save file or filename. + traces : iterable of Signal + Signals to display traces for. + """ + waveform_writer = _VCDWaveformWriter(self._signal_names, + vcd_file=vcd_file, gtkw_file=gtkw_file, traces=traces) + return _WaveformContextManager(self._state, waveform_writer) + + # TODO(nmigen-0.3): remove + @deprecated("instead of `with Simulator(fragment, ...) as sim:`, use " + "`sim = Simulator(fragment); with sim.write_vcd(...):`") + def __enter__(self): # :nocov: + return self + + # TODO(nmigen-0.3): remove + def __exit__(self, *args): # :nocov: + self._state.finish_waveform() diff --git a/nmigen/back/rtlil.py b/nmigen/back/rtlil.py new file mode 100644 index 0000000..2075b90 --- /dev/null +++ b/nmigen/back/rtlil.py @@ -0,0 +1,1019 @@ +import io +import textwrap +from collections import defaultdict, OrderedDict +from contextlib import contextmanager + +from .._utils import bits_for, flatten +from ..hdl import ast, rec, ir, mem, xfrm + + +__all__ = ["convert", "convert_fragment"] + + +class _Namer: + def __init__(self): + super().__init__() + self._anon = 0 + self._index = 0 + self._names = set() + + def anonymous(self): + name = "U$${}".format(self._anon) + assert name not in self._names + self._anon += 1 + return name + + def _make_name(self, name, local): + if name is None: + self._index += 1 + name = "${}".format(self._index) + elif not local and name[0] not in "\\$": + name = "\\{}".format(name) + while name in self._names: + self._index += 1 + name = "{}${}".format(name, self._index) + self._names.add(name) + return name + + +class _BufferedBuilder: + def __init__(self): + super().__init__() + self._buffer = io.StringIO() + + def __str__(self): + return self._buffer.getvalue() + + def _append(self, fmt, *args, **kwargs): + self._buffer.write(fmt.format(*args, **kwargs)) + + +class _ProxiedBuilder: + def _append(self, *args, **kwargs): + self.rtlil._append(*args, **kwargs) + + +class _AttrBuilder: + _escape_map = str.maketrans({ + "\"": "\\\"", + "\\": "\\\\", + "\t": "\\t", + "\r": "\\r", + "\n": "\\n", + }) + + def _attribute(self, name, value, *, indent=0): + if isinstance(value, str): + self._append("{}attribute \\{} \"{}\"\n", + " " * indent, name, value.translate(self._escape_map)) + else: + self._append("{}attribute \\{} {}\n", + " " * indent, name, int(value)) + + def _attributes(self, attrs, *, src=None, **kwargs): + for name, value in attrs.items(): + self._attribute(name, value, **kwargs) + if src: + self._attribute("src", src, **kwargs) + + +class _Builder(_Namer, _BufferedBuilder): + def module(self, name=None, attrs={}): + name = self._make_name(name, local=False) + return _ModuleBuilder(self, name, attrs) + + +class _ModuleBuilder(_Namer, _BufferedBuilder, _AttrBuilder): + def __init__(self, rtlil, name, attrs): + super().__init__() + self.rtlil = rtlil + self.name = name + self.attrs = {"generator": "nMigen"} + self.attrs.update(attrs) + + def 
__enter__(self): + self._attributes(self.attrs) + self._append("module {}\n", self.name) + return self + + def __exit__(self, *args): + self._append("end\n") + self.rtlil._buffer.write(str(self)) + + def wire(self, width, port_id=None, port_kind=None, name=None, attrs={}, src=""): + self._attributes(attrs, src=src, indent=1) + name = self._make_name(name, local=False) + if port_id is None: + self._append(" wire width {} {}\n", width, name) + else: + assert port_kind in ("input", "output", "inout") + self._append(" wire width {} {} {} {}\n", width, port_kind, port_id, name) + return name + + def connect(self, lhs, rhs): + self._append(" connect {} {}\n", lhs, rhs) + + def memory(self, width, size, name=None, attrs={}, src=""): + self._attributes(attrs, src=src, indent=1) + name = self._make_name(name, local=False) + self._append(" memory width {} size {} {}\n", width, size, name) + return name + + def cell(self, kind, name=None, params={}, ports={}, attrs={}, src=""): + self._attributes(attrs, src=src, indent=1) + name = self._make_name(name, local=False) + self._append(" cell {} {}\n", kind, name) + for param, value in params.items(): + if isinstance(value, str): + self._append(" parameter \\{} \"{}\"\n", + param, value.translate(self._escape_map)) + elif isinstance(value, int): + self._append(" parameter \\{} {}'{:b}\n", + param, bits_for(value), value) + elif isinstance(value, float): + self._append(" parameter real \\{} \"{!r}\"\n", + param, value) + elif isinstance(value, ast.Const): + self._append(" parameter \\{} {}'{:b}\n", + param, len(value), value.value) + else: + assert False, "Bad parameter {!r}".format(value) + for port, wire in ports.items(): + self._append(" connect {} {}\n", port, wire) + self._append(" end\n") + return name + + def process(self, name=None, attrs={}, src=""): + name = self._make_name(name, local=True) + return _ProcessBuilder(self, name, attrs, src) + + +class _ProcessBuilder(_BufferedBuilder, _AttrBuilder): + def __init__(self, rtlil, name, attrs, src): + super().__init__() + self.rtlil = rtlil + self.name = name + self.attrs = {} + self.src = src + + def __enter__(self): + self._attributes(self.attrs, src=self.src, indent=1) + self._append(" process {}\n", self.name) + return self + + def __exit__(self, *args): + self._append(" end\n") + self.rtlil._buffer.write(str(self)) + + def case(self): + return _CaseBuilder(self, indent=2) + + def sync(self, kind, cond=None): + return _SyncBuilder(self, kind, cond) + + +class _CaseBuilder(_ProxiedBuilder): + def __init__(self, rtlil, indent): + self.rtlil = rtlil + self.indent = indent + + def __enter__(self): + return self + + def __exit__(self, *args): + pass + + def assign(self, lhs, rhs): + self._append("{}assign {} {}\n", " " * self.indent, lhs, rhs) + + def switch(self, cond, attrs={}, src=""): + return _SwitchBuilder(self.rtlil, cond, attrs, src, self.indent) + + +class _SwitchBuilder(_ProxiedBuilder, _AttrBuilder): + def __init__(self, rtlil, cond, attrs, src, indent): + self.rtlil = rtlil + self.cond = cond + self.attrs = attrs + self.src = src + self.indent = indent + + def __enter__(self): + self._attributes(self.attrs, src=self.src, indent=self.indent) + self._append("{}switch {}\n", " " * self.indent, self.cond) + return self + + def __exit__(self, *args): + self._append("{}end\n", " " * self.indent) + + def case(self, *values, attrs={}, src=""): + self._attributes(attrs, src=src, indent=self.indent + 1) + if values == (): + self._append("{}case\n", " " * (self.indent + 1)) + else: + 
self._append("{}case {}\n", " " * (self.indent + 1), + ", ".join("{}'{}".format(len(value), value) for value in values)) + return _CaseBuilder(self.rtlil, self.indent + 2) + + +class _SyncBuilder(_ProxiedBuilder): + def __init__(self, rtlil, kind, cond): + self.rtlil = rtlil + self.kind = kind + self.cond = cond + + def __enter__(self): + if self.cond is None: + self._append(" sync {}\n", self.kind) + else: + self._append(" sync {} {}\n", self.kind, self.cond) + return self + + def __exit__(self, *args): + pass + + def update(self, lhs, rhs): + self._append(" update {} {}\n", lhs, rhs) + + +def src(src_loc): + if src_loc is None: + return None + file, line = src_loc + return "{}:{}".format(file, line) + + +def srcs(src_locs): + return "|".join(sorted(filter(lambda x: x, map(src, src_locs)))) + + +class LegalizeValue(Exception): + def __init__(self, value, branches, src_loc): + self.value = value + self.branches = list(branches) + self.src_loc = src_loc + + +class _ValueCompilerState: + def __init__(self, rtlil): + self.rtlil = rtlil + self.wires = ast.SignalDict() + self.driven = ast.SignalDict() + self.ports = ast.SignalDict() + self.anys = ast.ValueDict() + + self.expansions = ast.ValueDict() + + def add_driven(self, signal, sync): + self.driven[signal] = sync + + def add_port(self, signal, kind): + assert kind in ("i", "o", "io") + if kind == "i": + kind = "input" + elif kind == "o": + kind = "output" + elif kind == "io": + kind = "inout" + self.ports[signal] = (len(self.ports), kind) + + def resolve(self, signal, prefix=None): + if len(signal) == 0: + return "{ }", "{ }" + + if signal in self.wires: + return self.wires[signal] + + if signal in self.ports: + port_id, port_kind = self.ports[signal] + else: + port_id = port_kind = None + if prefix is not None: + wire_name = "{}_{}".format(prefix, signal.name) + else: + wire_name = signal.name + + wire_curr = self.rtlil.wire(width=signal.width, name=wire_name, + port_id=port_id, port_kind=port_kind, + attrs=signal.attrs, + src=src(signal.src_loc)) + if signal in self.driven and self.driven[signal]: + wire_next = self.rtlil.wire(width=signal.width, name=wire_curr + "$next", + src=src(signal.src_loc)) + else: + wire_next = None + self.wires[signal] = (wire_curr, wire_next) + + return wire_curr, wire_next + + def resolve_curr(self, signal, prefix=None): + wire_curr, wire_next = self.resolve(signal, prefix) + return wire_curr + + def expand(self, value): + if not self.expansions: + return value + return self.expansions.get(value, value) + + @contextmanager + def expand_to(self, value, expansion): + try: + assert value not in self.expansions + self.expansions[value] = expansion + yield + finally: + del self.expansions[value] + + +class _ValueCompiler(xfrm.ValueVisitor): + def __init__(self, state): + self.s = state + + def on_unknown(self, value): + if value is None: + return None + else: + super().on_unknown(value) + + def on_ClockSignal(self, value): + raise NotImplementedError # :nocov: + + def on_ResetSignal(self, value): + raise NotImplementedError # :nocov: + + def on_Sample(self, value): + raise NotImplementedError # :nocov: + + def on_Initial(self, value): + raise NotImplementedError # :nocov: + + def on_Record(self, value): + return self(ast.Cat(value.fields.values())) + + def on_Cat(self, value): + return "{{ {} }}".format(" ".join(reversed([self(o) for o in value.parts]))) + + def _prepare_value_for_Slice(self, value): + raise NotImplementedError # :nocov: + + def on_Slice(self, value): + if value.start == 0 and value.stop == 
len(value.value): + return self(value.value) + + sigspec = self._prepare_value_for_Slice(value.value) + if value.start == value.stop: + return "{}" + elif value.start + 1 == value.stop: + return "{} [{}]".format(sigspec, value.start) + else: + return "{} [{}:{}]".format(sigspec, value.stop - 1, value.start) + + def on_ArrayProxy(self, value): + index = self.s.expand(value.index) + if isinstance(index, ast.Const): + if index.value < len(value.elems): + elem = value.elems[index.value] + else: + elem = value.elems[-1] + return self.match_shape(elem, *value.shape()) + else: + max_index = 1 << len(value.index) + max_elem = len(value.elems) + raise LegalizeValue(value.index, range(min(max_index, max_elem)), value.src_loc) + + +class _RHSValueCompiler(_ValueCompiler): + operator_map = { + (1, "~"): "$not", + (1, "-"): "$neg", + (1, "b"): "$reduce_bool", + (1, "r|"): "$reduce_or", + (1, "r&"): "$reduce_and", + (1, "r^"): "$reduce_xor", + (2, "+"): "$add", + (2, "-"): "$sub", + (2, "*"): "$mul", + (2, "//"): "$div", + (2, "%"): "$mod", + (2, "**"): "$pow", + (2, "<<"): "$sshl", + (2, ">>"): "$sshr", + (2, "&"): "$and", + (2, "^"): "$xor", + (2, "|"): "$or", + (2, "=="): "$eq", + (2, "!="): "$ne", + (2, "<"): "$lt", + (2, "<="): "$le", + (2, ">"): "$gt", + (2, ">="): "$ge", + (3, "m"): "$mux", + } + + def on_value(self, value): + return super().on_value(self.s.expand(value)) + + def on_Const(self, value): + if isinstance(value.value, str): + return "{}'{}".format(value.width, value.value) + else: + value_twos_compl = value.value & ((1 << value.width) - 1) + return "{}'{:0{}b}".format(value.width, value_twos_compl, value.width) + + def on_AnyConst(self, value): + if value in self.s.anys: + return self.s.anys[value] + + res_bits, res_sign = value.shape() + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell("$anyconst", ports={ + "\\Y": res, + }, params={ + "WIDTH": res_bits, + }, src=src(value.src_loc)) + self.s.anys[value] = res + return res + + def on_AnySeq(self, value): + if value in self.s.anys: + return self.s.anys[value] + + res_bits, res_sign = value.shape() + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell("$anyseq", ports={ + "\\Y": res, + }, params={ + "WIDTH": res_bits, + }, src=src(value.src_loc)) + self.s.anys[value] = res + return res + + def on_Signal(self, value): + wire_curr, wire_next = self.s.resolve(value) + return wire_curr + + def on_Operator_unary(self, value): + arg, = value.operands + if value.operator in ("u", "s"): + # These operators don't change the bit pattern, only its interpretation. 
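As a side note, the constant rendering in `on_Const` above can be mirrored by a small standalone sketch (plain Python, illustrative only, not part of the nMigen sources): RTLIL constants are raw bit patterns, so negative values are emitted as width-masked two's-complement strings.

```python
# Minimal sketch of the masking done by on_Const (illustrative only).
def render_const(value, width):
    value_twos_compl = value & ((1 << width) - 1)
    return "{}'{:0{}b}".format(width, value_twos_compl, width)

assert render_const(5, 4) == "4'0101"
assert render_const(-3, 4) == "4'1101"  # -3 as a 4-bit two's-complement pattern
```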
+ return self(arg) + + arg_bits, arg_sign = arg.shape() + res_bits, res_sign = value.shape() + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell(self.operator_map[(1, value.operator)], ports={ + "\\A": self(arg), + "\\Y": res, + }, params={ + "A_SIGNED": arg_sign, + "A_WIDTH": arg_bits, + "Y_WIDTH": res_bits, + }, src=src(value.src_loc)) + return res + + def match_shape(self, value, new_bits, new_sign): + if isinstance(value, ast.Const): + return self(ast.Const(value.value, ast.Shape(new_bits, new_sign))) + + value_bits, value_sign = value.shape() + if new_bits <= value_bits: + return self(ast.Slice(value, 0, new_bits)) + + res = self.s.rtlil.wire(width=new_bits, src=src(value.src_loc)) + self.s.rtlil.cell("$pos", ports={ + "\\A": self(value), + "\\Y": res, + }, params={ + "A_SIGNED": value_sign, + "A_WIDTH": value_bits, + "Y_WIDTH": new_bits, + }, src=src(value.src_loc)) + return res + + def on_Operator_binary(self, value): + lhs, rhs = value.operands + lhs_bits, lhs_sign = lhs.shape() + rhs_bits, rhs_sign = rhs.shape() + if lhs_sign == rhs_sign or value.operator in ("<<", ">>", "**"): + lhs_wire = self(lhs) + rhs_wire = self(rhs) + else: + lhs_sign = rhs_sign = True + lhs_bits = rhs_bits = max(lhs_bits, rhs_bits) + lhs_wire = self.match_shape(lhs, lhs_bits, lhs_sign) + rhs_wire = self.match_shape(rhs, rhs_bits, rhs_sign) + res_bits, res_sign = value.shape() + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell(self.operator_map[(2, value.operator)], ports={ + "\\A": lhs_wire, + "\\B": rhs_wire, + "\\Y": res, + }, params={ + "A_SIGNED": lhs_sign, + "A_WIDTH": lhs_bits, + "B_SIGNED": rhs_sign, + "B_WIDTH": rhs_bits, + "Y_WIDTH": res_bits, + }, src=src(value.src_loc)) + if value.operator in ("//", "%"): + # RTLIL leaves division by zero undefined, but we require it to return zero. 
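In plain Python terms, the guarantee enforced by the extra `$mux` below amounts to the following sketch (illustrative only, not part of the nMigen sources):

```python
# nMigen defines x // 0 == 0 and x % 0 == 0, whereas RTLIL leaves division by
# zero undefined, so the divider output is replaced by zero when the divisor is 0.
def div_mod_as_lowered(lhs, rhs):
    return (0, 0) if rhs == 0 else (lhs // rhs, lhs % rhs)

assert div_mod_as_lowered(7, 2) == (3, 1)
assert div_mod_as_lowered(7, 0) == (0, 0)  # defined result instead of an undefined one
```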
+ divmod_res = res + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell("$mux", ports={ + "\\A": divmod_res, + "\\B": self(ast.Const(0, ast.Shape(res_bits, res_sign))), + "\\S": self(lhs == 0), + "\\Y": res, + }, params={ + "WIDTH": res_bits + }, src=src(value.src_loc)) + return res + + def on_Operator_mux(self, value): + sel, val1, val0 = value.operands + val1_bits, val1_sign = val1.shape() + val0_bits, val0_sign = val0.shape() + res_bits, res_sign = value.shape() + val1_bits = val0_bits = res_bits = max(val1_bits, val0_bits, res_bits) + val1_wire = self.match_shape(val1, val1_bits, val1_sign) + val0_wire = self.match_shape(val0, val0_bits, val0_sign) + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + self.s.rtlil.cell("$mux", ports={ + "\\A": val0_wire, + "\\B": val1_wire, + "\\S": self(sel), + "\\Y": res, + }, params={ + "WIDTH": res_bits + }, src=src(value.src_loc)) + return res + + def on_Operator(self, value): + if len(value.operands) == 1: + return self.on_Operator_unary(value) + elif len(value.operands) == 2: + return self.on_Operator_binary(value) + elif len(value.operands) == 3: + assert value.operator == "m" + return self.on_Operator_mux(value) + else: + raise TypeError # :nocov: + + def _prepare_value_for_Slice(self, value): + if isinstance(value, (ast.Signal, ast.Slice, ast.Cat)): + sigspec = self(value) + else: + sigspec = self.s.rtlil.wire(len(value), src=src(value.src_loc)) + self.s.rtlil.connect(sigspec, self(value)) + return sigspec + + def on_Part(self, value): + lhs, rhs = value.value, value.offset + if value.stride != 1: + rhs *= value.stride + lhs_bits, lhs_sign = lhs.shape() + rhs_bits, rhs_sign = rhs.shape() + res_bits, res_sign = value.shape() + res = self.s.rtlil.wire(width=res_bits, src=src(value.src_loc)) + # Note: Verilog's x[o+:w] construct produces a $shiftx cell, not a $shift cell. + # However, Migen's semantics defines the out-of-range bits to be zero, so it is correct + # to use a $shift cell here instead, even though it produces less idiomatic Verilog. 
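For context, a dynamic part-select written at the nMigen level looks like the sketch below (signal names are assumptions); the `$shift` lowering that follows is what makes an out-of-range offset read back as zeros:

```python
from nmigen import Module, Signal

m = Module()
data  = Signal(16)
index = Signal(2)  # offsets 2 and 3 select bits beyond the 16-bit value
byte  = Signal(8)
# word_select() produces the Part node handled by on_Part; selecting past the
# end of `data` yields zeros rather than undefined bits, hence $shift, not $shiftx.
m.d.comb += byte.eq(data.word_select(index, 8))
```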
+ self.s.rtlil.cell("$shift", ports={ + "\\A": self(lhs), + "\\B": self(rhs), + "\\Y": res, + }, params={ + "A_SIGNED": lhs_sign, + "A_WIDTH": lhs_bits, + "B_SIGNED": rhs_sign, + "B_WIDTH": rhs_bits, + "Y_WIDTH": res_bits, + }, src=src(value.src_loc)) + return res + + def on_Repl(self, value): + return "{{ {} }}".format(" ".join(self(value.value) for _ in range(value.count))) + + +class _LHSValueCompiler(_ValueCompiler): + def on_Const(self, value): + raise TypeError # :nocov: + + def on_AnyConst(self, value): + raise TypeError # :nocov: + + def on_AnySeq(self, value): + raise TypeError # :nocov: + + def on_Operator(self, value): + raise TypeError # :nocov: + + def match_shape(self, value, new_bits, new_sign): + value_bits, value_sign = value.shape() + if new_bits == value_bits: + return self(value) + elif new_bits < value_bits: + return self(ast.Slice(value, 0, new_bits)) + else: # new_bits > value_bits + dummy_bits = new_bits - value_bits + dummy_wire = self.s.rtlil.wire(dummy_bits) + return "{{ {} {} }}".format(dummy_wire, self(value)) + + def on_Signal(self, value): + if value not in self.s.driven: + raise ValueError("No LHS wire for non-driven signal {}".format(repr(value))) + wire_curr, wire_next = self.s.resolve(value) + return wire_next or wire_curr + + def _prepare_value_for_Slice(self, value): + assert isinstance(value, (ast.Signal, ast.Slice, ast.Cat, rec.Record)) + return self(value) + + def on_Part(self, value): + offset = self.s.expand(value.offset) + if isinstance(offset, ast.Const): + if offset.value == len(value.value): + dummy_wire = self.s.rtlil.wire(value.width) + return dummy_wire + return self(ast.Slice(value.value, + offset.value * value.stride, + offset.value * value.stride + value.width)) + else: + # Only so many possible parts. The amount of branches is exponential; if value.offset + # is large (e.g. 32-bit wide), trying to naively legalize it is likely to exhaust + # system resources. + max_branches = len(value.value) // value.stride + 1 + raise LegalizeValue(value.offset, + range((1 << len(value.offset)) // value.stride)[:max_branches], + value.src_loc) + + def on_Repl(self, value): + raise TypeError # :nocov: + + +class _StatementCompiler(xfrm.StatementVisitor): + def __init__(self, state, rhs_compiler, lhs_compiler): + self.state = state + self.rhs_compiler = rhs_compiler + self.lhs_compiler = lhs_compiler + + self._case = None + self._test_cache = {} + self._has_rhs = False + self._wrap_assign = False + + @contextmanager + def case(self, switch, values, attrs={}, src=""): + try: + old_case = self._case + with switch.case(*values, attrs=attrs, src=src) as self._case: + yield + finally: + self._case = old_case + + def _check_rhs(self, value): + if self._has_rhs or next(iter(value._rhs_signals()), None) is not None: + self._has_rhs = True + + def on_Assign(self, stmt): + self._check_rhs(stmt.rhs) + + lhs_bits, lhs_sign = stmt.lhs.shape() + rhs_bits, rhs_sign = stmt.rhs.shape() + if lhs_bits == rhs_bits: + rhs_sigspec = self.rhs_compiler(stmt.rhs) + else: + # In RTLIL, LHS and RHS of assignment must have exactly same width. + rhs_sigspec = self.rhs_compiler.match_shape( + stmt.rhs, lhs_bits, lhs_sign) + if self._wrap_assign: + # In RTLIL, all assigns are logically sequenced before all switches, even if they are + # interleaved in the source. In nMigen, the source ordering is used. To handle this + # mismatch, we wrap all assigns following a switch in a dummy switch. 
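A small user-level example of the ordering rule being preserved (signal names are assumptions):

```python
from nmigen import Module, Signal

m = Module()
x   = Signal(8)
sel = Signal()
with m.If(sel):
    m.d.comb += x.eq(1)
# In nMigen, this later assignment overrides the one inside the If branch because
# statements take effect in source order. RTLIL sequences plain assigns before all
# switches, so the backend wraps this assign in a dummy switch to keep it last.
m.d.comb += x.eq(2)
```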
+ with self._case.switch("{ }") as wrap_switch: + with wrap_switch.case() as wrap_case: + wrap_case.assign(self.lhs_compiler(stmt.lhs), rhs_sigspec) + else: + self._case.assign(self.lhs_compiler(stmt.lhs), rhs_sigspec) + + def on_property(self, stmt): + self(stmt._check.eq(stmt.test)) + self(stmt._en.eq(1)) + + en_wire = self.rhs_compiler(stmt._en) + check_wire = self.rhs_compiler(stmt._check) + self.state.rtlil.cell("$" + stmt._kind, ports={ + "\\A": check_wire, + "\\EN": en_wire, + }, src=src(stmt.src_loc)) + + on_Assert = on_property + on_Assume = on_property + on_Cover = on_property + + def on_Switch(self, stmt): + self._check_rhs(stmt.test) + + if not self.state.expansions: + # We repeatedly translate the same switches over and over (see the LHSGroupAnalyzer + # related code below), and translating the switch test only once helps readability. + if stmt not in self._test_cache: + self._test_cache[stmt] = self.rhs_compiler(stmt.test) + test_sigspec = self._test_cache[stmt] + else: + # However, if the switch test contains an illegal value, then it may not be cached + # (since the illegal value will be repeatedly replaced with different constants), so + # don't cache anything in that case. + test_sigspec = self.rhs_compiler(stmt.test) + + with self._case.switch(test_sigspec, src=src(stmt.src_loc)) as switch: + for values, stmts in stmt.cases.items(): + case_attrs = {} + if values in stmt.case_src_locs: + case_attrs["src"] = src(stmt.case_src_locs[values]) + if isinstance(stmt.test, ast.Signal) and stmt.test.decoder: + decoded_values = [] + for value in values: + if "-" in value: + decoded_values.append("") + else: + decoded_values.append(stmt.test.decoder(int(value, 2))) + case_attrs["nmigen.decoding"] = "|".join(decoded_values) + with self.case(switch, values, attrs=case_attrs): + self._wrap_assign = False + self.on_statements(stmts) + self._wrap_assign = True + + def on_statement(self, stmt): + try: + super().on_statement(stmt) + except LegalizeValue as legalize: + with self._case.switch(self.rhs_compiler(legalize.value), + src=src(legalize.src_loc)) as switch: + shape = legalize.value.shape() + tests = ["{:0{}b}".format(v, shape.width) for v in legalize.branches] + if tests: + tests[-1] = "-" * shape.width + for branch, test in zip(legalize.branches, tests): + with self.case(switch, (test,)): + self._wrap_assign = False + branch_value = ast.Const(branch, shape) + with self.state.expand_to(legalize.value, branch_value): + self.on_statement(stmt) + self._wrap_assign = True + + def on_statements(self, stmts): + for stmt in stmts: + self.on_statement(stmt) + + +def _convert_fragment(builder, fragment, name_map, hierarchy): + if isinstance(fragment, ir.Instance): + port_map = OrderedDict() + for port_name, (value, dir) in fragment.named_ports.items(): + port_map["\\{}".format(port_name)] = value + + if fragment.type[0] == "$": + return fragment.type, port_map + else: + return "\\{}".format(fragment.type), port_map + + module_name = hierarchy[-1] or "anonymous" + module_attrs = OrderedDict() + if len(hierarchy) == 1: + module_attrs["top"] = 1 + module_attrs["nmigen.hierarchy"] = ".".join(name or "anonymous" for name in hierarchy) + + with builder.module(module_name, attrs=module_attrs) as module: + compiler_state = _ValueCompilerState(module) + rhs_compiler = _RHSValueCompiler(compiler_state) + lhs_compiler = _LHSValueCompiler(compiler_state) + stmt_compiler = _StatementCompiler(compiler_state, rhs_compiler, lhs_compiler) + + verilog_trigger = None + verilog_trigger_sync_emitted = False + + 
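Stepping back for a moment, the `on_property` handler above is what turns nMigen formal statements into RTLIL `$assert`/`$assume`/`$cover` cells that SymbiYosys operates on; at the design level they are written roughly like this (names are assumptions):

```python
from nmigen import Module, Signal
from nmigen.hdl.ast import Assert

m = Module()
count = Signal(4)
m.d.sync += count.eq(count + 1)
# Lowered by on_property into a $assert cell in the generated RTLIL.
m.d.comb += Assert(count < 16)
```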
# Register all signals driven in the current fragment. This must be done first, as it + # affects further codegen; e.g. whether \sig$next signals will be generated and used. + for domain, signal in fragment.iter_drivers(): + compiler_state.add_driven(signal, sync=domain is not None) + + # Transform all signals used as ports in the current fragment eagerly and outside of + # any hierarchy, to make sure they get sensible (non-prefixed) names. + for signal in fragment.ports: + compiler_state.add_port(signal, fragment.ports[signal]) + compiler_state.resolve_curr(signal) + + # Transform all clocks clocks and resets eagerly and outside of any hierarchy, to make + # sure they get sensible (non-prefixed) names. This does not affect semantics. + for domain, _ in fragment.iter_sync(): + cd = fragment.domains[domain] + compiler_state.resolve_curr(cd.clk) + if cd.rst is not None: + compiler_state.resolve_curr(cd.rst) + + # Transform all subfragments to their respective cells. Transforming signals connected + # to their ports into wires eagerly makes sure they get sensible (prefixed with submodule + # name) names. + memories = OrderedDict() + for subfragment, sub_name in fragment.subfragments: + if not subfragment.ports: + continue + + if sub_name is None: + sub_name = module.anonymous() + + sub_params = OrderedDict() + if hasattr(subfragment, "parameters"): + for param_name, param_value in subfragment.parameters.items(): + if isinstance(param_value, mem.Memory): + memory = param_value + if memory not in memories: + memories[memory] = module.memory(width=memory.width, size=memory.depth, + name=memory.name, attrs=memory.attrs) + addr_bits = bits_for(memory.depth) + data_parts = [] + data_mask = (1 << memory.width) - 1 + for addr in range(memory.depth): + if addr < len(memory.init): + data = memory.init[addr] & data_mask + else: + data = 0 + data_parts.append("{:0{}b}".format(data, memory.width)) + module.cell("$meminit", ports={ + "\\ADDR": rhs_compiler(ast.Const(0, addr_bits)), + "\\DATA": "{}'".format(memory.width * memory.depth) + + "".join(reversed(data_parts)), + }, params={ + "MEMID": memories[memory], + "ABITS": addr_bits, + "WIDTH": memory.width, + "WORDS": memory.depth, + "PRIORITY": 0, + }) + + param_value = memories[memory] + + sub_params[param_name] = param_value + + sub_type, sub_port_map = \ + _convert_fragment(builder, subfragment, name_map, + hierarchy=hierarchy + (sub_name,)) + + sub_ports = OrderedDict() + for port, value in sub_port_map.items(): + if not isinstance(subfragment, ir.Instance): + for signal in value._rhs_signals(): + compiler_state.resolve_curr(signal, prefix=sub_name) + sub_ports[port] = rhs_compiler(value) + + module.cell(sub_type, name=sub_name, ports=sub_ports, params=sub_params, + attrs=subfragment.attrs) + + # If we emit all of our combinatorial logic into a single RTLIL process, Verilog + # simulators will break horribly, because Yosys write_verilog transforms RTLIL processes + # into always @* blocks with blocking assignment, and that does not create delta cycles. + # + # Therefore, we translate the fragment as many times as there are independent groups + # of signals (a group is a transitive closure of signals that appear together on LHS), + # splitting them into many RTLIL (and thus Verilog) processes. 
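A rough sketch of what that grouping looks like on a few hand-written statements, using the same `LHSGroupAnalyzer` API as the code below (illustrative only):

```python
from nmigen.hdl.ast import Cat, Signal
from nmigen.hdl.xfrm import LHSGroupAnalyzer

a = Signal(4)
b = Signal(4)
c = Signal(4)
x = Signal(8)

grouper = LHSGroupAnalyzer()
grouper.on_statements([a.eq(x), Cat(a, b).eq(x), c.eq(x)])
# `a` and `b` appear together on a left-hand side, so they form one group and
# `c` forms another; each group becomes its own RTLIL process (Verilog always block).
for index, signals in grouper.groups().items():
    print(index, sorted(signal.name for signal in signals))
```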
+ lhs_grouper = xfrm.LHSGroupAnalyzer() + lhs_grouper.on_statements(fragment.statements) + + for group, group_signals in lhs_grouper.groups().items(): + lhs_group_filter = xfrm.LHSGroupFilter(group_signals) + group_stmts = lhs_group_filter(fragment.statements) + + with module.process(name="$group_{}".format(group)) as process: + with process.case() as case: + # For every signal in comb domain, assign \sig$next to the reset value. + # For every signal in sync domains, assign \sig$next to the current + # value (\sig). + for domain, signal in fragment.iter_drivers(): + if signal not in group_signals: + continue + if domain is None: + prev_value = ast.Const(signal.reset, signal.width) + else: + prev_value = signal + case.assign(lhs_compiler(signal), rhs_compiler(prev_value)) + + # Convert statements into decision trees. + stmt_compiler._case = case + stmt_compiler._has_rhs = False + stmt_compiler._wrap_assign = False + stmt_compiler(group_stmts) + + # Verilog `always @*` blocks will not run if `*` does not match anything, i.e. + # if the implicit sensitivity list is empty. We check this while translating, + # by looking for any signals on RHS. If there aren't any, we add some logic + # whose only purpose is to trigger Verilog simulators when it converts + # through RTLIL and to Verilog, by populating the sensitivity list. + # + # Unfortunately, while this workaround allows true (event-driven) Verilog + # simulators to work properly, and is universally ignored by synthesizers, + # Verilator rejects it. + # + # Running the Yosys proc_prune pass converts such pathological `always @*` + # blocks to `assign` statements, so this workaround can be removed completely + # once support for Yosys 0.9 is dropped. + if not stmt_compiler._has_rhs: + if verilog_trigger is None: + verilog_trigger = \ + module.wire(1, name="$verilog_initial_trigger") + case.assign(verilog_trigger, verilog_trigger) + + # For every signal in the sync domain, assign \sig's initial value (which will + # end up as the \init reg attribute) to the reset value. + with process.sync("init") as sync: + for domain, signal in fragment.iter_sync(): + if signal not in group_signals: + continue + wire_curr, wire_next = compiler_state.resolve(signal) + sync.update(wire_curr, rhs_compiler(ast.Const(signal.reset, signal.width))) + + # The Verilog simulator trigger needs to change at time 0, so if we haven't + # yet done that in some process, do it. + if verilog_trigger and not verilog_trigger_sync_emitted: + sync.update(verilog_trigger, "1'0") + verilog_trigger_sync_emitted = True + + # For every signal in every sync domain, assign \sig to \sig$next. The sensitivity + # list, however, differs between domains: for domains with sync reset, it is + # `[pos|neg]edge clk`, for sync domains with async reset it is `[pos|neg]edge clk + # or posedge rst`. + for domain, signals in fragment.drivers.items(): + if domain is None: + continue + + signals = signals & group_signals + if not signals: + continue + + cd = fragment.domains[domain] + + triggers = [] + triggers.append((cd.clk_edge + "edge", compiler_state.resolve_curr(cd.clk))) + if cd.async_reset: + triggers.append(("posedge", compiler_state.resolve_curr(cd.rst))) + + for trigger in triggers: + with process.sync(*trigger) as sync: + for signal in signals: + wire_curr, wire_next = compiler_state.resolve(signal) + sync.update(wire_curr, wire_next) + + # Any signals that are used but neither driven nor connected to an input port always + # assume their reset values. 
We need to assign the reset value explicitly, since only + # driven sync signals are handled by the logic above. + # + # Because this assignment is done at a late stage, a single Signal object can get assigned + # many times, once in each module it is used. This is a deliberate decision; the possible + # alternatives are to add ports for undriven signals (which requires choosing one module + # to drive it to reset value arbitrarily) or to replace them with their reset value (which + # removes valuable source location information). + driven = ast.SignalSet() + for domain, signals in fragment.iter_drivers(): + driven.update(flatten(signal._lhs_signals() for signal in signals)) + driven.update(fragment.iter_ports(dir="i")) + driven.update(fragment.iter_ports(dir="io")) + for subfragment, sub_name in fragment.subfragments: + driven.update(subfragment.iter_ports(dir="o")) + driven.update(subfragment.iter_ports(dir="io")) + + for wire in compiler_state.wires: + if wire in driven: + continue + wire_curr, _ = compiler_state.wires[wire] + module.connect(wire_curr, rhs_compiler(ast.Const(wire.reset, wire.width))) + + # Collect the names we've given to our ports in RTLIL, and correlate these with the signals + # represented by these ports. If we are a submodule, this will be necessary to create a cell + # for us in the parent module. + port_map = OrderedDict() + for signal in fragment.ports: + port_map[compiler_state.resolve_curr(signal)] = signal + + # Finally, collect tha names we've given to each wire in RTLIL, and provide these to + # the caller, to allow manipulating them in the toolchain. + for signal in compiler_state.wires: + wire_name = compiler_state.resolve_curr(signal) + if wire_name.startswith("\\"): + wire_name = wire_name[1:] + name_map[signal] = hierarchy + (wire_name,) + + return module.name, port_map + + +def convert_fragment(fragment, name="top"): + assert isinstance(fragment, ir.Fragment) + builder = _Builder() + name_map = ast.SignalDict() + _convert_fragment(builder, fragment, name_map, hierarchy=(name,)) + return str(builder), name_map + + +def convert(elaboratable, name="top", platform=None, **kwargs): + fragment = ir.Fragment.get(elaboratable, platform).prepare(**kwargs) + il_text, name_map = convert_fragment(fragment, name) + return il_text diff --git a/nmigen/back/verilog.py b/nmigen/back/verilog.py new file mode 100644 index 0000000..eb98a34 --- /dev/null +++ b/nmigen/back/verilog.py @@ -0,0 +1,77 @@ +import os +import re +import subprocess + +from .._toolchain import * +from . import rtlil + + +__all__ = ["YosysError", "convert", "convert_fragment"] + + +class YosysError(Exception): + pass + + +def _yosys_version(): + yosys_path = require_tool("yosys") + version = subprocess.check_output([yosys_path, "-V"], encoding="utf-8") + m = re.match(r"^Yosys ([\d.]+)(?:\+(\d+))?", version) + tag, offset = m[1], m[2] or 0 + return tuple(map(int, tag.split("."))), offset + + +def _convert_rtlil_text(rtlil_text, *, strip_internal_attrs=False, write_verilog_opts=()): + version, offset = _yosys_version() + if version < (0, 9): + raise YosysError("Yosys {}.{} is not supported".format(*version)) + + attr_map = [] + if strip_internal_attrs: + attr_map.append("-remove generator") + attr_map.append("-remove top") + attr_map.append("-remove src") + attr_map.append("-remove nmigen.hierarchy") + attr_map.append("-remove nmigen.decoding") + + script = """ +# Convert nMigen's RTLIL to readable Verilog. 
+read_ilang <{}".format(conn, plat) + for conn, plat in self.mapping.items())) + + def __len__(self): + return len(self.mapping) + + def __iter__(self): + for conn_pin, plat_pin in self.mapping.items(): + yield "{}_{}:{}".format(self.name, self.number, conn_pin), plat_pin diff --git a/nmigen/build/plat.py b/nmigen/build/plat.py new file mode 100644 index 0000000..422cc85 --- /dev/null +++ b/nmigen/build/plat.py @@ -0,0 +1,404 @@ +from collections import OrderedDict +from abc import ABCMeta, abstractmethod, abstractproperty +import os +import textwrap +import re +import jinja2 + +from .. import __version__ +from .._toolchain import * +from ..hdl import * +from ..hdl.xfrm import SampleLowerer, DomainLowerer +from ..lib.cdc import ResetSynchronizer +from ..back import rtlil, verilog +from .res import * +from .run import * + + +__all__ = ["Platform", "TemplatedPlatform"] + + +class Platform(ResourceManager, metaclass=ABCMeta): + resources = abstractproperty() + connectors = abstractproperty() + default_clk = None + default_rst = None + required_tools = abstractproperty() + + def __init__(self): + super().__init__(self.resources, self.connectors) + + self.extra_files = OrderedDict() + + self._prepared = False + + @property + def default_clk_constraint(self): + if self.default_clk is None: + raise AttributeError("Platform '{}' does not define a default clock" + .format(type(self).__name__)) + return self.lookup(self.default_clk).clock + + @property + def default_clk_frequency(self): + constraint = self.default_clk_constraint + if constraint is None: + raise AttributeError("Platform '{}' does not constrain its default clock" + .format(type(self).__name__)) + return constraint.frequency + + def add_file(self, filename, content): + if not isinstance(filename, str): + raise TypeError("File name must be a string, not {!r}" + .format(filename)) + if hasattr(content, "read"): + content = content.read() + elif not isinstance(content, (str, bytes)): + raise TypeError("File contents must be str, bytes, or a file-like object, not {!r}" + .format(content)) + if filename in self.extra_files: + if self.extra_files[filename] != content: + raise ValueError("File {!r} already exists" + .format(filename)) + else: + self.extra_files[filename] = content + + @property + def _toolchain_env_var(self): + return f"NMIGEN_ENV_{self.toolchain}" + + def build(self, elaboratable, name="top", + build_dir="build", do_build=True, + program_opts=None, do_program=False, + **kwargs): + if self._toolchain_env_var not in os.environ: + for tool in self.required_tools: + require_tool(tool) + + plan = self.prepare(elaboratable, name, **kwargs) + if not do_build: + return plan + + products = plan.execute_local(build_dir) + if not do_program: + return products + + self.toolchain_program(products, name, **(program_opts or {})) + + def has_required_tools(self): + if self._toolchain_env_var in os.environ: + return True + return all(has_tool(name) for name in self.required_tools) + + def create_missing_domain(self, name): + # Simple instantiation of a clock domain driven directly by the board clock and reset. + # This implementation uses a single ResetSynchronizer to ensure that: + # * an external reset is definitely synchronized to the system clock; + # * release of power-on reset, which is inherently asynchronous, is synchronized to + # the system clock. + # Many device families provide advanced primitives for tackling reset. If these exist, + # they should be used instead. 
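For orientation, a design like the following sketch (class and signal names are assumptions) never creates the "sync" domain itself; during `prepare()` the platform falls back to `create_missing_domain("sync")` below, which drives it from the board's default clock and a `ResetSynchronizer`:

```python
from nmigen import Elaboratable, Module, Signal

class Blinky(Elaboratable):
    def elaborate(self, platform):
        m = Module()
        counter = Signal(24)
        # Uses the "sync" domain without ever defining it; the platform supplies it.
        m.d.sync += counter.eq(counter + 1)
        return m
```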
+ if name == "sync" and self.default_clk is not None: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + else: + rst_i = Const(0) + + m = Module() + m.domains += ClockDomain("sync") + m.d.comb += ClockSignal("sync").eq(clk_i) + m.submodules.reset_sync = ResetSynchronizer(rst_i, domain="sync") + return m + + def prepare(self, elaboratable, name="top", **kwargs): + assert not self._prepared + self._prepared = True + + fragment = Fragment.get(elaboratable, self) + fragment = SampleLowerer()(fragment) + fragment._propagate_domains(self.create_missing_domain, platform=self) + fragment = DomainLowerer()(fragment) + + def add_pin_fragment(pin, pin_fragment): + pin_fragment = Fragment.get(pin_fragment, self) + if not isinstance(pin_fragment, Instance): + pin_fragment.flatten = True + fragment.add_subfragment(pin_fragment, name="pin_{}".format(pin.name)) + + for pin, port, attrs, invert in self.iter_single_ended_pins(): + if pin.dir == "i": + add_pin_fragment(pin, self.get_input(pin, port, attrs, invert)) + if pin.dir == "o": + add_pin_fragment(pin, self.get_output(pin, port, attrs, invert)) + if pin.dir == "oe": + add_pin_fragment(pin, self.get_tristate(pin, port, attrs, invert)) + if pin.dir == "io": + add_pin_fragment(pin, self.get_input_output(pin, port, attrs, invert)) + + for pin, p_port, n_port, attrs, invert in self.iter_differential_pins(): + if pin.dir == "i": + add_pin_fragment(pin, self.get_diff_input(pin, p_port, n_port, attrs, invert)) + if pin.dir == "o": + add_pin_fragment(pin, self.get_diff_output(pin, p_port, n_port, attrs, invert)) + if pin.dir == "oe": + add_pin_fragment(pin, self.get_diff_tristate(pin, p_port, n_port, attrs, invert)) + if pin.dir == "io": + add_pin_fragment(pin, + self.get_diff_input_output(pin, p_port, n_port, attrs, invert)) + + fragment._propagate_ports(ports=self.iter_ports(), all_undef_as_ports=False) + return self.toolchain_prepare(fragment, name, **kwargs) + + @abstractmethod + def toolchain_prepare(self, fragment, name, **kwargs): + """ + Convert the ``fragment`` and constraints recorded in this :class:`Platform` into + a :class:`BuildPlan`. + """ + raise NotImplementedError # :nocov: + + def toolchain_program(self, products, name, **kwargs): + """ + Extract bitstream for fragment ``name`` from ``products`` and download it to a target. 
+ """ + raise NotImplementedError("Platform '{}' does not support programming" + .format(type(self).__name__)) + + def _check_feature(self, feature, pin, attrs, valid_xdrs, valid_attrs): + if not valid_xdrs: + raise NotImplementedError("Platform '{}' does not support {}" + .format(type(self).__name__, feature)) + elif pin.xdr not in valid_xdrs: + raise NotImplementedError("Platform '{}' does not support {} for XDR {}" + .format(type(self).__name__, feature, pin.xdr)) + + if not valid_attrs and attrs: + raise NotImplementedError("Platform '{}' does not support attributes for {}" + .format(type(self).__name__, feature)) + + @staticmethod + def _invert_if(invert, value): + if invert: + return ~value + else: + return value + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0,), valid_attrs=None) + + m = Module() + m.d.comb += pin.i.eq(self._invert_if(invert, port)) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0,), valid_attrs=None) + + m = Module() + m.d.comb += port.eq(self._invert_if(invert, pin.o)) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0,), valid_attrs=None) + + m = Module() + m.submodules += Instance("$tribuf", + p_WIDTH=pin.width, + i_EN=pin.oe, + i_A=self._invert_if(invert, pin.o), + o_Y=port, + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0,), valid_attrs=None) + + m = Module() + m.submodules += Instance("$tribuf", + p_WIDTH=pin.width, + i_EN=pin.oe, + i_A=self._invert_if(invert, pin.o), + o_Y=port, + ) + m.d.comb += pin.i.eq(self._invert_if(invert, port)) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(), valid_attrs=None) + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(), valid_attrs=None) + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(), valid_attrs=None) + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(), valid_attrs=None) + + +class TemplatedPlatform(Platform): + toolchain = abstractproperty() + file_templates = abstractproperty() + command_templates = abstractproperty() + + build_script_templates = { + "build_{{name}}.sh": """ + # {{autogenerated}} + set -e{{verbose("x")}} + [ -n "${{platform._toolchain_env_var}}" ] && . "${{platform._toolchain_env_var}}" + {{emit_commands("sh")}} + """, + "build_{{name}}.bat": """ + @rem {{autogenerated}} + {{quiet("@echo off")}} + if defined {{platform._toolchain_env_var}} call %{{platform._toolchain_env_var}}% + {{emit_commands("bat")}} + """, + } + + def toolchain_prepare(self, fragment, name, **kwargs): + # Restrict the name of the design to a strict alphanumeric character set. Platforms will + # interpolate the name of the design in many different contexts: filesystem paths, Python + # scripts, Tcl scripts, ad-hoc constraint files, and so on. It is not practical to add + # escaping code that handles every one of their edge cases, so make sure we never hit them + # in the first place. 
+ invalid_char = re.match(r"[^A-Za-z0-9_]", name) + if invalid_char: + raise ValueError("Design name {!r} contains invalid character {!r}; only alphanumeric " + "characters are valid in design names" + .format(name, invalid_char.group(0))) + + # This notice serves a dual purpose: to explain that the file is autogenerated, + # and to incorporate the nMigen version into generated code. + autogenerated = "Automatically generated by nMigen {}. Do not edit.".format(__version__) + + rtlil_text, name_map = rtlil.convert_fragment(fragment, name=name) + + def emit_rtlil(): + return rtlil_text + + def emit_verilog(opts=()): + return verilog._convert_rtlil_text(rtlil_text, + strip_internal_attrs=True, write_verilog_opts=opts) + + def emit_debug_verilog(opts=()): + return verilog._convert_rtlil_text(rtlil_text, + strip_internal_attrs=False, write_verilog_opts=opts) + + def emit_commands(syntax): + commands = [] + + for name in self.required_tools: + env_var = tool_env_var(name) + if syntax == "sh": + template = ": ${{{env_var}:={name}}}" + elif syntax == "bat": + template = \ + "if [%{env_var}%] equ [\"\"] set {env_var}=\n" \ + "if [%{env_var}%] equ [] set {env_var}={name}" + else: + assert False + commands.append(template.format(env_var=env_var, name=name)) + + for index, command_tpl in enumerate(self.command_templates): + command = render(command_tpl, origin="".format(index + 1), + syntax=syntax) + command = re.sub(r"\s+", " ", command) + if syntax == "sh": + commands.append(command) + elif syntax == "bat": + commands.append(command + " || exit /b") + else: + assert False + + return "\n".join(commands) + + def get_override(var): + var_env = "NMIGEN_{}".format(var) + if var_env in os.environ: + # On Windows, there is no way to define an "empty but set" variable; it is tempting + # to use a quoted empty string, but it doesn't do what one would expect. Recognize + # this as a useful pattern anyway, and treat `set VAR=""` on Windows the same way + # `export VAR=` is treated on Linux. 
+ return re.sub(r'^\"\"$', "", os.environ[var_env]) + elif var in kwargs: + if isinstance(kwargs[var], str): + return textwrap.dedent(kwargs[var]).strip() + else: + return kwargs[var] + else: + return jinja2.Undefined(name=var) + + @jinja2.contextfunction + def invoke_tool(context, name): + env_var = tool_env_var(name) + if context.parent["syntax"] == "sh": + return "\"${}\"".format(env_var) + elif context.parent["syntax"] == "bat": + return "%{}%".format(env_var) + else: + assert False + + def options(opts): + if isinstance(opts, str): + return opts + else: + return " ".join(opts) + + def hierarchy(signal, separator): + return separator.join(name_map[signal][1:]) + + def verbose(arg): + if "NMIGEN_verbose" in os.environ: + return arg + else: + return jinja2.Undefined(name="quiet") + + def quiet(arg): + if "NMIGEN_verbose" in os.environ: + return jinja2.Undefined(name="quiet") + else: + return arg + + def render(source, origin, syntax=None): + try: + source = textwrap.dedent(source).strip() + compiled = jinja2.Template(source, trim_blocks=True, lstrip_blocks=True) + compiled.environment.filters["options"] = options + compiled.environment.filters["hierarchy"] = hierarchy + except jinja2.TemplateSyntaxError as e: + e.args = ("{} (at {}:{})".format(e.message, origin, e.lineno),) + raise + return compiled.render({ + "name": name, + "platform": self, + "emit_rtlil": emit_rtlil, + "emit_verilog": emit_verilog, + "emit_debug_verilog": emit_debug_verilog, + "emit_commands": emit_commands, + "syntax": syntax, + "invoke_tool": invoke_tool, + "get_override": get_override, + "verbose": verbose, + "quiet": quiet, + "autogenerated": autogenerated, + }) + + plan = BuildPlan(script="build_{}".format(name)) + for filename_tpl, content_tpl in self.file_templates.items(): + plan.add_file(render(filename_tpl, origin=filename_tpl), + render(content_tpl, origin=content_tpl)) + for filename, content in self.extra_files.items(): + plan.add_file(filename, content) + return plan + + def iter_extra_files(self, *endswith): + return (f for f in self.extra_files if f.endswith(endswith)) diff --git a/nmigen/build/res.py b/nmigen/build/res.py new file mode 100644 index 0000000..5153a5c --- /dev/null +++ b/nmigen/build/res.py @@ -0,0 +1,250 @@ +from collections import OrderedDict + +from ..hdl.ast import * +from ..hdl.rec import * +from ..lib.io import * + +from .dsl import * + + +__all__ = ["ResourceError", "ResourceManager"] + + +class ResourceError(Exception): + pass + + +class ResourceManager: + def __init__(self, resources, connectors): + self.resources = OrderedDict() + self._requested = OrderedDict() + self._phys_reqd = OrderedDict() + + self.connectors = OrderedDict() + self._conn_pins = OrderedDict() + + # Constraint lists + self._ports = [] + self._clocks = SignalDict() + + self.add_resources(resources) + self.add_connectors(connectors) + + def add_resources(self, resources): + for res in resources: + if not isinstance(res, Resource): + raise TypeError("Object {!r} is not a Resource".format(res)) + if (res.name, res.number) in self.resources: + raise NameError("Trying to add {!r}, but {!r} has the same name and number" + .format(res, self.resources[res.name, res.number])) + self.resources[res.name, res.number] = res + + def add_connectors(self, connectors): + for conn in connectors: + if not isinstance(conn, Connector): + raise TypeError("Object {!r} is not a Connector".format(conn)) + if (conn.name, conn.number) in self.connectors: + raise NameError("Trying to add {!r}, but {!r} has the same name and number" + 
.format(conn, self.connectors[conn.name, conn.number])) + self.connectors[conn.name, conn.number] = conn + + for conn_pin, plat_pin in conn: + assert conn_pin not in self._conn_pins + self._conn_pins[conn_pin] = plat_pin + + def lookup(self, name, number=0): + if (name, number) not in self.resources: + raise ResourceError("Resource {}#{} does not exist" + .format(name, number)) + return self.resources[name, number] + + def request(self, name, number=0, *, dir=None, xdr=None): + resource = self.lookup(name, number) + if (resource.name, resource.number) in self._requested: + raise ResourceError("Resource {}#{} has already been requested" + .format(name, number)) + + def merge_options(subsignal, dir, xdr): + if isinstance(subsignal.ios[0], Subsignal): + if dir is None: + dir = dict() + if xdr is None: + xdr = dict() + if not isinstance(dir, dict): + raise TypeError("Directions must be a dict, not {!r}, because {!r} " + "has subsignals" + .format(dir, subsignal)) + if not isinstance(xdr, dict): + raise TypeError("Data rate must be a dict, not {!r}, because {!r} " + "has subsignals" + .format(xdr, subsignal)) + for sub in subsignal.ios: + sub_dir = dir.get(sub.name, None) + sub_xdr = xdr.get(sub.name, None) + dir[sub.name], xdr[sub.name] = merge_options(sub, sub_dir, sub_xdr) + else: + if dir is None: + dir = subsignal.ios[0].dir + if xdr is None: + xdr = 0 + if dir not in ("i", "o", "oe", "io", "-"): + raise TypeError("Direction must be one of \"i\", \"o\", \"oe\", \"io\", " + "or \"-\", not {!r}" + .format(dir)) + if dir != subsignal.ios[0].dir and \ + not (subsignal.ios[0].dir == "io" or dir == "-"): + raise ValueError("Direction of {!r} cannot be changed from \"{}\" to \"{}\"; " + "direction can be changed from \"io\" to \"i\", \"o\", or " + "\"oe\", or from anything to \"-\"" + .format(subsignal.ios[0], subsignal.ios[0].dir, dir)) + if not isinstance(xdr, int) or xdr < 0: + raise ValueError("Data rate of {!r} must be a non-negative integer, not {!r}" + .format(subsignal.ios[0], xdr)) + return dir, xdr + + def resolve(resource, dir, xdr, name, attrs): + for attr_key, attr_value in attrs.items(): + if hasattr(attr_value, "__call__"): + attr_value = attr_value(self) + assert attr_value is None or isinstance(attr_value, str) + if attr_value is None: + del attrs[attr_key] + else: + attrs[attr_key] = attr_value + + if isinstance(resource.ios[0], Subsignal): + fields = OrderedDict() + for sub in resource.ios: + fields[sub.name] = resolve(sub, dir[sub.name], xdr[sub.name], + name="{}__{}".format(name, sub.name), + attrs={**attrs, **sub.attrs}) + return Record([ + (f_name, f.layout) for (f_name, f) in fields.items() + ], fields=fields, name=name) + + elif isinstance(resource.ios[0], (Pins, DiffPairs)): + phys = resource.ios[0] + if isinstance(phys, Pins): + phys_names = phys.names + port = Record([("io", len(phys))], name=name) + if isinstance(phys, DiffPairs): + phys_names = phys.p.names + phys.n.names + port = Record([("p", len(phys)), + ("n", len(phys))], name=name) + if dir == "-": + pin = None + else: + pin = Pin(len(phys), dir, xdr=xdr, name=name) + + for phys_name in phys_names: + if phys_name in self._phys_reqd: + raise ResourceError("Resource component {} uses physical pin {}, but it " + "is already used by resource component {} that was " + "requested earlier" + .format(name, phys_name, self._phys_reqd[phys_name])) + self._phys_reqd[phys_name] = name + + self._ports.append((resource, pin, port, attrs)) + + if pin is not None and resource.clock is not None: + 
self.add_clock_constraint(pin.i, resource.clock.frequency) + + return pin if pin is not None else port + + else: + assert False # :nocov: + + value = resolve(resource, + *merge_options(resource, dir, xdr), + name="{}_{}".format(resource.name, resource.number), + attrs=resource.attrs) + self._requested[resource.name, resource.number] = value + return value + + def iter_single_ended_pins(self): + for res, pin, port, attrs in self._ports: + if pin is None: + continue + if isinstance(res.ios[0], Pins): + yield pin, port.io, attrs, res.ios[0].invert + + def iter_differential_pins(self): + for res, pin, port, attrs in self._ports: + if pin is None: + continue + if isinstance(res.ios[0], DiffPairs): + yield pin, port.p, port.n, attrs, res.ios[0].invert + + def should_skip_port_component(self, port, attrs, component): + return False + + def iter_ports(self): + for res, pin, port, attrs in self._ports: + if isinstance(res.ios[0], Pins): + if not self.should_skip_port_component(port, attrs, "io"): + yield port.io + elif isinstance(res.ios[0], DiffPairs): + if not self.should_skip_port_component(port, attrs, "p"): + yield port.p + if not self.should_skip_port_component(port, attrs, "n"): + yield port.n + else: + assert False + + def iter_port_constraints(self): + for res, pin, port, attrs in self._ports: + if isinstance(res.ios[0], Pins): + if not self.should_skip_port_component(port, attrs, "io"): + yield port.io.name, res.ios[0].map_names(self._conn_pins, res), attrs + elif isinstance(res.ios[0], DiffPairs): + if not self.should_skip_port_component(port, attrs, "p"): + yield port.p.name, res.ios[0].p.map_names(self._conn_pins, res), attrs + if not self.should_skip_port_component(port, attrs, "n"): + yield port.n.name, res.ios[0].n.map_names(self._conn_pins, res), attrs + else: + assert False + + def iter_port_constraints_bits(self): + for port_name, pin_names, attrs in self.iter_port_constraints(): + if len(pin_names) == 1: + yield port_name, pin_names[0], attrs + else: + for bit, pin_name in enumerate(pin_names): + yield "{}[{}]".format(port_name, bit), pin_name, attrs + + def add_clock_constraint(self, clock, frequency): + if not isinstance(clock, Signal): + raise TypeError("Object {!r} is not a Signal".format(clock)) + if not isinstance(frequency, (int, float)): + raise TypeError("Frequency must be a number, not {!r}".format(frequency)) + + if clock in self._clocks: + raise ValueError("Cannot add clock constraint on {!r}, which is already constrained " + "to {} Hz" + .format(clock, self._clocks[clock])) + else: + self._clocks[clock] = float(frequency) + + def iter_clock_constraints(self): + # Back-propagate constraints through the input buffer. For clock constraints on pins + # (the majority of cases), toolchains work better if the constraint is defined on the pin + # and not on the buffered internal net; and if the toolchain is advanced enough that + # it considers clock phase and delay of the input buffer, it is *necessary* to define + # the constraint on the pin to match the designer's expectation of phase being referenced + # to the pin. + # + # Constraints on nets with no corresponding input pin (e.g. PLL or SERDES outputs) are not + # affected. 
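As a usage sketch (the resource name, frequency, and the pre-existing `platform` object are assumptions), a constraint is normally attached to the `i` signal of the requested pin, and `iter_clock_constraints()` below maps it back onto the physical port:

```python
# Assumes "extclk" is a plain input resource declared without a Clock() attribute,
# so it is not constrained automatically during request().
clk = platform.request("extclk", 0, dir="i")
platform.add_clock_constraint(clk.i, 25_000_000)  # 25 MHz

for net, port, frequency in platform.iter_clock_constraints():
    print(net.name, port.name if port is not None else None, frequency)
```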
+ pin_i_to_port = SignalDict() + for res, pin, port, attrs in self._ports: + if hasattr(pin, "i"): + if isinstance(res.ios[0], Pins): + pin_i_to_port[pin.i] = port.io + elif isinstance(res.ios[0], DiffPairs): + pin_i_to_port[pin.i] = port.p + else: + assert False + + for net_signal, frequency in self._clocks.items(): + port_signal = pin_i_to_port.get(net_signal) + yield net_signal, port_signal, frequency diff --git a/nmigen/build/run.py b/nmigen/build/run.py new file mode 100644 index 0000000..a32f38e --- /dev/null +++ b/nmigen/build/run.py @@ -0,0 +1,166 @@ +from collections import OrderedDict +from contextlib import contextmanager +from abc import ABCMeta, abstractmethod +import os +import sys +import subprocess +import tempfile +import zipfile +import hashlib + + +__all__ = ["BuildPlan", "BuildProducts", "LocalBuildProducts"] + + +class BuildPlan: + def __init__(self, script): + """A build plan. + + Parameters + ---------- + script : str + The base name (without extension) of the script that will be executed. + """ + self.script = script + self.files = OrderedDict() + + def add_file(self, filename, content): + """ + Add ``content``, which can be a :class:`str`` or :class:`bytes`, to the build plan + as ``filename``. The file name can be a relative path with directories separated by + forward slashes (``/``). + """ + assert isinstance(filename, str) and filename not in self.files + self.files[filename] = content + + def digest(self, size=64): + """ + Compute a `digest`, a short byte sequence deterministically and uniquely identifying + this build plan. + """ + hasher = hashlib.blake2b(digest_size=size) + for filename in sorted(self.files): + hasher.update(filename.encode("utf-8")) + content = self.files[filename] + if isinstance(content, str): + content = content.encode("utf-8") + hasher.update(content) + hasher.update(self.script.encode("utf-8")) + return hasher.digest() + + def archive(self, file): + """ + Archive files from the build plan into ``file``, which can be either a filename, or + a file-like object. The produced archive is deterministic: exact same files will + always produce exact same archive. + """ + with zipfile.ZipFile(file, "w") as archive: + # Write archive members in deterministic order and with deterministic timestamp. + for filename in sorted(self.files): + archive.writestr(zipfile.ZipInfo(filename), self.files[filename]) + + def execute_local(self, root="build", *, run_script=True): + """ + Execute build plan using the local strategy. Files from the build plan are placed in + the build root directory ``root``, and, if ``run_script`` is ``True``, the script + appropriate for the platform (``{script}.bat`` on Windows, ``{script}.sh`` elsewhere) is + executed in the build root. + + Returns :class:`LocalBuildProducts`. + """ + os.makedirs(root, exist_ok=True) + cwd = os.getcwd() + try: + os.chdir(root) + + for filename, content in self.files.items(): + filename = os.path.normpath(filename) + # Just to make sure we don't accidentally overwrite anything outside of build root. + assert not filename.startswith("..") + dirname = os.path.dirname(filename) + if dirname: + os.makedirs(dirname, exist_ok=True) + + mode = "wt" if isinstance(content, str) else "wb" + with open(filename, mode) as f: + f.write(content) + + if run_script: + if sys.platform.startswith("win32"): + # Without "call", "cmd /c {}.bat" will return 0. + # See https://stackoverflow.com/a/30736987 for a detailed + # explanation of why, including disassembly/decompilation + # of relevant code in cmd.exe. 
+ # Running the script manually from a command prompt is + # unaffected- i.e. "call" is not required. + subprocess.check_call(["cmd", "/c", "call {}.bat".format(self.script)]) + else: + subprocess.check_call(["sh", "{}.sh".format(self.script)]) + + return LocalBuildProducts(os.getcwd()) + + finally: + os.chdir(cwd) + + def execute(self): + """ + Execute build plan using the default strategy. Use one of the ``execute_*`` methods + explicitly to have more control over the strategy. + """ + return self.execute_local() + + +class BuildProducts(metaclass=ABCMeta): + @abstractmethod + def get(self, filename, mode="b"): + """ + Extract ``filename`` from build products, and return it as a :class:`bytes` (if ``mode`` + is ``"b"``) or a :class:`str` (if ``mode`` is ``"t"``). + """ + assert mode in ("b", "t") + + @contextmanager + def extract(self, *filenames): + """ + Extract ``filenames`` from build products, place them in an OS-specific temporary file + location, with the extension preserved, and delete them afterwards. This method is used + as a context manager, e.g.: :: + + with products.extract("bitstream.bin", "programmer.cfg") \ + as bitstream_filename, config_filename: + subprocess.check_call(["program", "-c", config_filename, bitstream_filename]) + """ + files = [] + try: + for filename in filenames: + # On Windows, a named temporary file (as created by Python) is not accessible to + # others if it's still open within the Python process, so we close it and delete + # it manually. + file = tempfile.NamedTemporaryFile(prefix="nmigen_", suffix="_" + filename, + delete=False) + files.append(file) + file.write(self.get(filename)) + file.close() + + if len(files) == 0: + return (yield) + elif len(files) == 1: + return (yield files[0].name) + else: + return (yield [file.name for file in files]) + finally: + for file in files: + os.unlink(file.name) + + +class LocalBuildProducts(BuildProducts): + def __init__(self, root): + # We provide no guarantees that files will be available on the local filesystem (i.e. in + # any way other than through `products.get()`) in general, so downstream code must never + # rely on this, even when we happen to use a local build most of the time. 
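A minimal end-to-end sketch of the intended flow (the plan contents and file names are assumptions, and a POSIX shell is assumed): build products are read back through `BuildProducts.get()` rather than by poking at the build directory directly:

```python
from nmigen.build.run import BuildPlan

# Assemble a tiny plan by hand purely for illustration; real plans come from
# Platform.toolchain_prepare() / Platform.build(..., do_build=False).
plan = BuildPlan(script="build_top")
plan.add_file("build_top.sh", "echo done > top.log\n")

products = plan.execute_local("build")        # writes files and runs build_top.sh
log_text = products.get("top.log", mode="t")  # read a product without touching paths
print(log_text)
```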
+ self.__root = root + + def get(self, filename, mode="b"): + super().get(filename, mode) + with open(os.path.join(self.__root, filename), "r" + mode) as f: + return f.read() diff --git a/nmigen/cli.py b/nmigen/cli.py new file mode 100644 index 0000000..e6757a0 --- /dev/null +++ b/nmigen/cli.py @@ -0,0 +1,74 @@ +import argparse + +from .hdl.ir import Fragment +from .back import rtlil, verilog, pysim + + +__all__ = ["main"] + + +def main_parser(parser=None): + if parser is None: + parser = argparse.ArgumentParser() + + p_action = parser.add_subparsers(dest="action") + + p_generate = p_action.add_parser("generate", + help="generate RTLIL or Verilog from the design") + p_generate.add_argument("-t", "--type", dest="generate_type", + metavar="LANGUAGE", choices=["il", "v"], + default="v", + help="generate LANGUAGE (il for RTLIL, v for Verilog; default: %(default)s)") + p_generate.add_argument("generate_file", + metavar="FILE", type=argparse.FileType("w"), nargs="?", + help="write generated code to FILE") + + p_simulate = p_action.add_parser( + "simulate", help="simulate the design") + p_simulate.add_argument("-v", "--vcd-file", + metavar="VCD-FILE", type=argparse.FileType("w"), + help="write execution trace to VCD-FILE") + p_simulate.add_argument("-w", "--gtkw-file", + metavar="GTKW-FILE", type=argparse.FileType("w"), + help="write GTKWave configuration to GTKW-FILE") + p_simulate.add_argument("-p", "--period", dest="sync_period", + metavar="TIME", type=float, default=1e-6, + help="set 'sync' clock domain period to TIME (default: %(default)s)") + p_simulate.add_argument("-c", "--clocks", dest="sync_clocks", + metavar="COUNT", type=int, required=True, + help="simulate for COUNT 'sync' clock periods") + + return parser + + +def main_runner(parser, args, design, platform=None, name="top", ports=()): + if args.action == "generate": + fragment = Fragment.get(design, platform) + generate_type = args.generate_type + if generate_type is None and args.generate_file: + if args.generate_file.name.endswith(".v"): + generate_type = "v" + if args.generate_file.name.endswith(".il"): + generate_type = "il" + if generate_type is None: + parser.error("specify file type explicitly with -t") + if generate_type == "il": + output = rtlil.convert(fragment, name=name, ports=ports) + if generate_type == "v": + output = verilog.convert(fragment, name=name, ports=ports) + if args.generate_file: + args.generate_file.write(output) + else: + print(output) + + if args.action == "simulate": + fragment = Fragment.get(design, platform) + sim = pysim.Simulator(fragment) + sim.add_clock(args.sync_period) + with sim.write_vcd(vcd_file=args.vcd_file, gtkw_file=args.gtkw_file, traces=ports): + sim.run_until(args.sync_period * args.sync_clocks, run_passive=True) + + +def main(*args, **kwargs): + parser = main_parser() + main_runner(parser, parser.parse_args(), *args, **kwargs) diff --git a/nmigen/compat/__init__.py b/nmigen/compat/__init__.py new file mode 100644 index 0000000..bdf1313 --- /dev/null +++ b/nmigen/compat/__init__.py @@ -0,0 +1,11 @@ +from .fhdl.structure import * +from .fhdl.module import * +from .fhdl.specials import * +from .fhdl.bitcontainer import * +from .fhdl.decorators import * +# from .fhdl.simplify import * + +from .sim import * + +from .genlib.record import * +from .genlib.fsm import * diff --git a/nmigen/compat/fhdl/__init__.py b/nmigen/compat/fhdl/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/compat/fhdl/bitcontainer.py b/nmigen/compat/fhdl/bitcontainer.py new file mode 100644 
index 0000000..648b129 --- /dev/null +++ b/nmigen/compat/fhdl/bitcontainer.py @@ -0,0 +1,21 @@ +from ... import utils +from ...hdl import ast +from ..._utils import deprecated + + +__all__ = ["log2_int", "bits_for", "value_bits_sign"] + + +@deprecated("instead of `log2_int`, use `nmigen.utils.log2_int`") +def log2_int(n, need_pow2=True): + return utils.log2_int(n, need_pow2) + + +@deprecated("instead of `bits_for`, use `nmigen.utils.bits_for`") +def bits_for(n, require_sign_bit=False): + return utils.bits_for(n, require_sign_bit) + + +@deprecated("instead of `value_bits_sign(v)`, use `v.shape()`") +def value_bits_sign(v): + return ast.Value.cast(v).shape() diff --git a/nmigen/compat/fhdl/conv_output.py b/nmigen/compat/fhdl/conv_output.py new file mode 100644 index 0000000..793fad2 --- /dev/null +++ b/nmigen/compat/fhdl/conv_output.py @@ -0,0 +1,35 @@ +from operator import itemgetter + + +class ConvOutput: + def __init__(self): + self.main_source = "" + self.data_files = dict() + + def set_main_source(self, src): + self.main_source = src + + def add_data_file(self, filename_base, content): + filename = filename_base + i = 1 + while filename in self.data_files: + parts = filename_base.split(".", maxsplit=1) + parts[0] += "_" + str(i) + filename = ".".join(parts) + i += 1 + self.data_files[filename] = content + return filename + + def __str__(self): + r = self.main_source + "\n" + for filename, content in sorted(self.data_files.items(), + key=itemgetter(0)): + r += filename + ":\n" + content + return r + + def write(self, main_filename): + with open(main_filename, "w") as f: + f.write(self.main_source) + for filename, content in self.data_files.items(): + with open(filename, "w") as f: + f.write(content) diff --git a/nmigen/compat/fhdl/decorators.py b/nmigen/compat/fhdl/decorators.py new file mode 100644 index 0000000..e61ccb5 --- /dev/null +++ b/nmigen/compat/fhdl/decorators.py @@ -0,0 +1,55 @@ +from ...hdl.ast import * +from ...hdl.xfrm import ResetInserter as NativeResetInserter +from ...hdl.xfrm import EnableInserter as NativeEnableInserter +from ...hdl.xfrm import DomainRenamer as NativeDomainRenamer +from ..._utils import deprecated + + +__all__ = ["ResetInserter", "CEInserter", "ClockDomainsRenamer"] + + +class _CompatControlInserter: + _control_name = None + _native_inserter = None + + def __init__(self, clock_domains=None): + self.clock_domains = clock_domains + + def __call__(self, module): + if self.clock_domains is None: + signals = {self._control_name: ("sync", Signal(name=self._control_name))} + else: + def name(cd): + return self._control_name + "_" + cd + signals = {name(cd): (cd, Signal(name=name(cd))) for cd in self.clock_domains} + for name, (cd, signal) in signals.items(): + setattr(module, name, signal) + return self._native_inserter(dict(signals.values()))(module) + + +@deprecated("instead of `migen.fhdl.decorators.ResetInserter`, " + "use `nmigen.hdl.xfrm.ResetInserter`; note that nMigen ResetInserter accepts " + "a dict of reset signals (or a single reset signal) as an argument, not " + "a set of clock domain names (or a single clock domain name)") +class CompatResetInserter(_CompatControlInserter): + _control_name = "reset" + _native_inserter = NativeResetInserter + + +@deprecated("instead of `migen.fhdl.decorators.CEInserter`, " + "use `nmigen.hdl.xfrm.EnableInserter`; note that nMigen EnableInserter accepts " + "a dict of enable signals (or a single enable signal) as an argument, not " + "a set of clock domain names (or a single clock domain name)") +class 
CompatCEInserter(_CompatControlInserter):
+    _control_name = "ce"
+    _native_inserter = NativeEnableInserter
+
+
+class CompatClockDomainsRenamer(NativeDomainRenamer):
+    def __init__(self, cd_remapping):
+        super().__init__(cd_remapping)
+
+
+ResetInserter = CompatResetInserter
+CEInserter = CompatCEInserter
+ClockDomainsRenamer = CompatClockDomainsRenamer
diff --git a/nmigen/compat/fhdl/module.py b/nmigen/compat/fhdl/module.py
new file mode 100644
index 0000000..e5b7796
--- /dev/null
+++ b/nmigen/compat/fhdl/module.py
@@ -0,0 +1,163 @@
+from collections.abc import Iterable
+
+from ..._utils import flatten, deprecated
+from ...hdl import dsl, ir
+
+
+__all__ = ["Module", "FinalizeError"]
+
+
+def _flat_list(e):
+    if isinstance(e, Iterable):
+        return list(flatten(e))
+    else:
+        return [e]
+
+
+class CompatFinalizeError(Exception):
+    pass
+
+
+FinalizeError = CompatFinalizeError
+
+
+class _CompatModuleProxy:
+    def __init__(self, cm):
+        object.__setattr__(self, "_cm", cm)
+
+
+class _CompatModuleComb(_CompatModuleProxy):
+    @deprecated("instead of `self.comb +=`, use `m.d.comb +=`")
+    def __iadd__(self, assigns):
+        self._cm._module._add_statement(assigns, domain=None, depth=0, compat_mode=True)
+        return self
+
+
+class _CompatModuleSyncCD:
+    def __init__(self, cm, cd):
+        self._cm = cm
+        self._cd = cd
+
+    @deprecated("instead of `self.sync.<domain> +=`, use `m.d.<domain> +=`")
+    def __iadd__(self, assigns):
+        self._cm._module._add_statement(assigns, domain=self._cd, depth=0, compat_mode=True)
+        return self
+
+
+class _CompatModuleSync(_CompatModuleProxy):
+    @deprecated("instead of `self.sync +=`, use `m.d.sync +=`")
+    def __iadd__(self, assigns):
+        self._cm._module._add_statement(assigns, domain="sync", depth=0, compat_mode=True)
+        return self
+
+    def __getattr__(self, name):
+        return _CompatModuleSyncCD(self._cm, name)
+
+    def __setattr__(self, name, value):
+        if not isinstance(value, _CompatModuleSyncCD):
+            raise AttributeError("Attempted to assign sync property - use += instead")
+
+
+class _CompatModuleSpecials(_CompatModuleProxy):
+    @deprecated("instead of `self.specials.<name> =`, use `m.submodules.<name> =`")
+    def __setattr__(self, name, value):
+        self._cm._submodules.append((name, value))
+        setattr(self._cm, name, value)
+
+    @deprecated("instead of `self.specials +=`, use `m.submodules +=`")
+    def __iadd__(self, other):
+        self._cm._submodules += [(None, e) for e in _flat_list(other)]
+        return self
+
+
+class _CompatModuleSubmodules(_CompatModuleProxy):
+    @deprecated("instead of `self.submodules.<name> =`, use `m.submodules.<name> =`")
+    def __setattr__(self, name, value):
+        self._cm._submodules.append((name, value))
+        setattr(self._cm, name, value)
+
+    @deprecated("instead of `self.submodules +=`, use `m.submodules +=`")
+    def __iadd__(self, other):
+        self._cm._submodules += [(None, e) for e in _flat_list(other)]
+        return self
+
+
+class _CompatModuleClockDomains(_CompatModuleProxy):
+    @deprecated("instead of `self.clock_domains.<name> =`, use `m.domains.<name> =`")
+    def __setattr__(self, name, value):
+        self.__iadd__(value)
+        setattr(self._cm, name, value)
+
+    @deprecated("instead of `self.clock_domains +=`, use `m.domains +=`")
+    def __iadd__(self, other):
+        self._cm._module.domains += _flat_list(other)
+        return self
+
+
+class CompatModule(ir.Elaboratable):
+    _MustUse__silence = True
+
+    # Actually returns another nMigen Elaboratable (nmigen.dsl.Module), not a Fragment.
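+    # Illustrative sketch (hypothetical, not part of the upstream sources): a Migen-style
+    # design subclasses this CompatModule, appends statements through self.comb/self.sync,
+    # and can then be elaborated like any other nMigen Elaboratable, e.g.:
+    #
+    #     class Blinker(Module):
+    #         def __init__(self):
+    #             counter = Signal(24)
+    #             self.led = Signal()
+    #             self.sync += counter.eq(counter + 1)
+    #             self.comb += self.led.eq(counter[-1])
+    #
+    # get_fragment()/elaborate() below finalize such a module into the wrapped
+    # nmigen.dsl.Module.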
+ def get_fragment(self): + assert not self.get_fragment_called + self.get_fragment_called = True + self.finalize() + return self._module + + def elaborate(self, platform): + if not self.get_fragment_called: + self.get_fragment() + return self._module + + def __getattr__(self, name): + if name == "comb": + return _CompatModuleComb(self) + elif name == "sync": + return _CompatModuleSync(self) + elif name == "specials": + return _CompatModuleSpecials(self) + elif name == "submodules": + return _CompatModuleSubmodules(self) + elif name == "clock_domains": + return _CompatModuleClockDomains(self) + elif name == "finalized": + self.finalized = False + return self.finalized + elif name == "_module": + self._module = dsl.Module() + return self._module + elif name == "_submodules": + self._submodules = [] + return self._submodules + elif name == "_clock_domains": + self._clock_domains = [] + return self._clock_domains + elif name == "get_fragment_called": + self.get_fragment_called = False + return self.get_fragment_called + else: + raise AttributeError("'{}' object has no attribute '{}'" + .format(type(self).__name__, name)) + + def finalize(self, *args, **kwargs): + def finalize_submodules(): + for name, submodule in self._submodules: + if not hasattr(submodule, "finalize"): + continue + if submodule.finalized: + continue + submodule.finalize(*args, **kwargs) + + if not self.finalized: + self.finalized = True + finalize_submodules() + self.do_finalize(*args, **kwargs) + finalize_submodules() + for name, submodule in self._submodules: + self._module._add_submodule(submodule, name) + + def do_finalize(self): + pass + + +Module = CompatModule diff --git a/nmigen/compat/fhdl/specials.py b/nmigen/compat/fhdl/specials.py new file mode 100644 index 0000000..eef5a57 --- /dev/null +++ b/nmigen/compat/fhdl/specials.py @@ -0,0 +1,140 @@ +import warnings + +from ..._utils import deprecated, extend +from ...hdl.ast import * +from ...hdl.ir import Elaboratable +from ...hdl.mem import Memory as NativeMemory +from ...hdl.ir import Fragment, Instance +from ...hdl.dsl import Module +from .module import Module as CompatModule +from .structure import Signal +from ...lib.io import Pin + + +__all__ = ["TSTriple", "Instance", "Memory", "READ_FIRST", "WRITE_FIRST", "NO_CHANGE"] + + +class TSTriple: + def __init__(self, bits_sign=None, min=None, max=None, reset_o=0, reset_oe=0, reset_i=0, + name=None): + self.o = Signal(bits_sign, min=min, max=max, reset=reset_o, + name=None if name is None else name + "_o") + self.oe = Signal(reset=reset_oe, + name=None if name is None else name + "_oe") + self.i = Signal(bits_sign, min=min, max=max, reset=reset_i, + name=None if name is None else name + "_i") + + def __len__(self): + return len(self.o) + + def get_tristate(self, io): + return Tristate(io, self.o, self.oe, self.i) + + +class Tristate(Elaboratable): + def __init__(self, target, o, oe, i=None): + self.target = target + self.o = o + self.oe = oe + self.i = i if i is not None else None + + def elaborate(self, platform): + if hasattr(platform, "get_input_output"): + pin = Pin(len(self.target), dir="oe" if self.i is None else "io") + pin.o = self.o + pin.oe = self.oe + if self.i is not None: + pin.i = self.i + return platform.get_input_output(pin, self.target, attrs={}, invert=None) + + m = Module() + m.d.comb += self.i.eq(self.target) + m.submodules += Instance("$tribuf", + p_WIDTH=len(self.target), + i_EN=self.oe, + i_A=self.o, + o_Y=self.target, + ) + + f = m.elaborate(platform) + f.flatten = True + return f + + 
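+# Note (editor's sketch, not upstream code): the Migen memory-port modes below map onto
+# native nMigen read ports as follows: WRITE_FIRST becomes a transparent read port,
+# READ_FIRST a non-transparent one, and NO_CHANGE is rejected by CompatMemory.get_port().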
+(READ_FIRST, WRITE_FIRST, NO_CHANGE) = range(3) + + +class _MemoryPort(CompatModule): + def __init__(self, adr, dat_r, we=None, dat_w=None, async_read=False, re=None, + we_granularity=0, mode=WRITE_FIRST, clock_domain="sync"): + self.adr = adr + self.dat_r = dat_r + self.we = we + self.dat_w = dat_w + self.async_read = async_read + self.re = re + self.we_granularity = we_granularity + self.mode = mode + self.clock = ClockSignal(clock_domain) + + +@extend(NativeMemory) +@deprecated("it is not necessary or permitted to add Memory as a special or submodule") +def elaborate(self, platform): + return Fragment() + + +class CompatMemory(NativeMemory, Elaboratable): + def __init__(self, width, depth, init=None, name=None): + super().__init__(width=width, depth=depth, init=init, name=name) + + @deprecated("instead of `get_port()`, use `read_port()` and `write_port()`") + def get_port(self, write_capable=False, async_read=False, has_re=False, we_granularity=0, + mode=WRITE_FIRST, clock_domain="sync"): + if we_granularity >= self.width: + warnings.warn("do not specify `we_granularity` greater than memory width, as it " + "is a hard error in non-compatibility mode", + DeprecationWarning, stacklevel=1) + we_granularity = 0 + if we_granularity == 0: + warnings.warn("instead of `we_granularity=0`, use `we_granularity=None` or avoid " + "specifying it at all, as it is a hard error in non-compatibility mode", + DeprecationWarning, stacklevel=1) + we_granularity = None + assert mode != NO_CHANGE + rdport = self.read_port(domain="comb" if async_read else clock_domain, + transparent=mode == WRITE_FIRST) + rdport.addr.name = "{}_addr".format(self.name) + adr = rdport.addr + dat_r = rdport.data + if write_capable: + wrport = self.write_port(domain=clock_domain, granularity=we_granularity) + wrport.addr = rdport.addr + we = wrport.en + dat_w = wrport.data + else: + we = None + dat_w = None + if has_re: + if mode == READ_FIRST: + re = rdport.en + else: + warnings.warn("the combination of `has_re=True` and `mode=WRITE_FIRST` has " + "surprising behavior: keeping `re` low would merely latch " + "the address, while the data will change with changing memory " + "contents; avoid using `re` with transparent ports as it is a hard " + "error in non-compatibility mode", + DeprecationWarning, stacklevel=1) + re = Signal() + else: + re = None + mp = _MemoryPort(adr, dat_r, we, dat_w, + async_read, re, we_granularity, mode, + clock_domain) + mp.submodules.rdport = rdport + if write_capable: + mp.submodules.wrport = wrport + return mp + + +Memory = CompatMemory diff --git a/nmigen/compat/fhdl/structure.py b/nmigen/compat/fhdl/structure.py new file mode 100644 index 0000000..d450e45 --- /dev/null +++ b/nmigen/compat/fhdl/structure.py @@ -0,0 +1,185 @@ +import builtins +import warnings +from collections import OrderedDict + +from ...utils import bits_for +from ..._utils import deprecated, extend +from ...hdl import ast +from ...hdl.ast import (DUID, + Shape, signed, unsigned, + Value, Const, C, Mux, Slice as _Slice, Part, Cat, Repl, + Signal as NativeSignal, + ClockSignal, ResetSignal, + Array, ArrayProxy as _ArrayProxy) +from ...hdl.cd import ClockDomain + + +__all__ = ["DUID", "wrap", "Mux", "Cat", "Replicate", "Constant", "C", "Signal", "ClockSignal", + "ResetSignal", "If", "Case", "Array", "ClockDomain"] + + +@deprecated("instead of `wrap`, use `Value.cast`") +def wrap(v): + return Value.cast(v) + + +class CompatSignal(NativeSignal): + def __init__(self, bits_sign=None, name=None, variable=False, reset=0, + 
reset_less=False, name_override=None, min=None, max=None, + related=None, attr=None, src_loc_at=0, **kwargs): + if min is not None or max is not None: + warnings.warn("instead of `Signal(min={min}, max={max})`, " + "use `Signal(range({min}, {max}))`" + .format(min=min or 0, max=max or 2), + DeprecationWarning, stacklevel=2 + src_loc_at) + + if bits_sign is None: + if min is None: + min = 0 + if max is None: + max = 2 + max -= 1 # make both bounds inclusive + if min > max: + raise ValueError("Lower bound {} should be less or equal to higher bound {}" + .format(min, max + 1)) + sign = min < 0 or max < 0 + if min == max: + bits = 0 + else: + bits = builtins.max(bits_for(min, sign), bits_for(max, sign)) + shape = signed(bits) if sign else unsigned(bits) + else: + if not (min is None and max is None): + raise ValueError("Only one of bits/signedness or bounds may be specified") + shape = bits_sign + + super().__init__(shape=shape, name=name_override or name, + reset=reset, reset_less=reset_less, + attrs=attr, src_loc_at=1 + src_loc_at, **kwargs) + + +Signal = CompatSignal + + +@deprecated("instead of `Constant`, use `Const`") +def Constant(value, bits_sign=None): + return Const(value, bits_sign) + + +@deprecated("instead of `Replicate`, use `Repl`") +def Replicate(v, n): + return Repl(v, n) + + +@extend(Const) +@property +@deprecated("instead of `.nbits`, use `.width`") +def nbits(self): + return self.width + + +@extend(NativeSignal) +@property +@deprecated("instead of `.nbits`, use `.width`") +def nbits(self): + return self.width + + +@extend(NativeSignal) +@NativeSignal.nbits.setter +@deprecated("instead of `.nbits = x`, use `.width = x`") +def nbits(self, value): + self.width = value + + +@extend(NativeSignal) +@deprecated("instead of `.part`, use `.bit_select`") +def part(self, offset, width): + return Part(self, offset, width, src_loc_at=2) + + +@extend(Cat) +@property +@deprecated("instead of `.l`, use `.parts`") +def l(self): + return self.parts + + +@extend(ast.Operator) +@property +@deprecated("instead of `.op`, use `.operator`") +def op(self): + return self.operator + + +@extend(_ArrayProxy) +@property +@deprecated("instead `_ArrayProxy.choices`, use `ArrayProxy.elems`") +def choices(self): + return self.elems + + +class If(ast.Switch): + @deprecated("instead of `If(cond, ...)`, use `with m.If(cond): ...`") + def __init__(self, cond, *stmts): + cond = Value.cast(cond) + if len(cond) != 1: + cond = cond.bool() + super().__init__(cond, {("1",): ast.Statement.cast(stmts)}) + + @deprecated("instead of `.Elif(cond, ...)`, use `with m.Elif(cond): ...`") + def Elif(self, cond, *stmts): + cond = Value.cast(cond) + if len(cond) != 1: + cond = cond.bool() + self.cases = OrderedDict((("-" + k,), v) for (k,), v in self.cases.items()) + self.cases[("1" + "-" * len(self.test),)] = ast.Statement.cast(stmts) + self.test = Cat(self.test, cond) + return self + + @deprecated("instead of `.Else(...)`, use `with m.Else(): ...`") + def Else(self, *stmts): + self.cases[()] = ast.Statement.cast(stmts) + return self + + +class Case(ast.Switch): + @deprecated("instead of `Case(test, { value: stmts })`, use `with m.Switch(test):` and " + "`with m.Case(value): stmts`; instead of `\"default\": stmts`, use " + "`with m.Case(): stmts`") + def __init__(self, test, cases): + new_cases = [] + default = None + for k, v in cases.items(): + if isinstance(k, (bool, int)): + k = Const(k) + if (not isinstance(k, Const) + and not (isinstance(k, str) and k == "default")): + raise TypeError("Case object is not a Migen 
constant") + if isinstance(k, str) and k == "default": + default = v + continue + else: + k = k.value + new_cases.append((k, v)) + if default is not None: + new_cases.append((None, default)) + super().__init__(test, OrderedDict(new_cases)) + + @deprecated("instead of `Case(...).makedefault()`, use an explicit default case: " + "`with m.Case(): ...`") + def makedefault(self, key=None): + if key is None: + for choice in self.cases.keys(): + if (key is None + or (isinstance(choice, str) and choice == "default") + or choice > key): + key = choice + elif isinstance(key, str) and key == "default": + key = () + else: + key = ("{:0{}b}".format(ast.Value.cast(key).value, len(self.test)),) + stmts = self.cases[key] + del self.cases[key] + self.cases[()] = stmts + return self diff --git a/nmigen/compat/fhdl/verilog.py b/nmigen/compat/fhdl/verilog.py new file mode 100644 index 0000000..35a1ca5 --- /dev/null +++ b/nmigen/compat/fhdl/verilog.py @@ -0,0 +1,31 @@ +import warnings + +from ...hdl.ir import Fragment +from ...hdl.cd import ClockDomain +from ...back import verilog +from .conv_output import ConvOutput + + +def convert(fi, ios=None, name="top", special_overrides=dict(), + attr_translate=None, create_clock_domains=True, + display_run=False): + if display_run: + warnings.warn("`display_run=True` support has been removed", + DeprecationWarning, stacklevel=1) + if special_overrides: + warnings.warn("`special_overrides` support as well as `Special` has been removed", + DeprecationWarning, stacklevel=1) + # TODO: attr_translate + + def missing_domain(name): + if create_clock_domains: + return ClockDomain(name) + v_output = verilog.convert( + elaboratable=fi.get_fragment(), + name=name, + ports=ios or (), + missing_domain=missing_domain + ) + output = ConvOutput() + output.set_main_source(v_output) + return output diff --git a/nmigen/compat/genlib/__init__.py b/nmigen/compat/genlib/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/compat/genlib/cdc.py b/nmigen/compat/genlib/cdc.py new file mode 100644 index 0000000..9da98bd --- /dev/null +++ b/nmigen/compat/genlib/cdc.py @@ -0,0 +1,74 @@ +import warnings + +from ..._utils import deprecated +from ...lib.cdc import FFSynchronizer as NativeFFSynchronizer +from ...lib.cdc import PulseSynchronizer as NativePulseSynchronizer +from ...hdl.ast import * +from ..fhdl.module import CompatModule +from ..fhdl.structure import If + + +__all__ = ["MultiReg", "PulseSynchronizer", "GrayCounter", "GrayDecoder"] + + +class MultiReg(NativeFFSynchronizer): + def __init__(self, i, o, odomain="sync", n=2, reset=0): + old_opts = [] + new_opts = [] + if odomain != "sync": + old_opts.append(", odomain={!r}".format(odomain)) + new_opts.append(", o_domain={!r}".format(odomain)) + if n != 2: + old_opts.append(", n={!r}".format(n)) + new_opts.append(", stages={!r}".format(n)) + warnings.warn("instead of `MultiReg(...{})`, use `FFSynchronizer(...{})`" + .format("".join(old_opts), "".join(new_opts)), + DeprecationWarning, stacklevel=2) + super().__init__(i, o, o_domain=odomain, stages=n, reset=reset) + self.odomain = odomain + + +@deprecated("instead of `migen.genlib.cdc.PulseSynchronizer`, use `nmigen.lib.cdc.PulseSynchronizer`") +class PulseSynchronizer(NativePulseSynchronizer): + def __init__(self, idomain, odomain): + super().__init__(i_domain=idomain, o_domain=odomain) + + +@deprecated("instead of `migen.genlib.cdc.GrayCounter`, use `nmigen.lib.coding.GrayEncoder`") +class GrayCounter(CompatModule): + def __init__(self, width): + self.ce = Signal() + 
self.q = Signal(width) + self.q_next = Signal(width) + self.q_binary = Signal(width) + self.q_next_binary = Signal(width) + + ### + + self.comb += [ + If(self.ce, + self.q_next_binary.eq(self.q_binary + 1) + ).Else( + self.q_next_binary.eq(self.q_binary) + ), + self.q_next.eq(self.q_next_binary ^ self.q_next_binary[1:]) + ] + self.sync += [ + self.q_binary.eq(self.q_next_binary), + self.q.eq(self.q_next) + ] + + +@deprecated("instead of `migen.genlib.cdc.GrayDecoder`, use `nmigen.lib.coding.GrayDecoder`") +class GrayDecoder(CompatModule): + def __init__(self, width): + self.i = Signal(width) + self.o = Signal(width, reset_less=True) + + # # # + + o_comb = Signal(width) + self.comb += o_comb[-1].eq(self.i[-1]) + for i in reversed(range(width-1)): + self.comb += o_comb[i].eq(o_comb[i+1] ^ self.i[i]) + self.sync += self.o.eq(o_comb) diff --git a/nmigen/compat/genlib/coding.py b/nmigen/compat/genlib/coding.py new file mode 100644 index 0000000..44154a2 --- /dev/null +++ b/nmigen/compat/genlib/coding.py @@ -0,0 +1,4 @@ +from ...lib.coding import * + + +__all__ = ["Encoder", "PriorityEncoder", "Decoder", "PriorityDecoder"] diff --git a/nmigen/compat/genlib/fifo.py b/nmigen/compat/genlib/fifo.py new file mode 100644 index 0000000..a944428 --- /dev/null +++ b/nmigen/compat/genlib/fifo.py @@ -0,0 +1,147 @@ +from ..._utils import deprecated, extend +from ...lib.fifo import (FIFOInterface as NativeFIFOInterface, + SyncFIFO as NativeSyncFIFO, SyncFIFOBuffered as NativeSyncFIFOBuffered, + AsyncFIFO as NativeAsyncFIFO, AsyncFIFOBuffered as NativeAsyncFIFOBuffered) + + +__all__ = ["_FIFOInterface", "SyncFIFO", "SyncFIFOBuffered", "AsyncFIFO", "AsyncFIFOBuffered"] + + +class CompatFIFOInterface(NativeFIFOInterface): + @deprecated("attribute `fwft` must be provided to FIFOInterface constructor") + def __init__(self, width, depth): + super().__init__(width=width, depth=depth, fwft=False) + del self.fwft + + +@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.din`, use `fifo.w_data`") +def din(self): + return self.w_data + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.din.setter +@deprecated("instead of `fifo.din = x`, use `fifo.w_data = x`") +def din(self, w_data): + self.w_data = w_data + + +@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.writable`, use `fifo.w_rdy`") +def writable(self): + return self.w_rdy + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.writable.setter +@deprecated("instead of `fifo.writable = x`, use `fifo.w_rdy = x`") +def writable(self, w_rdy): + self.w_rdy = w_rdy + + +@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.we`, use `fifo.w_en`") +def we(self): + return self.w_en + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.we.setter +@deprecated("instead of `fifo.we = x`, use `fifo.w_en = x`") +def we(self, w_en): + self.w_en = w_en + + +@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.dout`, use `fifo.r_data`") +def dout(self): + return self.r_data + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.dout.setter +@deprecated("instead of `fifo.dout = x`, use `fifo.r_data = x`") +def dout(self, r_data): + self.r_data = r_data + + +@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.readable`, use `fifo.r_rdy`") +def readable(self): + return self.r_rdy + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.readable.setter +@deprecated("instead of `fifo.readable = x`, use `fifo.r_rdy = x`") +def readable(self, r_rdy): + self.r_rdy = r_rdy + + 
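+# Note (editor's sketch, not upstream code): each `@extend` call in this file monkey-patches
+# a deprecated Migen-era alias onto the native nMigen FIFO classes, so legacy code such as
+# `fifo.re` keeps working while a DeprecationWarning points at the new name (`fifo.r_en`).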
+@extend(NativeFIFOInterface) +@property +@deprecated("instead of `fifo.re`, use `fifo.r_en`") +def re(self): + return self.r_en + + +@extend(NativeFIFOInterface) +@NativeFIFOInterface.re.setter +@deprecated("instead of `fifo.re = x`, use `fifo.r_en = x`") +def re(self, r_en): + self.r_en = r_en + + +@extend(NativeFIFOInterface) +def read(self): + """Read method for simulation.""" + assert (yield self.r_rdy) + value = (yield self.r_data) + yield self.r_en.eq(1) + yield + yield self.r_en.eq(0) + yield + return value + +@extend(NativeFIFOInterface) +def write(self, data): + """Write method for simulation.""" + assert (yield self.w_rdy) + yield self.w_data.eq(data) + yield self.w_en.eq(1) + yield + yield self.w_en.eq(0) + yield + + +class CompatSyncFIFO(NativeSyncFIFO): + def __init__(self, width, depth, fwft=True): + super().__init__(width=width, depth=depth, fwft=fwft) + + +class CompatSyncFIFOBuffered(NativeSyncFIFOBuffered): + def __init__(self, width, depth): + super().__init__(width=width, depth=depth) + + +class CompatAsyncFIFO(NativeAsyncFIFO): + def __init__(self, width, depth): + super().__init__(width=width, depth=depth) + + +class CompatAsyncFIFOBuffered(NativeAsyncFIFOBuffered): + def __init__(self, width, depth): + super().__init__(width=width, depth=depth) + + +_FIFOInterface = CompatFIFOInterface +SyncFIFO = CompatSyncFIFO +SyncFIFOBuffered = CompatSyncFIFOBuffered +AsyncFIFO = CompatAsyncFIFO +AsyncFIFOBuffered = CompatAsyncFIFOBuffered diff --git a/nmigen/compat/genlib/fsm.py b/nmigen/compat/genlib/fsm.py new file mode 100644 index 0000000..6904550 --- /dev/null +++ b/nmigen/compat/genlib/fsm.py @@ -0,0 +1,193 @@ +from collections import OrderedDict + +from ..._utils import deprecated, _ignore_deprecated +from ...hdl.xfrm import ValueTransformer, StatementTransformer +from ...hdl.ast import * +from ...hdl.ast import Signal as NativeSignal +from ..fhdl.module import CompatModule, CompatFinalizeError +from ..fhdl.structure import Signal, If, Case + + +__all__ = ["AnonymousState", "NextState", "NextValue", "FSM"] + + +class AnonymousState: + pass + + +class NextState(Statement): + def __init__(self, state): + super().__init__() + self.state = state + + +class NextValue(Statement): + def __init__(self, target, value): + super().__init__() + self.target = target + self.value = value + + +def _target_eq(a, b): + if type(a) != type(b): + return False + ty = type(a) + if ty == Const: + return a.value == b.value + elif ty == NativeSignal or ty == Signal: + return a is b + elif ty == Cat: + return all(_target_eq(x, y) for x, y in zip(a.l, b.l)) + elif ty == Slice: + return (_target_eq(a.value, b.value) + and a.start == b.start + and a.stop == b.stop) + elif ty == Part: + return (_target_eq(a.value, b.value) + and _target_eq(a.offset == b.offset) + and a.width == b.width) + elif ty == ArrayProxy: + return (all(_target_eq(x, y) for x, y in zip(a.choices, b.choices)) + and _target_eq(a.key, b.key)) + else: + raise ValueError("NextValue cannot be used with target type '{}'" + .format(ty)) + + +class _LowerNext(ValueTransformer, StatementTransformer): + def __init__(self, next_state_signal, encoding, aliases): + self.next_state_signal = next_state_signal + self.encoding = encoding + self.aliases = aliases + # (target, next_value_ce, next_value) + self.registers = [] + + def _get_register_control(self, target): + for x in self.registers: + if _target_eq(target, x[0]): + return x[1], x[2] + raise KeyError + + def on_unknown_statement(self, node): + if isinstance(node, NextState): + try: + 
actual_state = self.aliases[node.state] + except KeyError: + actual_state = node.state + return self.next_state_signal.eq(self.encoding[actual_state]) + elif isinstance(node, NextValue): + try: + next_value_ce, next_value = self._get_register_control(node.target) + except KeyError: + related = node.target if isinstance(node.target, Signal) else None + next_value = Signal(node.target.shape(), + name=None if related is None else "{}_fsm_next".format(related.name)) + next_value_ce = Signal( + name=None if related is None else "{}_fsm_next_ce".format(related.name)) + self.registers.append((node.target, next_value_ce, next_value)) + return next_value.eq(node.value), next_value_ce.eq(1) + else: + return node + + +@deprecated("instead of `migen.genlib.fsm.FSM()`, use `with m.FSM():`; note that there is no " + "replacement for `{before,after}_{entering,leaving}` and `delayed_enter` methods") +class FSM(CompatModule): + def __init__(self, reset_state=None): + self.actions = OrderedDict() + self.state_aliases = dict() + self.reset_state = reset_state + + self.before_entering_signals = OrderedDict() + self.before_leaving_signals = OrderedDict() + self.after_entering_signals = OrderedDict() + self.after_leaving_signals = OrderedDict() + + def act(self, state, *statements): + if self.finalized: + raise CompatFinalizeError + if self.reset_state is None: + self.reset_state = state + if state not in self.actions: + self.actions[state] = [] + self.actions[state] += statements + + def delayed_enter(self, name, target, delay): + if self.finalized: + raise CompatFinalizeError + if delay > 0: + state = name + for i in range(delay): + if i == delay - 1: + next_state = target + else: + next_state = AnonymousState() + self.act(state, NextState(next_state)) + state = next_state + else: + self.state_aliases[name] = target + + def ongoing(self, state): + is_ongoing = Signal() + self.act(state, is_ongoing.eq(1)) + return is_ongoing + + def _get_signal(self, d, state): + if state not in self.actions: + self.actions[state] = [] + try: + return d[state] + except KeyError: + is_el = Signal() + d[state] = is_el + return is_el + + def before_entering(self, state): + return self._get_signal(self.before_entering_signals, state) + + def before_leaving(self, state): + return self._get_signal(self.before_leaving_signals, state) + + def after_entering(self, state): + signal = self._get_signal(self.after_entering_signals, state) + self.sync += signal.eq(self.before_entering(state)) + return signal + + def after_leaving(self, state): + signal = self._get_signal(self.after_leaving_signals, state) + self.sync += signal.eq(self.before_leaving(state)) + return signal + + @_ignore_deprecated + def do_finalize(self): + nstates = len(self.actions) + self.encoding = dict((s, n) for n, s in enumerate(self.actions.keys())) + self.decoding = {n: s for s, n in self.encoding.items()} + + decoder = lambda n: "{}/{}".format(self.decoding[n], n) + self.state = Signal(range(nstates), reset=self.encoding[self.reset_state], decoder=decoder) + self.next_state = Signal.like(self.state) + + for state, signal in self.before_leaving_signals.items(): + encoded = self.encoding[state] + self.comb += signal.eq((self.state == encoded) & ~(self.next_state == encoded)) + if self.reset_state in self.after_entering_signals: + self.after_entering_signals[self.reset_state].reset = 1 + for state, signal in self.before_entering_signals.items(): + encoded = self.encoding[state] + self.comb += signal.eq(~(self.state == encoded) & (self.next_state == encoded)) + + 
self._finalize_sync(self._lower_controls()) + + def _lower_controls(self): + return _LowerNext(self.next_state, self.encoding, self.state_aliases) + + def _finalize_sync(self, ls): + cases = dict((self.encoding[k], ls.on_statement(v)) for k, v in self.actions.items() if v) + self.comb += [ + self.next_state.eq(self.state), + Case(self.state, cases).makedefault(self.encoding[self.reset_state]) + ] + self.sync += self.state.eq(self.next_state) + for register, next_value_ce, next_value in ls.registers: + self.sync += If(next_value_ce, register.eq(next_value)) diff --git a/nmigen/compat/genlib/record.py b/nmigen/compat/genlib/record.py new file mode 100644 index 0000000..69ce0e2 --- /dev/null +++ b/nmigen/compat/genlib/record.py @@ -0,0 +1,195 @@ +from ...tracer import * +from ..fhdl.structure import * + +from functools import reduce +from operator import or_ + + +(DIR_NONE, DIR_S_TO_M, DIR_M_TO_S) = range(3) + +# Possible layout elements: +# 1. (name, size) +# 2. (name, size, direction) +# 3. (name, sublayout) +# size can be an int, or a (int, bool) tuple for signed numbers +# sublayout must be a list + + +def set_layout_parameters(layout, **layout_dict): + def resolve(p): + if isinstance(p, str): + try: + return layout_dict[p] + except KeyError: + return p + else: + return p + + r = [] + for f in layout: + if isinstance(f[1], (int, tuple, str)): # cases 1/2 + if len(f) == 3: + r.append((f[0], resolve(f[1]), f[2])) + else: + r.append((f[0], resolve(f[1]))) + elif isinstance(f[1], list): # case 3 + r.append((f[0], set_layout_parameters(f[1], **layout_dict))) + else: + raise TypeError + return r + + +def layout_len(layout): + r = 0 + for f in layout: + if isinstance(f[1], (int, tuple)): # cases 1/2 + if len(f) == 3: + fname, fsize, fdirection = f + else: + fname, fsize = f + elif isinstance(f[1], list): # case 3 + fname, fsublayout = f + fsize = layout_len(fsublayout) + else: + raise TypeError + if isinstance(fsize, tuple): + r += fsize[0] + else: + r += fsize + return r + + +def layout_get(layout, name): + for f in layout: + if f[0] == name: + return f + raise KeyError(name) + + +def layout_partial(layout, *elements): + r = [] + for path in elements: + path_s = path.split("/") + last = path_s.pop() + copy_ref = layout + insert_ref = r + for hop in path_s: + name, copy_ref = layout_get(copy_ref, hop) + try: + name, insert_ref = layout_get(insert_ref, hop) + except KeyError: + new_insert_ref = [] + insert_ref.append((hop, new_insert_ref)) + insert_ref = new_insert_ref + insert_ref.append(layout_get(copy_ref, last)) + return r + + +class Record: + def __init__(self, layout, name=None, **kwargs): + try: + self.name = get_var_name() + except NameNotFound: + self.name = "" + self.layout = layout + + if self.name: + prefix = self.name + "_" + else: + prefix = "" + for f in self.layout: + if isinstance(f[1], (int, tuple)): # cases 1/2 + if(len(f) == 3): + fname, fsize, fdirection = f + else: + fname, fsize = f + finst = Signal(fsize, name=prefix + fname, **kwargs) + elif isinstance(f[1], list): # case 3 + fname, fsublayout = f + finst = Record(fsublayout, prefix + fname, **kwargs) + else: + raise TypeError + setattr(self, fname, finst) + + def eq(self, other): + return [getattr(self, f[0]).eq(getattr(other, f[0])) + for f in self.layout if hasattr(other, f[0])] + + def iter_flat(self): + for f in self.layout: + e = getattr(self, f[0]) + if isinstance(e, Signal): + if len(f) == 3: + yield e, f[2] + else: + yield e, DIR_NONE + elif isinstance(e, Record): + yield from e.iter_flat() + else: + raise 
TypeError + + def flatten(self): + return [signal for signal, direction in self.iter_flat()] + + def raw_bits(self): + return Cat(*self.flatten()) + + def connect(self, *slaves, keep=None, omit=None): + if keep is None: + _keep = set([f[0] for f in self.layout]) + elif isinstance(keep, list): + _keep = set(keep) + else: + _keep = keep + if omit is None: + _omit = set() + elif isinstance(omit, list): + _omit = set(omit) + else: + _omit = omit + + _keep = _keep - _omit + + r = [] + for f in self.layout: + field = f[0] + self_e = getattr(self, field) + if isinstance(self_e, Signal): + if field in _keep: + direction = f[2] + if direction == DIR_M_TO_S: + r += [getattr(slave, field).eq(self_e) for slave in slaves] + elif direction == DIR_S_TO_M: + r.append(self_e.eq(reduce(or_, [getattr(slave, field) for slave in slaves]))) + else: + raise TypeError + else: + for slave in slaves: + r += self_e.connect(getattr(slave, field), keep=keep, omit=omit) + return r + + def connect_flat(self, *slaves): + r = [] + iter_slaves = [slave.iter_flat() for slave in slaves] + for m_signal, m_direction in self.iter_flat(): + if m_direction == DIR_M_TO_S: + for iter_slave in iter_slaves: + s_signal, s_direction = next(iter_slave) + assert(s_direction == DIR_M_TO_S) + r.append(s_signal.eq(m_signal)) + elif m_direction == DIR_S_TO_M: + s_signals = [] + for iter_slave in iter_slaves: + s_signal, s_direction = next(iter_slave) + assert(s_direction == DIR_S_TO_M) + s_signals.append(s_signal) + r.append(m_signal.eq(reduce(or_, s_signals))) + else: + raise TypeError + return r + + def __len__(self): + return layout_len(self.layout) + + def __repr__(self): + return "" diff --git a/nmigen/compat/genlib/resetsync.py b/nmigen/compat/genlib/resetsync.py new file mode 100644 index 0000000..66f0bae --- /dev/null +++ b/nmigen/compat/genlib/resetsync.py @@ -0,0 +1,16 @@ +from ..._utils import deprecated +from ...lib.cdc import ResetSynchronizer as NativeResetSynchronizer + + +__all__ = ["AsyncResetSynchronizer"] + + +@deprecated("instead of `migen.genlib.resetsync.AsyncResetSynchronizer`, " + "use `nmigen.lib.cdc.ResetSynchronizer`; note that ResetSynchronizer accepts " + "a clock domain name as an argument, not a clock domain object") +class CompatResetSynchronizer(NativeResetSynchronizer): + def __init__(self, cd, async_reset): + super().__init__(async_reset, domain=cd.name) + + +AsyncResetSynchronizer = CompatResetSynchronizer diff --git a/nmigen/compat/sim/__init__.py b/nmigen/compat/sim/__init__.py new file mode 100644 index 0000000..c6a89e3 --- /dev/null +++ b/nmigen/compat/sim/__init__.py @@ -0,0 +1,50 @@ +import functools +import inspect +from collections.abc import Iterable +from ...hdl.cd import ClockDomain +from ...back.pysim import * + + +__all__ = ["run_simulation", "passive"] + + +def run_simulation(fragment_or_module, generators, clocks={"sync": 10}, vcd_name=None, + special_overrides={}): + assert not special_overrides + + if hasattr(fragment_or_module, "get_fragment"): + fragment = fragment_or_module.get_fragment() + else: + fragment = fragment_or_module + + if not isinstance(generators, dict): + generators = {"sync": generators} + fragment.domains += ClockDomain("sync") + + sim = Simulator(fragment) + for domain, period in clocks.items(): + sim.add_clock(period / 1e9, domain=domain) + for domain, processes in generators.items(): + def wrap(process): + def wrapper(): + yield from process + return wrapper + if isinstance(processes, Iterable) and not inspect.isgenerator(processes): + for process in processes: + 
sim.add_sync_process(wrap(process), domain=domain) + else: + sim.add_sync_process(wrap(processes), domain=domain) + + if vcd_name is not None: + with sim.write_vcd(vcd_name): + sim.run() + else: + sim.run() + + +def passive(generator): + @functools.wraps(generator) + def wrapper(*args, **kwargs): + yield Passive() + yield from generator(*args, **kwargs) + return wrapper diff --git a/nmigen/hdl/__init__.py b/nmigen/hdl/__init__.py new file mode 100644 index 0000000..770fc25 --- /dev/null +++ b/nmigen/hdl/__init__.py @@ -0,0 +1,20 @@ +from .ast import Shape, unsigned, signed +from .ast import Value, Const, C, Mux, Cat, Repl, Array, Signal, ClockSignal, ResetSignal +from .dsl import Module +from .cd import ClockDomain +from .ir import Elaboratable, Fragment, Instance +from .mem import Memory +from .rec import Record +from .xfrm import DomainRenamer, ResetInserter, EnableInserter + + +__all__ = [ + "Shape", "unsigned", "signed", + "Value", "Const", "C", "Mux", "Cat", "Repl", "Array", "Signal", "ClockSignal", "ResetSignal", + "Module", + "ClockDomain", + "Elaboratable", "Fragment", "Instance", + "Memory", + "Record", + "DomainRenamer", "ResetInserter", "EnableInserter", +] diff --git a/nmigen/hdl/__pycache__/__init__.cpython-37.pyc b/nmigen/hdl/__pycache__/__init__.cpython-37.pyc new file mode 100644 index 0000000..efb4fc0 Binary files /dev/null and b/nmigen/hdl/__pycache__/__init__.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/ast.cpython-37.pyc b/nmigen/hdl/__pycache__/ast.cpython-37.pyc new file mode 100644 index 0000000..0202ac0 Binary files /dev/null and b/nmigen/hdl/__pycache__/ast.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/cd.cpython-37.pyc b/nmigen/hdl/__pycache__/cd.cpython-37.pyc new file mode 100644 index 0000000..25cb719 Binary files /dev/null and b/nmigen/hdl/__pycache__/cd.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/dsl.cpython-37.pyc b/nmigen/hdl/__pycache__/dsl.cpython-37.pyc new file mode 100644 index 0000000..76ea74d Binary files /dev/null and b/nmigen/hdl/__pycache__/dsl.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/ir.cpython-37.pyc b/nmigen/hdl/__pycache__/ir.cpython-37.pyc new file mode 100644 index 0000000..b6c60ee Binary files /dev/null and b/nmigen/hdl/__pycache__/ir.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/mem.cpython-37.pyc b/nmigen/hdl/__pycache__/mem.cpython-37.pyc new file mode 100644 index 0000000..d5acc5c Binary files /dev/null and b/nmigen/hdl/__pycache__/mem.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/rec.cpython-37.pyc b/nmigen/hdl/__pycache__/rec.cpython-37.pyc new file mode 100644 index 0000000..471e02f Binary files /dev/null and b/nmigen/hdl/__pycache__/rec.cpython-37.pyc differ diff --git a/nmigen/hdl/__pycache__/xfrm.cpython-37.pyc b/nmigen/hdl/__pycache__/xfrm.cpython-37.pyc new file mode 100644 index 0000000..4b96e36 Binary files /dev/null and b/nmigen/hdl/__pycache__/xfrm.cpython-37.pyc differ diff --git a/nmigen/hdl/ast.py b/nmigen/hdl/ast.py new file mode 100644 index 0000000..af1d831 --- /dev/null +++ b/nmigen/hdl/ast.py @@ -0,0 +1,1601 @@ +from abc import ABCMeta, abstractmethod +import traceback +import warnings +import typing +from collections import OrderedDict +from collections.abc import Iterable, MutableMapping, MutableSet, MutableSequence +from enum import Enum + +from .. 
import tracer +from .._utils import * +from .._unused import * + + +__all__ = [ + "Shape", "signed", "unsigned", + "Value", "Const", "C", "AnyConst", "AnySeq", "Operator", "Mux", "Part", "Slice", "Cat", "Repl", + "Array", "ArrayProxy", + "Signal", "ClockSignal", "ResetSignal", + "UserValue", + "Sample", "Past", "Stable", "Rose", "Fell", "Initial", + "Statement", "Switch", + "Property", "Assign", "Assert", "Assume", "Cover", + "ValueKey", "ValueDict", "ValueSet", "SignalKey", "SignalDict", "SignalSet", +] + + +class DUID: + """Deterministic Unique IDentifier.""" + __next_uid = 0 + def __init__(self): + self.duid = DUID.__next_uid + DUID.__next_uid += 1 + + +class Shape(typing.NamedTuple): + """Bit width and signedness of a value. + + A ``Shape`` can be constructed using: + * explicit bit width and signedness; + * aliases :func:`signed` and :func:`unsigned`; + * casting from a variety of objects. + + A ``Shape`` can be cast from: + * an integer, where the integer specifies the bit width; + * a range, where the result is wide enough to represent any element of the range, and is + signed if any element of the range is signed; + * an :class:`Enum` with all integer members or :class:`IntEnum`, where the result is wide + enough to represent any member of the enumeration, and is signed if any member of + the enumeration is signed. + + Parameters + ---------- + width : int + The number of bits in the representation, including the sign bit (if any). + signed : bool + If ``False``, the value is unsigned. If ``True``, the value is signed two's complement. + """ + width: int = 1 + signed: bool = False + + @staticmethod + def cast(obj, *, src_loc_at=0): + if isinstance(obj, Shape): + return obj + if isinstance(obj, int): + return Shape(obj) + if isinstance(obj, tuple): + width, signed = obj + warnings.warn("instead of `{tuple}`, use `{constructor}({width})`" + .format(constructor="signed" if signed else "unsigned", width=width, + tuple=obj), + DeprecationWarning, stacklevel=2 + src_loc_at) + return Shape(width, signed) + if isinstance(obj, range): + if len(obj) == 0: + return Shape(0, obj.start < 0) + signed = obj.start < 0 or (obj.stop - obj.step) < 0 + width = max(bits_for(obj.start, signed), + bits_for(obj.stop - obj.step, signed)) + return Shape(width, signed) + if isinstance(obj, type) and issubclass(obj, Enum): + min_value = min(member.value for member in obj) + max_value = max(member.value for member in obj) + if not isinstance(min_value, int) or not isinstance(max_value, int): + raise TypeError("Only enumerations with integer values can be used " + "as value shapes") + signed = min_value < 0 or max_value < 0 + width = max(bits_for(min_value, signed), bits_for(max_value, signed)) + return Shape(width, signed) + raise TypeError("Object {!r} cannot be used as value shape".format(obj)) + + +# TODO: use dataclasses instead of this hack +def _Shape___init__(self, width=1, signed=False): + if not isinstance(width, int) or width < 0: + raise TypeError("Width must be a non-negative integer, not {!r}" + .format(width)) +Shape.__init__ = _Shape___init__ + + +def unsigned(width): + """Shorthand for ``Shape(width, signed=False)``.""" + return Shape(width, signed=False) + + +def signed(width): + """Shorthand for ``Shape(width, signed=True)``.""" + return Shape(width, signed=True) + + +class Value(metaclass=ABCMeta): + @staticmethod + def cast(obj): + """Converts ``obj`` to an nMigen value. + + Booleans and integers are wrapped into a :class:`Const`. 
Enumerations whose members are + all integers are converted to a :class:`Const` with a shape that fits every member. + """ + if isinstance(obj, Value): + return obj + if isinstance(obj, int): + return Const(obj) + if isinstance(obj, Enum): + return Const(obj.value, Shape.cast(type(obj))) + raise TypeError("Object {!r} cannot be converted to an nMigen value".format(obj)) + + def __init__(self, *, src_loc_at=0): + super().__init__() + self.src_loc = tracer.get_src_loc(1 + src_loc_at) + + def __bool__(self): + raise TypeError("Attempted to convert nMigen value to boolean") + + def __invert__(self): + return Operator("~", [self]) + def __neg__(self): + return Operator("-", [self]) + + def __add__(self, other): + return Operator("+", [self, other]) + def __radd__(self, other): + return Operator("+", [other, self]) + def __sub__(self, other): + return Operator("-", [self, other]) + def __rsub__(self, other): + return Operator("-", [other, self]) + + def __mul__(self, other): + return Operator("*", [self, other]) + def __rmul__(self, other): + return Operator("*", [other, self]) + + def __check_divisor(self): + width, signed = self.shape() + if signed: + # Python's division semantics and Verilog's division semantics differ for negative + # divisors (Python uses div/mod, Verilog uses quo/rem); for now, avoid the issue + # completely by prohibiting such division operations. + raise NotImplementedError("Division by a signed value is not supported") + def __mod__(self, other): + other = Value.cast(other) + other.__check_divisor() + return Operator("%", [self, other]) + def __rmod__(self, other): + self.__check_divisor() + return Operator("%", [other, self]) + def __floordiv__(self, other): + other = Value.cast(other) + other.__check_divisor() + return Operator("//", [self, other]) + def __rfloordiv__(self, other): + self.__check_divisor() + return Operator("//", [other, self]) + + def __check_shamt(self): + width, signed = self.shape() + if signed: + # Neither Python nor HDLs implement shifts by negative values; prohibit any shifts + # by a signed value to make sure the shift amount can always be interpreted as + # an unsigned value. 
+ raise NotImplementedError("Shift by a signed value is not supported") + def __lshift__(self, other): + other = Value.cast(other) + other.__check_shamt() + return Operator("<<", [self, other]) + def __rlshift__(self, other): + self.__check_shamt() + return Operator("<<", [other, self]) + def __rshift__(self, other): + other = Value.cast(other) + other.__check_shamt() + return Operator(">>", [self, other]) + def __rrshift__(self, other): + self.__check_shamt() + return Operator(">>", [other, self]) + + def __and__(self, other): + return Operator("&", [self, other]) + def __rand__(self, other): + return Operator("&", [other, self]) + def __xor__(self, other): + return Operator("^", [self, other]) + def __rxor__(self, other): + return Operator("^", [other, self]) + def __or__(self, other): + return Operator("|", [self, other]) + def __ror__(self, other): + return Operator("|", [other, self]) + + def __eq__(self, other): + return Operator("==", [self, other]) + def __ne__(self, other): + return Operator("!=", [self, other]) + def __lt__(self, other): + return Operator("<", [self, other]) + def __le__(self, other): + return Operator("<=", [self, other]) + def __gt__(self, other): + return Operator(">", [self, other]) + def __ge__(self, other): + return Operator(">=", [self, other]) + + def __len__(self): + return self.shape().width + + def __getitem__(self, key): + n = len(self) + if isinstance(key, int): + if key not in range(-n, n): + raise IndexError("Cannot index {} bits into {}-bit value".format(key, n)) + if key < 0: + key += n + return Slice(self, key, key + 1) + elif isinstance(key, slice): + start, stop, step = key.indices(n) + if step != 1: + return Cat(self[i] for i in range(start, stop, step)) + return Slice(self, start, stop) + else: + raise TypeError("Cannot index value with {}".format(repr(key))) + + def as_unsigned(self): + """Conversion to unsigned. + + Returns + ------- + Value, out + This ``Value`` reinterpreted as a unsigned integer. + """ + return Operator("u", [self]) + + def as_signed(self): + """Conversion to signed. + + Returns + ------- + Value, out + This ``Value`` reinterpreted as a signed integer. + """ + return Operator("s", [self]) + + def bool(self): + """Conversion to boolean. + + Returns + ------- + Value, out + ``1`` if any bits are set, ``0`` otherwise. + """ + return Operator("b", [self]) + + def any(self): + """Check if any bits are ``1``. + + Returns + ------- + Value, out + ``1`` if any bits are set, ``0`` otherwise. + """ + return Operator("r|", [self]) + + def all(self): + """Check if all bits are ``1``. + + Returns + ------- + Value, out + ``1`` if all bits are set, ``0`` otherwise. + """ + return Operator("r&", [self]) + + def xor(self): + """Compute pairwise exclusive-or of every bit. + + Returns + ------- + Value, out + ``1`` if an odd number of bits are set, ``0`` if an even number of bits are set. + """ + return Operator("r^", [self]) + + def implies(premise, conclusion): + """Implication. + + Returns + ------- + Value, out + ``0`` if ``premise`` is true and ``conclusion`` is not, ``1`` otherwise. + """ + return ~premise | conclusion + + def bit_select(self, offset, width): + """Part-select with bit granularity. + + Selects a constant width but variable offset part of a ``Value``, such that successive + parts overlap by all but 1 bit. + + Parameters + ---------- + offset : Value, in + Index of first selected bit. + width : int + Number of selected bits. 
+ + Returns + ------- + Part, out + Selected part of the ``Value`` + """ + offset = Value.cast(offset) + if type(offset) is Const and isinstance(width, int): + return self[offset.value:offset.value + width] + return Part(self, offset, width, stride=1, src_loc_at=1) + + def word_select(self, offset, width): + """Part-select with word granularity. + + Selects a constant width but variable offset part of a ``Value``, such that successive + parts do not overlap. + + Parameters + ---------- + offset : Value, in + Index of first selected word. + width : int + Number of selected bits. + + Returns + ------- + Part, out + Selected part of the ``Value`` + """ + offset = Value.cast(offset) + if type(offset) is Const and isinstance(width, int): + return self[offset.value * width:(offset.value + 1) * width] + return Part(self, offset, width, stride=width, src_loc_at=1) + + def matches(self, *patterns): + """Pattern matching. + + Matches against a set of patterns, which may be integers or bit strings, recognizing + the same grammar as ``Case()``. + + Parameters + ---------- + patterns : int or str + Patterns to match against. + + Returns + ------- + Value, out + ``1`` if any pattern matches the value, ``0`` otherwise. + """ + matches = [] + for pattern in patterns: + if not isinstance(pattern, (int, str, Enum)): + raise SyntaxError("Match pattern must be an integer, a string, or an enumeration, " + "not {!r}" + .format(pattern)) + if isinstance(pattern, str) and any(bit not in "01- \t" for bit in pattern): + raise SyntaxError("Match pattern '{}' must consist of 0, 1, and - (don't care) " + "bits, and may include whitespace" + .format(pattern)) + if (isinstance(pattern, str) and + len("".join(pattern.split())) != len(self)): + raise SyntaxError("Match pattern '{}' must have the same width as match value " + "(which is {})" + .format(pattern, len(self))) + if isinstance(pattern, int) and bits_for(pattern) > len(self): + warnings.warn("Match pattern '{:b}' is wider than match value " + "(which has width {}); comparison will never be true" + .format(pattern, len(self)), + SyntaxWarning, stacklevel=3) + continue + if isinstance(pattern, str): + pattern = "".join(pattern.split()) # remove whitespace + mask = int(pattern.replace("0", "1").replace("-", "0"), 2) + pattern = int(pattern.replace("-", "0"), 2) + matches.append((self & mask) == pattern) + elif isinstance(pattern, int): + matches.append(self == pattern) + elif isinstance(pattern, Enum): + matches.append(self == pattern.value) + else: + assert False + if not matches: + return Const(0) + elif len(matches) == 1: + return matches[0] + else: + return Cat(*matches).any() + + def eq(self, value): + """Assignment. + + Parameters + ---------- + value : Value, in + Value to be assigned. + + Returns + ------- + Assign + Assignment statement that can be used in combinatorial or synchronous context. + """ + return Assign(self, value, src_loc_at=1) + + @abstractmethod + def shape(self): + """Bit width and signedness of a value. + + Returns + ------- + Shape + See :class:`Shape`. 
+ + Examples + -------- + >>> Signal(8).shape() + Shape(width=8, signed=False) + >>> Const(0xaa).shape() + Shape(width=8, signed=False) + """ + pass # :nocov: + + def _lhs_signals(self): + raise TypeError("Value {!r} cannot be used in assignments".format(self)) + + @abstractmethod + def _rhs_signals(self): + pass # :nocov: + + def _as_const(self): + raise TypeError("Value {!r} cannot be evaluated as constant".format(self)) + + __hash__ = None + + +@final +class Const(Value): + """A constant, literal integer value. + + Parameters + ---------- + value : int + shape : int or tuple or None + Either an integer ``width`` or a tuple ``(width, signed)`` specifying the number of bits + in this constant and whether it is signed (can represent negative values). + ``shape`` defaults to the minimum possible width and signedness of ``value``. + + Attributes + ---------- + width : int + signed : bool + """ + src_loc = None + + @staticmethod + def normalize(value, shape): + width, signed = shape + mask = (1 << width) - 1 + value &= mask + if signed and value >> (width - 1): + value |= ~mask + return value + + def __init__(self, value, shape=None, *, src_loc_at=0): + # We deliberately do not call Value.__init__ here. + self.value = int(value) + if shape is None: + shape = Shape(bits_for(self.value), signed=self.value < 0) + elif isinstance(shape, int): + shape = Shape(shape, signed=self.value < 0) + else: + shape = Shape.cast(shape, src_loc_at=1 + src_loc_at) + self.width, self.signed = shape + self.value = self.normalize(self.value, shape) + + def shape(self): + return Shape(self.width, self.signed) + + def _rhs_signals(self): + return ValueSet() + + def _as_const(self): + return self.value + + def __repr__(self): + return "(const {}'{}d{})".format(self.width, "s" if self.signed else "", self.value) + + +C = Const # shorthand + + +class AnyValue(Value, DUID): + def __init__(self, shape, *, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + self.width, self.signed = Shape.cast(shape, src_loc_at=1 + src_loc_at) + if not isinstance(self.width, int) or self.width < 0: + raise TypeError("Width must be a non-negative integer, not {!r}" + .format(self.width)) + + def shape(self): + return Shape(self.width, self.signed) + + def _rhs_signals(self): + return ValueSet() + + +@final +class AnyConst(AnyValue): + def __repr__(self): + return "(anyconst {}'{})".format(self.width, "s" if self.signed else "") + + +@final +class AnySeq(AnyValue): + def __repr__(self): + return "(anyseq {}'{})".format(self.width, "s" if self.signed else "") + + +@final +class Operator(Value): + def __init__(self, operator, operands, *, src_loc_at=0): + super().__init__(src_loc_at=1 + src_loc_at) + self.operator = operator + self.operands = [Value.cast(op) for op in operands] + + def shape(self): + def _bitwise_binary_shape(a_shape, b_shape): + a_bits, a_sign = a_shape + b_bits, b_sign = b_shape + if not a_sign and not b_sign: + # both operands unsigned + return Shape(max(a_bits, b_bits), False) + elif a_sign and b_sign: + # both operands signed + return Shape(max(a_bits, b_bits), True) + elif not a_sign and b_sign: + # first operand unsigned (add sign bit), second operand signed + return Shape(max(a_bits + 1, b_bits), True) + else: + # first signed, second operand unsigned (add sign bit) + return Shape(max(a_bits, b_bits + 1), True) + + op_shapes = list(map(lambda x: x.shape(), self.operands)) + if len(op_shapes) == 1: + (a_width, a_signed), = op_shapes + if self.operator in ("+", "~"): + return Shape(a_width, a_signed) + if 
self.operator == "-": + return Shape(a_width + 1, True) + if self.operator in ("b", "r|", "r&", "r^"): + return Shape(1, False) + if self.operator == "u": + return Shape(a_width, False) + if self.operator == "s": + return Shape(a_width, True) + elif len(op_shapes) == 2: + (a_width, a_signed), (b_width, b_signed) = op_shapes + if self.operator in ("+", "-"): + width, signed = _bitwise_binary_shape(*op_shapes) + return Shape(width + 1, signed) + if self.operator == "*": + return Shape(a_width + b_width, a_signed or b_signed) + if self.operator in ("//", "%"): + assert not b_signed + return Shape(a_width, a_signed) + if self.operator in ("<", "<=", "==", "!=", ">", ">="): + return Shape(1, False) + if self.operator in ("&", "^", "|"): + return _bitwise_binary_shape(*op_shapes) + if self.operator == "<<": + if b_signed: + extra = 2 ** (b_width - 1) - 1 + else: + extra = 2 ** (b_width) - 1 + return Shape(a_width + extra, a_signed) + if self.operator == ">>": + if b_signed: + extra = 2 ** (b_width - 1) + else: + extra = 0 + return Shape(a_width + extra, a_signed) + elif len(op_shapes) == 3: + if self.operator == "m": + s_shape, a_shape, b_shape = op_shapes + return _bitwise_binary_shape(a_shape, b_shape) + raise NotImplementedError("Operator {}/{} not implemented" + .format(self.operator, len(op_shapes))) # :nocov: + + def _rhs_signals(self): + return union(op._rhs_signals() for op in self.operands) + + def __repr__(self): + return "({} {})".format(self.operator, " ".join(map(repr, self.operands))) + + +def Mux(sel, val1, val0): + """Choose between two values. + + Parameters + ---------- + sel : Value, in + Selector. + val1 : Value, in + val0 : Value, in + Input values. + + Returns + ------- + Value, out + Output ``Value``. If ``sel`` is asserted, the Mux returns ``val1``, else ``val0``. 
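+
+    Examples
+    --------
+    A minimal illustrative sketch (not part of the upstream docstring); ``m`` is
+    assumed to be a ``Module`` and the signal names are arbitrary::
+
+        a   = Signal(8)
+        b   = Signal(8)
+        sel = Signal()
+        y   = Signal(8)
+        # y follows a while sel is asserted, and b otherwise
+        m.d.comb += y.eq(Mux(sel, a, b))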
+ """ + sel = Value.cast(sel) + if len(sel) != 1: + sel = sel.bool() + return Operator("m", [sel, val1, val0]) + + +@final +class Slice(Value): + def __init__(self, value, start, stop, *, src_loc_at=0): + if not isinstance(start, int): + raise TypeError("Slice start must be an integer, not {!r}".format(start)) + if not isinstance(stop, int): + raise TypeError("Slice stop must be an integer, not {!r}".format(stop)) + + n = len(value) + if start not in range(-(n+1), n+1): + raise IndexError("Cannot start slice {} bits into {}-bit value".format(start, n)) + if start < 0: + start += n + if stop not in range(-(n+1), n+1): + raise IndexError("Cannot stop slice {} bits into {}-bit value".format(stop, n)) + if stop < 0: + stop += n + if start > stop: + raise IndexError("Slice start {} must be less than slice stop {}".format(start, stop)) + + super().__init__(src_loc_at=src_loc_at) + self.value = Value.cast(value) + self.start = start + self.stop = stop + + def shape(self): + return Shape(self.stop - self.start) + + def _lhs_signals(self): + return self.value._lhs_signals() + + def _rhs_signals(self): + return self.value._rhs_signals() + + def __repr__(self): + return "(slice {} {}:{})".format(repr(self.value), self.start, self.stop) + + +@final +class Part(Value): + def __init__(self, value, offset, width, stride=1, *, src_loc_at=0): + if not isinstance(width, int) or width < 0: + raise TypeError("Part width must be a non-negative integer, not {!r}".format(width)) + if not isinstance(stride, int) or stride <= 0: + raise TypeError("Part stride must be a positive integer, not {!r}".format(stride)) + + super().__init__(src_loc_at=src_loc_at) + self.value = value + self.offset = Value.cast(offset) + self.width = width + self.stride = stride + + def shape(self): + return Shape(self.width) + + def _lhs_signals(self): + return self.value._lhs_signals() + + def _rhs_signals(self): + return self.value._rhs_signals() | self.offset._rhs_signals() + + def __repr__(self): + return "(part {} {} {} {})".format(repr(self.value), repr(self.offset), + self.width, self.stride) + + +@final +class Cat(Value): + """Concatenate values. + + Form a compound ``Value`` from several smaller ones by concatenation. + The first argument occupies the lower bits of the result. + The return value can be used on either side of an assignment, that + is, the concatenated value can be used as an argument on the RHS or + as a target on the LHS. If it is used on the LHS, it must solely + consist of ``Signal`` s, slices of ``Signal`` s, and other concatenations + meeting these properties. The bit length of the return value is the sum of + the bit lengths of the arguments:: + + len(Cat(args)) == sum(len(arg) for arg in args) + + Parameters + ---------- + *args : Values or iterables of Values, inout + ``Value`` s to be concatenated. + + Returns + ------- + Value, inout + Resulting ``Value`` obtained by concatentation. 
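+
+    Examples
+    --------
+    An illustrative sketch (not part of the upstream docstring), assuming ``m`` is
+    a ``Module``; the first argument occupies the least significant bits::
+
+        lo   = Signal(4)
+        hi   = Signal(4)
+        word = Signal(8)
+        # word[0:4] is driven by lo, word[4:8] by hi
+        m.d.comb += word.eq(Cat(lo, hi))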
+ """ + def __init__(self, *args, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + self.parts = [Value.cast(v) for v in flatten(args)] + + def shape(self): + return Shape(sum(len(part) for part in self.parts)) + + def _lhs_signals(self): + return union((part._lhs_signals() for part in self.parts), start=ValueSet()) + + def _rhs_signals(self): + return union((part._rhs_signals() for part in self.parts), start=ValueSet()) + + def _as_const(self): + value = 0 + for part in reversed(self.parts): + value <<= len(part) + value |= part._as_const() + return value + + def __repr__(self): + return "(cat {})".format(" ".join(map(repr, self.parts))) + + +@final +class Repl(Value): + """Replicate a value + + An input value is replicated (repeated) several times + to be used on the RHS of assignments:: + + len(Repl(s, n)) == len(s) * n + + Parameters + ---------- + value : Value, in + Input value to be replicated. + count : int + Number of replications. + + Returns + ------- + Repl, out + Replicated value. + """ + def __init__(self, value, count, *, src_loc_at=0): + if not isinstance(count, int) or count < 0: + raise TypeError("Replication count must be a non-negative integer, not {!r}" + .format(count)) + + super().__init__(src_loc_at=src_loc_at) + self.value = Value.cast(value) + self.count = count + + def shape(self): + return Shape(len(self.value) * self.count) + + def _rhs_signals(self): + return self.value._rhs_signals() + + def __repr__(self): + return "(repl {!r} {})".format(self.value, self.count) + + +# @final +class Signal(Value, DUID): + """A varying integer value. + + Parameters + ---------- + shape : ``Shape``-castable object or None + Specification for the number of bits in this ``Signal`` and its signedness (whether it + can represent negative values). See ``Shape.cast`` for details. + If not specified, ``shape`` defaults to 1-bit and non-signed. + name : str + Name hint for this signal. If ``None`` (default) the name is inferred from the variable + name this ``Signal`` is assigned to. + reset : int or integral Enum + Reset (synchronous) or default (combinatorial) value. + When this ``Signal`` is assigned to in synchronous context and the corresponding clock + domain is reset, the ``Signal`` assumes the given value. When this ``Signal`` is unassigned + in combinatorial context (due to conditional assignments not being taken), the ``Signal`` + assumes its ``reset`` value. Defaults to 0. + reset_less : bool + If ``True``, do not generate reset logic for this ``Signal`` in synchronous statements. + The ``reset`` value is only used as a combinatorial default or as the initial value. + Defaults to ``False``. + attrs : dict + Dictionary of synthesis attributes. + decoder : function or Enum + A function converting integer signal values to human-readable strings (e.g. FSM state + names). If an ``Enum`` subclass is passed, it is concisely decoded using format string + ``"{0.name:}/{0.value:}"``, or a number if the signal value is not a member of + the enumeration. 
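+
+    An illustrative sketch of typical constructions (added for this port, not part
+    of the upstream docstring; ``MyEnum`` is a hypothetical ``Enum`` subclass)::
+
+        counter = Signal(8)                      # 8-bit unsigned, reset value 0
+        offset  = Signal(signed(16), reset=-1)   # 16-bit signed, resets to -1
+        state   = Signal(MyEnum)                 # shape and decoder taken from MyEnum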
+ + Attributes + ---------- + width : int + signed : bool + name : str + reset : int + reset_less : bool + attrs : dict + decoder : function + """ + + def __init__(self, shape=None, *, name=None, reset=0, reset_less=False, + attrs=None, decoder=None, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + + if name is not None and not isinstance(name, str): + raise TypeError("Name must be a string, not {!r}".format(name)) + self.name = name or tracer.get_var_name(depth=2 + src_loc_at, default="$signal") + + if shape is None: + shape = unsigned(1) + self.width, self.signed = Shape.cast(shape, src_loc_at=1 + src_loc_at) + + if isinstance(reset, Enum): + reset = reset.value + if not isinstance(reset, int): + raise TypeError("Reset value has to be an int or an integral Enum") + + reset_width = bits_for(reset, self.signed) + if reset != 0 and reset_width > self.width: + warnings.warn("Reset value {!r} requires {} bits to represent, but the signal " + "only has {} bits" + .format(reset, reset_width, self.width), + SyntaxWarning, stacklevel=2 + src_loc_at) + + self.reset = reset + self.reset_less = bool(reset_less) + + self.attrs = OrderedDict(() if attrs is None else attrs) + + if decoder is None and isinstance(shape, type) and issubclass(shape, Enum): + decoder = shape + if isinstance(decoder, type) and issubclass(decoder, Enum): + def enum_decoder(value): + try: + return "{0.name:}/{0.value:}".format(decoder(value)) + except ValueError: + return str(value) + self.decoder = enum_decoder + else: + self.decoder = decoder + + # Not a @classmethod because nmigen.compat requires it. + @staticmethod + def like(other, *, name=None, name_suffix=None, src_loc_at=0, **kwargs): + """Create Signal based on another. + + Parameters + ---------- + other : Value + Object to base this Signal on. + """ + if name is not None: + new_name = str(name) + elif name_suffix is not None: + new_name = other.name + str(name_suffix) + else: + new_name = tracer.get_var_name(depth=2 + src_loc_at, default="$like") + kw = dict(shape=Value.cast(other).shape(), name=new_name) + if isinstance(other, Signal): + kw.update(reset=other.reset, reset_less=other.reset_less, + attrs=other.attrs, decoder=other.decoder) + kw.update(kwargs) + return Signal(**kw, src_loc_at=1 + src_loc_at) + + def shape(self): + return Shape(self.width, self.signed) + + def _lhs_signals(self): + return ValueSet((self,)) + + def _rhs_signals(self): + return ValueSet((self,)) + + def __repr__(self): + return "(sig {})".format(self.name) + + +@final +class ClockSignal(Value): + """Clock signal for a clock domain. + + Any ``ClockSignal`` is equivalent to ``cd.clk`` for a clock domain with the corresponding name. + All of these signals ultimately refer to the same signal, but they can be manipulated + independently of the clock domain, even before the clock domain is created. + + Parameters + ---------- + domain : str + Clock domain to obtain a clock signal for. Defaults to ``"sync"``. 
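+
+    Examples
+    --------
+    An illustrative sketch (not part of the upstream docstring), assuming ``m`` is a
+    ``Module``: forwarding the ``sync`` domain clock to an output pin::
+
+        clk_out = Signal()
+        m.d.comb += clk_out.eq(ClockSignal("sync"))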
+ """ + def __init__(self, domain="sync", *, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + if not isinstance(domain, str): + raise TypeError("Clock domain name must be a string, not {!r}".format(domain)) + if domain == "comb": + raise ValueError("Domain '{}' does not have a clock".format(domain)) + self.domain = domain + + def shape(self): + return Shape(1) + + def _lhs_signals(self): + return ValueSet((self,)) + + def _rhs_signals(self): + raise NotImplementedError("ClockSignal must be lowered to a concrete signal") # :nocov: + + def __repr__(self): + return "(clk {})".format(self.domain) + + +@final +class ResetSignal(Value): + """Reset signal for a clock domain. + + Any ``ResetSignal`` is equivalent to ``cd.rst`` for a clock domain with the corresponding name. + All of these signals ultimately refer to the same signal, but they can be manipulated + independently of the clock domain, even before the clock domain is created. + + Parameters + ---------- + domain : str + Clock domain to obtain a reset signal for. Defaults to ``"sync"``. + allow_reset_less : bool + If the clock domain is reset-less, act as a constant ``0`` instead of reporting an error. + """ + def __init__(self, domain="sync", allow_reset_less=False, *, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + if not isinstance(domain, str): + raise TypeError("Clock domain name must be a string, not {!r}".format(domain)) + if domain == "comb": + raise ValueError("Domain '{}' does not have a reset".format(domain)) + self.domain = domain + self.allow_reset_less = allow_reset_less + + def shape(self): + return Shape(1) + + def _lhs_signals(self): + return ValueSet((self,)) + + def _rhs_signals(self): + raise NotImplementedError("ResetSignal must be lowered to a concrete signal") # :nocov: + + def __repr__(self): + return "(rst {})".format(self.domain) + + +class Array(MutableSequence): + """Addressable multiplexer. + + An array is similar to a ``list`` that can also be indexed by ``Value``s; indexing by an integer or a slice works the same as for Python lists, but indexing by a ``Value`` results + in a proxy. + + The array proxy can be used as an ordinary ``Value``, i.e. participate in calculations and + assignments, provided that all elements of the array are values. The array proxy also supports + attribute access and further indexing, each returning another array proxy; this means that + the results of indexing into arrays, arrays of records, and arrays of arrays can all + be used as first-class values. + + It is an error to change an array or any of its elements after an array proxy was created. + Changing the array directly will raise an exception. However, it is not possible to detect + the elements being modified; if an element's attribute or element is modified after the proxy + for it has been created, the proxy will refer to stale data. 
+ + Examples + -------- + + Simple array:: + + gpios = Array(Signal() for _ in range(10)) + with m.If(bus.we): + m.d.sync += gpios[bus.addr].eq(bus.w_data) + with m.Else(): + m.d.sync += bus.r_data.eq(gpios[bus.addr]) + + Multidimensional array:: + + mult = Array(Array(x * y for y in range(10)) for x in range(10)) + a = Signal.range(10) + b = Signal.range(10) + r = Signal(8) + m.d.comb += r.eq(mult[a][b]) + + Array of records:: + + layout = [ + ("r_data", 16), + ("r_en", 1), + ] + buses = Array(Record(layout) for busno in range(4)) + master = Record(layout) + m.d.comb += [ + buses[sel].r_en.eq(master.r_en), + master.r_data.eq(buses[sel].r_data), + ] + """ + def __init__(self, iterable=()): + self._inner = list(iterable) + self._proxy_at = None + self._mutable = True + + def __getitem__(self, index): + if isinstance(index, Value): + if self._mutable: + self._proxy_at = tracer.get_src_loc() + self._mutable = False + return ArrayProxy(self, index) + else: + return self._inner[index] + + def __len__(self): + return len(self._inner) + + def _check_mutability(self): + if not self._mutable: + raise ValueError("Array can no longer be mutated after it was indexed with a value " + "at {}:{}".format(*self._proxy_at)) + + def __setitem__(self, index, value): + self._check_mutability() + self._inner[index] = value + + def __delitem__(self, index): + self._check_mutability() + del self._inner[index] + + def insert(self, index, value): + self._check_mutability() + self._inner.insert(index, value) + + def __repr__(self): + return "(array{} [{}])".format(" mutable" if self._mutable else "", + ", ".join(map(repr, self._inner))) + + +@final +class ArrayProxy(Value): + def __init__(self, elems, index, *, src_loc_at=0): + super().__init__(src_loc_at=1 + src_loc_at) + self.elems = elems + self.index = Value.cast(index) + + def __getattr__(self, attr): + return ArrayProxy([getattr(elem, attr) for elem in self.elems], self.index) + + def __getitem__(self, index): + return ArrayProxy([ elem[index] for elem in self.elems], self.index) + + def _iter_as_values(self): + return (Value.cast(elem) for elem in self.elems) + + def shape(self): + width, signed = 0, False + for elem_width, elem_signed in (elem.shape() for elem in self._iter_as_values()): + width = max(width, elem_width + elem_signed) + signed = max(signed, elem_signed) + return Shape(width, signed) + + def _lhs_signals(self): + signals = union((elem._lhs_signals() for elem in self._iter_as_values()), start=ValueSet()) + return signals + + def _rhs_signals(self): + signals = union((elem._rhs_signals() for elem in self._iter_as_values()), start=ValueSet()) + return self.index._rhs_signals() | signals + + def __repr__(self): + return "(proxy (array [{}]) {!r})".format(", ".join(map(repr, self.elems)), self.index) + + +class UserValue(Value): + """Value with custom lowering. + + A ``UserValue`` is a value whose precise representation does not have to be immediately known, + which is useful in certain metaprogramming scenarios. Instead of providing fixed semantics + upfront, it is kept abstract for as long as possible, only being lowered to a concrete nMigen + value when required. + + Note that the ``lower`` method will only be called once; this is necessary to ensure that + nMigen's view of representation of all values stays internally consistent. If the class + deriving from ``UserValue`` is mutable, then it must ensure that after ``lower`` is called, + it is not mutated in a way that changes its representation. 
+ + The following is an incomplete list of actions that, when applied to an ``UserValue`` directly + or indirectly, will cause it to be lowered, provided as an illustrative reference: + * Querying the shape using ``.shape()`` or ``len()``; + * Creating a similarly shaped signal using ``Signal.like``; + * Indexing or iterating through individual bits; + * Adding an assignment to the value to a ``Module`` using ``m.d. +=``. + """ + def __init__(self, *, src_loc_at=0): + super().__init__(src_loc_at=1 + src_loc_at) + self.__lowered = None + + @abstractmethod + def lower(self): + """Conversion to a concrete representation.""" + pass # :nocov: + + def _lazy_lower(self): + if self.__lowered is None: + self.__lowered = Value.cast(self.lower()) + return self.__lowered + + def shape(self): + return self._lazy_lower().shape() + + def _lhs_signals(self): + return self._lazy_lower()._lhs_signals() + + def _rhs_signals(self): + return self._lazy_lower()._rhs_signals() + + +@final +class Sample(Value): + """Value from the past. + + A ``Sample`` of an expression is equal to the value of the expression ``clocks`` clock edges + of the ``domain`` clock back. If that moment is before the beginning of time, it is equal + to the value of the expression calculated as if each signal had its reset value. + """ + def __init__(self, expr, clocks, domain, *, src_loc_at=0): + super().__init__(src_loc_at=1 + src_loc_at) + self.value = Value.cast(expr) + self.clocks = int(clocks) + self.domain = domain + if not isinstance(self.value, (Const, Signal, ClockSignal, ResetSignal, Initial)): + raise TypeError("Sampled value must be a signal or a constant, not {!r}" + .format(self.value)) + if self.clocks < 0: + raise ValueError("Cannot sample a value {} cycles in the future" + .format(-self.clocks)) + if not (self.domain is None or isinstance(self.domain, str)): + raise TypeError("Domain name must be a string or None, not {!r}" + .format(self.domain)) + + def shape(self): + return self.value.shape() + + def _rhs_signals(self): + return ValueSet((self,)) + + def __repr__(self): + return "(sample {!r} @ {}[{}])".format( + self.value, "" if self.domain is None else self.domain, self.clocks) + + +def Past(expr, clocks=1, domain=None): + return Sample(expr, clocks, domain) + + +def Stable(expr, clocks=0, domain=None): + return Sample(expr, clocks + 1, domain) == Sample(expr, clocks, domain) + + +def Rose(expr, clocks=0, domain=None): + return ~Sample(expr, clocks + 1, domain) & Sample(expr, clocks, domain) + + +def Fell(expr, clocks=0, domain=None): + return Sample(expr, clocks + 1, domain) & ~Sample(expr, clocks, domain) + + +@final +class Initial(Value): + """Start indicator, for model checking. + + An ``Initial`` signal is ``1`` at the first cycle of model checking, and ``0`` at any other. 
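+
+    Examples
+    --------
+    An illustrative sketch (not part of the upstream docstring), assuming ``m`` is a
+    ``Module`` used for formal verification and ``bus_idle`` is a hypothetical signal::
+
+        with m.If(Initial()):
+            m.d.comb += Assume(bus_idle)   # constrain only the very first cycle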
+ """ + def __init__(self, *, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + + def shape(self): + return Shape(1) + + def _rhs_signals(self): + return ValueSet((self,)) + + def __repr__(self): + return "(initial)" + + +class _StatementList(list): + def __repr__(self): + return "({})".format(" ".join(map(repr, self))) + + +class Statement: + def __init__(self, *, src_loc_at=0): + self.src_loc = tracer.get_src_loc(1 + src_loc_at) + + @staticmethod + def cast(obj): + if isinstance(obj, Iterable): + return _StatementList(sum((Statement.cast(e) for e in obj), [])) + else: + if isinstance(obj, Statement): + return _StatementList([obj]) + else: + raise TypeError("Object {!r} is not an nMigen statement".format(obj)) + + +@final +class Assign(Statement): + def __init__(self, lhs, rhs, *, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + self.lhs = Value.cast(lhs) + self.rhs = Value.cast(rhs) + + def _lhs_signals(self): + return self.lhs._lhs_signals() + + def _rhs_signals(self): + return self.lhs._rhs_signals() | self.rhs._rhs_signals() + + def __repr__(self): + return "(eq {!r} {!r})".format(self.lhs, self.rhs) + + +class UnusedProperty(UnusedMustUse): + pass + + +class Property(Statement, MustUse): + _MustUse__warning = UnusedProperty + + def __init__(self, test, *, _check=None, _en=None, src_loc_at=0): + super().__init__(src_loc_at=src_loc_at) + self.test = Value.cast(test) + self._check = _check + self._en = _en + if self._check is None: + self._check = Signal(reset_less=True, name="${}$check".format(self._kind)) + self._check.src_loc = self.src_loc + if _en is None: + self._en = Signal(reset_less=True, name="${}$en".format(self._kind)) + self._en.src_loc = self.src_loc + + def _lhs_signals(self): + return ValueSet((self._en, self._check)) + + def _rhs_signals(self): + return self.test._rhs_signals() + + def __repr__(self): + return "({} {!r})".format(self._kind, self.test) + + +@final +class Assert(Property): + _kind = "assert" + + +@final +class Assume(Property): + _kind = "assume" + + +@final +class Cover(Property): + _kind = "cover" + + +# @final +class Switch(Statement): + def __init__(self, test, cases, *, src_loc=None, src_loc_at=0, case_src_locs={}): + if src_loc is None: + super().__init__(src_loc_at=src_loc_at) + else: + # Switch is a bit special in terms of location tracking because it is usually created + # long after the control has left the statement that directly caused its creation. + self.src_loc = src_loc + # Switch is also a bit special in that its parts also have location information. It can't + # be automatically traced, so whatever constructs a Switch may optionally provide it. + self.case_src_locs = {} + + self.test = Value.cast(test) + self.cases = OrderedDict() + for orig_keys, stmts in cases.items(): + # Map: None -> (); key -> (key,); (key...) -> (key...) 
+ keys = orig_keys + if keys is None: + keys = () + if not isinstance(keys, tuple): + keys = (keys,) + # Map: 2 -> "0010"; "0010" -> "0010" + new_keys = () + for key in keys: + if isinstance(key, str): + key = "".join(key.split()) # remove whitespace + elif isinstance(key, int): + key = format(key, "b").rjust(len(self.test), "0") + elif isinstance(key, Enum): + key = format(key.value, "b").rjust(len(self.test), "0") + else: + raise TypeError("Object {!r} cannot be used as a switch key" + .format(key)) + assert len(key) == len(self.test) + new_keys = (*new_keys, key) + if not isinstance(stmts, Iterable): + stmts = [stmts] + self.cases[new_keys] = Statement.cast(stmts) + if orig_keys in case_src_locs: + self.case_src_locs[new_keys] = case_src_locs[orig_keys] + + def _lhs_signals(self): + signals = union((s._lhs_signals() for ss in self.cases.values() for s in ss), + start=ValueSet()) + return signals + + def _rhs_signals(self): + signals = union((s._rhs_signals() for ss in self.cases.values() for s in ss), + start=ValueSet()) + return self.test._rhs_signals() | signals + + def __repr__(self): + def case_repr(keys, stmts): + stmts_repr = " ".join(map(repr, stmts)) + if keys == (): + return "(default {})".format(stmts_repr) + elif len(keys) == 1: + return "(case {} {})".format(keys[0], stmts_repr) + else: + return "(case ({}) {})".format(" ".join(keys), stmts_repr) + case_reprs = [case_repr(keys, stmts) for keys, stmts in self.cases.items()] + return "(switch {!r} {})".format(self.test, " ".join(case_reprs)) + + +class _MappedKeyCollection(metaclass=ABCMeta): + @abstractmethod + def _map_key(self, key): + pass # :nocov: + + @abstractmethod + def _unmap_key(self, key): + pass # :nocov: + + +class _MappedKeyDict(MutableMapping, _MappedKeyCollection): + def __init__(self, pairs=()): + self._storage = OrderedDict() + for key, value in pairs: + self[key] = value + + def __getitem__(self, key): + key = None if key is None else self._map_key(key) + return self._storage[key] + + def __setitem__(self, key, value): + key = None if key is None else self._map_key(key) + self._storage[key] = value + + def __delitem__(self, key): + key = None if key is None else self._map_key(key) + del self._storage[key] + + def __iter__(self): + for key in self._storage: + if key is None: + yield None + else: + yield self._unmap_key(key) + + def __eq__(self, other): + if not isinstance(other, type(self)): + return False + if len(self) != len(other): + return False + for ak, bk in zip(sorted(self._storage), sorted(other._storage)): + if ak != bk: + return False + if self._storage[ak] != other._storage[bk]: + return False + return True + + def __len__(self): + return len(self._storage) + + def __repr__(self): + pairs = ["({!r}, {!r})".format(k, v) for k, v in self.items()] + return "{}.{}([{}])".format(type(self).__module__, type(self).__name__, + ", ".join(pairs)) + + +class _MappedKeySet(MutableSet, _MappedKeyCollection): + def __init__(self, elements=()): + self._storage = OrderedDict() + for elem in elements: + self.add(elem) + + def add(self, value): + self._storage[self._map_key(value)] = None + + def update(self, values): + for value in values: + self.add(value) + + def discard(self, value): + if value in self: + del self._storage[self._map_key(value)] + + def __contains__(self, value): + return self._map_key(value) in self._storage + + def __iter__(self): + for key in [k for k in self._storage]: + yield self._unmap_key(key) + + def __len__(self): + return len(self._storage) + + def __repr__(self): + return 
"{}.{}({})".format(type(self).__module__, type(self).__name__, + ", ".join(repr(x) for x in self)) + + +class ValueKey: + def __init__(self, value): + self.value = Value.cast(value) + if isinstance(self.value, Const): + self._hash = hash(self.value.value) + elif isinstance(self.value, (Signal, AnyValue)): + self._hash = hash(self.value.duid) + elif isinstance(self.value, (ClockSignal, ResetSignal)): + self._hash = hash(self.value.domain) + elif isinstance(self.value, Operator): + self._hash = hash((self.value.operator, + tuple(ValueKey(o) for o in self.value.operands))) + elif isinstance(self.value, Slice): + self._hash = hash((ValueKey(self.value.value), self.value.start, self.value.stop)) + elif isinstance(self.value, Part): + self._hash = hash((ValueKey(self.value.value), ValueKey(self.value.offset), + self.value.width, self.value.stride)) + elif isinstance(self.value, Cat): + self._hash = hash(tuple(ValueKey(o) for o in self.value.parts)) + elif isinstance(self.value, ArrayProxy): + self._hash = hash((ValueKey(self.value.index), + tuple(ValueKey(e) for e in self.value._iter_as_values()))) + elif isinstance(self.value, Sample): + self._hash = hash((ValueKey(self.value.value), self.value.clocks, self.value.domain)) + elif isinstance(self.value, Initial): + self._hash = 0 + else: # :nocov: + raise TypeError("Object {!r} cannot be used as a key in value collections" + .format(self.value)) + + def __hash__(self): + return self._hash + + def __eq__(self, other): + if type(other) is not ValueKey: + return False + if type(self.value) is not type(other.value): + return False + + if isinstance(self.value, Const): + return self.value.value == other.value.value + elif isinstance(self.value, (Signal, AnyValue)): + return self.value is other.value + elif isinstance(self.value, (ClockSignal, ResetSignal)): + return self.value.domain == other.value.domain + elif isinstance(self.value, Operator): + return (self.value.operator == other.value.operator and + len(self.value.operands) == len(other.value.operands) and + all(ValueKey(a) == ValueKey(b) + for a, b in zip(self.value.operands, other.value.operands))) + elif isinstance(self.value, Slice): + return (ValueKey(self.value.value) == ValueKey(other.value.value) and + self.value.start == other.value.start and + self.value.stop == other.value.stop) + elif isinstance(self.value, Part): + return (ValueKey(self.value.value) == ValueKey(other.value.value) and + ValueKey(self.value.offset) == ValueKey(other.value.offset) and + self.value.width == other.value.width and + self.value.stride == other.value.stride) + elif isinstance(self.value, Cat): + return all(ValueKey(a) == ValueKey(b) + for a, b in zip(self.value.parts, other.value.parts)) + elif isinstance(self.value, ArrayProxy): + return (ValueKey(self.value.index) == ValueKey(other.value.index) and + len(self.value.elems) == len(other.value.elems) and + all(ValueKey(a) == ValueKey(b) + for a, b in zip(self.value._iter_as_values(), + other.value._iter_as_values()))) + elif isinstance(self.value, Sample): + return (ValueKey(self.value.value) == ValueKey(other.value.value) and + self.value.clocks == other.value.clocks and + self.value.domain == self.value.domain) + elif isinstance(self.value, Initial): + return True + else: # :nocov: + raise TypeError("Object {!r} cannot be used as a key in value collections" + .format(self.value)) + + def __lt__(self, other): + if not isinstance(other, ValueKey): + return False + if type(self.value) != type(other.value): + return False + + if isinstance(self.value, Const): + 
return self.value < other.value + elif isinstance(self.value, (Signal, AnyValue)): + return self.value.duid < other.value.duid + elif isinstance(self.value, Slice): + return (ValueKey(self.value.value) < ValueKey(other.value.value) and + self.value.start < other.value.start and + self.value.end < other.value.end) + else: # :nocov: + raise TypeError("Object {!r} cannot be used as a key in value collections") + + def __repr__(self): + return "<{}.ValueKey {!r}>".format(__name__, self.value) + + +class ValueDict(_MappedKeyDict): + _map_key = ValueKey + _unmap_key = lambda self, key: key.value + + +class ValueSet(_MappedKeySet): + _map_key = ValueKey + _unmap_key = lambda self, key: key.value + + +class SignalKey: + def __init__(self, signal): + self.signal = signal + if isinstance(signal, Signal): + self._intern = (0, signal.duid) + elif type(signal) is ClockSignal: + self._intern = (1, signal.domain) + elif type(signal) is ResetSignal: + self._intern = (2, signal.domain) + else: + raise TypeError("Object {!r} is not an nMigen signal".format(signal)) + + def __hash__(self): + return hash(self._intern) + + def __eq__(self, other): + if type(other) is not SignalKey: + return False + return self._intern == other._intern + + def __lt__(self, other): + if type(other) is not SignalKey: + raise TypeError("Object {!r} cannot be compared to a SignalKey".format(signal)) + return self._intern < other._intern + + def __repr__(self): + return "<{}.SignalKey {!r}>".format(__name__, self.signal) + + +class SignalDict(_MappedKeyDict): + _map_key = SignalKey + _unmap_key = lambda self, key: key.signal + + +class SignalSet(_MappedKeySet): + _map_key = SignalKey + _unmap_key = lambda self, key: key.signal diff --git a/nmigen/hdl/cd.py b/nmigen/hdl/cd.py new file mode 100644 index 0000000..cd08929 --- /dev/null +++ b/nmigen/hdl/cd.py @@ -0,0 +1,84 @@ +from .. import tracer +from .ast import Signal + + +__all__ = ["ClockDomain", "DomainError"] + + +class DomainError(Exception): + pass + + +class ClockDomain: + """Synchronous domain. + + Parameters + ---------- + name : str or None + Domain name. If ``None`` (the default) the name is inferred from the variable name this + ``ClockDomain`` is assigned to (stripping any `"cd_"` prefix). + reset_less : bool + If ``True``, the domain does not use a reset signal. Registers within this domain are + still all initialized to their reset state once, e.g. through Verilog `"initial"` + statements. + clock_edge : str + The edge of the clock signal on which signals are sampled. Must be one of "pos" or "neg". + async_reset : bool + If ``True``, the domain uses an asynchronous reset, and registers within this domain + are initialized to their reset state when reset level changes. Otherwise, registers + are initialized to reset state at the next clock cycle when reset is asserted. + local : bool + If ``True``, the domain will propagate only downwards in the design hierarchy. Otherwise, + the domain will propagate everywhere. + + Attributes + ---------- + clk : Signal, inout + The clock for this domain. Can be driven or used to drive other signals (preferably + in combinatorial context). + rst : Signal or None, inout + Reset signal for this domain. Can be driven or used to drive. 
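+
+    Examples
+    --------
+    An illustrative sketch (not part of the upstream docstring), assuming ``m`` is a
+    ``Module`` and ``pll_clk``/``pll_locked`` are hypothetical signals::
+
+        cd_pixel = ClockDomain()        # name "pixel" inferred, "cd_" prefix stripped
+        m.domains += cd_pixel
+        m.d.comb += [
+            cd_pixel.clk.eq(pll_clk),
+            cd_pixel.rst.eq(~pll_locked),
+        ]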
+ """ + + @staticmethod + def _name_for(domain_name, signal_name): + if domain_name == "sync": + return signal_name + else: + return "{}_{}".format(domain_name, signal_name) + + def __init__(self, name=None, *, clk_edge="pos", reset_less=False, async_reset=False, + local=False): + if name is None: + try: + name = tracer.get_var_name() + except tracer.NameNotFound: + raise ValueError("Clock domain name must be specified explicitly") + if name.startswith("cd_"): + name = name[3:] + if name == "comb": + raise ValueError("Domain '{}' may not be clocked".format(name)) + + if clk_edge not in ("pos", "neg"): + raise ValueError("Domain clock edge must be one of 'pos' or 'neg', not {!r}" + .format(clk_edge)) + + self.name = name + + self.clk = Signal(name=self._name_for(name, "clk"), src_loc_at=1) + self.clk_edge = clk_edge + + if reset_less: + self.rst = None + else: + self.rst = Signal(name=self._name_for(name, "rst"), src_loc_at=1) + + self.async_reset = async_reset + + self.local = local + + def rename(self, new_name): + self.name = new_name + self.clk.name = self._name_for(new_name, "clk") + if self.rst is not None: + self.rst.name = self._name_for(new_name, "rst") diff --git a/nmigen/hdl/dsl.py b/nmigen/hdl/dsl.py new file mode 100644 index 0000000..85ae472 --- /dev/null +++ b/nmigen/hdl/dsl.py @@ -0,0 +1,546 @@ +from collections import OrderedDict, namedtuple +from collections.abc import Iterable +from contextlib import contextmanager, _GeneratorContextManager +from functools import wraps +from enum import Enum +import warnings + +from .._utils import flatten, bits_for, deprecated +from .. import tracer +from .ast import * +from .ir import * +from .cd import * +from .xfrm import * + + +__all__ = ["SyntaxError", "SyntaxWarning", "Module"] + + +class SyntaxError(Exception): + pass + + +class SyntaxWarning(Warning): + pass + + +class _ModuleBuilderProxy: + def __init__(self, builder, depth): + object.__setattr__(self, "_builder", builder) + object.__setattr__(self, "_depth", depth) + + +class _ModuleBuilderDomain(_ModuleBuilderProxy): + def __init__(self, builder, depth, domain): + super().__init__(builder, depth) + self._domain = domain + + def __iadd__(self, assigns): + self._builder._add_statement(assigns, domain=self._domain, depth=self._depth) + return self + + +class _ModuleBuilderDomains(_ModuleBuilderProxy): + def __getattr__(self, name): + if name == "submodules": + warnings.warn("Using '.d.{}' would add statements to clock domain {!r}; " + "did you mean .{} instead?" + .format(name, name, name), + SyntaxWarning, stacklevel=2) + if name == "comb": + domain = None + else: + domain = name + return _ModuleBuilderDomain(self._builder, self._depth, domain) + + def __getitem__(self, name): + return self.__getattr__(name) + + def __setattr__(self, name, value): + if name == "_depth": + object.__setattr__(self, name, value) + elif not isinstance(value, _ModuleBuilderDomain): + raise AttributeError("Cannot assign 'd.{}' attribute; did you mean 'd.{} +='?" + .format(name, name)) + + def __setitem__(self, name, value): + return self.__setattr__(name, value) + + +class _ModuleBuilderRoot: + def __init__(self, builder, depth): + self._builder = builder + self.domain = self.d = _ModuleBuilderDomains(builder, depth) + + def __getattr__(self, name): + if name in ("comb", "sync"): + raise AttributeError("'{}' object has no attribute '{}'; did you mean 'd.{}'?" 
+ .format(type(self).__name__, name, name)) + raise AttributeError("'{}' object has no attribute '{}'" + .format(type(self).__name__, name)) + + +class _ModuleBuilderSubmodules: + def __init__(self, builder): + object.__setattr__(self, "_builder", builder) + + def __iadd__(self, modules): + for module in flatten([modules]): + self._builder._add_submodule(module) + return self + + def __setattr__(self, name, submodule): + self._builder._add_submodule(submodule, name) + + def __setitem__(self, name, value): + return self.__setattr__(name, value) + + def __getattr__(self, name): + return self._builder._get_submodule(name) + + def __getitem__(self, name): + return self.__getattr__(name) + + +class _ModuleBuilderDomainSet: + def __init__(self, builder): + object.__setattr__(self, "_builder", builder) + + def __iadd__(self, domains): + for domain in flatten([domains]): + if not isinstance(domain, ClockDomain): + raise TypeError("Only clock domains may be added to `m.domains`, not {!r}" + .format(domain)) + self._builder._add_domain(domain) + return self + + def __setattr__(self, name, domain): + if not isinstance(domain, ClockDomain): + raise TypeError("Only clock domains may be added to `m.domains`, not {!r}" + .format(domain)) + if domain.name != name: + raise NameError("Clock domain name {!r} must match name in `m.domains.{} += ...` " + "syntax" + .format(domain.name, name)) + self._builder._add_domain(domain) + + +# It's not particularly clean to depend on an internal interface, but, unfortunately, __bool__ +# must be defined on a class to be called during implicit conversion. +class _GuardedContextManager(_GeneratorContextManager): + def __init__(self, keyword, func, args, kwds): + self.keyword = keyword + return super().__init__(func, args, kwds) + + def __bool__(self): + raise SyntaxError("`if m.{kw}(...):` does not work; use `with m.{kw}(...)`" + .format(kw=self.keyword)) + + +def _guardedcontextmanager(keyword): + def decorator(func): + @wraps(func) + def helper(*args, **kwds): + return _GuardedContextManager(keyword, func, args, kwds) + return helper + return decorator + + +class FSM: + def __init__(self, state, encoding, decoding): + self.state = state + self.encoding = encoding + self.decoding = decoding + + def ongoing(self, name): + if name not in self.encoding: + self.encoding[name] = len(self.encoding) + return Operator("==", [self.state, self.encoding[name]], src_loc_at=0) + + +class Module(_ModuleBuilderRoot, Elaboratable): + @classmethod + def __init_subclass__(cls): + raise SyntaxError("Instead of inheriting from `Module`, inherit from `Elaboratable` " + "and return a `Module` from the `elaborate(self, platform)` method") + + def __init__(self): + _ModuleBuilderRoot.__init__(self, self, depth=0) + self.submodules = _ModuleBuilderSubmodules(self) + self.domains = _ModuleBuilderDomainSet(self) + + self._statements = Statement.cast([]) + self._ctrl_context = None + self._ctrl_stack = [] + + self._driving = SignalDict() + self._named_submodules = {} + self._anon_submodules = [] + self._domains = [] + self._generated = {} + + def _check_context(self, construct, context): + if self._ctrl_context != context: + if self._ctrl_context is None: + raise SyntaxError("{} is not permitted outside of {}" + .format(construct, context)) + else: + if self._ctrl_context == "Switch": + secondary_context = "Case" + if self._ctrl_context == "FSM": + secondary_context = "State" + raise SyntaxError("{} is not permitted directly inside of {}; it is permitted " + "inside of {} {}" + .format(construct, 
self._ctrl_context, + self._ctrl_context, secondary_context)) + + def _get_ctrl(self, name): + if self._ctrl_stack: + top_name, top_data = self._ctrl_stack[-1] + if top_name == name: + return top_data + + def _flush_ctrl(self): + while len(self._ctrl_stack) > self.domain._depth: + self._pop_ctrl() + + def _set_ctrl(self, name, data): + self._flush_ctrl() + self._ctrl_stack.append((name, data)) + return data + + def _check_signed_cond(self, cond): + cond = Value.cast(cond) + width, signed = cond.shape() + if signed: + warnings.warn("Signed values in If/Elif conditions usually result from inverting " + "Python booleans with ~, which leads to unexpected results: ~True is " + "-2, which is truthful. Replace `~flag` with `not flag`. (If this is " + "a false positive, silence this warning with " + "`m.If(x)` → `m.If(x.bool())`.)", + SyntaxWarning, stacklevel=4) + return cond + + @_guardedcontextmanager("If") + def If(self, cond): + self._check_context("If", context=None) + cond = self._check_signed_cond(cond) + src_loc = tracer.get_src_loc(src_loc_at=1) + if_data = self._set_ctrl("If", { + "tests": [], + "bodies": [], + "src_loc": src_loc, + "src_locs": [], + }) + try: + _outer_case, self._statements = self._statements, [] + self.domain._depth += 1 + yield + self._flush_ctrl() + if_data["tests"].append(cond) + if_data["bodies"].append(self._statements) + if_data["src_locs"].append(src_loc) + finally: + self.domain._depth -= 1 + self._statements = _outer_case + + @_guardedcontextmanager("Elif") + def Elif(self, cond): + self._check_context("Elif", context=None) + cond = self._check_signed_cond(cond) + src_loc = tracer.get_src_loc(src_loc_at=1) + if_data = self._get_ctrl("If") + if if_data is None: + raise SyntaxError("Elif without preceding If") + try: + _outer_case, self._statements = self._statements, [] + self.domain._depth += 1 + yield + self._flush_ctrl() + if_data["tests"].append(cond) + if_data["bodies"].append(self._statements) + if_data["src_locs"].append(src_loc) + finally: + self.domain._depth -= 1 + self._statements = _outer_case + + @_guardedcontextmanager("Else") + def Else(self): + self._check_context("Else", context=None) + src_loc = tracer.get_src_loc(src_loc_at=1) + if_data = self._get_ctrl("If") + if if_data is None: + raise SyntaxError("Else without preceding If/Elif") + try: + _outer_case, self._statements = self._statements, [] + self.domain._depth += 1 + yield + self._flush_ctrl() + if_data["bodies"].append(self._statements) + if_data["src_locs"].append(src_loc) + finally: + self.domain._depth -= 1 + self._statements = _outer_case + self._pop_ctrl() + + @contextmanager + def Switch(self, test): + self._check_context("Switch", context=None) + switch_data = self._set_ctrl("Switch", { + "test": Value.cast(test), + "cases": OrderedDict(), + "src_loc": tracer.get_src_loc(src_loc_at=1), + "case_src_locs": {}, + }) + try: + self._ctrl_context = "Switch" + self.domain._depth += 1 + yield + finally: + self.domain._depth -= 1 + self._ctrl_context = None + self._pop_ctrl() + + @contextmanager + def Case(self, *patterns): + self._check_context("Case", context="Switch") + src_loc = tracer.get_src_loc(src_loc_at=1) + switch_data = self._get_ctrl("Switch") + new_patterns = () + for pattern in patterns: + if not isinstance(pattern, (int, str, Enum)): + raise SyntaxError("Case pattern must be an integer, a string, or an enumeration, " + "not {!r}" + .format(pattern)) + if isinstance(pattern, str) and any(bit not in "01- \t" for bit in pattern): + raise SyntaxError("Case pattern '{}' must 
consist of 0, 1, and - (don't care) " + "bits, and may include whitespace" + .format(pattern)) + if (isinstance(pattern, str) and + len("".join(pattern.split())) != len(switch_data["test"])): + raise SyntaxError("Case pattern '{}' must have the same width as switch value " + "(which is {})" + .format(pattern, len(switch_data["test"]))) + if isinstance(pattern, int) and bits_for(pattern) > len(switch_data["test"]): + warnings.warn("Case pattern '{:b}' is wider than switch value " + "(which has width {}); comparison will never be true" + .format(pattern, len(switch_data["test"])), + SyntaxWarning, stacklevel=3) + continue + if isinstance(pattern, Enum) and bits_for(pattern.value) > len(switch_data["test"]): + warnings.warn("Case pattern '{:b}' ({}.{}) is wider than switch value " + "(which has width {}); comparison will never be true" + .format(pattern.value, pattern.__class__.__name__, pattern.name, + len(switch_data["test"])), + SyntaxWarning, stacklevel=3) + continue + new_patterns = (*new_patterns, pattern) + try: + _outer_case, self._statements = self._statements, [] + self._ctrl_context = None + yield + self._flush_ctrl() + # If none of the provided cases can possibly be true, omit this branch completely. + # This needs to be differentiated from no cases being provided in the first place, + # which means the branch will always match. + if not (patterns and not new_patterns): + switch_data["cases"][new_patterns] = self._statements + switch_data["case_src_locs"][new_patterns] = src_loc + finally: + self._ctrl_context = "Switch" + self._statements = _outer_case + + def Default(self): + return self.Case() + + @contextmanager + def FSM(self, reset=None, domain="sync", name="fsm"): + self._check_context("FSM", context=None) + if domain == "comb": + raise ValueError("FSM may not be driven by the '{}' domain".format(domain)) + fsm_data = self._set_ctrl("FSM", { + "name": name, + "signal": Signal(name="{}_state".format(name), src_loc_at=2), + "reset": reset, + "domain": domain, + "encoding": OrderedDict(), + "decoding": OrderedDict(), + "states": OrderedDict(), + "src_loc": tracer.get_src_loc(src_loc_at=1), + "state_src_locs": {}, + }) + self._generated[name] = fsm = \ + FSM(fsm_data["signal"], fsm_data["encoding"], fsm_data["decoding"]) + try: + self._ctrl_context = "FSM" + self.domain._depth += 1 + yield fsm + for state_name in fsm_data["encoding"]: + if state_name not in fsm_data["states"]: + raise NameError("FSM state '{}' is referenced but not defined" + .format(state_name)) + finally: + self.domain._depth -= 1 + self._ctrl_context = None + self._pop_ctrl() + + @contextmanager + def State(self, name): + self._check_context("FSM State", context="FSM") + src_loc = tracer.get_src_loc(src_loc_at=1) + fsm_data = self._get_ctrl("FSM") + if name in fsm_data["states"]: + raise NameError("FSM state '{}' is already defined".format(name)) + if name not in fsm_data["encoding"]: + fsm_data["encoding"][name] = len(fsm_data["encoding"]) + try: + _outer_case, self._statements = self._statements, [] + self._ctrl_context = None + yield + self._flush_ctrl() + fsm_data["states"][name] = self._statements + fsm_data["state_src_locs"][name] = src_loc + finally: + self._ctrl_context = "FSM" + self._statements = _outer_case + + @property + def next(self): + raise SyntaxError("Only assignment to `m.next` is permitted") + + @next.setter + def next(self, name): + if self._ctrl_context != "FSM": + for level, (ctrl_name, ctrl_data) in enumerate(reversed(self._ctrl_stack)): + if ctrl_name == "FSM": + if name not in 
ctrl_data["encoding"]: + ctrl_data["encoding"][name] = len(ctrl_data["encoding"]) + self._add_statement( + assigns=[ctrl_data["signal"].eq(ctrl_data["encoding"][name])], + domain=ctrl_data["domain"], + depth=len(self._ctrl_stack)) + return + + raise SyntaxError("`m.next = <...>` is only permitted inside an FSM state") + + def _pop_ctrl(self): + name, data = self._ctrl_stack.pop() + src_loc = data["src_loc"] + + if name == "If": + if_tests, if_bodies = data["tests"], data["bodies"] + if_src_locs = data["src_locs"] + + tests, cases = [], OrderedDict() + for if_test, if_case in zip(if_tests + [None], if_bodies): + if if_test is not None: + if_test = Value.cast(if_test) + if len(if_test) != 1: + if_test = if_test.bool() + tests.append(if_test) + + if if_test is not None: + match = ("1" + "-" * (len(tests) - 1)).rjust(len(if_tests), "-") + else: + match = None + cases[match] = if_case + + self._statements.append(Switch(Cat(tests), cases, + src_loc=src_loc, case_src_locs=dict(zip(cases, if_src_locs)))) + + if name == "Switch": + switch_test, switch_cases = data["test"], data["cases"] + switch_case_src_locs = data["case_src_locs"] + + self._statements.append(Switch(switch_test, switch_cases, + src_loc=src_loc, case_src_locs=switch_case_src_locs)) + + if name == "FSM": + fsm_signal, fsm_reset, fsm_encoding, fsm_decoding, fsm_states = \ + data["signal"], data["reset"], data["encoding"], data["decoding"], data["states"] + fsm_state_src_locs = data["state_src_locs"] + if not fsm_states: + return + fsm_signal.width = bits_for(len(fsm_encoding) - 1) + if fsm_reset is None: + fsm_signal.reset = fsm_encoding[next(iter(fsm_states))] + else: + fsm_signal.reset = fsm_encoding[fsm_reset] + # The FSM is encoded such that the state with encoding 0 is always the reset state. 
+ fsm_decoding.update((n, s) for s, n in fsm_encoding.items()) + fsm_signal.decoder = lambda n: "{}/{}".format(fsm_decoding[n], n) + self._statements.append(Switch(fsm_signal, + OrderedDict((fsm_encoding[name], stmts) for name, stmts in fsm_states.items()), + src_loc=src_loc, case_src_locs={fsm_encoding[name]: fsm_state_src_locs[name] + for name in fsm_states})) + + def _add_statement(self, assigns, domain, depth, compat_mode=False): + def domain_name(domain): + if domain is None: + return "comb" + else: + return domain + + while len(self._ctrl_stack) > self.domain._depth: + self._pop_ctrl() + + for stmt in Statement.cast(assigns): + if not compat_mode and not isinstance(stmt, (Assign, Assert, Assume, Cover)): + raise SyntaxError( + "Only assignments and property checks may be appended to d.{}" + .format(domain_name(domain))) + + stmt._MustUse__used = True + stmt = SampleDomainInjector(domain)(stmt) + + for signal in stmt._lhs_signals(): + if signal not in self._driving: + self._driving[signal] = domain + elif self._driving[signal] != domain: + cd_curr = self._driving[signal] + raise SyntaxError( + "Driver-driver conflict: trying to drive {!r} from d.{}, but it is " + "already driven from d.{}" + .format(signal, domain_name(domain), domain_name(cd_curr))) + + self._statements.append(stmt) + + def _add_submodule(self, submodule, name=None): + if not hasattr(submodule, "elaborate"): + raise TypeError("Trying to add {!r}, which does not implement .elaborate(), as " + "a submodule".format(submodule)) + if name == None: + self._anon_submodules.append(submodule) + else: + if name in self._named_submodules: + raise NameError("Submodule named '{}' already exists".format(name)) + self._named_submodules[name] = submodule + + def _get_submodule(self, name): + if name in self._named_submodules: + return self._named_submodules[name] + else: + raise AttributeError("No submodule named '{}' exists".format(name)) + + def _add_domain(self, cd): + self._domains.append(cd) + + def _flush(self): + while self._ctrl_stack: + self._pop_ctrl() + + def elaborate(self, platform): + self._flush() + + fragment = Fragment() + for name in self._named_submodules: + fragment.add_subfragment(Fragment.get(self._named_submodules[name], platform), name) + for submodule in self._anon_submodules: + fragment.add_subfragment(Fragment.get(submodule, platform), None) + statements = SampleDomainInjector("sync")(self._statements) + fragment.add_statements(statements) + for signal, domain in self._driving.items(): + fragment.add_driver(signal, domain) + fragment.add_domains(self._domains) + fragment.generated.update(self._generated) + return fragment diff --git a/nmigen/hdl/ir.py b/nmigen/hdl/ir.py new file mode 100644 index 0000000..b9963c1 --- /dev/null +++ b/nmigen/hdl/ir.py @@ -0,0 +1,588 @@ +from abc import ABCMeta, abstractmethod +from collections import defaultdict, OrderedDict +from functools import reduce +import warnings +import traceback +import sys + +from .._utils import * +from .._unused import * +from .ast import * +from .cd import * + + +__all__ = ["UnusedElaboratable", "Elaboratable", "DriverConflict", "Fragment", "Instance"] + + +class UnusedElaboratable(UnusedMustUse): + pass + + +class Elaboratable(MustUse, metaclass=ABCMeta): + _MustUse__warning = UnusedElaboratable + + +class DriverConflict(UserWarning): + pass + + +class Fragment: + @staticmethod + def get(obj, platform): + code = None + while True: + if isinstance(obj, Fragment): + return obj + elif isinstance(obj, Elaboratable): + code = obj.elaborate.__code__ 
+ obj._MustUse__used = True + obj = obj.elaborate(platform) + elif hasattr(obj, "elaborate"): + warnings.warn( + message="Class {!r} is an elaboratable that does not explicitly inherit from " + "Elaboratable; doing so would improve diagnostics" + .format(type(obj)), + category=RuntimeWarning, + stacklevel=2) + code = obj.elaborate.__code__ + obj = obj.elaborate(platform) + else: + raise AttributeError("Object {!r} cannot be elaborated".format(obj)) + if obj is None and code is not None: + warnings.warn_explicit( + message=".elaborate() returned None; missing return statement?", + category=UserWarning, + filename=code.co_filename, + lineno=code.co_firstlineno) + + def __init__(self): + self.ports = SignalDict() + self.drivers = OrderedDict() + self.statements = [] + self.domains = OrderedDict() + self.subfragments = [] + self.attrs = OrderedDict() + self.generated = OrderedDict() + self.flatten = False + + def add_ports(self, *ports, dir): + assert dir in ("i", "o", "io") + for port in flatten(ports): + self.ports[port] = dir + + def iter_ports(self, dir=None): + if dir is None: + yield from self.ports + else: + for port, port_dir in self.ports.items(): + if port_dir == dir: + yield port + + def add_driver(self, signal, domain=None): + if domain not in self.drivers: + self.drivers[domain] = SignalSet() + self.drivers[domain].add(signal) + + def iter_drivers(self): + for domain, signals in self.drivers.items(): + for signal in signals: + yield domain, signal + + def iter_comb(self): + if None in self.drivers: + yield from self.drivers[None] + + def iter_sync(self): + for domain, signals in self.drivers.items(): + if domain is None: + continue + for signal in signals: + yield domain, signal + + def iter_signals(self): + signals = SignalSet() + signals |= self.ports.keys() + for domain, domain_signals in self.drivers.items(): + if domain is not None: + cd = self.domains[domain] + signals.add(cd.clk) + if cd.rst is not None: + signals.add(cd.rst) + signals |= domain_signals + return signals + + def add_domains(self, *domains): + for domain in flatten(domains): + assert isinstance(domain, ClockDomain) + assert domain.name not in self.domains + self.domains[domain.name] = domain + + def iter_domains(self): + yield from self.domains + + def add_statements(self, *stmts): + for stmt in Statement.cast(stmts): + stmt._MustUse__used = True + self.statements.append(stmt) + + def add_subfragment(self, subfragment, name=None): + assert isinstance(subfragment, Fragment) + self.subfragments.append((subfragment, name)) + + def find_subfragment(self, name_or_index): + if isinstance(name_or_index, int): + if name_or_index < len(self.subfragments): + subfragment, name = self.subfragments[name_or_index] + return subfragment + raise NameError("No subfragment at index #{}".format(name_or_index)) + else: + for subfragment, name in self.subfragments: + if name == name_or_index: + return subfragment + raise NameError("No subfragment with name '{}'".format(name_or_index)) + + def find_generated(self, *path): + if len(path) > 1: + path_component, *path = path + return self.find_subfragment(path_component).find_generated(*path) + else: + item, = path + return self.generated[item] + + def elaborate(self, platform): + return self + + def _merge_subfragment(self, subfragment): + # Merge subfragment's everything except clock domains into this fragment. + # Flattening is done after clock domain propagation, so we can assume the domains + # are already the same in every involved fragment in the first place. 
+ self.ports.update(subfragment.ports) + for domain, signal in subfragment.iter_drivers(): + self.add_driver(signal, domain) + self.statements += subfragment.statements + self.subfragments += subfragment.subfragments + + # Remove the merged subfragment. + found = False + for i, (check_subfrag, check_name) in enumerate(self.subfragments): # :nobr: + if subfragment == check_subfrag: + del self.subfragments[i] + found = True + break + assert found + + def _resolve_hierarchy_conflicts(self, hierarchy=("top",), mode="warn"): + assert mode in ("silent", "warn", "error") + + driver_subfrags = SignalDict() + memory_subfrags = OrderedDict() + def add_subfrag(registry, entity, entry): + # Because of missing domain insertion, at the point when this code runs, we have + # a mixture of bound and unbound {Clock,Reset}Signals. Map the bound ones to + # the actual signals (because the signal itself can be driven as well); but leave + # the unbound ones as it is, because there's no concrete signal for it yet anyway. + if isinstance(entity, ClockSignal) and entity.domain in self.domains: + entity = self.domains[entity.domain].clk + elif isinstance(entity, ResetSignal) and entity.domain in self.domains: + entity = self.domains[entity.domain].rst + + if entity not in registry: + registry[entity] = set() + registry[entity].add(entry) + + # For each signal driven by this fragment and/or its subfragments, determine which + # subfragments also drive it. + for domain, signal in self.iter_drivers(): + add_subfrag(driver_subfrags, signal, (None, hierarchy)) + + flatten_subfrags = set() + for i, (subfrag, name) in enumerate(self.subfragments): + if name is None: + name = "".format(i) + subfrag_hierarchy = hierarchy + (name,) + + if subfrag.flatten: + # Always flatten subfragments that explicitly request it. + flatten_subfrags.add((subfrag, subfrag_hierarchy)) + + if isinstance(subfrag, Instance): + # For memories (which are subfragments, but semantically a part of superfragment), + # record that this fragment is driving it. + if subfrag.type in ("$memrd", "$memwr"): + memory = subfrag.parameters["MEMID"] + add_subfrag(memory_subfrags, memory, (None, hierarchy)) + + # Never flatten instances. + continue + + # First, recurse into subfragments and let them detect driver conflicts as well. + subfrag_drivers, subfrag_memories = \ + subfrag._resolve_hierarchy_conflicts(subfrag_hierarchy, mode) + + # Second, classify subfragments by signals they drive and memories they use. + for signal in subfrag_drivers: + add_subfrag(driver_subfrags, signal, (subfrag, subfrag_hierarchy)) + for memory in subfrag_memories: + add_subfrag(memory_subfrags, memory, (subfrag, subfrag_hierarchy)) + + # Find out the set of subfragments that needs to be flattened into this fragment + # to resolve driver-driver conflicts. + def flatten_subfrags_if_needed(subfrags): + if len(subfrags) == 1: + return [] + flatten_subfrags.update((f, h) for f, h in subfrags if f is not None) + return list(sorted(".".join(h) for f, h in subfrags)) + + for signal, subfrags in driver_subfrags.items(): + subfrag_names = flatten_subfrags_if_needed(subfrags) + if not subfrag_names: + continue + + # While we're at it, show a message. 
+ message = ("Signal '{}' is driven from multiple fragments: {}" + .format(signal, ", ".join(subfrag_names))) + if mode == "error": + raise DriverConflict(message) + elif mode == "warn": + message += "; hierarchy will be flattened" + warnings.warn_explicit(message, DriverConflict, *signal.src_loc) + + for memory, subfrags in memory_subfrags.items(): + subfrag_names = flatten_subfrags_if_needed(subfrags) + if not subfrag_names: + continue + + # While we're at it, show a message. + message = ("Memory '{}' is accessed from multiple fragments: {}" + .format(memory.name, ", ".join(subfrag_names))) + if mode == "error": + raise DriverConflict(message) + elif mode == "warn": + message += "; hierarchy will be flattened" + warnings.warn_explicit(message, DriverConflict, *memory.src_loc) + + # Flatten hierarchy. + for subfrag, subfrag_hierarchy in sorted(flatten_subfrags, key=lambda x: x[1]): + self._merge_subfragment(subfrag) + + # If we flattened anything, we might be in a situation where we have a driver conflict + # again, e.g. if we had a tree of fragments like A --- B --- C where only fragments + # A and C were driving a signal S. In that case, since B is not driving S itself, + # processing B will not result in any flattening, but since B is transitively driving S, + # processing A will flatten B into it. Afterwards, we have a tree like AB --- C, which + # has another conflict. + if any(flatten_subfrags): + # Try flattening again. + return self._resolve_hierarchy_conflicts(hierarchy, mode) + + # Nothing was flattened, we're done! + return (SignalSet(driver_subfrags.keys()), + set(memory_subfrags.keys())) + + def _propagate_domains_up(self, hierarchy=("top",)): + from .xfrm import DomainRenamer + + domain_subfrags = defaultdict(lambda: set()) + + # For each domain defined by a subfragment, determine which subfragments define it. + for i, (subfrag, name) in enumerate(self.subfragments): + # First, recurse into subfragments and let them propagate domains up as well. + hier_name = name + if hier_name is None: + hier_name = "<unnamed #{}>".format(i) + subfrag._propagate_domains_up(hierarchy + (hier_name,)) + + # Second, classify subfragments by domains they define. + for domain_name, domain in subfrag.domains.items(): + if domain.local: + continue + domain_subfrags[domain_name].add((subfrag, name, i)) + + # For each domain defined by more than one subfragment, rename the domain in each + # of the subfragments such that they no longer conflict.
+ for domain_name, subfrags in domain_subfrags.items(): + if len(subfrags) == 1: + continue + + names = [n for f, n, i in subfrags] + if not all(names): + names = sorted("<unnamed #{}>".format(i) if n is None else "'{}'".format(n) + for f, n, i in subfrags) + raise DomainError("Domain '{}' is defined by subfragments {} of fragment '{}'; " + "it is necessary to either rename subfragment domains " + "explicitly, or give names to subfragments" + .format(domain_name, ", ".join(names), ".".join(hierarchy))) + + if len(names) != len(set(names)): + names = sorted("#{}".format(i) for f, n, i in subfrags) + raise DomainError("Domain '{}' is defined by subfragments {} of fragment '{}', " + "some of which have identical names; it is necessary to either " + "rename subfragment domains explicitly, or give distinct names " + "to subfragments" + .format(domain_name, ", ".join(names), ".".join(hierarchy))) + + for subfrag, name, i in subfrags: + domain_name_map = {domain_name: "{}_{}".format(name, domain_name)} + self.subfragments[i] = (DomainRenamer(domain_name_map)(subfrag), name) + + # Finally, collect the (now unique) subfragment domains, and merge them into our domains. + for subfrag, name in self.subfragments: + for domain_name, domain in subfrag.domains.items(): + if domain.local: + continue + self.add_domains(domain) + + def _propagate_domains_down(self): + # For each domain defined in this fragment, ensure it also exists in all subfragments. + for subfrag, name in self.subfragments: + for domain in self.iter_domains(): + if domain in subfrag.domains: + assert self.domains[domain] is subfrag.domains[domain] + else: + subfrag.add_domains(self.domains[domain]) + + subfrag._propagate_domains_down() + + def _create_missing_domains(self, missing_domain, *, platform=None): + from .xfrm import DomainCollector + + collector = DomainCollector() + collector(self) + + new_domains = [] + for domain_name in collector.used_domains - collector.defined_domains: + if domain_name is None: + continue + value = missing_domain(domain_name) + if value is None: + raise DomainError("Domain '{}' is used but not defined".format(domain_name)) + if type(value) is ClockDomain: + self.add_domains(value) + # And expose ports on the newly added clock domain, since it is added directly + # and there was no chance to add any logic driving it. + new_domains.append(value) + else: + new_fragment = Fragment.get(value, platform=platform) + if domain_name not in new_fragment.domains: + defined = new_fragment.domains.keys() + raise DomainError( + "Fragment returned by missing domain callback does not define " + "requested domain '{}' (defines {})."
+ .format(domain_name, ", ".join("'{}'".format(n) for n in defined))) + self.add_subfragment(new_fragment, "cd_{}".format(domain_name)) + self.add_domains(new_fragment.domains.values()) + return new_domains + + def _propagate_domains(self, missing_domain, *, platform=None): + self._propagate_domains_up() + self._propagate_domains_down() + self._resolve_hierarchy_conflicts() + new_domains = self._create_missing_domains(missing_domain, platform=platform) + self._propagate_domains_down() + return new_domains + + def _prepare_use_def_graph(self, parent, level, uses, defs, ios, top): + def add_uses(*sigs, self=self): + for sig in flatten(sigs): + if sig not in uses: + uses[sig] = set() + uses[sig].add(self) + + def add_defs(*sigs): + for sig in flatten(sigs): + if sig not in defs: + defs[sig] = self + else: + assert defs[sig] is self + + def add_io(*sigs): + for sig in flatten(sigs): + if sig not in ios: + ios[sig] = self + else: + assert ios[sig] is self + + # Collect all signals we're driving (on LHS of statements), and signals we're using + # (on RHS of statements, or in clock domains). + for stmt in self.statements: + add_uses(stmt._rhs_signals()) + add_defs(stmt._lhs_signals()) + + for domain, _ in self.iter_sync(): + cd = self.domains[domain] + add_uses(cd.clk) + if cd.rst is not None: + add_uses(cd.rst) + + # Repeat for subfragments. + for subfrag, name in self.subfragments: + if isinstance(subfrag, Instance): + for port_name, (value, dir) in subfrag.named_ports.items(): + if dir == "i": + # Prioritize defs over uses. + rhs_without_outputs = value._rhs_signals() - subfrag.iter_ports(dir="o") + subfrag.add_ports(rhs_without_outputs, dir=dir) + add_uses(value._rhs_signals()) + if dir == "o": + subfrag.add_ports(value._lhs_signals(), dir=dir) + add_defs(value._lhs_signals()) + if dir == "io": + subfrag.add_ports(value._lhs_signals(), dir=dir) + add_io(value._lhs_signals()) + else: + parent[subfrag] = self + level [subfrag] = level[self] + 1 + + subfrag._prepare_use_def_graph(parent, level, uses, defs, ios, top) + + def _propagate_ports(self, ports, all_undef_as_ports): + # Take this fragment graph: + # + # __ B (def: q, use: p r) + # / + # A (def: p, use: q r) + # \ + # \_ C (def: r, use: p q) + # + # We need to consider three cases. + # 1. Signal p requires an input port in B; + # 2. Signal r requires an output port in C; + # 3. Signal r requires an output port in C and an input port in B. + # + # Adding these ports can be in general done in three steps for each signal: + # 1. Find the least common ancestor of all uses and defs. + # 2. Going upwards from the single def, add output ports. + # 3. Going upwards from all uses, add input ports. + + parent = {self: None} + level = {self: 0} + uses = SignalDict() + defs = SignalDict() + ios = SignalDict() + self._prepare_use_def_graph(parent, level, uses, defs, ios, self) + + ports = SignalSet(ports) + if all_undef_as_ports: + for sig in uses: + if sig in defs: + continue + ports.add(sig) + for sig in ports: + if sig not in uses: + uses[sig] = set() + uses[sig].add(self) + + @memoize + def lca_of(fragu, fragv): + # Normalize fragu to be deeper than fragv. + if level[fragu] < level[fragv]: + fragu, fragv = fragv, fragu + # Find ancestor of fragu on the same level as fragv. + for _ in range(level[fragu] - level[fragv]): + fragu = parent[fragu] + # If fragv was the ancestor of fragv, we're done. + if fragu == fragv: + return fragu + # Otherwise, they are at the same level but in different branches. 
Step both fragu + # and fragv until we find the common ancestor. + while parent[fragu] != parent[fragv]: + fragu = parent[fragu] + fragv = parent[fragv] + return parent[fragu] + + for sig in uses: + if sig in defs: + lca = reduce(lca_of, uses[sig], defs[sig]) + else: + lca = reduce(lca_of, uses[sig]) + + for frag in uses[sig]: + if sig in defs and frag is defs[sig]: + continue + while frag != lca: + frag.add_ports(sig, dir="i") + frag = parent[frag] + + if sig in defs: + frag = defs[sig] + while frag != lca: + frag.add_ports(sig, dir="o") + frag = parent[frag] + + for sig in ios: + frag = ios[sig] + while frag is not None: + frag.add_ports(sig, dir="io") + frag = parent[frag] + + for sig in ports: + if sig in ios: + continue + if sig in defs: + self.add_ports(sig, dir="o") + else: + self.add_ports(sig, dir="i") + + def prepare(self, ports=None, missing_domain=lambda name: ClockDomain(name)): + from .xfrm import SampleLowerer, DomainLowerer + + fragment = SampleLowerer()(self) + new_domains = fragment._propagate_domains(missing_domain) + fragment = DomainLowerer()(fragment) + if ports is None: + fragment._propagate_ports(ports=(), all_undef_as_ports=True) + else: + mapped_ports = [] + # Lower late bound signals like ClockSignal() to ports. + port_lowerer = DomainLowerer(fragment.domains) + for port in ports: + if not isinstance(port, (Signal, ClockSignal, ResetSignal)): + raise TypeError("Only signals may be added as ports, not {!r}" + .format(port)) + mapped_ports.append(port_lowerer.on_value(port)) + # Add ports for all newly created missing clock domains, since not doing so defeats + # the purpose of domain auto-creation. (It's possible to refer to these ports before + # the domain actually exists through late binding, but it's inconvenient.) + for cd in new_domains: + mapped_ports.append(cd.clk) + if cd.rst is not None: + mapped_ports.append(cd.rst) + fragment._propagate_ports(ports=mapped_ports, all_undef_as_ports=False) + return fragment + + +class Instance(Fragment): + def __init__(self, type, *args, **kwargs): + super().__init__() + + self.type = type + self.parameters = OrderedDict() + self.named_ports = OrderedDict() + + for (kind, name, value) in args: + if kind == "a": + self.attrs[name] = value + elif kind == "p": + self.parameters[name] = value + elif kind in ("i", "o", "io"): + self.named_ports[name] = (Value.cast(value), kind) + else: + raise NameError("Instance argument {!r} should be a tuple (kind, name, value) " + "where kind is one of \"p\", \"i\", \"o\", or \"io\"" + .format((kind, name, value))) + + for kw, arg in kwargs.items(): + if kw.startswith("a_"): + self.attrs[kw[2:]] = arg + elif kw.startswith("p_"): + self.parameters[kw[2:]] = arg + elif kw.startswith("i_"): + self.named_ports[kw[2:]] = (Value.cast(arg), "i") + elif kw.startswith("o_"): + self.named_ports[kw[2:]] = (Value.cast(arg), "o") + elif kw.startswith("io_"): + self.named_ports[kw[3:]] = (Value.cast(arg), "io") + else: + raise NameError("Instance keyword argument {}={!r} does not start with one of " + "\"p_\", \"i_\", \"o_\", or \"io_\"" + .format(kw, arg)) diff --git a/nmigen/hdl/mem.py b/nmigen/hdl/mem.py new file mode 100644 index 0000000..235cbd2 --- /dev/null +++ b/nmigen/hdl/mem.py @@ -0,0 +1,234 @@ +import operator +from collections import OrderedDict + +from .. import tracer +from .ast import * +from .ir import Elaboratable, Instance + + +__all__ = ["Memory", "ReadPort", "WritePort", "DummyPort"] + + +class Memory: + """A word addressable storage. 
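(Editor's note: the `Memory` class defined in this hunk is normally used through its `read_port()` and `write_port()` methods from within an elaboratable. A minimal usage sketch follows; it is not part of the vendored diff, and the width, depth, initial contents, and signal names are invented for illustration.)

```python
from nmigen import Module
from nmigen.hdl.mem import Memory

m = Module()
# 16 words of 8 bits; the first two words get non-zero power-on contents.
mem = Memory(width=8, depth=16, init=[0x55, 0xAA])
m.submodules.rdport = rdport = mem.read_port()   # synchronous, transparent by default
m.submodules.wrport = wrport = mem.write_port()  # synchronous, full-width writes
# rdport.addr / rdport.data and wrport.addr / wrport.data / wrport.en can now be
# wired to the surrounding logic with m.d.comb / m.d.sync assignments.
```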
+ + Parameters + ---------- + width : int + Access granularity. Each storage element of this memory is ``width`` bits in size. + depth : int + Word count. This memory contains ``depth`` storage elements. + init : list of int + Initial values. At power on, each storage element in this memory is initialized to + the corresponding element of ``init``, if any, or to zero otherwise. + Uninitialized memories are not currently supported. + name : str + Name hint for this memory. If ``None`` (default) the name is inferred from the variable + name this ``Signal`` is assigned to. + attrs : dict + Dictionary of synthesis attributes. + + Attributes + ---------- + width : int + depth : int + init : list of int + attrs : dict + """ + def __init__(self, *, width, depth, init=None, name=None, attrs=None, simulate=True): + if not isinstance(width, int) or width < 0: + raise TypeError("Memory width must be a non-negative integer, not {!r}" + .format(width)) + if not isinstance(depth, int) or depth < 0: + raise TypeError("Memory depth must be a non-negative integer, not {!r}" + .format(depth)) + + self.name = name or tracer.get_var_name(depth=2, default="$memory") + self.src_loc = tracer.get_src_loc() + + self.width = width + self.depth = depth + self.attrs = OrderedDict(() if attrs is None else attrs) + + # Array of signals for simulation. + self._array = Array() + if simulate: + for addr in range(self.depth): + self._array.append(Signal(self.width, name="{}({})" + .format(name or "memory", addr))) + + self.init = init + + @property + def init(self): + return self._init + + @init.setter + def init(self, new_init): + self._init = [] if new_init is None else list(new_init) + if len(self.init) > self.depth: + raise ValueError("Memory initialization value count exceed memory depth ({} > {})" + .format(len(self.init), self.depth)) + + try: + for addr in range(len(self._array)): + if addr < len(self._init): + self._array[addr].reset = operator.index(self._init[addr]) + else: + self._array[addr].reset = 0 + except TypeError as e: + raise TypeError("Memory initialization value at address {:x}: {}" + .format(addr, e)) from None + + def read_port(self, *, src_loc_at=0, **kwargs): + return ReadPort(self, src_loc_at=1 + src_loc_at, **kwargs) + + def write_port(self, *, src_loc_at=0, **kwargs): + return WritePort(self, src_loc_at=1 + src_loc_at, **kwargs) + + def __getitem__(self, index): + """Simulation only.""" + return self._array[index] + + +class ReadPort(Elaboratable): + def __init__(self, memory, *, domain="sync", transparent=True, src_loc_at=0): + if domain == "comb" and not transparent: + raise ValueError("Read port cannot be simultaneously asynchronous and non-transparent") + + self.memory = memory + self.domain = domain + self.transparent = transparent + + self.addr = Signal(range(memory.depth), + name="{}_r_addr".format(memory.name), src_loc_at=1 + src_loc_at) + self.data = Signal(memory.width, + name="{}_r_data".format(memory.name), src_loc_at=1 + src_loc_at) + if self.domain != "comb" and not transparent: + self.en = Signal(name="{}_r_en".format(memory.name), reset=1, + src_loc_at=2 + src_loc_at) + else: + self.en = Const(1) + + def elaborate(self, platform): + f = Instance("$memrd", + p_MEMID=self.memory, + p_ABITS=self.addr.width, + p_WIDTH=self.data.width, + p_CLK_ENABLE=self.domain != "comb", + p_CLK_POLARITY=1, + p_TRANSPARENT=self.transparent, + i_CLK=ClockSignal(self.domain) if self.domain != "comb" else Const(0), + i_EN=self.en, + i_ADDR=self.addr, + o_DATA=self.data, + ) + if self.domain == 
"comb": + # Asynchronous port + f.add_statements(self.data.eq(self.memory._array[self.addr])) + f.add_driver(self.data) + elif not self.transparent: + # Synchronous, read-before-write port + f.add_statements( + Switch(self.en, { + 1: self.data.eq(self.memory._array[self.addr]) + }) + ) + f.add_driver(self.data, self.domain) + else: + # Synchronous, write-through port + # This model is a bit unconventional. We model transparent ports as asynchronous ports + # that are latched when the clock is high. This isn't exactly correct, but it is very + # close to the correct behavior of a transparent port, and the difference should only + # be observable in pathological cases of clock gating. A register is injected to + # the address input to achieve the correct address-to-data latency. Also, the reset + # value of the data output is forcibly set to the 0th initial value, if any--note that + # many FPGAs do not guarantee this behavior! + if len(self.memory.init) > 0: + self.data.reset = self.memory.init[0] + latch_addr = Signal.like(self.addr) + f.add_statements( + latch_addr.eq(self.addr), + Switch(ClockSignal(self.domain), { + 0: self.data.eq(self.data), + 1: self.data.eq(self.memory._array[latch_addr]), + }), + ) + f.add_driver(latch_addr, self.domain) + f.add_driver(self.data) + return f + + +class WritePort(Elaboratable): + def __init__(self, memory, *, domain="sync", granularity=None, src_loc_at=0): + if granularity is None: + granularity = memory.width + if not isinstance(granularity, int) or granularity < 0: + raise TypeError("Write port granularity must be a non-negative integer, not {!r}" + .format(granularity)) + if granularity > memory.width: + raise ValueError("Write port granularity must not be greater than memory width " + "({} > {})" + .format(granularity, memory.width)) + if memory.width // granularity * granularity != memory.width: + raise ValueError("Write port granularity must divide memory width evenly") + + self.memory = memory + self.domain = domain + self.granularity = granularity + + self.addr = Signal(range(memory.depth), + name="{}_w_addr".format(memory.name), src_loc_at=1 + src_loc_at) + self.data = Signal(memory.width, + name="{}_w_data".format(memory.name), src_loc_at=1 + src_loc_at) + self.en = Signal(memory.width // granularity, + name="{}_w_en".format(memory.name), src_loc_at=1 + src_loc_at) + + def elaborate(self, platform): + f = Instance("$memwr", + p_MEMID=self.memory, + p_ABITS=self.addr.width, + p_WIDTH=self.data.width, + p_CLK_ENABLE=1, + p_CLK_POLARITY=1, + p_PRIORITY=0, + i_CLK=ClockSignal(self.domain), + i_EN=Cat(Repl(en_bit, self.granularity) for en_bit in self.en), + i_ADDR=self.addr, + i_DATA=self.data, + ) + if len(self.en) > 1: + for index, en_bit in enumerate(self.en): + offset = index * self.granularity + bits = slice(offset, offset + self.granularity) + write_data = self.memory._array[self.addr][bits].eq(self.data[bits]) + f.add_statements(Switch(en_bit, { 1: write_data })) + else: + write_data = self.memory._array[self.addr].eq(self.data) + f.add_statements(Switch(self.en, { 1: write_data })) + for signal in self.memory._array: + f.add_driver(signal, self.domain) + return f + + +class DummyPort: + """Dummy memory port. + + This port can be used in place of either a read or a write port for testing and verification. + It does not include any read/write port specific attributes, i.e. none besides ``"domain"``; + any such attributes may be set manually. 
+ """ + def __init__(self, *, data_width, addr_width, domain="sync", name=None, granularity=None): + self.domain = domain + + if granularity is None: + granularity = data_width + if name is None: + name = tracer.get_var_name(depth=2, default="dummy") + + self.addr = Signal(addr_width, + name="{}_addr".format(name), src_loc_at=1) + self.data = Signal(data_width, + name="{}_data".format(name), src_loc_at=1) + self.en = Signal(data_width // granularity, + name="{}_en".format(name), src_loc_at=1) diff --git a/nmigen/hdl/rec.py b/nmigen/hdl/rec.py new file mode 100644 index 0000000..54b270a --- /dev/null +++ b/nmigen/hdl/rec.py @@ -0,0 +1,228 @@ +from enum import Enum +from collections import OrderedDict +from functools import reduce + +from .. import tracer +from .._utils import union, deprecated +from .ast import * + + +__all__ = ["Direction", "DIR_NONE", "DIR_FANOUT", "DIR_FANIN", "Layout", "Record"] + + +Direction = Enum('Direction', ('NONE', 'FANOUT', 'FANIN')) + +DIR_NONE = Direction.NONE +DIR_FANOUT = Direction.FANOUT +DIR_FANIN = Direction.FANIN + + +class Layout: + @staticmethod + def cast(obj, *, src_loc_at=0): + if isinstance(obj, Layout): + return obj + return Layout(obj, src_loc_at=1 + src_loc_at) + + def __init__(self, fields, *, src_loc_at=0): + self.fields = OrderedDict() + for field in fields: + if not isinstance(field, tuple) or len(field) not in (2, 3): + raise TypeError("Field {!r} has invalid layout: should be either " + "(name, shape) or (name, shape, direction)" + .format(field)) + if len(field) == 2: + name, shape = field + direction = DIR_NONE + if isinstance(shape, list): + shape = Layout.cast(shape) + else: + name, shape, direction = field + if not isinstance(direction, Direction): + raise TypeError("Field {!r} has invalid direction: should be a Direction " + "instance like DIR_FANIN" + .format(field)) + if not isinstance(name, str): + raise TypeError("Field {!r} has invalid name: should be a string" + .format(field)) + if not isinstance(shape, Layout): + try: + shape = Shape.cast(shape, src_loc_at=1 + src_loc_at) + except Exception as error: + raise TypeError("Field {!r} has invalid shape: should be castable to Shape " + "or a list of fields of a nested record" + .format(field)) + if name in self.fields: + raise NameError("Field {!r} has a name that is already present in the layout" + .format(field)) + self.fields[name] = (shape, direction) + + def __getitem__(self, item): + if isinstance(item, tuple): + return Layout([ + (name, shape, dir) + for (name, (shape, dir)) in self.fields.items() + if name in item + ]) + + return self.fields[item] + + def __iter__(self): + for name, (shape, dir) in self.fields.items(): + yield (name, shape, dir) + + def __eq__(self, other): + return self.fields == other.fields + + +# Unlike most Values, Record *can* be subclassed. 
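(Editor's note: before the `Record` class itself, a brief usage sketch of the `Layout`/`Record` pair defined in this file may help. It is not part of the vendored diff; the field names, widths, and directions are invented.)

```python
from nmigen.hdl.rec import Record, Layout, DIR_FANOUT, DIR_FANIN

# A simple stream-like interface: 'data' and 'valid' fan out from the initiator,
# 'ready' fans back in from the target.
stream_layout = Layout([
    ("data",  8, DIR_FANOUT),
    ("valid", 1, DIR_FANOUT),
    ("ready", 1, DIR_FANIN),
])

source = Record(stream_layout)
sink   = Record(stream_layout)
# source.connect(sink) returns the assignment statements wiring fanout fields
# source -> sink and fanin fields sink -> source; they would typically be added
# to a module with m.d.comb += source.connect(sink).
```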
+class Record(Value): + @staticmethod + def like(other, *, name=None, name_suffix=None, src_loc_at=0): + if name is not None: + new_name = str(name) + elif name_suffix is not None: + new_name = other.name + str(name_suffix) + else: + new_name = tracer.get_var_name(depth=2 + src_loc_at, default=None) + + def concat(a, b): + if a is None: + return b + return "{}__{}".format(a, b) + + fields = {} + for field_name in other.fields: + field = other[field_name] + if isinstance(field, Record): + fields[field_name] = Record.like(field, name=concat(new_name, field_name), + src_loc_at=1 + src_loc_at) + else: + fields[field_name] = Signal.like(field, name=concat(new_name, field_name), + src_loc_at=1 + src_loc_at) + + return Record(other.layout, name=new_name, fields=fields, src_loc_at=1) + + def __init__(self, layout, *, name=None, fields=None, src_loc_at=0): + if name is None: + name = tracer.get_var_name(depth=2 + src_loc_at, default=None) + + self.name = name + self.src_loc = tracer.get_src_loc(src_loc_at) + + def concat(a, b): + if a is None: + return b + return "{}__{}".format(a, b) + + self.layout = Layout.cast(layout, src_loc_at=1 + src_loc_at) + self.fields = OrderedDict() + for field_name, field_shape, field_dir in self.layout: + if fields is not None and field_name in fields: + field = fields[field_name] + if isinstance(field_shape, Layout): + assert isinstance(field, Record) and field_shape == field.layout + else: + assert isinstance(field, Signal) and field_shape == field.shape() + self.fields[field_name] = field + else: + if isinstance(field_shape, Layout): + self.fields[field_name] = Record(field_shape, name=concat(name, field_name), + src_loc_at=1 + src_loc_at) + else: + self.fields[field_name] = Signal(field_shape, name=concat(name, field_name), + src_loc_at=1 + src_loc_at) + + def __getattr__(self, name): + return self[name] + + def __getitem__(self, item): + if isinstance(item, str): + try: + return self.fields[item] + except KeyError: + if self.name is None: + reference = "Unnamed record" + else: + reference = "Record '{}'".format(self.name) + raise AttributeError("{} does not have a field '{}'. Did you mean one of: {}?" 
+ .format(reference, item, ", ".join(self.fields))) from None + elif isinstance(item, tuple): + return Record(self.layout[item], fields={ + field_name: field_value + for field_name, field_value in self.fields.items() + if field_name in item + }) + else: + return super().__getitem__(item) + + def shape(self): + return Shape(sum(len(f) for f in self.fields.values())) + + def _lhs_signals(self): + return union((f._lhs_signals() for f in self.fields.values()), start=SignalSet()) + + def _rhs_signals(self): + return union((f._rhs_signals() for f in self.fields.values()), start=SignalSet()) + + def __repr__(self): + fields = [] + for field_name, field in self.fields.items(): + if isinstance(field, Signal): + fields.append(field_name) + else: + fields.append(repr(field)) + name = self.name + if name is None: + name = "<unnamed>" + return "(rec {} {})".format(name, " ".join(fields)) + + def connect(self, *subordinates, include=None, exclude=None): + def rec_name(record): + if record.name is None: + return "unnamed record" + else: + return "record '{}'".format(record.name) + + for field in include or {}: + if field not in self.fields: + raise AttributeError("Cannot include field '{}' because it is not present in {}" + .format(field, rec_name(self))) + for field in exclude or {}: + if field not in self.fields: + raise AttributeError("Cannot exclude field '{}' because it is not present in {}" + .format(field, rec_name(self))) + + stmts = [] + for field in self.fields: + if include is not None and field not in include: + continue + if exclude is not None and field in exclude: + continue + + shape, direction = self.layout[field] + if not isinstance(shape, Layout) and direction == DIR_NONE: + raise TypeError("Cannot connect field '{}' of {} because it does not have " + "a direction" + .format(field, rec_name(self))) + + item = self.fields[field] + subord_items = [] + for subord in subordinates: + if field not in subord.fields: + raise AttributeError("Cannot connect field '{}' of {} to subordinate {} " + "because the subordinate record does not have this field" + .format(field, rec_name(self), rec_name(subord))) + subord_items.append(subord.fields[field]) + + if isinstance(shape, Layout): + sub_include = include[field] if include and field in include else None + sub_exclude = exclude[field] if exclude and field in exclude else None + stmts += item.connect(*subord_items, include=sub_include, exclude=sub_exclude) + else: + if direction == DIR_FANOUT: + stmts += [sub_item.eq(item) for sub_item in subord_items] + if direction == DIR_FANIN: + stmts += [item.eq(reduce(lambda a, b: a | b, subord_items))] + + return stmts diff --git a/nmigen/hdl/xfrm.py b/nmigen/hdl/xfrm.py new file mode 100644 index 0000000..fb4dad9 --- /dev/null +++ b/nmigen/hdl/xfrm.py @@ -0,0 +1,754 @@ +from abc import ABCMeta, abstractmethod +from collections import OrderedDict +from collections.abc import Iterable + +from .._utils import flatten, deprecated +from ..
import tracer +from .ast import * +from .ast import _StatementList +from .cd import * +from .ir import * +from .rec import * + + +__all__ = ["ValueVisitor", "ValueTransformer", + "StatementVisitor", "StatementTransformer", + "FragmentTransformer", + "TransformedElaboratable", + "DomainCollector", "DomainRenamer", "DomainLowerer", + "SampleDomainInjector", "SampleLowerer", + "SwitchCleaner", "LHSGroupAnalyzer", "LHSGroupFilter", + "ResetInserter", "EnableInserter"] + + +class ValueVisitor(metaclass=ABCMeta): + @abstractmethod + def on_Const(self, value): + pass # :nocov: + + @abstractmethod + def on_AnyConst(self, value): + pass # :nocov: + + @abstractmethod + def on_AnySeq(self, value): + pass # :nocov: + + @abstractmethod + def on_Signal(self, value): + pass # :nocov: + + @abstractmethod + def on_Record(self, value): + pass # :nocov: + + @abstractmethod + def on_ClockSignal(self, value): + pass # :nocov: + + @abstractmethod + def on_ResetSignal(self, value): + pass # :nocov: + + @abstractmethod + def on_Operator(self, value): + pass # :nocov: + + @abstractmethod + def on_Slice(self, value): + pass # :nocov: + + @abstractmethod + def on_Part(self, value): + pass # :nocov: + + @abstractmethod + def on_Cat(self, value): + pass # :nocov: + + @abstractmethod + def on_Repl(self, value): + pass # :nocov: + + @abstractmethod + def on_ArrayProxy(self, value): + pass # :nocov: + + @abstractmethod + def on_Sample(self, value): + pass # :nocov: + + @abstractmethod + def on_Initial(self, value): + pass # :nocov: + + def on_unknown_value(self, value): + raise TypeError("Cannot transform value {!r}".format(value)) # :nocov: + + def replace_value_src_loc(self, value, new_value): + return True + + def on_value(self, value): + if type(value) is Const: + new_value = self.on_Const(value) + elif type(value) is AnyConst: + new_value = self.on_AnyConst(value) + elif type(value) is AnySeq: + new_value = self.on_AnySeq(value) + elif isinstance(value, Signal): + # Uses `isinstance()` and not `type() is` because nmigen.compat requires it. + new_value = self.on_Signal(value) + elif isinstance(value, Record): + # Uses `isinstance()` and not `type() is` to allow inheriting from Record. + new_value = self.on_Record(value) + elif type(value) is ClockSignal: + new_value = self.on_ClockSignal(value) + elif type(value) is ResetSignal: + new_value = self.on_ResetSignal(value) + elif type(value) is Operator: + new_value = self.on_Operator(value) + elif type(value) is Slice: + new_value = self.on_Slice(value) + elif type(value) is Part: + new_value = self.on_Part(value) + elif type(value) is Cat: + new_value = self.on_Cat(value) + elif type(value) is Repl: + new_value = self.on_Repl(value) + elif type(value) is ArrayProxy: + new_value = self.on_ArrayProxy(value) + elif type(value) is Sample: + new_value = self.on_Sample(value) + elif type(value) is Initial: + new_value = self.on_Initial(value) + elif isinstance(value, UserValue): + # Uses `isinstance()` and not `type() is` to allow inheriting. 
+ new_value = self.on_value(value._lazy_lower()) + else: + new_value = self.on_unknown_value(value) + if isinstance(new_value, Value) and self.replace_value_src_loc(value, new_value): + new_value.src_loc = value.src_loc + return new_value + + def __call__(self, value): + return self.on_value(value) + + +class ValueTransformer(ValueVisitor): + def on_Const(self, value): + return value + + def on_AnyConst(self, value): + return value + + def on_AnySeq(self, value): + return value + + def on_Signal(self, value): + return value + + def on_Record(self, value): + return value + + def on_ClockSignal(self, value): + return value + + def on_ResetSignal(self, value): + return value + + def on_Operator(self, value): + return Operator(value.operator, [self.on_value(o) for o in value.operands]) + + def on_Slice(self, value): + return Slice(self.on_value(value.value), value.start, value.stop) + + def on_Part(self, value): + return Part(self.on_value(value.value), self.on_value(value.offset), + value.width, value.stride) + + def on_Cat(self, value): + return Cat(self.on_value(o) for o in value.parts) + + def on_Repl(self, value): + return Repl(self.on_value(value.value), value.count) + + def on_ArrayProxy(self, value): + return ArrayProxy([self.on_value(elem) for elem in value._iter_as_values()], + self.on_value(value.index)) + + def on_Sample(self, value): + return Sample(self.on_value(value.value), value.clocks, value.domain) + + def on_Initial(self, value): + return value + + +class StatementVisitor(metaclass=ABCMeta): + @abstractmethod + def on_Assign(self, stmt): + pass # :nocov: + + @abstractmethod + def on_Assert(self, stmt): + pass # :nocov: + + @abstractmethod + def on_Assume(self, stmt): + pass # :nocov: + + @abstractmethod + def on_Cover(self, stmt): + pass # :nocov: + + @abstractmethod + def on_Switch(self, stmt): + pass # :nocov: + + @abstractmethod + def on_statements(self, stmts): + pass # :nocov: + + def on_unknown_statement(self, stmt): + raise TypeError("Cannot transform statement {!r}".format(stmt)) # :nocov: + + def replace_statement_src_loc(self, stmt, new_stmt): + return True + + def on_statement(self, stmt): + if type(stmt) is Assign: + new_stmt = self.on_Assign(stmt) + elif type(stmt) is Assert: + new_stmt = self.on_Assert(stmt) + elif type(stmt) is Assume: + new_stmt = self.on_Assume(stmt) + elif type(stmt) is Cover: + new_stmt = self.on_Cover(stmt) + elif isinstance(stmt, Switch): + # Uses `isinstance()` and not `type() is` because nmigen.compat requires it. 
+ new_stmt = self.on_Switch(stmt) + elif isinstance(stmt, Iterable): + new_stmt = self.on_statements(stmt) + else: + new_stmt = self.on_unknown_statement(stmt) + if isinstance(new_stmt, Statement) and self.replace_statement_src_loc(stmt, new_stmt): + new_stmt.src_loc = stmt.src_loc + if isinstance(new_stmt, Switch) and isinstance(stmt, Switch): + new_stmt.case_src_locs = stmt.case_src_locs + if isinstance(new_stmt, Property): + new_stmt._MustUse__used = True + return new_stmt + + def __call__(self, stmt): + return self.on_statement(stmt) + + +class StatementTransformer(StatementVisitor): + def on_value(self, value): + return value + + def on_Assign(self, stmt): + return Assign(self.on_value(stmt.lhs), self.on_value(stmt.rhs)) + + def on_Assert(self, stmt): + return Assert(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en) + + def on_Assume(self, stmt): + return Assume(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en) + + def on_Cover(self, stmt): + return Cover(self.on_value(stmt.test), _check=stmt._check, _en=stmt._en) + + def on_Switch(self, stmt): + cases = OrderedDict((k, self.on_statement(s)) for k, s in stmt.cases.items()) + return Switch(self.on_value(stmt.test), cases) + + def on_statements(self, stmts): + return _StatementList(flatten(self.on_statement(stmt) for stmt in stmts)) + + +class FragmentTransformer: + def map_subfragments(self, fragment, new_fragment): + for subfragment, name in fragment.subfragments: + new_fragment.add_subfragment(self(subfragment), name) + + def map_ports(self, fragment, new_fragment): + for port, dir in fragment.ports.items(): + new_fragment.add_ports(port, dir=dir) + + def map_named_ports(self, fragment, new_fragment): + if hasattr(self, "on_value"): + for name, (value, dir) in fragment.named_ports.items(): + new_fragment.named_ports[name] = self.on_value(value), dir + else: + new_fragment.named_ports = OrderedDict(fragment.named_ports.items()) + + def map_domains(self, fragment, new_fragment): + for domain in fragment.iter_domains(): + new_fragment.add_domains(fragment.domains[domain]) + + def map_statements(self, fragment, new_fragment): + if hasattr(self, "on_statement"): + new_fragment.add_statements(map(self.on_statement, fragment.statements)) + else: + new_fragment.add_statements(fragment.statements) + + def map_drivers(self, fragment, new_fragment): + for domain, signal in fragment.iter_drivers(): + new_fragment.add_driver(signal, domain) + + def on_fragment(self, fragment): + if isinstance(fragment, Instance): + new_fragment = Instance(fragment.type) + new_fragment.parameters = OrderedDict(fragment.parameters) + self.map_named_ports(fragment, new_fragment) + else: + new_fragment = Fragment() + new_fragment.flatten = fragment.flatten + new_fragment.attrs = OrderedDict(fragment.attrs) + self.map_ports(fragment, new_fragment) + self.map_subfragments(fragment, new_fragment) + self.map_domains(fragment, new_fragment) + self.map_statements(fragment, new_fragment) + self.map_drivers(fragment, new_fragment) + return new_fragment + + def __call__(self, value, *, src_loc_at=0): + if isinstance(value, Fragment): + return self.on_fragment(value) + elif isinstance(value, TransformedElaboratable): + value._transforms_.append(self) + return value + elif hasattr(value, "elaborate"): + value = TransformedElaboratable(value, src_loc_at=1 + src_loc_at) + value._transforms_.append(self) + return value + else: + raise AttributeError("Object {!r} cannot be elaborated".format(value)) + + +class TransformedElaboratable(Elaboratable): + def 
__init__(self, elaboratable, *, src_loc_at=0): + assert hasattr(elaboratable, "elaborate") + + # Fields prefixed and suffixed with underscore to avoid as many conflicts with the inner + # object as possible, since we're forwarding attribute requests to it. + self._elaboratable_ = elaboratable + self._transforms_ = [] + + def __getattr__(self, attr): + return getattr(self._elaboratable_, attr) + + def elaborate(self, platform): + fragment = Fragment.get(self._elaboratable_, platform) + for transform in self._transforms_: + fragment = transform(fragment) + return fragment + + +class DomainCollector(ValueVisitor, StatementVisitor): + def __init__(self): + self.used_domains = set() + self.defined_domains = set() + self._local_domains = set() + + def _add_used_domain(self, domain_name): + if domain_name is None: + return + if domain_name in self._local_domains: + return + self.used_domains.add(domain_name) + + def on_ignore(self, value): + pass + + on_Const = on_ignore + on_AnyConst = on_ignore + on_AnySeq = on_ignore + on_Signal = on_ignore + + def on_ClockSignal(self, value): + self._add_used_domain(value.domain) + + def on_ResetSignal(self, value): + self._add_used_domain(value.domain) + + on_Record = on_ignore + + def on_Operator(self, value): + for o in value.operands: + self.on_value(o) + + def on_Slice(self, value): + self.on_value(value.value) + + def on_Part(self, value): + self.on_value(value.value) + self.on_value(value.offset) + + def on_Cat(self, value): + for o in value.parts: + self.on_value(o) + + def on_Repl(self, value): + self.on_value(value.value) + + def on_ArrayProxy(self, value): + for elem in value._iter_as_values(): + self.on_value(elem) + self.on_value(value.index) + + def on_Sample(self, value): + self.on_value(value.value) + + def on_Initial(self, value): + pass + + def on_Assign(self, stmt): + self.on_value(stmt.lhs) + self.on_value(stmt.rhs) + + def on_property(self, stmt): + self.on_value(stmt.test) + + on_Assert = on_property + on_Assume = on_property + on_Cover = on_property + + def on_Switch(self, stmt): + self.on_value(stmt.test) + for stmts in stmt.cases.values(): + self.on_statement(stmts) + + def on_statements(self, stmts): + for stmt in stmts: + self.on_statement(stmt) + + def on_fragment(self, fragment): + if isinstance(fragment, Instance): + for name, (value, dir) in fragment.named_ports.items(): + self.on_value(value) + + old_local_domains, self._local_domains = self._local_domains, set(self._local_domains) + for domain_name, domain in fragment.domains.items(): + if domain.local: + self._local_domains.add(domain_name) + else: + self.defined_domains.add(domain_name) + + self.on_statements(fragment.statements) + for domain_name in fragment.drivers: + self._add_used_domain(domain_name) + for subfragment, name in fragment.subfragments: + self.on_fragment(subfragment) + + self._local_domains = old_local_domains + + def __call__(self, fragment): + self.on_fragment(fragment) + + +class DomainRenamer(FragmentTransformer, ValueTransformer, StatementTransformer): + def __init__(self, domain_map): + if isinstance(domain_map, str): + domain_map = {"sync": domain_map} + for src, dst in domain_map.items(): + if src == "comb": + raise ValueError("Domain '{}' may not be renamed".format(src)) + if dst == "comb": + raise ValueError("Domain '{}' may not be renamed to '{}'".format(src, dst)) + self.domain_map = OrderedDict(domain_map) + + def on_ClockSignal(self, value): + if value.domain in self.domain_map: + return ClockSignal(self.domain_map[value.domain]) + return 
value + + def on_ResetSignal(self, value): + if value.domain in self.domain_map: + return ResetSignal(self.domain_map[value.domain]) + return value + + def map_domains(self, fragment, new_fragment): + for domain in fragment.iter_domains(): + cd = fragment.domains[domain] + if domain in self.domain_map: + if cd.name == domain: + # Rename the actual ClockDomain object. + cd.rename(self.domain_map[domain]) + else: + assert cd.name == self.domain_map[domain] + new_fragment.add_domains(cd) + + def map_drivers(self, fragment, new_fragment): + for domain, signals in fragment.drivers.items(): + if domain in self.domain_map: + domain = self.domain_map[domain] + for signal in signals: + new_fragment.add_driver(self.on_value(signal), domain) + + +class DomainLowerer(FragmentTransformer, ValueTransformer, StatementTransformer): + def __init__(self, domains=None): + self.domains = domains + + def _resolve(self, domain, context): + if domain not in self.domains: + raise DomainError("Signal {!r} refers to nonexistent domain '{}'" + .format(context, domain)) + return self.domains[domain] + + def map_drivers(self, fragment, new_fragment): + for domain, signal in fragment.iter_drivers(): + new_fragment.add_driver(self.on_value(signal), domain) + + def replace_value_src_loc(self, value, new_value): + return not isinstance(value, (ClockSignal, ResetSignal)) + + def on_ClockSignal(self, value): + domain = self._resolve(value.domain, value) + return domain.clk + + def on_ResetSignal(self, value): + domain = self._resolve(value.domain, value) + if domain.rst is None: + if value.allow_reset_less: + return Const(0) + else: + raise DomainError("Signal {!r} refers to reset of reset-less domain '{}'" + .format(value, value.domain)) + return domain.rst + + def _insert_resets(self, fragment): + for domain_name, signals in fragment.drivers.items(): + if domain_name is None: + continue + domain = fragment.domains[domain_name] + if domain.rst is None: + continue + stmts = [signal.eq(Const(signal.reset, signal.width)) + for signal in signals if not signal.reset_less] + fragment.add_statements(Switch(domain.rst, {1: stmts})) + + def on_fragment(self, fragment): + self.domains = fragment.domains + new_fragment = super().on_fragment(fragment) + self._insert_resets(new_fragment) + return new_fragment + + +class SampleDomainInjector(ValueTransformer, StatementTransformer): + def __init__(self, domain): + self.domain = domain + + def on_Sample(self, value): + if value.domain is not None: + return value + return Sample(value.value, value.clocks, self.domain) + + def __call__(self, stmts): + return self.on_statement(stmts) + + +class SampleLowerer(FragmentTransformer, ValueTransformer, StatementTransformer): + def __init__(self): + self.initial = None + self.sample_cache = None + self.sample_stmts = None + + def _name_reset(self, value): + if isinstance(value, Const): + return "c${}".format(value.value), value.value + elif isinstance(value, Signal): + return "s${}".format(value.name), value.reset + elif isinstance(value, ClockSignal): + return "clk", 0 + elif isinstance(value, ResetSignal): + return "rst", 1 + elif isinstance(value, Initial): + return "init", 0 # Past(Initial()) produces 0, 1, 0, 0, ... 
+ else: + raise NotImplementedError # :nocov: + + def on_Sample(self, value): + if value in self.sample_cache: + return self.sample_cache[value] + + sampled_value = self.on_value(value.value) + if value.clocks == 0: + sample = sampled_value + else: + assert value.domain is not None + sampled_name, sampled_reset = self._name_reset(value.value) + name = "$sample${}${}${}".format(sampled_name, value.domain, value.clocks) + sample = Signal.like(value.value, name=name, reset_less=True, reset=sampled_reset) + sample.attrs["nmigen.sample_reg"] = True + + prev_sample = self.on_Sample(Sample(sampled_value, value.clocks - 1, value.domain)) + if value.domain not in self.sample_stmts: + self.sample_stmts[value.domain] = [] + self.sample_stmts[value.domain].append(sample.eq(prev_sample)) + + self.sample_cache[value] = sample + return sample + + def on_Initial(self, value): + if self.initial is None: + self.initial = Signal(name="init") + return self.initial + + def map_statements(self, fragment, new_fragment): + self.initial = None + self.sample_cache = ValueDict() + self.sample_stmts = OrderedDict() + new_fragment.add_statements(map(self.on_statement, fragment.statements)) + for domain, stmts in self.sample_stmts.items(): + new_fragment.add_statements(stmts) + for stmt in stmts: + new_fragment.add_driver(stmt.lhs, domain) + if self.initial is not None: + new_fragment.add_subfragment(Instance("$initstate", o_Y=self.initial)) + + +class SwitchCleaner(StatementVisitor): + def on_ignore(self, stmt): + return stmt + + on_Assign = on_ignore + on_Assert = on_ignore + on_Assume = on_ignore + on_Cover = on_ignore + + def on_Switch(self, stmt): + cases = OrderedDict((k, self.on_statement(s)) for k, s in stmt.cases.items()) + if any(len(s) for s in cases.values()): + return Switch(stmt.test, cases) + + def on_statements(self, stmts): + stmts = flatten(self.on_statement(stmt) for stmt in stmts) + return _StatementList(stmt for stmt in stmts if stmt is not None) + + +class LHSGroupAnalyzer(StatementVisitor): + def __init__(self): + self.signals = SignalDict() + self.unions = OrderedDict() + + def find(self, signal): + if signal not in self.signals: + self.signals[signal] = len(self.signals) + group = self.signals[signal] + while group in self.unions: + group = self.unions[group] + self.signals[signal] = group + return group + + def unify(self, root, *leaves): + root_group = self.find(root) + for leaf in leaves: + leaf_group = self.find(leaf) + if root_group == leaf_group: + continue + self.unions[leaf_group] = root_group + + def groups(self): + groups = OrderedDict() + for signal in self.signals: + group = self.find(signal) + if group not in groups: + groups[group] = SignalSet() + groups[group].add(signal) + return groups + + def on_Assign(self, stmt): + lhs_signals = stmt._lhs_signals() + if lhs_signals: + self.unify(*stmt._lhs_signals()) + + def on_property(self, stmt): + lhs_signals = stmt._lhs_signals() + if lhs_signals: + self.unify(*stmt._lhs_signals()) + + on_Assert = on_property + on_Assume = on_property + on_Cover = on_property + + def on_Switch(self, stmt): + for case_stmts in stmt.cases.values(): + self.on_statements(case_stmts) + + def on_statements(self, stmts): + for stmt in stmts: + self.on_statement(stmt) + + def __call__(self, stmts): + self.on_statements(stmts) + return self.groups() + + +class LHSGroupFilter(SwitchCleaner): + def __init__(self, signals): + self.signals = signals + + def on_Assign(self, stmt): + # The invariant provided by LHSGroupAnalyzer is that all signals that ever appear 
together + # on LHS are a part of the same group, so it is sufficient to check any of them. + lhs_signals = stmt.lhs._lhs_signals() + if lhs_signals: + any_lhs_signal = next(iter(lhs_signals)) + if any_lhs_signal in self.signals: + return stmt + + def on_property(self, stmt): + any_lhs_signal = next(iter(stmt._lhs_signals())) + if any_lhs_signal in self.signals: + return stmt + + on_Assert = on_property + on_Assume = on_property + on_Cover = on_property + + +class _ControlInserter(FragmentTransformer): + def __init__(self, controls): + self.src_loc = None + if isinstance(controls, Value): + controls = {"sync": controls} + self.controls = OrderedDict(controls) + + def on_fragment(self, fragment): + new_fragment = super().on_fragment(fragment) + for domain, signals in fragment.drivers.items(): + if domain is None or domain not in self.controls: + continue + self._insert_control(new_fragment, domain, signals) + return new_fragment + + def _insert_control(self, fragment, domain, signals): + raise NotImplementedError # :nocov: + + def __call__(self, value, *, src_loc_at=0): + self.src_loc = tracer.get_src_loc(src_loc_at=src_loc_at) + return super().__call__(value, src_loc_at=1 + src_loc_at) + + +class ResetInserter(_ControlInserter): + def _insert_control(self, fragment, domain, signals): + stmts = [s.eq(Const(s.reset, s.width)) for s in signals if not s.reset_less] + fragment.add_statements(Switch(self.controls[domain], {1: stmts}, src_loc=self.src_loc)) + + +class EnableInserter(_ControlInserter): + def _insert_control(self, fragment, domain, signals): + stmts = [s.eq(s) for s in signals] + fragment.add_statements(Switch(self.controls[domain], {0: stmts}, src_loc=self.src_loc)) + + def on_fragment(self, fragment): + new_fragment = super().on_fragment(fragment) + if isinstance(new_fragment, Instance) and new_fragment.type in ("$memrd", "$memwr"): + clk_port, clk_dir = new_fragment.named_ports["CLK"] + if isinstance(clk_port, ClockSignal) and clk_port.domain in self.controls: + en_port, en_dir = new_fragment.named_ports["EN"] + en_port = Mux(self.controls[clk_port.domain], en_port, Const(0, len(en_port))) + new_fragment.named_ports["EN"] = en_port, en_dir + return new_fragment diff --git a/nmigen/lib/__init__.py b/nmigen/lib/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/lib/__pycache__/__init__.cpython-37.pyc b/nmigen/lib/__pycache__/__init__.cpython-37.pyc new file mode 100644 index 0000000..a82a5fa Binary files /dev/null and b/nmigen/lib/__pycache__/__init__.cpython-37.pyc differ diff --git a/nmigen/lib/__pycache__/cdc.cpython-37.pyc b/nmigen/lib/__pycache__/cdc.cpython-37.pyc new file mode 100644 index 0000000..14e7560 Binary files /dev/null and b/nmigen/lib/__pycache__/cdc.cpython-37.pyc differ diff --git a/nmigen/lib/__pycache__/coding.cpython-37.pyc b/nmigen/lib/__pycache__/coding.cpython-37.pyc new file mode 100644 index 0000000..a79a437 Binary files /dev/null and b/nmigen/lib/__pycache__/coding.cpython-37.pyc differ diff --git a/nmigen/lib/__pycache__/fifo.cpython-37.pyc b/nmigen/lib/__pycache__/fifo.cpython-37.pyc new file mode 100644 index 0000000..e5be384 Binary files /dev/null and b/nmigen/lib/__pycache__/fifo.cpython-37.pyc differ diff --git a/nmigen/lib/cdc.py b/nmigen/lib/cdc.py new file mode 100644 index 0000000..09cdd95 --- /dev/null +++ b/nmigen/lib/cdc.py @@ -0,0 +1,252 @@ +from .._utils import deprecated +from .. 
import * + + +__all__ = ["FFSynchronizer", "AsyncFFSynchronizer", "ResetSynchronizer", "PulseSynchronizer"] + + +def _check_stages(stages): + if not isinstance(stages, int) or stages < 1: + raise TypeError("Synchronization stage count must be a positive integer, not {!r}" + .format(stages)) + if stages < 2: + raise ValueError("Synchronization stage count may not safely be less than 2") + + +class FFSynchronizer(Elaboratable): + """Resynchronise a signal to a different clock domain. + + Consists of a chain of flip-flops. Eliminates metastabilities at the output, but provides + no other guarantee as to the safe domain-crossing of a signal. + + Parameters + ---------- + i : Signal(n), in + Signal to be resynchronised. + o : Signal(n), out + Signal connected to synchroniser output. + o_domain : str + Name of output clock domain. + reset : int + Reset value of the flip-flops. On FPGAs, even if ``reset_less`` is True, + the :class:`FFSynchronizer` is still set to this value during initialization. + reset_less : bool + If ``True`` (the default), this :class:`FFSynchronizer` is unaffected by ``o_domain`` + reset. See "Note on Reset" below. + stages : int + Number of synchronization stages between input and output. The lowest safe number is 2, + with higher numbers reducing MTBF further, at the cost of increased latency. + max_input_delay : None or float + Maximum delay from the input signal's clock to the first synchronization stage, in seconds. + If specified and the platform does not support it, elaboration will fail. + + Platform override + ----------------- + Define the ``get_ff_sync`` platform method to override the implementation of + :class:`FFSynchronizer`, e.g. to instantiate library cells directly. + + Note on Reset + ------------- + :class:`FFSynchronizer` is non-resettable by default. Usually this is the safest option; + on FPGAs the :class:`FFSynchronizer` will still be initialized to its ``reset`` value when + the FPGA loads its configuration. + + However, in designs where the value of the :class:`FFSynchronizer` must be valid immediately + after reset, consider setting ``reset_less`` to False if any of the following is true: + + - You are targeting an ASIC, or an FPGA that does not allow arbitrary initial flip-flop states; + - Your design features warm (non-power-on) resets of ``o_domain``, so the one-time + initialization at power on is insufficient; + - Your design features a sequenced reset, and the :class:`FFSynchronizer` must maintain + its reset value until ``o_domain`` reset specifically is deasserted. + + :class:`FFSynchronizer` is reset by the ``o_domain`` reset only. 
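(Editor's note: a minimal instantiation sketch for `FFSynchronizer`, not part of the vendored source. The signal names are invented, and two stages is simply the documented minimum.)

```python
from nmigen import Module, Signal
from nmigen.lib.cdc import FFSynchronizer

m = Module()
async_in  = Signal()  # signal originating outside any clock domain, e.g. an external pin
synced_in = Signal()  # safe to consume from the "sync" domain
m.submodules.input_sync = FFSynchronizer(async_in, synced_in, o_domain="sync", stages=2)
```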
+ """ + def __init__(self, i, o, *, o_domain="sync", reset=0, reset_less=True, stages=2, + max_input_delay=None): + _check_stages(stages) + + self.i = i + self.o = o + + self._reset = reset + self._reset_less = reset_less + self._o_domain = o_domain + self._stages = stages + + self._max_input_delay = max_input_delay + + def elaborate(self, platform): + if hasattr(platform, "get_ff_sync"): + return platform.get_ff_sync(self) + + if self._max_input_delay is not None: + raise NotImplementedError("Platform '{}' does not support constraining input delay " + "for FFSynchronizer" + .format(type(platform).__name__)) + + m = Module() + flops = [Signal(self.i.shape(), name="stage{}".format(index), + reset=self._reset, reset_less=self._reset_less) + for index in range(self._stages)] + for i, o in zip((self.i, *flops), flops): + m.d[self._o_domain] += o.eq(i) + m.d.comb += self.o.eq(flops[-1]) + return m + + +class AsyncFFSynchronizer(Elaboratable): + """Synchronize deassertion of an asynchronous signal. + + The signal driven by the :class:`AsyncFFSynchronizer` is asserted asynchronously and deasserted + synchronously, eliminating metastability during deassertion. + + This synchronizer is primarily useful for resets and reset-like signals. + + Parameters + ---------- + i : Signal(1), in + Asynchronous input signal, to be synchronized. + o : Signal(1), out + Synchronously released output signal. + domain : str + Name of clock domain to reset. + stages : int, >=2 + Number of synchronization stages between input and output. The lowest safe number is 2, + with higher numbers reducing MTBF further, at the cost of increased deassertion latency. + async_edge : str + The edge of the input signal which causes the output to be set. Must be one of "pos" or "neg". + """ + def __init__(self, i, o, *, domain="sync", stages=2, async_edge="pos", max_input_delay=None): + _check_stages(stages) + + self.i = i + self.o = o + + self._domain = domain + self._stages = stages + + if async_edge not in ("pos", "neg"): + raise ValueError("AsyncFFSynchronizer async edge must be one of 'pos' or 'neg', " + "not {!r}" + .format(async_edge)) + self._edge = async_edge + + self._max_input_delay = max_input_delay + + def elaborate(self, platform): + if hasattr(platform, "get_async_ff_sync"): + return platform.get_async_ff_sync(self) + + if self._max_input_delay is not None: + raise NotImplementedError("Platform '{}' does not support constraining input delay " + "for AsyncFFSynchronizer" + .format(type(platform).__name__)) + + m = Module() + m.domains += ClockDomain("async_ff", async_reset=True, local=True) + flops = [Signal(1, name="stage{}".format(index), reset=1) + for index in range(self._stages)] + for i, o in zip((0, *flops), flops): + m.d.async_ff += o.eq(i) + + if self._edge == "pos": + m.d.comb += ResetSignal("async_ff").eq(self.i) + else: + m.d.comb += ResetSignal("async_ff").eq(~self.i) + + m.d.comb += [ + ClockSignal("async_ff").eq(ClockSignal(self._domain)), + self.o.eq(flops[-1]) + ] + + return m + + +class ResetSynchronizer(Elaboratable): + """Synchronize deassertion of a clock domain reset. + + The reset of the clock domain driven by the :class:`ResetSynchronizer` is asserted + asynchronously and deasserted synchronously, eliminating metastability during deassertion. + + The driven clock domain could use a reset that is asserted either synchronously or + asynchronously; a reset is always deasserted synchronously. 
A domain with an asynchronously + asserted reset is useful if the clock of the domain may be gated, yet the domain still + needs to be reset promptly; otherwise, synchronously asserted reset (the default) should + be used. + + Parameters + ---------- + arst : Signal(1), in + Asynchronous reset signal, to be synchronized. + domain : str + Name of clock domain to reset. + stages : int, >=2 + Number of synchronization stages between input and output. The lowest safe number is 2, + with higher numbers reducing MTBF further, at the cost of increased deassertion latency. + max_input_delay : None or float + Maximum delay from the input signal's clock to the first synchronization stage, in seconds. + If specified and the platform does not support it, elaboration will fail. + + Platform override + ----------------- + Define the ``get_reset_sync`` platform method to override the implementation of + :class:`ResetSynchronizer`, e.g. to instantiate library cells directly. + """ + def __init__(self, arst, *, domain="sync", stages=2, max_input_delay=None): + _check_stages(stages) + + self.arst = arst + + self._domain = domain + self._stages = stages + + self._max_input_delay = max_input_delay + + def elaborate(self, platform): + return AsyncFFSynchronizer(self.arst, ResetSignal(self._domain), domain=self._domain, + stages=self._stages, max_input_delay=self._max_input_delay) + + +class PulseSynchronizer(Elaboratable): + """A one-clock pulse on the input produces a one-clock pulse on the output. + + If the output clock is faster than the input clock, then the input may be safely asserted at + 100% duty cycle. Otherwise, if the clock ratio is n : 1, the input may be asserted at most once + in every n input clocks, else pulses may be dropped. + Other than this there is no constraint on the ratio of input and output clock frequency. + + Parameters + ---------- + i_domain : str + Name of input clock domain. + o-domain : str + Name of output clock domain. + sync_stages : int + Number of synchronisation flops between the two clock domains. 2 is the default, and + minimum safe value. High-frequency designs may choose to increase this. + """ + def __init__(self, i_domain, o_domain, sync_stages=2): + if not isinstance(sync_stages, int) or sync_stages < 1: + raise TypeError("sync_stages must be a positive integer, not '{!r}'".format(sync_stages)) + + self.i = Signal() + self.o = Signal() + self.i_domain = i_domain + self.o_domain = o_domain + self.sync_stages = sync_stages + + def elaborate(self, platform): + m = Module() + + itoggle = Signal() + otoggle = Signal() + ff_sync = m.submodules.ff_sync = \ + FFSynchronizer(itoggle, otoggle, o_domain=self.o_domain, stages=self.sync_stages) + otoggle_prev = Signal() + + m.d[self.i_domain] += itoggle.eq(itoggle ^ self.i) + m.d[self.o_domain] += otoggle_prev.eq(otoggle) + m.d.comb += self.o.eq(otoggle ^ otoggle_prev) + + return m diff --git a/nmigen/lib/coding.py b/nmigen/lib/coding.py new file mode 100644 index 0000000..5abfbd7 --- /dev/null +++ b/nmigen/lib/coding.py @@ -0,0 +1,186 @@ +"""Encoders and decoders between binary and one-hot representation.""" + +from .. import * + + +__all__ = [ + "Encoder", "Decoder", + "PriorityEncoder", "PriorityDecoder", + "GrayEncoder", "GrayDecoder", +] + + +class Encoder(Elaboratable): + """Encode one-hot to binary. + + If one bit in ``i`` is asserted, ``n`` is low and ``o`` indicates the asserted bit. + Otherwise, ``n`` is high and ``o`` is ``0``. 
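(Editor's note: an illustrative instantiation of `Encoder`, not part of the vendored source; the width and the constant driven onto `i` are arbitrary.)

```python
from nmigen import Module
from nmigen.lib.coding import Encoder

m = Module()
enc = m.submodules.enc = Encoder(width=4)
# With a one-hot value on enc.i, enc.o carries the index of the set bit and
# enc.n stays low; otherwise enc.n is asserted and enc.o reads as zero.
m.d.comb += enc.i.eq(0b0100)  # placeholder driver, purely for illustration
```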
+ + Parameters + ---------- + width : int + Bit width of the input + + Attributes + ---------- + i : Signal(width), in + One-hot input. + o : Signal(range(width)), out + Encoded binary. + n : Signal, out + Invalid: either none or multiple input bits are asserted. + """ + def __init__(self, width): + self.width = width + + self.i = Signal(width) + self.o = Signal(range(width)) + self.n = Signal() + + def elaborate(self, platform): + m = Module() + with m.Switch(self.i): + for j in range(self.width): + with m.Case(1 << j): + m.d.comb += self.o.eq(j) + with m.Case(): + m.d.comb += self.n.eq(1) + return m + + +class PriorityEncoder(Elaboratable): + """Priority encode requests to binary. + + If any bit in ``i`` is asserted, ``n`` is low and ``o`` indicates the least significant + asserted bit. + Otherwise, ``n`` is high and ``o`` is ``0``. + + Parameters + ---------- + width : int + Bit width of the input. + + Attributes + ---------- + i : Signal(width), in + Input requests. + o : Signal(range(width)), out + Encoded binary. + n : Signal, out + Invalid: no input bits are asserted. + """ + def __init__(self, width): + self.width = width + + self.i = Signal(width) + self.o = Signal(range(width)) + self.n = Signal() + + def elaborate(self, platform): + m = Module() + for j in reversed(range(self.width)): + with m.If(self.i[j]): + m.d.comb += self.o.eq(j) + m.d.comb += self.n.eq(self.i == 0) + return m + + +class Decoder(Elaboratable): + """Decode binary to one-hot. + + If ``n`` is low, only the ``i``th bit in ``o`` is asserted. + If ``n`` is high, ``o`` is ``0``. + + Parameters + ---------- + width : int + Bit width of the output. + + Attributes + ---------- + i : Signal(range(width)), in + Input binary. + o : Signal(width), out + Decoded one-hot. + n : Signal, in + Invalid, no output bits are to be asserted. + """ + def __init__(self, width): + self.width = width + + self.i = Signal(range(width)) + self.n = Signal() + self.o = Signal(width) + + def elaborate(self, platform): + m = Module() + with m.Switch(self.i): + for j in range(len(self.o)): + with m.Case(j): + m.d.comb += self.o.eq(1 << j) + with m.If(self.n): + m.d.comb += self.o.eq(0) + return m + + +class PriorityDecoder(Decoder): + """Decode binary to priority request. + + Identical to :class:`Decoder`. + """ + + +class GrayEncoder(Elaboratable): + """Encode binary to Gray code. + + Parameters + ---------- + width : int + Bit width. + + Attributes + ---------- + i : Signal(width), in + Input natural binary. + o : Signal(width), out + Encoded Gray code. + """ + def __init__(self, width): + self.width = width + + self.i = Signal(width) + self.o = Signal(width) + + def elaborate(self, platform): + m = Module() + m.d.comb += self.o.eq(self.i ^ self.i[1:]) + return m + + +class GrayDecoder(Elaboratable): + """Decode Gray code to binary. + + Parameters + ---------- + width : int + Bit width. + + Attributes + ---------- + i : Signal(width), in + Input Gray code. + o : Signal(width), out + Decoded natural binary. + """ + def __init__(self, width): + self.width = width + + self.i = Signal(width) + self.o = Signal(width) + + def elaborate(self, platform): + m = Module() + m.d.comb += self.o[-1].eq(self.i[-1]) + for i in reversed(range(self.width - 1)): + m.d.comb += self.o[i].eq(self.o[i + 1] ^ self.i[i]) + return m diff --git a/nmigen/lib/fifo.py b/nmigen/lib/fifo.py new file mode 100644 index 0000000..ee3fb5c --- /dev/null +++ b/nmigen/lib/fifo.py @@ -0,0 +1,450 @@ +"""First-in first-out queues.""" + +from .. 
import * +from ..asserts import * +from .._utils import log2_int, deprecated +from .coding import GrayEncoder +from .cdc import FFSynchronizer + + +__all__ = ["FIFOInterface", "SyncFIFO", "SyncFIFOBuffered", "AsyncFIFO", "AsyncFIFOBuffered"] + + +class FIFOInterface: + _doc_template = """ + {description} + + Parameters + ---------- + width : int + Bit width of data entries. + depth : int + Depth of the queue. If zero, the FIFO cannot be read from or written to. + {parameters} + + Attributes + ---------- + {attributes} + w_data : in, width + Input data. + w_rdy : out + Asserted if there is space in the queue, i.e. ``w_en`` can be asserted to write + a new entry. + w_en : in + Write strobe. Latches ``w_data`` into the queue. Does nothing if ``w_rdy`` is not asserted. + {w_attributes} + r_data : out, width + Output data. {r_data_valid} + r_rdy : out + Asserted if there is an entry in the queue, i.e. ``r_en`` can be asserted to read + an existing entry. + r_en : in + Read strobe. Makes the next entry (if any) available on ``r_data`` at the next cycle. + Does nothing if ``r_rdy`` is not asserted. + {r_attributes} + """ + + __doc__ = _doc_template.format(description=""" + Data written to the input interface (``w_data``, ``w_rdy``, ``w_en``) is buffered and can be + read at the output interface (``r_data``, ``r_rdy``, ``r_en`). The data entry written first + to the input also appears first on the output. + """, + parameters="", + r_data_valid="The conditions in which ``r_data`` is valid depends on the type of the queue.", + attributes=""" + fwft : bool + First-word fallthrough. If set, when ``r_rdy`` rises, the first entry is already + available, i.e. ``r_data`` is valid. Otherwise, after ``r_rdy`` rises, it is necessary + to strobe ``r_en`` for ``r_data`` to become valid. + """.strip(), + w_attributes="", + r_attributes="") + + def __init__(self, *, width, depth, fwft): + if not isinstance(width, int) or width < 0: + raise TypeError("FIFO width must be a non-negative integer, not {!r}" + .format(width)) + if not isinstance(depth, int) or depth < 0: + raise TypeError("FIFO depth must be a non-negative integer, not {!r}" + .format(depth)) + self.width = width + self.depth = depth + self.fwft = fwft + + self.w_data = Signal(width, reset_less=True) + self.w_rdy = Signal() # writable; not full + self.w_en = Signal() + + self.r_data = Signal(width, reset_less=True) + self.r_rdy = Signal() # readable; not empty + self.r_en = Signal() + + +def _incr(signal, modulo): + if modulo == 2 ** len(signal): + return signal + 1 + else: + return Mux(signal == modulo - 1, 0, signal + 1) + + +class SyncFIFO(Elaboratable, FIFOInterface): + __doc__ = FIFOInterface._doc_template.format( + description=""" + Synchronous first in, first out queue. + + Read and write interfaces are accessed from the same clock domain. If different clock domains + are needed, use :class:`AsyncFIFO`. + """.strip(), + parameters=""" + fwft : bool + First-word fallthrough. If set, when the queue is empty and an entry is written into it, + that entry becomes available on the output on the same clock cycle. Otherwise, it is + necessary to assert ``r_en`` for ``r_data`` to become valid. + """.strip(), + r_data_valid=""" + For FWFT queues, valid if ``r_rdy`` is asserted. For non-FWFT queues, valid on the next + cycle after ``r_rdy`` and ``r_en`` have been asserted. + """.strip(), + attributes="", + r_attributes=""" + level : out + Number of unread entries. 
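+        Ranges from 0 when the queue is empty to ``depth`` when it is full.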
+ """.strip(), + w_attributes="") + + def __init__(self, *, width, depth, fwft=True): + super().__init__(width=width, depth=depth, fwft=fwft) + + self.level = Signal(range(depth + 1)) + + def elaborate(self, platform): + m = Module() + if self.depth == 0: + m.d.comb += [ + self.w_rdy.eq(0), + self.r_rdy.eq(0), + ] + return m + + m.d.comb += [ + self.w_rdy.eq(self.level != self.depth), + self.r_rdy.eq(self.level != 0) + ] + + do_read = self.r_rdy & self.r_en + do_write = self.w_rdy & self.w_en + + storage = Memory(width=self.width, depth=self.depth) + w_port = m.submodules.w_port = storage.write_port() + r_port = m.submodules.r_port = storage.read_port( + domain="comb" if self.fwft else "sync", transparent=self.fwft) + produce = Signal(range(self.depth)) + consume = Signal(range(self.depth)) + + m.d.comb += [ + w_port.addr.eq(produce), + w_port.data.eq(self.w_data), + w_port.en.eq(self.w_en & self.w_rdy) + ] + with m.If(do_write): + m.d.sync += produce.eq(_incr(produce, self.depth)) + + m.d.comb += [ + r_port.addr.eq(consume), + self.r_data.eq(r_port.data), + ] + if not self.fwft: + m.d.comb += r_port.en.eq(self.r_en) + with m.If(do_read): + m.d.sync += consume.eq(_incr(consume, self.depth)) + + with m.If(do_write & ~do_read): + m.d.sync += self.level.eq(self.level + 1) + with m.If(do_read & ~do_write): + m.d.sync += self.level.eq(self.level - 1) + + if platform == "formal": + # TODO: move this logic to SymbiYosys + with m.If(Initial()): + m.d.comb += [ + Assume(produce < self.depth), + Assume(consume < self.depth), + ] + with m.If(produce == consume): + m.d.comb += Assume((self.level == 0) | (self.level == self.depth)) + with m.If(produce > consume): + m.d.comb += Assume(self.level == (produce - consume)) + with m.If(produce < consume): + m.d.comb += Assume(self.level == (self.depth + produce - consume)) + with m.Else(): + m.d.comb += [ + Assert(produce < self.depth), + Assert(consume < self.depth), + ] + with m.If(produce == consume): + m.d.comb += Assert((self.level == 0) | (self.level == self.depth)) + with m.If(produce > consume): + m.d.comb += Assert(self.level == (produce - consume)) + with m.If(produce < consume): + m.d.comb += Assert(self.level == (self.depth + produce - consume)) + + return m + + +class SyncFIFOBuffered(Elaboratable, FIFOInterface): + __doc__ = FIFOInterface._doc_template.format( + description=""" + Buffered synchronous first in, first out queue. + + This queue's interface is identical to :class:`SyncFIFO` configured as ``fwft=True``, but it + does not use asynchronous memory reads, which are incompatible with FPGA block RAMs. + + In exchange, the latency between an entry being written to an empty queue and that entry + becoming available on the output is increased by one cycle compared to :class:`SyncFIFO`. + """.strip(), + parameters=""" + fwft : bool + Always set. + """.strip(), + attributes="", + r_data_valid="Valid if ``r_rdy`` is asserted.", + r_attributes=""" + level : out + Number of unread entries. + """.strip(), + w_attributes="") + + def __init__(self, *, width, depth): + super().__init__(width=width, depth=depth, fwft=True) + + self.level = Signal(range(depth + 1)) + + def elaborate(self, platform): + m = Module() + if self.depth == 0: + m.d.comb += [ + self.w_rdy.eq(0), + self.r_rdy.eq(0), + ] + return m + + # Effectively, this queue treats the output register of the non-FWFT inner queue as + # an additional storage element. 
+ m.submodules.unbuffered = fifo = SyncFIFO(width=self.width, depth=self.depth - 1, + fwft=False) + + m.d.comb += [ + fifo.w_data.eq(self.w_data), + fifo.w_en.eq(self.w_en), + self.w_rdy.eq(fifo.w_rdy), + ] + + m.d.comb += [ + self.r_data.eq(fifo.r_data), + fifo.r_en.eq(fifo.r_rdy & (~self.r_rdy | self.r_en)), + ] + with m.If(fifo.r_en): + m.d.sync += self.r_rdy.eq(1) + with m.Elif(self.r_en): + m.d.sync += self.r_rdy.eq(0) + + m.d.comb += self.level.eq(fifo.level + self.r_rdy) + + return m + + +class AsyncFIFO(Elaboratable, FIFOInterface): + __doc__ = FIFOInterface._doc_template.format( + description=""" + Asynchronous first in, first out queue. + + Read and write interfaces are accessed from different clock domains, which can be set when + constructing the FIFO. + + :class:`AsyncFIFO` only supports power of 2 depths. Unless ``exact_depth`` is specified, + the ``depth`` parameter is rounded up to the next power of 2. + """.strip(), + parameters=""" + r_domain : str + Read clock domain. + w_domain : str + Write clock domain. + """.strip(), + attributes=""" + fwft : bool + Always set. + """.strip(), + r_data_valid="Valid if ``r_rdy`` is asserted.", + r_attributes="", + w_attributes="") + + def __init__(self, *, width, depth, r_domain="read", w_domain="write", exact_depth=False): + if depth != 0: + try: + depth_bits = log2_int(depth, need_pow2=exact_depth) + depth = 1 << depth_bits + except ValueError as e: + raise ValueError("AsyncFIFO only supports depths that are powers of 2; requested " + "exact depth {} is not" + .format(depth)) from None + else: + depth_bits = 0 + super().__init__(width=width, depth=depth, fwft=True) + + self._r_domain = r_domain + self._w_domain = w_domain + self._ctr_bits = depth_bits + 1 + + def elaborate(self, platform): + m = Module() + if self.depth == 0: + m.d.comb += [ + self.w_rdy.eq(0), + self.r_rdy.eq(0), + ] + return m + + # The design of this queue is the "style #2" from Clifford E. 
Cummings' paper "Simulation + # and Synthesis Techniques for Asynchronous FIFO Design": + # http://www.sunburst-design.com/papers/CummingsSNUG2002SJ_FIFO1.pdf + + do_write = self.w_rdy & self.w_en + do_read = self.r_rdy & self.r_en + + # TODO: extract this pattern into lib.cdc.GrayCounter + produce_w_bin = Signal(self._ctr_bits) + produce_w_nxt = Signal(self._ctr_bits) + m.d.comb += produce_w_nxt.eq(produce_w_bin + do_write) + m.d[self._w_domain] += produce_w_bin.eq(produce_w_nxt) + + consume_r_bin = Signal(self._ctr_bits) + consume_r_nxt = Signal(self._ctr_bits) + m.d.comb += consume_r_nxt.eq(consume_r_bin + do_read) + m.d[self._r_domain] += consume_r_bin.eq(consume_r_nxt) + + produce_w_gry = Signal(self._ctr_bits) + produce_r_gry = Signal(self._ctr_bits) + produce_enc = m.submodules.produce_enc = \ + GrayEncoder(self._ctr_bits) + produce_cdc = m.submodules.produce_cdc = \ + FFSynchronizer(produce_w_gry, produce_r_gry, o_domain=self._r_domain) + m.d.comb += produce_enc.i.eq(produce_w_nxt), + m.d[self._w_domain] += produce_w_gry.eq(produce_enc.o) + + consume_r_gry = Signal(self._ctr_bits) + consume_w_gry = Signal(self._ctr_bits) + consume_enc = m.submodules.consume_enc = \ + GrayEncoder(self._ctr_bits) + consume_cdc = m.submodules.consume_cdc = \ + FFSynchronizer(consume_r_gry, consume_w_gry, o_domain=self._w_domain) + m.d.comb += consume_enc.i.eq(consume_r_nxt) + m.d[self._r_domain] += consume_r_gry.eq(consume_enc.o) + + w_full = Signal() + r_empty = Signal() + m.d.comb += [ + w_full.eq((produce_w_gry[-1] != consume_w_gry[-1]) & + (produce_w_gry[-2] != consume_w_gry[-2]) & + (produce_w_gry[:-2] == consume_w_gry[:-2])), + r_empty.eq(consume_r_gry == produce_r_gry), + ] + + storage = Memory(width=self.width, depth=self.depth) + w_port = m.submodules.w_port = storage.write_port(domain=self._w_domain) + r_port = m.submodules.r_port = storage.read_port (domain=self._r_domain, + transparent=False) + m.d.comb += [ + w_port.addr.eq(produce_w_bin[:-1]), + w_port.data.eq(self.w_data), + w_port.en.eq(do_write), + self.w_rdy.eq(~w_full), + ] + m.d.comb += [ + r_port.addr.eq(consume_r_nxt[:-1]), + self.r_data.eq(r_port.data), + r_port.en.eq(1), + self.r_rdy.eq(~r_empty), + ] + + if platform == "formal": + with m.If(Initial()): + m.d.comb += Assume(produce_w_gry == (produce_w_bin ^ produce_w_bin[1:])) + m.d.comb += Assume(consume_r_gry == (consume_r_bin ^ consume_r_bin[1:])) + + return m + + +class AsyncFIFOBuffered(Elaboratable, FIFOInterface): + __doc__ = FIFOInterface._doc_template.format( + description=""" + Buffered asynchronous first in, first out queue. + + Read and write interfaces are accessed from different clock domains, which can be set when + constructing the FIFO. + + :class:`AsyncFIFOBuffered` only supports power of 2 plus one depths. Unless ``exact_depth`` + is specified, the ``depth`` parameter is rounded up to the next power of 2 plus one. + (The output buffer acts as an additional queue element.) + + This queue's interface is identical to :class:`AsyncFIFO`, but it has an additional register + on the output, improving timing in case of block RAM that has large clock-to-output delay. + + In exchange, the latency between an entry being written to an empty queue and that entry + becoming available on the output is increased by one cycle compared to :class:`AsyncFIFO`. + """.strip(), + parameters=""" + r_domain : str + Read clock domain. + w_domain : str + Write clock domain. + """.strip(), + attributes=""" + fwft : bool + Always set. 
+ """.strip(), + r_data_valid="Valid if ``r_rdy`` is asserted.", + r_attributes="", + w_attributes="") + + def __init__(self, *, width, depth, r_domain="read", w_domain="write", exact_depth=False): + if depth != 0: + try: + depth_bits = log2_int(max(0, depth - 1), need_pow2=exact_depth) + depth = (1 << depth_bits) + 1 + except ValueError as e: + raise ValueError("AsyncFIFOBuffered only supports depths that are one higher " + "than powers of 2; requested exact depth {} is not" + .format(depth)) from None + super().__init__(width=width, depth=depth, fwft=True) + + self._r_domain = r_domain + self._w_domain = w_domain + + def elaborate(self, platform): + m = Module() + if self.depth == 0: + m.d.comb += [ + self.w_rdy.eq(0), + self.r_rdy.eq(0), + ] + return m + + m.submodules.unbuffered = fifo = AsyncFIFO(width=self.width, depth=self.depth - 1, + r_domain=self._r_domain, w_domain=self._w_domain) + + m.d.comb += [ + fifo.w_data.eq(self.w_data), + self.w_rdy.eq(fifo.w_rdy), + fifo.w_en.eq(self.w_en), + ] + + with m.If(self.r_en | ~self.r_rdy): + m.d[self._r_domain] += [ + self.r_data.eq(fifo.r_data), + self.r_rdy.eq(fifo.r_rdy), + ] + m.d.comb += [ + fifo.r_en.eq(1) + ] + + return m diff --git a/nmigen/lib/io.py b/nmigen/lib/io.py new file mode 100644 index 0000000..7c950ae --- /dev/null +++ b/nmigen/lib/io.py @@ -0,0 +1,106 @@ +from .. import * +from ..hdl.rec import * + + +__all__ = ["pin_layout", "Pin"] + + +def pin_layout(width, dir, xdr=0): + """ + Layout of the platform interface of a pin or several pins, which may be used inside + user-defined records. + + See :class:`Pin` for details. + """ + if not isinstance(width, int) or width < 1: + raise TypeError("Width must be a positive integer, not {!r}" + .format(width)) + if dir not in ("i", "o", "oe", "io"): + raise TypeError("Direction must be one of \"i\", \"o\", \"io\", or \"oe\", not {!r}""" + .format(dir)) + if not isinstance(xdr, int) or xdr < 0: + raise TypeError("Gearing ratio must be a non-negative integer, not {!r}" + .format(xdr)) + + fields = [] + if dir in ("i", "io"): + if xdr > 0: + fields.append(("i_clk", 1)) + if xdr in (0, 1): + fields.append(("i", width)) + else: + for n in range(xdr): + fields.append(("i{}".format(n), width)) + if dir in ("o", "oe", "io"): + if xdr > 0: + fields.append(("o_clk", 1)) + if xdr in (0, 1): + fields.append(("o", width)) + else: + for n in range(xdr): + fields.append(("o{}".format(n), width)) + if dir in ("oe", "io"): + fields.append(("oe", 1)) + return Layout(fields) + + +class Pin(Record): + """ + An interface to an I/O buffer or a group of them that provides uniform access to input, output, + or tristate buffers that may include a 1:n gearbox. (A 1:2 gearbox is typically called "DDR".) + + A :class:`Pin` is identical to a :class:`Record` that uses the corresponding :meth:`pin_layout` + except that it allos accessing the parameters like ``width`` as attributes. It is legal to use + a plain :class:`Record` anywhere a :class:`Pin` is used, provided that these attributes are + not necessary. + + Parameters + ---------- + width : int + Width of the ``i``/``iN`` and ``o``/``oN`` signals. + dir : ``"i"``, ``"o"``, ``"io"``, ``"oe"`` + Direction of the buffers. If ``"i"`` is specified, only the ``i``/``iN`` signals are + present. If ``"o"`` is specified, only the ``o``/``oN`` signals are present. If ``"oe"`` is + specified, the ``o``/``oN`` signals are present, and an ``oe`` signal is present. 
+ If ``"io"`` is specified, both the ``i``/``iN`` and ``o``/``oN`` signals are present, and + an ``oe`` signal is present. + xdr : int + Gearbox ratio. If equal to 0, the I/O buffer is combinatorial, and only ``i``/``o`` + signals are present. If equal to 1, the I/O buffer is SDR, and only ``i``/``o`` signals are + present. If greater than 1, the I/O buffer includes a gearbox, and ``iN``/``oN`` signals + are present instead, where ``N in range(0, N)``. For example, if ``xdr=2``, the I/O buffer + is DDR; the signal ``i0`` reflects the value at the rising edge, and the signal ``i1`` + reflects the value at the falling edge. + name : str + Name of the underlying record. + + Attributes + ---------- + i_clk: + I/O buffer input clock. Synchronizes `i*`. Present if ``xdr`` is nonzero. + i : Signal, out + I/O buffer input, without gearing. Present if ``dir="i"`` or ``dir="io"``, and ``xdr`` is + equal to 0 or 1. + i0, i1, ... : Signal, out + I/O buffer inputs, with gearing. Present if ``dir="i"`` or ``dir="io"``, and ``xdr`` is + greater than 1. + o_clk: + I/O buffer output clock. Synchronizes `o*`, including `oe`. Present if ``xdr`` is nonzero. + o : Signal, in + I/O buffer output, without gearing. Present if ``dir="o"`` or ``dir="io"``, and ``xdr`` is + equal to 0 or 1. + o0, o1, ... : Signal, in + I/O buffer outputs, with gearing. Present if ``dir="o"`` or ``dir="io"``, and ``xdr`` is + greater than 1. + oe : Signal, in + I/O buffer output enable. Present if ``dir="io"`` or ``dir="oe"``. Buffers generally + cannot change direction more than once per cycle, so at most one output enable signal + is present. + """ + def __init__(self, width, dir, *, xdr=0, name=None, src_loc_at=0): + self.width = width + self.dir = dir + self.xdr = xdr + + super().__init__(pin_layout(self.width, self.dir, self.xdr), + name=name, src_loc_at=src_loc_at + 1) diff --git a/nmigen/rpc.py b/nmigen/rpc.py new file mode 100644 index 0000000..a03ec23 --- /dev/null +++ b/nmigen/rpc.py @@ -0,0 +1,111 @@ +import sys +import json +import argparse +import importlib + +from .hdl import Signal, Record, Elaboratable +from .back import rtlil + + +__all__ = ["main"] + + +def _collect_modules(names): + modules = {} + for name in names: + py_module_name, py_class_name = name.rsplit(".", 1) + py_module = importlib.import_module(py_module_name) + if py_class_name == "*": + for py_class_name in py_module.__all__: + py_class = py_module.__dict__[py_class_name] + if not issubclass(py_class, Elaboratable): + continue + modules["{}.{}".format(py_module_name, py_class_name)] = py_class + else: + py_class = py_module.__dict__[py_class_name] + if not isinstance(py_class, type) or not issubclass(py_class, Elaboratable): + raise TypeError("{}.{} is not a class inheriting from Elaboratable" + .format(py_module_name, py_class_name)) + modules[name] = py_class + return modules + + +def _serve_yosys(modules): + while True: + request_json = sys.stdin.readline() + if not request_json: break + request = json.loads(request_json) + + if request["method"] == "modules": + response = {"modules": list(modules.keys())} + + elif request["method"] == "derive": + module_name = request["module"] + + args, kwargs = [], {} + for parameter_name, parameter in request["parameters"].items(): + if parameter["type"] == "unsigned": + parameter_value = int(parameter["value"], 2) + elif parameter["type"] == "signed": + width = len(parameter["value"]) + parameter_value = int(parameter["value"], 2) + if parameter_value & (1 << (width - 1)): + parameter_value = -((1 << width) - 
value) + elif parameter["type"] == "string": + parameter_value = parameter["value"] + elif parameter["type"] == "real": + parameter_value = float(parameter["value"]) + else: + raise NotImplementedError("Unrecognized parameter type {}" + .format(parameter_name)) + if parameter_name.startswith("$"): + index = int(parameter_name[1:]) + while len(args) < index: + args.append(None) + args[index] = parameter_value + if parameter_name.startswith("\\"): + kwargs[parameter_name[1:]] = parameter_value + + try: + elaboratable = modules[module_name](*args, **kwargs) + ports = [] + # By convention, any public attribute that is a Signal or a Record is + # considered a port. + for port_name, port in vars(elaboratable).items(): + if not port_name.startswith("_") and isinstance(port, (Signal, Record)): + ports += port._lhs_signals() + rtlil_text = rtlil.convert(elaboratable, name=module_name, ports=ports) + response = {"frontend": "ilang", "source": rtlil_text} + except Exception as error: + response = {"error": "{}: {}".format(type(error).__name__, str(error))} + + else: + return {"error": "Unrecognized method {!r}".format(request["method"])} + + sys.stdout.write(json.dumps(response)) + sys.stdout.write("\n") + sys.stdout.flush() + + +def main(): + parser = argparse.ArgumentParser(description=r""" + The nMigen RPC server allows a HDL synthesis program to request an nMigen module to + be elaborated on demand using the parameters it provides. For example, using Yosys together + with the nMigen RPC server allows instantiating parametric nMigen modules directly + from Verilog. + """) + def add_modules_arg(parser): + parser.add_argument("modules", metavar="MODULE", type=str, nargs="+", + help="import and provide MODULES") + protocols = parser.add_subparsers(metavar="PROTOCOL", dest="protocol", required=True) + protocol_yosys = protocols.add_parser("yosys", help="use Yosys JSON-based RPC protocol") + add_modules_arg(protocol_yosys) + + args = parser.parse_args() + modules = _collect_modules(args.modules) + if args.protocol == "yosys": + _serve_yosys(modules) + + +if __name__ == "__main__": + main() diff --git a/nmigen/test/__init__.py b/nmigen/test/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/test/__pycache__/__init__.cpython-37.pyc b/nmigen/test/__pycache__/__init__.cpython-37.pyc new file mode 100644 index 0000000..040fe64 Binary files /dev/null and b/nmigen/test/__pycache__/__init__.cpython-37.pyc differ diff --git a/nmigen/test/__pycache__/utils.cpython-37.pyc b/nmigen/test/__pycache__/utils.cpython-37.pyc new file mode 100644 index 0000000..bebfb96 Binary files /dev/null and b/nmigen/test/__pycache__/utils.cpython-37.pyc differ diff --git a/nmigen/test/compat/__init__.py b/nmigen/test/compat/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/test/compat/support.py b/nmigen/test/compat/support.py new file mode 100644 index 0000000..8491652 --- /dev/null +++ b/nmigen/test/compat/support.py @@ -0,0 +1,16 @@ +from ..._utils import _ignore_deprecated +from ...compat import * +from ...compat.fhdl import verilog + + +class SimCase: + def setUp(self, *args, **kwargs): + with _ignore_deprecated(): + self.tb = self.TestBench(*args, **kwargs) + + def test_to_verilog(self): + verilog.convert(self.tb) + + def run_with(self, generator): + with _ignore_deprecated(): + run_simulation(self.tb, generator) diff --git a/nmigen/test/compat/test_coding.py b/nmigen/test/compat/test_coding.py new file mode 100644 index 0000000..452a048 --- /dev/null +++ 
b/nmigen/test/compat/test_coding.py @@ -0,0 +1,116 @@ +# nmigen: UnusedElaboratable=no + +import unittest + +from ...compat import * +from ...compat.genlib.coding import * + +from .support import SimCase + + +class EncCase(SimCase, unittest.TestCase): + class TestBench(Module): + def __init__(self): + self.submodules.dut = Encoder(8) + + def test_sizes(self): + self.assertEqual(len(self.tb.dut.i), 8) + self.assertEqual(len(self.tb.dut.o), 3) + self.assertEqual(len(self.tb.dut.n), 1) + + def test_run_sequence(self): + seq = list(range(1<<8)) + def gen(): + for _ in range(256): + if seq: + yield self.tb.dut.i.eq(seq.pop(0)) + yield + if (yield self.tb.dut.n): + self.assertNotIn((yield self.tb.dut.i), [1< 0: + self.assertEqual(i & 1<<(o - 1), 0) + self.assertGreaterEqual(i, 1< 0: + self.assertEqual(i & 1<<(o - 1), 0) + self.assertGreaterEqual(i, 1< q, + lambda p, q: p >= q, + lambda p, q: p < q, + lambda p, q: p <= q, + lambda p, q: p == q, + lambda p, q: p != q, + ] + self.vals = [] + for asign in 1, -1: + for bsign in 1, -1: + for f in comps: + r = Signal() + r0 = f(asign*self.a, bsign*self.b) + self.comb += r.eq(r0) + self.vals.append((asign, bsign, f, r, r0.op)) + + def test_comparisons(self): + def gen(): + for i in range(-4, 4): + yield self.tb.a.eq(i) + yield self.tb.b.eq(i) + yield + a = yield self.tb.a + b = yield self.tb.b + for asign, bsign, f, r, op in self.tb.vals: + r, r0 = (yield r), f(asign*a, bsign*b) + self.assertEqual(r, int(r0), + "got {}, want {}*{} {} {}*{} = {}".format( + r, asign, a, op, bsign, b, r0)) + self.run_with(gen()) diff --git a/nmigen/test/compat/test_size.py b/nmigen/test/compat/test_size.py new file mode 100644 index 0000000..e3864f9 --- /dev/null +++ b/nmigen/test/compat/test_size.py @@ -0,0 +1,21 @@ +import unittest + +from ..._utils import _ignore_deprecated +from ...compat import * + + +def _same_slices(a, b): + return a.value is b.value and a.start == b.start and a.stop == b.stop + + +class SignalSizeCase(unittest.TestCase): + def setUp(self): + self.i = C(0xaa) + self.j = C(-127) + with _ignore_deprecated(): + self.s = Signal((13, True)) + + def test_len(self): + self.assertEqual(len(self.s), 13) + self.assertEqual(len(self.i), 8) + self.assertEqual(len(self.j), 8) diff --git a/nmigen/test/test_build_dsl.py b/nmigen/test/test_build_dsl.py new file mode 100644 index 0000000..bf901bf --- /dev/null +++ b/nmigen/test/test_build_dsl.py @@ -0,0 +1,328 @@ +from collections import OrderedDict + +from ..build.dsl import * +from .utils import * + + +class PinsTestCase(FHDLTestCase): + def test_basic(self): + p = Pins("A0 A1 A2") + self.assertEqual(repr(p), "(pins io A0 A1 A2)") + self.assertEqual(len(p.names), 3) + self.assertEqual(p.dir, "io") + self.assertEqual(p.invert, False) + self.assertEqual(list(p), ["A0", "A1", "A2"]) + + def test_invert(self): + p = PinsN("A0") + self.assertEqual(repr(p), "(pins-n io A0)") + self.assertEqual(p.invert, True) + + def test_invert_arg(self): + p = Pins("A0", invert=True) + self.assertEqual(p.invert, True) + + def test_conn(self): + p = Pins("0 1 2", conn=("pmod", 0)) + self.assertEqual(list(p), ["pmod_0:0", "pmod_0:1", "pmod_0:2"]) + p = Pins("0 1 2", conn=("pmod", "a")) + self.assertEqual(list(p), ["pmod_a:0", "pmod_a:1", "pmod_a:2"]) + + def test_map_names(self): + p = Pins("0 1 2", conn=("pmod", 0)) + mapping = { + "pmod_0:0": "A0", + "pmod_0:1": "A1", + "pmod_0:2": "A2", + } + self.assertEqual(p.map_names(mapping, p), ["A0", "A1", "A2"]) + + def test_map_names_recur(self): + p = Pins("0", conn=("pmod", 0)) + mapping = 
{ + "pmod_0:0": "ext_0:1", + "ext_0:1": "A1", + } + self.assertEqual(p.map_names(mapping, p), ["A1"]) + + def test_wrong_names(self): + with self.assertRaises(TypeError, + msg="Names must be a whitespace-separated string, not ['A0', 'A1', 'A2']"): + p = Pins(["A0", "A1", "A2"]) + + def test_wrong_dir(self): + with self.assertRaises(TypeError, + msg="Direction must be one of \"i\", \"o\", \"oe\", or \"io\", not 'wrong'"): + p = Pins("A0 A1", dir="wrong") + + def test_wrong_conn(self): + with self.assertRaises(TypeError, + msg="Connector must be None or a pair of string (connector name) and " + "integer/string (connector number), not ('foo', None)"): + p = Pins("A0 A1", conn=("foo", None)) + + def test_wrong_map_names(self): + p = Pins("0 1 2", conn=("pmod", 0)) + mapping = { + "pmod_0:0": "A0", + } + with self.assertRaises(NameError, + msg="Resource (pins io pmod_0:0 pmod_0:1 pmod_0:2) refers to nonexistent " + "connector pin pmod_0:1"): + p.map_names(mapping, p) + + def test_wrong_assert_width(self): + with self.assertRaises(AssertionError, + msg="3 names are specified (0 1 2), but 4 names are expected"): + Pins("0 1 2", assert_width=4) + + +class DiffPairsTestCase(FHDLTestCase): + def test_basic(self): + dp = DiffPairs(p="A0 A1", n="B0 B1") + self.assertEqual(repr(dp), "(diffpairs io (p A0 A1) (n B0 B1))") + self.assertEqual(dp.p.names, ["A0", "A1"]) + self.assertEqual(dp.n.names, ["B0", "B1"]) + self.assertEqual(dp.dir, "io") + self.assertEqual(list(dp), [("A0", "B0"), ("A1", "B1")]) + + def test_invert(self): + dp = DiffPairsN(p="A0", n="B0") + self.assertEqual(repr(dp), "(diffpairs-n io (p A0) (n B0))") + self.assertEqual(dp.p.names, ["A0"]) + self.assertEqual(dp.n.names, ["B0"]) + self.assertEqual(dp.invert, True) + + def test_conn(self): + dp = DiffPairs(p="0 1 2", n="3 4 5", conn=("pmod", 0)) + self.assertEqual(list(dp), [ + ("pmod_0:0", "pmod_0:3"), + ("pmod_0:1", "pmod_0:4"), + ("pmod_0:2", "pmod_0:5"), + ]) + + def test_dir(self): + dp = DiffPairs("A0", "B0", dir="o") + self.assertEqual(dp.dir, "o") + self.assertEqual(dp.p.dir, "o") + self.assertEqual(dp.n.dir, "o") + + def test_wrong_width(self): + with self.assertRaises(TypeError, + msg="Positive and negative pins must have the same width, but (pins io A0) " + "and (pins io B0 B1) do not"): + dp = DiffPairs("A0", "B0 B1") + + def test_wrong_assert_width(self): + with self.assertRaises(AssertionError, + msg="3 names are specified (0 1 2), but 4 names are expected"): + DiffPairs("0 1 2", "3 4 5", assert_width=4) + + +class AttrsTestCase(FHDLTestCase): + def test_basic(self): + a = Attrs(IO_STANDARD="LVCMOS33", PULLUP=1) + self.assertEqual(a["IO_STANDARD"], "LVCMOS33") + self.assertEqual(repr(a), "(attrs IO_STANDARD='LVCMOS33' PULLUP=1)") + + def test_remove(self): + a = Attrs(FOO=None) + self.assertEqual(a["FOO"], None) + self.assertEqual(repr(a), "(attrs !FOO)") + + def test_callable(self): + fn = lambda self: "FOO" + a = Attrs(FOO=fn) + self.assertEqual(a["FOO"], fn) + self.assertEqual(repr(a), "(attrs FOO={!r})".format(fn)) + + def test_wrong_value(self): + with self.assertRaises(TypeError, + msg="Value of attribute FOO must be None, int, str, or callable, not 1.0"): + a = Attrs(FOO=1.0) + + +class ClockTestCase(FHDLTestCase): + def test_basic(self): + c = Clock(1_000_000) + self.assertEqual(c.frequency, 1e6) + self.assertEqual(c.period, 1e-6) + self.assertEqual(repr(c), "(clock 1000000.0)") + + +class SubsignalTestCase(FHDLTestCase): + def test_basic_pins(self): + s = Subsignal("a", Pins("A0"), Attrs(IOSTANDARD="LVCMOS33")) + 
self.assertEqual(repr(s), + "(subsignal a (pins io A0) (attrs IOSTANDARD='LVCMOS33'))") + + def test_basic_diffpairs(self): + s = Subsignal("a", DiffPairs("A0", "B0")) + self.assertEqual(repr(s), + "(subsignal a (diffpairs io (p A0) (n B0)))") + + def test_basic_subsignals(self): + s = Subsignal("a", + Subsignal("b", Pins("A0")), + Subsignal("c", Pins("A1"))) + self.assertEqual(repr(s), + "(subsignal a (subsignal b (pins io A0)) " + "(subsignal c (pins io A1)))") + + def test_attrs(self): + s = Subsignal("a", + Subsignal("b", Pins("A0")), + Subsignal("c", Pins("A0"), Attrs(SLEW="FAST")), + Attrs(IOSTANDARD="LVCMOS33")) + self.assertEqual(s.attrs, {"IOSTANDARD": "LVCMOS33"}) + self.assertEqual(s.ios[0].attrs, {}) + self.assertEqual(s.ios[1].attrs, {"SLEW": "FAST"}) + + def test_attrs_many(self): + s = Subsignal("a", Pins("A0"), Attrs(SLEW="FAST"), Attrs(PULLUP="1")) + self.assertEqual(s.attrs, {"SLEW": "FAST", "PULLUP": "1"}) + + def test_clock(self): + s = Subsignal("a", Pins("A0"), Clock(1e6)) + self.assertEqual(s.clock.frequency, 1e6) + + def test_wrong_empty_io(self): + with self.assertRaises(ValueError, msg="Missing I/O constraints"): + s = Subsignal("a") + + def test_wrong_io(self): + with self.assertRaises(TypeError, + msg="Constraint must be one of Pins, DiffPairs, Subsignal, Attrs, or Clock, " + "not 'wrong'"): + s = Subsignal("a", "wrong") + + def test_wrong_pins(self): + with self.assertRaises(TypeError, + msg="Pins and DiffPairs are incompatible with other location or subsignal " + "constraints, but (pins io A1) appears after (pins io A0)"): + s = Subsignal("a", Pins("A0"), Pins("A1")) + + def test_wrong_diffpairs(self): + with self.assertRaises(TypeError, + msg="Pins and DiffPairs are incompatible with other location or subsignal " + "constraints, but (pins io A1) appears after (diffpairs io (p A0) (n B0))"): + s = Subsignal("a", DiffPairs("A0", "B0"), Pins("A1")) + + def test_wrong_subsignals(self): + with self.assertRaises(TypeError, + msg="Pins and DiffPairs are incompatible with other location or subsignal " + "constraints, but (pins io B0) appears after (subsignal b (pins io A0))"): + s = Subsignal("a", Subsignal("b", Pins("A0")), Pins("B0")) + + def test_wrong_clock(self): + with self.assertRaises(TypeError, + msg="Clock constraint can only be applied to Pins or DiffPairs, not " + "(subsignal b (pins io A0))"): + s = Subsignal("a", Subsignal("b", Pins("A0")), Clock(1e6)) + + def test_wrong_clock_many(self): + with self.assertRaises(ValueError, + msg="Clock constraint can be applied only once"): + s = Subsignal("a", Pins("A0"), Clock(1e6), Clock(1e7)) + + +class ResourceTestCase(FHDLTestCase): + def test_basic(self): + r = Resource("serial", 0, + Subsignal("tx", Pins("A0", dir="o")), + Subsignal("rx", Pins("A1", dir="i")), + Attrs(IOSTANDARD="LVCMOS33")) + self.assertEqual(repr(r), "(resource serial 0" + " (subsignal tx (pins o A0))" + " (subsignal rx (pins i A1))" + " (attrs IOSTANDARD='LVCMOS33'))") + + def test_family(self): + ios = [Subsignal("clk", Pins("A0", dir="o"))] + r1 = Resource.family(0, default_name="spi", ios=ios) + r2 = Resource.family("spi_flash", 0, default_name="spi", ios=ios) + r3 = Resource.family("spi_flash", 0, default_name="spi", ios=ios, name_suffix="4x") + r4 = Resource.family(0, default_name="spi", ios=ios, name_suffix="2x") + self.assertEqual(r1.name, "spi") + self.assertEqual(r1.ios, ios) + self.assertEqual(r2.name, "spi_flash") + self.assertEqual(r2.ios, ios) + self.assertEqual(r3.name, "spi_flash_4x") + self.assertEqual(r3.ios, ios) + 
self.assertEqual(r4.name, "spi_2x") + self.assertEqual(r4.ios, ios) + + +class ConnectorTestCase(FHDLTestCase): + def test_string(self): + c = Connector("pmod", 0, "A0 A1 A2 A3 - - A4 A5 A6 A7 - -") + self.assertEqual(c.name, "pmod") + self.assertEqual(c.number, 0) + self.assertEqual(c.mapping, OrderedDict([ + ("1", "A0"), + ("2", "A1"), + ("3", "A2"), + ("4", "A3"), + ("7", "A4"), + ("8", "A5"), + ("9", "A6"), + ("10", "A7"), + ])) + self.assertEqual(list(c), [ + ("pmod_0:1", "A0"), + ("pmod_0:2", "A1"), + ("pmod_0:3", "A2"), + ("pmod_0:4", "A3"), + ("pmod_0:7", "A4"), + ("pmod_0:8", "A5"), + ("pmod_0:9", "A6"), + ("pmod_0:10", "A7"), + ]) + self.assertEqual(repr(c), + "(connector pmod 0 1=>A0 2=>A1 3=>A2 4=>A3 7=>A4 8=>A5 9=>A6 10=>A7)") + + def test_dict(self): + c = Connector("ext", 1, {"DP0": "A0", "DP1": "A1"}) + self.assertEqual(c.name, "ext") + self.assertEqual(c.number, 1) + self.assertEqual(c.mapping, OrderedDict([ + ("DP0", "A0"), + ("DP1", "A1"), + ])) + + def test_conn(self): + c = Connector("pmod", 0, "0 1 2 3 - - 4 5 6 7 - -", conn=("expansion", 0)) + self.assertEqual(c.mapping, OrderedDict([ + ("1", "expansion_0:0"), + ("2", "expansion_0:1"), + ("3", "expansion_0:2"), + ("4", "expansion_0:3"), + ("7", "expansion_0:4"), + ("8", "expansion_0:5"), + ("9", "expansion_0:6"), + ("10", "expansion_0:7"), + ])) + + def test_str_name(self): + c = Connector("ext", "A", "0 1 2") + self.assertEqual(c.name, "ext") + self.assertEqual(c.number, "A") + + def test_conn_wrong_name(self): + with self.assertRaises(TypeError, + msg="Connector must be None or a pair of string (connector name) and " + "integer/string (connector number), not ('foo', None)"): + Connector("ext", "A", "0 1 2", conn=("foo", None)) + + def test_wrong_io(self): + with self.assertRaises(TypeError, + msg="Connector I/Os must be a dictionary or a string, not []"): + Connector("pmod", 0, []) + + def test_wrong_dict_key_value(self): + with self.assertRaises(TypeError, + msg="Connector pin name must be a string, not 0"): + Connector("pmod", 0, {0: "A"}) + with self.assertRaises(TypeError, + msg="Platform pin name must be a string, not 0"): + Connector("pmod", 0, {"A": 0}) diff --git a/nmigen/test/test_build_plat.py b/nmigen/test/test_build_plat.py new file mode 100644 index 0000000..2b2ec62 --- /dev/null +++ b/nmigen/test/test_build_plat.py @@ -0,0 +1,52 @@ +from .. 
import * +from ..build.plat import * +from .utils import * + + +class MockPlatform(Platform): + resources = [] + connectors = [] + + required_tools = [] + + def toolchain_prepare(self, fragment, name, **kwargs): + raise NotImplementedError + + +class PlatformTestCase(FHDLTestCase): + def setUp(self): + self.platform = MockPlatform() + + def test_add_file_str(self): + self.platform.add_file("x.txt", "foo") + self.assertEqual(self.platform.extra_files["x.txt"], "foo") + + def test_add_file_bytes(self): + self.platform.add_file("x.txt", b"foo") + self.assertEqual(self.platform.extra_files["x.txt"], b"foo") + + def test_add_file_exact_duplicate(self): + self.platform.add_file("x.txt", b"foo") + self.platform.add_file("x.txt", b"foo") + + def test_add_file_io(self): + with open(__file__) as f: + self.platform.add_file("x.txt", f) + with open(__file__) as f: + self.assertEqual(self.platform.extra_files["x.txt"], f.read()) + + def test_add_file_wrong_filename(self): + with self.assertRaises(TypeError, + msg="File name must be a string, not 1"): + self.platform.add_file(1, "") + + def test_add_file_wrong_contents(self): + with self.assertRaises(TypeError, + msg="File contents must be str, bytes, or a file-like object, not 1"): + self.platform.add_file("foo", 1) + + def test_add_file_wrong_duplicate(self): + self.platform.add_file("foo", "") + with self.assertRaises(ValueError, + msg="File 'foo' already exists"): + self.platform.add_file("foo", "bar") diff --git a/nmigen/test/test_build_res.py b/nmigen/test/test_build_res.py new file mode 100644 index 0000000..3b3a9d5 --- /dev/null +++ b/nmigen/test/test_build_res.py @@ -0,0 +1,298 @@ +# nmigen: UnusedElaboratable=no + +from .. import * +from ..hdl.rec import * +from ..lib.io import * +from ..build.dsl import * +from ..build.res import * +from .utils import * + + +class ResourceManagerTestCase(FHDLTestCase): + def setUp(self): + self.resources = [ + Resource("clk100", 0, DiffPairs("H1", "H2", dir="i"), Clock(100e6)), + Resource("clk50", 0, Pins("K1"), Clock(50e6)), + Resource("user_led", 0, Pins("A0", dir="o")), + Resource("i2c", 0, + Subsignal("scl", Pins("N10", dir="o")), + Subsignal("sda", Pins("N11")) + ) + ] + self.connectors = [ + Connector("pmod", 0, "B0 B1 B2 B3 - -"), + ] + self.cm = ResourceManager(self.resources, self.connectors) + + def test_basic(self): + self.cm = ResourceManager(self.resources, self.connectors) + self.assertEqual(self.cm.resources, { + ("clk100", 0): self.resources[0], + ("clk50", 0): self.resources[1], + ("user_led", 0): self.resources[2], + ("i2c", 0): self.resources[3] + }) + self.assertEqual(self.cm.connectors, { + ("pmod", 0): self.connectors[0], + }) + + def test_add_resources(self): + new_resources = [ + Resource("user_led", 1, Pins("A1", dir="o")) + ] + self.cm.add_resources(new_resources) + self.assertEqual(self.cm.resources, { + ("clk100", 0): self.resources[0], + ("clk50", 0): self.resources[1], + ("user_led", 0): self.resources[2], + ("i2c", 0): self.resources[3], + ("user_led", 1): new_resources[0] + }) + + def test_lookup(self): + r = self.cm.lookup("user_led", 0) + self.assertIs(r, self.cm.resources["user_led", 0]) + + def test_request_basic(self): + r = self.cm.lookup("user_led", 0) + user_led = self.cm.request("user_led", 0) + + self.assertIsInstance(user_led, Pin) + self.assertEqual(user_led.name, "user_led_0") + self.assertEqual(user_led.width, 1) + self.assertEqual(user_led.dir, "o") + + ports = list(self.cm.iter_ports()) + self.assertEqual(len(ports), 1) + + 
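+        # Each port constraint is a (port name, pin names, attrs) triple.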
self.assertEqual(list(self.cm.iter_port_constraints()), [ + ("user_led_0__io", ["A0"], {}) + ]) + + def test_request_with_dir(self): + i2c = self.cm.request("i2c", 0, dir={"sda": "o"}) + self.assertIsInstance(i2c, Record) + self.assertIsInstance(i2c.sda, Pin) + self.assertEqual(i2c.sda.dir, "o") + + def test_request_tristate(self): + i2c = self.cm.request("i2c", 0) + self.assertEqual(i2c.sda.dir, "io") + + ports = list(self.cm.iter_ports()) + self.assertEqual(len(ports), 2) + scl, sda = ports + self.assertEqual(ports[1].name, "i2c_0__sda__io") + self.assertEqual(ports[1].width, 1) + + self.assertEqual(list(self.cm.iter_single_ended_pins()), [ + (i2c.scl, scl, {}, False), + (i2c.sda, sda, {}, False), + ]) + self.assertEqual(list(self.cm.iter_port_constraints()), [ + ("i2c_0__scl__io", ["N10"], {}), + ("i2c_0__sda__io", ["N11"], {}) + ]) + + def test_request_diffpairs(self): + clk100 = self.cm.request("clk100", 0) + self.assertIsInstance(clk100, Pin) + self.assertEqual(clk100.dir, "i") + self.assertEqual(clk100.width, 1) + + ports = list(self.cm.iter_ports()) + self.assertEqual(len(ports), 2) + p, n = ports + self.assertEqual(p.name, "clk100_0__p") + self.assertEqual(p.width, clk100.width) + self.assertEqual(n.name, "clk100_0__n") + self.assertEqual(n.width, clk100.width) + + self.assertEqual(list(self.cm.iter_differential_pins()), [ + (clk100, p, n, {}, False), + ]) + self.assertEqual(list(self.cm.iter_port_constraints()), [ + ("clk100_0__p", ["H1"], {}), + ("clk100_0__n", ["H2"], {}), + ]) + + def test_request_inverted(self): + new_resources = [ + Resource("cs", 0, PinsN("X0")), + Resource("clk", 0, DiffPairsN("Y0", "Y1")), + ] + self.cm.add_resources(new_resources) + + sig_cs = self.cm.request("cs") + sig_clk = self.cm.request("clk") + port_cs, port_clk_p, port_clk_n = self.cm.iter_ports() + self.assertEqual(list(self.cm.iter_single_ended_pins()), [ + (sig_cs, port_cs, {}, True), + ]) + self.assertEqual(list(self.cm.iter_differential_pins()), [ + (sig_clk, port_clk_p, port_clk_n, {}, True), + ]) + + def test_request_raw(self): + clk50 = self.cm.request("clk50", 0, dir="-") + self.assertIsInstance(clk50, Record) + self.assertIsInstance(clk50.io, Signal) + + ports = list(self.cm.iter_ports()) + self.assertEqual(len(ports), 1) + self.assertIs(ports[0], clk50.io) + + def test_request_raw_diffpairs(self): + clk100 = self.cm.request("clk100", 0, dir="-") + self.assertIsInstance(clk100, Record) + self.assertIsInstance(clk100.p, Signal) + self.assertIsInstance(clk100.n, Signal) + + ports = list(self.cm.iter_ports()) + self.assertEqual(len(ports), 2) + self.assertIs(ports[0], clk100.p) + self.assertIs(ports[1], clk100.n) + + def test_request_via_connector(self): + self.cm.add_resources([ + Resource("spi", 0, + Subsignal("ss", Pins("1", conn=("pmod", 0))), + Subsignal("clk", Pins("2", conn=("pmod", 0))), + Subsignal("miso", Pins("3", conn=("pmod", 0))), + Subsignal("mosi", Pins("4", conn=("pmod", 0))), + ) + ]) + spi0 = self.cm.request("spi", 0) + self.assertEqual(list(self.cm.iter_port_constraints()), [ + ("spi_0__ss__io", ["B0"], {}), + ("spi_0__clk__io", ["B1"], {}), + ("spi_0__miso__io", ["B2"], {}), + ("spi_0__mosi__io", ["B3"], {}), + ]) + + def test_request_via_nested_connector(self): + new_connectors = [ + Connector("pmod_extension", 0, "1 2 3 4 - -", conn=("pmod", 0)), + ] + self.cm.add_connectors(new_connectors) + self.cm.add_resources([ + Resource("spi", 0, + Subsignal("ss", Pins("1", conn=("pmod_extension", 0))), + Subsignal("clk", Pins("2", conn=("pmod_extension", 0))), + 
Subsignal("miso", Pins("3", conn=("pmod_extension", 0))), + Subsignal("mosi", Pins("4", conn=("pmod_extension", 0))), + ) + ]) + spi0 = self.cm.request("spi", 0) + self.assertEqual(list(self.cm.iter_port_constraints()), [ + ("spi_0__ss__io", ["B0"], {}), + ("spi_0__clk__io", ["B1"], {}), + ("spi_0__miso__io", ["B2"], {}), + ("spi_0__mosi__io", ["B3"], {}), + ]) + + def test_request_clock(self): + clk100 = self.cm.request("clk100", 0) + clk50 = self.cm.request("clk50", 0, dir="i") + clk100_port_p, clk100_port_n, clk50_port = self.cm.iter_ports() + self.assertEqual(list(self.cm.iter_clock_constraints()), [ + (clk100.i, clk100_port_p, 100e6), + (clk50.i, clk50_port, 50e6) + ]) + + def test_add_clock(self): + i2c = self.cm.request("i2c") + self.cm.add_clock_constraint(i2c.scl.o, 100e3) + self.assertEqual(list(self.cm.iter_clock_constraints()), [ + (i2c.scl.o, None, 100e3) + ]) + + def test_wrong_resources(self): + with self.assertRaises(TypeError, msg="Object 'wrong' is not a Resource"): + self.cm.add_resources(['wrong']) + + def test_wrong_resources_duplicate(self): + with self.assertRaises(NameError, + msg="Trying to add (resource user_led 0 (pins o A1)), but " + "(resource user_led 0 (pins o A0)) has the same name and number"): + self.cm.add_resources([Resource("user_led", 0, Pins("A1", dir="o"))]) + + def test_wrong_connectors(self): + with self.assertRaises(TypeError, msg="Object 'wrong' is not a Connector"): + self.cm.add_connectors(['wrong']) + + def test_wrong_connectors_duplicate(self): + with self.assertRaises(NameError, + msg="Trying to add (connector pmod 0 1=>1 2=>2), but " + "(connector pmod 0 1=>B0 2=>B1 3=>B2 4=>B3) has the same name and number"): + self.cm.add_connectors([Connector("pmod", 0, "1 2")]) + + def test_wrong_lookup(self): + with self.assertRaises(ResourceError, + msg="Resource user_led#1 does not exist"): + r = self.cm.lookup("user_led", 1) + + def test_wrong_clock_signal(self): + with self.assertRaises(TypeError, + msg="Object None is not a Signal"): + self.cm.add_clock_constraint(None, 10e6) + + def test_wrong_clock_frequency(self): + with self.assertRaises(TypeError, + msg="Frequency must be a number, not None"): + self.cm.add_clock_constraint(Signal(), None) + + def test_wrong_request_duplicate(self): + with self.assertRaises(ResourceError, + msg="Resource user_led#0 has already been requested"): + self.cm.request("user_led", 0) + self.cm.request("user_led", 0) + + def test_wrong_request_duplicate_physical(self): + self.cm.add_resources([ + Resource("clk20", 0, Pins("H1", dir="i")), + ]) + self.cm.request("clk100", 0) + with self.assertRaises(ResourceError, + msg="Resource component clk20_0 uses physical pin H1, but it is already " + "used by resource component clk100_0 that was requested earlier"): + self.cm.request("clk20", 0) + + def test_wrong_request_with_dir(self): + with self.assertRaises(TypeError, + msg="Direction must be one of \"i\", \"o\", \"oe\", \"io\", or \"-\", " + "not 'wrong'"): + user_led = self.cm.request("user_led", 0, dir="wrong") + + def test_wrong_request_with_dir_io(self): + with self.assertRaises(ValueError, + msg="Direction of (pins o A0) cannot be changed from \"o\" to \"i\"; direction " + "can be changed from \"io\" to \"i\", \"o\", or \"oe\", or from anything " + "to \"-\""): + user_led = self.cm.request("user_led", 0, dir="i") + + def test_wrong_request_with_dir_dict(self): + with self.assertRaises(TypeError, + msg="Directions must be a dict, not 'i', because (resource i2c 0 (subsignal scl " + "(pins o N10)) (subsignal sda (pins 
io N11))) " + "has subsignals"): + i2c = self.cm.request("i2c", 0, dir="i") + + def test_wrong_request_with_wrong_xdr(self): + with self.assertRaises(ValueError, + msg="Data rate of (pins o A0) must be a non-negative integer, not -1"): + user_led = self.cm.request("user_led", 0, xdr=-1) + + def test_wrong_request_with_xdr_dict(self): + with self.assertRaises(TypeError, + msg="Data rate must be a dict, not 2, because (resource i2c 0 (subsignal scl " + "(pins o N10)) (subsignal sda (pins io N11))) " + "has subsignals"): + i2c = self.cm.request("i2c", 0, xdr=2) + + def test_wrong_clock_constraint_twice(self): + clk100 = self.cm.request("clk100") + with self.assertRaises(ValueError, + msg="Cannot add clock constraint on (sig clk100_0__i), which is already " + "constrained to 100000000.0 Hz"): + self.cm.add_clock_constraint(clk100.i, 1e6) diff --git a/nmigen/test/test_compat.py b/nmigen/test/test_compat.py new file mode 100644 index 0000000..62e2d92 --- /dev/null +++ b/nmigen/test/test_compat.py @@ -0,0 +1,9 @@ +from ..hdl.ir import Fragment +from ..compat import * +from .utils import * + + +class CompatTestCase(FHDLTestCase): + def test_fragment_get(self): + m = Module() + f = Fragment.get(m, platform=None) diff --git a/nmigen/test/test_examples.py b/nmigen/test/test_examples.py new file mode 100644 index 0000000..94b9377 --- /dev/null +++ b/nmigen/test/test_examples.py @@ -0,0 +1,28 @@ +import sys +import subprocess +from pathlib import Path + +from .utils import * + + +def example_test(name): + path = (Path(__file__).parent / ".." / ".." / "examples" / name).resolve() + def test_function(self): + subprocess.check_call([sys.executable, str(path), "generate"], stdout=subprocess.DEVNULL) + return test_function + + +class ExamplesTestCase(FHDLTestCase): + test_alu = example_test("basic/alu.py") + test_alu_hier = example_test("basic/alu_hier.py") + test_arst = example_test("basic/arst.py") + test_cdc = example_test("basic/cdc.py") + test_ctr = example_test("basic/ctr.py") + test_ctr_en = example_test("basic/ctr_en.py") + test_fsm = example_test("basic/fsm.py") + test_gpio = example_test("basic/gpio.py") + test_inst = example_test("basic/inst.py") + test_mem = example_test("basic/mem.py") + test_pmux = example_test("basic/pmux.py") + test_por = example_test("basic/por.py") + test_uart = example_test("basic/uart.py") diff --git a/nmigen/test/test_hdl_ast.py b/nmigen/test/test_hdl_ast.py new file mode 100644 index 0000000..c0d26ee --- /dev/null +++ b/nmigen/test/test_hdl_ast.py @@ -0,0 +1,926 @@ +import warnings +from enum import Enum + +from ..hdl.ast import * +from .utils import * + + +class UnsignedEnum(Enum): + FOO = 1 + BAR = 2 + BAZ = 3 + + +class SignedEnum(Enum): + FOO = -1 + BAR = 0 + BAZ = +1 + + +class StringEnum(Enum): + FOO = "a" + BAR = "b" + + +class ShapeTestCase(FHDLTestCase): + def test_make(self): + s1 = Shape() + self.assertEqual(s1.width, 1) + self.assertEqual(s1.signed, False) + s2 = Shape(signed=True) + self.assertEqual(s2.width, 1) + self.assertEqual(s2.signed, True) + s3 = Shape(3, True) + self.assertEqual(s3.width, 3) + self.assertEqual(s3.signed, True) + + def test_make_wrong(self): + with self.assertRaises(TypeError, + msg="Width must be a non-negative integer, not -1"): + Shape(-1) + + def test_tuple(self): + width, signed = Shape() + self.assertEqual(width, 1) + self.assertEqual(signed, False) + + def test_unsigned(self): + s1 = unsigned(2) + self.assertIsInstance(s1, Shape) + self.assertEqual(s1.width, 2) + self.assertEqual(s1.signed, False) + + def test_signed(self): 
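+        # signed(width) is a shorthand for Shape(width, signed=True).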
+ s1 = signed(2) + self.assertIsInstance(s1, Shape) + self.assertEqual(s1.width, 2) + self.assertEqual(s1.signed, True) + + def test_cast_shape(self): + s1 = Shape.cast(unsigned(1)) + self.assertEqual(s1.width, 1) + self.assertEqual(s1.signed, False) + s2 = Shape.cast(signed(3)) + self.assertEqual(s2.width, 3) + self.assertEqual(s2.signed, True) + + def test_cast_int(self): + s1 = Shape.cast(2) + self.assertEqual(s1.width, 2) + self.assertEqual(s1.signed, False) + + def test_cast_int_wrong(self): + with self.assertRaises(TypeError, + msg="Width must be a non-negative integer, not -1"): + Shape.cast(-1) + + def test_cast_tuple(self): + with warnings.catch_warnings(): + warnings.filterwarnings(action="ignore", category=DeprecationWarning) + s1 = Shape.cast((1, True)) + self.assertEqual(s1.width, 1) + self.assertEqual(s1.signed, True) + + def test_cast_tuple_wrong(self): + with warnings.catch_warnings(): + warnings.filterwarnings(action="ignore", category=DeprecationWarning) + with self.assertRaises(TypeError, + msg="Width must be a non-negative integer, not -1"): + Shape.cast((-1, True)) + + def test_cast_range(self): + s1 = Shape.cast(range(0, 8)) + self.assertEqual(s1.width, 3) + self.assertEqual(s1.signed, False) + s2 = Shape.cast(range(0, 9)) + self.assertEqual(s2.width, 4) + self.assertEqual(s2.signed, False) + s3 = Shape.cast(range(-7, 8)) + self.assertEqual(s3.width, 4) + self.assertEqual(s3.signed, True) + s4 = Shape.cast(range(0, 1)) + self.assertEqual(s4.width, 1) + self.assertEqual(s4.signed, False) + s5 = Shape.cast(range(-1, 0)) + self.assertEqual(s5.width, 1) + self.assertEqual(s5.signed, True) + s6 = Shape.cast(range(0, 0)) + self.assertEqual(s6.width, 0) + self.assertEqual(s6.signed, False) + s7 = Shape.cast(range(-1, -1)) + self.assertEqual(s7.width, 0) + self.assertEqual(s7.signed, True) + + def test_cast_enum(self): + s1 = Shape.cast(UnsignedEnum) + self.assertEqual(s1.width, 2) + self.assertEqual(s1.signed, False) + s2 = Shape.cast(SignedEnum) + self.assertEqual(s2.width, 2) + self.assertEqual(s2.signed, True) + + def test_cast_enum_bad(self): + with self.assertRaises(TypeError, + msg="Only enumerations with integer values can be used as value shapes"): + Shape.cast(StringEnum) + + def test_cast_bad(self): + with self.assertRaises(TypeError, + msg="Object 'foo' cannot be used as value shape"): + Shape.cast("foo") + + +class ValueTestCase(FHDLTestCase): + def test_cast(self): + self.assertIsInstance(Value.cast(0), Const) + self.assertIsInstance(Value.cast(True), Const) + c = Const(0) + self.assertIs(Value.cast(c), c) + with self.assertRaises(TypeError, + msg="Object 'str' cannot be converted to an nMigen value"): + Value.cast("str") + + def test_cast_enum(self): + e1 = Value.cast(UnsignedEnum.FOO) + self.assertIsInstance(e1, Const) + self.assertEqual(e1.shape(), unsigned(2)) + e2 = Value.cast(SignedEnum.FOO) + self.assertIsInstance(e2, Const) + self.assertEqual(e2.shape(), signed(2)) + + def test_cast_enum_wrong(self): + with self.assertRaises(TypeError, + msg="Only enumerations with integer values can be used as value shapes"): + Value.cast(StringEnum.FOO) + + def test_bool(self): + with self.assertRaises(TypeError, + msg="Attempted to convert nMigen value to boolean"): + if Const(0): + pass + + def test_len(self): + self.assertEqual(len(Const(10)), 4) + + def test_getitem_int(self): + s1 = Const(10)[0] + self.assertIsInstance(s1, Slice) + self.assertEqual(s1.start, 0) + self.assertEqual(s1.stop, 1) + s2 = Const(10)[-1] + self.assertIsInstance(s2, Slice) + 
self.assertEqual(s2.start, 3) + self.assertEqual(s2.stop, 4) + with self.assertRaises(IndexError, + msg="Cannot index 5 bits into 4-bit value"): + Const(10)[5] + + def test_getitem_slice(self): + s1 = Const(10)[1:3] + self.assertIsInstance(s1, Slice) + self.assertEqual(s1.start, 1) + self.assertEqual(s1.stop, 3) + s2 = Const(10)[1:-2] + self.assertIsInstance(s2, Slice) + self.assertEqual(s2.start, 1) + self.assertEqual(s2.stop, 2) + s3 = Const(31)[::2] + self.assertIsInstance(s3, Cat) + self.assertIsInstance(s3.parts[0], Slice) + self.assertEqual(s3.parts[0].start, 0) + self.assertEqual(s3.parts[0].stop, 1) + self.assertIsInstance(s3.parts[1], Slice) + self.assertEqual(s3.parts[1].start, 2) + self.assertEqual(s3.parts[1].stop, 3) + self.assertIsInstance(s3.parts[2], Slice) + self.assertEqual(s3.parts[2].start, 4) + self.assertEqual(s3.parts[2].stop, 5) + + def test_getitem_wrong(self): + with self.assertRaises(TypeError, + msg="Cannot index value with 'str'"): + Const(31)["str"] + + +class ConstTestCase(FHDLTestCase): + def test_shape(self): + self.assertEqual(Const(0).shape(), unsigned(1)) + self.assertIsInstance(Const(0).shape(), Shape) + self.assertEqual(Const(1).shape(), unsigned(1)) + self.assertEqual(Const(10).shape(), unsigned(4)) + self.assertEqual(Const(-10).shape(), signed(5)) + + self.assertEqual(Const(1, 4).shape(), unsigned(4)) + self.assertEqual(Const(-1, 4).shape(), signed(4)) + self.assertEqual(Const(1, signed(4)).shape(), signed(4)) + self.assertEqual(Const(0, unsigned(0)).shape(), unsigned(0)) + + def test_shape_wrong(self): + with self.assertRaises(TypeError, + msg="Width must be a non-negative integer, not -1"): + Const(1, -1) + + def test_normalization(self): + self.assertEqual(Const(0b10110, signed(5)).value, -10) + + def test_value(self): + self.assertEqual(Const(10).value, 10) + + def test_repr(self): + self.assertEqual(repr(Const(10)), "(const 4'd10)") + self.assertEqual(repr(Const(-10)), "(const 5'sd-10)") + + def test_hash(self): + with self.assertRaises(TypeError): + hash(Const(0)) + + +class OperatorTestCase(FHDLTestCase): + def test_bool(self): + v = Const(0, 4).bool() + self.assertEqual(repr(v), "(b (const 4'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_invert(self): + v = ~Const(0, 4) + self.assertEqual(repr(v), "(~ (const 4'd0))") + self.assertEqual(v.shape(), unsigned(4)) + + def test_as_unsigned(self): + v = Const(-1, signed(4)).as_unsigned() + self.assertEqual(repr(v), "(u (const 4'sd-1))") + self.assertEqual(v.shape(), unsigned(4)) + + def test_as_signed(self): + v = Const(1, unsigned(4)).as_signed() + self.assertEqual(repr(v), "(s (const 4'd1))") + self.assertEqual(v.shape(), signed(4)) + + def test_neg(self): + v1 = -Const(0, unsigned(4)) + self.assertEqual(repr(v1), "(- (const 4'd0))") + self.assertEqual(v1.shape(), signed(5)) + v2 = -Const(0, signed(4)) + self.assertEqual(repr(v2), "(- (const 4'sd0))") + self.assertEqual(v2.shape(), signed(5)) + + def test_add(self): + v1 = Const(0, unsigned(4)) + Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(+ (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(7)) + v2 = Const(0, signed(4)) + Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(7)) + v3 = Const(0, signed(4)) + Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(6)) + v4 = Const(0, unsigned(4)) + Const(0, signed(4)) + self.assertEqual(v4.shape(), signed(6)) + v5 = 10 + Const(0, 4) + self.assertEqual(v5.shape(), unsigned(5)) + + def test_sub(self): + v1 = Const(0, unsigned(4)) - Const(0, 
unsigned(6)) + self.assertEqual(repr(v1), "(- (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(7)) + v2 = Const(0, signed(4)) - Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(7)) + v3 = Const(0, signed(4)) - Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(6)) + v4 = Const(0, unsigned(4)) - Const(0, signed(4)) + self.assertEqual(v4.shape(), signed(6)) + v5 = 10 - Const(0, 4) + self.assertEqual(v5.shape(), unsigned(5)) + + def test_mul(self): + v1 = Const(0, unsigned(4)) * Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(* (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(10)) + v2 = Const(0, signed(4)) * Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(10)) + v3 = Const(0, signed(4)) * Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(8)) + v5 = 10 * Const(0, 4) + self.assertEqual(v5.shape(), unsigned(8)) + + def test_mod(self): + v1 = Const(0, unsigned(4)) % Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(% (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(4)) + v3 = Const(0, signed(4)) % Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(4)) + v5 = 10 % Const(0, 4) + self.assertEqual(v5.shape(), unsigned(4)) + + def test_mod_wrong(self): + with self.assertRaises(NotImplementedError, + msg="Division by a signed value is not supported"): + Const(0, signed(4)) % Const(0, signed(6)) + + def test_floordiv(self): + v1 = Const(0, unsigned(4)) // Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(// (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(4)) + v3 = Const(0, signed(4)) // Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(4)) + v5 = 10 // Const(0, 4) + self.assertEqual(v5.shape(), unsigned(4)) + + def test_floordiv_wrong(self): + with self.assertRaises(NotImplementedError, + msg="Division by a signed value is not supported"): + Const(0, signed(4)) // Const(0, signed(6)) + + def test_and(self): + v1 = Const(0, unsigned(4)) & Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(& (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(6)) + v2 = Const(0, signed(4)) & Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(6)) + v3 = Const(0, signed(4)) & Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(5)) + v4 = Const(0, unsigned(4)) & Const(0, signed(4)) + self.assertEqual(v4.shape(), signed(5)) + v5 = 10 & Const(0, 4) + self.assertEqual(v5.shape(), unsigned(4)) + + def test_or(self): + v1 = Const(0, unsigned(4)) | Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(| (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(6)) + v2 = Const(0, signed(4)) | Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(6)) + v3 = Const(0, signed(4)) | Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(5)) + v4 = Const(0, unsigned(4)) | Const(0, signed(4)) + self.assertEqual(v4.shape(), signed(5)) + v5 = 10 | Const(0, 4) + self.assertEqual(v5.shape(), unsigned(4)) + + def test_xor(self): + v1 = Const(0, unsigned(4)) ^ Const(0, unsigned(6)) + self.assertEqual(repr(v1), "(^ (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(6)) + v2 = Const(0, signed(4)) ^ Const(0, signed(6)) + self.assertEqual(v2.shape(), signed(6)) + v3 = Const(0, signed(4)) ^ Const(0, unsigned(4)) + self.assertEqual(v3.shape(), signed(5)) + v4 = Const(0, unsigned(4)) ^ Const(0, signed(4)) + self.assertEqual(v4.shape(), signed(5)) + v5 = 10 ^ Const(0, 4) + self.assertEqual(v5.shape(), unsigned(4)) + + def 
test_shl(self): + v1 = Const(1, 4) << Const(4) + self.assertEqual(repr(v1), "(<< (const 4'd1) (const 3'd4))") + self.assertEqual(v1.shape(), unsigned(11)) + + def test_shl_wrong(self): + with self.assertRaises(NotImplementedError, + msg="Shift by a signed value is not supported"): + 1 << Const(0, signed(6)) + with self.assertRaises(NotImplementedError, + msg="Shift by a signed value is not supported"): + Const(1, unsigned(4)) << -1 + + def test_shr(self): + v1 = Const(1, 4) >> Const(4) + self.assertEqual(repr(v1), "(>> (const 4'd1) (const 3'd4))") + self.assertEqual(v1.shape(), unsigned(4)) + + def test_shr_wrong(self): + with self.assertRaises(NotImplementedError, + msg="Shift by a signed value is not supported"): + 1 << Const(0, signed(6)) + with self.assertRaises(NotImplementedError, + msg="Shift by a signed value is not supported"): + Const(1, unsigned(4)) << -1 + + def test_lt(self): + v = Const(0, 4) < Const(0, 6) + self.assertEqual(repr(v), "(< (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_le(self): + v = Const(0, 4) <= Const(0, 6) + self.assertEqual(repr(v), "(<= (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_gt(self): + v = Const(0, 4) > Const(0, 6) + self.assertEqual(repr(v), "(> (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_ge(self): + v = Const(0, 4) >= Const(0, 6) + self.assertEqual(repr(v), "(>= (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_eq(self): + v = Const(0, 4) == Const(0, 6) + self.assertEqual(repr(v), "(== (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_ne(self): + v = Const(0, 4) != Const(0, 6) + self.assertEqual(repr(v), "(!= (const 4'd0) (const 6'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_mux(self): + s = Const(0) + v1 = Mux(s, Const(0, unsigned(4)), Const(0, unsigned(6))) + self.assertEqual(repr(v1), "(m (const 1'd0) (const 4'd0) (const 6'd0))") + self.assertEqual(v1.shape(), unsigned(6)) + v2 = Mux(s, Const(0, signed(4)), Const(0, signed(6))) + self.assertEqual(v2.shape(), signed(6)) + v3 = Mux(s, Const(0, signed(4)), Const(0, unsigned(4))) + self.assertEqual(v3.shape(), signed(5)) + v4 = Mux(s, Const(0, unsigned(4)), Const(0, signed(4))) + self.assertEqual(v4.shape(), signed(5)) + + def test_mux_wide(self): + s = Const(0b100) + v = Mux(s, Const(0, unsigned(4)), Const(0, unsigned(6))) + self.assertEqual(repr(v), "(m (b (const 3'd4)) (const 4'd0) (const 6'd0))") + + def test_mux_bool(self): + v = Mux(True, Const(0), Const(0)) + self.assertEqual(repr(v), "(m (const 1'd1) (const 1'd0) (const 1'd0))") + + def test_bool(self): + v = Const(0).bool() + self.assertEqual(repr(v), "(b (const 1'd0))") + self.assertEqual(v.shape(), unsigned(1)) + + def test_any(self): + v = Const(0b101).any() + self.assertEqual(repr(v), "(r| (const 3'd5))") + + def test_all(self): + v = Const(0b101).all() + self.assertEqual(repr(v), "(r& (const 3'd5))") + + def test_xor(self): + v = Const(0b101).xor() + self.assertEqual(repr(v), "(r^ (const 3'd5))") + + def test_matches(self): + s = Signal(4) + self.assertRepr(s.matches(), "(const 1'd0)") + self.assertRepr(s.matches(1), """ + (== (sig s) (const 1'd1)) + """) + self.assertRepr(s.matches(0, 1), """ + (r| (cat (== (sig s) (const 1'd0)) (== (sig s) (const 1'd1)))) + """) + self.assertRepr(s.matches("10--"), """ + (== (& (sig s) (const 4'd12)) (const 4'd8)) + """) + self.assertRepr(s.matches("1 0--"), """ + (== (& (sig s) (const 4'd12)) 
(const 4'd8)) + """) + + def test_matches_enum(self): + s = Signal(SignedEnum) + self.assertRepr(s.matches(SignedEnum.FOO), """ + (== (sig s) (const 1'sd-1)) + """) + + def test_matches_width_wrong(self): + s = Signal(4) + with self.assertRaises(SyntaxError, + msg="Match pattern '--' must have the same width as match value (which is 4)"): + s.matches("--") + with self.assertWarns(SyntaxWarning, + msg="Match pattern '10110' is wider than match value (which has width 4); " + "comparison will never be true"): + s.matches(0b10110) + + def test_matches_bits_wrong(self): + s = Signal(4) + with self.assertRaises(SyntaxError, + msg="Match pattern 'abc' must consist of 0, 1, and - (don't care) bits, " + "and may include whitespace"): + s.matches("abc") + + def test_matches_pattern_wrong(self): + s = Signal(4) + with self.assertRaises(SyntaxError, + msg="Match pattern must be an integer, a string, or an enumeration, not 1.0"): + s.matches(1.0) + + def test_hash(self): + with self.assertRaises(TypeError): + hash(Const(0) + Const(0)) + + +class SliceTestCase(FHDLTestCase): + def test_shape(self): + s1 = Const(10)[2] + self.assertEqual(s1.shape(), unsigned(1)) + self.assertIsInstance(s1.shape(), Shape) + s2 = Const(-10)[0:2] + self.assertEqual(s2.shape(), unsigned(2)) + + def test_start_end_negative(self): + c = Const(0, 8) + s1 = Slice(c, 0, -1) + self.assertEqual((s1.start, s1.stop), (0, 7)) + s1 = Slice(c, -4, -1) + self.assertEqual((s1.start, s1.stop), (4, 7)) + + def test_start_end_wrong(self): + with self.assertRaises(TypeError, + msg="Slice start must be an integer, not 'x'"): + Slice(0, "x", 1) + with self.assertRaises(TypeError, + msg="Slice stop must be an integer, not 'x'"): + Slice(0, 1, "x") + + def test_start_end_out_of_range(self): + c = Const(0, 8) + with self.assertRaises(IndexError, + msg="Cannot start slice 10 bits into 8-bit value"): + Slice(c, 10, 12) + with self.assertRaises(IndexError, + msg="Cannot stop slice 12 bits into 8-bit value"): + Slice(c, 0, 12) + with self.assertRaises(IndexError, + msg="Slice start 4 must be less than slice stop 2"): + Slice(c, 4, 2) + + def test_repr(self): + s1 = Const(10)[2] + self.assertEqual(repr(s1), "(slice (const 4'd10) 2:3)") + + +class BitSelectTestCase(FHDLTestCase): + def setUp(self): + self.c = Const(0, 8) + self.s = Signal(range(self.c.width)) + + def test_shape(self): + s1 = self.c.bit_select(self.s, 2) + self.assertIsInstance(s1, Part) + self.assertEqual(s1.shape(), unsigned(2)) + self.assertIsInstance(s1.shape(), Shape) + s2 = self.c.bit_select(self.s, 0) + self.assertIsInstance(s2, Part) + self.assertEqual(s2.shape(), unsigned(0)) + + def test_stride(self): + s1 = self.c.bit_select(self.s, 2) + self.assertIsInstance(s1, Part) + self.assertEqual(s1.stride, 1) + + def test_const(self): + s1 = self.c.bit_select(1, 2) + self.assertIsInstance(s1, Slice) + self.assertRepr(s1, """(slice (const 8'd0) 1:3)""") + + def test_width_wrong(self): + with self.assertRaises(TypeError): + self.c.bit_select(self.s, -1) + + def test_repr(self): + s = self.c.bit_select(self.s, 2) + self.assertEqual(repr(s), "(part (const 8'd0) (sig s) 2 1)") + + +class WordSelectTestCase(FHDLTestCase): + def setUp(self): + self.c = Const(0, 8) + self.s = Signal(range(self.c.width)) + + def test_shape(self): + s1 = self.c.word_select(self.s, 2) + self.assertIsInstance(s1, Part) + self.assertEqual(s1.shape(), unsigned(2)) + self.assertIsInstance(s1.shape(), Shape) + + def test_stride(self): + s1 = self.c.word_select(self.s, 2) + self.assertIsInstance(s1, Part) + 
self.assertEqual(s1.stride, 2) + + def test_const(self): + s1 = self.c.word_select(1, 2) + self.assertIsInstance(s1, Slice) + self.assertRepr(s1, """(slice (const 8'd0) 2:4)""") + + def test_width_wrong(self): + with self.assertRaises(TypeError): + self.c.word_select(self.s, 0) + with self.assertRaises(TypeError): + self.c.word_select(self.s, -1) + + def test_repr(self): + s = self.c.word_select(self.s, 2) + self.assertEqual(repr(s), "(part (const 8'd0) (sig s) 2 2)") + + +class CatTestCase(FHDLTestCase): + def test_shape(self): + c0 = Cat() + self.assertEqual(c0.shape(), unsigned(0)) + self.assertIsInstance(c0.shape(), Shape) + c1 = Cat(Const(10)) + self.assertEqual(c1.shape(), unsigned(4)) + c2 = Cat(Const(10), Const(1)) + self.assertEqual(c2.shape(), unsigned(5)) + c3 = Cat(Const(10), Const(1), Const(0)) + self.assertEqual(c3.shape(), unsigned(6)) + + def test_repr(self): + c1 = Cat(Const(10), Const(1)) + self.assertEqual(repr(c1), "(cat (const 4'd10) (const 1'd1))") + + +class ReplTestCase(FHDLTestCase): + def test_shape(self): + s1 = Repl(Const(10), 3) + self.assertEqual(s1.shape(), unsigned(12)) + self.assertIsInstance(s1.shape(), Shape) + s2 = Repl(Const(10), 0) + self.assertEqual(s2.shape(), unsigned(0)) + + def test_count_wrong(self): + with self.assertRaises(TypeError): + Repl(Const(10), -1) + with self.assertRaises(TypeError): + Repl(Const(10), "str") + + def test_repr(self): + s = Repl(Const(10), 3) + self.assertEqual(repr(s), "(repl (const 4'd10) 3)") + + +class ArrayTestCase(FHDLTestCase): + def test_acts_like_array(self): + a = Array([1,2,3]) + self.assertSequenceEqual(a, [1,2,3]) + self.assertEqual(a[1], 2) + a[1] = 4 + self.assertSequenceEqual(a, [1,4,3]) + del a[1] + self.assertSequenceEqual(a, [1,3]) + a.insert(1, 2) + self.assertSequenceEqual(a, [1,2,3]) + + def test_becomes_immutable(self): + a = Array([1,2,3]) + s1 = Signal(range(len(a))) + s2 = Signal(range(len(a))) + v1 = a[s1] + v2 = a[s2] + with self.assertRaisesRegex(ValueError, + regex=r"^Array can no longer be mutated after it was indexed with a value at "): + a[1] = 2 + with self.assertRaisesRegex(ValueError, + regex=r"^Array can no longer be mutated after it was indexed with a value at "): + del a[1] + with self.assertRaisesRegex(ValueError, + regex=r"^Array can no longer be mutated after it was indexed with a value at "): + a.insert(1, 2) + + def test_repr(self): + a = Array([1,2,3]) + self.assertEqual(repr(a), "(array mutable [1, 2, 3])") + s = Signal(range(len(a))) + v = a[s] + self.assertEqual(repr(a), "(array [1, 2, 3])") + + +class ArrayProxyTestCase(FHDLTestCase): + def test_index_shape(self): + m = Array(Array(x * y for y in range(1, 4)) for x in range(1, 4)) + a = Signal(range(3)) + b = Signal(range(3)) + v = m[a][b] + self.assertEqual(v.shape(), unsigned(4)) + + def test_attr_shape(self): + from collections import namedtuple + pair = namedtuple("pair", ("p", "n")) + a = Array(pair(i, -i) for i in range(10)) + s = Signal(range(len(a))) + v = a[s] + self.assertEqual(v.p.shape(), unsigned(4)) + self.assertEqual(v.n.shape(), signed(6)) + + def test_repr(self): + a = Array([1, 2, 3]) + s = Signal(range(3)) + v = a[s] + self.assertEqual(repr(v), "(proxy (array [1, 2, 3]) (sig s))") + + +class SignalTestCase(FHDLTestCase): + def test_shape(self): + s1 = Signal() + self.assertEqual(s1.shape(), unsigned(1)) + self.assertIsInstance(s1.shape(), Shape) + s2 = Signal(2) + self.assertEqual(s2.shape(), unsigned(2)) + s3 = Signal(unsigned(2)) + self.assertEqual(s3.shape(), unsigned(2)) + s4 = Signal(signed(2)) + 
self.assertEqual(s4.shape(), signed(2)) + s5 = Signal(0) + self.assertEqual(s5.shape(), unsigned(0)) + s6 = Signal(range(16)) + self.assertEqual(s6.shape(), unsigned(4)) + s7 = Signal(range(4, 16)) + self.assertEqual(s7.shape(), unsigned(4)) + s8 = Signal(range(-4, 16)) + self.assertEqual(s8.shape(), signed(5)) + s9 = Signal(range(-20, 16)) + self.assertEqual(s9.shape(), signed(6)) + s10 = Signal(range(0)) + self.assertEqual(s10.shape(), unsigned(0)) + s11 = Signal(range(1)) + self.assertEqual(s11.shape(), unsigned(1)) + + def test_shape_wrong(self): + with self.assertRaises(TypeError, + msg="Width must be a non-negative integer, not -10"): + Signal(-10) + + def test_name(self): + s1 = Signal() + self.assertEqual(s1.name, "s1") + s2 = Signal(name="sig") + self.assertEqual(s2.name, "sig") + + def test_reset(self): + s1 = Signal(4, reset=0b111, reset_less=True) + self.assertEqual(s1.reset, 0b111) + self.assertEqual(s1.reset_less, True) + + def test_reset_enum(self): + s1 = Signal(2, reset=UnsignedEnum.BAR) + self.assertEqual(s1.reset, 2) + with self.assertRaises(TypeError, + msg="Reset value has to be an int or an integral Enum" + ): + Signal(1, reset=StringEnum.FOO) + + def test_reset_narrow(self): + with self.assertWarns(SyntaxWarning, + msg="Reset value 8 requires 4 bits to represent, but the signal only has 3 bits"): + Signal(3, reset=8) + with self.assertWarns(SyntaxWarning, + msg="Reset value 4 requires 4 bits to represent, but the signal only has 3 bits"): + Signal(signed(3), reset=4) + with self.assertWarns(SyntaxWarning, + msg="Reset value -5 requires 4 bits to represent, but the signal only has 3 bits"): + Signal(signed(3), reset=-5) + + def test_attrs(self): + s1 = Signal() + self.assertEqual(s1.attrs, {}) + s2 = Signal(attrs={"no_retiming": True}) + self.assertEqual(s2.attrs, {"no_retiming": True}) + + def test_repr(self): + s1 = Signal() + self.assertEqual(repr(s1), "(sig s1)") + + def test_like(self): + s1 = Signal.like(Signal(4)) + self.assertEqual(s1.shape(), unsigned(4)) + s2 = Signal.like(Signal(range(-15, 1))) + self.assertEqual(s2.shape(), signed(5)) + s3 = Signal.like(Signal(4, reset=0b111, reset_less=True)) + self.assertEqual(s3.reset, 0b111) + self.assertEqual(s3.reset_less, True) + s4 = Signal.like(Signal(attrs={"no_retiming": True})) + self.assertEqual(s4.attrs, {"no_retiming": True}) + s5 = Signal.like(Signal(decoder=str)) + self.assertEqual(s5.decoder, str) + s6 = Signal.like(10) + self.assertEqual(s6.shape(), unsigned(4)) + s7 = [Signal.like(Signal(4))][0] + self.assertEqual(s7.name, "$like") + s8 = Signal.like(s1, name_suffix="_ff") + self.assertEqual(s8.name, "s1_ff") + + def test_decoder(self): + class Color(Enum): + RED = 1 + BLUE = 2 + s = Signal(decoder=Color) + self.assertEqual(s.decoder(1), "RED/1") + self.assertEqual(s.decoder(3), "3") + + def test_enum(self): + s1 = Signal(UnsignedEnum) + self.assertEqual(s1.shape(), unsigned(2)) + s2 = Signal(SignedEnum) + self.assertEqual(s2.shape(), signed(2)) + self.assertEqual(s2.decoder(SignedEnum.FOO), "FOO/-1") + + +class ClockSignalTestCase(FHDLTestCase): + def test_domain(self): + s1 = ClockSignal() + self.assertEqual(s1.domain, "sync") + s2 = ClockSignal("pix") + self.assertEqual(s2.domain, "pix") + + with self.assertRaises(TypeError, + msg="Clock domain name must be a string, not 1"): + ClockSignal(1) + + def test_shape(self): + s1 = ClockSignal() + self.assertEqual(s1.shape(), unsigned(1)) + self.assertIsInstance(s1.shape(), Shape) + + def test_repr(self): + s1 = ClockSignal() + self.assertEqual(repr(s1), 
"(clk sync)") + + def test_wrong_name_comb(self): + with self.assertRaises(ValueError, + msg="Domain 'comb' does not have a clock"): + ClockSignal("comb") + + +class ResetSignalTestCase(FHDLTestCase): + def test_domain(self): + s1 = ResetSignal() + self.assertEqual(s1.domain, "sync") + s2 = ResetSignal("pix") + self.assertEqual(s2.domain, "pix") + + with self.assertRaises(TypeError, + msg="Clock domain name must be a string, not 1"): + ResetSignal(1) + + def test_shape(self): + s1 = ResetSignal() + self.assertEqual(s1.shape(), unsigned(1)) + self.assertIsInstance(s1.shape(), Shape) + + def test_repr(self): + s1 = ResetSignal() + self.assertEqual(repr(s1), "(rst sync)") + + def test_wrong_name_comb(self): + with self.assertRaises(ValueError, + msg="Domain 'comb' does not have a reset"): + ResetSignal("comb") + + +class MockUserValue(UserValue): + def __init__(self, lowered): + super().__init__() + self.lower_count = 0 + self.lowered = lowered + + def lower(self): + self.lower_count += 1 + return self.lowered + + +class UserValueTestCase(FHDLTestCase): + def test_shape(self): + uv = MockUserValue(1) + self.assertEqual(uv.shape(), unsigned(1)) + self.assertIsInstance(uv.shape(), Shape) + uv.lowered = 2 + self.assertEqual(uv.shape(), unsigned(1)) + self.assertEqual(uv.lower_count, 1) + + +class SampleTestCase(FHDLTestCase): + def test_const(self): + s = Sample(1, 1, "sync") + self.assertEqual(s.shape(), unsigned(1)) + + def test_signal(self): + s1 = Sample(Signal(2), 1, "sync") + self.assertEqual(s1.shape(), unsigned(2)) + s2 = Sample(ClockSignal(), 1, "sync") + s3 = Sample(ResetSignal(), 1, "sync") + + def test_wrong_value_operator(self): + with self.assertRaises(TypeError, + "Sampled value must be a signal or a constant, not " + "(+ (sig $signal) (const 1'd1))"): + Sample(Signal() + 1, 1, "sync") + + def test_wrong_clocks_neg(self): + with self.assertRaises(ValueError, + "Cannot sample a value 1 cycles in the future"): + Sample(Signal(), -1, "sync") + + def test_wrong_domain(self): + with self.assertRaises(TypeError, + "Domain name must be a string or None, not 0"): + Sample(Signal(), 1, 0) + + +class InitialTestCase(FHDLTestCase): + def test_initial(self): + i = Initial() + self.assertEqual(i.shape(), unsigned(1)) diff --git a/nmigen/test/test_hdl_cd.py b/nmigen/test/test_hdl_cd.py new file mode 100644 index 0000000..85bcfc8 --- /dev/null +++ b/nmigen/test/test_hdl_cd.py @@ -0,0 +1,78 @@ +from ..hdl.cd import * +from .utils import * + + +class ClockDomainTestCase(FHDLTestCase): + def test_name(self): + sync = ClockDomain() + self.assertEqual(sync.name, "sync") + self.assertEqual(sync.clk.name, "clk") + self.assertEqual(sync.rst.name, "rst") + self.assertEqual(sync.local, False) + pix = ClockDomain() + self.assertEqual(pix.name, "pix") + self.assertEqual(pix.clk.name, "pix_clk") + self.assertEqual(pix.rst.name, "pix_rst") + cd_pix = ClockDomain() + self.assertEqual(pix.name, "pix") + dom = [ClockDomain("foo")][0] + self.assertEqual(dom.name, "foo") + with self.assertRaises(ValueError, + msg="Clock domain name must be specified explicitly"): + ClockDomain() + cd_reset = ClockDomain(local=True) + self.assertEqual(cd_reset.local, True) + + def test_edge(self): + sync = ClockDomain() + self.assertEqual(sync.clk_edge, "pos") + sync = ClockDomain(clk_edge="pos") + self.assertEqual(sync.clk_edge, "pos") + sync = ClockDomain(clk_edge="neg") + self.assertEqual(sync.clk_edge, "neg") + + def test_edge_wrong(self): + with self.assertRaises(ValueError, + msg="Domain clock edge must be one of 'pos' or 
'neg', not 'xxx'"): + ClockDomain("sync", clk_edge="xxx") + + def test_with_reset(self): + pix = ClockDomain() + self.assertIsNotNone(pix.clk) + self.assertIsNotNone(pix.rst) + self.assertFalse(pix.async_reset) + + def test_without_reset(self): + pix = ClockDomain(reset_less=True) + self.assertIsNotNone(pix.clk) + self.assertIsNone(pix.rst) + self.assertFalse(pix.async_reset) + + def test_async_reset(self): + pix = ClockDomain(async_reset=True) + self.assertIsNotNone(pix.clk) + self.assertIsNotNone(pix.rst) + self.assertTrue(pix.async_reset) + + def test_rename(self): + sync = ClockDomain() + self.assertEqual(sync.name, "sync") + self.assertEqual(sync.clk.name, "clk") + self.assertEqual(sync.rst.name, "rst") + sync.rename("pix") + self.assertEqual(sync.name, "pix") + self.assertEqual(sync.clk.name, "pix_clk") + self.assertEqual(sync.rst.name, "pix_rst") + + def test_rename_reset_less(self): + sync = ClockDomain(reset_less=True) + self.assertEqual(sync.name, "sync") + self.assertEqual(sync.clk.name, "clk") + sync.rename("pix") + self.assertEqual(sync.name, "pix") + self.assertEqual(sync.clk.name, "pix_clk") + + def test_wrong_name_comb(self): + with self.assertRaises(ValueError, + msg="Domain 'comb' may not be clocked"): + comb = ClockDomain() diff --git a/nmigen/test/test_hdl_dsl.py b/nmigen/test/test_hdl_dsl.py new file mode 100644 index 0000000..1c5686b --- /dev/null +++ b/nmigen/test/test_hdl_dsl.py @@ -0,0 +1,762 @@ +# nmigen: UnusedElaboratable=no + +from collections import OrderedDict +from enum import Enum + +from ..hdl.ast import * +from ..hdl.cd import * +from ..hdl.dsl import * +from .utils import * + + +class DSLTestCase(FHDLTestCase): + def setUp(self): + self.s1 = Signal() + self.s2 = Signal() + self.s3 = Signal() + self.c1 = Signal() + self.c2 = Signal() + self.c3 = Signal() + self.w1 = Signal(4) + + def test_cant_inherit(self): + with self.assertRaises(SyntaxError, + msg="Instead of inheriting from `Module`, inherit from `Elaboratable` and " + "return a `Module` from the `elaborate(self, platform)` method"): + class ORGate(Module): + pass + + def test_d_comb(self): + m = Module() + m.d.comb += self.c1.eq(1) + m._flush() + self.assertEqual(m._driving[self.c1], None) + self.assertRepr(m._statements, """( + (eq (sig c1) (const 1'd1)) + )""") + + def test_d_sync(self): + m = Module() + m.d.sync += self.c1.eq(1) + m._flush() + self.assertEqual(m._driving[self.c1], "sync") + self.assertRepr(m._statements, """( + (eq (sig c1) (const 1'd1)) + )""") + + def test_d_pix(self): + m = Module() + m.d.pix += self.c1.eq(1) + m._flush() + self.assertEqual(m._driving[self.c1], "pix") + self.assertRepr(m._statements, """( + (eq (sig c1) (const 1'd1)) + )""") + + def test_d_index(self): + m = Module() + m.d["pix"] += self.c1.eq(1) + m._flush() + self.assertEqual(m._driving[self.c1], "pix") + self.assertRepr(m._statements, """( + (eq (sig c1) (const 1'd1)) + )""") + + def test_d_no_conflict(self): + m = Module() + m.d.comb += self.w1[0].eq(1) + m.d.comb += self.w1[1].eq(1) + + def test_d_conflict(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Driver-driver conflict: trying to drive (sig c1) from d.sync, but it " + "is already driven from d.comb"): + m.d.comb += self.c1.eq(1) + m.d.sync += self.c1.eq(1) + + def test_d_wrong(self): + m = Module() + with self.assertRaises(AttributeError, + msg="Cannot assign 'd.pix' attribute; did you mean 'd.pix +='?"): + m.d.pix = None + + def test_d_asgn_wrong(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Only assignments 
and property checks may be appended to d.sync"): + m.d.sync += Switch(self.s1, {}) + + def test_comb_wrong(self): + m = Module() + with self.assertRaises(AttributeError, + msg="'Module' object has no attribute 'comb'; did you mean 'd.comb'?"): + m.comb += self.c1.eq(1) + + def test_sync_wrong(self): + m = Module() + with self.assertRaises(AttributeError, + msg="'Module' object has no attribute 'sync'; did you mean 'd.sync'?"): + m.sync += self.c1.eq(1) + + def test_attr_wrong(self): + m = Module() + with self.assertRaises(AttributeError, + msg="'Module' object has no attribute 'nonexistentattr'"): + m.nonexistentattr + + def test_d_suspicious(self): + m = Module() + with self.assertWarns(SyntaxWarning, + msg="Using '.d.submodules' would add statements to clock domain " + "'submodules'; did you mean .submodules instead?"): + m.d.submodules += [] + + def test_clock_signal(self): + m = Module() + m.d.comb += ClockSignal("pix").eq(ClockSignal()) + self.assertRepr(m._statements, """ + ( + (eq (clk pix) (clk sync)) + ) + """) + + def test_reset_signal(self): + m = Module() + m.d.comb += ResetSignal("pix").eq(1) + self.assertRepr(m._statements, """ + ( + (eq (rst pix) (const 1'd1)) + ) + """) + + def test_sample_domain(self): + m = Module() + i = Signal() + o1 = Signal() + o2 = Signal() + o3 = Signal() + m.d.sync += o1.eq(Past(i)) + m.d.pix += o2.eq(Past(i)) + m.d.pix += o3.eq(Past(i, domain="sync")) + f = m.elaborate(platform=None) + self.assertRepr(f.statements, """ + ( + (eq (sig o1) (sample (sig i) @ sync[1])) + (eq (sig o2) (sample (sig i) @ pix[1])) + (eq (sig o3) (sample (sig i) @ sync[1])) + ) + """) + + def test_If(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig s1)) + (case 1 (eq (sig c1) (const 1'd1))) + ) + ) + """) + + def test_If_Elif(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + with m.Elif(self.s2): + m.d.sync += self.c2.eq(0) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig s1) (sig s2)) + (case -1 (eq (sig c1) (const 1'd1))) + (case 1- (eq (sig c2) (const 1'd0))) + ) + ) + """) + + def test_If_Elif_Else(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + with m.Elif(self.s2): + m.d.sync += self.c2.eq(0) + with m.Else(): + m.d.comb += self.c3.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig s1) (sig s2)) + (case -1 (eq (sig c1) (const 1'd1))) + (case 1- (eq (sig c2) (const 1'd0))) + (default (eq (sig c3) (const 1'd1))) + ) + ) + """) + + def test_If_If(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + with m.If(self.s2): + m.d.comb += self.c2.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig s1)) + (case 1 (eq (sig c1) (const 1'd1))) + ) + (switch (cat (sig s2)) + (case 1 (eq (sig c2) (const 1'd1))) + ) + ) + """) + + def test_If_nested_If(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + with m.If(self.s2): + m.d.comb += self.c2.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig s1)) + (case 1 (eq (sig c1) (const 1'd1)) + (switch (cat (sig s2)) + (case 1 (eq (sig c2) (const 1'd1))) + ) + ) + ) + ) + """) + + def test_If_dangling_Else(self): + m = Module() + with m.If(self.s1): + m.d.comb += self.c1.eq(1) + with m.If(self.s2): + m.d.comb += self.c2.eq(1) + with m.Else(): + m.d.comb += self.c3.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (sig 
s1)) + (case 1 + (eq (sig c1) (const 1'd1)) + (switch (cat (sig s2)) + (case 1 (eq (sig c2) (const 1'd1))) + ) + ) + (default + (eq (sig c3) (const 1'd1)) + ) + ) + ) + """) + + def test_Elif_wrong(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Elif without preceding If"): + with m.Elif(self.s2): + pass + + def test_Else_wrong(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Else without preceding If/Elif"): + with m.Else(): + pass + + def test_If_wide(self): + m = Module() + with m.If(self.w1): + m.d.comb += self.c1.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (cat (b (sig w1))) + (case 1 (eq (sig c1) (const 1'd1))) + ) + ) + """) + + def test_If_signed_suspicious(self): + m = Module() + with self.assertWarns(SyntaxWarning, + msg="Signed values in If/Elif conditions usually result from inverting Python " + "booleans with ~, which leads to unexpected results: ~True is -2, which is " + "truthful. Replace `~flag` with `not flag`. (If this is a false positive, " + "silence this warning with `m.If(x)` → `m.If(x.bool())`.)"): + with m.If(~True): + pass + + def test_Elif_signed_suspicious(self): + m = Module() + with m.If(0): + pass + with self.assertWarns(SyntaxWarning, + msg="Signed values in If/Elif conditions usually result from inverting Python " + "booleans with ~, which leads to unexpected results: ~True is -2, which is " + "truthful. Replace `~flag` with `not flag`. (If this is a false positive, " + "silence this warning with `m.If(x)` → `m.If(x.bool())`.)"): + with m.Elif(~True): + pass + + def test_if_If_Elif_Else(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="`if m.If(...):` does not work; use `with m.If(...)`"): + if m.If(0): + pass + with m.If(0): + pass + with self.assertRaises(SyntaxError, + msg="`if m.Elif(...):` does not work; use `with m.Elif(...)`"): + if m.Elif(0): + pass + with self.assertRaises(SyntaxError, + msg="`if m.Else(...):` does not work; use `with m.Else(...)`"): + if m.Else(): + pass + + def test_Switch(self): + m = Module() + with m.Switch(self.w1): + with m.Case(3): + m.d.comb += self.c1.eq(1) + with m.Case("11--"): + m.d.comb += self.c2.eq(1) + with m.Case("1 0--"): + m.d.comb += self.c2.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (sig w1) + (case 0011 (eq (sig c1) (const 1'd1))) + (case 11-- (eq (sig c2) (const 1'd1))) + (case 10-- (eq (sig c2) (const 1'd1))) + ) + ) + """) + + def test_Switch_default_Case(self): + m = Module() + with m.Switch(self.w1): + with m.Case(3): + m.d.comb += self.c1.eq(1) + with m.Case(): + m.d.comb += self.c2.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (sig w1) + (case 0011 (eq (sig c1) (const 1'd1))) + (default (eq (sig c2) (const 1'd1))) + ) + ) + """) + + def test_Switch_default_Default(self): + m = Module() + with m.Switch(self.w1): + with m.Case(3): + m.d.comb += self.c1.eq(1) + with m.Default(): + m.d.comb += self.c2.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (sig w1) + (case 0011 (eq (sig c1) (const 1'd1))) + (default (eq (sig c2) (const 1'd1))) + ) + ) + """) + + def test_Switch_const_test(self): + m = Module() + with m.Switch(1): + with m.Case(1): + m.d.comb += self.c1.eq(1) + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (const 1'd1) + (case 1 (eq (sig c1) (const 1'd1))) + ) + ) + """) + + def test_Switch_enum(self): + class Color(Enum): + RED = 1 + BLUE = 2 + m = Module() + se = Signal(Color) + with m.Switch(se): + with m.Case(Color.RED): + 
m.d.comb += self.c1.eq(1) + self.assertRepr(m._statements, """ + ( + (switch (sig se) + (case 01 (eq (sig c1) (const 1'd1))) + ) + ) + """) + + def test_Case_width_wrong(self): + class Color(Enum): + RED = 0b10101010 + m = Module() + with m.Switch(self.w1): + with self.assertRaises(SyntaxError, + msg="Case pattern '--' must have the same width as switch value (which is 4)"): + with m.Case("--"): + pass + with self.assertWarns(SyntaxWarning, + msg="Case pattern '10110' is wider than switch value (which has width 4); " + "comparison will never be true"): + with m.Case(0b10110): + pass + with self.assertWarns(SyntaxWarning, + msg="Case pattern '10101010' (Color.RED) is wider than switch value " + "(which has width 4); comparison will never be true"): + with m.Case(Color.RED): + pass + self.assertRepr(m._statements, """ + ( + (switch (sig w1) ) + ) + """) + + def test_Case_bits_wrong(self): + m = Module() + with m.Switch(self.w1): + with self.assertRaises(SyntaxError, + msg="Case pattern 'abc' must consist of 0, 1, and - (don't care) bits, " + "and may include whitespace"): + with m.Case("abc"): + pass + + def test_Case_pattern_wrong(self): + m = Module() + with m.Switch(self.w1): + with self.assertRaises(SyntaxError, + msg="Case pattern must be an integer, a string, or an enumeration, not 1.0"): + with m.Case(1.0): + pass + + def test_Case_outside_Switch_wrong(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Case is not permitted outside of Switch"): + with m.Case(): + pass + + def test_If_inside_Switch_wrong(self): + m = Module() + with m.Switch(self.s1): + with self.assertRaises(SyntaxError, + msg="If is not permitted directly inside of Switch; " + "it is permitted inside of Switch Case"): + with m.If(self.s2): + pass + + def test_FSM_basic(self): + a = Signal() + b = Signal() + c = Signal() + m = Module() + with m.FSM(): + with m.State("FIRST"): + m.d.comb += a.eq(1) + m.next = "SECOND" + with m.State("SECOND"): + m.d.sync += b.eq(~b) + with m.If(c): + m.next = "FIRST" + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (sig fsm_state) + (case 0 + (eq (sig a) (const 1'd1)) + (eq (sig fsm_state) (const 1'd1)) + ) + (case 1 + (eq (sig b) (~ (sig b))) + (switch (cat (sig c)) + (case 1 + (eq (sig fsm_state) (const 1'd0))) + ) + ) + ) + ) + """) + self.assertEqual({repr(k): v for k, v in m._driving.items()}, { + "(sig a)": None, + "(sig fsm_state)": "sync", + "(sig b)": "sync", + }) + + frag = m.elaborate(platform=None) + fsm = frag.find_generated("fsm") + self.assertIsInstance(fsm.state, Signal) + self.assertEqual(fsm.encoding, OrderedDict({ + "FIRST": 0, + "SECOND": 1, + })) + self.assertEqual(fsm.decoding, OrderedDict({ + 0: "FIRST", + 1: "SECOND" + })) + + def test_FSM_reset(self): + a = Signal() + m = Module() + with m.FSM(reset="SECOND"): + with m.State("FIRST"): + m.d.comb += a.eq(0) + m.next = "SECOND" + with m.State("SECOND"): + m.next = "FIRST" + m._flush() + self.assertRepr(m._statements, """ + ( + (switch (sig fsm_state) + (case 0 + (eq (sig a) (const 1'd0)) + (eq (sig fsm_state) (const 1'd1)) + ) + (case 1 + (eq (sig fsm_state) (const 1'd0)) + ) + ) + ) + """) + + def test_FSM_ongoing(self): + a = Signal() + b = Signal() + m = Module() + with m.FSM() as fsm: + m.d.comb += b.eq(fsm.ongoing("SECOND")) + with m.State("FIRST"): + pass + m.d.comb += a.eq(fsm.ongoing("FIRST")) + with m.State("SECOND"): + pass + m._flush() + self.assertEqual(m._generated["fsm"].state.reset, 1) + self.maxDiff = 10000 + self.assertRepr(m._statements, """ + ( + (eq (sig b) (== 
(sig fsm_state) (const 1'd0))) + (eq (sig a) (== (sig fsm_state) (const 1'd1))) + (switch (sig fsm_state) + (case 1 + ) + (case 0 + ) + ) + ) + """) + + def test_FSM_empty(self): + m = Module() + with m.FSM(): + pass + self.assertRepr(m._statements, """ + () + """) + + def test_FSM_wrong_domain(self): + m = Module() + with self.assertRaises(ValueError, + msg="FSM may not be driven by the 'comb' domain"): + with m.FSM(domain="comb"): + pass + + def test_FSM_wrong_undefined(self): + m = Module() + with self.assertRaises(NameError, + msg="FSM state 'FOO' is referenced but not defined"): + with m.FSM() as fsm: + fsm.ongoing("FOO") + + def test_FSM_wrong_redefined(self): + m = Module() + with m.FSM(): + with m.State("FOO"): + pass + with self.assertRaises(NameError, + msg="FSM state 'FOO' is already defined"): + with m.State("FOO"): + pass + + def test_FSM_wrong_next(self): + m = Module() + with self.assertRaises(SyntaxError, + msg="Only assignment to `m.next` is permitted"): + m.next + with self.assertRaises(SyntaxError, + msg="`m.next = <...>` is only permitted inside an FSM state"): + m.next = "FOO" + with self.assertRaises(SyntaxError, + msg="`m.next = <...>` is only permitted inside an FSM state"): + with m.FSM(): + m.next = "FOO" + + def test_If_inside_FSM_wrong(self): + m = Module() + with m.FSM(): + with m.State("FOO"): + pass + with self.assertRaises(SyntaxError, + msg="If is not permitted directly inside of FSM; " + "it is permitted inside of FSM State"): + with m.If(self.s2): + pass + + def test_auto_pop_ctrl(self): + m = Module() + with m.If(self.w1): + m.d.comb += self.c1.eq(1) + m.d.comb += self.c2.eq(1) + self.assertRepr(m._statements, """ + ( + (switch (cat (b (sig w1))) + (case 1 (eq (sig c1) (const 1'd1))) + ) + (eq (sig c2) (const 1'd1)) + ) + """) + + def test_submodule_anon(self): + m1 = Module() + m2 = Module() + m1.submodules += m2 + self.assertEqual(m1._anon_submodules, [m2]) + self.assertEqual(m1._named_submodules, {}) + + def test_submodule_anon_multi(self): + m1 = Module() + m2 = Module() + m3 = Module() + m1.submodules += m2, m3 + self.assertEqual(m1._anon_submodules, [m2, m3]) + self.assertEqual(m1._named_submodules, {}) + + def test_submodule_named(self): + m1 = Module() + m2 = Module() + m1.submodules.foo = m2 + self.assertEqual(m1._anon_submodules, []) + self.assertEqual(m1._named_submodules, {"foo": m2}) + + def test_submodule_named_index(self): + m1 = Module() + m2 = Module() + m1.submodules["foo"] = m2 + self.assertEqual(m1._anon_submodules, []) + self.assertEqual(m1._named_submodules, {"foo": m2}) + + def test_submodule_wrong(self): + m = Module() + with self.assertRaises(TypeError, + msg="Trying to add 1, which does not implement .elaborate(), as a submodule"): + m.submodules.foo = 1 + with self.assertRaises(TypeError, + msg="Trying to add 1, which does not implement .elaborate(), as a submodule"): + m.submodules += 1 + + def test_submodule_named_conflict(self): + m1 = Module() + m2 = Module() + m1.submodules.foo = m2 + with self.assertRaises(NameError, msg="Submodule named 'foo' already exists"): + m1.submodules.foo = m2 + + def test_submodule_get(self): + m1 = Module() + m2 = Module() + m1.submodules.foo = m2 + m3 = m1.submodules.foo + self.assertEqual(m2, m3) + + def test_submodule_get_index(self): + m1 = Module() + m2 = Module() + m1.submodules["foo"] = m2 + m3 = m1.submodules["foo"] + self.assertEqual(m2, m3) + + def test_submodule_get_unset(self): + m1 = Module() + with self.assertRaises(AttributeError, msg="No submodule named 'foo' exists"): + m2 = 
m1.submodules.foo + with self.assertRaises(AttributeError, msg="No submodule named 'foo' exists"): + m2 = m1.submodules["foo"] + + def test_domain_named_implicit(self): + m = Module() + m.domains += ClockDomain("sync") + self.assertEqual(len(m._domains), 1) + + def test_domain_named_explicit(self): + m = Module() + m.domains.foo = ClockDomain() + self.assertEqual(len(m._domains), 1) + self.assertEqual(m._domains[0].name, "foo") + + def test_domain_add_wrong(self): + m = Module() + with self.assertRaises(TypeError, + msg="Only clock domains may be added to `m.domains`, not 1"): + m.domains.foo = 1 + with self.assertRaises(TypeError, + msg="Only clock domains may be added to `m.domains`, not 1"): + m.domains += 1 + + def test_domain_add_wrong_name(self): + m = Module() + with self.assertRaises(NameError, + msg="Clock domain name 'bar' must match name in `m.domains.foo += ...` syntax"): + m.domains.foo = ClockDomain("bar") + + def test_lower(self): + m1 = Module() + m1.d.comb += self.c1.eq(self.s1) + m2 = Module() + m2.d.comb += self.c2.eq(self.s2) + m2.d.sync += self.c3.eq(self.s3) + m1.submodules.foo = m2 + + f1 = m1.elaborate(platform=None) + self.assertRepr(f1.statements, """ + ( + (eq (sig c1) (sig s1)) + ) + """) + self.assertEqual(f1.drivers, { + None: SignalSet((self.c1,)) + }) + self.assertEqual(len(f1.subfragments), 1) + (f2, f2_name), = f1.subfragments + self.assertEqual(f2_name, "foo") + self.assertRepr(f2.statements, """ + ( + (eq (sig c2) (sig s2)) + (eq (sig c3) (sig s3)) + ) + """) + self.assertEqual(f2.drivers, { + None: SignalSet((self.c2,)), + "sync": SignalSet((self.c3,)) + }) + self.assertEqual(len(f2.subfragments), 0) diff --git a/nmigen/test/test_hdl_ir.py b/nmigen/test/test_hdl_ir.py new file mode 100644 index 0000000..2ea762a --- /dev/null +++ b/nmigen/test/test_hdl_ir.py @@ -0,0 +1,852 @@ +# nmigen: UnusedElaboratable=no + +from collections import OrderedDict + +from ..hdl.ast import * +from ..hdl.cd import * +from ..hdl.ir import * +from ..hdl.mem import * +from .utils import * + + +class BadElaboratable(Elaboratable): + def elaborate(self, platform): + return + + +class FragmentGetTestCase(FHDLTestCase): + def test_get_wrong(self): + with self.assertRaises(AttributeError, + msg="Object None cannot be elaborated"): + Fragment.get(None, platform=None) + + with self.assertWarns(UserWarning, + msg=".elaborate() returned None; missing return statement?"): + with self.assertRaises(AttributeError, + msg="Object None cannot be elaborated"): + Fragment.get(BadElaboratable(), platform=None) + + +class FragmentGeneratedTestCase(FHDLTestCase): + def test_find_subfragment(self): + f1 = Fragment() + f2 = Fragment() + f1.add_subfragment(f2, "f2") + + self.assertEqual(f1.find_subfragment(0), f2) + self.assertEqual(f1.find_subfragment("f2"), f2) + + def test_find_subfragment_wrong(self): + f1 = Fragment() + f2 = Fragment() + f1.add_subfragment(f2, "f2") + + with self.assertRaises(NameError, + msg="No subfragment at index #1"): + f1.find_subfragment(1) + with self.assertRaises(NameError, + msg="No subfragment with name 'fx'"): + f1.find_subfragment("fx") + + def test_find_generated(self): + f1 = Fragment() + f2 = Fragment() + f2.generated["sig"] = sig = Signal() + f1.add_subfragment(f2, "f2") + + self.assertEqual(SignalKey(f1.find_generated("f2", "sig")), + SignalKey(sig)) + + +class FragmentDriversTestCase(FHDLTestCase): + def test_empty(self): + f = Fragment() + self.assertEqual(list(f.iter_comb()), []) + self.assertEqual(list(f.iter_sync()), []) + + +class 
FragmentPortsTestCase(FHDLTestCase): + def setUp(self): + self.s1 = Signal() + self.s2 = Signal() + self.s3 = Signal() + self.c1 = Signal() + self.c2 = Signal() + self.c3 = Signal() + + def test_empty(self): + f = Fragment() + self.assertEqual(list(f.iter_ports()), []) + + f._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([])) + + def test_iter_signals(self): + f = Fragment() + f.add_ports(self.s1, self.s2, dir="io") + self.assertEqual(SignalSet((self.s1, self.s2)), f.iter_signals()) + + def test_self_contained(self): + f = Fragment() + f.add_statements( + self.c1.eq(self.s1), + self.s1.eq(self.c1) + ) + + f._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([])) + + def test_infer_input(self): + f = Fragment() + f.add_statements( + self.c1.eq(self.s1) + ) + + f._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([ + (self.s1, "i") + ])) + + def test_request_output(self): + f = Fragment() + f.add_statements( + self.c1.eq(self.s1) + ) + + f._propagate_ports(ports=(self.c1,), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([ + (self.s1, "i"), + (self.c1, "o") + ])) + + def test_input_in_subfragment(self): + f1 = Fragment() + f1.add_statements( + self.c1.eq(self.s1) + ) + f2 = Fragment() + f2.add_statements( + self.s1.eq(0) + ) + f1.add_subfragment(f2) + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict()) + self.assertEqual(f2.ports, SignalDict([ + (self.s1, "o"), + ])) + + def test_input_only_in_subfragment(self): + f1 = Fragment() + f2 = Fragment() + f2.add_statements( + self.c1.eq(self.s1) + ) + f1.add_subfragment(f2) + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict([ + (self.s1, "i"), + ])) + self.assertEqual(f2.ports, SignalDict([ + (self.s1, "i"), + ])) + + def test_output_from_subfragment(self): + f1 = Fragment() + f1.add_statements( + self.c1.eq(0) + ) + f2 = Fragment() + f2.add_statements( + self.c2.eq(1) + ) + f1.add_subfragment(f2) + + f1._propagate_ports(ports=(self.c2,), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict([ + (self.c2, "o"), + ])) + self.assertEqual(f2.ports, SignalDict([ + (self.c2, "o"), + ])) + + def test_output_from_subfragment_2(self): + f1 = Fragment() + f1.add_statements( + self.c1.eq(self.s1) + ) + f2 = Fragment() + f2.add_statements( + self.c2.eq(self.s1) + ) + f1.add_subfragment(f2) + f3 = Fragment() + f3.add_statements( + self.s1.eq(0) + ) + f2.add_subfragment(f3) + + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f2.ports, SignalDict([ + (self.s1, "o"), + ])) + + def test_input_output_sibling(self): + f1 = Fragment() + f2 = Fragment() + f2.add_statements( + self.c1.eq(self.c2) + ) + f1.add_subfragment(f2) + f3 = Fragment() + f3.add_statements( + self.c2.eq(0) + ) + f3.add_driver(self.c2) + f1.add_subfragment(f3) + + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict()) + + def test_output_input_sibling(self): + f1 = Fragment() + f2 = Fragment() + f2.add_statements( + self.c2.eq(0) + ) + f2.add_driver(self.c2) + f1.add_subfragment(f2) + f3 = Fragment() + f3.add_statements( + self.c1.eq(self.c2) + ) + f1.add_subfragment(f3) + + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict()) + + def test_input_cd(self): + sync = ClockDomain() + f = Fragment() + f.add_statements( + self.c1.eq(self.s1) + ) + 
f.add_domains(sync) + f.add_driver(self.c1, "sync") + + f._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([ + (self.s1, "i"), + (sync.clk, "i"), + (sync.rst, "i"), + ])) + + def test_input_cd_reset_less(self): + sync = ClockDomain(reset_less=True) + f = Fragment() + f.add_statements( + self.c1.eq(self.s1) + ) + f.add_domains(sync) + f.add_driver(self.c1, "sync") + + f._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f.ports, SignalDict([ + (self.s1, "i"), + (sync.clk, "i"), + ])) + + def test_inout(self): + s = Signal() + f1 = Fragment() + f2 = Instance("foo", io_x=s) + f1.add_subfragment(f2) + + f1._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict([ + (s, "io") + ])) + + def test_in_out_same_signal(self): + s = Signal() + + f1 = Instance("foo", i_x=s, o_y=s) + f2 = Fragment() + f2.add_subfragment(f1) + + f2._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f1.ports, SignalDict([ + (s, "o") + ])) + + f3 = Instance("foo", o_y=s, i_x=s) + f4 = Fragment() + f4.add_subfragment(f3) + + f4._propagate_ports(ports=(), all_undef_as_ports=True) + self.assertEqual(f3.ports, SignalDict([ + (s, "o") + ])) + + def test_clk_rst(self): + sync = ClockDomain() + f = Fragment() + f.add_domains(sync) + + f = f.prepare(ports=(ClockSignal("sync"), ResetSignal("sync"))) + self.assertEqual(f.ports, SignalDict([ + (sync.clk, "i"), + (sync.rst, "i"), + ])) + + def test_port_wrong(self): + f = Fragment() + with self.assertRaises(TypeError, + msg="Only signals may be added as ports, not (const 1'd1)"): + f.prepare(ports=(Const(1),)) + +class FragmentDomainsTestCase(FHDLTestCase): + def test_iter_signals(self): + cd1 = ClockDomain() + cd2 = ClockDomain(reset_less=True) + s1 = Signal() + s2 = Signal() + + f = Fragment() + f.add_domains(cd1, cd2) + f.add_driver(s1, "cd1") + self.assertEqual(SignalSet((cd1.clk, cd1.rst, s1)), f.iter_signals()) + f.add_driver(s2, "cd2") + self.assertEqual(SignalSet((cd1.clk, cd1.rst, cd2.clk, s1, s2)), f.iter_signals()) + + def test_propagate_up(self): + cd = ClockDomain() + + f1 = Fragment() + f2 = Fragment() + f1.add_subfragment(f2) + f2.add_domains(cd) + + f1._propagate_domains_up() + self.assertEqual(f1.domains, {"cd": cd}) + + def test_propagate_up_local(self): + cd = ClockDomain(local=True) + + f1 = Fragment() + f2 = Fragment() + f1.add_subfragment(f2) + f2.add_domains(cd) + + f1._propagate_domains_up() + self.assertEqual(f1.domains, {}) + + def test_domain_conflict(self): + cda = ClockDomain("sync") + cdb = ClockDomain("sync") + + fa = Fragment() + fa.add_domains(cda) + fb = Fragment() + fb.add_domains(cdb) + f = Fragment() + f.add_subfragment(fa, "a") + f.add_subfragment(fb, "b") + + f._propagate_domains_up() + self.assertEqual(f.domains, {"a_sync": cda, "b_sync": cdb}) + (fa, _), (fb, _) = f.subfragments + self.assertEqual(fa.domains, {"a_sync": cda}) + self.assertEqual(fb.domains, {"b_sync": cdb}) + + def test_domain_conflict_anon(self): + cda = ClockDomain("sync") + cdb = ClockDomain("sync") + + fa = Fragment() + fa.add_domains(cda) + fb = Fragment() + fb.add_domains(cdb) + f = Fragment() + f.add_subfragment(fa, "a") + f.add_subfragment(fb) + + with self.assertRaises(DomainError, + msg="Domain 'sync' is defined by subfragments 'a', of fragment " + "'top'; it is necessary to either rename subfragment domains explicitly, " + "or give names to subfragments"): + f._propagate_domains_up() + + def test_domain_conflict_name(self): + cda = ClockDomain("sync") 
+ cdb = ClockDomain("sync") + + fa = Fragment() + fa.add_domains(cda) + fb = Fragment() + fb.add_domains(cdb) + f = Fragment() + f.add_subfragment(fa, "x") + f.add_subfragment(fb, "x") + + with self.assertRaises(DomainError, + msg="Domain 'sync' is defined by subfragments #0, #1 of fragment 'top', some " + "of which have identical names; it is necessary to either rename subfragment " + "domains explicitly, or give distinct names to subfragments"): + f._propagate_domains_up() + + def test_domain_conflict_rename_drivers(self): + cda = ClockDomain("sync") + cdb = ClockDomain("sync") + + fa = Fragment() + fa.add_domains(cda) + fb = Fragment() + fb.add_domains(cdb) + fb.add_driver(ResetSignal("sync"), None) + f = Fragment() + f.add_subfragment(fa, "a") + f.add_subfragment(fb, "b") + + f._propagate_domains_up() + fb_new, _ = f.subfragments[1] + self.assertEqual(fb_new.drivers, OrderedDict({ + None: SignalSet((ResetSignal("b_sync"),)) + })) + + def test_domain_conflict_rename_drivers(self): + cda = ClockDomain("sync") + cdb = ClockDomain("sync") + s = Signal() + + fa = Fragment() + fa.add_domains(cda) + fb = Fragment() + fb.add_domains(cdb) + f = Fragment() + f.add_subfragment(fa, "a") + f.add_subfragment(fb, "b") + f.add_driver(s, "b_sync") + + f._propagate_domains(lambda name: ClockDomain(name)) + + def test_propagate_down(self): + cd = ClockDomain() + + f1 = Fragment() + f2 = Fragment() + f1.add_domains(cd) + f1.add_subfragment(f2) + + f1._propagate_domains_down() + self.assertEqual(f2.domains, {"cd": cd}) + + def test_propagate_down_idempotent(self): + cd = ClockDomain() + + f1 = Fragment() + f1.add_domains(cd) + f2 = Fragment() + f2.add_domains(cd) + f1.add_subfragment(f2) + + f1._propagate_domains_down() + self.assertEqual(f1.domains, {"cd": cd}) + self.assertEqual(f2.domains, {"cd": cd}) + + def test_propagate(self): + cd = ClockDomain() + + f1 = Fragment() + f2 = Fragment() + f1.add_domains(cd) + f1.add_subfragment(f2) + + new_domains = f1._propagate_domains(missing_domain=lambda name: None) + self.assertEqual(f1.domains, {"cd": cd}) + self.assertEqual(f2.domains, {"cd": cd}) + self.assertEqual(new_domains, []) + + def test_propagate_missing(self): + s1 = Signal() + f1 = Fragment() + f1.add_driver(s1, "sync") + + with self.assertRaises(DomainError, + msg="Domain 'sync' is used but not defined"): + f1._propagate_domains(missing_domain=lambda name: None) + + def test_propagate_create_missing(self): + s1 = Signal() + f1 = Fragment() + f1.add_driver(s1, "sync") + f2 = Fragment() + f1.add_subfragment(f2) + + new_domains = f1._propagate_domains(missing_domain=lambda name: ClockDomain(name)) + self.assertEqual(f1.domains.keys(), {"sync"}) + self.assertEqual(f2.domains.keys(), {"sync"}) + self.assertEqual(f1.domains["sync"], f2.domains["sync"]) + self.assertEqual(new_domains, [f1.domains["sync"]]) + + def test_propagate_create_missing_fragment(self): + s1 = Signal() + f1 = Fragment() + f1.add_driver(s1, "sync") + + cd = ClockDomain("sync") + f2 = Fragment() + f2.add_domains(cd) + + new_domains = f1._propagate_domains(missing_domain=lambda name: f2) + self.assertEqual(f1.domains.keys(), {"sync"}) + self.assertEqual(f1.domains["sync"], f2.domains["sync"]) + self.assertEqual(new_domains, []) + self.assertEqual(f1.subfragments, [ + (f2, "cd_sync") + ]) + + def test_propagate_create_missing_fragment_many_domains(self): + s1 = Signal() + f1 = Fragment() + f1.add_driver(s1, "sync") + + cd_por = ClockDomain("por") + cd_sync = ClockDomain("sync") + f2 = Fragment() + f2.add_domains(cd_por, cd_sync) + + 
new_domains = f1._propagate_domains(missing_domain=lambda name: f2) + self.assertEqual(f1.domains.keys(), {"sync", "por"}) + self.assertEqual(f2.domains.keys(), {"sync", "por"}) + self.assertEqual(f1.domains["sync"], f2.domains["sync"]) + self.assertEqual(new_domains, []) + self.assertEqual(f1.subfragments, [ + (f2, "cd_sync") + ]) + + def test_propagate_create_missing_fragment_wrong(self): + s1 = Signal() + f1 = Fragment() + f1.add_driver(s1, "sync") + + f2 = Fragment() + f2.add_domains(ClockDomain("foo")) + + with self.assertRaises(DomainError, + msg="Fragment returned by missing domain callback does not define requested " + "domain 'sync' (defines 'foo')."): + f1._propagate_domains(missing_domain=lambda name: f2) + + +class FragmentHierarchyConflictTestCase(FHDLTestCase): + def setUp_self_sub(self): + self.s1 = Signal() + self.c1 = Signal() + self.c2 = Signal() + + self.f1 = Fragment() + self.f1.add_statements(self.c1.eq(0)) + self.f1.add_driver(self.s1) + self.f1.add_driver(self.c1, "sync") + + self.f1a = Fragment() + self.f1.add_subfragment(self.f1a, "f1a") + + self.f2 = Fragment() + self.f2.add_statements(self.c2.eq(1)) + self.f2.add_driver(self.s1) + self.f2.add_driver(self.c2, "sync") + self.f1.add_subfragment(self.f2) + + self.f1b = Fragment() + self.f1.add_subfragment(self.f1b, "f1b") + + self.f2a = Fragment() + self.f2.add_subfragment(self.f2a, "f2a") + + def test_conflict_self_sub(self): + self.setUp_self_sub() + + self.f1._resolve_hierarchy_conflicts(mode="silent") + self.assertEqual(self.f1.subfragments, [ + (self.f1a, "f1a"), + (self.f1b, "f1b"), + (self.f2a, "f2a"), + ]) + self.assertRepr(self.f1.statements, """ + ( + (eq (sig c1) (const 1'd0)) + (eq (sig c2) (const 1'd1)) + ) + """) + self.assertEqual(self.f1.drivers, { + None: SignalSet((self.s1,)), + "sync": SignalSet((self.c1, self.c2)), + }) + + def test_conflict_self_sub_error(self): + self.setUp_self_sub() + + with self.assertRaises(DriverConflict, + msg="Signal '(sig s1)' is driven from multiple fragments: top, top."): + self.f1._resolve_hierarchy_conflicts(mode="error") + + def test_conflict_self_sub_warning(self): + self.setUp_self_sub() + + with self.assertWarns(DriverConflict, + msg="Signal '(sig s1)' is driven from multiple fragments: top, top.; " + "hierarchy will be flattened"): + self.f1._resolve_hierarchy_conflicts(mode="warn") + + def setUp_sub_sub(self): + self.s1 = Signal() + self.c1 = Signal() + self.c2 = Signal() + + self.f1 = Fragment() + + self.f2 = Fragment() + self.f2.add_driver(self.s1) + self.f2.add_statements(self.c1.eq(0)) + self.f1.add_subfragment(self.f2) + + self.f3 = Fragment() + self.f3.add_driver(self.s1) + self.f3.add_statements(self.c2.eq(1)) + self.f1.add_subfragment(self.f3) + + def test_conflict_sub_sub(self): + self.setUp_sub_sub() + + self.f1._resolve_hierarchy_conflicts(mode="silent") + self.assertEqual(self.f1.subfragments, []) + self.assertRepr(self.f1.statements, """ + ( + (eq (sig c1) (const 1'd0)) + (eq (sig c2) (const 1'd1)) + ) + """) + + def setUp_self_subsub(self): + self.s1 = Signal() + self.c1 = Signal() + self.c2 = Signal() + + self.f1 = Fragment() + self.f1.add_driver(self.s1) + + self.f2 = Fragment() + self.f2.add_statements(self.c1.eq(0)) + self.f1.add_subfragment(self.f2) + + self.f3 = Fragment() + self.f3.add_driver(self.s1) + self.f3.add_statements(self.c2.eq(1)) + self.f2.add_subfragment(self.f3) + + def test_conflict_self_subsub(self): + self.setUp_self_subsub() + + self.f1._resolve_hierarchy_conflicts(mode="silent") + self.assertEqual(self.f1.subfragments, []) 
+ self.assertRepr(self.f1.statements, """ + ( + (eq (sig c1) (const 1'd0)) + (eq (sig c2) (const 1'd1)) + ) + """) + + def setUp_memory(self): + self.m = Memory(width=8, depth=4) + self.fr = self.m.read_port().elaborate(platform=None) + self.fw = self.m.write_port().elaborate(platform=None) + self.f1 = Fragment() + self.f2 = Fragment() + self.f2.add_subfragment(self.fr) + self.f1.add_subfragment(self.f2) + self.f3 = Fragment() + self.f3.add_subfragment(self.fw) + self.f1.add_subfragment(self.f3) + + def test_conflict_memory(self): + self.setUp_memory() + + self.f1._resolve_hierarchy_conflicts(mode="silent") + self.assertEqual(self.f1.subfragments, [ + (self.fr, None), + (self.fw, None), + ]) + + def test_conflict_memory_error(self): + self.setUp_memory() + + with self.assertRaises(DriverConflict, + msg="Memory 'm' is accessed from multiple fragments: top., " + "top."): + self.f1._resolve_hierarchy_conflicts(mode="error") + + def test_conflict_memory_warning(self): + self.setUp_memory() + + with self.assertWarns(DriverConflict, + msg="Memory 'm' is accessed from multiple fragments: top., " + "top.; hierarchy will be flattened"): + self.f1._resolve_hierarchy_conflicts(mode="warn") + + def test_explicit_flatten(self): + self.f1 = Fragment() + self.f2 = Fragment() + self.f2.flatten = True + self.f1.add_subfragment(self.f2) + + self.f1._resolve_hierarchy_conflicts(mode="silent") + self.assertEqual(self.f1.subfragments, []) + + def test_no_conflict_local_domains(self): + f1 = Fragment() + cd1 = ClockDomain("d", local=True) + f1.add_domains(cd1) + f1.add_driver(ClockSignal("d")) + f2 = Fragment() + cd2 = ClockDomain("d", local=True) + f2.add_domains(cd2) + f2.add_driver(ClockSignal("d")) + f3 = Fragment() + f3.add_subfragment(f1) + f3.add_subfragment(f2) + f3.prepare() + + +class InstanceTestCase(FHDLTestCase): + def test_construct(self): + s1 = Signal() + s2 = Signal() + s3 = Signal() + s4 = Signal() + s5 = Signal() + s6 = Signal() + inst = Instance("foo", + ("a", "ATTR1", 1), + ("p", "PARAM1", 0x1234), + ("i", "s1", s1), + ("o", "s2", s2), + ("io", "s3", s3), + a_ATTR2=2, + p_PARAM2=0x5678, + i_s4=s4, + o_s5=s5, + io_s6=s6, + ) + self.assertEqual(inst.attrs, OrderedDict([ + ("ATTR1", 1), + ("ATTR2", 2), + ])) + self.assertEqual(inst.parameters, OrderedDict([ + ("PARAM1", 0x1234), + ("PARAM2", 0x5678), + ])) + self.assertEqual(inst.named_ports, OrderedDict([ + ("s1", (s1, "i")), + ("s2", (s2, "o")), + ("s3", (s3, "io")), + ("s4", (s4, "i")), + ("s5", (s5, "o")), + ("s6", (s6, "io")), + ])) + + def test_cast_ports(self): + inst = Instance("foo", + ("i", "s1", 1), + ("o", "s2", 2), + ("io", "s3", 3), + i_s4=4, + o_s5=5, + io_s6=6, + ) + self.assertRepr(inst.named_ports["s1"][0], "(const 1'd1)") + self.assertRepr(inst.named_ports["s2"][0], "(const 2'd2)") + self.assertRepr(inst.named_ports["s3"][0], "(const 2'd3)") + self.assertRepr(inst.named_ports["s4"][0], "(const 3'd4)") + self.assertRepr(inst.named_ports["s5"][0], "(const 3'd5)") + self.assertRepr(inst.named_ports["s6"][0], "(const 3'd6)") + + def test_wrong_construct_arg(self): + s = Signal() + with self.assertRaises(NameError, + msg="Instance argument ('', 's1', (sig s)) should be a tuple " + "(kind, name, value) where kind is one of \"p\", \"i\", \"o\", or \"io\""): + Instance("foo", ("", "s1", s)) + + def test_wrong_construct_kwarg(self): + s = Signal() + with self.assertRaises(NameError, + msg="Instance keyword argument x_s1=(sig s) does not start with one of " + "\"p_\", \"i_\", \"o_\", or \"io_\""): + Instance("foo", x_s1=s) + + def 
setUp_cpu(self): + self.rst = Signal() + self.stb = Signal() + self.pins = Signal(8) + self.datal = Signal(4) + self.datah = Signal(4) + self.inst = Instance("cpu", + p_RESET=0x1234, + i_clk=ClockSignal(), + i_rst=self.rst, + o_stb=self.stb, + o_data=Cat(self.datal, self.datah), + io_pins=self.pins[:] + ) + self.wrap = Fragment() + self.wrap.add_subfragment(self.inst) + + def test_init(self): + self.setUp_cpu() + f = self.inst + self.assertEqual(f.type, "cpu") + self.assertEqual(f.parameters, OrderedDict([("RESET", 0x1234)])) + self.assertEqual(list(f.named_ports.keys()), ["clk", "rst", "stb", "data", "pins"]) + self.assertEqual(f.ports, SignalDict([])) + + def test_prepare(self): + self.setUp_cpu() + f = self.wrap.prepare() + sync_clk = f.domains["sync"].clk + self.assertEqual(f.ports, SignalDict([ + (sync_clk, "i"), + (self.rst, "i"), + (self.pins, "io"), + ])) + + def test_prepare_explicit_ports(self): + self.setUp_cpu() + f = self.wrap.prepare(ports=[self.rst, self.stb]) + sync_clk = f.domains["sync"].clk + sync_rst = f.domains["sync"].rst + self.assertEqual(f.ports, SignalDict([ + (sync_clk, "i"), + (sync_rst, "i"), + (self.rst, "i"), + (self.stb, "o"), + (self.pins, "io"), + ])) + + def test_prepare_slice_in_port(self): + s = Signal(2) + f = Fragment() + f.add_subfragment(Instance("foo", o_O=s[0])) + f.add_subfragment(Instance("foo", o_O=s[1])) + fp = f.prepare(ports=[s], missing_domain=lambda name: None) + self.assertEqual(fp.ports, SignalDict([ + (s, "o"), + ])) + + def test_prepare_attrs(self): + self.setUp_cpu() + self.inst.attrs["ATTR"] = 1 + f = self.inst.prepare() + self.assertEqual(f.attrs, OrderedDict([ + ("ATTR", 1), + ])) diff --git a/nmigen/test/test_hdl_mem.py b/nmigen/test/test_hdl_mem.py new file mode 100644 index 0000000..131c639 --- /dev/null +++ b/nmigen/test/test_hdl_mem.py @@ -0,0 +1,137 @@ +# nmigen: UnusedElaboratable=no + +from ..hdl.ast import * +from ..hdl.mem import * +from .utils import * + + +class MemoryTestCase(FHDLTestCase): + def test_name(self): + m1 = Memory(width=8, depth=4) + self.assertEqual(m1.name, "m1") + m2 = [Memory(width=8, depth=4)][0] + self.assertEqual(m2.name, "$memory") + m3 = Memory(width=8, depth=4, name="foo") + self.assertEqual(m3.name, "foo") + + def test_geometry(self): + m = Memory(width=8, depth=4) + self.assertEqual(m.width, 8) + self.assertEqual(m.depth, 4) + + def test_geometry_wrong(self): + with self.assertRaises(TypeError, + msg="Memory width must be a non-negative integer, not -1"): + m = Memory(width=-1, depth=4) + with self.assertRaises(TypeError, + msg="Memory depth must be a non-negative integer, not -1"): + m = Memory(width=8, depth=-1) + + def test_init(self): + m = Memory(width=8, depth=4, init=range(4)) + self.assertEqual(m.init, [0, 1, 2, 3]) + + def test_init_wrong_count(self): + with self.assertRaises(ValueError, + msg="Memory initialization value count exceed memory depth (8 > 4)"): + m = Memory(width=8, depth=4, init=range(8)) + + def test_init_wrong_type(self): + with self.assertRaises(TypeError, + msg="Memory initialization value at address 1: " + "'str' object cannot be interpreted as an integer"): + m = Memory(width=8, depth=4, init=[1, "0"]) + + def test_attrs(self): + m1 = Memory(width=8, depth=4) + self.assertEqual(m1.attrs, {}) + m2 = Memory(width=8, depth=4, attrs={"ram_block": True}) + self.assertEqual(m2.attrs, {"ram_block": True}) + + def test_read_port_transparent(self): + mem = Memory(width=8, depth=4) + rdport = mem.read_port() + self.assertEqual(rdport.memory, mem) + 
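+        # Default read-port behaviour documented by the assertions below: synchronous to
+        # the "sync" domain, transparent, and always enabled (constant-1 enable).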
self.assertEqual(rdport.domain, "sync") + self.assertEqual(rdport.transparent, True) + self.assertEqual(len(rdport.addr), 2) + self.assertEqual(len(rdport.data), 8) + self.assertEqual(len(rdport.en), 1) + self.assertIsInstance(rdport.en, Const) + self.assertEqual(rdport.en.value, 1) + + def test_read_port_non_transparent(self): + mem = Memory(width=8, depth=4) + rdport = mem.read_port(transparent=False) + self.assertEqual(rdport.memory, mem) + self.assertEqual(rdport.domain, "sync") + self.assertEqual(rdport.transparent, False) + self.assertEqual(len(rdport.en), 1) + self.assertIsInstance(rdport.en, Signal) + self.assertEqual(rdport.en.reset, 1) + + def test_read_port_asynchronous(self): + mem = Memory(width=8, depth=4) + rdport = mem.read_port(domain="comb") + self.assertEqual(rdport.memory, mem) + self.assertEqual(rdport.domain, "comb") + self.assertEqual(rdport.transparent, True) + self.assertEqual(len(rdport.en), 1) + self.assertIsInstance(rdport.en, Const) + self.assertEqual(rdport.en.value, 1) + + def test_read_port_wrong(self): + mem = Memory(width=8, depth=4) + with self.assertRaises(ValueError, + msg="Read port cannot be simultaneously asynchronous and non-transparent"): + mem.read_port(domain="comb", transparent=False) + + def test_write_port(self): + mem = Memory(width=8, depth=4) + wrport = mem.write_port() + self.assertEqual(wrport.memory, mem) + self.assertEqual(wrport.domain, "sync") + self.assertEqual(wrport.granularity, 8) + self.assertEqual(len(wrport.addr), 2) + self.assertEqual(len(wrport.data), 8) + self.assertEqual(len(wrport.en), 1) + + def test_write_port_granularity(self): + mem = Memory(width=8, depth=4) + wrport = mem.write_port(granularity=2) + self.assertEqual(wrport.memory, mem) + self.assertEqual(wrport.domain, "sync") + self.assertEqual(wrport.granularity, 2) + self.assertEqual(len(wrport.addr), 2) + self.assertEqual(len(wrport.data), 8) + self.assertEqual(len(wrport.en), 4) + + def test_write_port_granularity_wrong(self): + mem = Memory(width=8, depth=4) + with self.assertRaises(TypeError, + msg="Write port granularity must be a non-negative integer, not -1"): + mem.write_port(granularity=-1) + with self.assertRaises(ValueError, + msg="Write port granularity must not be greater than memory width (10 > 8)"): + mem.write_port(granularity=10) + with self.assertRaises(ValueError, + msg="Write port granularity must divide memory width evenly"): + mem.write_port(granularity=3) + + +class DummyPortTestCase(FHDLTestCase): + def test_name(self): + p1 = DummyPort(data_width=8, addr_width=2) + self.assertEqual(p1.addr.name, "p1_addr") + p2 = [DummyPort(data_width=8, addr_width=2)][0] + self.assertEqual(p2.addr.name, "dummy_addr") + p3 = DummyPort(data_width=8, addr_width=2, name="foo") + self.assertEqual(p3.addr.name, "foo_addr") + + def test_sizes(self): + p1 = DummyPort(data_width=8, addr_width=2) + self.assertEqual(p1.addr.width, 2) + self.assertEqual(p1.data.width, 8) + self.assertEqual(p1.en.width, 1) + p2 = DummyPort(data_width=8, addr_width=2, granularity=2) + self.assertEqual(p2.en.width, 4) diff --git a/nmigen/test/test_hdl_rec.py b/nmigen/test/test_hdl_rec.py new file mode 100644 index 0000000..dd9dd40 --- /dev/null +++ b/nmigen/test/test_hdl_rec.py @@ -0,0 +1,313 @@ +from enum import Enum + +from ..hdl.ast import * +from ..hdl.rec import * +from .utils import * + + +class UnsignedEnum(Enum): + FOO = 1 + BAR = 2 + BAZ = 3 + + +class LayoutTestCase(FHDLTestCase): + def test_fields(self): + layout = Layout.cast([ + ("cyc", 1), + ("data", signed(32)), + ("stb", 
1, DIR_FANOUT), + ("ack", 1, DIR_FANIN), + ("info", [ + ("a", 1), + ("b", 1), + ]) + ]) + + self.assertEqual(layout["cyc"], ((1, False), DIR_NONE)) + self.assertEqual(layout["data"], ((32, True), DIR_NONE)) + self.assertEqual(layout["stb"], ((1, False), DIR_FANOUT)) + self.assertEqual(layout["ack"], ((1, False), DIR_FANIN)) + sublayout = layout["info"][0] + self.assertEqual(layout["info"][1], DIR_NONE) + self.assertEqual(sublayout["a"], ((1, False), DIR_NONE)) + self.assertEqual(sublayout["b"], ((1, False), DIR_NONE)) + + def test_enum_field(self): + layout = Layout.cast([ + ("enum", UnsignedEnum), + ("enum_dir", UnsignedEnum, DIR_FANOUT), + ]) + self.assertEqual(layout["enum"], ((2, False), DIR_NONE)) + self.assertEqual(layout["enum_dir"], ((2, False), DIR_FANOUT)) + + def test_range_field(self): + layout = Layout.cast([ + ("range", range(0, 7)), + ]) + self.assertEqual(layout["range"], ((3, False), DIR_NONE)) + + def test_slice_tuple(self): + layout = Layout.cast([ + ("a", 1), + ("b", 2), + ("c", 3) + ]) + expect = Layout.cast([ + ("a", 1), + ("c", 3) + ]) + self.assertEqual(layout["a", "c"], expect) + + def test_wrong_field(self): + with self.assertRaises(TypeError, + msg="Field (1,) has invalid layout: should be either (name, shape) or " + "(name, shape, direction)"): + Layout.cast([(1,)]) + + def test_wrong_name(self): + with self.assertRaises(TypeError, + msg="Field (1, 1) has invalid name: should be a string"): + Layout.cast([(1, 1)]) + + def test_wrong_name_duplicate(self): + with self.assertRaises(NameError, + msg="Field ('a', 2) has a name that is already present in the layout"): + Layout.cast([("a", 1), ("a", 2)]) + + def test_wrong_direction(self): + with self.assertRaises(TypeError, + msg="Field ('a', 1, 0) has invalid direction: should be a Direction " + "instance like DIR_FANIN"): + Layout.cast([("a", 1, 0)]) + + def test_wrong_shape(self): + with self.assertRaises(TypeError, + msg="Field ('a', 'x') has invalid shape: should be castable to Shape or " + "a list of fields of a nested record"): + Layout.cast([("a", "x")]) + + +class RecordTestCase(FHDLTestCase): + def test_basic(self): + r = Record([ + ("stb", 1), + ("data", 32), + ("info", [ + ("a", 1), + ("b", 1), + ]) + ]) + + self.assertEqual(repr(r), "(rec r stb data (rec r__info a b))") + self.assertEqual(len(r), 35) + self.assertIsInstance(r.stb, Signal) + self.assertEqual(r.stb.name, "r__stb") + self.assertEqual(r["stb"].name, "r__stb") + + self.assertTrue(hasattr(r, "stb")) + self.assertFalse(hasattr(r, "xxx")) + + def test_unnamed(self): + r = [Record([ + ("stb", 1) + ])][0] + + self.assertEqual(repr(r), "(rec stb)") + self.assertEqual(r.stb.name, "stb") + + def test_iter(self): + r = Record([ + ("data", 4), + ("stb", 1), + ]) + + self.assertEqual(repr(r[0]), "(slice (rec r data stb) 0:1)") + self.assertEqual(repr(r[0:3]), "(slice (rec r data stb) 0:3)") + + def test_wrong_field(self): + r = Record([ + ("stb", 1), + ("ack", 1), + ]) + with self.assertRaises(AttributeError, + msg="Record 'r' does not have a field 'en'. Did you mean one of: stb, ack?"): + r["en"] + with self.assertRaises(AttributeError, + msg="Record 'r' does not have a field 'en'. Did you mean one of: stb, ack?"): + r.en + + def test_wrong_field_unnamed(self): + r = [Record([ + ("stb", 1), + ("ack", 1), + ])][0] + with self.assertRaises(AttributeError, + msg="Unnamed record does not have a field 'en'. 
Did you mean one of: stb, ack?"): + r.en + + def test_construct_with_fields(self): + ns = Signal(1) + nr = Record([ + ("burst", 1) + ]) + r = Record([ + ("stb", 1), + ("info", [ + ("burst", 1) + ]) + ], fields={ + "stb": ns, + "info": nr + }) + self.assertIs(r.stb, ns) + self.assertIs(r.info, nr) + + def test_like(self): + r1 = Record([("a", 1), ("b", 2)]) + r2 = Record.like(r1) + self.assertEqual(r1.layout, r2.layout) + self.assertEqual(r2.name, "r2") + r3 = Record.like(r1, name="foo") + self.assertEqual(r3.name, "foo") + r4 = Record.like(r1, name_suffix="foo") + self.assertEqual(r4.name, "r1foo") + + def test_like_modifications(self): + r1 = Record([("a", 1), ("b", [("s", 1)])]) + self.assertEqual(r1.a.name, "r1__a") + self.assertEqual(r1.b.name, "r1__b") + self.assertEqual(r1.b.s.name, "r1__b__s") + r1.a.reset = 1 + r1.b.s.reset = 1 + r2 = Record.like(r1) + self.assertEqual(r2.a.reset, 1) + self.assertEqual(r2.b.s.reset, 1) + self.assertEqual(r2.a.name, "r2__a") + self.assertEqual(r2.b.name, "r2__b") + self.assertEqual(r2.b.s.name, "r2__b__s") + + def test_slice_tuple(self): + r1 = Record([("a", 1), ("b", 2), ("c", 3)]) + r2 = r1["a", "c"] + self.assertEqual(r2.layout, Layout([("a", 1), ("c", 3)])) + self.assertIs(r2.a, r1.a) + self.assertIs(r2.c, r1.c) + + +class ConnectTestCase(FHDLTestCase): + def setUp_flat(self): + self.core_layout = [ + ("addr", 32, DIR_FANOUT), + ("data_r", 32, DIR_FANIN), + ("data_w", 32, DIR_FANIN), + ] + self.periph_layout = [ + ("addr", 32, DIR_FANOUT), + ("data_r", 32, DIR_FANIN), + ("data_w", 32, DIR_FANIN), + ] + + def setUp_nested(self): + self.core_layout = [ + ("addr", 32, DIR_FANOUT), + ("data", [ + ("r", 32, DIR_FANIN), + ("w", 32, DIR_FANIN), + ]), + ] + self.periph_layout = [ + ("addr", 32, DIR_FANOUT), + ("data", [ + ("r", 32, DIR_FANIN), + ("w", 32, DIR_FANIN), + ]), + ] + + def test_flat(self): + self.setUp_flat() + + core = Record(self.core_layout) + periph1 = Record(self.periph_layout) + periph2 = Record(self.periph_layout) + + stmts = core.connect(periph1, periph2) + self.assertRepr(stmts, """( + (eq (sig periph1__addr) (sig core__addr)) + (eq (sig periph2__addr) (sig core__addr)) + (eq (sig core__data_r) (| (sig periph1__data_r) (sig periph2__data_r))) + (eq (sig core__data_w) (| (sig periph1__data_w) (sig periph2__data_w))) + )""") + + def test_flat_include(self): + self.setUp_flat() + + core = Record(self.core_layout) + periph1 = Record(self.periph_layout) + periph2 = Record(self.periph_layout) + + stmts = core.connect(periph1, periph2, include={"addr": True}) + self.assertRepr(stmts, """( + (eq (sig periph1__addr) (sig core__addr)) + (eq (sig periph2__addr) (sig core__addr)) + )""") + + def test_flat_exclude(self): + self.setUp_flat() + + core = Record(self.core_layout) + periph1 = Record(self.periph_layout) + periph2 = Record(self.periph_layout) + + stmts = core.connect(periph1, periph2, exclude={"addr": True}) + self.assertRepr(stmts, """( + (eq (sig core__data_r) (| (sig periph1__data_r) (sig periph2__data_r))) + (eq (sig core__data_w) (| (sig periph1__data_w) (sig periph2__data_w))) + )""") + + def test_nested(self): + self.setUp_nested() + + core = Record(self.core_layout) + periph1 = Record(self.periph_layout) + periph2 = Record(self.periph_layout) + + stmts = core.connect(periph1, periph2) + self.maxDiff = None + self.assertRepr(stmts, """( + (eq (sig periph1__addr) (sig core__addr)) + (eq (sig periph2__addr) (sig core__addr)) + (eq (sig core__data__r) (| (sig periph1__data__r) (sig periph2__data__r))) + (eq (sig core__data__w) (| 
(sig periph1__data__w) (sig periph2__data__w))) + )""") + + def test_wrong_include_exclude(self): + self.setUp_flat() + + core = Record(self.core_layout) + periph = Record(self.periph_layout) + + with self.assertRaises(AttributeError, + msg="Cannot include field 'foo' because it is not present in record 'core'"): + core.connect(periph, include={"foo": True}) + + with self.assertRaises(AttributeError, + msg="Cannot exclude field 'foo' because it is not present in record 'core'"): + core.connect(periph, exclude={"foo": True}) + + def test_wrong_direction(self): + recs = [Record([("x", 1)]) for _ in range(2)] + + with self.assertRaises(TypeError, + msg="Cannot connect field 'x' of unnamed record because it does not have " + "a direction"): + recs[0].connect(recs[1]) + + def test_wrong_missing_field(self): + core = Record([("addr", 32, DIR_FANOUT)]) + periph = Record([]) + + with self.assertRaises(AttributeError, + msg="Cannot connect field 'addr' of record 'core' to subordinate record 'periph' " + "because the subordinate record does not have this field"): + core.connect(periph) diff --git a/nmigen/test/test_hdl_xfrm.py b/nmigen/test/test_hdl_xfrm.py new file mode 100644 index 0000000..e5f1745 --- /dev/null +++ b/nmigen/test/test_hdl_xfrm.py @@ -0,0 +1,622 @@ +# nmigen: UnusedElaboratable=no + +from ..hdl.ast import * +from ..hdl.cd import * +from ..hdl.ir import * +from ..hdl.xfrm import * +from ..hdl.mem import * +from .utils import * + + +class DomainRenamerTestCase(FHDLTestCase): + def setUp(self): + self.s1 = Signal() + self.s2 = Signal() + self.s3 = Signal() + self.s4 = Signal() + self.s5 = Signal() + self.c1 = Signal() + + def test_rename_signals(self): + f = Fragment() + f.add_statements( + self.s1.eq(ClockSignal()), + ResetSignal().eq(self.s2), + self.s3.eq(0), + self.s4.eq(ClockSignal("other")), + self.s5.eq(ResetSignal("other")), + ) + f.add_driver(self.s1, None) + f.add_driver(self.s2, None) + f.add_driver(self.s3, "sync") + + f = DomainRenamer("pix")(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (clk pix)) + (eq (rst pix) (sig s2)) + (eq (sig s3) (const 1'd0)) + (eq (sig s4) (clk other)) + (eq (sig s5) (rst other)) + ) + """) + self.assertEqual(f.drivers, { + None: SignalSet((self.s1, self.s2)), + "pix": SignalSet((self.s3,)), + }) + + def test_rename_multi(self): + f = Fragment() + f.add_statements( + self.s1.eq(ClockSignal()), + self.s2.eq(ResetSignal("other")), + ) + + f = DomainRenamer({"sync": "pix", "other": "pix2"})(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (clk pix)) + (eq (sig s2) (rst pix2)) + ) + """) + + def test_rename_cd(self): + cd_sync = ClockDomain() + cd_pix = ClockDomain() + + f = Fragment() + f.add_domains(cd_sync, cd_pix) + + f = DomainRenamer("ext")(f) + self.assertEqual(cd_sync.name, "ext") + self.assertEqual(f.domains, { + "ext": cd_sync, + "pix": cd_pix, + }) + + def test_rename_cd_subfragment(self): + cd_sync = ClockDomain() + cd_pix = ClockDomain() + + f1 = Fragment() + f1.add_domains(cd_sync, cd_pix) + f2 = Fragment() + f2.add_domains(cd_sync) + f1.add_subfragment(f2) + + f1 = DomainRenamer("ext")(f1) + self.assertEqual(cd_sync.name, "ext") + self.assertEqual(f1.domains, { + "ext": cd_sync, + "pix": cd_pix, + }) + + def test_rename_wrong_to_comb(self): + with self.assertRaises(ValueError, + msg="Domain 'sync' may not be renamed to 'comb'"): + DomainRenamer("comb") + + def test_rename_wrong_from_comb(self): + with self.assertRaises(ValueError, + msg="Domain 'comb' may not be renamed"): + DomainRenamer({"comb": "sync"}) + + 
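+# Illustrative usage sketch: outside these unit tests, DomainRenamer is normally applied
+# to an elaboratable rather than to a raw Fragment, retargeting its "sync" logic onto
+# another clock domain. The `Blinker` elaboratable named below is hypothetical:
+#
+#     m = Module()
+#     m.submodules.blinker = DomainRenamer("pix")(Blinker())
+#
+# The wrapped object behaves like the original elaboratable, as exercised by
+# TransformedElaboratableTestCase further down in this file.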
+class DomainLowererTestCase(FHDLTestCase): + def setUp(self): + self.s = Signal() + + def test_lower_clk(self): + sync = ClockDomain() + f = Fragment() + f.add_domains(sync) + f.add_statements( + self.s.eq(ClockSignal("sync")) + ) + + f = DomainLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s) (sig clk)) + ) + """) + + def test_lower_rst(self): + sync = ClockDomain() + f = Fragment() + f.add_domains(sync) + f.add_statements( + self.s.eq(ResetSignal("sync")) + ) + + f = DomainLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s) (sig rst)) + ) + """) + + def test_lower_rst_reset_less(self): + sync = ClockDomain(reset_less=True) + f = Fragment() + f.add_domains(sync) + f.add_statements( + self.s.eq(ResetSignal("sync", allow_reset_less=True)) + ) + + f = DomainLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s) (const 1'd0)) + ) + """) + + def test_lower_drivers(self): + sync = ClockDomain() + pix = ClockDomain() + f = Fragment() + f.add_domains(sync, pix) + f.add_driver(ClockSignal("pix"), None) + f.add_driver(ResetSignal("pix"), "sync") + + f = DomainLowerer()(f) + self.assertEqual(f.drivers, { + None: SignalSet((pix.clk,)), + "sync": SignalSet((pix.rst,)) + }) + + def test_lower_wrong_domain(self): + f = Fragment() + f.add_statements( + self.s.eq(ClockSignal("xxx")) + ) + + with self.assertRaises(DomainError, + msg="Signal (clk xxx) refers to nonexistent domain 'xxx'"): + DomainLowerer()(f) + + def test_lower_wrong_reset_less_domain(self): + sync = ClockDomain(reset_less=True) + f = Fragment() + f.add_domains(sync) + f.add_statements( + self.s.eq(ResetSignal("sync")) + ) + + with self.assertRaises(DomainError, + msg="Signal (rst sync) refers to reset of reset-less domain 'sync'"): + DomainLowerer()(f) + + +class SampleLowererTestCase(FHDLTestCase): + def setUp(self): + self.i = Signal() + self.o1 = Signal() + self.o2 = Signal() + self.o3 = Signal() + + def test_lower_signal(self): + f = Fragment() + f.add_statements( + self.o1.eq(Sample(self.i, 2, "sync")), + self.o2.eq(Sample(self.i, 1, "sync")), + self.o3.eq(Sample(self.i, 1, "pix")), + ) + + f = SampleLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig o1) (sig $sample$s$i$sync$2)) + (eq (sig o2) (sig $sample$s$i$sync$1)) + (eq (sig o3) (sig $sample$s$i$pix$1)) + (eq (sig $sample$s$i$sync$1) (sig i)) + (eq (sig $sample$s$i$sync$2) (sig $sample$s$i$sync$1)) + (eq (sig $sample$s$i$pix$1) (sig i)) + ) + """) + self.assertEqual(len(f.drivers["sync"]), 2) + self.assertEqual(len(f.drivers["pix"]), 1) + + def test_lower_const(self): + f = Fragment() + f.add_statements( + self.o1.eq(Sample(1, 2, "sync")), + ) + + f = SampleLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig o1) (sig $sample$c$1$sync$2)) + (eq (sig $sample$c$1$sync$1) (const 1'd1)) + (eq (sig $sample$c$1$sync$2) (sig $sample$c$1$sync$1)) + ) + """) + self.assertEqual(len(f.drivers["sync"]), 2) + + +class SwitchCleanerTestCase(FHDLTestCase): + def test_clean(self): + a = Signal() + b = Signal() + c = Signal() + stmts = [ + Switch(a, { + 1: a.eq(0), + 0: [ + b.eq(1), + Switch(b, {1: [ + Switch(a|b, {}) + ]}) + ] + }) + ] + + self.assertRepr(SwitchCleaner()(stmts), """ + ( + (switch (sig a) + (case 1 + (eq (sig a) (const 1'd0))) + (case 0 + (eq (sig b) (const 1'd1))) + ) + ) + """) + + +class LHSGroupAnalyzerTestCase(FHDLTestCase): + def test_no_group_unrelated(self): + a = Signal() + b = Signal() + stmts = [ + a.eq(0), + b.eq(0), + ] + + groups = LHSGroupAnalyzer()(stmts) + 
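+        # Statements whose left-hand sides share no signals land in separate groups.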
self.assertEqual(list(groups.values()), [ + SignalSet((a,)), + SignalSet((b,)), + ]) + + def test_group_related(self): + a = Signal() + b = Signal() + stmts = [ + a.eq(0), + Cat(a, b).eq(0), + ] + + groups = LHSGroupAnalyzer()(stmts) + self.assertEqual(list(groups.values()), [ + SignalSet((a, b)), + ]) + + def test_no_loops(self): + a = Signal() + b = Signal() + stmts = [ + a.eq(0), + Cat(a, b).eq(0), + Cat(a, b).eq(0), + ] + + groups = LHSGroupAnalyzer()(stmts) + self.assertEqual(list(groups.values()), [ + SignalSet((a, b)), + ]) + + def test_switch(self): + a = Signal() + b = Signal() + stmts = [ + a.eq(0), + Switch(a, { + 1: b.eq(0), + }) + ] + + groups = LHSGroupAnalyzer()(stmts) + self.assertEqual(list(groups.values()), [ + SignalSet((a,)), + SignalSet((b,)), + ]) + + def test_lhs_empty(self): + stmts = [ + Cat().eq(0) + ] + + groups = LHSGroupAnalyzer()(stmts) + self.assertEqual(list(groups.values()), [ + ]) + + +class LHSGroupFilterTestCase(FHDLTestCase): + def test_filter(self): + a = Signal() + b = Signal() + c = Signal() + stmts = [ + Switch(a, { + 1: a.eq(0), + 0: [ + b.eq(1), + Switch(b, {1: []}) + ] + }) + ] + + self.assertRepr(LHSGroupFilter(SignalSet((a,)))(stmts), """ + ( + (switch (sig a) + (case 1 + (eq (sig a) (const 1'd0))) + (case 0 ) + ) + ) + """) + + def test_lhs_empty(self): + stmts = [ + Cat().eq(0) + ] + + self.assertRepr(LHSGroupFilter(SignalSet())(stmts), "()") + + +class ResetInserterTestCase(FHDLTestCase): + def setUp(self): + self.s1 = Signal() + self.s2 = Signal(reset=1) + self.s3 = Signal(reset=1, reset_less=True) + self.c1 = Signal() + + def test_reset_default(self): + f = Fragment() + f.add_statements( + self.s1.eq(1) + ) + f.add_driver(self.s1, "sync") + + f = ResetInserter(self.c1)(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (switch (sig c1) + (case 1 (eq (sig s1) (const 1'd0))) + ) + ) + """) + + def test_reset_cd(self): + f = Fragment() + f.add_statements( + self.s1.eq(1), + self.s2.eq(0), + ) + f.add_domains(ClockDomain("sync")) + f.add_driver(self.s1, "sync") + f.add_driver(self.s2, "pix") + + f = ResetInserter({"pix": self.c1})(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (eq (sig s2) (const 1'd0)) + (switch (sig c1) + (case 1 (eq (sig s2) (const 1'd1))) + ) + ) + """) + + def test_reset_value(self): + f = Fragment() + f.add_statements( + self.s2.eq(0) + ) + f.add_driver(self.s2, "sync") + + f = ResetInserter(self.c1)(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s2) (const 1'd0)) + (switch (sig c1) + (case 1 (eq (sig s2) (const 1'd1))) + ) + ) + """) + + def test_reset_less(self): + f = Fragment() + f.add_statements( + self.s3.eq(0) + ) + f.add_driver(self.s3, "sync") + + f = ResetInserter(self.c1)(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s3) (const 1'd0)) + (switch (sig c1) + (case 1 ) + ) + ) + """) + + +class EnableInserterTestCase(FHDLTestCase): + def setUp(self): + self.s1 = Signal() + self.s2 = Signal() + self.s3 = Signal() + self.c1 = Signal() + + def test_enable_default(self): + f = Fragment() + f.add_statements( + self.s1.eq(1) + ) + f.add_driver(self.s1, "sync") + + f = EnableInserter(self.c1)(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (switch (sig c1) + (case 0 (eq (sig s1) (sig s1))) + ) + ) + """) + + def test_enable_cd(self): + f = Fragment() + f.add_statements( + self.s1.eq(1), + self.s2.eq(0), + ) + f.add_driver(self.s1, "sync") + f.add_driver(self.s2, "pix") + + f = EnableInserter({"pix": self.c1})(f) + 
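+        # Only the driver registered in the "pix" domain receives the enable mux; the
+        # "sync"-domain statement is left untouched.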
self.assertRepr(f.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (eq (sig s2) (const 1'd0)) + (switch (sig c1) + (case 0 (eq (sig s2) (sig s2))) + ) + ) + """) + + def test_enable_subfragment(self): + f1 = Fragment() + f1.add_statements( + self.s1.eq(1) + ) + f1.add_driver(self.s1, "sync") + + f2 = Fragment() + f2.add_statements( + self.s2.eq(1) + ) + f2.add_driver(self.s2, "sync") + f1.add_subfragment(f2) + + f1 = EnableInserter(self.c1)(f1) + (f2, _), = f1.subfragments + self.assertRepr(f1.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (switch (sig c1) + (case 0 (eq (sig s1) (sig s1))) + ) + ) + """) + self.assertRepr(f2.statements, """ + ( + (eq (sig s2) (const 1'd1)) + (switch (sig c1) + (case 0 (eq (sig s2) (sig s2))) + ) + ) + """) + + def test_enable_read_port(self): + mem = Memory(width=8, depth=4) + f = EnableInserter(self.c1)(mem.read_port(transparent=False)).elaborate(platform=None) + self.assertRepr(f.named_ports["EN"][0], """ + (m (sig c1) (sig mem_r_en) (const 1'd0)) + """) + + def test_enable_write_port(self): + mem = Memory(width=8, depth=4) + f = EnableInserter(self.c1)(mem.write_port()).elaborate(platform=None) + self.assertRepr(f.named_ports["EN"][0], """ + (m (sig c1) (cat (repl (slice (sig mem_w_en) 0:1) 8)) (const 8'd0)) + """) + + +class _MockElaboratable(Elaboratable): + def __init__(self): + self.s1 = Signal() + + def elaborate(self, platform): + f = Fragment() + f.add_statements( + self.s1.eq(1) + ) + f.add_driver(self.s1, "sync") + return f + + +class TransformedElaboratableTestCase(FHDLTestCase): + def setUp(self): + self.c1 = Signal() + self.c2 = Signal() + + def test_getattr(self): + e = _MockElaboratable() + te = EnableInserter(self.c1)(e) + + self.assertIs(te.s1, e.s1) + + def test_composition(self): + e = _MockElaboratable() + te1 = EnableInserter(self.c1)(e) + te2 = ResetInserter(self.c2)(te1) + + self.assertIsInstance(te1, TransformedElaboratable) + self.assertIs(te1, te2) + + f = Fragment.get(te2, None) + self.assertRepr(f.statements, """ + ( + (eq (sig s1) (const 1'd1)) + (switch (sig c1) + (case 0 (eq (sig s1) (sig s1))) + ) + (switch (sig c2) + (case 1 (eq (sig s1) (const 1'd0))) + ) + ) + """) + + +class MockUserValue(UserValue): + def __init__(self, lowered): + super().__init__() + self.lowered = lowered + + def lower(self): + return self.lowered + + +class UserValueTestCase(FHDLTestCase): + def setUp(self): + self.s = Signal() + self.c = Signal() + self.uv = MockUserValue(self.s) + + def test_lower(self): + sync = ClockDomain() + f = Fragment() + f.add_domains(sync) + f.add_statements( + self.uv.eq(1) + ) + for signal in self.uv._lhs_signals(): + f.add_driver(signal, "sync") + + f = ResetInserter(self.c)(f) + f = DomainLowerer()(f) + self.assertRepr(f.statements, """ + ( + (eq (sig s) (const 1'd1)) + (switch (sig c) + (case 1 (eq (sig s) (const 1'd0))) + ) + (switch (sig rst) + (case 1 (eq (sig s) (const 1'd0))) + ) + ) + """) diff --git a/nmigen/test/test_lib_cdc.py b/nmigen/test/test_lib_cdc.py new file mode 100644 index 0000000..44fe155 --- /dev/null +++ b/nmigen/test/test_lib_cdc.py @@ -0,0 +1,229 @@ +# nmigen: UnusedElaboratable=no + +from .utils import * +from ..hdl import * +from ..back.pysim import * +from ..lib.cdc import * + + +class FFSynchronizerTestCase(FHDLTestCase): + def test_stages_wrong(self): + with self.assertRaises(TypeError, + msg="Synchronization stage count must be a positive integer, not 0"): + FFSynchronizer(Signal(), Signal(), stages=0) + with self.assertRaises(ValueError, + msg="Synchronization stage count may 
not safely be less than 2"): + FFSynchronizer(Signal(), Signal(), stages=1) + + def test_basic(self): + i = Signal() + o = Signal() + frag = FFSynchronizer(i, o) + + sim = Simulator(frag) + sim.add_clock(1e-6) + def process(): + self.assertEqual((yield o), 0) + yield i.eq(1) + yield Tick() + self.assertEqual((yield o), 0) + yield Tick() + self.assertEqual((yield o), 0) + yield Tick() + self.assertEqual((yield o), 1) + sim.add_process(process) + sim.run() + + def test_reset_value(self): + i = Signal(reset=1) + o = Signal() + frag = FFSynchronizer(i, o, reset=1) + + sim = Simulator(frag) + sim.add_clock(1e-6) + def process(): + self.assertEqual((yield o), 1) + yield i.eq(0) + yield Tick() + self.assertEqual((yield o), 1) + yield Tick() + self.assertEqual((yield o), 1) + yield Tick() + self.assertEqual((yield o), 0) + sim.add_process(process) + sim.run() + + +class AsyncFFSynchronizerTestCase(FHDLTestCase): + def test_stages_wrong(self): + with self.assertRaises(TypeError, + msg="Synchronization stage count must be a positive integer, not 0"): + ResetSynchronizer(Signal(), stages=0) + with self.assertRaises(ValueError, + msg="Synchronization stage count may not safely be less than 2"): + ResetSynchronizer(Signal(), stages=1) + + def test_edge_wrong(self): + with self.assertRaises(ValueError, + msg="AsyncFFSynchronizer async edge must be one of 'pos' or 'neg', not 'xxx'"): + AsyncFFSynchronizer(Signal(), Signal(), domain="sync", async_edge="xxx") + + def test_pos_edge(self): + i = Signal() + o = Signal() + m = Module() + m.domains += ClockDomain("sync") + m.submodules += AsyncFFSynchronizer(i, o) + + sim = Simulator(m) + sim.add_clock(1e-6) + def process(): + # initial reset + self.assertEqual((yield i), 0) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + + yield i.eq(1) + yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield i.eq(0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + sim.add_process(process) + with sim.write_vcd("test.vcd"): + sim.run() + + def test_neg_edge(self): + i = Signal(reset=1) + o = Signal() + m = Module() + m.domains += ClockDomain("sync") + m.submodules += AsyncFFSynchronizer(i, o, async_edge="neg") + + sim = Simulator(m) + sim.add_clock(1e-6) + def process(): + # initial reset + self.assertEqual((yield i), 1) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + + yield i.eq(0) + yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield i.eq(1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield o), 0) + yield Tick(); yield Delay(1e-8) + sim.add_process(process) + with sim.write_vcd("test.vcd"): + sim.run() + + +class ResetSynchronizerTestCase(FHDLTestCase): + def 
test_stages_wrong(self): + with self.assertRaises(TypeError, + msg="Synchronization stage count must be a positive integer, not 0"): + ResetSynchronizer(Signal(), stages=0) + with self.assertRaises(ValueError, + msg="Synchronization stage count may not safely be less than 2"): + ResetSynchronizer(Signal(), stages=1) + + def test_basic(self): + arst = Signal() + m = Module() + m.domains += ClockDomain("sync") + m.submodules += ResetSynchronizer(arst) + s = Signal(reset=1) + m.d.sync += s.eq(0) + + sim = Simulator(m) + sim.add_clock(1e-6) + def process(): + # initial reset + self.assertEqual((yield s), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 0) + yield Tick(); yield Delay(1e-8) + + yield arst.eq(1) + yield Delay(1e-8) + self.assertEqual((yield s), 0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 1) + yield arst.eq(0) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 1) + yield Tick(); yield Delay(1e-8) + self.assertEqual((yield s), 0) + yield Tick(); yield Delay(1e-8) + sim.add_process(process) + with sim.write_vcd("test.vcd"): + sim.run() + + +# TODO: test with distinct clocks +class PulseSynchronizerTestCase(FHDLTestCase): + def test_paramcheck(self): + with self.assertRaises(TypeError): + ps = PulseSynchronizer("w", "r", sync_stages=0) + with self.assertRaises(TypeError): + ps = PulseSynchronizer("w", "r", sync_stages="abc") + ps = PulseSynchronizer("w", "r", sync_stages = 1) + + def test_smoke(self): + m = Module() + m.domains += ClockDomain("sync") + ps = m.submodules.dut = PulseSynchronizer("sync", "sync") + + sim = Simulator(m) + sim.add_clock(1e-6) + def process(): + yield ps.i.eq(0) + # TODO: think about reset + for n in range(5): + yield Tick() + # Make sure no pulses are generated in quiescent state + for n in range(3): + yield Tick() + self.assertEqual((yield ps.o), 0) + # Check conservation of pulses + accum = 0 + for n in range(10): + yield ps.i.eq(1 if n < 4 else 0) + yield Tick() + accum += yield ps.o + self.assertEqual(accum, 4) + sim.add_process(process) + sim.run() diff --git a/nmigen/test/test_lib_coding.py b/nmigen/test/test_lib_coding.py new file mode 100644 index 0000000..cf582e3 --- /dev/null +++ b/nmigen/test/test_lib_coding.py @@ -0,0 +1,126 @@ +from .utils import * +from ..hdl import * +from ..asserts import * +from ..back.pysim import * +from ..lib.coding import * + + +class EncoderTestCase(FHDLTestCase): + def test_basic(self): + enc = Encoder(4) + def process(): + self.assertEqual((yield enc.n), 1) + self.assertEqual((yield enc.o), 0) + + yield enc.i.eq(0b0001) + yield Settle() + self.assertEqual((yield enc.n), 0) + self.assertEqual((yield enc.o), 0) + + yield enc.i.eq(0b0100) + yield Settle() + self.assertEqual((yield enc.n), 0) + self.assertEqual((yield enc.o), 2) + + yield enc.i.eq(0b0110) + yield Settle() + self.assertEqual((yield enc.n), 1) + self.assertEqual((yield enc.o), 0) + + sim = Simulator(enc) + sim.add_process(process) + sim.run() + + +class PriorityEncoderTestCase(FHDLTestCase): + def test_basic(self): + enc = PriorityEncoder(4) + def process(): + self.assertEqual((yield enc.n), 1) + self.assertEqual((yield enc.o), 0) + + yield enc.i.eq(0b0001) + yield Settle() + self.assertEqual((yield enc.n), 0) + self.assertEqual((yield enc.o), 0) + + yield enc.i.eq(0b0100) + yield Settle() + 
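+            # Settle() lets the combinational encoder outputs propagate before sampling.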
self.assertEqual((yield enc.n), 0) + self.assertEqual((yield enc.o), 2) + + yield enc.i.eq(0b0110) + yield Settle() + self.assertEqual((yield enc.n), 0) + self.assertEqual((yield enc.o), 1) + + sim = Simulator(enc) + sim.add_process(process) + sim.run() + + +class DecoderTestCase(FHDLTestCase): + def test_basic(self): + dec = Decoder(4) + def process(): + self.assertEqual((yield dec.o), 0b0001) + + yield dec.i.eq(1) + yield Settle() + self.assertEqual((yield dec.o), 0b0010) + + yield dec.i.eq(3) + yield Settle() + self.assertEqual((yield dec.o), 0b1000) + + yield dec.n.eq(1) + yield Settle() + self.assertEqual((yield dec.o), 0b0000) + + sim = Simulator(dec) + sim.add_process(process) + sim.run() + + +class ReversibleSpec(Elaboratable): + def __init__(self, encoder_cls, decoder_cls, args): + self.encoder_cls = encoder_cls + self.decoder_cls = decoder_cls + self.coder_args = args + + def elaborate(self, platform): + m = Module() + enc, dec = self.encoder_cls(*self.coder_args), self.decoder_cls(*self.coder_args) + m.submodules += enc, dec + m.d.comb += [ + dec.i.eq(enc.o), + Assert(enc.i == dec.o) + ] + return m + + +class HammingDistanceSpec(Elaboratable): + def __init__(self, distance, encoder_cls, args): + self.distance = distance + self.encoder_cls = encoder_cls + self.coder_args = args + + def elaborate(self, platform): + m = Module() + enc1, enc2 = self.encoder_cls(*self.coder_args), self.encoder_cls(*self.coder_args) + m.submodules += enc1, enc2 + m.d.comb += [ + Assume(enc1.i + 1 == enc2.i), + Assert(sum(enc1.o ^ enc2.o) == self.distance) + ] + return m + + +class GrayCoderTestCase(FHDLTestCase): + def test_reversible(self): + spec = ReversibleSpec(encoder_cls=GrayEncoder, decoder_cls=GrayDecoder, args=(16,)) + self.assertFormal(spec, mode="prove") + + def test_distance(self): + spec = HammingDistanceSpec(distance=1, encoder_cls=GrayEncoder, args=(16,)) + self.assertFormal(spec, mode="prove") diff --git a/nmigen/test/test_lib_fifo.py b/nmigen/test/test_lib_fifo.py new file mode 100644 index 0000000..38019f9 --- /dev/null +++ b/nmigen/test/test_lib_fifo.py @@ -0,0 +1,272 @@ +# nmigen: UnusedElaboratable=no + +from .utils import * +from ..hdl import * +from ..asserts import * +from ..back.pysim import * +from ..lib.fifo import * + + +class FIFOTestCase(FHDLTestCase): + def test_depth_wrong(self): + with self.assertRaises(TypeError, + msg="FIFO width must be a non-negative integer, not -1"): + FIFOInterface(width=-1, depth=8, fwft=True) + with self.assertRaises(TypeError, + msg="FIFO depth must be a non-negative integer, not -1"): + FIFOInterface(width=8, depth=-1, fwft=True) + + def test_sync_depth(self): + self.assertEqual(SyncFIFO(width=8, depth=0).depth, 0) + self.assertEqual(SyncFIFO(width=8, depth=1).depth, 1) + self.assertEqual(SyncFIFO(width=8, depth=2).depth, 2) + + def test_sync_buffered_depth(self): + self.assertEqual(SyncFIFOBuffered(width=8, depth=0).depth, 0) + self.assertEqual(SyncFIFOBuffered(width=8, depth=1).depth, 1) + self.assertEqual(SyncFIFOBuffered(width=8, depth=2).depth, 2) + + def test_async_depth(self): + self.assertEqual(AsyncFIFO(width=8, depth=0 ).depth, 0) + self.assertEqual(AsyncFIFO(width=8, depth=1 ).depth, 1) + self.assertEqual(AsyncFIFO(width=8, depth=2 ).depth, 2) + self.assertEqual(AsyncFIFO(width=8, depth=3 ).depth, 4) + self.assertEqual(AsyncFIFO(width=8, depth=4 ).depth, 4) + self.assertEqual(AsyncFIFO(width=8, depth=15).depth, 16) + self.assertEqual(AsyncFIFO(width=8, depth=16).depth, 16) + self.assertEqual(AsyncFIFO(width=8, depth=17).depth, 32) 
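+    # AsyncFIFO rounds the requested depth up to the next power of 2; exact_depth=True
+    # instead rejects depths that are not already supported, as checked below.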
+ + def test_async_depth_wrong(self): + with self.assertRaises(ValueError, + msg="AsyncFIFO only supports depths that are powers of 2; " + "requested exact depth 15 is not"): + AsyncFIFO(width=8, depth=15, exact_depth=True) + + def test_async_buffered_depth(self): + self.assertEqual(AsyncFIFOBuffered(width=8, depth=0 ).depth, 0) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=1 ).depth, 2) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=2 ).depth, 2) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=3 ).depth, 3) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=4 ).depth, 5) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=15).depth, 17) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=16).depth, 17) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=17).depth, 17) + self.assertEqual(AsyncFIFOBuffered(width=8, depth=18).depth, 33) + + def test_async_buffered_depth_wrong(self): + with self.assertRaises(ValueError, + msg="AsyncFIFOBuffered only supports depths that are one higher than powers of 2; " + "requested exact depth 16 is not"): + AsyncFIFOBuffered(width=8, depth=16, exact_depth=True) + +class FIFOModel(Elaboratable, FIFOInterface): + """ + Non-synthesizable first-in first-out queue, implemented naively as a chain of registers. + """ + def __init__(self, *, width, depth, fwft, r_domain, w_domain): + super().__init__(width=width, depth=depth, fwft=fwft) + + self.r_domain = r_domain + self.w_domain = w_domain + + self.level = Signal(range(self.depth + 1)) + + def elaborate(self, platform): + m = Module() + + storage = Memory(width=self.width, depth=self.depth) + w_port = m.submodules.w_port = storage.write_port(domain=self.w_domain) + r_port = m.submodules.r_port = storage.read_port (domain="comb") + + produce = Signal(range(self.depth)) + consume = Signal(range(self.depth)) + + m.d.comb += self.r_rdy.eq(self.level > 0) + m.d.comb += r_port.addr.eq((consume + 1) % self.depth) + if self.fwft: + m.d.comb += self.r_data.eq(r_port.data) + with m.If(self.r_en & self.r_rdy): + if not self.fwft: + m.d[self.r_domain] += self.r_data.eq(r_port.data) + m.d[self.r_domain] += consume.eq(r_port.addr) + + m.d.comb += self.w_rdy.eq(self.level < self.depth) + m.d.comb += w_port.data.eq(self.w_data) + with m.If(self.w_en & self.w_rdy): + m.d.comb += w_port.addr.eq((produce + 1) % self.depth) + m.d.comb += w_port.en.eq(1) + m.d[self.w_domain] += produce.eq(w_port.addr) + + with m.If(ResetSignal(self.r_domain) | ResetSignal(self.w_domain)): + m.d.sync += self.level.eq(0) + with m.Else(): + m.d.sync += self.level.eq(self.level + + (self.w_rdy & self.w_en) + - (self.r_rdy & self.r_en)) + + m.d.comb += Assert(ResetSignal(self.r_domain) == ResetSignal(self.w_domain)) + + return m + + +class FIFOModelEquivalenceSpec(Elaboratable): + """ + The first-in first-out queue model equivalence specification: for any inputs and control + signals, the behavior of the implementation under test exactly matches the ideal model, + except for behavior not defined by the model. 
+ """ + def __init__(self, fifo, r_domain, w_domain): + self.fifo = fifo + + self.r_domain = r_domain + self.w_domain = w_domain + + def elaborate(self, platform): + m = Module() + m.submodules.dut = dut = self.fifo + m.submodules.gold = gold = FIFOModel(width=dut.width, depth=dut.depth, fwft=dut.fwft, + r_domain=self.r_domain, w_domain=self.w_domain) + + m.d.comb += [ + gold.r_en.eq(dut.r_rdy & dut.r_en), + gold.w_en.eq(dut.w_en), + gold.w_data.eq(dut.w_data), + ] + + m.d.comb += Assert(dut.r_rdy.implies(gold.r_rdy)) + m.d.comb += Assert(dut.w_rdy.implies(gold.w_rdy)) + if hasattr(dut, "level"): + m.d.comb += Assert(dut.level == gold.level) + + if dut.fwft: + m.d.comb += Assert(dut.r_rdy + .implies(dut.r_data == gold.r_data)) + else: + m.d.comb += Assert((Past(dut.r_rdy, domain=self.r_domain) & + Past(dut.r_en, domain=self.r_domain)) + .implies(dut.r_data == gold.r_data)) + + return m + + +class FIFOContractSpec(Elaboratable): + """ + The first-in first-out queue contract specification: if two elements are written to the queue + consecutively, they must be read out consecutively at some later point, no matter all other + circumstances, with the exception of reset. + """ + def __init__(self, fifo, *, r_domain, w_domain, bound): + self.fifo = fifo + self.r_domain = r_domain + self.w_domain = w_domain + self.bound = bound + + def elaborate(self, platform): + m = Module() + m.submodules.dut = fifo = self.fifo + + m.domains += ClockDomain("sync") + m.d.comb += ResetSignal().eq(0) + if self.w_domain != "sync": + m.domains += ClockDomain(self.w_domain) + m.d.comb += ResetSignal(self.w_domain).eq(0) + if self.r_domain != "sync": + m.domains += ClockDomain(self.r_domain) + m.d.comb += ResetSignal(self.r_domain).eq(0) + + entry_1 = AnyConst(fifo.width) + entry_2 = AnyConst(fifo.width) + + with m.FSM(domain=self.w_domain) as write_fsm: + with m.State("WRITE-1"): + with m.If(fifo.w_rdy): + m.d.comb += [ + fifo.w_data.eq(entry_1), + fifo.w_en.eq(1) + ] + m.next = "WRITE-2" + with m.State("WRITE-2"): + with m.If(fifo.w_rdy): + m.d.comb += [ + fifo.w_data.eq(entry_2), + fifo.w_en.eq(1) + ] + m.next = "DONE" + with m.State("DONE"): + pass + + with m.FSM(domain=self.r_domain) as read_fsm: + read_1 = Signal(fifo.width) + read_2 = Signal(fifo.width) + with m.State("READ"): + m.d.comb += fifo.r_en.eq(1) + if fifo.fwft: + r_rdy = fifo.r_rdy + else: + r_rdy = Past(fifo.r_rdy, domain=self.r_domain) + with m.If(r_rdy): + m.d.sync += [ + read_1.eq(read_2), + read_2.eq(fifo.r_data), + ] + with m.If((read_1 == entry_1) & (read_2 == entry_2)): + m.next = "DONE" + with m.State("DONE"): + pass + + with m.If(Initial()): + m.d.comb += Assume(write_fsm.ongoing("WRITE-1")) + m.d.comb += Assume(read_fsm.ongoing("READ")) + with m.If(Past(Initial(), self.bound - 1)): + m.d.comb += Assert(read_fsm.ongoing("DONE")) + + if self.w_domain != "sync" or self.r_domain != "sync": + m.d.comb += Assume(Rose(ClockSignal(self.w_domain)) | + Rose(ClockSignal(self.r_domain))) + + return m + + +class FIFOFormalCase(FHDLTestCase): + def check_sync_fifo(self, fifo): + self.assertFormal(FIFOModelEquivalenceSpec(fifo, r_domain="sync", w_domain="sync"), + mode="bmc", depth=fifo.depth + 1) + self.assertFormal(FIFOContractSpec(fifo, r_domain="sync", w_domain="sync", + bound=fifo.depth * 2 + 1), + mode="hybrid", depth=fifo.depth * 2 + 1) + + def test_sync_fwft_pot(self): + self.check_sync_fifo(SyncFIFO(width=8, depth=4, fwft=True)) + + def test_sync_fwft_npot(self): + self.check_sync_fifo(SyncFIFO(width=8, depth=5, fwft=True)) + + def 
test_sync_not_fwft_pot(self): + self.check_sync_fifo(SyncFIFO(width=8, depth=4, fwft=False)) + + def test_sync_not_fwft_npot(self): + self.check_sync_fifo(SyncFIFO(width=8, depth=5, fwft=False)) + + def test_sync_buffered_pot(self): + self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=4)) + + def test_sync_buffered_potp1(self): + self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=5)) + + def test_sync_buffered_potm1(self): + self.check_sync_fifo(SyncFIFOBuffered(width=8, depth=3)) + + def check_async_fifo(self, fifo): + # TODO: properly doing model equivalence checking on this likely requires multiclock, + # which is not really documented nor is it clear how to use it. + # self.assertFormal(FIFOModelEquivalenceSpec(fifo, r_domain="read", w_domain="write"), + # mode="bmc", depth=fifo.depth * 3 + 1) + self.assertFormal(FIFOContractSpec(fifo, r_domain="read", w_domain="write", + bound=fifo.depth * 4 + 1), + mode="hybrid", depth=fifo.depth * 4 + 1) + + def test_async(self): + self.check_async_fifo(AsyncFIFO(width=8, depth=4)) + + def test_async_buffered(self): + self.check_async_fifo(AsyncFIFOBuffered(width=8, depth=4)) diff --git a/nmigen/test/test_lib_io.py b/nmigen/test/test_lib_io.py new file mode 100644 index 0000000..cb1cd46 --- /dev/null +++ b/nmigen/test/test_lib_io.py @@ -0,0 +1,199 @@ +from .utils import * +from ..hdl import * +from ..hdl.rec import * +from ..back.pysim import * +from ..lib.io import * + + +class PinLayoutCombTestCase(FHDLTestCase): + def test_pin_layout_i(self): + layout_1 = pin_layout(1, dir="i") + self.assertEqual(layout_1.fields, { + "i": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="i") + self.assertEqual(layout_2.fields, { + "i": ((2, False), DIR_NONE), + }) + + def test_pin_layout_o(self): + layout_1 = pin_layout(1, dir="o") + self.assertEqual(layout_1.fields, { + "o": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="o") + self.assertEqual(layout_2.fields, { + "o": ((2, False), DIR_NONE), + }) + + def test_pin_layout_oe(self): + layout_1 = pin_layout(1, dir="oe") + self.assertEqual(layout_1.fields, { + "o": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="oe") + self.assertEqual(layout_2.fields, { + "o": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + def test_pin_layout_io(self): + layout_1 = pin_layout(1, dir="io") + self.assertEqual(layout_1.fields, { + "i": ((1, False), DIR_NONE), + "o": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="io") + self.assertEqual(layout_2.fields, { + "i": ((2, False), DIR_NONE), + "o": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + +class PinLayoutSDRTestCase(FHDLTestCase): + def test_pin_layout_i(self): + layout_1 = pin_layout(1, dir="i", xdr=1) + self.assertEqual(layout_1.fields, { + "i_clk": ((1, False), DIR_NONE), + "i": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="i", xdr=1) + self.assertEqual(layout_2.fields, { + "i_clk": ((1, False), DIR_NONE), + "i": ((2, False), DIR_NONE), + }) + + def test_pin_layout_o(self): + layout_1 = pin_layout(1, dir="o", xdr=1) + self.assertEqual(layout_1.fields, { + "o_clk": ((1, False), DIR_NONE), + "o": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="o", xdr=1) + self.assertEqual(layout_2.fields, { + "o_clk": ((1, False), DIR_NONE), + "o": ((2, False), DIR_NONE), + }) + + def test_pin_layout_oe(self): + layout_1 = pin_layout(1, dir="oe", xdr=1) + self.assertEqual(layout_1.fields, { + "o_clk": ((1, 
False), DIR_NONE), + "o": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="oe", xdr=1) + self.assertEqual(layout_2.fields, { + "o_clk": ((1, False), DIR_NONE), + "o": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + def test_pin_layout_io(self): + layout_1 = pin_layout(1, dir="io", xdr=1) + self.assertEqual(layout_1.fields, { + "i_clk": ((1, False), DIR_NONE), + "i": ((1, False), DIR_NONE), + "o_clk": ((1, False), DIR_NONE), + "o": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="io", xdr=1) + self.assertEqual(layout_2.fields, { + "i_clk": ((1, False), DIR_NONE), + "i": ((2, False), DIR_NONE), + "o_clk": ((1, False), DIR_NONE), + "o": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + +class PinLayoutDDRTestCase(FHDLTestCase): + def test_pin_layout_i(self): + layout_1 = pin_layout(1, dir="i", xdr=2) + self.assertEqual(layout_1.fields, { + "i_clk": ((1, False), DIR_NONE), + "i0": ((1, False), DIR_NONE), + "i1": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="i", xdr=2) + self.assertEqual(layout_2.fields, { + "i_clk": ((1, False), DIR_NONE), + "i0": ((2, False), DIR_NONE), + "i1": ((2, False), DIR_NONE), + }) + + def test_pin_layout_o(self): + layout_1 = pin_layout(1, dir="o", xdr=2) + self.assertEqual(layout_1.fields, { + "o_clk": ((1, False), DIR_NONE), + "o0": ((1, False), DIR_NONE), + "o1": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="o", xdr=2) + self.assertEqual(layout_2.fields, { + "o_clk": ((1, False), DIR_NONE), + "o0": ((2, False), DIR_NONE), + "o1": ((2, False), DIR_NONE), + }) + + def test_pin_layout_oe(self): + layout_1 = pin_layout(1, dir="oe", xdr=2) + self.assertEqual(layout_1.fields, { + "o_clk": ((1, False), DIR_NONE), + "o0": ((1, False), DIR_NONE), + "o1": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="oe", xdr=2) + self.assertEqual(layout_2.fields, { + "o_clk": ((1, False), DIR_NONE), + "o0": ((2, False), DIR_NONE), + "o1": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + def test_pin_layout_io(self): + layout_1 = pin_layout(1, dir="io", xdr=2) + self.assertEqual(layout_1.fields, { + "i_clk": ((1, False), DIR_NONE), + "i0": ((1, False), DIR_NONE), + "i1": ((1, False), DIR_NONE), + "o_clk": ((1, False), DIR_NONE), + "o0": ((1, False), DIR_NONE), + "o1": ((1, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + layout_2 = pin_layout(2, dir="io", xdr=2) + self.assertEqual(layout_2.fields, { + "i_clk": ((1, False), DIR_NONE), + "i0": ((2, False), DIR_NONE), + "i1": ((2, False), DIR_NONE), + "o_clk": ((1, False), DIR_NONE), + "o0": ((2, False), DIR_NONE), + "o1": ((2, False), DIR_NONE), + "oe": ((1, False), DIR_NONE), + }) + + +class PinTestCase(FHDLTestCase): + def test_attributes(self): + pin = Pin(2, dir="io", xdr=2) + self.assertEqual(pin.width, 2) + self.assertEqual(pin.dir, "io") + self.assertEqual(pin.xdr, 2) diff --git a/nmigen/test/test_sim.py b/nmigen/test/test_sim.py new file mode 100644 index 0000000..d3585a4 --- /dev/null +++ b/nmigen/test/test_sim.py @@ -0,0 +1,726 @@ +import os +from contextlib import contextmanager + +from .utils import * +from .._utils import flatten, union +from ..hdl.ast import * +from ..hdl.cd import * +from ..hdl.mem import * +from ..hdl.rec import * +from ..hdl.dsl import * +from ..hdl.ir import * +from ..back.pysim import * + + +class SimulatorUnitTestCase(FHDLTestCase): + def assertStatement(self, stmt, inputs, 
output, reset=0): + inputs = [Value.cast(i) for i in inputs] + output = Value.cast(output) + + isigs = [Signal(i.shape(), name=n) for i, n in zip(inputs, "abcd")] + osig = Signal(output.shape(), name="y", reset=reset) + + stmt = stmt(osig, *isigs) + frag = Fragment() + frag.add_statements(stmt) + for signal in flatten(s._lhs_signals() for s in Statement.cast(stmt)): + frag.add_driver(signal) + + sim = Simulator(frag) + def process(): + for isig, input in zip(isigs, inputs): + yield isig.eq(input) + yield Settle() + self.assertEqual((yield osig), output.value) + sim.add_process(process) + with sim.write_vcd("test.vcd", "test.gtkw", traces=[*isigs, osig]): + sim.run() + + def test_invert(self): + stmt = lambda y, a: y.eq(~a) + self.assertStatement(stmt, [C(0b0000, 4)], C(0b1111, 4)) + self.assertStatement(stmt, [C(0b1010, 4)], C(0b0101, 4)) + self.assertStatement(stmt, [C(0, 4)], C(-1, 4)) + + def test_neg(self): + stmt = lambda y, a: y.eq(-a) + self.assertStatement(stmt, [C(0b0000, 4)], C(0b0000, 4)) + self.assertStatement(stmt, [C(0b0001, 4)], C(0b1111, 4)) + self.assertStatement(stmt, [C(0b1010, 4)], C(0b0110, 4)) + self.assertStatement(stmt, [C(1, 4)], C(-1, 4)) + self.assertStatement(stmt, [C(5, 4)], C(-5, 4)) + + def test_bool(self): + stmt = lambda y, a: y.eq(a.bool()) + self.assertStatement(stmt, [C(0, 4)], C(0)) + self.assertStatement(stmt, [C(1, 4)], C(1)) + self.assertStatement(stmt, [C(2, 4)], C(1)) + + def test_as_unsigned(self): + stmt = lambda y, a, b: y.eq(a.as_unsigned() == b) + self.assertStatement(stmt, [C(0b01, signed(2)), C(0b0001, unsigned(4))], C(1)) + self.assertStatement(stmt, [C(0b11, signed(2)), C(0b0011, unsigned(4))], C(1)) + + def test_as_signed(self): + stmt = lambda y, a, b: y.eq(a.as_signed() == b) + self.assertStatement(stmt, [C(0b01, unsigned(2)), C(0b0001, signed(4))], C(1)) + self.assertStatement(stmt, [C(0b11, unsigned(2)), C(0b1111, signed(4))], C(1)) + + def test_any(self): + stmt = lambda y, a: y.eq(a.any()) + self.assertStatement(stmt, [C(0b00, 2)], C(0)) + self.assertStatement(stmt, [C(0b01, 2)], C(1)) + self.assertStatement(stmt, [C(0b10, 2)], C(1)) + self.assertStatement(stmt, [C(0b11, 2)], C(1)) + + def test_all(self): + stmt = lambda y, a: y.eq(a.all()) + self.assertStatement(stmt, [C(0b00, 2)], C(0)) + self.assertStatement(stmt, [C(0b01, 2)], C(0)) + self.assertStatement(stmt, [C(0b10, 2)], C(0)) + self.assertStatement(stmt, [C(0b11, 2)], C(1)) + + def test_xor_unary(self): + stmt = lambda y, a: y.eq(a.xor()) + self.assertStatement(stmt, [C(0b00, 2)], C(0)) + self.assertStatement(stmt, [C(0b01, 2)], C(1)) + self.assertStatement(stmt, [C(0b10, 2)], C(1)) + self.assertStatement(stmt, [C(0b11, 2)], C(0)) + + def test_add(self): + stmt = lambda y, a, b: y.eq(a + b) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(1, 4)) + self.assertStatement(stmt, [C(-5, 4), C(-5, 4)], C(-10, 5)) + + def test_sub(self): + stmt = lambda y, a, b: y.eq(a - b) + self.assertStatement(stmt, [C(2, 4), C(1, 4)], C(1, 4)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(-1, 4)) + self.assertStatement(stmt, [C(0, 4), C(10, 4)], C(-10, 5)) + + def test_mul(self): + stmt = lambda y, a, b: y.eq(a * b) + self.assertStatement(stmt, [C(2, 4), C(1, 4)], C(2, 8)) + self.assertStatement(stmt, [C(2, 4), C(2, 4)], C(4, 8)) + self.assertStatement(stmt, [C(7, 4), C(7, 4)], C(49, 8)) + + def test_floordiv(self): + stmt = lambda y, a, b: y.eq(a // b) + self.assertStatement(stmt, [C(2, 4), C(1, 4)], C(2, 8)) + self.assertStatement(stmt, [C(2, 4), C(2, 4)], C(1, 8)) + 
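+        # Floor division: 7 // 2 == 3.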
self.assertStatement(stmt, [C(7, 4), C(2, 4)], C(3, 8)) + + def test_and(self): + stmt = lambda y, a, b: y.eq(a & b) + self.assertStatement(stmt, [C(0b1100, 4), C(0b1010, 4)], C(0b1000, 4)) + + def test_or(self): + stmt = lambda y, a, b: y.eq(a | b) + self.assertStatement(stmt, [C(0b1100, 4), C(0b1010, 4)], C(0b1110, 4)) + + def test_xor_binary(self): + stmt = lambda y, a, b: y.eq(a ^ b) + self.assertStatement(stmt, [C(0b1100, 4), C(0b1010, 4)], C(0b0110, 4)) + + def test_shl(self): + stmt = lambda y, a, b: y.eq(a << b) + self.assertStatement(stmt, [C(0b1001, 4), C(0)], C(0b1001, 5)) + self.assertStatement(stmt, [C(0b1001, 4), C(3)], C(0b1001000, 7)) + + def test_shr(self): + stmt = lambda y, a, b: y.eq(a >> b) + self.assertStatement(stmt, [C(0b1001, 4), C(0)], C(0b1001, 4)) + self.assertStatement(stmt, [C(0b1001, 4), C(2)], C(0b10, 4)) + + def test_eq(self): + stmt = lambda y, a, b: y.eq(a == b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(1)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(0)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(0)) + + def test_ne(self): + stmt = lambda y, a, b: y.eq(a != b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(0)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(1)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(1)) + + def test_lt(self): + stmt = lambda y, a, b: y.eq(a < b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(0)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(1)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(0)) + + def test_ge(self): + stmt = lambda y, a, b: y.eq(a >= b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(1)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(0)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(1)) + + def test_gt(self): + stmt = lambda y, a, b: y.eq(a > b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(0)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(0)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(1)) + + def test_le(self): + stmt = lambda y, a, b: y.eq(a <= b) + self.assertStatement(stmt, [C(0, 4), C(0, 4)], C(1)) + self.assertStatement(stmt, [C(0, 4), C(1, 4)], C(1)) + self.assertStatement(stmt, [C(1, 4), C(0, 4)], C(0)) + + def test_mux(self): + stmt = lambda y, a, b, c: y.eq(Mux(c, a, b)) + self.assertStatement(stmt, [C(2, 4), C(3, 4), C(0)], C(3, 4)) + self.assertStatement(stmt, [C(2, 4), C(3, 4), C(1)], C(2, 4)) + + def test_slice(self): + stmt1 = lambda y, a: y.eq(a[2]) + self.assertStatement(stmt1, [C(0b10110100, 8)], C(0b1, 1)) + stmt2 = lambda y, a: y.eq(a[2:4]) + self.assertStatement(stmt2, [C(0b10110100, 8)], C(0b01, 2)) + + def test_slice_lhs(self): + stmt1 = lambda y, a: y[2].eq(a) + self.assertStatement(stmt1, [C(0b0, 1)], C(0b11111011, 8), reset=0b11111111) + stmt2 = lambda y, a: y[2:4].eq(a) + self.assertStatement(stmt2, [C(0b01, 2)], C(0b11110111, 8), reset=0b11111011) + + def test_bit_select(self): + stmt = lambda y, a, b: y.eq(a.bit_select(b, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(0)], C(0b100, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(2)], C(0b101, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(3)], C(0b110, 3)) + + def test_bit_select_lhs(self): + stmt = lambda y, a, b: y.bit_select(a, 3).eq(b) + self.assertStatement(stmt, [C(0), C(0b100, 3)], C(0b11111100, 8), reset=0b11111111) + self.assertStatement(stmt, [C(2), C(0b101, 3)], C(0b11110111, 8), reset=0b11111111) + self.assertStatement(stmt, [C(3), C(0b110, 3)], C(0b11110111, 8), reset=0b11111111) + + def test_word_select(self): + stmt = 
lambda y, a, b: y.eq(a.word_select(b, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(0)], C(0b100, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(1)], C(0b110, 3)) + self.assertStatement(stmt, [C(0b10110100, 8), C(2)], C(0b010, 3)) + + def test_word_select_lhs(self): + stmt = lambda y, a, b: y.word_select(a, 3).eq(b) + self.assertStatement(stmt, [C(0), C(0b100, 3)], C(0b11111100, 8), reset=0b11111111) + self.assertStatement(stmt, [C(1), C(0b101, 3)], C(0b11101111, 8), reset=0b11111111) + self.assertStatement(stmt, [C(2), C(0b110, 3)], C(0b10111111, 8), reset=0b11111111) + + def test_cat(self): + stmt = lambda y, *xs: y.eq(Cat(*xs)) + self.assertStatement(stmt, [C(0b10, 2), C(0b01, 2)], C(0b0110, 4)) + + def test_cat_lhs(self): + l = Signal(3) + m = Signal(3) + n = Signal(3) + stmt = lambda y, a: [Cat(l, m, n).eq(a), y.eq(Cat(n, m, l))] + self.assertStatement(stmt, [C(0b100101110, 9)], C(0b110101100, 9)) + + def test_nested_cat_lhs(self): + l = Signal(3) + m = Signal(3) + n = Signal(3) + stmt = lambda y, a: [Cat(Cat(l, Cat(m)), n).eq(a), y.eq(Cat(n, m, l))] + self.assertStatement(stmt, [C(0b100101110, 9)], C(0b110101100, 9)) + + def test_record(self): + rec = Record([ + ("l", 1), + ("m", 2), + ]) + stmt = lambda y, a: [rec.eq(a), y.eq(rec)] + self.assertStatement(stmt, [C(0b101, 3)], C(0b101, 3)) + + def test_repl(self): + stmt = lambda y, a: y.eq(Repl(a, 3)) + self.assertStatement(stmt, [C(0b10, 2)], C(0b101010, 6)) + + def test_array(self): + array = Array([1, 4, 10]) + stmt = lambda y, a: y.eq(array[a]) + self.assertStatement(stmt, [C(0)], C(1)) + self.assertStatement(stmt, [C(1)], C(4)) + self.assertStatement(stmt, [C(2)], C(10)) + + def test_array_oob(self): + array = Array([1, 4, 10]) + stmt = lambda y, a: y.eq(array[a]) + self.assertStatement(stmt, [C(3)], C(10)) + self.assertStatement(stmt, [C(4)], C(10)) + + def test_array_lhs(self): + l = Signal(3, reset=1) + m = Signal(3, reset=4) + n = Signal(3, reset=7) + array = Array([l, m, n]) + stmt = lambda y, a, b: [array[a].eq(b), y.eq(Cat(*array))] + self.assertStatement(stmt, [C(0), C(0b000)], C(0b111100000)) + self.assertStatement(stmt, [C(1), C(0b010)], C(0b111010001)) + self.assertStatement(stmt, [C(2), C(0b100)], C(0b100100001)) + + def test_array_lhs_oob(self): + l = Signal(3) + m = Signal(3) + n = Signal(3) + array = Array([l, m, n]) + stmt = lambda y, a, b: [array[a].eq(b), y.eq(Cat(*array))] + self.assertStatement(stmt, [C(3), C(0b001)], C(0b001000000)) + self.assertStatement(stmt, [C(4), C(0b010)], C(0b010000000)) + + def test_array_index(self): + array = Array(Array(x * y for y in range(10)) for x in range(10)) + stmt = lambda y, a, b: y.eq(array[a][b]) + for x in range(10): + for y in range(10): + self.assertStatement(stmt, [C(x), C(y)], C(x * y)) + + def test_array_attr(self): + from collections import namedtuple + pair = namedtuple("pair", ("p", "n")) + + array = Array(pair(x, -x) for x in range(10)) + stmt = lambda y, a: y.eq(array[a].p + array[a].n) + for i in range(10): + self.assertStatement(stmt, [C(i)], C(0)) + + +class SimulatorIntegrationTestCase(FHDLTestCase): + @contextmanager + def assertSimulation(self, module, deadline=None): + sim = Simulator(module) + yield sim + with sim.write_vcd("test.vcd", "test.gtkw"): + if deadline is None: + sim.run() + else: + sim.run_until(deadline) + + def setUp_counter(self): + self.count = Signal(3, reset=4) + self.sync = ClockDomain() + + self.m = Module() + self.m.d.sync += self.count.eq(self.count + 1) + self.m.domains += self.sync + + def 
test_counter_process(self): + self.setUp_counter() + with self.assertSimulation(self.m) as sim: + def process(): + self.assertEqual((yield self.count), 4) + yield Delay(1e-6) + self.assertEqual((yield self.count), 4) + yield self.sync.clk.eq(1) + self.assertEqual((yield self.count), 4) + yield Settle() + self.assertEqual((yield self.count), 5) + yield Delay(1e-6) + self.assertEqual((yield self.count), 5) + yield self.sync.clk.eq(0) + self.assertEqual((yield self.count), 5) + yield Settle() + self.assertEqual((yield self.count), 5) + for _ in range(3): + yield Delay(1e-6) + yield self.sync.clk.eq(1) + yield Delay(1e-6) + yield self.sync.clk.eq(0) + self.assertEqual((yield self.count), 0) + sim.add_process(process) + + def test_counter_clock_and_sync_process(self): + self.setUp_counter() + with self.assertSimulation(self.m) as sim: + sim.add_clock(1e-6, domain="sync") + def process(): + self.assertEqual((yield self.count), 4) + self.assertEqual((yield self.sync.clk), 1) + yield + self.assertEqual((yield self.count), 5) + self.assertEqual((yield self.sync.clk), 1) + for _ in range(3): + yield + self.assertEqual((yield self.count), 0) + sim.add_sync_process(process) + + def test_reset(self): + self.setUp_counter() + sim = Simulator(self.m) + sim.add_clock(1e-6) + times = 0 + def process(): + nonlocal times + self.assertEqual((yield self.count), 4) + yield + self.assertEqual((yield self.count), 5) + yield + self.assertEqual((yield self.count), 6) + yield + times += 1 + sim.add_sync_process(process) + sim.run() + sim.reset() + sim.run() + self.assertEqual(times, 2) + + def setUp_alu(self): + self.a = Signal(8) + self.b = Signal(8) + self.o = Signal(8) + self.x = Signal(8) + self.s = Signal(2) + self.sync = ClockDomain(reset_less=True) + + self.m = Module() + self.m.d.comb += self.x.eq(self.a ^ self.b) + with self.m.Switch(self.s): + with self.m.Case(0): + self.m.d.sync += self.o.eq(self.a + self.b) + with self.m.Case(1): + self.m.d.sync += self.o.eq(self.a - self.b) + with self.m.Case(): + self.m.d.sync += self.o.eq(0) + self.m.domains += self.sync + + def test_alu(self): + self.setUp_alu() + with self.assertSimulation(self.m) as sim: + sim.add_clock(1e-6) + def process(): + yield self.a.eq(5) + yield self.b.eq(1) + yield + self.assertEqual((yield self.x), 4) + yield + self.assertEqual((yield self.o), 6) + yield self.s.eq(1) + yield + yield + self.assertEqual((yield self.o), 4) + yield self.s.eq(2) + yield + yield + self.assertEqual((yield self.o), 0) + sim.add_sync_process(process) + + def setUp_multiclock(self): + self.sys = ClockDomain() + self.pix = ClockDomain() + + self.m = Module() + self.m.domains += self.sys, self.pix + + def test_multiclock(self): + self.setUp_multiclock() + with self.assertSimulation(self.m) as sim: + sim.add_clock(1e-6, domain="sys") + sim.add_clock(0.3e-6, domain="pix") + + def sys_process(): + yield Passive() + yield + yield + self.fail() + def pix_process(): + yield + yield + yield + sim.add_sync_process(sys_process, domain="sys") + sim.add_sync_process(pix_process, domain="pix") + + def setUp_lhs_rhs(self): + self.i = Signal(8) + self.o = Signal(8) + + self.m = Module() + self.m.d.comb += self.o.eq(self.i) + + def test_complex_lhs_rhs(self): + self.setUp_lhs_rhs() + with self.assertSimulation(self.m) as sim: + def process(): + yield self.i.eq(0b10101010) + yield self.i[:4].eq(-1) + yield Settle() + self.assertEqual((yield self.i[:4]), 0b1111) + self.assertEqual((yield self.i), 0b10101111) + sim.add_process(process) + + def test_run_until(self): + m = Module() + s 
= Signal() + m.d.sync += s.eq(0) + with self.assertSimulation(m, deadline=100e-6) as sim: + sim.add_clock(1e-6) + def process(): + for _ in range(101): + yield Delay(1e-6) + self.fail() + sim.add_process(process) + + def test_add_process_wrong(self): + with self.assertSimulation(Module()) as sim: + with self.assertRaises(TypeError, + msg="Cannot add a process 1 because it is not a generator function"): + sim.add_process(1) + + def test_add_process_wrong_generator(self): + with self.assertSimulation(Module()) as sim: + with self.assertWarns(DeprecationWarning, + msg="instead of generators, use generator functions as processes; " + "this allows the simulator to be repeatedly reset"): + def process(): + yield Delay() + sim.add_process(process()) + + def test_add_clock_wrong_twice(self): + m = Module() + s = Signal() + m.d.sync += s.eq(0) + with self.assertSimulation(m) as sim: + sim.add_clock(1) + with self.assertRaises(ValueError, + msg="Domain 'sync' already has a clock driving it"): + sim.add_clock(1) + + def test_add_clock_wrong_missing(self): + m = Module() + with self.assertSimulation(m) as sim: + with self.assertRaises(ValueError, + msg="Domain 'sync' is not present in simulation"): + sim.add_clock(1) + + def test_add_clock_if_exists(self): + m = Module() + with self.assertSimulation(m) as sim: + sim.add_clock(1, if_exists=True) + + def test_command_wrong(self): + survived = False + with self.assertSimulation(Module()) as sim: + def process(): + nonlocal survived + with self.assertRaisesRegex(TypeError, + regex=r"Received unsupported command 1 from process .+?"): + yield 1 + yield Settle() + survived = True + sim.add_process(process) + self.assertTrue(survived) + + def setUp_memory(self, rd_synchronous=True, rd_transparent=True, wr_granularity=None): + self.m = Module() + self.memory = Memory(width=8, depth=4, init=[0xaa, 0x55]) + self.m.submodules.rdport = self.rdport = \ + self.memory.read_port(domain="sync" if rd_synchronous else "comb", + transparent=rd_transparent) + self.m.submodules.wrport = self.wrport = \ + self.memory.write_port(granularity=wr_granularity) + + def test_memory_init(self): + self.setUp_memory() + with self.assertSimulation(self.m) as sim: + def process(): + self.assertEqual((yield self.rdport.data), 0xaa) + yield self.rdport.addr.eq(1) + yield + yield + self.assertEqual((yield self.rdport.data), 0x55) + yield self.rdport.addr.eq(2) + yield + yield + self.assertEqual((yield self.rdport.data), 0x00) + sim.add_clock(1e-6) + sim.add_sync_process(process) + + def test_memory_write(self): + self.setUp_memory() + with self.assertSimulation(self.m) as sim: + def process(): + yield self.wrport.addr.eq(4) + yield self.wrport.data.eq(0x33) + yield self.wrport.en.eq(1) + yield + yield self.wrport.en.eq(0) + yield self.rdport.addr.eq(4) + yield + self.assertEqual((yield self.rdport.data), 0x33) + sim.add_clock(1e-6) + sim.add_sync_process(process) + + def test_memory_write_granularity(self): + self.setUp_memory(wr_granularity=4) + with self.assertSimulation(self.m) as sim: + def process(): + yield self.wrport.data.eq(0x50) + yield self.wrport.en.eq(0b00) + yield + yield self.wrport.en.eq(0) + yield + self.assertEqual((yield self.rdport.data), 0xaa) + yield self.wrport.en.eq(0b10) + yield + yield self.wrport.en.eq(0) + yield + self.assertEqual((yield self.rdport.data), 0x5a) + yield self.wrport.data.eq(0x33) + yield self.wrport.en.eq(0b01) + yield + yield self.wrport.en.eq(0) + yield + self.assertEqual((yield self.rdport.data), 0x53) + sim.add_clock(1e-6) + 
sim.add_sync_process(process) + + def test_memory_read_before_write(self): + self.setUp_memory(rd_transparent=False) + with self.assertSimulation(self.m) as sim: + def process(): + yield self.wrport.data.eq(0x33) + yield self.wrport.en.eq(1) + yield + self.assertEqual((yield self.rdport.data), 0xaa) + yield + self.assertEqual((yield self.rdport.data), 0xaa) + yield Settle() + self.assertEqual((yield self.rdport.data), 0x33) + sim.add_clock(1e-6) + sim.add_sync_process(process) + + def test_memory_write_through(self): + self.setUp_memory(rd_transparent=True) + with self.assertSimulation(self.m) as sim: + def process(): + yield self.wrport.data.eq(0x33) + yield self.wrport.en.eq(1) + yield + self.assertEqual((yield self.rdport.data), 0xaa) + yield Settle() + self.assertEqual((yield self.rdport.data), 0x33) + yield + yield self.rdport.addr.eq(1) + yield Settle() + self.assertEqual((yield self.rdport.data), 0x33) + sim.add_clock(1e-6) + sim.add_sync_process(process) + + def test_memory_async_read_write(self): + self.setUp_memory(rd_synchronous=False) + with self.assertSimulation(self.m) as sim: + def process(): + yield self.rdport.addr.eq(0) + yield Settle() + self.assertEqual((yield self.rdport.data), 0xaa) + yield self.rdport.addr.eq(1) + yield Settle() + self.assertEqual((yield self.rdport.data), 0x55) + yield self.rdport.addr.eq(0) + yield self.wrport.addr.eq(0) + yield self.wrport.data.eq(0x33) + yield self.wrport.en.eq(1) + yield Tick("sync") + self.assertEqual((yield self.rdport.data), 0xaa) + yield Settle() + self.assertEqual((yield self.rdport.data), 0x33) + sim.add_clock(1e-6) + sim.add_process(process) + + def test_memory_read_only(self): + self.m = Module() + self.memory = Memory(width=8, depth=4, init=[0xaa, 0x55]) + self.m.submodules.rdport = self.rdport = self.memory.read_port() + with self.assertSimulation(self.m) as sim: + def process(): + self.assertEqual((yield self.rdport.data), 0xaa) + yield self.rdport.addr.eq(1) + yield + yield + self.assertEqual((yield self.rdport.data), 0x55) + sim.add_clock(1e-6) + sim.add_sync_process(process) + + def test_sample_helpers(self): + m = Module() + s = Signal(2) + def mk(x): + y = Signal.like(x) + m.d.comb += y.eq(x) + return y + p0, r0, f0, s0 = mk(Past(s, 0)), mk(Rose(s)), mk(Fell(s)), mk(Stable(s)) + p1, r1, f1, s1 = mk(Past(s)), mk(Rose(s, 1)), mk(Fell(s, 1)), mk(Stable(s, 1)) + p2, r2, f2, s2 = mk(Past(s, 2)), mk(Rose(s, 2)), mk(Fell(s, 2)), mk(Stable(s, 2)) + p3, r3, f3, s3 = mk(Past(s, 3)), mk(Rose(s, 3)), mk(Fell(s, 3)), mk(Stable(s, 3)) + with self.assertSimulation(m) as sim: + def process_gen(): + yield s.eq(0b10) + yield + yield + yield s.eq(0b01) + yield + def process_check(): + yield + yield + yield + + self.assertEqual((yield p0), 0b01) + self.assertEqual((yield p1), 0b10) + self.assertEqual((yield p2), 0b10) + self.assertEqual((yield p3), 0b00) + + self.assertEqual((yield s0), 0b0) + self.assertEqual((yield s1), 0b1) + self.assertEqual((yield s2), 0b0) + self.assertEqual((yield s3), 0b1) + + self.assertEqual((yield r0), 0b01) + self.assertEqual((yield r1), 0b00) + self.assertEqual((yield r2), 0b10) + self.assertEqual((yield r3), 0b00) + + self.assertEqual((yield f0), 0b10) + self.assertEqual((yield f1), 0b00) + self.assertEqual((yield f2), 0b00) + self.assertEqual((yield f3), 0b00) + sim.add_clock(1e-6) + sim.add_sync_process(process_gen) + sim.add_sync_process(process_check) + + def test_vcd_wrong_nonzero_time(self): + s = Signal() + m = Module() + m.d.sync += s.eq(s) + sim = Simulator(m) + sim.add_clock(1e-6) + 
sim.run_until(1e-5) + with self.assertRaisesRegex(ValueError, + regex=r"^Cannot start writing waveforms after advancing simulation time$"): + with sim.write_vcd(open(os.path.devnull, "wt")): + pass + + def test_vcd_wrong_twice(self): + s = Signal() + m = Module() + m.d.sync += s.eq(s) + sim = Simulator(m) + sim.add_clock(1e-6) + with self.assertRaisesRegex(ValueError, + regex=r"^Already writing waveforms to .+$"): + with sim.write_vcd(open(os.path.devnull, "wt")): + with sim.write_vcd(open(os.path.devnull, "wt")): + pass + + +class SimulatorRegressionTestCase(FHDLTestCase): + def test_bug_325(self): + dut = Module() + dut.d.comb += Signal().eq(Cat()) + Simulator(dut).run() + + def test_bug_325_bis(self): + dut = Module() + dut.d.comb += Signal().eq(Repl(Const(1), 0)) + Simulator(dut).run() diff --git a/nmigen/test/utils.py b/nmigen/test/utils.py new file mode 100644 index 0000000..29bd5b6 --- /dev/null +++ b/nmigen/test/utils.py @@ -0,0 +1,104 @@ +import os +import re +import shutil +import subprocess +import textwrap +import traceback +import unittest +import warnings +from contextlib import contextmanager + +from ..hdl.ast import * +from ..hdl.ir import * +from ..back import rtlil +from .._toolchain import require_tool + + +__all__ = ["FHDLTestCase"] + + +class FHDLTestCase(unittest.TestCase): + def assertRepr(self, obj, repr_str): + if isinstance(obj, list): + obj = Statement.cast(obj) + def prepare_repr(repr_str): + repr_str = re.sub(r"\s+", " ", repr_str) + repr_str = re.sub(r"\( (?=\()", "(", repr_str) + repr_str = re.sub(r"\) (?=\))", ")", repr_str) + return repr_str.strip() + self.assertEqual(prepare_repr(repr(obj)), prepare_repr(repr_str)) + + @contextmanager + def assertRaises(self, exception, msg=None): + with super().assertRaises(exception) as cm: + yield + if msg is not None: + # WTF? unittest.assertRaises is completely broken. + self.assertEqual(str(cm.exception), msg) + + @contextmanager + def assertRaisesRegex(self, exception, regex=None): + with super().assertRaises(exception) as cm: + yield + if regex is not None: + # unittest.assertRaisesRegex also seems broken... + self.assertRegex(str(cm.exception), regex) + + @contextmanager + def assertWarns(self, category, msg=None): + with warnings.catch_warnings(record=True) as warns: + yield + self.assertEqual(len(warns), 1) + self.assertEqual(warns[0].category, category) + if msg is not None: + self.assertEqual(str(warns[0].message), msg) + + def assertFormal(self, spec, mode="bmc", depth=1, engine="smtbmc"): + caller, *_ = traceback.extract_stack(limit=2) + spec_root, _ = os.path.splitext(caller.filename) + spec_dir = os.path.dirname(spec_root) + spec_name = "{}_{}".format( + os.path.basename(spec_root).replace("test_", "spec_"), + caller.name.replace("test_", "") + ) + + # The sby -f switch seems not fully functional when sby is reading from stdin. + if os.path.exists(os.path.join(spec_dir, spec_name)): + shutil.rmtree(os.path.join(spec_dir, spec_name)) + + if mode == "hybrid": + # A mix of BMC and k-induction, as per personal communication with Claire Wolf. 
+ script = "setattr -unset init w:* a:nmigen.sample_reg %d" + mode = "bmc" + else: + script = "" + + config = textwrap.dedent("""\ + [options] + mode {mode} + depth {depth} + wait on + + [engines] + {engine} + + [script] + read_ilang top.il + prep + {script} + + [file top.il] + {rtlil} + """).format( + mode=mode, + depth=depth, + engine=engine, + script=script, + rtlil=rtlil.convert(Fragment.get(spec, platform="formal")) + ) + with subprocess.Popen([require_tool("sby"), "-f", "-d", spec_name], cwd=spec_dir, + universal_newlines=True, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) as proc: + stdout, stderr = proc.communicate(config) + if proc.returncode != 0: + self.fail("Formal verification failed:\n" + stdout) diff --git a/nmigen/tracer.py b/nmigen/tracer.py new file mode 100644 index 0000000..f02ecd3 --- /dev/null +++ b/nmigen/tracer.py @@ -0,0 +1,55 @@ +import sys +from opcode import opname + + +__all__ = ["NameNotFound", "get_var_name", "get_src_loc"] + + +class NameNotFound(Exception): + pass + + +_raise_exception = object() + + +def get_var_name(depth=2, default=_raise_exception): + frame = sys._getframe(depth) + code = frame.f_code + call_index = frame.f_lasti + while True: + call_opc = opname[code.co_code[call_index]] + if call_opc in ("EXTENDED_ARG",): + call_index += 2 + else: + break + if call_opc not in ("CALL_FUNCTION", "CALL_FUNCTION_KW", "CALL_FUNCTION_EX", "CALL_METHOD"): + return None + + index = call_index + 2 + while True: + opc = opname[code.co_code[index]] + if opc in ("STORE_NAME", "STORE_ATTR"): + name_index = int(code.co_code[index + 1]) + return code.co_names[name_index] + elif opc == "STORE_FAST": + name_index = int(code.co_code[index + 1]) + return code.co_varnames[name_index] + elif opc == "STORE_DEREF": + name_index = int(code.co_code[index + 1]) + return code.co_cellvars[name_index] + elif opc in ("LOAD_GLOBAL", "LOAD_ATTR", "LOAD_FAST", "LOAD_DEREF", + "DUP_TOP", "BUILD_LIST"): + index += 2 + else: + if default is _raise_exception: + raise NameNotFound + else: + return default + + +def get_src_loc(src_loc_at=0): + # n-th frame: get_src_loc() + # n-1th frame: caller of get_src_loc() (usually constructor) + # n-2th frame: caller of caller (usually user code) + frame = sys._getframe(2 + src_loc_at) + return (frame.f_code.co_filename, frame.f_lineno) diff --git a/nmigen/utils.py b/nmigen/utils.py new file mode 100644 index 0000000..227258a --- /dev/null +++ b/nmigen/utils.py @@ -0,0 +1,21 @@ +__all__ = ["log2_int", "bits_for"] + + +def log2_int(n, need_pow2=True): + if n == 0: + return 0 + r = (n - 1).bit_length() + if need_pow2 and (1 << r) != n: + raise ValueError("{} is not a power of 2".format(n)) + return r + + +def bits_for(n, require_sign_bit=False): + if n > 0: + r = log2_int(n + 1, False) + else: + require_sign_bit = True + r = log2_int(-n, False) + if require_sign_bit: + r += 1 + return r diff --git a/nmigen/vendor/__init__.py b/nmigen/vendor/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nmigen/vendor/intel.py b/nmigen/vendor/intel.py new file mode 100644 index 0000000..2f0c81f --- /dev/null +++ b/nmigen/vendor/intel.py @@ -0,0 +1,423 @@ +from abc import abstractproperty + +from ..hdl import * +from ..build import * + + +__all__ = ["IntelPlatform"] + + +class IntelPlatform(TemplatedPlatform): + """ + Required tools: + * ``quartus_map`` + * ``quartus_fit`` + * ``quartus_asm`` + * ``quartus_sta`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_Quartus``, if present. 
+ + Available overrides: + * ``nproc``: sets the number of cores used by all tools. + * ``quartus_map_opts``: adds extra options for ``quartus_map``. + * ``quartus_fit_opts``: adds extra options for ``quartus_fit``. + * ``quartus_asm_opts``: adds extra options for ``quartus_asm``. + * ``quartus_sta_opts``: adds extra options for ``quartus_sta``. + + Build products: + * ``*.rpt``: toolchain reports. + * ``{{name}}.sof``: bitstream as SRAM object file. + * ``{{name}}.rbf``: bitstream as raw binary file. + """ + + toolchain = "Quartus" + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + suffix = "" + + quartus_suppressed_warnings = [ + 10264, # All case item expressions in this case statement are onehot + 10270, # Incomplete Verilog case statement has no default case item + 10335, # Unrecognized synthesis attribute + 10763, # Verilog case statement has overlapping case item expressions with non-constant or don't care bits + 10935, # Verilog casex/casez overlaps with a previous casex/vasez item expression + 12125, # Using design file which is not specified as a design file for the current project, but contains definitions used in project + 18236, # Number of processors not specified in QSF + 292013, # Feature is only available with a valid subscription license + ] + + required_tools = [ + "quartus_map", + "quartus_fit", + "quartus_asm", + "quartus_sta", + ] + + file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + if [ -n "${{platform._toolchain_env_var}}" ]; then + QUARTUS_ROOTDIR=$(dirname $(dirname "${{platform._toolchain_env_var}}")) + # Quartus' qenv.sh does not work with `set -e`. + . "${{platform._toolchain_env_var}}" + fi + set -e{{verbose("x")}} + {{emit_commands("sh")}} + """, + # Quartus doesn't like constructs like (* keep = 32'd1 *), even though they mean the same + # thing as (* keep = 1 *); use -decimal to work around that. 
+ "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog(["-decimal"])}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog(["-decimal"])}} + """, + "{{name}}.qsf": r""" + # {{autogenerated}} + {% if get_override("nproc") -%} + set_global_assignment -name NUM_PARALLEL_PROCESSORS {{get_override("nproc")}} + {% endif %} + + {% for file in platform.iter_extra_files(".v") -%} + set_global_assignment -name VERILOG_FILE "{{file}}" + {% endfor %} + {% for file in platform.iter_extra_files(".sv") -%} + set_global_assignment -name SYSTEMVERILOG_FILE "{{file}}" + {% endfor %} + {% for file in platform.iter_extra_files(".vhd", ".vhdl") -%} + set_global_assignment -name VHDL_FILE "{{file}}" + {% endfor %} + set_global_assignment -name VERILOG_FILE {{name}}.v + set_global_assignment -name TOP_LEVEL_ENTITY {{name}} + + set_global_assignment -name DEVICE {{platform.device}}{{platform.package}}{{platform.speed}}{{platform.suffix}} + {% for port_name, pin_name, extras in platform.iter_port_constraints_bits() -%} + set_location_assignment -to "{{port_name}}" PIN_{{pin_name}} + {% for key, value in extras.items() -%} + set_instance_assignment -to "{{port_name}}" -name {{key}} "{{value}}" + {% endfor %} + {% endfor %} + + set_global_assignment -name GENERATE_RBF_FILE ON + """, + "{{name}}.sdc": r""" + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}] + {% else -%} + create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("|")}}] + {% endif %} + {% endfor %} + """, + "{{name}}.srf": r""" + {% for warning in platform.quartus_suppressed_warnings %} + { "" "" "" "{{name}}.v" { } { } 0 {{warning}} "" 0 0 "Design Software" 0 -1 0 ""} + {% endfor %} + """, + } + command_templates = [ + r""" + {{invoke_tool("quartus_map")}} + {{get_override("quartus_map_opts")|options}} + --rev={{name}} {{name}} + """, + r""" + {{invoke_tool("quartus_fit")}} + {{get_override("quartus_fit_opts")|options}} + --rev={{name}} {{name}} + """, + r""" + {{invoke_tool("quartus_asm")}} + {{get_override("quartus_asm_opts")|options}} + --rev={{name}} {{name}} + """, + r""" + {{invoke_tool("quartus_sta")}} + {{get_override("quartus_sta_opts")|options}} + --rev={{name}} {{name}} + """, + ] + + def add_clock_constraint(self, clock, frequency): + super().add_clock_constraint(clock, frequency) + # Make sure the net constrained in the SDC file is kept through synthesis; it is redundant + # after Quartus flattens the hierarchy and will be eliminated if not explicitly kept. + clock.attrs["keep"] = 1 + + # The altiobuf_* and altddio_* primitives are explained in the following Intel documents: + # * https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/ug/ug_altiobuf.pdf + # * https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/ug/ug_altddio.pdf + # See also errata mentioned in: https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base/solutions/rd11192012_735.html. 
+ + @staticmethod + def _get_ireg(m, pin, invert): + def get_ineg(i): + if invert: + i_neg = Signal.like(i, name_suffix="_neg") + m.d.comb += i.eq(~i_neg) + return i_neg + else: + return i + + if pin.xdr == 0: + return get_ineg(pin.i) + elif pin.xdr == 1: + i_sdr = Signal(pin.width, name="{}_i_sdr") + m.submodules += Instance("$dff", + p_CLK_POLARITY=1, + p_WIDTH=pin.width, + i_CLK=pin.i_clk, + i_D=i_sdr, + o_Q=get_ineg(pin.i), + ) + return i_sdr + elif pin.xdr == 2: + i_ddr = Signal(pin.width, name="{}_i_ddr".format(pin.name)) + m.submodules["{}_i_ddr".format(pin.name)] = Instance("altddio_in", + p_width=pin.width, + i_datain=i_ddr, + i_inclock=pin.i_clk, + o_dataout_h=get_ineg(pin.i0), + o_dataout_l=get_ineg(pin.i1), + ) + return i_ddr + assert False + + @staticmethod + def _get_oreg(m, pin, invert): + def get_oneg(o): + if invert: + o_neg = Signal.like(o, name_suffix="_neg") + m.d.comb += o_neg.eq(~o) + return o_neg + else: + return o + + if pin.xdr == 0: + return get_oneg(pin.o) + elif pin.xdr == 1: + o_sdr = Signal(pin.width, name="{}_o_sdr".format(pin.name)) + m.submodules += Instance("$dff", + p_CLK_POLARITY=1, + p_WIDTH=pin.width, + i_CLK=pin.o_clk, + i_D=get_oneg(pin.o), + o_Q=o_sdr, + ) + return o_sdr + elif pin.xdr == 2: + o_ddr = Signal(pin.width, name="{}_o_ddr".format(pin.name)) + m.submodules["{}_o_ddr".format(pin.name)] = Instance("altddio_out", + p_width=pin.width, + o_dataout=o_ddr, + i_outclock=pin.o_clk, + i_datain_h=get_oneg(pin.o0), + i_datain_l=get_oneg(pin.o1), + ) + return o_ddr + assert False + + @staticmethod + def _get_oereg(m, pin): + # altiobuf_ requires an output enable signal for each pin, but pin.oe is 1 bit wide. + if pin.xdr == 0: + return Repl(pin.oe, pin.width) + elif pin.xdr in (1, 2): + oe_reg = Signal(pin.width, name="{}_oe_reg".format(pin.name)) + oe_reg.attrs["useioff"] = "1" + m.submodules += Instance("$dff", + p_CLK_POLARITY=1, + p_WIDTH=pin.width, + i_CLK=pin.o_clk, + i_D=pin.oe, + o_Q=oe_reg, + ) + return oe_reg + assert False + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_in", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="FALSE", + i_datain=port, + o_dataout=self._get_ireg(m, pin, invert) + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_out", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="FALSE", + p_use_oe="FALSE", + i_datain=self._get_oreg(m, pin, invert), + o_dataout=port, + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_out", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="FALSE", + p_use_oe="TRUE", + i_datain=self._get_oreg(m, pin, invert), + o_dataout=port, + i_oe=self._get_oereg(m, pin) + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), 
valid_attrs=True) + if pin.xdr == 1: + port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_bidir", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="FALSE", + i_datain=self._get_oreg(m, pin, invert), + io_dataio=port, + o_dataout=self._get_ireg(m, pin, invert), + i_oe=self._get_oereg(m, pin), + ) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + p_port.attrs["useioff"] = 1 + n_port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_in", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="TRUE", + i_datain=p_port, + i_datain_b=n_port, + o_dataout=self._get_ireg(m, pin, invert) + ) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + p_port.attrs["useioff"] = 1 + n_port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_out", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="TRUE", + p_use_oe="FALSE", + i_datain=self._get_oreg(m, pin, invert), + o_dataout=p_port, + o_dataout_b=n_port, + ) + return m + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + p_port.attrs["useioff"] = 1 + n_port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_out", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="TRUE", + p_use_oe="TRUE", + i_datain=self._get_oreg(m, pin, invert), + o_dataout=p_port, + o_dataout_b=n_port, + i_oe=self._get_oereg(m, pin), + ) + return m + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + if pin.xdr == 1: + p_port.attrs["useioff"] = 1 + n_port.attrs["useioff"] = 1 + + m = Module() + m.submodules[pin.name] = Instance("altiobuf_bidir", + p_enable_bus_hold="FALSE", + p_number_of_channels=pin.width, + p_use_differential_mode="TRUE", + i_datain=self._get_oreg(m, pin, invert), + io_dataio=p_port, + io_dataio_b=n_port, + o_dataout=self._get_ireg(m, pin, invert), + i_oe=self._get_oereg(m, pin), + ) + return m + + # The altera_std_synchronizer{,_bundle} megafunctions embed SDC constraints that mark false + # paths, so use them instead of our default implementation. 
+ + def get_ff_sync(self, ff_sync): + return Instance("altera_std_synchronizer_bundle", + p_width=len(ff_sync.i), + p_depth=ff_sync._stages, + i_clk=ClockSignal(ff_sync._o_domain), + i_reset_n=Const(1), + i_din=ff_sync.i, + o_dout=ff_sync.o, + ) + + def get_async_ff_sync(self, async_ff_sync): + m = Module() + sync_output = Signal() + if async_ff_sync._edge == "pos": + m.submodules += Instance("altera_std_synchronizer", + p_depth=async_ff_sync._stages, + i_clk=ClockSignal(async_ff_sync._domain), + i_reset_n=~async_ff_sync.i, + i_din=Const(1), + o_dout=sync_output, + ) + else: + m.submodules += Instance("altera_std_synchronizer", + p_depth=async_ff_sync._stages, + i_clk=ClockSignal(async_ff_sync._domain), + i_reset_n=async_ff_sync.i, + i_din=Const(1), + o_dout=sync_output, + ) + m.d.comb += async_ff_sync.o.eq(~sync_output) + return m diff --git a/nmigen/vendor/lattice_ecp5.py b/nmigen/vendor/lattice_ecp5.py new file mode 100644 index 0000000..691ac0b --- /dev/null +++ b/nmigen/vendor/lattice_ecp5.py @@ -0,0 +1,574 @@ +from abc import abstractproperty + +from ..hdl import * +from ..build import * + + +__all__ = ["LatticeECP5Platform"] + + +class LatticeECP5Platform(TemplatedPlatform): + """ + Trellis toolchain + ----------------- + + Required tools: + * ``yosys`` + * ``nextpnr-ecp5`` + * ``ecppack`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_Trellis``, if present. + + Available overrides: + * ``verbose``: enables logging of informational messages to standard error. + * ``read_verilog_opts``: adds options for ``read_verilog`` Yosys command. + * ``synth_opts``: adds options for ``synth_ecp5`` Yosys command. + * ``script_after_read``: inserts commands after ``read_ilang`` in Yosys script. + * ``script_after_synth``: inserts commands after ``synth_ecp5`` in Yosys script. + * ``yosys_opts``: adds extra options for ``yosys``. + * ``nextpnr_opts``: adds extra options for ``nextpnr-ecp5``. + * ``ecppack_opts``: adds extra options for ``ecppack``. + * ``add_preferences``: inserts commands at the end of the LPF file. + + Build products: + * ``{{name}}.rpt``: Yosys log. + * ``{{name}}.json``: synthesized RTL. + * ``{{name}}.tim``: nextpnr log. + * ``{{name}}.config``: ASCII bitstream. + * ``{{name}}.bit``: binary bitstream. + * ``{{name}}.svf``: JTAG programming vector. + + Diamond toolchain + ----------------- + + Required tools: + * ``pnmainc`` + * ``ddtcmd`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_Diamond``, if present. + + Available overrides: + * ``script_project``: inserts commands before ``prj_project save`` in Tcl script. + * ``script_after_export``: inserts commands after ``prj_run Export`` in Tcl script. + * ``add_preferences``: inserts commands at the end of the LPF file. + * ``add_constraints``: inserts commands at the end of the XDC file. + + Build products: + * ``{{name}}_impl/{{name}}_impl.htm``: consolidated log. + * ``{{name}}.bit``: binary bitstream. + * ``{{name}}.svf``: JTAG programming vector. 
+ """ + + toolchain = None # selected when creating platform + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + grade = "C" # [C]ommercial, [I]ndustrial + + # Trellis templates + + _nextpnr_device_options = { + "LFE5U-12F": "--25k", + "LFE5U-25F": "--25k", + "LFE5U-45F": "--45k", + "LFE5U-85F": "--85k", + "LFE5UM-12F": "--um-25k", + "LFE5UM-25F": "--um-25k", + "LFE5UM-45F": "--um-45k", + "LFE5UM-85F": "--um-85k", + "LFE5UM5G-12F": "--um5g-25k", + "LFE5UM5G-25F": "--um5g-25k", + "LFE5UM5G-45F": "--um5g-45k", + "LFE5UM5G-85F": "--um5g-85k", + } + _nextpnr_package_options = { + "BG256": "caBGA256", + "MG285": "csfBGA285", + "BG381": "caBGA381", + "BG554": "caBGA554", + "BG756": "caBGA756", + } + + _trellis_required_tools = [ + "yosys", + "nextpnr-ecp5", + "ecppack" + ] + _trellis_file_templates = { + **TemplatedPlatform.build_script_templates, + "{{name}}.il": r""" + # {{autogenerated}} + {{emit_rtlil()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.ys": r""" + # {{autogenerated}} + {% for file in platform.iter_extra_files(".v") -%} + read_verilog {{get_override("read_verilog_opts")|options}} {{file}} + {% endfor %} + {% for file in platform.iter_extra_files(".sv") -%} + read_verilog -sv {{get_override("read_verilog_opts")|options}} {{file}} + {% endfor %} + {% for file in platform.iter_extra_files(".il") -%} + read_ilang {{file}} + {% endfor %} + read_ilang {{name}}.il + {{get_override("script_after_read")|default("# (script_after_read placeholder)")}} + synth_ecp5 {{get_override("synth_opts")|options}} -top {{name}} + {{get_override("script_after_synth")|default("# (script_after_synth placeholder)")}} + write_json {{name}}.json + """, + "{{name}}.lpf": r""" + # {{autogenerated}} + BLOCK ASYNCPATHS; + BLOCK RESETPATHS; + {% for port_name, pin_name, extras in platform.iter_port_constraints_bits() -%} + LOCATE COMP "{{port_name}}" SITE "{{pin_name}}"; + {% if extras -%} + IOBUF PORT "{{port_name}}" + {%- for key, value in extras.items() %} {{key}}={{value}}{% endfor %}; + {% endif %} + {% endfor %} + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + FREQUENCY PORT "{{port_signal.name}}" {{frequency}} HZ; + {% else -%} + FREQUENCY NET "{{net_signal|hierarchy(".")}}" {{frequency}} HZ; + {% endif %} + {% endfor %} + {{get_override("add_preferences")|default("# (add_preferences placeholder)")}} + """ + } + _trellis_command_templates = [ + r""" + {{invoke_tool("yosys")}} + {{quiet("-q")}} + {{get_override("yosys_opts")|options}} + -l {{name}}.rpt + {{name}}.ys + """, + r""" + {{invoke_tool("nextpnr-ecp5")}} + {{quiet("--quiet")}} + {{get_override("nextpnr_opts")|options}} + --log {{name}}.tim + {{platform._nextpnr_device_options[platform.device]}} + --package {{platform._nextpnr_package_options[platform.package]|upper}} + --speed {{platform.speed}} + --json {{name}}.json + --lpf {{name}}.lpf + --textcfg {{name}}.config + """, + r""" + {{invoke_tool("ecppack")}} + {{verbose("--verbose")}} + {{get_override("ecppack_opts")|options}} + --input {{name}}.config + --bit {{name}}.bit + --svf {{name}}.svf + """ + ] + + # Diamond templates + + _diamond_required_tools = [ + "yosys", + "pnmainc", + "ddtcmd" + ] + _diamond_file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -z "$BASH" ] ; then exec /bin/bash "$0" "$@"; fi + if [ -n 
"${{platform._toolchain_env_var}}" ]; then + bindir=$(dirname "${{platform._toolchain_env_var}}") + . "${{platform._toolchain_env_var}}" + fi + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.tcl": r""" + prj_project new -name {{name}} -impl impl -impl_dir top_impl \ + -dev {{platform.device}}-{{platform.speed}}{{platform.package}}{{platform.grade}} \ + -lpf {{name}}.lpf \ + -synthesis synplify + {% for file in platform.iter_extra_files(".v", ".sv", ".vhd", ".vhdl") -%} + prj_src add "{{file}}" + {% endfor %} + prj_src add {{name}}.v + prj_impl option top {{name}} + prj_src add {{name}}.sdc + {{get_override("script_project")|default("# (script_project placeholder)")}} + prj_project save + prj_run Synthesis -impl impl -forceAll + prj_run Translate -impl impl -forceAll + prj_run Map -impl impl -forceAll + prj_run PAR -impl impl -forceAll + prj_run Export -impl "impl" -forceAll -task Bitgen + {{get_override("script_after_export")|default("# (script_after_export placeholder)")}} + """, + "{{name}}.lpf": r""" + # {{autogenerated}} + BLOCK ASYNCPATHS; + BLOCK RESETPATHS; + {% for port_name, pin_name, extras in platform.iter_port_constraints_bits() -%} + LOCATE COMP "{{port_name}}" SITE "{{pin_name}}"; + IOBUF PORT "{{port_name}}" + {%- for key, value in extras.items() %} {{key}}={{value}}{% endfor %}; + {% endfor %} + {{get_override("add_preferences")|default("# (add_preferences placeholder)")}} + """, + "{{name}}.sdc": r""" + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}] + {% else -%} + create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("/")}}] + {% endif %} + {% endfor %} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """, + } + _diamond_command_templates = [ + # These don't have any usable command-line option overrides. 
+ r""" + {{invoke_tool("pnmainc")}} + {{name}}.tcl + """, + r""" + {{invoke_tool("ddtcmd")}} + -oft -bit + -if {{name}}_impl/{{name}}_impl.bit -of {{name}}.bit + """, + r""" + {{invoke_tool("ddtcmd")}} + -oft -svfsingle -revd -op "Fast Program" + -if {{name}}_impl/{{name}}_impl.bit -of {{name}}.svf + """, + ] + + # Common logic + + def __init__(self, *, toolchain="Trellis"): + super().__init__() + + assert toolchain in ("Trellis", "Diamond") + self.toolchain = toolchain + + @property + def required_tools(self): + if self.toolchain == "Trellis": + return self._trellis_required_tools + if self.toolchain == "Diamond": + return self._diamond_required_tools + assert False + + @property + def file_templates(self): + if self.toolchain == "Trellis": + return self._trellis_file_templates + if self.toolchain == "Diamond": + return self._diamond_file_templates + assert False + + @property + def command_templates(self): + if self.toolchain == "Trellis": + return self._trellis_command_templates + if self.toolchain == "Diamond": + return self._diamond_command_templates + assert False + + @property + def default_clk_constraint(self): + if self.default_clk == "OSCG": + return Clock(310e6 / self.oscg_div) + return super().default_clk_constraint + + def create_missing_domain(self, name): + # Lattice ECP5 devices have two global set/reset signals: PUR, which is driven at startup + # by the configuration logic and unconditionally resets every storage element, and GSR, + # which is driven by user logic and each storage element may be configured as affected or + # unaffected by GSR. PUR is purely asynchronous, so even though it is a low-skew global + # network, its deassertion may violate a setup/hold constraint with relation to a user + # clock. To avoid this, a GSR/SGSR instance should be driven synchronized to user clock. + if name == "sync" and self.default_clk is not None: + m = Module() + if self.default_clk == "OSCG": + if not hasattr(self, "oscg_div"): + raise ValueError("OSCG divider (oscg_div) must be an integer between 2 " + "and 128") + if not isinstance(self.oscg_div, int) or self.oscg_div < 2 or self.oscg_div > 128: + raise ValueError("OSCG divider (oscg_div) must be an integer between 2 " + "and 128, not {!r}" + .format(self.oscg_div)) + clk_i = Signal() + m.submodules += Instance("OSCG", p_DIV=self.oscg_div, o_OSC=clk_i) + else: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + else: + rst_i = Const(0) + + gsr0 = Signal() + gsr1 = Signal() + # There is no end-of-startup signal on ECP5, but PUR is released after IOB enable, so + # a simple reset synchronizer (with PUR as the asynchronous reset) does the job. + m.submodules += [ + Instance("FD1S3AX", p_GSR="DISABLED", i_CK=clk_i, i_D=~rst_i, o_Q=gsr0), + Instance("FD1S3AX", p_GSR="DISABLED", i_CK=clk_i, i_D=gsr0, o_Q=gsr1), + # Although we already synchronize the reset input to user clock, SGSR has dedicated + # clock routing to the center of the FPGA; use that just in case it turns out to be + # more reliable. (None of this is documented.) + Instance("SGSR", i_CLK=clk_i, i_GSR=gsr1), + ] + # GSR implicitly connects to every appropriate storage element. As such, the sync + # domain is reset-less; domains driven by other clocks would need to have dedicated + # reset circuitry or otherwise meet setup/hold constraints on their own. 
+ m.domains += ClockDomain("sync", reset_less=True) + m.d.comb += ClockSignal("sync").eq(clk_i) + return m + + _single_ended_io_types = [ + "HSUL12", "LVCMOS12", "LVCMOS15", "LVCMOS18", "LVCMOS25", "LVCMOS33", "LVTTL33", + "SSTL135_I", "SSTL135_II", "SSTL15_I", "SSTL15_II", "SSTL18_I", "SSTL18_II", + ] + _differential_io_types = [ + "BLVDS25", "BLVDS25E", "HSUL12D", "LVCMOS18D", "LVCMOS25D", "LVCMOS33D", + "LVDS", "LVDS25E", "LVPECL33", "LVPECL33E", "LVTTL33D", "MLVDS", "MLVDS25E", + "SLVS", "SSTL135D_II", "SSTL15D_II", "SSTL18D_II", "SUBLVDS", + ] + + def should_skip_port_component(self, port, attrs, component): + # On ECP5, a differential IO is placed by only instantiating an IO buffer primitive at + # the PIOA or PIOC location, which is always the non-inverting pin. + if attrs.get("IO_TYPE", "LVCMOS25") in self._differential_io_types and component == "n": + return True + return False + + def _get_xdr_buffer(self, m, pin, *, i_invert=False, o_invert=False): + def get_ireg(clk, d, q): + for bit in range(len(q)): + m.submodules += Instance("IFS1P3DX", + i_SCLK=clk, + i_SP=Const(1), + i_CD=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_oreg(clk, d, q): + for bit in range(len(q)): + m.submodules += Instance("OFS1P3DX", + i_SCLK=clk, + i_SP=Const(1), + i_CD=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_iddr(sclk, d, q0, q1): + for bit in range(len(d)): + m.submodules += Instance("IDDRX1F", + i_SCLK=sclk, + i_RST=Const(0), + i_D=d[bit], + o_Q0=q0[bit], o_Q1=q1[bit] + ) + + def get_oddr(sclk, d0, d1, q): + for bit in range(len(q)): + m.submodules += Instance("ODDRX1F", + i_SCLK=sclk, + i_RST=Const(0), + i_D0=d0[bit], i_D1=d1[bit], + o_Q=q[bit] + ) + + def get_ineg(z, invert): + if invert: + a = Signal.like(z, name_suffix="_n") + m.d.comb += z.eq(~a) + return a + else: + return z + + def get_oneg(a, invert): + if invert: + z = Signal.like(a, name_suffix="_n") + m.d.comb += z.eq(~a) + return z + else: + return a + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + i = o = t = None + if "i" in pin.dir: + i = Signal(pin.width, name="{}_xdr_i".format(pin.name)) + if "o" in pin.dir: + o = Signal(pin.width, name="{}_xdr_o".format(pin.name)) + if pin.dir in ("oe", "io"): + t = Signal(1, name="{}_xdr_t".format(pin.name)) + + if pin.xdr == 0: + if "i" in pin.dir: + i = pin_i + if "o" in pin.dir: + o = pin_o + if pin.dir in ("oe", "io"): + t = ~pin.oe + elif pin.xdr == 1: + # Note that currently nextpnr will not pack an FF (*FS1P3DX) into the PIO. + if "i" in pin.dir: + get_ireg(pin.i_clk, i, pin_i) + if "o" in pin.dir: + get_oreg(pin.o_clk, pin_o, o) + if pin.dir in ("oe", "io"): + get_oreg(pin.o_clk, ~pin.oe, t) + elif pin.xdr == 2: + if "i" in pin.dir: + get_iddr(pin.i_clk, i, pin_i0, pin_i1) + if "o" in pin.dir: + get_oddr(pin.o_clk, pin_o0, pin_o1, o) + if pin.dir in ("oe", "io"): + # It looks like Diamond will not pack an OREG as a tristate register in a DDR PIO. + # It is not clear what is the recommended set of primitives for this task. + # Similarly, nextpnr will not pack anything as a tristate register in a DDR PIO. 
+ get_oreg(pin.o_clk, ~pin.oe, t) + else: + assert False + + return (i, o, t) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IB", + i_I=port[bit], + o_O=i[bit] + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OB", + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBZ", + i_T=t, + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("BB", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_B=port[bit] + ) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IB", + i_I=p_port[bit], + o_O=i[bit] + ) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OB", + i_I=o[bit], + o_O=p_port[bit], + ) + return m + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBZ", + i_T=t, + i_I=o[bit], + o_O=p_port[bit], + ) + return m + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("BB", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_B=p_port[bit], + ) + return m + + # CDC primitives are not currently specialized for ECP5. + # While Diamond supports false path constraints; nextpnr-ecp5 does not. 
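
The ECP5 platform above is abstract: `device`, `package`, and `speed` are `abstractproperty` placeholders, and the toolchain ("Trellis" or "Diamond") is selected when the platform is instantiated. The sketch below shows how a board definition might fill these in; it is only an illustration, and the board class name, the pin location "G2", and the 25 MHz clock resource are hypothetical, not part of this repository or of nMigen itself.

```python
# Minimal sketch of a board definition built on LatticeECP5Platform.
# The class name, pin "G2", and 25 MHz clock are hypothetical examples.
from nmigen.build import Resource, Pins, Clock, Attrs
from nmigen.vendor.lattice_ecp5 import LatticeECP5Platform


class ExampleECP5Platform(LatticeECP5Platform):
    device      = "LFE5U-45F"   # must be a key of _nextpnr_device_options
    package     = "BG381"       # must be a key of _nextpnr_package_options
    speed       = "8"
    default_clk = "clk25"       # requested by create_missing_domain() for the "sync" domain
    resources   = [
        Resource("clk25", 0, Pins("G2", dir="i"), Clock(25e6),
                 Attrs(IO_TYPE="LVCMOS33")),
    ]
    connectors  = []


# platform = ExampleECP5Platform(toolchain="Trellis")   # or toolchain="Diamond"
# platform.build(MyTopLevel(), do_program=False)        # MyTopLevel is a user design
```

Setting `default_clk = "OSCG"` together with an integer `oscg_div` attribute would instead use the internal oscillator path handled by `create_missing_domain()` above.
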
diff --git a/nmigen/vendor/lattice_ice40.py b/nmigen/vendor/lattice_ice40.py new file mode 100644 index 0000000..e5570ec --- /dev/null +++ b/nmigen/vendor/lattice_ice40.py @@ -0,0 +1,585 @@ +from abc import abstractproperty + +from ..hdl import * +from ..lib.cdc import ResetSynchronizer +from ..build import * + + +__all__ = ["LatticeICE40Platform"] + + +class LatticeICE40Platform(TemplatedPlatform): + """ + IceStorm toolchain + ------------------ + + Required tools: + * ``yosys`` + * ``nextpnr-ice40`` + * ``icepack`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_IceStorm``, if present. + + Available overrides: + * ``verbose``: enables logging of informational messages to standard error. + * ``read_verilog_opts``: adds options for ``read_verilog`` Yosys command. + * ``synth_opts``: adds options for ``synth_ice40`` Yosys command. + * ``script_after_read``: inserts commands after ``read_ilang`` in Yosys script. + * ``script_after_synth``: inserts commands after ``synth_ice40`` in Yosys script. + * ``yosys_opts``: adds extra options for ``yosys``. + * ``nextpnr_opts``: adds extra options for ``nextpnr-ice40``. + * ``add_pre_pack``: inserts commands at the end in pre-pack Python script. + * ``add_constraints``: inserts commands at the end in the PCF file. + + Build products: + * ``{{name}}.rpt``: Yosys log. + * ``{{name}}.json``: synthesized RTL. + * ``{{name}}.tim``: nextpnr log. + * ``{{name}}.asc``: ASCII bitstream. + * ``{{name}}.bin``: binary bitstream. + + iCECube2 toolchain + ------------------ + + This toolchain comes in two variants: ``LSE-iCECube2`` and ``Synplify-iCECube2``. + + Required tools: + * iCECube2 toolchain + * ``tclsh`` + + The environment is populated by setting the necessary environment variables based on + ``NMIGEN_ENV_iCECube2``, which must point to the root of the iCECube2 installation, and + is required. + + Available overrides: + * ``verbose``: enables logging of informational messages to standard error. + * ``lse_opts``: adds options for LSE. + * ``script_after_add``: inserts commands after ``add_file`` in Synplify Tcl script. + * ``script_after_options``: inserts commands after ``set_option`` in Synplify Tcl script. + * ``add_constraints``: inserts commands in SDC file. + * ``script_after_flow``: inserts commands after ``run_sbt_backend_auto`` in SBT + Tcl script. + + Build products: + * ``{{name}}_lse.log`` (LSE) or ``{{name}}_design/{{name}}.htm`` (Synplify): synthesis log. + * ``sbt/outputs/router/{{name}}_timing.rpt``: timing report. + * ``{{name}}.edf``: EDIF netlist. + * ``{{name}}.bin``: binary bitstream. 
+ """ + + toolchain = None # selected when creating platform + + device = abstractproperty() + package = abstractproperty() + + # IceStorm templates + + _nextpnr_device_options = { + "iCE40LP384": "--lp384", + "iCE40LP1K": "--lp1k", + "iCE40LP4K": "--lp8k", + "iCE40LP8K": "--lp8k", + "iCE40HX1K": "--hx1k", + "iCE40HX4K": "--hx8k", + "iCE40HX8K": "--hx8k", + "iCE40UP5K": "--up5k", + "iCE40UP3K": "--up5k", + "iCE5LP4K": "--u4k", + "iCE5LP2K": "--u4k", + "iCE5LP1K": "--u4k", + } + _nextpnr_package_options = { + "iCE40LP4K": ":4k", + "iCE40HX4K": ":4k", + "iCE40UP3K": "", + "iCE5LP2K": "", + "iCE5LP1K": "", + } + + _icestorm_required_tools = [ + "yosys", + "nextpnr-ice40", + "icepack", + ] + _icestorm_file_templates = { + **TemplatedPlatform.build_script_templates, + "{{name}}.il": r""" + # {{autogenerated}} + {{emit_rtlil()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.ys": r""" + # {{autogenerated}} + {% for file in platform.iter_extra_files(".v") -%} + read_verilog {{get_override("read_verilog_opts")|options}} {{file}} + {% endfor %} + {% for file in platform.iter_extra_files(".sv") -%} + read_verilog -sv {{get_override("read_verilog_opts")|options}} {{file}} + {% endfor %} + {% for file in platform.iter_extra_files(".il") -%} + read_ilang {{file}} + {% endfor %} + read_ilang {{name}}.il + {{get_override("script_after_read")|default("# (script_after_read placeholder)")}} + synth_ice40 {{get_override("synth_opts")|options}} -top {{name}} + {{get_override("script_after_synth")|default("# (script_after_synth placeholder)")}} + write_json {{name}}.json + """, + "{{name}}.pcf": r""" + # {{autogenerated}} + {% for port_name, pin_name, attrs in platform.iter_port_constraints_bits() -%} + set_io {{port_name}} {{pin_name}} + {% endfor %} + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + set_frequency {{net_signal|hierarchy(".")}} {{frequency/1000000}} + {% endfor%} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """, + } + _icestorm_command_templates = [ + r""" + {{invoke_tool("yosys")}} + {{quiet("-q")}} + {{get_override("yosys_opts")|options}} + -l {{name}}.rpt + {{name}}.ys + """, + r""" + {{invoke_tool("nextpnr-ice40")}} + {{quiet("--quiet")}} + {{get_override("nextpnr_opts")|options}} + --log {{name}}.tim + {{platform._nextpnr_device_options[platform.device]}} + --package + {{platform.package|lower}}{{platform._nextpnr_package_options[platform.device]}} + --json {{name}}.json + --pcf {{name}}.pcf + --asc {{name}}.asc + """, + r""" + {{invoke_tool("icepack")}} + {{verbose("-v")}} + {{name}}.asc + {{name}}.bin + """ + ] + + # iCECube2 templates + + _icecube2_required_tools = [ + "yosys", + "synthesis", + "synpwrap", + "tclsh", + ] + _icecube2_file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -n "${{platform._toolchain_env_var}}" ]; then + # LSE environment + export LD_LIBRARY_PATH=${{platform._toolchain_env_var}}/LSE/bin/lin64:$LD_LIBRARY_PATH + export PATH=${{platform._toolchain_env_var}}/LSE/bin/lin64:$PATH + export FOUNDRY=${{platform._toolchain_env_var}}/LSE + # Synplify environment + export LD_LIBRARY_PATH=${{platform._toolchain_env_var}}/sbt_backend/bin/linux/opt/synpwrap:$LD_LIBRARY_PATH + export PATH=${{platform._toolchain_env_var}}/sbt_backend/bin/linux/opt/synpwrap:$PATH + export SYNPLIFY_PATH=${{platform._toolchain_env_var}}/synpbase + # Common environment + 
export SBT_DIR=${{platform._toolchain_env_var}}/sbt_backend + else + echo "Variable ${{platform._toolchain_env_var}} must be set" >&2; exit 1 + fi + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}_lse.prj": r""" + # {{autogenerated}} + -a SBT{{platform.family}} + -d {{platform.device}} + -t {{platform.package}} + {{get_override("lse_opts")|options|default("# (lse_opts placeholder)")}} + {% for file in platform.iter_extra_files(".v") -%} + -ver {{file}} + {% endfor %} + -ver {{name}}.v + -sdc {{name}}.sdc + -top {{name}} + -output_edif {{name}}.edf + -logfile {{name}}_lse.log + """, + "{{name}}_syn.prj": r""" + # {{autogenerated}} + {% for file in platform.iter_extra_files(".v", ".sv", ".vhd", ".vhdl") -%} + add_file -verilog {{file}} + {% endfor %} + add_file -verilog {{name}}.v + add_file -constraint {{name}}.sdc + {{get_override("script_after_add")|default("# (script_after_add placeholder)")}} + impl -add {{name}}_design -type fpga + set_option -technology SBT{{platform.family}} + set_option -part {{platform.device}} + set_option -package {{platform.package}} + {{get_override("script_after_options")|default("# (script_after_options placeholder)")}} + project -result_format edif + project -result_file {{name}}.edf + impl -active {{name}}_design + project -run compile + project -run map + project -run fpga_mapper + file copy -force -- {{name}}_design/{{name}}.edf {{name}}.edf + """, + "{{name}}.sdc": r""" + # {{autogenerated}} + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}] + {% else -%} + create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("/")}}] + {% endif %} + {% endfor %} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """, + "{{name}}.tcl": r""" + # {{autogenerated}} + set device {{platform.device}}-{{platform.package}} + set top_module {{name}} + set proj_dir . + set output_dir . 
+ set edif_file {{name}} + set tool_options ":edifparser -y {{name}}.pcf" + set sbt_root $::env(SBT_DIR) + append sbt_tcl $sbt_root "/tcl/sbt_backend_synpl.tcl" + source $sbt_tcl + run_sbt_backend_auto $device $top_module $proj_dir $output_dir $tool_options $edif_file + {{get_override("script_after_file")|default("# (script_after_file placeholder)")}} + file copy -force -- sbt/outputs/bitmap/{{name}}_bitmap.bin {{name}}.bin + exit + """, + "{{name}}.pcf": r""" + # {{autogenerated}} + {% for port_name, pin_name, attrs in platform.iter_port_constraints_bits() -%} + set_io {{port_name}} {{pin_name}} + {% endfor %} + """, + } + _lse_icecube2_command_templates = [ + r"""synthesis -f {{name}}_lse.prj""", + r"""tclsh {{name}}.tcl""", + ] + _synplify_icecube2_command_templates = [ + r"""synpwrap -prj {{name}}_syn.prj -log {{name}}_syn.log""", + r"""tclsh {{name}}.tcl""", + ] + + # Common logic + + def __init__(self, *, toolchain="IceStorm"): + super().__init__() + + assert toolchain in ("IceStorm", "LSE-iCECube2", "Synplify-iCECube2") + self.toolchain = toolchain + + @property + def family(self): + if self.device.startswith("iCE40"): + return "iCE40" + if self.device.startswith("iCE5"): + return "iCE5" + assert False + + @property + def _toolchain_env_var(self): + if self.toolchain == "IceStorm": + return f"NMIGEN_ENV_{self.toolchain}" + if self.toolchain in ("LSE-iCECube2", "Synplify-iCECube2"): + return f"NMIGEN_ENV_iCECube2" + assert False + + @property + def required_tools(self): + if self.toolchain == "IceStorm": + return self._icestorm_required_tools + if self.toolchain in ("LSE-iCECube2", "Synplify-iCECube2"): + return self._icecube2_required_tools + assert False + + @property + def file_templates(self): + if self.toolchain == "IceStorm": + return self._icestorm_file_templates + if self.toolchain in ("LSE-iCECube2", "Synplify-iCECube2"): + return self._icecube2_file_templates + assert False + + @property + def command_templates(self): + if self.toolchain == "IceStorm": + return self._icestorm_command_templates + if self.toolchain == "LSE-iCECube2": + return self._lse_icecube2_command_templates + if self.toolchain == "Synplify-iCECube2": + return self._synplify_icecube2_command_templates + assert False + + def create_missing_domain(self, name): + # For unknown reasons (no errata was ever published, and no documentation mentions this + # issue), iCE40 BRAMs read as zeroes for ~3 us after configuration and release of internal + # global reset. Note that this is a *time-based* delay, generated purely by the internal + # oscillator, which may not be observed nor influenced directly. For details, see links: + # * https://github.com/cliffordwolf/icestorm/issues/76#issuecomment-289270411 + # * https://github.com/cliffordwolf/icotools/issues/2#issuecomment-299734673 + # + # To handle this, it is necessary to have a global reset in any iCE40 design that may + # potentially instantiate BRAMs, and assert this reset for >3 us after configuration. + # (We add a margin of 5x to allow for PVT variation.) If the board includes a dedicated + # reset line, this line is ORed with the power on reset. + # + # The power-on reset timer counts up because the vendor tools do not support initialization + # of flip-flops. 
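+ # (Illustrative numbers, not part of the original comment: the >3 us BRAM lock-out time
+ # combined with the 5x margin gives the 15e-6 figure used below. With a hypothetical
+ # 12 MHz default clock, delay = int(15e-6 * 12e6) = 180, so `timer` is a
+ # Signal(range(180)), i.e. only an 8-bit counter.)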
+ if name == "sync" and self.default_clk is not None: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + else: + rst_i = Const(0) + + m = Module() + + # Power-on-reset domain + m.domains += ClockDomain("por", reset_less=True, local=True) + delay = int(15e-6 * self.default_clk_frequency) + timer = Signal(range(delay)) + ready = Signal() + m.d.comb += ClockSignal("por").eq(clk_i) + with m.If(timer == delay): + m.d.por += ready.eq(1) + with m.Else(): + m.d.por += timer.eq(timer + 1) + + # Primary domain + m.domains += ClockDomain("sync") + m.d.comb += ClockSignal("sync").eq(clk_i) + if self.default_rst is not None: + m.submodules.reset_sync = ResetSynchronizer(~ready | rst_i, domain="sync") + else: + m.d.comb += ResetSignal("sync").eq(~ready) + + return m + + def should_skip_port_component(self, port, attrs, component): + # On iCE40, a differential input is placed by only instantiating an SB_IO primitive for + # the pin with z=0, which is the non-inverting pin. The pinout unfortunately differs + # between LP/HX and UP series: + # * for LP/HX, z=0 is DPxxB (B is non-inverting, A is inverting) + # * for UP, z=0 is IOB_xxA (A is non-inverting, B is inverting) + if attrs.get("IO_STANDARD", "SB_LVCMOS") == "SB_LVDS_INPUT" and component == "n": + return True + return False + + def _get_io_buffer(self, m, pin, port, attrs, *, i_invert=False, o_invert=False, + invert_lut=False): + def get_dff(clk, d, q): + m.submodules += Instance("$dff", + p_CLK_POLARITY=1, + p_WIDTH=len(d), + i_CLK=clk, + i_D=d, + o_Q=q) + + def get_ineg(y, invert): + if invert_lut: + a = Signal.like(y, name_suffix="_x{}".format(1 if invert else 0)) + for bit in range(len(y)): + m.submodules += Instance("SB_LUT4", + p_LUT_INIT=Const(0b01 if invert else 0b10, 16), + i_I0=a[bit], + i_I1=Const(0), + i_I2=Const(0), + i_I3=Const(0), + o_O=y[bit]) + return a + elif invert: + a = Signal.like(y, name_suffix="_n") + m.d.comb += y.eq(~a) + return a + else: + return y + + def get_oneg(a, invert): + if invert_lut: + y = Signal.like(a, name_suffix="_x{}".format(1 if invert else 0)) + for bit in range(len(a)): + m.submodules += Instance("SB_LUT4", + p_LUT_INIT=Const(0b01 if invert else 0b10, 16), + i_I0=a[bit], + i_I1=Const(0), + i_I2=Const(0), + i_I3=Const(0), + o_O=y[bit]) + return y + elif invert: + y = Signal.like(a, name_suffix="_n") + m.d.comb += y.eq(~a) + return y + else: + return a + + if "GLOBAL" in attrs: + is_global_input = bool(attrs["GLOBAL"]) + del attrs["GLOBAL"] + else: + is_global_input = False + assert not (is_global_input and i_invert) + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + if "i" in pin.dir and pin.xdr == 2: + i0_ff = Signal.like(pin_i0, name_suffix="_ff") + i1_ff = Signal.like(pin_i1, name_suffix="_ff") + get_dff(pin.i_clk, i0_ff, pin_i0) + get_dff(pin.i_clk, i1_ff, pin_i1) + if "o" in pin.dir and pin.xdr == 2: + o1_ff = Signal.like(pin_o1, name_suffix="_ff") + get_dff(pin.o_clk, pin_o1, o1_ff) + + for bit in range(len(port)): + io_args = [ + ("io", "PACKAGE_PIN", port[bit]), + *(("p", key, value) for key, value in attrs.items()), + ] + + if "i" not in pin.dir: + # If no input pin is requested, it is important to use a non-registered input pin + # 
type, because an output-only pin would not have an input clock, and if its input + # is configured as registered, this would prevent a co-located input-capable pin + # from using an input clock. + i_type = 0b01 # PIN_INPUT + elif pin.xdr == 0: + i_type = 0b01 # PIN_INPUT + elif pin.xdr > 0: + i_type = 0b00 # PIN_INPUT_REGISTERED aka PIN_INPUT_DDR + if "o" not in pin.dir: + o_type = 0b0000 # PIN_NO_OUTPUT + elif pin.xdr == 0 and pin.dir == "o": + o_type = 0b0110 # PIN_OUTPUT + elif pin.xdr == 0: + o_type = 0b1010 # PIN_OUTPUT_TRISTATE + elif pin.xdr == 1 and pin.dir == "o": + o_type = 0b0101 # PIN_OUTPUT_REGISTERED + elif pin.xdr == 1: + o_type = 0b1101 # PIN_OUTPUT_REGISTERED_ENABLE_REGISTERED + elif pin.xdr == 2 and pin.dir == "o": + o_type = 0b0100 # PIN_OUTPUT_DDR + elif pin.xdr == 2: + o_type = 0b1100 # PIN_OUTPUT_DDR_ENABLE_REGISTERED + io_args.append(("p", "PIN_TYPE", C((o_type << 2) | i_type, 6))) + + if hasattr(pin, "i_clk"): + io_args.append(("i", "INPUT_CLK", pin.i_clk)) + if hasattr(pin, "o_clk"): + io_args.append(("i", "OUTPUT_CLK", pin.o_clk)) + + if "i" in pin.dir: + if pin.xdr == 0 and is_global_input: + io_args.append(("o", "GLOBAL_BUFFER_OUTPUT", pin.i[bit])) + elif pin.xdr < 2: + io_args.append(("o", "D_IN_0", pin_i[bit])) + elif pin.xdr == 2: + # Re-register both inputs before they enter fabric. This increases hold time + # to an entire cycle, and adds one cycle of latency. + io_args.append(("o", "D_IN_0", i0_ff[bit])) + io_args.append(("o", "D_IN_1", i1_ff[bit])) + if "o" in pin.dir: + if pin.xdr < 2: + io_args.append(("i", "D_OUT_0", pin_o[bit])) + elif pin.xdr == 2: + # Re-register negedge output after it leaves fabric. This increases setup time + # to an entire cycle, and doesn't add latency. + io_args.append(("i", "D_OUT_0", pin_o0[bit])) + io_args.append(("i", "D_OUT_1", o1_ff[bit])) + + if pin.dir in ("oe", "io"): + io_args.append(("i", "OUTPUT_ENABLE", pin.oe)) + + if is_global_input: + m.submodules["{}_{}".format(pin.name, bit)] = Instance("SB_GB_IO", *io_args) + else: + m.submodules["{}_{}".format(pin.name, bit)] = Instance("SB_IO", *io_args) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + self._get_io_buffer(m, pin, port, attrs, i_invert=invert) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + self._get_io_buffer(m, pin, port, attrs, o_invert=invert) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + self._get_io_buffer(m, pin, port, attrs, o_invert=invert) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + self._get_io_buffer(m, pin, port, attrs, i_invert=invert, o_invert=invert) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + # See comment in should_skip_port_component above. 
+ self._get_io_buffer(m, pin, p_port, attrs, i_invert=invert) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + # Note that the non-inverting output pin is not driven the same way as a regular + # output pin. The inverter introduces a delay, so for a non-inverting output pin, + # an identical delay is introduced by instantiating a LUT. This makes the waveform + # perfectly symmetric in the xdr=0 case. + self._get_io_buffer(m, pin, p_port, attrs, o_invert= invert, invert_lut=True) + self._get_io_buffer(m, pin, n_port, attrs, o_invert=not invert, invert_lut=True) + return m + + # Tristate and bidirectional buffers are not supported on iCE40 because it requires external + # termination, which is incompatible for input and output differential I/Os. + + # CDC primitives are not currently specialized for iCE40. It is not known if iCECube2 supports + # the necessary attributes; nextpnr-ice40 does not. diff --git a/nmigen/vendor/lattice_machxo2.py b/nmigen/vendor/lattice_machxo2.py new file mode 100644 index 0000000..962994b --- /dev/null +++ b/nmigen/vendor/lattice_machxo2.py @@ -0,0 +1,389 @@ +from abc import abstractproperty + +from ..hdl import * +from ..build import * + + +__all__ = ["LatticeMachXO2Platform"] + + +class LatticeMachXO2Platform(TemplatedPlatform): + """ + Required tools: + * ``pnmainc`` + * ``ddtcmd`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_Diamond``, if present. + + Available overrides: + * ``script_project``: inserts commands before ``prj_project save`` in Tcl script. + * ``script_after_export``: inserts commands after ``prj_run Export`` in Tcl script. + * ``add_preferences``: inserts commands at the end of the LPF file. + * ``add_constraints``: inserts commands at the end of the XDC file. + + Build products: + * ``{{name}}_impl/{{name}}_impl.htm``: consolidated log. + * ``{{name}}.jed``: JEDEC fuse file. + * ``{{name}}.svf``: JTAG programming vector. + """ + + toolchain = "Diamond" + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + grade = "C" # [C]ommercial, [I]ndustrial + + required_tools = [ + "yosys", + "pnmainc", + "ddtcmd" + ] + file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -z "$BASH" ] ; then exec /bin/bash "$0" "$@"; fi + if [ -n "${{platform._toolchain_env_var}}" ]; then + bindir=$(dirname "${{platform._toolchain_env_var}}") + . 
"${{platform._toolchain_env_var}}" + fi + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.tcl": r""" + prj_project new -name {{name}} -impl impl -impl_dir top_impl \ + -dev {{platform.device}}-{{platform.speed}}{{platform.package}}{{platform.grade}} \ + -lpf {{name}}.lpf \ + -synthesis synplify + {% for file in platform.iter_extra_files(".v", ".sv", ".vhd", ".vhdl") -%} + prj_src add "{{file}}" + {% endfor %} + prj_src add {{name}}.v + prj_impl option top {{name}} + prj_src add {{name}}.sdc + {{get_override("script_project")|default("# (script_project placeholder)")}} + prj_project save + prj_run Synthesis -impl impl -forceAll + prj_run Translate -impl impl -forceAll + prj_run Map -impl impl -forceAll + prj_run PAR -impl impl -forceAll + prj_run Export -impl impl -forceAll -task Jedecgen + {{get_override("script_after_export")|default("# (script_after_export placeholder)")}} + """, + "{{name}}.lpf": r""" + # {{autogenerated}} + BLOCK ASYNCPATHS; + BLOCK RESETPATHS; + {% for port_name, pin_name, extras in platform.iter_port_constraints_bits() -%} + LOCATE COMP "{{port_name}}" SITE "{{pin_name}}"; + {% if extras -%} + IOBUF PORT "{{port_name}}" + {%- for key, value in extras.items() %} {{key}}={{value}}{% endfor %}; + {% endif %} + {% endfor %} + {{get_override("add_preferences")|default("# (add_preferences placeholder)")}} + """, + "{{name}}.sdc": r""" + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}] + {% else -%} + create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("/")}}] + {% endif %} + {% endfor %} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """, + } + command_templates = [ + # These don't have any usable command-line option overrides. + r""" + {{invoke_tool("pnmainc")}} + {{name}}.tcl + """, + r""" + {{invoke_tool("ddtcmd")}} + -oft -jed + -dev {{platform.device}}-{{platform.speed}}{{platform.package}}{{platform.grade}} + -if {{name}}_impl/{{name}}_impl.jed -of {{name}}.jed + """, + r""" + {{invoke_tool("ddtcmd")}} + -oft -svfsingle -revd -op "FLASH Erase,Program,Verify" + -if {{name}}_impl/{{name}}_impl.jed -of {{name}}.svf + """, + ] + + def create_missing_domain(self, name): + # Lattice MachXO2 devices have two global set/reset signals: PUR, which is driven at + # startup by the configuration logic and unconditionally resets every storage element, + # and GSR, which is driven by user logic and each storage element may be configured as + # affected or unaffected by GSR. PUR is purely asynchronous, so even though it is + # a low-skew global network, its deassertion may violate a setup/hold constraint with + # relation to a user clock. To avoid this, a GSR/SGSR instance should be driven + # synchronized to user clock. + if name == "sync" and self.default_clk is not None: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + else: + rst_i = Const(0) + + gsr0 = Signal() + gsr1 = Signal() + m = Module() + # There is no end-of-startup signal on MachXO2, but PUR is released after IOB enable, + # so a simple reset synchronizer (with PUR as the asynchronous reset) does the job. 
+ m.submodules += [ + Instance("FD1S3AX", p_GSR="DISABLED", i_CK=clk_i, i_D=~rst_i, o_Q=gsr0), + Instance("FD1S3AX", p_GSR="DISABLED", i_CK=clk_i, i_D=gsr0, o_Q=gsr1), + # Although we already synchronize the reset input to user clock, SGSR has dedicated + # clock routing to the center of the FPGA; use that just in case it turns out to be + # more reliable. (None of this is documented.) + Instance("SGSR", i_CLK=clk_i, i_GSR=gsr1), + ] + # GSR implicitly connects to every appropriate storage element. As such, the sync + # domain is reset-less; domains driven by other clocks would need to have dedicated + # reset circuitry or otherwise meet setup/hold constraints on their own. + m.domains += ClockDomain("sync", reset_less=True) + m.d.comb += ClockSignal("sync").eq(clk_i) + return m + + _single_ended_io_types = [ + "PCI33", "LVTTL33", "LVCMOS33", "LVCMOS25", "LVCMOS18", "LVCMOS15", "LVCMOS12", + "LVCMOS25R33", "LVCMOS18R33", "LVCMOS18R25", "LVCMOS15R33", "LVCMOS15R25", "LVCMOS12R33", + "LVCMOS12R25", "LVCMOS10R33", "LVCMOS10R25", "SSTL25_I", "SSTL25_II", "SSTL18_I", + "SSTL18_II", "HSTL18_I", "HSTL18_II", + ] + _differential_io_types = [ + "LVDS25", "LVDS25E", "RSDS25", "RSDS25E", "BLVDS25", "BLVDS25E", "MLVDS25", "MLVDS25E", + "LVPECL33", "LVPECL33E", "SSTL25D_I", "SSTL25D_II", "SSTL18D_I", "SSTL18D_II", + "HSTL18D_I", "HSTL18D_II", "LVTTL33D", "LVCMOS33D", "LVCMOS25D", "LVCMOS18D", "LVCMOS15D", + "LVCMOS12D", "MIPI", + ] + + def should_skip_port_component(self, port, attrs, component): + # On ECP5, a differential IO is placed by only instantiating an IO buffer primitive at + # the PIOA or PIOC location, which is always the non-inverting pin. + if attrs.get("IO_TYPE", "LVCMOS25") in self._differential_io_types and component == "n": + return True + return False + + def _get_xdr_buffer(self, m, pin, *, i_invert=False, o_invert=False): + def get_ireg(clk, d, q): + for bit in range(len(q)): + m.submodules += Instance("IFS1P3DX", + i_SCLK=clk, + i_SP=Const(1), + i_CD=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_oreg(clk, d, q): + for bit in range(len(q)): + m.submodules += Instance("OFS1P3DX", + i_SCLK=clk, + i_SP=Const(1), + i_CD=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_iddr(sclk, d, q0, q1): + for bit in range(len(d)): + m.submodules += Instance("IDDRXE", + i_SCLK=sclk, + i_RST=Const(0), + i_D=d[bit], + o_Q0=q0[bit], o_Q1=q1[bit] + ) + + def get_oddr(sclk, d0, d1, q): + for bit in range(len(q)): + m.submodules += Instance("ODDRXE", + i_SCLK=sclk, + i_RST=Const(0), + i_D0=d0[bit], i_D1=d1[bit], + o_Q=q[bit] + ) + + def get_ineg(z, invert): + if invert: + a = Signal.like(z, name_suffix="_n") + m.d.comb += z.eq(~a) + return a + else: + return z + + def get_oneg(a, invert): + if invert: + z = Signal.like(a, name_suffix="_n") + m.d.comb += z.eq(~a) + return z + else: + return a + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + i = o = t = None + if "i" in pin.dir: + i = Signal(pin.width, name="{}_xdr_i".format(pin.name)) + if "o" in pin.dir: + o = Signal(pin.width, name="{}_xdr_o".format(pin.name)) + if pin.dir in ("oe", "io"): + t = Signal(1, name="{}_xdr_t".format(pin.name)) + + if pin.xdr == 0: + if "i" in pin.dir: + i = pin_i + if "o" in pin.dir: + o = pin_o + if pin.dir in ("oe", 
"io"): + t = ~pin.oe + elif pin.xdr == 1: + # Note that currently nextpnr will not pack an FF (*FS1P3DX) into the PIO. + if "i" in pin.dir: + get_ireg(pin.i_clk, i, pin_i) + if "o" in pin.dir: + get_oreg(pin.o_clk, pin_o, o) + if pin.dir in ("oe", "io"): + get_oreg(pin.o_clk, ~pin.oe, t) + elif pin.xdr == 2: + if "i" in pin.dir: + get_iddr(pin.i_clk, i, pin_i0, pin_i1) + if "o" in pin.dir: + get_oddr(pin.o_clk, pin_o0, pin_o1, o) + if pin.dir in ("oe", "io"): + # It looks like Diamond will not pack an OREG as a tristate register in a DDR PIO. + # It is not clear what is the recommended set of primitives for this task. + # Similarly, nextpnr will not pack anything as a tristate register in a DDR PIO. + get_oreg(pin.o_clk, ~pin.oe, t) + else: + assert False + + return (i, o, t) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IB", + i_I=port[bit], + o_O=i[bit] + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OB", + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBZ", + i_T=t, + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("BB", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_B=port[bit] + ) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IB", + i_I=p_port[bit], + o_O=i[bit] + ) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OB", + i_I=o[bit], + o_O=p_port[bit], + ) + return m + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBZ", + i_T=t, + i_I=o[bit], + o_O=p_port[bit], + ) + return m + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + 
self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("BB", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_B=p_port[bit], + ) + return m + + # CDC primitives are not currently specialized for MachXO2. diff --git a/nmigen/vendor/xilinx_7series.py b/nmigen/vendor/xilinx_7series.py new file mode 100644 index 0000000..25bfa28 --- /dev/null +++ b/nmigen/vendor/xilinx_7series.py @@ -0,0 +1,433 @@ +from abc import abstractproperty + +from ..hdl import * +from ..lib.cdc import ResetSynchronizer +from ..build import * + + +__all__ = ["Xilinx7SeriesPlatform"] + + +class Xilinx7SeriesPlatform(TemplatedPlatform): + """ + Required tools: + * ``vivado`` + + The environment is populated by running the script specified in the environment variable + ``NMIGEN_ENV_Vivado``, if present. + + Available overrides: + * ``script_after_read``: inserts commands after ``read_xdc`` in Tcl script. + * ``script_after_synth``: inserts commands after ``synth_design`` in Tcl script. + * ``script_after_place``: inserts commands after ``place_design`` in Tcl script. + * ``script_after_route``: inserts commands after ``route_design`` in Tcl script. + * ``script_before_bitstream``: inserts commands before ``write_bitstream`` in Tcl script. + * ``script_after_bitstream``: inserts commands after ``write_bitstream`` in Tcl script. + * ``add_constraints``: inserts commands in XDC file. + * ``vivado_opts``: adds extra options for ``vivado``. + + Build products: + * ``{{name}}.log``: Vivado log. + * ``{{name}}_timing_synth.rpt``: Vivado report. + * ``{{name}}_utilization_hierarchical_synth.rpt``: Vivado report. + * ``{{name}}_utilization_synth.rpt``: Vivado report. + * ``{{name}}_utilization_hierarchical_place.rpt``: Vivado report. + * ``{{name}}_utilization_place.rpt``: Vivado report. + * ``{{name}}_io.rpt``: Vivado report. + * ``{{name}}_control_sets.rpt``: Vivado report. + * ``{{name}}_clock_utilization.rpt``: Vivado report. + * ``{{name}}_route_status.rpt``: Vivado report. + * ``{{name}}_drc.rpt``: Vivado report. + * ``{{name}}_methodology.rpt``: Vivado report. + * ``{{name}}_timing.rpt``: Vivado report. + * ``{{name}}_power.rpt``: Vivado report. + * ``{{name}}_route.dcp``: Vivado design checkpoint. + * ``{{name}}.bit``: binary bitstream with metadata. + * ``{{name}}.bin``: binary bitstream. + """ + + toolchain = "Vivado" + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + grade = None + + required_tools = [ + "yosys", + "vivado" + ] + file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -z "$BASH" ] ; then exec /bin/bash "$0" "$@"; fi + [ -n "${{platform._toolchain_env_var}}" ] && . 
"${{platform._toolchain_env_var}}" + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.tcl": r""" + # {{autogenerated}} + create_project -force -name {{name}} -part {{platform.device}}{{platform.package}}-{{platform.speed}}{{"-" + platform.grade if platform.grade else ""}} + {% for file in platform.iter_extra_files(".v", ".sv", ".vhd", ".vhdl") -%} + add_files {{file}} + {% endfor %} + add_files {{name}}.v + read_xdc {{name}}.xdc + {% for file in platform.iter_extra_files(".xdc") -%} + read_xdc {{file}} + {% endfor %} + {{get_override("script_after_read")|default("# (script_after_read placeholder)")}} + synth_design -top {{name}} + foreach cell [get_cells -quiet -hier -filter {nmigen.vivado.false_path == "TRUE"}] { + set_false_path -to $cell + } + foreach cell [get_cells -quiet -hier -filter {nmigen.vivado.max_delay != ""}] { + set clock [get_clocks -of_objects \ + [all_fanin -flat -startpoints_only [get_pin $cell/D]]] + if {[llength $clock] != 0} { + set_max_delay -datapath_only -from $clock \ + -to [get_cells $cell] [get_property nmigen.vivado.max_delay $cell] + } + } + {{get_override("script_after_synth")|default("# (script_after_synth placeholder)")}} + report_timing_summary -file {{name}}_timing_synth.rpt + report_utilization -hierarchical -file {{name}}_utilization_hierachical_synth.rpt + report_utilization -file {{name}}_utilization_synth.rpt + opt_design + place_design + {{get_override("script_after_place")|default("# (script_after_place placeholder)")}} + report_utilization -hierarchical -file {{name}}_utilization_hierarchical_place.rpt + report_utilization -file {{name}}_utilization_place.rpt + report_io -file {{name}}_io.rpt + report_control_sets -verbose -file {{name}}_control_sets.rpt + report_clock_utilization -file {{name}}_clock_utilization.rpt + route_design + {{get_override("script_after_route")|default("# (script_after_route placeholder)")}} + phys_opt_design + report_timing_summary -no_header -no_detailed_paths + write_checkpoint -force {{name}}_route.dcp + report_route_status -file {{name}}_route_status.rpt + report_drc -file {{name}}_drc.rpt + report_methodology -file {{name}}_methodology.rpt + report_timing_summary -datasheet -max_paths 10 -file {{name}}_timing.rpt + report_power -file {{name}}_power.rpt + {{get_override("script_before_bitstream")|default("# (script_before_bitstream placeholder)")}} + write_bitstream -force -bin_file {{name}}.bit + {{get_override("script_after_bitstream")|default("# (script_after_bitstream placeholder)")}} + quit + """, + "{{name}}.xdc": r""" + # {{autogenerated}} + {% for port_name, pin_name, attrs in platform.iter_port_constraints_bits() -%} + set_property LOC {{pin_name}} [get_ports {{port_name}}] + {% for attr_name, attr_value in attrs.items() -%} + set_property {{attr_name}} {{attr_value}} [get_ports {{port_name}}] + {% endfor %} + {% endfor %} + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + {% if port_signal is not none -%} + create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}] + {% else -%} + create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("/")}}] + {% endif %} + {% endfor %} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """ + } + command_templates = [ + r""" + {{invoke_tool("vivado")}} + 
{{verbose("-verbose")}} + {{get_override("vivado_opts")|options}} + -mode batch + -log {{name}}.log + -source {{name}}.tcl + """ + ] + + def create_missing_domain(self, name): + # Xilinx devices have a global write enable (GWE) signal that asserted during configuraiton + # and deasserted once it ends. Because it is an asynchronous signal (GWE is driven by logic + # syncronous to configuration clock, which is not used by most designs), even though it is + # a low-skew global network, its deassertion may violate a setup/hold constraint with + # relation to a user clock. The recommended solution is to use a BUFGCE driven by the EOS + # signal. For details, see: + # * https://www.xilinx.com/support/answers/44174.html + # * https://www.xilinx.com/support/documentation/white_papers/wp272.pdf + if name == "sync" and self.default_clk is not None: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + + m = Module() + ready = Signal() + m.submodules += Instance("STARTUPE2", o_EOS=ready) + m.domains += ClockDomain("sync", reset_less=self.default_rst is None) + m.submodules += Instance("BUFGCE", i_CE=ready, i_I=clk_i, o_O=ClockSignal("sync")) + if self.default_rst is not None: + m.submodules.reset_sync = ResetSynchronizer(rst_i, domain="sync") + return m + + def _get_xdr_buffer(self, m, pin, *, i_invert=False, o_invert=False): + def get_dff(clk, d, q): + # SDR I/O is performed by packing a flip-flop into the pad IOB. + for bit in range(len(q)): + m.submodules += Instance("FDCE", + a_IOB="TRUE", + i_C=clk, + i_CE=Const(1), + i_CLR=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_iddr(clk, d, q1, q2): + for bit in range(len(q1)): + m.submodules += Instance("IDDR", + p_DDR_CLK_EDGE="SAME_EDGE_PIPELINED", + p_SRTYPE="ASYNC", + p_INIT_Q1=0, p_INIT_Q2=0, + i_C=clk, + i_CE=Const(1), + i_S=Const(0), i_R=Const(0), + i_D=d[bit], + o_Q1=q1[bit], o_Q2=q2[bit] + ) + + def get_oddr(clk, d1, d2, q): + for bit in range(len(q)): + m.submodules += Instance("ODDR", + p_DDR_CLK_EDGE="SAME_EDGE", + p_SRTYPE="ASYNC", + p_INIT=0, + i_C=clk, + i_CE=Const(1), + i_S=Const(0), i_R=Const(0), + i_D1=d1[bit], i_D2=d2[bit], + o_Q=q[bit] + ) + + def get_ineg(y, invert): + if invert: + a = Signal.like(y, name_suffix="_n") + m.d.comb += y.eq(~a) + return a + else: + return y + + def get_oneg(a, invert): + if invert: + y = Signal.like(a, name_suffix="_n") + m.d.comb += y.eq(~a) + return y + else: + return a + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + i = o = t = None + if "i" in pin.dir: + i = Signal(pin.width, name="{}_xdr_i".format(pin.name)) + if "o" in pin.dir: + o = Signal(pin.width, name="{}_xdr_o".format(pin.name)) + if pin.dir in ("oe", "io"): + t = Signal(1, name="{}_xdr_t".format(pin.name)) + + if pin.xdr == 0: + if "i" in pin.dir: + i = pin_i + if "o" in pin.dir: + o = pin_o + if pin.dir in ("oe", "io"): + t = ~pin.oe + elif pin.xdr == 1: + if "i" in pin.dir: + get_dff(pin.i_clk, i, pin_i) + if "o" in pin.dir: + get_dff(pin.o_clk, pin_o, o) + if pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + elif pin.xdr == 2: + if "i" in pin.dir: + get_iddr(pin.i_clk, i, pin_i0, pin_i1) + if "o" in pin.dir: + get_oddr(pin.o_clk, pin_o0, pin_o1, o) + if 
pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + else: + assert False + + return (i, o, t) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUF", + i_I=port[bit], + o_O=i[bit] + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUF", + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFT", + i_T=t, + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUF", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_IO=port[bit] + ) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUFDS", + i_I=p_port[bit], i_IB=n_port[bit], + o_O=i[bit] + ) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFDS", + i_I=o[bit], + o_O=p_port[bit], o_OB=n_port[bit] + ) + return m + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFTDS", + i_T=t, + i_I=o[bit], + o_O=p_port[bit], o_OB=n_port[bit] + ) + return m + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUFDS", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_IO=p_port[bit], io_IOB=n_port[bit] + ) + return m + + # The synchronizer implementations below apply two separate but related timing constraints. 
+ #
+ # First, the ASYNC_REG attribute prevents inference of shift registers from synchronizer FFs,
+ # and constrains the FFs to be placed as close as possible, ideally in one CLB. This attribute
+ # only affects the synchronizer FFs themselves.
+ #
+ # Second, the nmigen.vivado.false_path or nmigen.vivado.max_delay attribute affects the path
+ # into the synchronizer. If maximum input delay is specified, a datapath-only maximum delay
+ # constraint is applied, limiting routing delay (and therefore skew) at the synchronizer input.
+ # Otherwise, a false path constraint is used to omit the input path from the timing analysis.
+
+ def get_ff_sync(self, ff_sync):
+ m = Module()
+ flops = [Signal(ff_sync.i.shape(), name="stage{}".format(index),
+ reset=ff_sync._reset, reset_less=ff_sync._reset_less,
+ attrs={"ASYNC_REG": "TRUE"})
+ for index in range(ff_sync._stages)]
+ if ff_sync._max_input_delay is None:
+ flops[0].attrs["nmigen.vivado.false_path"] = "TRUE"
+ else:
+ flops[0].attrs["nmigen.vivado.max_delay"] = str(ff_sync._max_input_delay * 1e9)
+ for i, o in zip((ff_sync.i, *flops), flops):
+ m.d[ff_sync._o_domain] += o.eq(i)
+ m.d.comb += ff_sync.o.eq(flops[-1])
+ return m
+
+ def get_async_ff_sync(self, async_ff_sync):
+ m = Module()
+ m.domains += ClockDomain("async_ff", async_reset=True, local=True)
+ flops = [Signal(1, name="stage{}".format(index), reset=1,
+ attrs={"ASYNC_REG": "TRUE"})
+ for index in range(async_ff_sync._stages)]
+ if async_ff_sync._max_input_delay is None:
+ flops[0].attrs["nmigen.vivado.false_path"] = "TRUE"
+ else:
+ flops[0].attrs["nmigen.vivado.max_delay"] = str(async_ff_sync._max_input_delay * 1e9)
+ for i, o in zip((0, *flops), flops):
+ m.d.async_ff += o.eq(i)
+
+ if async_ff_sync._edge == "pos":
+ m.d.comb += ResetSignal("async_ff").eq(async_ff_sync.i)
+ else:
+ m.d.comb += ResetSignal("async_ff").eq(~async_ff_sync.i)
+
+ m.d.comb += [
+ ClockSignal("async_ff").eq(ClockSignal(async_ff_sync._domain)),
+ async_ff_sync.o.eq(flops[-1])
+ ]
+
+ return m
diff --git a/nmigen/vendor/xilinx_spartan_3_6.py b/nmigen/vendor/xilinx_spartan_3_6.py
new file mode 100644
index 0000000..c7d37d1
--- /dev/null
+++ b/nmigen/vendor/xilinx_spartan_3_6.py
@@ -0,0 +1,467 @@
+from abc import abstractproperty
+
+from ..hdl import *
+from ..lib.cdc import ResetSynchronizer
+from ..build import *
+
+
+__all__ = ["XilinxSpartan3APlatform", "XilinxSpartan6Platform"]
+
+
+# The interfaces to Spartan 3 and Spartan 6 are substantially the same. Handle
+# differences internally using one class and expose user-aliases for
+# convenience.
+class XilinxSpartan3Or6Platform(TemplatedPlatform):
+ """
+ Required tools:
+ * ISE toolchain:
+ * ``xst``
+ * ``ngdbuild``
+ * ``map``
+ * ``par``
+ * ``bitgen``
+
+ The environment is populated by running the script specified in the environment variable
+ ``NMIGEN_ENV_ISE``, if present.
+
+ Available overrides:
+ * ``script_after_run``: inserts commands after ``run`` in XST script.
+ * ``add_constraints``: inserts commands in UCF file.
+ * ``xst_opts``: adds extra options for ``xst``.
+ * ``ngdbuild_opts``: adds extra options for ``ngdbuild``.
+ * ``map_opts``: adds extra options for ``map``.
+ * ``par_opts``: adds extra options for ``par``.
+ * ``bitgen_opts``: adds extra and overrides default options for ``bitgen``;
+ default options: ``-g Compress``.
+
+ Build products:
+ * ``{{name}}.srp``: synthesis report.
+ * ``{{name}}.ngc``: synthesized RTL.
+ * ``{{name}}.bld``: NGDBuild log.
+ * ``{{name}}.ngd``: design database.
+ * ``{{name}}_map.map``: MAP log.
+ * ``{{name}}_map.mrp``: mapping report. + * ``{{name}}_map.ncd``: mapped netlist. + * ``{{name}}.pcf``: physical constraints. + * ``{{name}}_par.par``: PAR log. + * ``{{name}}_par_pad.txt``: I/O usage report. + * ``{{name}}_par.ncd``: place and routed netlist. + * ``{{name}}.drc``: DRC report. + * ``{{name}}.bgn``: BitGen log. + * ``{{name}}.bit``: binary bitstream with metadata. + * ``{{name}}.bin``: raw binary bitstream. + """ + + toolchain = "ISE" + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + + required_tools = [ + "yosys", + "xst", + "ngdbuild", + "map", + "par", + "bitgen", + ] + + @property + def family(self): + device = self.device.upper() + if device.startswith("XC3S"): + if device.endswith("A"): + return "3A" + elif device.endswith("E"): + raise NotImplementedError("""Spartan 3E family is not supported + as a nMigen platform.""") + else: + raise NotImplementedError("""Spartan 3 family is not supported + as a nMigen platform.""") + elif device.startswith("XC6S"): + return "6" + else: + assert False + + file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -z "$BASH" ] ; then exec /bin/bash "$0" "$@"; fi + [ -n "${{platform._toolchain_env_var}}" ] && . "${{platform._toolchain_env_var}}" + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.prj": r""" + # {{autogenerated}} + {% for file in platform.iter_extra_files(".vhd", ".vhdl") -%} + vhdl work {{file}} + {% endfor %} + {% for file in platform.iter_extra_files(".v") -%} + verilog work {{file}} + {% endfor %} + verilog work {{name}}.v + """, + "{{name}}.xst": r""" + # {{autogenerated}} + run + -ifn {{name}}.prj + -ofn {{name}}.ngc + -top {{name}} + {% if platform.family in ["3", "3E", "3A"] %} + -use_new_parser yes + {% endif %} + -p {{platform.device}}{{platform.package}}-{{platform.speed}} + {{get_override("script_after_run")|default("# (script_after_run placeholder)")}} + """, + "{{name}}.ucf": r""" + # {{autogenerated}} + {% for port_name, pin_name, attrs in platform.iter_port_constraints_bits() -%} + {% set port_name = port_name|replace("[", "<")|replace("]", ">") -%} + NET "{{port_name}}" LOC={{pin_name}}; + {% for attr_name, attr_value in attrs.items() -%} + NET "{{port_name}}" {{attr_name}}={{attr_value}}; + {% endfor %} + {% endfor %} + {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%} + NET "{{net_signal|hierarchy("/")}}" TNM_NET="PRD{{net_signal|hierarchy("/")}}"; + TIMESPEC "TS{{net_signal|hierarchy("/")}}"=PERIOD "PRD{{net_signal|hierarchy("/")}}" {{1000000000/frequency}} ns HIGH 50%; + {% endfor %} + {{get_override("add_constraints")|default("# (add_constraints placeholder)")}} + """ + } + command_templates = [ + r""" + {{invoke_tool("xst")}} + {{get_override("xst_opts")|options}} + -ifn {{name}}.xst + """, + r""" + {{invoke_tool("ngdbuild")}} + {{quiet("-quiet")}} + {{verbose("-verbose")}} + {{get_override("ngdbuild_opts")|options}} + -uc {{name}}.ucf + {{name}}.ngc + """, + r""" + {{invoke_tool("map")}} + {{verbose("-detail")}} + {{get_override("map_opts")|default([])|options}} + -w + -o {{name}}_map.ncd + {{name}}.ngd + {{name}}.pcf + """, + r""" + {{invoke_tool("par")}} + {{get_override("par_opts")|default([])|options}} + -w + {{name}}_map.ncd + {{name}}_par.ncd + {{name}}.pcf + """, + r""" 
+ {{invoke_tool("bitgen")}} + {{get_override("bitgen_opts")|default(["-g Compress"])|options}} + -w + -g Binary:Yes + {{name}}_par.ncd + {{name}}.bit + """ + ] + + def create_missing_domain(self, name): + # Xilinx devices have a global write enable (GWE) signal that asserted during configuraiton + # and deasserted once it ends. Because it is an asynchronous signal (GWE is driven by logic + # syncronous to configuration clock, which is not used by most designs), even though it is + # a low-skew global network, its deassertion may violate a setup/hold constraint with + # relation to a user clock. The recommended solution is to use a BUFGCE driven by the EOS + # signal (if available). For details, see: + # * https://www.xilinx.com/support/answers/44174.html + # * https://www.xilinx.com/support/documentation/white_papers/wp272.pdf + if self.family != "6": + # Spartan 3 lacks a STARTUP primitive with EOS output; use a simple ResetSynchronizer + # in that case, as is the default. + return super().create_missing_domain(name) + + if name == "sync" and self.default_clk is not None: + clk_i = self.request(self.default_clk).i + if self.default_rst is not None: + rst_i = self.request(self.default_rst).i + + m = Module() + eos = Signal() + m.submodules += Instance("STARTUP_SPARTAN6", o_EOS=eos) + m.domains += ClockDomain("sync", reset_less=self.default_rst is None) + m.submodules += Instance("BUFGCE", i_CE=eos, i_I=clk_i, o_O=ClockSignal("sync")) + if self.default_rst is not None: + m.submodules.reset_sync = ResetSynchronizer(rst_i, domain="sync") + return m + + def _get_xdr_buffer(self, m, pin, *, i_invert=False, o_invert=False): + def get_dff(clk, d, q): + # SDR I/O is performed by packing a flip-flop into the pad IOB. + for bit in range(len(q)): + m.submodules += Instance("FDCE", + a_IOB="TRUE", + i_C=clk, + i_CE=Const(1), + i_CLR=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_iddr(clk, d, q0, q1): + for bit in range(len(q0)): + m.submodules += Instance("IDDR2", + p_DDR_ALIGNMENT="C0", + p_SRTYPE="ASYNC", + p_INIT_Q0=0, p_INIT_Q1=0, + i_C0=clk, i_C1=~clk, + i_CE=Const(1), + i_S=Const(0), i_R=Const(0), + i_D=d[bit], + o_Q0=q0[bit], o_Q1=q1[bit] + ) + + def get_oddr(clk, d0, d1, q): + for bit in range(len(q)): + m.submodules += Instance("ODDR2", + p_DDR_ALIGNMENT="C0", + p_SRTYPE="ASYNC", + p_INIT=0, + i_C0=clk, i_C1=~clk, + i_CE=Const(1), + i_S=Const(0), i_R=Const(0), + i_D0=d0[bit], i_D1=d1[bit], + o_Q=q[bit] + ) + + def get_ineg(y, invert): + if invert: + a = Signal.like(y, name_suffix="_n") + m.d.comb += y.eq(~a) + return a + else: + return y + + def get_oneg(a, invert): + if invert: + y = Signal.like(a, name_suffix="_n") + m.d.comb += y.eq(~a) + return y + else: + return a + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + i = o = t = None + if "i" in pin.dir: + i = Signal(pin.width, name="{}_xdr_i".format(pin.name)) + if "o" in pin.dir: + o = Signal(pin.width, name="{}_xdr_o".format(pin.name)) + if pin.dir in ("oe", "io"): + t = Signal(1, name="{}_xdr_t".format(pin.name)) + + if pin.xdr == 0: + if "i" in pin.dir: + i = pin_i + if "o" in pin.dir: + o = pin_o + if pin.dir in ("oe", "io"): + t = ~pin.oe + elif pin.xdr == 1: + if "i" in pin.dir: + get_dff(pin.i_clk, i, pin_i) + if "o" in pin.dir: + 
get_dff(pin.o_clk, pin_o, o) + if pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + elif pin.xdr == 2: + if "i" in pin.dir: + # Re-register first input before it enters fabric. This allows both inputs to + # enter fabric on the same clock edge, and adds one cycle of latency. + i0_ff = Signal.like(pin_i0, name_suffix="_ff") + get_dff(pin.i_clk, i0_ff, pin_i0) + get_iddr(pin.i_clk, i, i0_ff, pin_i1) + if "o" in pin.dir: + get_oddr(pin.o_clk, pin_o0, pin_o1, o) + if pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + else: + assert False + + return (i, o, t) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUF", + i_I=port[bit], + o_O=i[bit] + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUF", + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFT", + i_T=t, + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUF", + i_T=t, + i_I=o[bit], + o_O=i[bit], + io_IO=port[bit] + ) + return m + + def get_diff_input(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUFDS", + i_I=p_port[bit], i_IB=n_port[bit], + o_O=i[bit] + ) + return m + + def get_diff_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFDS", + i_I=o[bit], + o_O=p_port[bit], o_OB=n_port[bit] + ) + return m + + def get_diff_tristate(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(p_port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFTDS", + i_T=t, + i_I=o[bit], + o_O=p_port[bit], o_OB=n_port[bit] + ) + return m + + def get_diff_input_output(self, pin, p_port, n_port, attrs, invert): + self._check_feature("differential input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = 
self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert)
+ for bit in range(len(p_port)):
+ m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUFDS",
+ i_T=t,
+ i_I=o[bit],
+ o_O=i[bit],
+ io_IO=p_port[bit], io_IOB=n_port[bit]
+ )
+ return m
+
+ # The synchronizer implementations below apply the ASYNC_REG attribute. This attribute
+ # prevents inference of shift registers from synchronizer FFs, and constrains the FFs
+ # to be placed as close as possible, ideally in one CLB. This attribute only affects
+ # the synchronizer FFs themselves.
+
+ def get_ff_sync(self, ff_sync):
+ if ff_sync._max_input_delay is not None:
+ raise NotImplementedError("Platform '{}' does not support constraining input delay "
+ "for FFSynchronizer"
+ .format(type(self).__name__))
+
+ m = Module()
+ flops = [Signal(ff_sync.i.shape(), name="stage{}".format(index),
+ reset=ff_sync._reset, reset_less=ff_sync._reset_less,
+ attrs={"ASYNC_REG": "TRUE"})
+ for index in range(ff_sync._stages)]
+ for i, o in zip((ff_sync.i, *flops), flops):
+ m.d[ff_sync._o_domain] += o.eq(i)
+ m.d.comb += ff_sync.o.eq(flops[-1])
+ return m
+
+ def get_async_ff_sync(self, async_ff_sync):
+ if async_ff_sync._max_input_delay is not None:
+ raise NotImplementedError("Platform '{}' does not support constraining input delay "
+ "for AsyncFFSynchronizer"
+ .format(type(self).__name__))
+
+ m = Module()
+ m.domains += ClockDomain("async_ff", async_reset=True, local=True)
+ flops = [Signal(1, name="stage{}".format(index), reset=1,
+ attrs={"ASYNC_REG": "TRUE"})
+ for index in range(async_ff_sync._stages)]
+ for i, o in zip((0, *flops), flops):
+ m.d.async_ff += o.eq(i)
+
+ if async_ff_sync._edge == "pos":
+ m.d.comb += ResetSignal("async_ff").eq(async_ff_sync.i)
+ else:
+ m.d.comb += ResetSignal("async_ff").eq(~async_ff_sync.i)
+
+ m.d.comb += [
+ ClockSignal("async_ff").eq(ClockSignal(async_ff_sync._domain)),
+ async_ff_sync.o.eq(flops[-1])
+ ]
+
+ return m
+
+XilinxSpartan3APlatform = XilinxSpartan3Or6Platform
+XilinxSpartan6Platform = XilinxSpartan3Or6Platform
diff --git a/nmigen/vendor/xilinx_ultrascale.py b/nmigen/vendor/xilinx_ultrascale.py
new file mode 100644
index 0000000..6598f0a
--- /dev/null
+++ b/nmigen/vendor/xilinx_ultrascale.py
@@ -0,0 +1,429 @@
+from abc import abstractproperty
+
+from ..hdl import *
+from ..lib.cdc import ResetSynchronizer
+from ..build import *
+
+
+__all__ = ["XilinxUltraScalePlatform"]
+
+
+class XilinxUltraScalePlatform(TemplatedPlatform):
+ """
+ Required tools:
+ * ``vivado``
+
+ The environment is populated by running the script specified in the environment variable
+ ``NMIGEN_ENV_Vivado``, if present.
+
+ Available overrides:
+ * ``script_after_read``: inserts commands after ``read_xdc`` in Tcl script.
+ * ``script_after_synth``: inserts commands after ``synth_design`` in Tcl script.
+ * ``script_after_place``: inserts commands after ``place_design`` in Tcl script.
+ * ``script_after_route``: inserts commands after ``route_design`` in Tcl script.
+ * ``script_before_bitstream``: inserts commands before ``write_bitstream`` in Tcl script.
+ * ``script_after_bitstream``: inserts commands after ``write_bitstream`` in Tcl script.
+ * ``add_constraints``: inserts commands in XDC file.
+ * ``vivado_opts``: adds extra options for ``vivado``.
+
+ Build products:
+ * ``{{name}}.log``: Vivado log.
+ * ``{{name}}_timing_synth.rpt``: Vivado report.
+ * ``{{name}}_utilization_hierarchical_synth.rpt``: Vivado report.
+ * ``{{name}}_utilization_synth.rpt``: Vivado report.
+ * ``{{name}}_utilization_hierarchical_place.rpt``: Vivado report. + * ``{{name}}_utilization_place.rpt``: Vivado report. + * ``{{name}}_io.rpt``: Vivado report. + * ``{{name}}_control_sets.rpt``: Vivado report. + * ``{{name}}_clock_utilization.rpt``: Vivado report. + * ``{{name}}_route_status.rpt``: Vivado report. + * ``{{name}}_drc.rpt``: Vivado report. + * ``{{name}}_methodology.rpt``: Vivado report. + * ``{{name}}_timing.rpt``: Vivado report. + * ``{{name}}_power.rpt``: Vivado report. + * ``{{name}}_route.dcp``: Vivado design checkpoint. + * ``{{name}}.bit``: binary bitstream with metadata. + * ``{{name}}.bin``: binary bitstream. + """ + + toolchain = "Vivado" + + device = abstractproperty() + package = abstractproperty() + speed = abstractproperty() + grade = None + + required_tools = [ + "yosys", + "vivado" + ] + file_templates = { + **TemplatedPlatform.build_script_templates, + "build_{{name}}.sh": r""" + # {{autogenerated}} + set -e{{verbose("x")}} + if [ -z "$BASH" ] ; then exec /bin/bash "$0" "$@"; fi + [ -n "${{platform._toolchain_env_var}}" ] && . "${{platform._toolchain_env_var}}" + {{emit_commands("sh")}} + """, + "{{name}}.v": r""" + /* {{autogenerated}} */ + {{emit_verilog()}} + """, + "{{name}}.debug.v": r""" + /* {{autogenerated}} */ + {{emit_debug_verilog()}} + """, + "{{name}}.tcl": r""" + # {{autogenerated}} + create_project -force -name {{name}} -part {{platform.device}}-{{platform.package}}-{{platform.speed}}{{"-" + platform.grade if platform.grade else ""}} + {% for file in platform.iter_extra_files(".v", ".sv", ".vhd", ".vhdl") -%} + add_files {{file}} + {% endfor %} + add_files {{name}}.v + read_xdc {{name}}.xdc + {% for file in platform.iter_extra_files(".xdc") -%} + read_xdc {{file}} + {% endfor %} + {{get_override("script_after_read")|default("# (script_after_read placeholder)")}} + synth_design -top {{name}} + foreach cell [get_cells -quiet -hier -filter {nmigen.vivado.false_path == "TRUE"}] { + set_false_path -to $cell + } + foreach cell [get_cells -quiet -hier -filter {nmigen.vivado.max_delay != ""}] { + set clock [get_clocks -of_objects \ + [all_fanin -flat -startpoints_only [get_pin $cell/D]]] + if {[llength $clock] != 0} { + set_max_delay -datapath_only -from $clock \ + -to [get_cells $cell] [get_property nmigen.vivado.max_delay $cell] + } + } + {{get_override("script_after_synth")|default("# (script_after_synth placeholder)")}} + report_timing_summary -file {{name}}_timing_synth.rpt + report_utilization -hierarchical -file {{name}}_utilization_hierachical_synth.rpt + report_utilization -file {{name}}_utilization_synth.rpt + opt_design + place_design + {{get_override("script_after_place")|default("# (script_after_place placeholder)")}} + report_utilization -hierarchical -file {{name}}_utilization_hierarchical_place.rpt + report_utilization -file {{name}}_utilization_place.rpt + report_io -file {{name}}_io.rpt + report_control_sets -verbose -file {{name}}_control_sets.rpt + report_clock_utilization -file {{name}}_clock_utilization.rpt + route_design + {{get_override("script_after_route")|default("# (script_after_route placeholder)")}} + phys_opt_design + report_timing_summary -no_header -no_detailed_paths + write_checkpoint -force {{name}}_route.dcp + report_route_status -file {{name}}_route_status.rpt + report_drc -file {{name}}_drc.rpt + report_methodology -file {{name}}_methodology.rpt + report_timing_summary -datasheet -max_paths 10 -file {{name}}_timing.rpt + report_power -file {{name}}_power.rpt + {{get_override("script_before_bitstream")|default("# 
(script_before_bitstream placeholder)")}}
+            write_bitstream -force -bin_file {{name}}.bit
+            {{get_override("script_after_bitstream")|default("# (script_after_bitstream placeholder)")}}
+            quit
+        """,
+        "{{name}}.xdc": r"""
+            # {{autogenerated}}
+            {% for port_name, pin_name, attrs in platform.iter_port_constraints_bits() -%}
+                set_property LOC {{pin_name}} [get_ports {{port_name}}]
+                {% for attr_name, attr_value in attrs.items() -%}
+                    set_property {{attr_name}} {{attr_value}} [get_ports {{port_name}}]
+                {% endfor %}
+            {% endfor %}
+            {% for net_signal, port_signal, frequency in platform.iter_clock_constraints() -%}
+                {% if port_signal is not none -%}
+                    create_clock -name {{port_signal.name}} -period {{1000000000/frequency}} [get_ports {{port_signal.name}}]
+                {% else -%}
+                    create_clock -name {{net_signal.name}} -period {{1000000000/frequency}} [get_nets {{net_signal|hierarchy("/")}}]
+                {% endif %}
+            {% endfor %}
+            {{get_override("add_constraints")|default("# (add_constraints placeholder)")}}
+        """
+    }
+    command_templates = [
+        r"""
+        {{invoke_tool("vivado")}}
+            {{verbose("-verbose")}}
+            {{get_override("vivado_opts")|options}}
+            -mode batch
+            -log {{name}}.log
+            -source {{name}}.tcl
+        """
+    ]
+
+    def create_missing_domain(self, name):
+        # Xilinx devices have a global write enable (GWE) signal that is asserted during configuration
+        # and deasserted once it ends. Because it is an asynchronous signal (GWE is driven by logic
+        # synchronous to configuration clock, which is not used by most designs), even though it is
+        # a low-skew global network, its deassertion may violate a setup/hold constraint with
+        # relation to a user clock. The recommended solution is to use a BUFGCE driven by the EOS
+        # signal. For details, see:
+        # * https://www.xilinx.com/support/answers/44174.html
+        # * https://www.xilinx.com/support/documentation/white_papers/wp272.pdf
+        if name == "sync" and self.default_clk is not None:
+            clk_i = self.request(self.default_clk).i
+            if self.default_rst is not None:
+                rst_i = self.request(self.default_rst).i
+
+            m = Module()
+            ready = Signal()
+            m.submodules += Instance("STARTUPE3", o_EOS=ready)
+            m.domains += ClockDomain("sync", reset_less=self.default_rst is None)
+            m.submodules += Instance("BUFGCE", i_CE=ready, i_I=clk_i, o_O=ClockSignal("sync"))
+            if self.default_rst is not None:
+                m.submodules.reset_sync = ResetSynchronizer(rst_i, domain="sync")
+            return m
+
+    def _get_xdr_buffer(self, m, pin, *, i_invert=False, o_invert=False):
+        def get_dff(clk, d, q):
+            # SDR I/O is performed by packing a flip-flop into the pad IOB.
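+            # The IOB="TRUE" attribute on each FDCE below asks the placer to pack the register
+            # into the pad's IOB; CE is tied high and CLR is tied low, so it behaves as a plain DFF.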
+ for bit in range(len(q)): + m.submodules += Instance("FDCE", + a_IOB="TRUE", + i_C=clk, + i_CE=Const(1), + i_CLR=Const(0), + i_D=d[bit], + o_Q=q[bit] + ) + + def get_iddr(clk, d, q1, q2): + for bit in range(len(q1)): + m.submodules += Instance("IDDRE1", + p_DDR_CLK_EDGE="SAME_EDGE_PIPELINED", + p_IS_C_INVERTED=0, p_IS_CB_INVERTED=1, + i_C=clk, i_CB=clk, + i_R=Const(0), + i_D=d[bit], + o_Q1=q1[bit], o_Q2=q2[bit] + ) + + def get_oddr(clk, d1, d2, q): + for bit in range(len(q)): + m.submodules += Instance("ODDRE1", + p_DDR_CLK_EDGE="SAME_EDGE", + p_INIT=0, + i_C=clk, + i_SR=Const(0), + i_D1=d1[bit], i_D2=d2[bit], + o_Q=q[bit] + ) + + def get_ineg(y, invert): + if invert: + a = Signal.like(y, name_suffix="_n") + m.d.comb += y.eq(~a) + return a + else: + return y + + def get_oneg(a, invert): + if invert: + y = Signal.like(a, name_suffix="_n") + m.d.comb += y.eq(~a) + return y + else: + return a + + if "i" in pin.dir: + if pin.xdr < 2: + pin_i = get_ineg(pin.i, i_invert) + elif pin.xdr == 2: + pin_i0 = get_ineg(pin.i0, i_invert) + pin_i1 = get_ineg(pin.i1, i_invert) + if "o" in pin.dir: + if pin.xdr < 2: + pin_o = get_oneg(pin.o, o_invert) + elif pin.xdr == 2: + pin_o0 = get_oneg(pin.o0, o_invert) + pin_o1 = get_oneg(pin.o1, o_invert) + + i = o = t = None + if "i" in pin.dir: + i = Signal(pin.width, name="{}_xdr_i".format(pin.name)) + if "o" in pin.dir: + o = Signal(pin.width, name="{}_xdr_o".format(pin.name)) + if pin.dir in ("oe", "io"): + t = Signal(1, name="{}_xdr_t".format(pin.name)) + + if pin.xdr == 0: + if "i" in pin.dir: + i = pin_i + if "o" in pin.dir: + o = pin_o + if pin.dir in ("oe", "io"): + t = ~pin.oe + elif pin.xdr == 1: + if "i" in pin.dir: + get_dff(pin.i_clk, i, pin_i) + if "o" in pin.dir: + get_dff(pin.o_clk, pin_o, o) + if pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + elif pin.xdr == 2: + if "i" in pin.dir: + get_iddr(pin.i_clk, i, pin_i0, pin_i1) + if "o" in pin.dir: + get_oddr(pin.o_clk, pin_o0, pin_o1, o) + if pin.dir in ("oe", "io"): + get_dff(pin.o_clk, ~pin.oe, t) + else: + assert False + + return (i, o, t) + + def get_input(self, pin, port, attrs, invert): + self._check_feature("single-ended input", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUF", + i_I=port[bit], + o_O=i[bit] + ) + return m + + def get_output(self, pin, port, attrs, invert): + self._check_feature("single-ended output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUF", + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_tristate(self, pin, port, attrs, invert): + self._check_feature("single-ended tristate", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFT", + i_T=t, + i_I=o[bit], + o_O=port[bit] + ) + return m + + def get_input_output(self, pin, port, attrs, invert): + self._check_feature("single-ended input/output", pin, attrs, + valid_xdrs=(0, 1, 2), valid_attrs=True) + m = Module() + i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert) + for bit in range(len(port)): + m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUF", + i_T=t, + 
i_I=o[bit],
+                o_O=i[bit],
+                io_IO=port[bit]
+            )
+        return m
+
+    def get_diff_input(self, pin, p_port, n_port, attrs, invert):
+        self._check_feature("differential input", pin, attrs,
+                            valid_xdrs=(0, 1, 2), valid_attrs=True)
+        m = Module()
+        i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert)
+        for bit in range(len(p_port)):
+            m.submodules["{}_{}".format(pin.name, bit)] = Instance("IBUFDS",
+                i_I=p_port[bit], i_IB=n_port[bit],
+                o_O=i[bit]
+            )
+        return m
+
+    def get_diff_output(self, pin, p_port, n_port, attrs, invert):
+        self._check_feature("differential output", pin, attrs,
+                            valid_xdrs=(0, 1, 2), valid_attrs=True)
+        m = Module()
+        i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert)
+        for bit in range(len(p_port)):
+            m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFDS",
+                i_I=o[bit],
+                o_O=p_port[bit], o_OB=n_port[bit]
+            )
+        return m
+
+    def get_diff_tristate(self, pin, p_port, n_port, attrs, invert):
+        self._check_feature("differential tristate", pin, attrs,
+                            valid_xdrs=(0, 1, 2), valid_attrs=True)
+        m = Module()
+        i, o, t = self._get_xdr_buffer(m, pin, o_invert=invert)
+        for bit in range(len(p_port)):
+            m.submodules["{}_{}".format(pin.name, bit)] = Instance("OBUFTDS",
+                i_T=t,
+                i_I=o[bit],
+                o_O=p_port[bit], o_OB=n_port[bit]
+            )
+        return m
+
+    def get_diff_input_output(self, pin, p_port, n_port, attrs, invert):
+        self._check_feature("differential input/output", pin, attrs,
+                            valid_xdrs=(0, 1, 2), valid_attrs=True)
+        m = Module()
+        i, o, t = self._get_xdr_buffer(m, pin, i_invert=invert, o_invert=invert)
+        for bit in range(len(p_port)):
+            m.submodules["{}_{}".format(pin.name, bit)] = Instance("IOBUFDS",
+                i_T=t,
+                i_I=o[bit],
+                o_O=i[bit],
+                io_IO=p_port[bit], io_IOB=n_port[bit]
+            )
+        return m
+
+    # The synchronizer implementations below apply two separate but related timing constraints.
+    #
+    # First, the ASYNC_REG attribute prevents inference of shift registers from synchronizer FFs,
+    # and constrains the FFs to be placed as close as possible, ideally in one CLB. This attribute
+    # only affects the synchronizer FFs themselves.
+    #
+    # Second, the nmigen.vivado.false_path or nmigen.vivado.max_delay attribute affects the path
+    # into the synchronizer. If maximum input delay is specified, a datapath-only maximum delay
+    # constraint is applied, limiting routing delay (and therefore skew) at the synchronizer input.
+    # Otherwise, a false path constraint is used to omit the input path from the timing analysis.
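+    # As an illustration (hypothetical numbers), a synchronizer created with max_input_delay=2e-9
+    # gets nmigen.vivado.max_delay = "2.0" on its first stage, which the Tcl script above turns
+    # into roughly:
+    #     set_max_delay -datapath_only -from <launch clock> -to [get_cells .../stage0] 2.0
+    # while a synchronizer with no specified input delay gets nmigen.vivado.false_path = "TRUE"
+    # and therefore:
+    #     set_false_path -to [get_cells .../stage0]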
+
+    def get_ff_sync(self, ff_sync):
+        m = Module()
+        flops = [Signal(ff_sync.i.shape(), name="stage{}".format(index),
+                        reset=ff_sync._reset, reset_less=ff_sync._reset_less,
+                        attrs={"ASYNC_REG": "TRUE"})
+                 for index in range(ff_sync._stages)]
+        if ff_sync._max_input_delay is None:
+            flops[0].attrs["nmigen.vivado.false_path"] = "TRUE"
+        else:
+            flops[0].attrs["nmigen.vivado.max_delay"] = str(ff_sync._max_input_delay * 1e9)
+        for i, o in zip((ff_sync.i, *flops), flops):
+            m.d[ff_sync._o_domain] += o.eq(i)
+        m.d.comb += ff_sync.o.eq(flops[-1])
+        return m
+
+    def get_async_ff_sync(self, async_ff_sync):
+        m = Module()
+        m.domains += ClockDomain("async_ff", async_reset=True, local=True)
+        flops = [Signal(1, name="stage{}".format(index), reset=1,
+                        attrs={"ASYNC_REG": "TRUE"})
+                 for index in range(async_ff_sync._stages)]
+        if async_ff_sync._max_input_delay is None:
+            flops[0].attrs["nmigen.vivado.false_path"] = "TRUE"
+        else:
+            flops[0].attrs["nmigen.vivado.max_delay"] = str(async_ff_sync._max_input_delay * 1e9)
+        for i, o in zip((0, *flops), flops):
+            m.d.async_ff += o.eq(i)
+
+        if async_ff_sync._edge == "pos":
+            m.d.comb += ResetSignal("async_ff").eq(async_ff_sync.i)
+        else:
+            m.d.comb += ResetSignal("async_ff").eq(~async_ff_sync.i)
+
+        m.d.comb += [
+            ClockSignal("async_ff").eq(ClockSignal(async_ff_sync._domain)),
+            async_ff_sync.o.eq(flops[-1])
+        ]
+
+        return m
diff --git a/rvfi/cores/minerva/test/test_instructions.py b/rvfi/cores/minerva/test/test_instructions.py
index 4d8c9ac..ebb6ac1 100644
--- a/rvfi/cores/minerva/test/test_instructions.py
+++ b/rvfi/cores/minerva/test/test_instructions.py
@@ -3,6 +3,42 @@ from nmigen.test.utils import *
 from ..core import *
 from ....checks.insn_check import *
 from ....insns.insn_lui import *
+from ....insns.insn_auipc import *
+from ....insns.insn_jal import *
+from ....insns.insn_jalr import *
+from ....insns.insn_beq import *
+from ....insns.insn_bne import *
+from ....insns.insn_blt import *
+from ....insns.insn_bge import *
+from ....insns.insn_bltu import *
+from ....insns.insn_bgeu import *
+from ....insns.insn_lb import *
+from ....insns.insn_lh import *
+from ....insns.insn_lw import *
+from ....insns.insn_lbu import *
+from ....insns.insn_lhu import *
+from ....insns.insn_sb import *
+from ....insns.insn_sh import *
+from ....insns.insn_sw import *
+from ....insns.insn_addi import *
+from ....insns.insn_slti import *
+from ....insns.insn_sltiu import *
+from ....insns.insn_xori import *
+from ....insns.insn_ori import *
+from ....insns.insn_andi import *
+from ....insns.insn_slli import *
+from ....insns.insn_srli import *
+from ....insns.insn_srai import *
+from ....insns.insn_add import *
+from ....insns.insn_sub import *
+from ....insns.insn_sll import *
+from ....insns.insn_slt import *
+from ....insns.insn_sltu import *
+from ....insns.insn_xor import *
+from ....insns.insn_srl import *
+from ....insns.insn_sra import *
+from ....insns.insn_or import *
+from ....insns.insn_and import *

 class LuiSpec(Elaboratable):
     def elaborate(self, platform):
@@ -46,4 +82,1588 @@ class LuiSpec(Elaboratable):

 class LuiTestCase(FHDLTestCase):
     def verify(self):
-        self.assertFormal(LuiSpec(), mode="bmc", depth=12)
+        self.assertFormal(LuiSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class AuipcSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+
RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnAuipc, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class AuipcTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(AuipcSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class JalSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnJal, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class JalTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(JalSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class JalrSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = 
insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnJalr, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class JalrTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(JalrSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class BeqSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBeq, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class BeqTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BeqSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class BneSpec(Elaboratable): + def elaborate(self, platform): 
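+        # Instantiate Minerva with its RVFI port enabled and feed each RVFI field into the
+        # riscv-formal checker for BNE; the same wiring pattern repeats for every instruction
+        # spec in this file.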
+ m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBne, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class BneTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BneSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class BltSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBlt, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class BltTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BltSpec(), mode="bmc", depth=12, 
engine="smtbmc --nopresat") + +class BgeSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBge, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class BgeTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BgeSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class BltuSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBltu, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class 
BltuTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BltuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class BgeuSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnBgeu, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class BgeuTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(BgeuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class LbSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnLb, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += 
insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class LbTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(LbSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class LhSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnLh, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class LhTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(LhSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class LwSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnLw, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += 
insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class LwTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(LwSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class LbuSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnLbu, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class LbuTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(LbuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class LhuSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnLhu, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += 
insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class LhuTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(LhuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class SbSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnSb, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class SbTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(SbSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class ShSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnSh, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += 
insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class ShTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(ShSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class SwSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnSw, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class SwTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(SwSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class AddiSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnAddi, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += 
insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class AddiTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(AddiSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class SltiSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnSlti, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr) + m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata) + m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata) + m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr) + m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata) + m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata) + m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata) + m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr) + m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask) + m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask) + m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata) + m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata) + + return m + +class SltiTestCase(FHDLTestCase): + def verify(self): + self.assertFormal(SltiSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat") + +class SltiuSpec(Elaboratable): + def elaborate(self, platform): + m = Module() + + m.submodules.cpu = cpu = Minerva(with_rvfi=True) + m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32, + RISCV_FORMAL_XLEN=32, + RISCV_FORMAL_CSR_MISA=False, + RISCV_FORMAL_COMPRESSED=False, + RISCV_FORMAL_ALIGNED_MEM=False, + insn_model=InsnSltiu, + rvformal_addr_valid=lambda x:Const(1)) + + m.d.comb += insn_spec.reset.eq(0) + m.d.comb += insn_spec.check.eq(1) + + m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid) + m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order) + m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn) + m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap) + m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt) + m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr) + m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode) + m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl) + m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr) + m.d.comb += 
insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SltiuTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SltiuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class XoriSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnXori,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class XoriTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(XoriSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class OriSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnOri,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class OriTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(OriSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class AndiSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnAndi,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class AndiTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(AndiSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SlliSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSlli,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SlliTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SlliSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SrliSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSrli,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SrliTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SrliSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SraiSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSrai,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SraiTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SraiSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class AddSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnAdd,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class AddTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(AddSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SubSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSub,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SubTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SubSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SllSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSll,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SllTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SllSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SltSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSlt,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SltTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SltSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SltuSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSltu,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SltuTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SltuSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class XorSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnXor,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class XorTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(XorSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SrlSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSrl,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SrlTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SrlSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class SraSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnSra,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class SraTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(SraSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class OrSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnOr,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class OrTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(OrSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
+
+class AndSpec(Elaboratable):
+    def elaborate(self, platform):
+        m = Module()
+
+        m.submodules.cpu = cpu = Minerva(with_rvfi=True)
+        m.submodules.insn_spec = insn_spec = InsnCheck(RISCV_FORMAL_ILEN=32,
+                                                       RISCV_FORMAL_XLEN=32,
+                                                       RISCV_FORMAL_CSR_MISA=False,
+                                                       RISCV_FORMAL_COMPRESSED=False,
+                                                       RISCV_FORMAL_ALIGNED_MEM=False,
+                                                       insn_model=InsnAnd,
+                                                       rvformal_addr_valid=lambda x:Const(1))
+
+        m.d.comb += insn_spec.reset.eq(0)
+        m.d.comb += insn_spec.check.eq(1)
+
+        m.d.comb += insn_spec.rvfi_valid.eq(cpu.rvfi.valid)
+        m.d.comb += insn_spec.rvfi_order.eq(cpu.rvfi.order)
+        m.d.comb += insn_spec.rvfi_insn.eq(cpu.rvfi.insn)
+        m.d.comb += insn_spec.rvfi_trap.eq(cpu.rvfi.trap)
+        m.d.comb += insn_spec.rvfi_halt.eq(cpu.rvfi.halt)
+        m.d.comb += insn_spec.rvfi_intr.eq(cpu.rvfi.intr)
+        m.d.comb += insn_spec.rvfi_mode.eq(cpu.rvfi.mode)
+        m.d.comb += insn_spec.rvfi_ixl.eq(cpu.rvfi.ixl)
+        m.d.comb += insn_spec.rvfi_rs1_addr.eq(cpu.rvfi.rs1_addr)
+        m.d.comb += insn_spec.rvfi_rs2_addr.eq(cpu.rvfi.rs2_addr)
+        m.d.comb += insn_spec.rvfi_rs1_rdata.eq(cpu.rvfi.rs1_rdata)
+        m.d.comb += insn_spec.rvfi_rs2_rdata.eq(cpu.rvfi.rs2_rdata)
+        m.d.comb += insn_spec.rvfi_rd_addr.eq(cpu.rvfi.rd_addr)
+        m.d.comb += insn_spec.rvfi_rd_wdata.eq(cpu.rvfi.rd_wdata)
+        m.d.comb += insn_spec.rvfi_pc_rdata.eq(cpu.rvfi.pc_rdata)
+        m.d.comb += insn_spec.rvfi_pc_wdata.eq(cpu.rvfi.pc_wdata)
+        m.d.comb += insn_spec.rvfi_mem_addr.eq(cpu.rvfi.mem_addr)
+        m.d.comb += insn_spec.rvfi_mem_rmask.eq(cpu.rvfi.mem_rmask)
+        m.d.comb += insn_spec.rvfi_mem_wmask.eq(cpu.rvfi.mem_wmask)
+        m.d.comb += insn_spec.rvfi_mem_rdata.eq(cpu.rvfi.mem_rdata)
+        m.d.comb += insn_spec.rvfi_mem_wdata.eq(cpu.rvfi.mem_wdata)
+
+        return m
+
+class AndTestCase(FHDLTestCase):
+    def verify(self):
+        self.assertFormal(AndSpec(), mode="bmc", depth=12, engine="smtbmc --nopresat")
diff --git a/rvfi/cores/minerva/verify.py b/rvfi/cores/minerva/verify.py
index 8663aea..9c4fcca 100644
--- a/rvfi/cores/minerva/verify.py
+++ b/rvfi/cores/minerva/verify.py
@@ -11,6 +11,42 @@ test.test_2_ways()
 
 print("Verifying RV32I instructions ...")
 LuiTestCase().verify()
+AuipcTestCase().verify()