diff --git a/README.rst b/README.rst index 9ec774741..01a2f914b 100644 --- a/README.rst +++ b/README.rst @@ -5,7 +5,7 @@ :target: https://m-labs.hk/artiq ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is a leading-edge control and data acquisition system for quantum information experiments. -It is maintained and developed by `M-Labs `_ and the initial development was for and in partnership with the `Ion Storage Group at NIST `_. ARTIQ is free software and offered to the entire research community as a solution equally applicable to other challenging control tasks, including outside the field of ion trapping. Many laboratories around the world have adopted ARTIQ as their control system, with over a hundred Sinara hardware crates deployed, and some have `contributed `_ to it. +It is maintained and developed by `M-Labs `_ and the initial development was for and in partnership with the `Ion Storage Group at NIST `_. ARTIQ is free software and offered to the entire research community as a solution equally applicable to other challenging control tasks, including outside the field of ion trapping. Many laboratories around the world have adopted ARTIQ as their control system and some have `contributed `_ to it. The system features a high-level programming language that helps describing complex experiments, which is compiled and executed on dedicated hardware with nanosecond timing resolution and sub-microsecond latency. It includes graphical user interfaces to parametrize and schedule experiments and to visualize and explore the results. @@ -29,7 +29,7 @@ Website: https://m-labs.hk/artiq License ======= -Copyright (C) 2014-2023 M-Labs Limited. +Copyright (C) 2014-2024 M-Labs Limited. ARTIQ is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by diff --git a/RELEASE_NOTES.rst b/RELEASE_NOTES.rst index 45573d386..1a93d91c7 100644 --- a/RELEASE_NOTES.rst +++ b/RELEASE_NOTES.rst @@ -8,36 +8,94 @@ ARTIQ-8 (Unreleased) Highlights: -* Hardware support: +* New hardware support: + - Support for Shuttler, a 16-channel 125MSPS DAC card intended for ion transport. + Waveform generator and user API are similar to the NIST PDQ. - Implemented Phaser-servo. This requires recent gateware on Phaser. - - Implemented Phaser-MIQRO support. This requires the Phaser MIQRO gateware - variant. + - Almazny v1.2 with finer RF switch control. + - Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking. + - More user LEDs are exposed to RTIO on Kasli. + - Implemented Phaser-MIQRO support. This requires the proprietary Phaser MIQRO gateware + variant from QUARTIQ. - Sampler: fixed ADC MU to Volt conversion factor for Sampler v2.2+. For earlier hardware versions, specify the hardware version in the device database file (e.g. ``"hw_rev": "v2.1"``) to use the correct conversion factor. - - Almazny v1.2. It is incompatible with the legacy versions and is the default. To use legacy - versions, specify ``almazny_hw_rev`` in the JSON description. - - Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking. +* Support for distributed DMA, where DMA is run directly on satellites for corresponding + RTIO events, increasing bandwidth in scenarios with heavy satellite usage. +* Support for subkernels, where kernels are run on satellite device CPUs to offload some + of the processing and RTIO operations. 
* CPU (on softcore platforms) and AXI bus (on Zynq) are now clocked synchronously with the RTIO clock, to facilitate implementation of local processing on DRTIO satellites, and to slightly reduce RTIO latency. -* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to - support legacy installations, but may be removed in a future release. -* Added channel names to RTIO errors. -* Full Python 3.10 support. -* Python's built-in types (such as `float`, or `List[...]`) can now be used in type annotations on - kernel functions. -* Distributed DMA is now supported, allowing DMA to be run directly on satellites for corresponding - RTIO events, increasing bandwidth in scenarios with heavy satellite usage. -* Applet Request Interfaces have been implemented, enabling applets to directly modify datasets - and temporarily set arguments in the dashboard. -* EntryArea widget has been implemented, allowing argument entry widgets to be used in applets. -* Dashboard: +* Support for DRTIO-over-EEM, used with Shuttler. +* Support for WRPLL low-noise clock recovery. +* Enabled event spreading on DRTIO satellites, using high watermark for lane switching. +* Added channel names to RTIO error messages. +* The RTIO analyzer is now proxied by ``aqctl_coreanalyzer_proxy`` typically running on the master + machine, similarly to ``aqctl_moninj_proxy``. +* GUI: + - Integrated waveform analyzer, removing the need for external VCD viewers such as GtkWave. + - Implemented Applet Request Interfaces which allow applets to modify datasets and set the + current values of widgets in the dashboard's experiment windows. + - Implemented a new ``EntryArea`` widget which allows argument entry widgets to be used in applets. - The "Close all applets" command (shortcut: Ctrl-Alt-W) now ignores docked applets, making it a convenient way to clean up after exploratory work without destroying a carefully arranged default workspace. -* Persistent datasets are now stored in a LMDB database for improved performance. PYON databases can - be converted with the script below. + - Hotkeys now organize experiment windows in the order they were last interacted with: + + CTRL+SHIFT+T tiles experiment windows + + CTRL+SHIFT+C cascades experiment windows + - By enabling the ``quickstyle`` option, ``EnumerationValue`` entry widgets can now alternatively display + its choices as buttons that submit the experiment on click. +* Datasets can now be associated with units and scale factors, and displayed accordingly in the dashboard + including applets, like widgets such as ``NumberValue`` already did in earlier ARTIQ versions. +* Experiments can now request arguments interactively from the user at any time. +* Persistent datasets are now stored in a LMDB database for improved performance. +* Python's built-in types (such as ``float``, or ``List[...]``) can now be used in type annotations on + kernel functions. +* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to + support legacy installations, but may be removed in a future release. +* Experiments can now be submitted with revisions set to a branch / tag name instead of only git hashes. +* Grabber image input now has an optional timeout. +* On NAR3-supported devices (Kasli-SoC, ZC706), when a Rust panic occurs, a minimal environment is started + where the network and ``artiq_coremgmt`` can be used. This allows the user to inspect logs, change + configuration options, update the firmware, and reboot the device. 
+* Full Python 3.11 support. + +Breaking changes: + +* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control + operations, which includes dataset operations. It can be ignored if not needed. For an example usage, + refer to the ``big_number.py`` applet. +* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Derived applets + should change the function signature as below: + +:: + + # SimpleApplet + def data_changed(self, value, metadata, persist, mods) + # SimpleApplet (old version) + def data_changed(self, data, mods) + # TitleApplet + def data_changed(self, value, metadata, persist, mods, title) + # TitleApplet (old version) + def data_changed(self, data, mods, title) + +Accesses to the data argument should be replaced as below: + +:: + + data[key][0] ==> persist[key] + data[key][1] ==> value[key] + +* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``. + Parameters after and including ``scale`` in both constructors are now keyword-only. + Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage. +* Almazny v1.2 is incompatible with the legacy versions and is the default. + To use legacy versions, specify ``almazny_hw_rev`` in the JSON description. +* kasli_generic.py has been merged into kasli.py, and the demonstration designs without JSON descriptions + have been removed. The base classes remain present in kasli.py to support third-party flows without + JSON descriptions. +* Legacy PYON databases should be converted to LMDB with the script below: :: @@ -51,35 +109,7 @@ Highlights: txn.put(key.encode(), pyon.encode((value, {})).encode()) new.close() -Breaking changes: - -* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control - operations, which includes dataset operations. It can be ignored if not needed. For an example usage, - refer to the ``big_number.py`` applet. -* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Wrapped widgets - should refactor the function signature as seen below: -:: - - # SimpleApplet - def data_changed(self, value, metadata, persist, mods) - # SimpleApplet (old version) - def data_changed(self, data, mods) - # TitleApplet - def data_changed(self, value, metadata, persist, mods, title) - # TitleApplet (old version) - def data_changed(self, data, mods, title) - -Old syntax should be replaced with the form shown on the right. -:: - - data[key][0] ==> persist[key] - data[key][1] ==> value[key] - data[key][2] ==> metadata[key] - -* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``. - Parameters after and including ``scale`` in both constructors are now keyword-only. - Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage. - +* ``artiq.wavesynth`` has been removed. 
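For derived applets, the ``data_changed`` migration amounts to widening the signature and reading from ``value`` and ``persist`` instead of the old ``data`` tuples. A minimal sketch under the new signature (the widget details are illustrative; the updated ``big_number.py`` applet later in this patch shows the full pattern):

::

    # old: def data_changed(self, data, mods):
    #          n = data[self.dataset_name][1]
    def data_changed(self, value, metadata, persist, mods):
        try:
            # was data[self.dataset_name][1]
            n = value[self.dataset_name]
        except (KeyError, ValueError, TypeError):
            n = "---"
        self.lcd_widget.display(n)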
ARTIQ-7 ------- diff --git a/artiq/_version.py b/artiq/_version.py index bf9797c9d..984c86129 100644 --- a/artiq/_version.py +++ b/artiq/_version.py @@ -1,4 +1,7 @@ import os +def get_rev(): + return os.getenv("VERSIONEER_REV", default="unknown") + def get_version(): - return os.getenv("VERSIONEER_OVERRIDE", default="8.0+unknown.beta") + return os.getenv("VERSIONEER_OVERRIDE", default="10.0+unknown.beta") diff --git a/artiq/applets/big_number.py b/artiq/applets/big_number.py index 56458f507..7bf077fe2 100755 --- a/artiq/applets/big_number.py +++ b/artiq/applets/big_number.py @@ -2,6 +2,8 @@ from PyQt5 import QtWidgets, QtCore, QtGui from artiq.applets.simple import SimpleApplet +from artiq.tools import scale_from_metadata +from artiq.gui.tools import LayoutWidget class QResponsiveLCDNumber(QtWidgets.QLCDNumber): @@ -21,29 +23,41 @@ class QCancellableLineEdit(QtWidgets.QLineEdit): super().keyPressEvent(event) -class NumberWidget(QtWidgets.QStackedWidget): +class NumberWidget(LayoutWidget): def __init__(self, args, req): - QtWidgets.QStackedWidget.__init__(self) + LayoutWidget.__init__(self) self.dataset_name = args.dataset self.req = req + self.metadata = dict() + + self.number_area = QtWidgets.QStackedWidget() + self.addWidget(self.number_area, 0, 0) + + self.unit_area = QtWidgets.QLabel() + self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop) + self.addWidget(self.unit_area, 0, 1) self.lcd_widget = QResponsiveLCDNumber() self.lcd_widget.setDigitCount(args.digit_count) self.lcd_widget.doubleClicked.connect(self.start_edit) - self.addWidget(self.lcd_widget) + self.number_area.addWidget(self.lcd_widget) self.edit_widget = QCancellableLineEdit() self.edit_widget.setValidator(QtGui.QDoubleValidator()) - self.edit_widget.setAlignment(QtCore.Qt.AlignRight) + self.edit_widget.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter) self.edit_widget.editCancelled.connect(self.cancel_edit) self.edit_widget.returnPressed.connect(self.confirm_edit) - self.addWidget(self.edit_widget) + self.number_area.addWidget(self.edit_widget) font = QtGui.QFont() font.setPointSize(60) self.edit_widget.setFont(font) - self.setCurrentWidget(self.lcd_widget) + unit_font = QtGui.QFont() + unit_font.setPointSize(20) + self.unit_area.setFont(unit_font) + + self.number_area.setCurrentWidget(self.lcd_widget) def start_edit(self): # QLCDNumber value property contains the value of zero @@ -51,22 +65,32 @@ class NumberWidget(QtWidgets.QStackedWidget): self.edit_widget.setText(str(self.lcd_widget.value())) self.edit_widget.selectAll() self.edit_widget.setFocus() - self.setCurrentWidget(self.edit_widget) + self.number_area.setCurrentWidget(self.edit_widget) def confirm_edit(self): - value = float(self.edit_widget.text()) - self.req.set_dataset(self.dataset_name, value) - self.setCurrentWidget(self.lcd_widget) + scale = scale_from_metadata(self.metadata) + val = float(self.edit_widget.text()) + val *= scale + self.req.set_dataset(self.dataset_name, val, **self.metadata) + self.number_area.setCurrentWidget(self.lcd_widget) def cancel_edit(self): - self.setCurrentWidget(self.lcd_widget) + self.number_area.setCurrentWidget(self.lcd_widget) def data_changed(self, value, metadata, persist, mods): try: - n = float(value[self.dataset_name]) + self.metadata = metadata[self.dataset_name] + # This applet will degenerate other scalar types to native float on edit + # Use the dashboard ChangeEditDialog for consistent type casting + val = float(value[self.dataset_name]) + scale = scale_from_metadata(self.metadata) + val 
/= scale except (KeyError, ValueError, TypeError): - n = "---" - self.lcd_widget.display(n) + val = "---" + + unit = self.metadata.get("unit", "") + self.unit_area.setText(unit) + self.lcd_widget.display(val) def main(): diff --git a/artiq/applets/plot_xy.py b/artiq/applets/plot_xy.py index d7d67803b..df3dc2aaf 100755 --- a/artiq/applets/plot_xy.py +++ b/artiq/applets/plot_xy.py @@ -24,11 +24,11 @@ class XYPlot(pyqtgraph.PlotWidget): y = value[self.args.y] except KeyError: return - x = value.get(self.args.x, (False, None)) + x = value.get(self.args.x) if x is None: x = np.arange(len(y)) - error = value.get(self.args.error, (False, None)) - fit = value.get(self.args.fit, (False, None)) + error = value.get(self.args.error) + fit = value.get(self.args.fit) if not len(y) or len(y) != len(x): self.mismatch['X values'] = True diff --git a/artiq/applets/simple.py b/artiq/applets/simple.py index cd980bb34..fcc5a4217 100644 --- a/artiq/applets/simple.py +++ b/artiq/applets/simple.py @@ -42,13 +42,13 @@ class _AppletRequestInterface: """ raise NotImplementedError - def set_argument_value(self, expurl, name, value): + def set_argument_value(self, expurl, key, value): """ Temporarily set the value of an argument in a experiment in the dashboard. The value resets to default value when recomputing the argument. :param expurl: Experiment URL identifying the experiment in the dashboard. Example: 'repo:ArgumentsDemo'. - :param name: Name of the argument in the experiment. + :param key: Name of the argument in the experiment. :param value: Object representing the new temporary value of the argument. For ``Scannable`` arguments, this parameter should be a ``ScanObject``. The type of the ``ScanObject`` will be set as the selected type when this function is called. """ @@ -77,10 +77,10 @@ class AppletRequestIPC(_AppletRequestInterface): mod = {"action": "append", "path": [key, 1], "x": value} self.ipc.update_dataset(mod) - def set_argument_value(self, expurl, name, value): + def set_argument_value(self, expurl, key, value): if isinstance(value, ScanObject): value = value.describe() - self.ipc.set_argument_value(expurl, name, value) + self.ipc.set_argument_value(expurl, key, value) class AppletRequestRPC(_AppletRequestInterface): @@ -182,10 +182,10 @@ class AppletIPCClient(AsyncioChildComm): self.write_pyon({"action": "update_dataset", "mod": mod}) - def set_argument_value(self, expurl, name, value): + def set_argument_value(self, expurl, key, value): self.write_pyon({"action": "set_argument_value", "expurl": expurl, - "name": name, + "key": key, "value": value}) @@ -255,7 +255,7 @@ class SimpleApplet: if self.embed is None: dataset_ctl = RPCClient() self.loop.run_until_complete(dataset_ctl.connect_rpc( - self.args.server, self.args.port_control, "master_dataset_db")) + self.args.server, self.args.port_control, "dataset_db")) self.req = AppletRequestRPC(self.loop, dataset_ctl) else: self.req = AppletRequestIPC(self.ipc) diff --git a/artiq/browser/datasets.py b/artiq/browser/datasets.py index 815f7acbf..6e5473786 100644 --- a/artiq/browser/datasets.py +++ b/artiq/browser/datasets.py @@ -33,7 +33,7 @@ class DatasetCtl: try: remote = RPCClient() await remote.connect_rpc(self.master_host, self.master_port, - "master_dataset_db") + "dataset_db") try: if op_name == "set": await remote.set(key_or_mod, value, persist, metadata) diff --git a/artiq/browser/experiments.py b/artiq/browser/experiments.py index c34ea5832..c5cf9c54c 100644 --- a/artiq/browser/experiments.py +++ b/artiq/browser/experiments.py @@ -10,87 +10,26 
@@ import h5py from sipyco import pyon from artiq import __artiq_dir__ as artiq_dir -from artiq.gui.tools import (LayoutWidget, WheelFilter, - log_level_to_name, get_open_file_name) -from artiq.gui.entries import procdesc_to_entry +from artiq.gui.tools import (LayoutWidget, log_level_to_name, get_open_file_name) +from artiq.gui.entries import procdesc_to_entry, EntryTreeWidget from artiq.master.worker import Worker, log_worker_exception logger = logging.getLogger(__name__) -class _ArgumentEditor(QtWidgets.QTreeWidget): +class _ArgumentEditor(EntryTreeWidget): def __init__(self, dock): - QtWidgets.QTreeWidget.__init__(self) - self.setColumnCount(3) - self.header().setStretchLastSection(False) - try: - set_resize_mode = self.header().setSectionResizeMode - except AttributeError: - set_resize_mode = self.header().setResizeMode - set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents) - set_resize_mode(1, QtWidgets.QHeaderView.Stretch) - set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents) - self.header().setVisible(False) - self.setSelectionMode(self.NoSelection) - self.setHorizontalScrollMode(self.ScrollPerPixel) - self.setVerticalScrollMode(self.ScrollPerPixel) - - self.setStyleSheet("QTreeWidget {background: " + - self.palette().midlight().color().name() + " ;}") - - self.viewport().installEventFilter(WheelFilter(self.viewport(), True)) - - self._groups = dict() - self._arg_to_widgets = dict() + EntryTreeWidget.__init__(self) self._dock = dock if not self._dock.arguments: - self.addTopLevelItem(QtWidgets.QTreeWidgetItem(["No arguments"])) - gradient = QtGui.QLinearGradient( - 0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing()*2.5) - gradient.setColorAt(0, self.palette().base().color()) - gradient.setColorAt(1, self.palette().midlight().color()) + self.insertTopLevelItem(0, QtWidgets.QTreeWidgetItem(["No arguments"])) for name, argument in self._dock.arguments.items(): - widgets = dict() - self._arg_to_widgets[name] = widgets + self.set_argument(name, argument) - entry = procdesc_to_entry(argument["desc"])(argument) - widget_item = QtWidgets.QTreeWidgetItem([name]) - if argument["tooltip"]: - widget_item.setToolTip(0, argument["tooltip"]) - widgets["entry"] = entry - widgets["widget_item"] = widget_item + self.quickStyleClicked.connect(self._dock._run_clicked) - for col in range(3): - widget_item.setBackground(col, gradient) - font = widget_item.font(0) - font.setBold(True) - widget_item.setFont(0, font) - - if argument["group"] is None: - self.addTopLevelItem(widget_item) - else: - self._get_group(argument["group"]).addChild(widget_item) - fix_layout = LayoutWidget() - widgets["fix_layout"] = fix_layout - fix_layout.addWidget(entry) - self.setItemWidget(widget_item, 1, fix_layout) - - recompute_argument = QtWidgets.QToolButton() - recompute_argument.setToolTip("Re-run the experiment's build " - "method and take the default value") - recompute_argument.setIcon( - QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_BrowserReload)) - recompute_argument.clicked.connect( - partial(self._recompute_argument_clicked, name)) - fix_layout = LayoutWidget() - fix_layout.addWidget(recompute_argument) - self.setItemWidget(widget_item, 2, fix_layout) - - widget_item = QtWidgets.QTreeWidgetItem() - self.addTopLevelItem(widget_item) recompute_arguments = QtWidgets.QPushButton("Recompute all arguments") recompute_arguments.setIcon( QtWidgets.QApplication.style().standardIcon( @@ -100,7 +39,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): load = QtWidgets.QPushButton("Set 
arguments from HDF5") load.setToolTip("Set arguments from currently selected HDF5 file") load.setIcon(QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_DialogApplyButton)) + QtWidgets.QStyle.SP_DialogApplyButton)) load.clicked.connect(self._load_clicked) buttons = LayoutWidget() @@ -108,21 +47,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): buttons.addWidget(load, 1, 2) for i, s in enumerate((1, 0, 0, 1)): buttons.layout.setColumnStretch(i, s) - self.setItemWidget(widget_item, 1, buttons) - - def _get_group(self, name): - if name in self._groups: - return self._groups[name] - group = QtWidgets.QTreeWidgetItem([name]) - for col in range(3): - group.setBackground(col, self.palette().mid()) - group.setForeground(col, self.palette().brightText()) - font = group.font(col) - font.setBold(True) - group.setFont(col, font) - self.addTopLevelItem(group) - self._groups[name] = group - return group + self.setItemWidget(self.bottom_item, 1, buttons) def _load_clicked(self): asyncio.ensure_future(self._dock.load_hdf5_task()) @@ -130,8 +55,8 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): def _recompute_arguments_clicked(self): asyncio.ensure_future(self._dock._recompute_arguments()) - def _recompute_argument_clicked(self, name): - asyncio.ensure_future(self._recompute_argument(name)) + def reset_entry(self, key): + asyncio.ensure_future(self._recompute_argument(key)) async def _recompute_argument(self, name): try: @@ -146,29 +71,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): state = procdesc_to_entry(procdesc).default_state(procdesc) argument["desc"] = procdesc argument["state"] = state - - widgets = self._arg_to_widgets[name] - - widgets["entry"].deleteLater() - widgets["entry"] = procdesc_to_entry(procdesc)(argument) - widgets["fix_layout"] = LayoutWidget() - widgets["fix_layout"].addWidget(widgets["entry"]) - self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"]) - self.updateGeometries() - - def save_state(self): - expanded = [] - for k, v in self._groups.items(): - if v.isExpanded(): - expanded.append(k) - return {"expanded": expanded} - - def restore_state(self, state): - for e in state["expanded"]: - try: - self._groups[e].setExpanded(True) - except KeyError: - pass + self.update_argument(name, argument) log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] @@ -277,8 +180,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): state = self.argeditor.save_state() self.argeditor.deleteLater() self.argeditor = _ArgumentEditor(self) - self.argeditor.restore_state(state) self.layout.addWidget(self.argeditor, 0, 0, 1, 5) + self.argeditor.restore_state(state) async def load_hdf5_task(self, filename=None): if filename is None: @@ -466,6 +369,8 @@ class ExperimentsArea(QtWidgets.QMdiArea): def initialize_submission_arguments(self, arginfo): arguments = OrderedDict() for name, (procdesc, group, tooltip) in arginfo.items(): + if procdesc["ty"] == "EnumerationValue" and procdesc["quickstyle"]: + procdesc["quickstyle"] = False state = procdesc_to_entry(procdesc).default_state(procdesc) arguments[name] = { "desc": procdesc, @@ -508,5 +413,9 @@ class ExperimentsArea(QtWidgets.QMdiArea): self.open_experiments.append(dock) return dock + def set_argument_value(self, expurl, name, value): + logger.warning("Unable to set argument '%s', dropping change. 
" + "'set_argument_value' not supported in browser.", name) + def on_dock_closed(self, dock): self.open_experiments.remove(dock) diff --git a/artiq/build_soc.py b/artiq/build_soc.py index f51c8885a..0e6deb499 100644 --- a/artiq/build_soc.py +++ b/artiq/build_soc.py @@ -71,7 +71,6 @@ def build_artiq_soc(soc, argdict): if not soc.config["DRTIO_ROLE"] == "satellite": builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime")) else: - # Assume DRTIO satellite. builder.add_software_package("satman", os.path.join(firmware_dir, "satman")) try: builder.build() diff --git a/artiq/coredevice/ad53xx.py b/artiq/coredevice/ad53xx.py index 9d746fed3..e1b29c5a5 100644 --- a/artiq/coredevice/ad53xx.py +++ b/artiq/coredevice/ad53xx.py @@ -243,7 +243,7 @@ class AD53xx: def write_gain_mu(self, channel: int32, gain: int32 = 0xffff): """Program the gain register for a DAC channel. - The DAC output is not updated until LDAC is pulsed (see :meth load:). + The DAC output is not updated until LDAC is pulsed (see :meth:`load`). This method advances the timeline by the duration of one SPI transfer. :param gain: 16-bit gain register value (default: 0xffff) @@ -255,7 +255,7 @@ class AD53xx: def write_offset_mu(self, channel: int32, offset: int32 = 0x8000): """Program the offset register for a DAC channel. - The DAC output is not updated until LDAC is pulsed (see :meth load:). + The DAC output is not updated until LDAC is pulsed (see :meth:`load`). This method advances the timeline by the duration of one SPI transfer. :param offset: 16-bit offset register value (default: 0x8000) @@ -268,7 +268,7 @@ class AD53xx: """Program the DAC offset voltage for a channel. An offset of +V can be used to trim out a DAC offset error of -V. - The DAC output is not updated until LDAC is pulsed (see :meth load:). + The DAC output is not updated until LDAC is pulsed (see :meth:`load`). This method advances the timeline by the duration of one SPI transfer. :param voltage: the offset voltage @@ -280,7 +280,7 @@ class AD53xx: def write_dac_mu(self, channel: int32, value: int32): """Program the DAC input register for a channel. - The DAC output is not updated until LDAC is pulsed (see :meth load:). + The DAC output is not updated until LDAC is pulsed (see :meth:`load`). This method advances the timeline by the duration of one SPI transfer. """ self.bus.write( @@ -290,7 +290,7 @@ class AD53xx: def write_dac(self, channel: int32, voltage: float): """Program the DAC output voltage for a channel. - The DAC output is not updated until LDAC is pulsed (see :meth load:). + The DAC output is not updated until LDAC is pulsed (see :meth:`load`). This method advances the timeline by the duration of one SPI transfer. """ self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs, @@ -323,7 +323,7 @@ class AD53xx: If no LDAC device was defined, the LDAC pulse is skipped. - See :meth load:. + See :meth:`load`. :param values: list of DAC values to program :param channels: list of DAC channels to program. If not specified, @@ -366,7 +366,7 @@ class AD53xx: """Two-point calibration of a DAC channel. Programs the offset and gain register to trim out DAC errors. Does not - take effect until LDAC is pulsed (see :meth load:). + take effect until LDAC is pulsed (see :meth:`load`). Calibration consists of measuring the DAC output voltage for a channel with the DAC set to zero-scale (0x0000) and full-scale (0xffff). 
diff --git a/artiq/coredevice/ad9910.py b/artiq/coredevice/ad9910.py index 39d2adfa6..be4f3afd4 100644 --- a/artiq/coredevice/ad9910.py +++ b/artiq/coredevice/ad9910.py @@ -1024,7 +1024,7 @@ class AD9910: """ if not self.cpld.sync_div: raise ValueError("parent cpld does not drive SYNC") - search_span = 31 + search_span = 13 # FIXME https://github.com/sinara-hw/Urukul/issues/16 # should both be 2-4 once kasli sync_in jitter is identified min_window = 0 diff --git a/artiq/coredevice/adf5356.py b/artiq/coredevice/adf5356.py index 11af0d6a0..29d7820ef 100644 --- a/artiq/coredevice/adf5356.py +++ b/artiq/coredevice/adf5356.py @@ -94,10 +94,11 @@ class ADF5356: :param blind: Do not attempt to verify presence. """ + self.sync() if not blind: # MUXOUT = VDD self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 1) - self.sync() + self.write(self.regs[4]) self.core.delay(1000. * us) if not self.read_muxout(): raise ValueError("MUXOUT not high") @@ -105,7 +106,7 @@ class ADF5356: # MUXOUT = DGND self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 2) - self.sync() + self.write(self.regs[4]) self.core.delay(1000. * us) if self.read_muxout(): raise ValueError("MUXOUT not low") @@ -113,8 +114,7 @@ class ADF5356: # MUXOUT = digital lock-detect self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 6) - else: - self.sync() + self.write(self.regs[4]) @kernel def set_att(self, att: float): diff --git a/artiq/coredevice/almazny.py b/artiq/coredevice/almazny.py index 2e9dee6eb..129c8d106 100644 --- a/artiq/coredevice/almazny.py +++ b/artiq/coredevice/almazny.py @@ -130,11 +130,6 @@ class AlmaznyLegacy: ) self.core.delay(100. * us) - @kernel - def _update_all_registers(self): - for i in range(4): - self._update_register(i) - @nac3 class AlmaznyChannel: diff --git a/artiq/coredevice/comm_analyzer.py b/artiq/coredevice/comm_analyzer.py index 09ad3618f..9fad6666a 100644 --- a/artiq/coredevice/comm_analyzer.py +++ b/artiq/coredevice/comm_analyzer.py @@ -2,15 +2,22 @@ from operator import itemgetter from collections import namedtuple from itertools import count from contextlib import contextmanager +from sipyco import keepalive +import asyncio from enum import Enum import struct import logging import socket +import math logger = logging.getLogger(__name__) +DEFAULT_REF_PERIOD = 1e-9 +ANALYZER_MAGIC = b"ARTIQ Analyzer Proxy\n" + + class MessageType(Enum): output = 0b00 input = 0b01 @@ -34,6 +41,13 @@ class ExceptionType(Enum): i_overflow = 0b100001 +class WaveformType(Enum): + ANALOG = 0 + BIT = 1 + VECTOR = 2 + LOG = 3 + + def get_analyzer_dump(host, port=1382): sock = socket.create_connection((host, port)) try: @@ -104,6 +118,8 @@ def decode_dump(data): (sent_bytes, total_byte_count, error_occurred, log_channel, dds_onehot_sel) = parts + logger.debug("analyzer dump has length %d", sent_bytes) + expected_len = sent_bytes + 15 if expected_len != len(data): raise ValueError("analyzer dump has incorrect length " @@ -115,15 +131,83 @@ def decode_dump(data): if total_byte_count > sent_bytes: logger.info("analyzer ring buffer has wrapped %d times", total_byte_count//sent_bytes) + if sent_bytes == 0: + logger.warning("analyzer dump is empty") position = 15 messages = [] for _ in range(sent_bytes//32): messages.append(decode_message(data[position:position+32])) position += 32 + + if len(messages) == 1 and isinstance(messages[0], StoppedMessage): + logger.warning("analyzer dump is empty aside from stop message") + return DecodedDump(log_channel, bool(dds_onehot_sel), messages) +# simplified from sipyco broadcast 
Receiver +class AnalyzerProxyReceiver: + def __init__(self, receive_cb, disconnect_cb=None): + self.receive_cb = receive_cb + self.disconnect_cb = disconnect_cb + + async def connect(self, host, port): + self.reader, self.writer = \ + await keepalive.async_open_connection(host, port) + try: + line = await self.reader.readline() + assert line == ANALYZER_MAGIC + self.receive_task = asyncio.create_task(self._receive_cr()) + except: + self.writer.close() + del self.reader + del self.writer + raise + + async def close(self): + self.disconnect_cb = None + try: + self.receive_task.cancel() + try: + await self.receive_task + except asyncio.CancelledError: + pass + finally: + self.writer.close() + del self.reader + del self.writer + + async def _receive_cr(self): + try: + while True: + endian_byte = await self.reader.read(1) + if endian_byte == b"E": + endian = '>' + elif endian_byte == b"e": + endian = '<' + elif endian_byte == b"": + # EOF reached, connection lost + return + else: + raise ValueError + payload_length_word = await self.reader.readexactly(4) + payload_length = struct.unpack(endian + "I", payload_length_word)[0] + if payload_length > 10 * 512 * 1024: + # 10x buffer size of firmware + raise ValueError + + # The remaining header length is 11 bytes. + remaining_data = await self.reader.readexactly(payload_length + 11) + data = endian_byte + payload_length_word + remaining_data + self.receive_cb(data) + except Exception: + logger.error("analyzer receiver connection terminating with exception", exc_info=True) + finally: + if self.disconnect_cb is not None: + self.disconnect_cb() + + def vcd_codes(): codechars = [chr(i) for i in range(33, 127)] for n in count(): @@ -150,38 +234,129 @@ class VCDChannel: integer_cast = struct.unpack(">Q", struct.pack(">d", x))[0] self.set_value("{:064b}".format(integer_cast)) + def set_log(self, log_message): + value = "" + for c in log_message: + value += "{:08b}".format(ord(c)) + self.set_value(value) + class VCDManager: def __init__(self, fileobj): self.out = fileobj self.codes = vcd_codes() self.current_time = None + self.start_time = 0 def set_timescale_ps(self, timescale): self.out.write("$timescale {}ps $end\n".format(round(timescale))) - def get_channel(self, name, width): + def get_channel(self, name, width, ty, precision=0, unit=""): code = next(self.codes) self.out.write("$var wire {width} {code} {name} $end\n" .format(name=name, code=code, width=width)) return VCDChannel(self.out, code) @contextmanager - def scope(self, name): - self.out.write("$scope module {} $end\n".format(name)) + def scope(self, scope, name): + self.out.write("$scope module {}/{} $end\n".format(scope, name)) yield self.out.write("$upscope $end\n") def set_time(self, time): + time -= self.start_time if time != self.current_time: self.out.write("#{}\n".format(time)) self.current_time = time + def set_start_time(self, time): + self.start_time = time + + def set_end_time(self, time): + pass + + +class WaveformManager: + def __init__(self): + self.current_time = 0 + self.start_time = 0 + self.end_time = 0 + self.channels = list() + self.current_scope = "" + self.trace = {"timescale": 1, "stopped_x": None, "logs": dict(), "data": dict()} + + def set_timescale_ps(self, timescale): + self.trace["timescale"] = int(timescale) + + def get_channel(self, name, width, ty, precision=0, unit=""): + if ty == WaveformType.LOG: + self.trace["logs"][self.current_scope + name] = (ty, width, precision, unit) + data = self.trace["data"][self.current_scope + name] = list() + channel = 
WaveformChannel(data, self.current_time) + self.channels.append(channel) + return channel + + @contextmanager + def scope(self, scope, name): + old_scope = self.current_scope + self.current_scope = scope + "/" + yield + self.current_scope = old_scope + + def set_time(self, time): + time -= self.start_time + for channel in self.channels: + channel.set_time(time) + + def set_start_time(self, time): + self.start_time = time + if self.trace["stopped_x"] is not None: + self.trace["stopped_x"] = self.end_time - self.start_time + + def set_end_time(self, time): + self.end_time = time + self.trace["stopped_x"] = self.end_time - self.start_time + + +class WaveformChannel: + def __init__(self, data, current_time): + self.data = data + self.current_time = current_time + + def set_value(self, value): + self.data.append((self.current_time, value)) + + def set_value_double(self, x): + self.data.append((self.current_time, x)) + + def set_time(self, time): + self.current_time = time + + def set_log(self, log_message): + self.data.append((self.current_time, log_message)) + + +class ChannelSignatureManager: + def __init__(self): + self.current_scope = "" + self.channels = dict() + + def get_channel(self, name, width, ty, precision=0, unit=""): + self.channels[self.current_scope + name] = (ty, width, precision, unit) + return None + + @contextmanager + def scope(self, scope, name): + old_scope = self.current_scope + self.current_scope = scope + "/" + yield + self.current_scope = old_scope + class TTLHandler: - def __init__(self, vcd_manager, name): + def __init__(self, manager, name): self.name = name - self.channel_value = vcd_manager.get_channel("ttl/" + name, 1) + self.channel_value = manager.get_channel("ttl/" + name, 1, ty=WaveformType.BIT) self.last_value = "X" self.oe = True @@ -206,11 +381,12 @@ class TTLHandler: class TTLClockGenHandler: - def __init__(self, vcd_manager, name, ref_period): + def __init__(self, manager, name, ref_period): self.name = name self.ref_period = ref_period - self.channel_frequency = vcd_manager.get_channel( - "ttl_clkgen/" + name, 64) + precision = max(0, math.ceil(math.log10(2**24 * ref_period) + 6)) + self.channel_frequency = manager.get_channel( + "ttl_clkgen/" + name, 64, ty=WaveformType.ANALOG, precision=precision, unit="MHz") def process_message(self, message): if isinstance(message, OutputMessage): @@ -221,8 +397,8 @@ class TTLClockGenHandler: class DDSHandler: - def __init__(self, vcd_manager, onehot_sel, sysclk): - self.vcd_manager = vcd_manager + def __init__(self, manager, onehot_sel, sysclk): + self.manager = manager self.onehot_sel = onehot_sel self.sysclk = sysclk @@ -231,11 +407,18 @@ class DDSHandler: def add_dds_channel(self, name, dds_channel_nr): dds_channel = dict() - with self.vcd_manager.scope("dds/{}".format(name)): + frequency_precision = max(0, math.ceil(math.log10(2**32 / self.sysclk) + 6)) + phase_precision = max(0, math.ceil(math.log10(2**16))) + with self.manager.scope("dds", name): dds_channel["vcd_frequency"] = \ - self.vcd_manager.get_channel(name + "/frequency", 64) + self.manager.get_channel(name + "/frequency", 64, + ty=WaveformType.ANALOG, + precision=frequency_precision, + unit="MHz") dds_channel["vcd_phase"] = \ - self.vcd_manager.get_channel(name + "/phase", 64) + self.manager.get_channel(name + "/phase", 64, + ty=WaveformType.ANALOG, + precision=phase_precision) dds_channel["ftw"] = [None, None] dds_channel["pow"] = None self.dds_channels[dds_channel_nr] = dds_channel @@ -285,10 +468,10 @@ class DDSHandler: class WishboneHandler: - 
def __init__(self, vcd_manager, name, read_bit): + def __init__(self, manager, name, read_bit): self._reads = [] self._read_bit = read_bit - self.stb = vcd_manager.get_channel("{}/{}".format(name, "stb"), 1) + self.stb = manager.get_channel(name + "/stb", 1, ty=WaveformType.BIT) def process_message(self, message): self.stb.set_value("1") @@ -318,16 +501,17 @@ class WishboneHandler: class SPIMasterHandler(WishboneHandler): - def __init__(self, vcd_manager, name): + def __init__(self, manager, name): self.channels = {} - with vcd_manager.scope("spi/{}".format(name)): - super().__init__(vcd_manager, name, read_bit=0b100) + self.scope = "spi" + with manager.scope("spi", name): + super().__init__(manager, name, read_bit=0b100) for reg_name, reg_width in [ ("config", 32), ("chip_select", 16), ("write_length", 8), ("read_length", 8), ("write", 32), ("read", 32)]: - self.channels[reg_name] = vcd_manager.get_channel( - "{}/{}".format(name, reg_name), reg_width) + self.channels[reg_name] = manager.get_channel( + "{}/{}".format(name, reg_name), reg_width, ty=WaveformType.VECTOR) def process_write(self, address, data): if address == 0: @@ -352,11 +536,12 @@ class SPIMasterHandler(WishboneHandler): class SPIMaster2Handler(WishboneHandler): - def __init__(self, vcd_manager, name): + def __init__(self, manager, name): self._reads = [] self.channels = {} - with vcd_manager.scope("spi2/{}".format(name)): - self.stb = vcd_manager.get_channel("{}/{}".format(name, "stb"), 1) + self.scope = "spi2" + with manager.scope("spi2", name): + self.stb = manager.get_channel(name + "/stb", 1, ty=WaveformType.BIT) for reg_name, reg_width in [ ("flags", 8), ("length", 5), @@ -364,8 +549,8 @@ class SPIMaster2Handler(WishboneHandler): ("chip_select", 8), ("write", 32), ("read", 32)]: - self.channels[reg_name] = vcd_manager.get_channel( - "{}/{}".format(name, reg_name), reg_width) + self.channels[reg_name] = manager.get_channel( + "{}/{}".format(name, reg_name), reg_width, ty=WaveformType.VECTOR) def process_message(self, message): self.stb.set_value("1") @@ -413,11 +598,12 @@ def _extract_log_chars(data): class LogHandler: - def __init__(self, vcd_manager, vcd_log_channels): - self.vcd_channels = dict() - for name, maxlength in vcd_log_channels.items(): - self.vcd_channels[name] = vcd_manager.get_channel("log/" + name, - maxlength*8) + def __init__(self, manager, log_channels): + self.channels = dict() + for name, maxlength in log_channels.items(): + self.channels[name] = manager.get_channel("logs/" + name, + maxlength * 8, + ty=WaveformType.LOG) self.current_entry = "" def process_message(self, message): @@ -425,15 +611,12 @@ class LogHandler: self.current_entry += _extract_log_chars(message.data) if len(self.current_entry) > 1 and self.current_entry[-1] == "\x1D": channel_name, log_message = self.current_entry[:-1].split("\x1E", maxsplit=1) - vcd_value = "" - for c in log_message: - vcd_value += "{:08b}".format(ord(c)) - self.vcd_channels[channel_name].set_value(vcd_value) + self.channels[channel_name].set_log(log_message) self.current_entry = "" -def get_vcd_log_channels(log_channel, messages): - vcd_log_channels = dict() +def get_log_channels(log_channel, messages): + log_channels = dict() log_entry = "" for message in messages: if (isinstance(message, OutputMessage) @@ -442,13 +625,13 @@ def get_vcd_log_channels(log_channel, messages): if len(log_entry) > 1 and log_entry[-1] == "\x1D": channel_name, log_message = log_entry[:-1].split("\x1E", maxsplit=1) l = len(log_message) - if channel_name in vcd_log_channels: - if 
vcd_log_channels[channel_name] < l: - vcd_log_channels[channel_name] = l + if channel_name in log_channels: + if log_channels[channel_name] < l: + log_channels[channel_name] = l else: - vcd_log_channels[channel_name] = l + log_channels[channel_name] = l log_entry = "" - return vcd_log_channels + return log_channels def get_single_device_argument(devices, module, cls, argument): @@ -475,7 +658,7 @@ def get_dds_sysclk(devices): ("AD9914",), "sysclk") -def create_channel_handlers(vcd_manager, devices, ref_period, +def create_channel_handlers(manager, devices, ref_period, dds_sysclk, dds_onehot_sel): channel_handlers = dict() for name, desc in sorted(devices.items(), key=itemgetter(0)): @@ -483,11 +666,11 @@ def create_channel_handlers(vcd_manager, devices, ref_period, if (desc["module"] == "artiq.coredevice.ttl" and desc["class"] in {"TTLOut", "TTLInOut"}): channel = desc["arguments"]["channel"] - channel_handlers[channel] = TTLHandler(vcd_manager, name) + channel_handlers[channel] = TTLHandler(manager, name) if (desc["module"] == "artiq.coredevice.ttl" and desc["class"] == "TTLClockGen"): channel = desc["arguments"]["channel"] - channel_handlers[channel] = TTLClockGenHandler(vcd_manager, name, ref_period) + channel_handlers[channel] = TTLClockGenHandler(manager, name, ref_period) if (desc["module"] == "artiq.coredevice.ad9914" and desc["class"] == "AD9914"): dds_bus_channel = desc["arguments"]["bus_channel"] @@ -495,37 +678,60 @@ def create_channel_handlers(vcd_manager, devices, ref_period, if dds_bus_channel in channel_handlers: dds_handler = channel_handlers[dds_bus_channel] else: - dds_handler = DDSHandler(vcd_manager, dds_onehot_sel, dds_sysclk) + dds_handler = DDSHandler(manager, dds_onehot_sel, dds_sysclk) channel_handlers[dds_bus_channel] = dds_handler dds_handler.add_dds_channel(name, dds_channel) if (desc["module"] == "artiq.coredevice.spi2" and desc["class"] == "SPIMaster"): channel = desc["arguments"]["channel"] channel_handlers[channel] = SPIMaster2Handler( - vcd_manager, name) + manager, name) return channel_handlers +def get_channel_list(devices): + manager = ChannelSignatureManager() + create_channel_handlers(manager, devices, 1e-9, 3e9, False) + ref_period = get_ref_period(devices) + if ref_period is None: + ref_period = DEFAULT_REF_PERIOD + precision = max(0, math.ceil(math.log10(1 / ref_period) - 6)) + manager.get_channel("rtio_slack", 64, ty=WaveformType.ANALOG, precision=precision, unit="us") + return manager.channels + + def get_message_time(message): return getattr(message, "timestamp", message.rtio_counter) def decoded_dump_to_vcd(fileobj, devices, dump, uniform_interval=False): vcd_manager = VCDManager(fileobj) + decoded_dump_to_target(vcd_manager, devices, dump, uniform_interval) + + +def decoded_dump_to_waveform_data(devices, dump, uniform_interval=False): + manager = WaveformManager() + decoded_dump_to_target(manager, devices, dump, uniform_interval) + return manager.trace + + +def decoded_dump_to_target(manager, devices, dump, uniform_interval): ref_period = get_ref_period(devices) - if ref_period is not None: - if not uniform_interval: - vcd_manager.set_timescale_ps(ref_period*1e12) - else: + if ref_period is None: logger.warning("unable to determine core device ref_period") - ref_period = 1e-9 # guess + ref_period = DEFAULT_REF_PERIOD + if not uniform_interval: + manager.set_timescale_ps(ref_period*1e12) dds_sysclk = get_dds_sysclk(devices) if dds_sysclk is None: logger.warning("unable to determine DDS sysclk") dds_sysclk = 3e9 # guess if 
isinstance(dump.messages[-1], StoppedMessage): + m = dump.messages[-1] + end_time = get_message_time(m) + manager.set_end_time(end_time) messages = dump.messages[:-1] else: logger.warning("StoppedMessage missing") @@ -533,38 +739,39 @@ def decoded_dump_to_vcd(fileobj, devices, dump, uniform_interval=False): messages = sorted(messages, key=get_message_time) channel_handlers = create_channel_handlers( - vcd_manager, devices, ref_period, + manager, devices, ref_period, dds_sysclk, dump.dds_onehot_sel) - vcd_log_channels = get_vcd_log_channels(dump.log_channel, messages) + log_channels = get_log_channels(dump.log_channel, messages) channel_handlers[dump.log_channel] = LogHandler( - vcd_manager, vcd_log_channels) + manager, log_channels) if uniform_interval: # RTIO event timestamp in machine units - timestamp = vcd_manager.get_channel("timestamp", 64) + timestamp = manager.get_channel("timestamp", 64, ty=WaveformType.VECTOR) # RTIO time interval between this and the next timed event # in SI seconds - interval = vcd_manager.get_channel("interval", 64) - slack = vcd_manager.get_channel("rtio_slack", 64) + interval = manager.get_channel("interval", 64, ty=WaveformType.ANALOG) + slack = manager.get_channel("rtio_slack", 64, ty=WaveformType.ANALOG) - vcd_manager.set_time(0) + manager.set_time(0) start_time = 0 for m in messages: start_time = get_message_time(m) if start_time: break - - t0 = 0 + if not uniform_interval: + manager.set_start_time(start_time) + t0 = start_time for i, message in enumerate(messages): if message.channel in channel_handlers: - t = get_message_time(message) - start_time + t = get_message_time(message) if t >= 0: if uniform_interval: interval.set_value_double((t - t0)*ref_period) - vcd_manager.set_time(i) + manager.set_time(i) timestamp.set_value("{:064b}".format(t)) t0 = t else: - vcd_manager.set_time(t) + manager.set_time(t) channel_handlers[message.channel].process_message(message) if isinstance(message, OutputMessage): slack.set_value_double( diff --git a/artiq/coredevice/comm_kernel.py b/artiq/coredevice/comm_kernel.py index c58945fbd..61ac6cad4 100644 --- a/artiq/coredevice/comm_kernel.py +++ b/artiq/coredevice/comm_kernel.py @@ -28,6 +28,8 @@ class Request(Enum): RPCReply = 7 RPCException = 8 + SubkernelUpload = 9 + class Reply(Enum): SystemInfo = 2 @@ -316,6 +318,7 @@ class CommKernel: self.unpack_float64 = struct.Struct(self.endian + "d").unpack self.pack_header = struct.Struct(self.endian + "lB").pack + self.pack_int8 = struct.Struct(self.endian + "B").pack self.pack_int32 = struct.Struct(self.endian + "l").pack self.pack_int64 = struct.Struct(self.endian + "q").pack self.pack_float64 = struct.Struct(self.endian + "d").pack @@ -430,7 +433,7 @@ class CommKernel: self._write(chunk) def _write_int8(self, value): - self._write(value) + self._write(self.pack_int8(value)) def _write_int32(self, value): self._write(self.pack_int32(value)) @@ -490,6 +493,19 @@ class CommKernel: else: self._read_expect(Reply.LoadCompleted) + def upload_subkernel(self, kernel_library, id, destination): + self._write_header(Request.SubkernelUpload) + self._write_int32(id) + self._write_int8(destination) + self._write_bytes(kernel_library) + self._flush() + + self._read_header() + if self._read_type == Reply.LoadFailed: + raise LoadError(self._read_string()) + else: + self._read_expect(Reply.LoadCompleted) + def run(self): self._write_empty(Request.RunKernel) self._flush() @@ -557,12 +573,12 @@ class CommKernel: self._write_bool(value) elif tag == "i": check(isinstance(value, (int, 
numpy.int32)) and - (-2**31 <= value < 2**31), + (-2**31 <= value <= 2**31-1), lambda: "32-bit int") self._write_int32(value) elif tag == "I": check(isinstance(value, (int, numpy.int32, numpy.int64)) and - (-2**63 <= value < 2**63), + (-2**63 <= value <= 2**63-1), lambda: "64-bit int") self._write_int64(value) elif tag == "f": @@ -571,8 +587,8 @@ class CommKernel: self._write_float64(value) elif tag == "F": check(isinstance(value, Fraction) and - (-2**63 <= value.numerator < 2**63) and - (-2**63 <= value.denominator < 2**63), + (-2**63 <= value.numerator <= 2**63-1) and + (-2**63 <= value.denominator <= 2**63-1), lambda: "64-bit Fraction") self._write_int64(value.numerator) self._write_int64(value.denominator) diff --git a/artiq/coredevice/comm_moninj.py b/artiq/coredevice/comm_moninj.py index e02da526c..1de7f50df 100644 --- a/artiq/coredevice/comm_moninj.py +++ b/artiq/coredevice/comm_moninj.py @@ -94,9 +94,7 @@ class CommMonInj: self.injection_status_cb(channel, override, value) else: raise ValueError("Unknown packet type", ty) - except asyncio.CancelledError: - raise - except: + except Exception: logger.error("Moninj connection terminating with exception", exc_info=True) finally: if self.disconnect_cb is not None: diff --git a/artiq/coredevice/core.py b/artiq/coredevice/core.py index 6aecfd60f..d78473991 100644 --- a/artiq/coredevice/core.py +++ b/artiq/coredevice/core.py @@ -47,26 +47,45 @@ class Core: :param ref_multiplier: ratio between the RTIO fine timestamp frequency and the RTIO coarse timestamp frequency (e.g. SERDES multiplication factor). + :param analyzer_proxy: name of the core device analyzer proxy to trigger + (optional). + :param analyze_at_run_end: automatically trigger the core device analyzer + proxy after the Experiment's run stage finishes. """ ref_period: KernelInvariant[float] ref_multiplier: KernelInvariant[int32] coarse_ref_period: KernelInvariant[float] - def __init__(self, dmgr, host, ref_period, ref_multiplier=8, target="rv32g"): + def __init__(self, dmgr, + host, ref_period, + analyzer_proxy=None, analyze_at_run_end=False, + ref_multiplier=8, + target="rv32g", satellite_cpu_targets={}): self.ref_period = ref_period self.ref_multiplier = ref_multiplier + self.coarse_ref_period = ref_period*ref_multiplier if host is None: self.comm = CommKernelDummy() else: self.comm = CommKernel(host) + self.first_run = True self.dmgr = dmgr self.core = self self.comm.core = self + self.target = target self.analyzed = False self.compiler = nac3artiq.NAC3(target, artiq_builtins) + + self.analyzer_proxy_name = analyzer_proxy + self.analyze_at_run_end = analyze_at_run_end + self.analyzer_proxy = None + + def notify_run_end(self): + if self.analyze_at_run_end: + self.trigger_analyzer_proxy() def close(self): self.comm.close() @@ -95,6 +114,7 @@ class Core: def run(self, function, args, kwargs): embedding_map = EmbeddingMap() kernel_library = self.compile(function, args, kwargs, embedding_map) + if self.first_run: self.comm.check_system_info() self.first_run = False @@ -175,6 +195,24 @@ class Core: min_now = rtio_get_counter() + int64(125000) if now_mu() < min_now: at_mu(min_now) + + def trigger_analyzer_proxy(self): + """Causes the core analyzer proxy to retrieve a dump from the device, + and distribute it to all connected clients (typically dashboards). + + Returns only after the dump has been retrieved from the device. + + Raises IOError if no analyzer proxy has been configured, or if the + analyzer proxy fails. 
In the latter case, more details would be + available in the proxy log. + """ + if self.analyzer_proxy is None: + if self.analyzer_proxy_name is not None: + self.analyzer_proxy = self.dmgr.get(self.analyzer_proxy_name) + if self.analyzer_proxy is None: + raise IOError("No analyzer proxy configured") + else: + self.analyzer_proxy.trigger() class RunTool: diff --git a/artiq/coredevice/coredevice_generic.schema.json b/artiq/coredevice/coredevice_generic.schema.json index e05460a64..7994f95d4 100644 --- a/artiq/coredevice/coredevice_generic.schema.json +++ b/artiq/coredevice/coredevice_generic.schema.json @@ -49,6 +49,10 @@ "default": 125e6, "description": "RTIO frequency" }, + "enable_wrpll": { + "type": "boolean", + "default": false + }, "core_addr": { "type": "string", "format": "ipv4", @@ -308,8 +312,7 @@ "clk_div": { "type": "integer", "minimum": 0, - "maximum": 3, - "default": 0 + "maximum": 3 }, "pll_n": { "type": "integer" @@ -582,8 +585,11 @@ "maxItems": 2 }, "drtio_destination": { - "type": "integer", - "default": 4 + "type": "integer" + }, + "hw_rev": { + "type": "string", + "enum": ["v1.0", "v1.1"] } }, "required": ["ports"] diff --git a/artiq/coredevice/exceptions.py b/artiq/coredevice/exceptions.py index 8afc444a7..f5df489ef 100644 --- a/artiq/coredevice/exceptions.py +++ b/artiq/coredevice/exceptions.py @@ -44,6 +44,14 @@ class DMAError(Exception): artiq_builtin = True +@nac3 +class SubkernelError(Exception): + """Raised when an operation regarding a subkernel is invalid + or cannot be completed. + """ + artiq_builtin = True + + @nac3 class ClockFailure(Exception): """Raised when RTIO PLL has lost lock.""" diff --git a/artiq/coredevice/grabber.py b/artiq/coredevice/grabber.py index be713053c..d57db347b 100644 --- a/artiq/coredevice/grabber.py +++ b/artiq/coredevice/grabber.py @@ -1,7 +1,7 @@ from numpy import int32, int64 from artiq.language.core import * -from artiq.coredevice.rtio import rtio_output, rtio_input_data +from artiq.coredevice.rtio import rtio_output, rtio_input_timestamped_data from artiq.coredevice.core import Core @@ -12,6 +12,12 @@ class OutOfSyncException(Exception): pass +@nac3 +class GrabberTimeoutException(Exception): + """Raised when a timeout occurs while attempting to read Grabber RTIO input events.""" + pass + + @nac3 class Grabber: """Driver for the Grabber camera interface.""" @@ -86,10 +92,10 @@ class Grabber: self.gate_roi(0) @kernel - def input_mu(self, data: list[int32]): + def input_mu(self, data: list[int32], timeout_mu: int64 = int64(-1)): """ Retrieves the accumulated values for one frame from the ROI engines. - Blocks until values are available. + Blocks until values are available or timeout is reached. The input list must be a list of integers of the same length as there are enabled ROI engines. This method replaces the elements of the @@ -99,15 +105,26 @@ class Grabber: If the number of elements in the list does not match the number of ROI engines that produced output, an exception will be raised during this call or the next. + + If the timeout is reached before data is available, the exception + GrabberTimeoutException is raised. + + :param timeout_mu: Timestamp at which a timeout will occur. Set to -1 + (default) to disable timeout. 
""" channel = self.channel_base + 1 - sentinel = rtio_input_data(channel) + timestamp, sentinel = rtio_input_timestamped_data(timeout_mu, channel) + if timestamp == int64(-1): + raise GrabberTimeoutException("Timeout before Grabber frame available") if sentinel != self.sentinel: raise OutOfSyncException() for i in range(len(data)): - roi_output = rtio_input_data(channel) + timestamp, roi_output = rtio_input_timestamped_data(timeout_mu, channel) if roi_output == self.sentinel: raise OutOfSyncException() + if timestamp == int64(-1): + raise GrabberTimeoutException( + "Timeout retrieving ROIs (attempting to read more ROIs than enabled?)") data[i] = roi_output diff --git a/artiq/coredevice/kasli_i2c.py b/artiq/coredevice/kasli_i2c.py index 8485811d2..478985a92 100644 --- a/artiq/coredevice/kasli_i2c.py +++ b/artiq/coredevice/kasli_i2c.py @@ -34,14 +34,14 @@ class KasliEEPROM: port: KernelInvariant[int32] address: KernelInvariant[int32] - def __init__(self, dmgr, port, busno=0, - core_device="core", sw0_device="i2c_switch0", sw1_device="i2c_switch1"): + def __init__(self, dmgr, port, address=0xa0, busno=0, + core_device="core", sw0_device="i2c_switch0", sw1_device="i2c_switch1"): self.core = dmgr.get(core_device) self.sw0 = dmgr.get(sw0_device) self.sw1 = dmgr.get(sw1_device) self.busno = busno self.port = port_mapping[port] - self.address = 0xa0 # i2c 8 bit + self.address = address # i2c 8 bit @kernel def select(self): diff --git a/artiq/coredevice/sampler.py b/artiq/coredevice/sampler.py index 15372a7e1..6de2d8606 100644 --- a/artiq/coredevice/sampler.py +++ b/artiq/coredevice/sampler.py @@ -25,7 +25,7 @@ def adc_mu_to_volt(data: int32, gain: int32 = 0, corrected_fs: bool = True) -> f :param data: 16 bit signed ADC word :param gain: PGIA gain setting (0: 1, ..., 3: 1000) :param corrected_fs: use corrected ADC FS reference. - Should be True for Samplers' revisions after v2.1. False for v2.1 and earlier. + Should be True for Samplers' revisions after v2.1. False for v2.1 and earlier. :return: Voltage in Volts """ volt_per_lsb = 0. 
diff --git a/artiq/dashboard/applets_ccb.py b/artiq/dashboard/applets_ccb.py index 78c7ab673..29afbbc1d 100644 --- a/artiq/dashboard/applets_ccb.py +++ b/artiq/dashboard/applets_ccb.py @@ -36,7 +36,7 @@ class AppletsCCBDock(applets.AppletsDock): ccbp_group_menu.addAction(self.ccbp_group_create) actiongroup.addAction(self.ccbp_group_create) self.ccbp_group_enable = QtWidgets.QAction("Create and enable/disable applets", - self.table) + self.table) self.ccbp_group_enable.setCheckable(True) self.ccbp_group_enable.triggered.connect(lambda: self.set_ccbp("enable")) ccbp_group_menu.addAction(self.ccbp_group_enable) diff --git a/artiq/dashboard/datasets.py b/artiq/dashboard/datasets.py index 97146bb33..749828d61 100644 --- a/artiq/dashboard/datasets.py +++ b/artiq/dashboard/datasets.py @@ -8,7 +8,6 @@ from sipyco import pyon from artiq.tools import scale_from_metadata, short_format, exc_to_warning from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel from artiq.gui.models import DictSyncTreeSepModel -from artiq.gui.scientific_spinbox import ScientificSpinBox logger = logging.getLogger(__name__) @@ -81,8 +80,7 @@ class CreateEditDialog(QtWidgets.QDialog): t = value.dtype if value is np.ndarray else type(value) if scale != 1 and np.issubdtype(t, np.number): # degenerates to float type - value_edit_string = self.value_to_edit_string( - np.float64(value / scale)) + value_edit_string = self.value_to_edit_string(value / scale) self.unit_widget.setText(metadata.get('unit', '')) self.scale_widget.setText(str(metadata.get('scale', ''))) self.precision_widget.setText(str(metadata.get('precision', ''))) @@ -109,11 +107,13 @@ class CreateEditDialog(QtWidgets.QDialog): t = value.dtype if value is np.ndarray else type(value) if scale != 1 and np.issubdtype(t, np.number): # degenerates to float type - value = np.float64(value * scale) + value = float(value * scale) if self.key and self.key != key: - asyncio.ensure_future(exc_to_warning(rename(self.key, key, value, metadata, persist, self.dataset_ctl))) + asyncio.ensure_future(exc_to_warning(rename(self.key, key, value, metadata, persist, + self.dataset_ctl))) else: - asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(key, value, metadata=metadata, persist=persist))) + asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(key, value, metadata=metadata, + persist=persist))) self.key = key QtWidgets.QDialog.accept(self) @@ -163,7 +163,7 @@ class CreateEditDialog(QtWidgets.QDialog): class Model(DictSyncTreeSepModel): - def __init__(self, init): + def __init__(self, init): DictSyncTreeSepModel.__init__(self, ".", ["Dataset", "Persistent", "Value"], init) diff --git a/artiq/dashboard/experiments.py b/artiq/dashboard/experiments.py index 74c3496a9..10bfa5a0c 100644 --- a/artiq/dashboard/experiments.py +++ b/artiq/dashboard/experiments.py @@ -9,10 +9,10 @@ import h5py from sipyco import pyon -from artiq.gui.entries import procdesc_to_entry, ScanEntry +from artiq.gui.entries import procdesc_to_entry, EntryTreeWidget from artiq.gui.fuzzy_select import FuzzySelectWidget -from artiq.gui.tools import (LayoutWidget, WheelFilter, - log_level_to_name, get_open_file_name) +from artiq.gui.tools import (LayoutWidget, log_level_to_name, get_open_file_name) +from artiq.tools import parse_devarg_override, unparse_devarg_override logger = logging.getLogger(__name__) @@ -24,99 +24,23 @@ logger = logging.getLogger(__name__) # 2. 
file:@ -class _ArgumentEditor(QtWidgets.QTreeWidget): +class _ArgumentEditor(EntryTreeWidget): def __init__(self, manager, dock, expurl): self.manager = manager self.expurl = expurl - QtWidgets.QTreeWidget.__init__(self) - self.setColumnCount(3) - self.header().setStretchLastSection(False) - if hasattr(self.header(), "setSectionResizeMode"): - set_resize_mode = self.header().setSectionResizeMode - else: - set_resize_mode = self.header().setResizeMode - set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents) - set_resize_mode(1, QtWidgets.QHeaderView.Stretch) - set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents) - self.header().setVisible(False) - self.setSelectionMode(self.NoSelection) - self.setHorizontalScrollMode(self.ScrollPerPixel) - self.setVerticalScrollMode(self.ScrollPerPixel) - - self.setStyleSheet("QTreeWidget {background: " + - self.palette().midlight().color().name() + " ;}") - - self.viewport().installEventFilter(WheelFilter(self.viewport(), True)) - - self._groups = dict() - self._arg_to_widgets = dict() + EntryTreeWidget.__init__(self) arguments = self.manager.get_submission_arguments(self.expurl) if not arguments: - self.addTopLevelItem(QtWidgets.QTreeWidgetItem(["No arguments"])) + self.insertTopLevelItem(0, QtWidgets.QTreeWidgetItem(["No arguments"])) - gradient = QtGui.QLinearGradient( - 0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing()*2.5) - gradient.setColorAt(0, self.palette().base().color()) - gradient.setColorAt(1, self.palette().midlight().color()) for name, argument in arguments.items(): - widgets = dict() - self._arg_to_widgets[name] = widgets + self.set_argument(name, argument) - entry = procdesc_to_entry(argument["desc"])(argument) - widget_item = QtWidgets.QTreeWidgetItem([name]) - if argument["tooltip"]: - widget_item.setToolTip(0, argument["tooltip"]) - widgets["entry"] = entry - widgets["widget_item"] = widget_item + self.quickStyleClicked.connect(dock.submit_clicked) - for col in range(3): - widget_item.setBackground(col, gradient) - font = widget_item.font(0) - font.setBold(True) - widget_item.setFont(0, font) - - if argument["group"] is None: - self.addTopLevelItem(widget_item) - else: - self._get_group(argument["group"]).addChild(widget_item) - fix_layout = LayoutWidget() - widgets["fix_layout"] = fix_layout - fix_layout.addWidget(entry) - self.setItemWidget(widget_item, 1, fix_layout) - recompute_argument = QtWidgets.QToolButton() - recompute_argument.setToolTip("Re-run the experiment's build " - "method and take the default value") - recompute_argument.setIcon( - QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_BrowserReload)) - recompute_argument.clicked.connect( - partial(self._recompute_argument_clicked, name)) - - tool_buttons = LayoutWidget() - tool_buttons.addWidget(recompute_argument, 1) - - disable_other_scans = QtWidgets.QToolButton() - widgets["disable_other_scans"] = disable_other_scans - disable_other_scans.setIcon( - QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_DialogResetButton)) - disable_other_scans.setToolTip("Disable all other scans in " - "this experiment") - disable_other_scans.clicked.connect( - partial(self._disable_other_scans, name)) - tool_buttons.layout.setRowStretch(0, 1) - tool_buttons.layout.setRowStretch(3, 1) - tool_buttons.addWidget(disable_other_scans, 2) - if not isinstance(entry, ScanEntry): - disable_other_scans.setVisible(False) - - self.setItemWidget(widget_item, 2, tool_buttons) - - widget_item = QtWidgets.QTreeWidgetItem() - 
self.addTopLevelItem(widget_item) recompute_arguments = QtWidgets.QPushButton("Recompute all arguments") recompute_arguments.setIcon( QtWidgets.QApplication.style().standardIcon( @@ -135,41 +59,10 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): buttons.layout.setColumnStretch(1, 0) buttons.layout.setColumnStretch(2, 0) buttons.layout.setColumnStretch(3, 1) - self.setItemWidget(widget_item, 1, buttons) + self.setItemWidget(self.bottom_item, 1, buttons) - def _get_group(self, name): - if name in self._groups: - return self._groups[name] - group = QtWidgets.QTreeWidgetItem([name]) - for col in range(3): - group.setBackground(col, self.palette().mid()) - group.setForeground(col, self.palette().brightText()) - font = group.font(col) - font.setBold(True) - group.setFont(col, font) - self.addTopLevelItem(group) - self._groups[name] = group - return group - - def update_argument(self, name, argument): - widgets = self._arg_to_widgets[name] - - # Qt needs a setItemWidget() to handle layout correctly, - # simply replacing the entry inside the LayoutWidget - # results in a bug. - - widgets["entry"].deleteLater() - widgets["entry"] = procdesc_to_entry(argument["desc"])(argument) - widgets["disable_other_scans"].setVisible( - isinstance(widgets["entry"], ScanEntry)) - widgets["fix_layout"].deleteLater() - widgets["fix_layout"] = LayoutWidget() - widgets["fix_layout"].addWidget(widgets["entry"]) - self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"]) - self.updateGeometries() - - def _recompute_argument_clicked(self, name): - asyncio.ensure_future(self._recompute_argument(name)) + def reset_entry(self, key): + asyncio.ensure_future(self._recompute_argument(key)) async def _recompute_argument(self, name): try: @@ -186,30 +79,6 @@ class _ArgumentEditor(QtWidgets.QTreeWidget): argument["state"] = state self.update_argument(name, argument) - def _disable_other_scans(self, current_name): - for name, widgets in self._arg_to_widgets.items(): - if (name != current_name - and isinstance(widgets["entry"], ScanEntry)): - widgets["entry"].disable() - - def save_state(self): - expanded = [] - for k, v in self._groups.items(): - if v.isExpanded(): - expanded.append(k) - return { - "expanded": expanded, - "scroll": self.verticalScrollBar().value() - } - - def restore_state(self, state): - for e in state["expanded"]: - try: - self._groups[e].setExpanded(True) - except KeyError: - pass - self.verticalScrollBar().setValue(state["scroll"]) - # Hooks that allow user-supplied argument editors to react to imminent user # actions. Here, we always keep the manager-stored submission arguments # up-to-date, so no further action is required. 
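# --- Illustrative sketch, not part of the patch above ---
# The rewritten _ArgumentEditor delegates per-argument widget handling to the new
# EntryTreeWidget instead of managing QTreeWidget items by hand. A user-supplied
# argument editor (selected via an experiment's "argument_ui" name, as handled by
# ExperimentManager further down) can rely on the same small surface used here:
# set_argument() to add one entry per argument, and quickStyleClicked to submit
# when a quickstyle button is clicked. Minimal sketch only; the editor hooks and
# state save/restore mentioned above are omitted.
from artiq.gui.entries import EntryTreeWidget


class MinimalArgumentEditor(EntryTreeWidget):
    def __init__(self, manager, dock, expurl):
        EntryTreeWidget.__init__(self)
        self.manager = manager
        self.expurl = expurl
        # each argument dict carries the procdesc ("desc"), "group", "tooltip"
        # and the current entry state, as assembled by ExperimentManager
        for name, argument in manager.get_submission_arguments(expurl).items():
            self.set_argument(name, argument)
        # quickstyle EnumerationValue buttons submit the experiment directly
        self.quickStyleClicked.connect(dock.submit_clicked)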
@@ -229,7 +98,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): def __init__(self, manager, expurl): QtWidgets.QMdiSubWindow.__init__(self) qfm = QtGui.QFontMetrics(self.font()) - self.resize(100*qfm.averageCharWidth(), 30*qfm.lineSpacing()) + self.resize(100 * qfm.averageCharWidth(), 30 * qfm.lineSpacing()) self.setWindowTitle(expurl) self.setWindowIcon(QtWidgets.QApplication.style().standardIcon( QtWidgets.QStyle.SP_FileDialogContentsView)) @@ -262,17 +131,17 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): datetime.setDate(QtCore.QDate.currentDate()) else: datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch( - int(scheduling["due_date"]*1000))) + int(scheduling["due_date"] * 1000))) datetime_en.setChecked(scheduling["due_date"] is not None) def update_datetime(dt): - scheduling["due_date"] = dt.toMSecsSinceEpoch()/1000 + scheduling["due_date"] = dt.toMSecsSinceEpoch() / 1000 datetime_en.setChecked(True) datetime.dateTimeChanged.connect(update_datetime) def update_datetime_en(checked): if checked: - due_date = datetime.dateTime().toMSecsSinceEpoch()/1000 + due_date = datetime.dateTime().toMSecsSinceEpoch() / 1000 else: due_date = None scheduling["due_date"] = due_date @@ -305,7 +174,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): flush = self.flush flush.setToolTip("Flush the pipeline (of current- and higher-priority " "experiments) before starting the experiment") - self.layout.addWidget(flush, 2, 2, 1, 2) + self.layout.addWidget(flush, 2, 2) flush.setChecked(scheduling["flush"]) @@ -313,6 +182,20 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): scheduling["flush"] = bool(checked) flush.stateChanged.connect(update_flush) + devarg_override = QtWidgets.QComboBox() + devarg_override.setEditable(True) + devarg_override.lineEdit().setPlaceholderText("Override device arguments") + devarg_override.lineEdit().setClearButtonEnabled(True) + devarg_override.insertItem(0, "core:analyze_at_run_end=True") + self.layout.addWidget(devarg_override, 2, 3) + + devarg_override.setCurrentText(options["devarg_override"]) + + def update_devarg_override(text): + options["devarg_override"] = text + devarg_override.editTextChanged.connect(update_devarg_override) + self.devarg_override = devarg_override + log_level = QtWidgets.QComboBox() log_level.addItems(log_levels) log_level.setCurrentIndex(1) @@ -333,9 +216,11 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): if "repo_rev" in options: repo_rev = QtWidgets.QLineEdit() repo_rev.setPlaceholderText("current") - repo_rev_label = QtWidgets.QLabel("Revision:") + repo_rev.setClearButtonEnabled(True) + repo_rev_label = QtWidgets.QLabel("Rev / ref:") repo_rev_label.setToolTip("Experiment repository revision " - "(commit ID) to use") + "(commit ID) or reference (branch " + "or tag) to use") self.layout.addWidget(repo_rev_label, 3, 2) self.layout.addWidget(repo_rev, 3, 3) @@ -352,7 +237,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): submit = QtWidgets.QPushButton("Submit") submit.setIcon(QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_DialogOkButton)) + QtWidgets.QStyle.SP_DialogOkButton)) submit.setToolTip("Schedule the experiment (Ctrl+Return)") submit.setShortcut("CTRL+RETURN") submit.setSizePolicy(QtWidgets.QSizePolicy.Expanding, @@ -362,7 +247,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): reqterm = QtWidgets.QPushButton("Terminate instances") reqterm.setIcon(QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_DialogCancelButton)) + QtWidgets.QStyle.SP_DialogCancelButton)) reqterm.setToolTip("Request 
termination of instances (Ctrl+Backspace)") reqterm.setShortcut("CTRL+BACKSPACE") reqterm.setSizePolicy(QtWidgets.QSizePolicy.Expanding, @@ -404,8 +289,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): arginfo = expdesc["arginfo"] for k, v in overrides.items(): # Some values (e.g. scans) may have multiple defaults in a list - if ("default" in arginfo[k][0] - and isinstance(arginfo[k][0]["default"], list)): + if ("default" in arginfo[k][0] and isinstance(arginfo[k][0]["default"], list)): arginfo[k][0]["default"].insert(0, v) else: arginfo[k][0]["default"] = v @@ -416,8 +300,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): editor_class = self.manager.get_argument_editor_class(self.expurl) self.argeditor = editor_class(self.manager, self, self.expurl) - self.argeditor.restore_state(argeditor_state) self.layout.addWidget(self.argeditor, 0, 0, 1, 5) + self.argeditor.restore_state(argeditor_state) def contextMenuEvent(self, event): menu = QtWidgets.QMenu(self) @@ -465,11 +349,14 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow): return try: + if "devarg_override" in expid: + self.devarg_override.setCurrentText( + unparse_devarg_override(expid["devarg_override"])) self.log_level.setCurrentIndex(log_levels.index( log_level_to_name(expid["log_level"]))) - if ("repo_rev" in expid and - expid["repo_rev"] != "N/A" and - hasattr(self, "repo_rev")): + if "repo_rev" in expid and \ + expid["repo_rev"] != "N/A" and \ + hasattr(self, "repo_rev"): self.repo_rev.setText(expid["repo_rev"]) except: logger.error("Could not set submission options from HDF5 expid", @@ -638,7 +525,8 @@ class ExperimentManager: else: # mutated by _ExperimentDock options = { - "log_level": logging.WARNING + "log_level": logging.WARNING, + "devarg_override": "" } if expurl[:5] == "repo:": options["repo_rev"] = None @@ -658,7 +546,7 @@ class ExperimentManager: self.submission_arguments[expurl] = arguments self.argument_ui_names[expurl] = ui_name return arguments - + def set_argument_value(self, expurl, name, value): try: argument = self.submission_arguments[expurl][name] @@ -671,7 +559,8 @@ class ExperimentManager: if expurl in self.open_experiments.keys(): self.open_experiments[expurl].argeditor.update_argument(name, argument) except: - logger.warn("Failed to set value for argument \"{}\" in experiment: {}.".format(name, expurl), exc_info=1) + logger.warn("Failed to set value for argument \"{}\" in experiment: {}." 
+ .format(name, expurl), exc_info=1) def get_submission_arguments(self, expurl): if expurl in self.submission_arguments: @@ -681,8 +570,8 @@ class ExperimentManager: raise ValueError("Submission arguments must be preinitialized " "when not using repository") class_desc = self.explist[expurl[5:]] - return self.initialize_submission_arguments(expurl, - class_desc["arginfo"], class_desc.get("argument_ui", None)) + return self.initialize_submission_arguments(expurl, class_desc["arginfo"], + class_desc.get("argument_ui", None)) def open_experiment(self, expurl): if expurl in self.open_experiments: @@ -719,8 +608,13 @@ class ExperimentManager: del self.open_experiments[expurl] async def _submit_task(self, expurl, *args): - rid = await self.schedule_ctl.submit(*args) - logger.info("Submitted '%s', RID is %d", expurl, rid) + try: + rid = await self.schedule_ctl.submit(*args) + except KeyError: + expid = args[1] + logger.error("Submission failed - revision \"%s\" was not found", expid["repo_rev"]) + else: + logger.info("Submitted '%s', RID is %d", expurl, rid) def submit(self, expurl): file, class_name, _ = self.resolve_expurl(expurl) @@ -733,7 +627,14 @@ class ExperimentManager: entry_cls = procdesc_to_entry(argument["desc"]) argument_values[name] = entry_cls.state_to_value(argument["state"]) + try: + devarg_override = parse_devarg_override(options["devarg_override"]) + except: + logger.error("Failed to parse device argument overrides for %s", expurl) + return + expid = { + "devarg_override": devarg_override, "log_level": options["log_level"], "file": file, "class_name": class_name, @@ -770,9 +671,9 @@ class ExperimentManager: repo_match = "repo_rev" in expid else: repo_match = "repo_rev" not in expid - if (repo_match and - ("file" in expid and expid["file"] == file) and - expid["class_name"] == class_name): + if repo_match and \ + ("file" in expid and expid["file"] == file) and \ + expid["class_name"] == class_name: rids.append(rid) asyncio.ensure_future(self._request_term_multiple(rids)) @@ -792,7 +693,7 @@ class ExperimentManager: for class_name, class_desc in description.items(): expurl = "file:{}@{}".format(class_name, file) self.initialize_submission_arguments(expurl, class_desc["arginfo"], - class_desc.get("argument_ui", None)) + class_desc.get("argument_ui", None)) if expurl in self.open_experiments: self.open_experiments[expurl].close() self.open_experiment(expurl) @@ -826,6 +727,7 @@ class ExperimentManager: self.is_quick_open_shown = True dialog = _QuickOpenDialog(self) + def closed(): self.is_quick_open_shown = False dialog.closed.connect(closed) diff --git a/artiq/dashboard/explorer.py b/artiq/dashboard/explorer.py index f9dab39dd..f8c8df1f0 100644 --- a/artiq/dashboard/explorer.py +++ b/artiq/dashboard/explorer.py @@ -94,7 +94,7 @@ class _OpenFileDialog(QtWidgets.QDialog): else: break self.explorer.current_directory = \ - self.explorer.current_directory[:idx+1] + self.explorer.current_directory[:idx + 1] if self.explorer.current_directory == "/": self.explorer.current_directory = "" asyncio.ensure_future(self.refresh_view()) @@ -103,6 +103,7 @@ class _OpenFileDialog(QtWidgets.QDialog): asyncio.ensure_future(self.refresh_view()) else: file = self.explorer.current_directory + selected + async def open_task(): try: await self.exp_manager.open_file(file) @@ -232,7 +233,7 @@ class ExplorerDock(QtWidgets.QDockWidget): set_shortcut_menu = QtWidgets.QMenu() for i in range(12): - action = QtWidgets.QAction("F" + str(i+1), self.el) + action = QtWidgets.QAction("F" + str(i + 1), self.el) 
action.triggered.connect(partial(self.set_shortcut, i)) set_shortcut_menu.addAction(action) @@ -246,12 +247,14 @@ class ExplorerDock(QtWidgets.QDockWidget): scan_repository_action = QtWidgets.QAction("Scan repository HEAD", self.el) + def scan_repository(): asyncio.ensure_future(experiment_db_ctl.scan_repository_async()) scan_repository_action.triggered.connect(scan_repository) self.el.addAction(scan_repository_action) scan_ddb_action = QtWidgets.QAction("Scan device database", self.el) + def scan_ddb(): asyncio.ensure_future(device_db_ctl.scan()) scan_ddb_action.triggered.connect(scan_ddb) @@ -292,7 +295,7 @@ class ExplorerDock(QtWidgets.QDockWidget): if expname is not None: expurl = "repo:" + expname self.d_shortcuts.set_shortcut(nr, expurl) - logger.info("Set shortcut F%d to '%s'", nr+1, expurl) + logger.info("Set shortcut F%d to '%s'", nr + 1, expurl) def update_scanning(self, scanning): if scanning: diff --git a/artiq/dashboard/interactive_args.py b/artiq/dashboard/interactive_args.py new file mode 100644 index 000000000..d46e0d847 --- /dev/null +++ b/artiq/dashboard/interactive_args.py @@ -0,0 +1,155 @@ +import logging +import asyncio + +from PyQt5 import QtCore, QtWidgets, QtGui + +from artiq.gui.models import DictSyncModel +from artiq.gui.entries import EntryTreeWidget, procdesc_to_entry +from artiq.gui.tools import LayoutWidget + + +logger = logging.getLogger(__name__) + + +class Model(DictSyncModel): + def __init__(self, init): + DictSyncModel.__init__(self, ["RID", "Title", "Args"], init) + + def convert(self, k, v, column): + if column == 0: + return k + elif column == 1: + txt = ": " + v["title"] if v["title"] != "" else "" + return str(k) + txt + elif column == 2: + return v["arglist_desc"] + else: + raise ValueError + + def sort_key(self, k, v): + return k + + +class _InteractiveArgsRequest(EntryTreeWidget): + supplied = QtCore.pyqtSignal(int, dict) + cancelled = QtCore.pyqtSignal(int) + + def __init__(self, rid, arglist_desc): + EntryTreeWidget.__init__(self) + self.rid = rid + self.arguments = dict() + for key, procdesc, group, tooltip in arglist_desc: + self.arguments[key] = {"desc": procdesc, "group": group, "tooltip": tooltip} + self.set_argument(key, self.arguments[key]) + self.quickStyleClicked.connect(self.supply) + cancel_btn = QtWidgets.QPushButton("Cancel") + cancel_btn.setIcon(QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_DialogCancelButton)) + cancel_btn.clicked.connect(self.cancel) + supply_btn = QtWidgets.QPushButton("Supply") + supply_btn.setIcon(QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_DialogOkButton)) + supply_btn.clicked.connect(self.supply) + buttons = LayoutWidget() + buttons.addWidget(cancel_btn, 1, 1) + buttons.addWidget(supply_btn, 1, 2) + buttons.layout.setColumnStretch(0, 1) + buttons.layout.setColumnStretch(1, 0) + buttons.layout.setColumnStretch(2, 0) + buttons.layout.setColumnStretch(3, 1) + self.setItemWidget(self.bottom_item, 1, buttons) + + def supply(self): + argument_values = dict() + for key, argument in self.arguments.items(): + entry_cls = procdesc_to_entry(argument["desc"]) + argument_values[key] = entry_cls.state_to_value(argument["state"]) + self.supplied.emit(self.rid, argument_values) + + def cancel(self): + self.cancelled.emit(self.rid) + + +class _InteractiveArgsView(QtWidgets.QStackedWidget): + supplied = QtCore.pyqtSignal(int, dict) + cancelled = QtCore.pyqtSignal(int) + + def __init__(self): + QtWidgets.QStackedWidget.__init__(self) + self.tabs = QtWidgets.QTabWidget() + 
self.default_label = QtWidgets.QLabel("No pending interactive arguments requests.") + self.default_label.setAlignment(QtCore.Qt.AlignCenter) + font = QtGui.QFont(self.default_label.font()) + font.setItalic(True) + self.default_label.setFont(font) + self.addWidget(self.tabs) + self.addWidget(self.default_label) + self.model = Model({}) + + def setModel(self, model): + self.setCurrentIndex(1) + for i in range(self.tabs.count()): + widget = self.tabs.widget(i) + self.tabs.removeTab(i) + widget.deleteLater() + self.model = model + self.model.rowsInserted.connect(self.rowsInserted) + self.model.rowsRemoved.connect(self.rowsRemoved) + for i in range(self.model.rowCount(QtCore.QModelIndex())): + self._insert_widget(i) + + def _insert_widget(self, row): + rid = self.model.data(self.model.index(row, 0), QtCore.Qt.DisplayRole) + title = self.model.data(self.model.index(row, 1), QtCore.Qt.DisplayRole) + arglist_desc = self.model.data(self.model.index(row, 2), QtCore.Qt.DisplayRole) + inter_args_request = _InteractiveArgsRequest(rid, arglist_desc) + inter_args_request.supplied.connect(self.supplied) + inter_args_request.cancelled.connect(self.cancelled) + self.tabs.insertTab(row, inter_args_request, title) + + def rowsInserted(self, parent, first, last): + assert first == last + self.setCurrentIndex(0) + self._insert_widget(first) + + def rowsRemoved(self, parent, first, last): + assert first == last + widget = self.tabs.widget(first) + self.tabs.removeTab(first) + widget.deleteLater() + if self.tabs.count() == 0: + self.setCurrentIndex(1) + + +class InteractiveArgsDock(QtWidgets.QDockWidget): + def __init__(self, interactive_args_sub, interactive_args_rpc): + QtWidgets.QDockWidget.__init__(self, "Interactive Args") + self.setObjectName("Interactive Args") + self.setFeatures( + QtWidgets.QDockWidget.DockWidgetMovable | QtWidgets.QDockWidget.DockWidgetFloatable) + self.interactive_args_rpc = interactive_args_rpc + self.request_view = _InteractiveArgsView() + self.request_view.supplied.connect(self.supply) + self.request_view.cancelled.connect(self.cancel) + self.setWidget(self.request_view) + interactive_args_sub.add_setmodel_callback(self.request_view.setModel) + + def supply(self, rid, values): + asyncio.ensure_future(self._supply_task(rid, values)) + + async def _supply_task(self, rid, values): + try: + await self.interactive_args_rpc.supply(rid, values) + except Exception: + logger.error("failed to supply interactive arguments for experiment: %d", + rid, exc_info=True) + + def cancel(self, rid): + asyncio.ensure_future(self._cancel_task(rid)) + + async def _cancel_task(self, rid): + try: + await self.interactive_args_rpc.cancel(rid) + except Exception: + logger.error("failed to cancel interactive args request for experiment: %d", + rid, exc_info=True) diff --git a/artiq/dashboard/moninj.py b/artiq/dashboard/moninj.py index 3ee97b897..e83497289 100644 --- a/artiq/dashboard/moninj.py +++ b/artiq/dashboard/moninj.py @@ -2,23 +2,20 @@ import asyncio import logging import textwrap from collections import namedtuple +from functools import partial -from PyQt5 import QtCore, QtWidgets, QtGui +from PyQt5 import QtCore, QtWidgets -from sipyco.sync_struct import Subscriber - -from artiq.coredevice.comm_moninj import * -from artiq.coredevice.ad9910 import ( - _AD9910_REG_PROFILE0, _AD9910_REG_PROFILE7, - _AD9910_REG_FTW, _AD9910_REG_CFR1 -) -from artiq.coredevice.ad9912_reg import AD9912_POW1, AD9912_SER_CONF -from artiq.gui.tools import LayoutWidget -from artiq.gui.flowlayout import FlowLayout +from 
artiq.coredevice.comm_moninj import CommMonInj, TTLOverride, TTLProbe +from artiq.coredevice.ad9912_reg import AD9912_SER_CONF +from artiq.gui.tools import LayoutWidget, QDockWidgetCloseDetect, DoubleClickLineEdit +from artiq.gui.dndwidgets import VDragScrollArea, DragDropFlowLayoutWidget +from artiq.gui.models import DictSyncTreeSepModel logger = logging.getLogger(__name__) + class _CancellableLineEdit(QtWidgets.QLineEdit): def escapePressedConnect(self, cb): self.esc_cb = cb @@ -37,6 +34,7 @@ class _TTLWidget(QtWidgets.QFrame): self.channel = channel self.set_mode = dm.ttl_set_mode self.force_out = force_out + self.title = title self.setFrameShape(QtWidgets.QFrame.Box) self.setFrameShadow(QtWidgets.QFrame.Raised) @@ -133,7 +131,7 @@ class _TTLWidget(QtWidgets.QFrame): else: color = "" self.value.setText("{}".format( - color, value_s)) + color, value_s)) oe = self.cur_oe or self.force_out direction = "OUT" if oe else "IN" self.direction.setText("" + direction + "") @@ -148,40 +146,13 @@ class _TTLWidget(QtWidgets.QFrame): self.programmatic_change = False def sort_key(self): - return self.channel + return (0, self.channel, 0) + def uid(self): + return self.title -class _SimpleDisplayWidget(QtWidgets.QFrame): - def __init__(self, title): - QtWidgets.QFrame.__init__(self) - - self.setFrameShape(QtWidgets.QFrame.Box) - self.setFrameShadow(QtWidgets.QFrame.Raised) - - grid = QtWidgets.QGridLayout() - grid.setContentsMargins(0, 0, 0, 0) - grid.setHorizontalSpacing(0) - grid.setVerticalSpacing(0) - self.setLayout(grid) - label = QtWidgets.QLabel(title) - label.setAlignment(QtCore.Qt.AlignCenter) - grid.addWidget(label, 1, 1) - - self.value = QtWidgets.QLabel() - self.value.setAlignment(QtCore.Qt.AlignCenter) - grid.addWidget(self.value, 2, 1, 6, 1) - - grid.setRowStretch(1, 1) - grid.setRowStretch(2, 0) - grid.setRowStretch(3, 1) - - self.refresh_display() - - def refresh_display(self): - raise NotImplementedError - - def sort_key(self): - raise NotImplementedError + def to_model_path(self): + return "ttl/{}".format(self.title) class _DDSModel: @@ -191,7 +162,7 @@ class _DDSModel: self.cur_reg = 0 self.dds_type = dds_type self.is_urukul = dds_type in ["AD9910", "AD9912"] - + if dds_type == "AD9914": self.ftw_per_hz = 2**32 / ref_clk else: @@ -216,13 +187,15 @@ class _DDSModel: class _DDSWidget(QtWidgets.QFrame): - def __init__(self, dm, title, bus_channel=0, channel=0, dds_model=None): + def __init__(self, dm, title, bus_channel, channel, + dds_type, ref_clk, cpld=None, pll=1, clk_div=0): self.dm = dm self.bus_channel = bus_channel self.channel = channel self.dds_name = title self.cur_frequency = 0 - self.dds_model = dds_model + self.dds_model = _DDSModel(dds_type, ref_clk, cpld, pll, clk_div) + self.title = title QtWidgets.QFrame.__init__(self) @@ -283,7 +256,7 @@ class _DDSWidget(QtWidgets.QFrame): set_btn.setText("Set") set_btn.setToolTip("Set frequency") set_grid.addWidget(set_btn, 0, 1, 1, 1) - + # for urukuls also allow switching off RF if self.dds_model.is_urukul: off_btn = QtWidgets.QToolButton() @@ -327,18 +300,17 @@ class _DDSWidget(QtWidgets.QFrame): def set_clicked(self, set): self.data_stack.setCurrentIndex(1) self.button_stack.setCurrentIndex(1) - self.value_edit.setText("{:.7f}" - .format(self.cur_frequency/1e6)) + self.value_edit.setText("{:.7f}".format(self.cur_frequency / 1e6)) self.value_edit.setFocus() self.value_edit.selectAll() def off_clicked(self, set): self.dm.dds_channel_toggle(self.dds_name, self.dds_model, sw=False) - + def apply_changes(self, apply): 
self.data_stack.setCurrentIndex(0) self.button_stack.setCurrentIndex(0) - frequency = float(self.value_edit.text())*1e6 + frequency = float(self.value_edit.text()) * 1e6 self.dm.dds_set_frequency(self.dds_name, self.dds_model, frequency) def cancel_changes(self, cancel): @@ -347,28 +319,61 @@ class _DDSWidget(QtWidgets.QFrame): def refresh_display(self): self.cur_frequency = self.dds_model.cur_frequency - self.value_label.setText("{:.7f}" - .format(self.cur_frequency/1e6)) - self.value_edit.setText("{:.7f}" - .format(self.cur_frequency/1e6)) + self.value_label.setText("{:.7f}".format(self.cur_frequency / 1e6)) + self.value_edit.setText("{:.7f}".format(self.cur_frequency / 1e6)) def sort_key(self): - return (self.bus_channel, self.channel) + return (1, self.bus_channel, self.channel) + + def uid(self): + return self.title + + def to_model_path(self): + return "dds/{}".format(self.title) -class _DACWidget(_SimpleDisplayWidget): +class _DACWidget(QtWidgets.QFrame): def __init__(self, dm, spi_channel, channel, title): + QtWidgets.QFrame.__init__(self) self.spi_channel = spi_channel self.channel = channel self.cur_value = 0 - _SimpleDisplayWidget.__init__(self, "{} ch{}".format(title, channel)) + self.title = title + + self.setFrameShape(QtWidgets.QFrame.Box) + self.setFrameShadow(QtWidgets.QFrame.Raised) + + grid = QtWidgets.QGridLayout() + grid.setContentsMargins(0, 0, 0, 0) + grid.setHorizontalSpacing(0) + grid.setVerticalSpacing(0) + self.setLayout(grid) + label = QtWidgets.QLabel("{} ch{}".format(title, channel)) + label.setAlignment(QtCore.Qt.AlignCenter) + grid.addWidget(label, 1, 1) + + self.value = QtWidgets.QLabel() + self.value.setAlignment(QtCore.Qt.AlignCenter) + grid.addWidget(self.value, 2, 1, 6, 1) + + grid.setRowStretch(1, 1) + grid.setRowStretch(2, 0) + grid.setRowStretch(3, 1) + + self.refresh_display() def refresh_display(self): self.value.setText("{:.3f} %" - .format(self.cur_value*100/2**16)) + .format(self.cur_value * 100 / 2**16)) def sort_key(self): - return (self.spi_channel, self.channel) + return (2, self.spi_channel, self.channel) + + def uid(self): + return (self.title, self.channel) + + def to_model_path(self): + return "dac/{} ch{}".format(self.title, self.channel) _WidgetDesc = namedtuple("_WidgetDesc", "uid comment cls arguments") @@ -392,18 +397,15 @@ def setup_from_ddb(ddb): force_out = v["class"] == "TTLOut" widget = _WidgetDesc(k, comment, _TTLWidget, (channel, force_out, k)) description.add(widget) - elif (v["module"] == "artiq.coredevice.ad9914" - and v["class"] == "AD9914"): + elif (v["module"] == "artiq.coredevice.ad9914" and v["class"] == "AD9914"): bus_channel = v["arguments"]["bus_channel"] channel = v["arguments"]["channel"] dds_sysclk = v["arguments"]["sysclk"] - model = _DDSModel(v["class"], dds_sysclk) - widget = _WidgetDesc(k, comment, _DDSWidget, (k, bus_channel, channel, model)) + widget = _WidgetDesc(k, comment, _DDSWidget, + (k, bus_channel, channel, v["class"], dds_sysclk)) description.add(widget) - elif (v["module"] == "artiq.coredevice.ad9910" - and v["class"] == "AD9910") or \ - (v["module"] == "artiq.coredevice.ad9912" - and v["class"] == "AD9912"): + elif (v["module"] == "artiq.coredevice.ad9910" and v["class"] == "AD9910") or \ + (v["module"] == "artiq.coredevice.ad9912" and v["class"] == "AD9912"): channel = v["arguments"]["chip_select"] - 4 if channel < 0: continue @@ -413,18 +415,20 @@ def setup_from_ddb(ddb): pll = v["arguments"]["pll_n"] refclk = ddb[dds_cpld]["arguments"]["refclk"] clk_div = v["arguments"].get("clk_div", 0) - 
model = _DDSModel( v["class"], refclk, dds_cpld, pll, clk_div) - widget = _WidgetDesc(k, comment, _DDSWidget, (k, bus_channel, channel, model)) - description.add(widget) - elif ( (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53xx") - or (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino")): + widget = _WidgetDesc(k, comment, _DDSWidget, + (k, bus_channel, channel, v["class"], + refclk, dds_cpld, pll, clk_div)) + description.add(widget) + elif (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53xx") or \ + (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino"): spi_device = v["arguments"]["spi_device"] spi_device = ddb[spi_device] while isinstance(spi_device, str): spi_device = ddb[spi_device] spi_channel = spi_device["arguments"]["channel"] for channel in range(32): - widget = _WidgetDesc((k, channel), comment, _DACWidget, (spi_channel, channel, k)) + widget = _WidgetDesc((k, channel), comment, _DACWidget, + (spi_channel, channel, k)) description.add(widget) elif v["type"] == "controller" and k == "core_moninj": mi_addr = v["host"] @@ -449,18 +453,15 @@ class _DeviceManager: self.widgets_by_uid = dict() self.dds_sysclk = 0 - self.ttl_cb = lambda: None self.ttl_widgets = dict() - self.dds_cb = lambda: None self.dds_widgets = dict() - self.dac_cb = lambda: None self.dac_widgets = dict() + self.channels_cb = lambda: None def init_ddb(self, ddb): self.ddb = ddb - return ddb - def notify(self, mod): + def notify_ddb(self, mod): mi_addr, mi_port, description = setup_from_ddb(self.ddb) if (mi_addr, mi_port) != (self.mi_addr, self.mi_port): @@ -471,24 +472,8 @@ class _DeviceManager: for to_remove in self.description - description: widget = self.widgets_by_uid[to_remove.uid] del self.widgets_by_uid[to_remove.uid] - - if isinstance(widget, _TTLWidget): - self.setup_ttl_monitoring(False, widget.channel) - widget.deleteLater() - del self.ttl_widgets[widget.channel] - self.ttl_cb() - elif isinstance(widget, _DDSWidget): - self.setup_dds_monitoring(False, widget.bus_channel, widget.channel) - widget.deleteLater() - del self.dds_widgets[(widget.bus_channel, widget.channel)] - self.dds_cb() - elif isinstance(widget, _DACWidget): - self.setup_dac_monitoring(False, widget.spi_channel, widget.channel) - widget.deleteLater() - del self.dac_widgets[(widget.spi_channel, widget.channel)] - self.dac_cb() - else: - raise ValueError + self.setup_monitoring(False, widget) + widget.deleteLater() for to_add in description - self.description: widget = to_add.cls(self, *to_add.arguments) @@ -496,26 +481,15 @@ class _DeviceManager: widget.setToolTip(to_add.comment) self.widgets_by_uid[to_add.uid] = widget - if isinstance(widget, _TTLWidget): - self.ttl_widgets[widget.channel] = widget - self.ttl_cb() - self.setup_ttl_monitoring(True, widget.channel) - elif isinstance(widget, _DDSWidget): - self.dds_widgets[(widget.bus_channel, widget.channel)] = widget - self.dds_cb() - self.setup_dds_monitoring(True, widget.bus_channel, widget.channel) - elif isinstance(widget, _DACWidget): - self.dac_widgets[(widget.spi_channel, widget.channel)] = widget - self.dac_cb() - self.setup_dac_monitoring(True, widget.spi_channel, widget.channel) - else: - raise ValueError + if description != self.description: + self.channels_cb() self.description = description def ttl_set_mode(self, channel, mode): if self.mi_connection is not None: - widget = self.ttl_widgets[channel] + widget_uid = self.ttl_widgets[channel] + widget = self.widgets_by_uid[widget_uid] if mode == "0": 
widget.cur_override = True widget.cur_level = False @@ -565,7 +539,7 @@ class _DeviceManager: cpld_dev = """self.setattr_device("core_cache") self.setattr_device("{}")""".format(dds_model.cpld) - # `sta`/`rf_sw`` variables are guaranteed for urukuls + # `sta`/`rf_sw`` variables are guaranteed for urukuls # so {action} can use it # if there's no RF enabled, CPLD may have not been initialized # but if there is, it has been initialised - no need to do again @@ -624,8 +598,8 @@ class _DeviceManager: channel_init=channel_init)) asyncio.ensure_future( self._submit_by_content( - dds_exp, - title, + dds_exp, + title, log_msg)) def dds_set_frequency(self, dds_channel, dds_model, freq): @@ -640,8 +614,8 @@ class _DeviceManager: dds_channel, dds_model, action, - "SetDDS", - "Set DDS {} {}MHz".format(dds_channel, freq/1e6)) + "SetDDS", + "Set DDS {} {}MHz".format(dds_channel, freq / 1e6)) def dds_channel_toggle(self, dds_channel, dds_model, sw=True): # urukul only @@ -661,9 +635,34 @@ class _DeviceManager: dds_channel, dds_model, action, - "ToggleDDS", + "ToggleDDS", "Toggle DDS {} {}".format(dds_channel, "on" if sw else "off")) + def setup_monitoring(self, enable, widget): + if isinstance(widget, _TTLWidget): + key = widget.channel + args = (key,) + subscribers = self.ttl_widgets + subscribe_func = self.setup_ttl_monitoring + elif isinstance(widget, _DDSWidget): + key = (widget.bus_channel, widget.channel) + args = key + subscribers = self.dds_widgets + subscribe_func = self.setup_dds_monitoring + elif isinstance(widget, _DACWidget): + key = (widget.spi_channel, widget.channel) + args = key + subscribers = self.dac_widgets + subscribe_func = self.setup_dac_monitoring + else: + raise ValueError + if enable and key not in subscribers: + subscribers[key] = widget.uid() + subscribe_func(enable, *args) + elif not enable and key in subscribers: + subscribe_func(enable, *args) + del subscribers[key] + def setup_ttl_monitoring(self, enable, channel): if self.mi_connection is not None: self.mi_connection.monitor_probe(enable, channel, TTLProbe.level.value) @@ -683,24 +682,28 @@ class _DeviceManager: def monitor_cb(self, channel, probe, value): if channel in self.ttl_widgets: - widget = self.ttl_widgets[channel] + widget_uid = self.ttl_widgets[channel] + widget = self.widgets_by_uid[widget_uid] if probe == TTLProbe.level.value: widget.cur_level = bool(value) elif probe == TTLProbe.oe.value: widget.cur_oe = bool(value) widget.refresh_display() elif (channel, probe) in self.dds_widgets: - widget = self.dds_widgets[(channel, probe)] + widget_uid = self.dds_widgets[(channel, probe)] + widget = self.widgets_by_uid[widget_uid] widget.dds_model.monitor_update(probe, value) widget.refresh_display() elif (channel, probe) in self.dac_widgets: - widget = self.dac_widgets[(channel, probe)] + widget_uid = self.dac_widgets[(channel, probe)] + widget = self.widgets_by_uid[widget_uid] widget.cur_value = value widget.refresh_display() def injection_status_cb(self, channel, override, value): if channel in self.ttl_widgets: - widget = self.ttl_widgets[channel] + widget_uid = self.ttl_widgets[channel] + widget = self.widgets_by_uid[widget_uid] if override == TTLOverride.en.value: widget.cur_override = bool(value) if override == TTLOverride.level.value: @@ -719,14 +722,12 @@ class _DeviceManager: await self.mi_connection.close() self.mi_connection = None new_mi_connection = CommMonInj(self.monitor_cb, self.injection_status_cb, - self.disconnect_cb) + self.disconnect_cb) try: await new_mi_connection.connect(self.mi_addr, 
self.mi_port) - except asyncio.CancelledError: - logger.info("cancelled connection to moninj") - break - except: - logger.error("failed to connect to moninj. Is aqctl_moninj_proxy running?", exc_info=True) + except Exception: + logger.error("failed to connect to moninj. Is aqctl_moninj_proxy running?", + exc_info=True) await asyncio.sleep(10.) self.reconnect_mi.set() else: @@ -736,9 +737,9 @@ class _DeviceManager: for ttl_channel in self.ttl_widgets.keys(): self.setup_ttl_monitoring(True, ttl_channel) for bus_channel, channel in self.dds_widgets.keys(): - self.setup_dds_monitoring(True, bus_channel, channel) + self.setup_dds_monitoring(True, bus_channel, channel) for spi_channel, channel in self.dac_widgets.keys(): - self.setup_dac_monitoring(True, spi_channel, channel) + self.setup_dac_monitoring(True, spi_channel, channel) async def close(self): self.mi_connector_task.cancel() @@ -750,48 +751,233 @@ class _DeviceManager: await self.mi_connection.close() -class _MonInjDock(QtWidgets.QDockWidget): - def __init__(self, name): - QtWidgets.QDockWidget.__init__(self, name) +class Model(DictSyncTreeSepModel): + def __init__(self, init): + DictSyncTreeSepModel.__init__(self, "/", ["Channels"], init) + + def clear(self): + for k in self.backing_store: + self._del_item(self, k.split(self.separator)) + self.backing_store.clear() + + def update(self, d): + for k, v in d.items(): + self[v.to_model_path()] = v + + +class _AddChannelDialog(QtWidgets.QDialog): + def __init__(self, parent, model): + QtWidgets.QDialog.__init__(self, parent=parent) + self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu) + self.setWindowTitle("Add channels") + + layout = QtWidgets.QVBoxLayout() + self.setLayout(layout) + + self._model = model + self._tree_view = QtWidgets.QTreeView() + self._tree_view.setHeaderHidden(True) + self._tree_view.setSelectionBehavior( + QtWidgets.QAbstractItemView.SelectItems) + self._tree_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection) + self._tree_view.setModel(self._model) + layout.addWidget(self._tree_view) + + self._button_box = QtWidgets.QDialogButtonBox( + QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel + ) + self._button_box.setCenterButtons(True) + self._button_box.accepted.connect(self.add_channels) + self._button_box.rejected.connect(self.reject) + layout.addWidget(self._button_box) + + def add_channels(self): + selection = self._tree_view.selectedIndexes() + channels = [] + for select in selection: + key = self._model.index_to_key(select) + if key is not None: + channels.append(self._model[key].ref) + self.channels = channels + self.accept() + + +class _MonInjDock(QDockWidgetCloseDetect): + def __init__(self, name, manager): + QtWidgets.QDockWidget.__init__(self, "MonInj") self.setObjectName(name) self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable | QtWidgets.QDockWidget.DockWidgetFloatable) + grid = LayoutWidget() + self.setWidget(grid) + self.manager = manager + self.widget_uids = None + + newdock = QtWidgets.QToolButton() + newdock.setToolTip("Create new moninj dock") + newdock.setIcon(QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileDialogNewFolder)) + newdock.clicked.connect(lambda: self.manager.create_new_dock()) + grid.addWidget(newdock, 0, 0) + + self.channel_dialog = _AddChannelDialog(self, self.manager.channel_model) + self.channel_dialog.accepted.connect(self.add_channels) + + dialog_btn = QtWidgets.QToolButton() + dialog_btn.setToolTip("Add channels") + dialog_btn.setIcon( + 
QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileDialogListView)) + dialog_btn.clicked.connect(self.channel_dialog.open) + grid.addWidget(dialog_btn, 0, 1) + + self.label = DoubleClickLineEdit(name) + self.label.setStyleSheet("background:transparent;") + grid.addWidget(self.label, 0, 2) + + scroll_area = VDragScrollArea(self) + grid.addWidget(scroll_area, 1, 0, 1, 10) + self.flow = DragDropFlowLayoutWidget() + scroll_area.setWidgetResizable(True) + scroll_area.setWidget(self.flow) + self.flow.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + self.flow.customContextMenuRequested.connect(self.custom_context_menu) + + def custom_context_menu(self, pos): + index = self.flow._get_index(pos) + if index == -1: + return + menu = QtWidgets.QMenu() + delete_action = QtWidgets.QAction("Delete widget", menu) + delete_action.triggered.connect(partial(self.delete_widget, index)) + menu.addAction(delete_action) + menu.exec_(self.flow.mapToGlobal(pos)) + + def delete_all_widgets(self): + for index in reversed(range(self.flow.count())): + self.delete_widget(index, True) + + + def delete_widget(self, index, checked): + widget = self.flow.itemAt(index).widget() + widget.hide() + self.manager.dm.setup_monitoring(False, widget) + self.flow.layout.takeAt(index) + + def add_channels(self): + channels = self.channel_dialog.channels + self.layout_widgets(channels) def layout_widgets(self, widgets): - scroll_area = QtWidgets.QScrollArea() - self.setWidget(scroll_area) - - grid = FlowLayout() - grid_widget = QtWidgets.QWidget() - grid_widget.setLayout(grid) - for widget in sorted(widgets, key=lambda w: w.sort_key()): - grid.addWidget(widget) + widget.show() + self.manager.dm.setup_monitoring(True, widget) + self.flow.addWidget(widget) - scroll_area.setWidgetResizable(True) - scroll_area.setWidget(grid_widget) + def restore_widgets(self): + if self.widget_uids is not None: + widgets_by_uid = self.manager.dm.widgets_by_uid + widgets = list() + for uid in self.widget_uids: + if uid in widgets_by_uid: + widgets.append(widgets_by_uid[uid]) + else: + logger.warning("removing moninj widget {}".format(uid)) + self.layout_widgets(widgets) + self.widget_uids = None + + def _save_widget_uids(self): + uids = [] + for i in range(self.flow.count()): + uids.append(self.flow.itemAt(i).widget().uid()) + return uids + + def save_state(self): + return { + "dock_label": self.label.text(), + "widget_uids": self._save_widget_uids() + } + + def restore_state(self, state): + try: + label = state["dock_label"] + except KeyError: + pass + else: + self.label._text = label + self.label.setText(label) + try: + self.widget_uids = state["widget_uids"] + except KeyError: + pass class MonInj: - def __init__(self, schedule_ctl): - self.ttl_dock = _MonInjDock("TTL") - self.dds_dock = _MonInjDock("DDS") - self.dac_dock = _MonInjDock("DAC") - + def __init__(self, schedule_ctl, main_window): + self.docks = dict() + self.main_window = main_window self.dm = _DeviceManager(schedule_ctl) - self.dm.ttl_cb = lambda: self.ttl_dock.layout_widgets( - self.dm.ttl_widgets.values()) - self.dm.dds_cb = lambda: self.dds_dock.layout_widgets( - self.dm.dds_widgets.values()) - self.dm.dac_cb = lambda: self.dac_dock.layout_widgets( - self.dm.dac_widgets.values()) + self.dm.channels_cb = self.add_channels + self.channel_model = Model({}) - self.subscriber = Subscriber("devices", self.dm.init_ddb, self.dm.notify) + def add_channels(self): + self.channel_model.clear() + self.channel_model.update(self.dm.widgets_by_uid) + for dock in 
self.docks.values(): + dock.restore_widgets() - async def start(self, server, port): - await self.subscriber.connect(server, port) + def create_new_dock(self, add_to_area=True): + n = 0 + name = "moninj0" + while name in self.docks: + n += 1 + name = "moninj" + str(n) + + dock = _MonInjDock(name, self) + self.docks[name] = dock + if add_to_area: + self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock) + dock.setFloating(True) + dock.sigClosed.connect(partial(self.on_dock_closed, name)) + self.update_closable() + return dock + + def on_dock_closed(self, name): + dock = self.docks[name] + del self.docks[name] + self.update_closable() + dock.delete_all_widgets() + dock.hide() # dock may be parent, only delete on exit + + def update_closable(self): + flags = (QtWidgets.QDockWidget.DockWidgetMovable | + QtWidgets.QDockWidget.DockWidgetFloatable) + if len(self.docks) > 1: + flags |= QtWidgets.QDockWidget.DockWidgetClosable + for dock in self.docks.values(): + dock.setFeatures(flags) + + def first_moninj_dock(self): + if self.docks: + return None + dock = self.create_new_dock(False) + return dock + + def save_state(self): + return {name: dock.save_state() for name, dock in self.docks.items()} + + def restore_state(self, state): + if self.docks: + raise NotImplementedError + for name, dock_state in state.items(): + dock = _MonInjDock(name, self) + self.docks[name] = dock + dock.restore_state(dock_state) + self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock) + dock.sigClosed.connect(partial(self.on_dock_closed, name)) + self.update_closable() async def stop(self): - await self.subscriber.close() if self.dm is not None: await self.dm.close() diff --git a/artiq/dashboard/schedule.py b/artiq/dashboard/schedule.py index d6b74a0a7..94c229258 100644 --- a/artiq/dashboard/schedule.py +++ b/artiq/dashboard/schedule.py @@ -15,9 +15,8 @@ logger = logging.getLogger(__name__) class Model(DictSyncModel): def __init__(self, init): DictSyncModel.__init__(self, - ["RID", "Pipeline", "Status", "Prio", "Due date", - "Revision", "File", "Class name"], - init) + ["RID", "Pipeline", "Status", "Prio", "Due date", + "Revision", "File", "Class name"], init) def sort_key(self, k, v): # order by priority, and then by due date and RID @@ -96,14 +95,14 @@ class ScheduleDock(QtWidgets.QDockWidget): cw = QtGui.QFontMetrics(self.font()).averageCharWidth() h = self.table.horizontalHeader() - h.resizeSection(0, 7*cw) - h.resizeSection(1, 12*cw) - h.resizeSection(2, 16*cw) - h.resizeSection(3, 6*cw) - h.resizeSection(4, 16*cw) - h.resizeSection(5, 30*cw) - h.resizeSection(6, 20*cw) - h.resizeSection(7, 20*cw) + h.resizeSection(0, 7 * cw) + h.resizeSection(1, 12 * cw) + h.resizeSection(2, 16 * cw) + h.resizeSection(3, 6 * cw) + h.resizeSection(4, 16 * cw) + h.resizeSection(5, 30 * cw) + h.resizeSection(6, 20 * cw) + h.resizeSection(7, 20 * cw) def set_model(self, model): self.table_model = model @@ -143,7 +142,7 @@ class ScheduleDock(QtWidgets.QDockWidget): selected_rid = self.table_model.row_to_key[row] pipeline = self.table_model.backing_store[selected_rid]["pipeline"] logger.info("Requesting termination of all " - "experiments in pipeline '%s'", pipeline) + "experiments in pipeline '%s'", pipeline) rids = set() for rid, info in self.table_model.backing_store.items(): @@ -151,7 +150,6 @@ class ScheduleDock(QtWidgets.QDockWidget): rids.add(rid) asyncio.ensure_future(self.request_term_multiple(rids)) - def save_state(self): return bytes(self.table.horizontalHeader().saveState()) diff --git 
a/artiq/dashboard/shortcuts.py b/artiq/dashboard/shortcuts.py index c7750d2a7..30217b5f6 100644 --- a/artiq/dashboard/shortcuts.py +++ b/artiq/dashboard/shortcuts.py @@ -3,8 +3,6 @@ from functools import partial from PyQt5 import QtCore, QtWidgets -from artiq.gui.tools import LayoutWidget - logger = logging.getLogger(__name__) @@ -35,7 +33,7 @@ class ShortcutsDock(QtWidgets.QDockWidget): for i in range(12): row = i + 1 - layout.addWidget(QtWidgets.QLabel("F" + str(i+1)), row, 0) + layout.addWidget(QtWidgets.QLabel("F" + str(i + 1)), row, 0) label = QtWidgets.QLabel() label.setSizePolicy(QtWidgets.QSizePolicy.Ignored, @@ -70,7 +68,7 @@ class ShortcutsDock(QtWidgets.QDockWidget): "open": open, "submit": submit } - shortcut = QtWidgets.QShortcut("F" + str(i+1), main_window) + shortcut = QtWidgets.QShortcut("F" + str(i + 1), main_window) shortcut.setContext(QtCore.Qt.ApplicationShortcut) shortcut.activated.connect(partial(self._activated, i)) diff --git a/artiq/dashboard/waveform.py b/artiq/dashboard/waveform.py new file mode 100644 index 000000000..c97ed8c16 --- /dev/null +++ b/artiq/dashboard/waveform.py @@ -0,0 +1,914 @@ +import os +import asyncio +import logging +import bisect +import itertools +import math + +from PyQt5 import QtCore, QtWidgets, QtGui + +import pyqtgraph as pg +import numpy as np + +from sipyco.pc_rpc import AsyncioClient +from sipyco import pyon + +from artiq.tools import exc_to_warning, short_format +from artiq.coredevice import comm_analyzer +from artiq.coredevice.comm_analyzer import WaveformType +from artiq.gui.tools import LayoutWidget, get_open_file_name, get_save_file_name +from artiq.gui.models import DictSyncTreeSepModel +from artiq.gui.dndwidgets import VDragScrollArea, VDragDropSplitter + + +logger = logging.getLogger(__name__) + +WAVEFORM_MIN_HEIGHT = 50 +WAVEFORM_MAX_HEIGHT = 200 + + +class ProxyClient(): + def __init__(self, receive_cb, timeout=5, timer=5, timer_backoff=1.1): + self.receive_cb = receive_cb + self.receiver = None + self.addr = None + self.port_proxy = None + self.port = None + self._reconnect_event = asyncio.Event() + self.timeout = timeout + self.timer = timer + self.timer_cur = timer + self.timer_backoff = timer_backoff + self._reconnect_task = asyncio.ensure_future(self._reconnect()) + + def update_address(self, addr, port, port_proxy): + self.addr = addr + self.port = port + self.port_proxy = port_proxy + self._reconnect_event.set() + + async def trigger_proxy_task(self): + remote = AsyncioClient() + try: + try: + if self.addr is None: + logger.error("missing core_analyzer host in device db") + return + await remote.connect_rpc(self.addr, self.port, "coreanalyzer_proxy_control") + except: + logger.error("error connecting to analyzer proxy control", exc_info=True) + return + await remote.trigger() + except: + logger.error("analyzer proxy reported failure", exc_info=True) + finally: + remote.close_rpc() + + async def _reconnect(self): + while True: + await self._reconnect_event.wait() + self._reconnect_event.clear() + if self.receiver is not None: + await self.receiver.close() + self.receiver = None + new_receiver = comm_analyzer.AnalyzerProxyReceiver( + self.receive_cb, self.disconnect_cb) + try: + if self.addr is not None: + await asyncio.wait_for(new_receiver.connect(self.addr, self.port_proxy), + self.timeout) + logger.info("ARTIQ dashboard connected to analyzer proxy (%s)", self.addr) + self.timer_cur = self.timer + self.receiver = new_receiver + continue + except Exception: + logger.error("error connecting to analyzer proxy", 
exc_info=True) + try: + await asyncio.wait_for(self._reconnect_event.wait(), self.timer_cur) + except asyncio.TimeoutError: + self.timer_cur *= self.timer_backoff + self._reconnect_event.set() + else: + self.timer_cur = self.timer + + async def close(self): + self._reconnect_task.cancel() + try: + await asyncio.wait_for(self._reconnect_task, None) + except asyncio.CancelledError: + pass + if self.receiver is not None: + await self.receiver.close() + + def disconnect_cb(self): + logger.error("lost connection to analyzer proxy") + self._reconnect_event.set() + + +class _BackgroundItem(pg.GraphicsWidgetAnchor, pg.GraphicsWidget): + def __init__(self, parent, rect): + pg.GraphicsWidget.__init__(self, parent) + pg.GraphicsWidgetAnchor.__init__(self) + self.item = QtWidgets.QGraphicsRectItem(rect, self) + brush = QtGui.QBrush(QtGui.QColor(10, 10, 10, 140)) + self.item.setBrush(brush) + + +class _BaseWaveform(pg.PlotWidget): + cursorMove = QtCore.pyqtSignal(float) + + def __init__(self, name, width, precision, unit, + parent=None, pen="r", stepMode="right", connect="finite"): + pg.PlotWidget.__init__(self, + parent=parent, + x=None, + y=None, + pen=pen, + stepMode=stepMode, + connect=connect) + + self.setMinimumHeight(WAVEFORM_MIN_HEIGHT) + self.setMaximumHeight(WAVEFORM_MAX_HEIGHT) + self.setMenuEnabled(False) + self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu) + + self.name = name + self.width = width + self.precision = precision + self.unit = unit + + self.x_data = [] + self.y_data = [] + + self.plot_item = self.getPlotItem() + self.plot_item.hideButtons() + self.plot_item.hideAxis("top") + self.plot_item.getAxis("bottom").setStyle(showValues=False, tickLength=0) + self.plot_item.getAxis("left").setStyle(showValues=False, tickLength=0) + self.plot_item.setRange(yRange=(0, 1), padding=0.1) + self.plot_item.showGrid(x=True, y=True) + + self.plot_data_item = self.plot_item.listDataItems()[0] + self.plot_data_item.setClipToView(True) + + self.view_box = self.plot_item.getViewBox() + self.view_box.setMouseEnabled(x=True, y=False) + self.view_box.disableAutoRange(axis=pg.ViewBox.YAxis) + self.view_box.setLimits(xMin=0, minXRange=20) + + self.title_label = pg.LabelItem(self.name, parent=self.plot_item) + self.title_label.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 0)) + self.title_label.setAttr('justify', 'left') + self.title_label.setZValue(10) + + rect = self.title_label.boundingRect() + rect.setHeight(rect.height() * 2) + rect.setWidth(225) + self.label_bg = _BackgroundItem(parent=self.plot_item, rect=rect) + self.label_bg.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 0)) + + self.cursor = pg.InfiniteLine() + self.cursor_y = None + self.addItem(self.cursor) + + self.cursor_label = pg.LabelItem('', parent=self.plot_item) + self.cursor_label.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 20)) + self.cursor_label.setAttr('justify', 'left') + self.cursor_label.setZValue(10) + + def setStoppedX(self, stopped_x): + self.stopped_x = stopped_x + self.view_box.setLimits(xMax=stopped_x) + + def setData(self, data): + if len(data) == 0: + self.x_data, self.y_data = [], [] + else: + self.x_data, self.y_data = zip(*data) + + def onDataChange(self, data): + raise NotImplementedError + + def onCursorMove(self, x): + self.cursor.setValue(x) + if len(self.x_data) < 1: + return + ind = bisect.bisect_left(self.x_data, x) - 1 + dr = self.plot_data_item.dataRect() + self.cursor_y = None + if dr is not None and 0 <= ind < len(self.y_data): + self.cursor_y = self.y_data[ind] + + def 
mouseMoveEvent(self, e): + if e.buttons() == QtCore.Qt.LeftButton \ + and e.modifiers() == QtCore.Qt.ShiftModifier: + drag = QtGui.QDrag(self) + mime = QtCore.QMimeData() + drag.setMimeData(mime) + pixmapi = QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileIcon) + drag.setPixmap(pixmapi.pixmap(32)) + drag.exec_(QtCore.Qt.MoveAction) + else: + super().mouseMoveEvent(e) + + def wheelEvent(self, e): + if e.modifiers() & QtCore.Qt.ControlModifier: + super().wheelEvent(e) + + def mouseDoubleClickEvent(self, e): + pos = self.view_box.mapSceneToView(e.pos()) + self.cursorMove.emit(pos.x()) + + +class BitWaveform(_BaseWaveform): + def __init__(self, name, width, precision, unit, parent=None): + _BaseWaveform.__init__(self, name, width, precision, unit, parent) + self.plot_item.showGrid(x=True, y=False) + self._arrows = [] + + def onDataChange(self, data): + try: + self.setData(data) + for arw in self._arrows: + self.removeItem(arw) + self._arrows = [] + l = len(data) + display_y = np.empty(l) + display_x = np.empty(l) + display_map = { + "X": 0.5, + "1": 1, + "0": 0 + } + previous_y = None + for i, coord in enumerate(data): + x, y = coord + dis_y = display_map[y] + if previous_y == y: + arw = pg.ArrowItem(pxMode=True, angle=90) + self.addItem(arw) + self._arrows.append(arw) + arw.setPos(x, dis_y) + display_y[i] = dis_y + display_x[i] = x + previous_y = y + self.plot_data_item.setData(x=display_x, y=display_y) + except: + logger.error("Error when displaying waveform: %s", self.name, exc_info=True) + for arw in self._arrows: + self.removeItem(arw) + self.plot_data_item.setData(x=[], y=[]) + + def onCursorMove(self, x): + _BaseWaveform.onCursorMove(self, x) + if self.cursor_y is not None: + self.cursor_label.setText(self.cursor_y) + else: + self.cursor_label.setText("") + + +class AnalogWaveform(_BaseWaveform): + def __init__(self, name, width, precision, unit, parent=None): + _BaseWaveform.__init__(self, name, width, precision, unit, parent) + + def onDataChange(self, data): + try: + self.setData(data) + self.plot_data_item.setData(x=self.x_data, y=self.y_data) + if len(data) > 0: + max_y = max(self.y_data) + min_y = min(self.y_data) + self.plot_item.setRange(yRange=(min_y, max_y), padding=0.1) + except: + logger.error("Error when displaying waveform: %s", self.name, exc_info=True) + self.plot_data_item.setData(x=[], y=[]) + + def onCursorMove(self, x): + _BaseWaveform.onCursorMove(self, x) + if self.cursor_y is not None: + t = short_format(self.cursor_y, {"precision": self.precision, "unit": self.unit}) + else: + t = "" + self.cursor_label.setText(t) + + +class BitVectorWaveform(_BaseWaveform): + def __init__(self, name, width, precision, unit, parent=None): + _BaseWaveform.__init__(self, name, width, precision, parent) + self._labels = [] + self._format_string = "{:0=" + str(math.ceil(width / 4)) + "X}" + self.view_box.sigTransformChanged.connect(self._update_labels) + self.plot_item.showGrid(x=True, y=False) + + def _update_labels(self): + for label in self._labels: + self.removeItem(label) + xmin, xmax = self.view_box.viewRange()[0] + left_label_i = bisect.bisect_left(self.x_data, xmin) + right_label_i = bisect.bisect_right(self.x_data, xmax) + 1 + for i, j in itertools.pairwise(range(left_label_i, right_label_i)): + x1 = self.x_data[i] + x2 = self.x_data[j] if j < len(self.x_data) else self.stopped_x + lbl = self._labels[i] + bounds = lbl.boundingRect() + bounds_view = self.view_box.mapSceneToView(bounds) + if bounds_view.boundingRect().width() < x2 - x1: + 
self.addItem(lbl) + + def onDataChange(self, data): + try: + self.setData(data) + for lbl in self._labels: + self.plot_item.removeItem(lbl) + self._labels = [] + l = len(data) + display_x = np.empty(l * 2) + display_y = np.empty(l * 2) + for i, coord in enumerate(data): + x, y = coord + display_x[i * 2] = x + display_x[i * 2 + 1] = x + display_y[i * 2] = 0 + display_y[i * 2 + 1] = int(int(y) != 0) + lbl = pg.TextItem( + self._format_string.format(int(y, 2)), anchor=(0, 0.5)) + lbl.setPos(x, 0.5) + lbl.setTextWidth(100) + self._labels.append(lbl) + self.plot_data_item.setData(x=display_x, y=display_y) + except: + logger.error("Error when displaying waveform: %s", self.name, exc_info=True) + for lbl in self._labels: + self.plot_item.removeItem(lbl) + self.plot_data_item.setData(x=[], y=[]) + + def onCursorMove(self, x): + _BaseWaveform.onCursorMove(self, x) + if self.cursor_y is not None: + t = self._format_string.format(int(self.cursor_y, 2)) + else: + t = "" + self.cursor_label.setText(t) + + +class LogWaveform(_BaseWaveform): + def __init__(self, name, width, precision, unit, parent=None): + _BaseWaveform.__init__(self, name, width, precision, parent) + self.plot_data_item.opts['pen'] = None + self.plot_data_item.opts['symbol'] = 'x' + self._labels = [] + self.plot_item.showGrid(x=True, y=False) + + def onDataChange(self, data): + try: + self.setData(data) + for lbl in self._labels: + self.plot_item.removeItem(lbl) + self._labels = [] + self.plot_data_item.setData( + x=self.x_data, y=np.ones(len(self.x_data))) + if len(data) == 0: + return + old_x = data[0][0] + old_msg = data[0][1] + for x, msg in data[1:]: + if x == old_x: + old_msg += "\n" + msg + else: + lbl = pg.TextItem(old_msg) + self.addItem(lbl) + self._labels.append(lbl) + lbl.setPos(old_x, 1) + old_msg = msg + old_x = x + lbl = pg.TextItem(old_msg) + self.addItem(lbl) + self._labels.append(lbl) + lbl.setPos(old_x, 1) + except: + logger.error("Error when displaying waveform: %s", self.name, exc_info=True) + for lbl in self._labels: + self.plot_item.removeItem(lbl) + self.plot_data_item.setData(x=[], y=[]) + + +class _WaveformView(QtWidgets.QWidget): + cursorMove = QtCore.pyqtSignal(float) + + def __init__(self, parent): + QtWidgets.QWidget.__init__(self, parent=parent) + + self._stopped_x = None + self._timescale = 1 + self._cursor_x = 0 + + layout = QtWidgets.QVBoxLayout() + layout.setContentsMargins(0, 0, 0, 0) + layout.setSpacing(0) + self.setLayout(layout) + + self._ref_axis = pg.PlotWidget() + self._ref_axis.hideAxis("bottom") + self._ref_axis.hideAxis("left") + self._ref_axis.hideButtons() + self._ref_axis.setFixedHeight(45) + self._ref_axis.setMenuEnabled(False) + self._top = pg.AxisItem("top") + self._top.setScale(1e-12) + self._top.setLabel(units="s") + self._ref_axis.setAxisItems({"top": self._top}) + layout.addWidget(self._ref_axis) + + self._ref_vb = self._ref_axis.getPlotItem().getViewBox() + self._ref_vb.setFixedHeight(0) + self._ref_vb.setMouseEnabled(x=True, y=False) + self._ref_vb.setLimits(xMin=0) + + scroll_area = VDragScrollArea(self) + scroll_area.setWidgetResizable(True) + scroll_area.setContentsMargins(0, 0, 0, 0) + scroll_area.setFrameShape(QtWidgets.QFrame.NoFrame) + scroll_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + layout.addWidget(scroll_area) + + self._splitter = VDragDropSplitter(parent=scroll_area) + self._splitter.setHandleWidth(1) + scroll_area.setWidget(self._splitter) + + self.cursorMove.connect(self.onCursorMove) + + self.confirm_delete_dialog = QtWidgets.QMessageBox(self) 
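+ # confirmation dialog opened by each waveform's "Delete all waveforms" context-menu action; accepting it clears the model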
+ self.confirm_delete_dialog.setIcon( + QtWidgets.QMessageBox.Icon.Warning + ) + self.confirm_delete_dialog.setText("Delete all waveforms?") + self.confirm_delete_dialog.setStandardButtons( + QtWidgets.QMessageBox.Ok | QtWidgets.QMessageBox.Cancel + ) + self.confirm_delete_dialog.setDefaultButton( + QtWidgets.QMessageBox.Ok + ) + + def setModel(self, model): + self._model = model + self._model.dataChanged.connect(self.onDataChange) + self._model.rowsInserted.connect(self.onInsert) + self._model.rowsRemoved.connect(self.onRemove) + self._model.rowsMoved.connect(self.onMove) + self._splitter.dropped.connect(self._model.move) + self.confirm_delete_dialog.accepted.connect(self._model.clear) + + def setTimescale(self, timescale): + self._timescale = timescale + self._top.setScale(1e-12 * timescale) + + def setStoppedX(self, stopped_x): + self._stopped_x = stopped_x + self._ref_vb.setLimits(xMax=stopped_x) + self._ref_vb.setRange(xRange=(0, stopped_x)) + for i in range(self._model.rowCount()): + self._splitter.widget(i).setStoppedX(stopped_x) + + def resetZoom(self): + if self._stopped_x is not None: + self._ref_vb.setRange(xRange=(0, self._stopped_x)) + + def onDataChange(self, top, bottom, roles): + self.cursorMove.emit(0) + first = top.row() + last = bottom.row() + data_row = self._model.headers.index("data") + for i in range(first, last + 1): + data = self._model.data(self._model.index(i, data_row)) + self._splitter.widget(i).onDataChange(data) + + def onInsert(self, parent, first, last): + for i in range(first, last + 1): + w = self._create_waveform(i) + self._splitter.insertWidget(i, w) + self._resize() + + def onRemove(self, parent, first, last): + for i in reversed(range(first, last + 1)): + w = self._splitter.widget(i) + w.deleteLater() + self._splitter.refresh() + self._resize() + + def onMove(self, src_parent, src_start, src_end, dest_parent, dest_row): + w = self._splitter.widget(src_start) + self._splitter.insertWidget(dest_row, w) + + def onCursorMove(self, x): + self._cursor_x = x + for i in range(self._model.rowCount()): + self._splitter.widget(i).onCursorMove(x) + + def _create_waveform(self, row): + name, ty, width, precision, unit = ( + self._model.data(self._model.index(row, i)) for i in range(5)) + waveform_cls = { + WaveformType.BIT: BitWaveform, + WaveformType.VECTOR: BitVectorWaveform, + WaveformType.ANALOG: AnalogWaveform, + WaveformType.LOG: LogWaveform + }[ty] + w = waveform_cls(name, width, precision, unit, parent=self._splitter) + w.setXLink(self._ref_vb) + w.setStoppedX(self._stopped_x) + w.cursorMove.connect(self.cursorMove) + w.onCursorMove(self._cursor_x) + action = QtWidgets.QAction("Delete waveform", w) + action.triggered.connect(lambda: self._delete_waveform(w)) + w.addAction(action) + action = QtWidgets.QAction("Delete all waveforms", w) + action.triggered.connect(self.confirm_delete_dialog.open) + w.addAction(action) + return w + + def _delete_waveform(self, waveform): + row = self._splitter.indexOf(waveform) + self._model.pop(row) + + def _resize(self): + self._splitter.setFixedHeight( + int((WAVEFORM_MIN_HEIGHT + WAVEFORM_MAX_HEIGHT) * self._model.rowCount() / 2)) + + +class _WaveformModel(QtCore.QAbstractTableModel): + def __init__(self): + self.backing_struct = [] + self.headers = ["name", "type", "width", "precision", "unit", "data"] + QtCore.QAbstractTableModel.__init__(self) + + def rowCount(self, parent=QtCore.QModelIndex()): + return len(self.backing_struct) + + def columnCount(self, parent=QtCore.QModelIndex()): + return len(self.headers) + + def 
data(self, index, role=QtCore.Qt.DisplayRole): + if index.isValid(): + return self.backing_struct[index.row()][index.column()] + return None + + def extend(self, data): + length = len(self.backing_struct) + len_data = len(data) + self.beginInsertRows(QtCore.QModelIndex(), length, length + len_data - 1) + self.backing_struct.extend(data) + self.endInsertRows() + + def pop(self, row): + self.beginRemoveRows(QtCore.QModelIndex(), row, row) + self.backing_struct.pop(row) + self.endRemoveRows() + + def move(self, src, dest): + if src == dest: + return + if src < dest: + dest, src = src, dest + self.beginMoveRows(QtCore.QModelIndex(), src, src, QtCore.QModelIndex(), dest) + self.backing_struct.insert(dest, self.backing_struct.pop(src)) + self.endMoveRows() + + def clear(self): + self.beginRemoveRows(QtCore.QModelIndex(), 0, len(self.backing_struct) - 1) + self.backing_struct.clear() + self.endRemoveRows() + + def export_list(self): + return [[row[0], row[1].value, *row[2:5]] for row in self.backing_struct] + + def import_list(self, channel_list): + self.clear() + data = [[row[0], WaveformType(row[1]), *row[2:5], []] for row in channel_list] + self.extend(data) + + def update_data(self, waveform_data, top, bottom): + name_col = self.headers.index("name") + data_col = self.headers.index("data") + for i in range(top, bottom): + name = self.data(self.index(i, name_col)) + self.backing_struct[i][data_col] = waveform_data.get(name, []) + self.dataChanged.emit(self.index(i, data_col), + self.index(i, data_col)) + + def update_all(self, waveform_data): + self.update_data(waveform_data, 0, self.rowCount()) + + +class _CursorTimeControl(QtWidgets.QLineEdit): + submit = QtCore.pyqtSignal(float) + + def __init__(self, parent): + QtWidgets.QLineEdit.__init__(self, parent=parent) + self._text = "" + self._value = 0 + self._timescale = 1 + self.setDisplayValue(0) + self.textChanged.connect(self._onTextChange) + self.returnPressed.connect(self._onReturnPress) + + def setTimescale(self, timescale): + self._timescale = timescale + + def _onTextChange(self, text): + self._text = text + + def setDisplayValue(self, value): + self._value = value + self._text = pg.siFormat(value * 1e-12 * self._timescale, + suffix="s", + allowUnicode=False, + precision=15) + self.setText(self._text) + + def _setValueFromText(self, text): + try: + self._value = pg.siEval(text) * (1e12 / self._timescale) + except: + logger.error("Error when parsing cursor time input", exc_info=True) + + def _onReturnPress(self): + self._setValueFromText(self._text) + self.setDisplayValue(self._value) + self.submit.emit(self._value) + self.clearFocus() + + +class Model(DictSyncTreeSepModel): + def __init__(self, init): + DictSyncTreeSepModel.__init__(self, "/", ["Channels"], init) + + def clear(self): + for k in self.backing_store: + self._del_item(self, k.split(self.separator)) + self.backing_store.clear() + + def update(self, d): + for k, v in d.items(): + self[k] = v + + +class _AddChannelDialog(QtWidgets.QDialog): + def __init__(self, parent, model): + QtWidgets.QDialog.__init__(self, parent=parent) + self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu) + self.setWindowTitle("Add channels") + + layout = QtWidgets.QVBoxLayout() + self.setLayout(layout) + + self._model = model + self._tree_view = QtWidgets.QTreeView() + self._tree_view.setHeaderHidden(True) + self._tree_view.setSelectionBehavior( + QtWidgets.QAbstractItemView.SelectItems) + self._tree_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection) + 
self._tree_view.setModel(self._model) + layout.addWidget(self._tree_view) + + self._button_box = QtWidgets.QDialogButtonBox( + QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel + ) + self._button_box.setCenterButtons(True) + self._button_box.accepted.connect(self.add_channels) + self._button_box.rejected.connect(self.reject) + layout.addWidget(self._button_box) + + def add_channels(self): + selection = self._tree_view.selectedIndexes() + channels = [] + for select in selection: + key = self._model.index_to_key(select) + if key is not None: + channels.append([key, *self._model[key].ref, []]) + self.channels = channels + self.accept() + + +class WaveformDock(QtWidgets.QDockWidget): + def __init__(self, timeout, timer, timer_backoff): + QtWidgets.QDockWidget.__init__(self, "Waveform") + self.setObjectName("Waveform") + self.setFeatures( + QtWidgets.QDockWidget.DockWidgetMovable | QtWidgets.QDockWidget.DockWidgetFloatable) + + self._channel_model = Model({}) + self._waveform_model = _WaveformModel() + + self._ddb = None + self._dump = None + + self._waveform_data = { + "timescale": 1, + "stopped_x": None, + "logs": dict(), + "data": dict(), + } + + self._current_dir = os.getcwd() + + self.proxy_client = ProxyClient(self.on_dump_receive, + timeout, + timer, + timer_backoff) + + grid = LayoutWidget() + self.setWidget(grid) + + self._menu_btn = QtWidgets.QPushButton() + self._menu_btn.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileDialogStart)) + grid.addWidget(self._menu_btn, 0, 0) + + self._request_dump_btn = QtWidgets.QToolButton() + self._request_dump_btn.setToolTip("Fetch analyzer data from device") + self._request_dump_btn.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_BrowserReload)) + self._request_dump_btn.clicked.connect( + lambda: asyncio.ensure_future(exc_to_warning(self.proxy_client.trigger_proxy_task()))) + grid.addWidget(self._request_dump_btn, 0, 1) + + self._add_channel_dialog = _AddChannelDialog(self, self._channel_model) + self._add_channel_dialog.accepted.connect(self._add_channels) + + self._add_btn = QtWidgets.QToolButton() + self._add_btn.setToolTip("Add channels...") + self._add_btn.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileDialogListView)) + self._add_btn.clicked.connect(self._add_channel_dialog.open) + grid.addWidget(self._add_btn, 0, 2) + + self._file_menu = QtWidgets.QMenu() + self._add_async_action("Open trace...", self.load_trace) + self._add_async_action("Save trace...", self.save_trace) + self._add_async_action("Save trace as VCD...", self.save_vcd) + self._add_async_action("Open channel list...", self.load_channels) + self._add_async_action("Save channel list...", self.save_channels) + self._menu_btn.setMenu(self._file_menu) + + self._waveform_view = _WaveformView(self) + self._waveform_view.setModel(self._waveform_model) + grid.addWidget(self._waveform_view, 1, 0, colspan=12) + + self._reset_zoom_btn = QtWidgets.QToolButton() + self._reset_zoom_btn.setToolTip("Reset zoom") + self._reset_zoom_btn.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_TitleBarMaxButton)) + self._reset_zoom_btn.clicked.connect(self._waveform_view.resetZoom) + grid.addWidget(self._reset_zoom_btn, 0, 3) + + self._cursor_control = _CursorTimeControl(self) + self._waveform_view.cursorMove.connect(self._cursor_control.setDisplayValue) + self._cursor_control.submit.connect(self._waveform_view.onCursorMove) + grid.addWidget(self._cursor_control, 
0, 4, colspan=6) + + def _add_async_action(self, label, coro): + action = QtWidgets.QAction(label, self) + action.triggered.connect( + lambda: asyncio.ensure_future(exc_to_warning(coro()))) + self._file_menu.addAction(action) + + def _add_channels(self): + channels = self._add_channel_dialog.channels + count = self._waveform_model.rowCount() + self._waveform_model.extend(channels) + self._waveform_model.update_data(self._waveform_data['data'], + count, + count + len(channels)) + + def on_dump_receive(self, dump): + self._dump = dump + decoded_dump = comm_analyzer.decode_dump(dump) + waveform_data = comm_analyzer.decoded_dump_to_waveform_data(self._ddb, decoded_dump) + self._waveform_data.update(waveform_data) + self._channel_model.update(self._waveform_data['logs']) + self._waveform_model.update_all(self._waveform_data['data']) + self._waveform_view.setStoppedX(self._waveform_data['stopped_x']) + self._waveform_view.setTimescale(self._waveform_data['timescale']) + self._cursor_control.setTimescale(self._waveform_data['timescale']) + + async def load_trace(self): + try: + filename = await get_open_file_name( + self, + "Load Analyzer Trace", + self._current_dir, + "All files (*.*)") + except asyncio.CancelledError: + return + self._current_dir = os.path.dirname(filename) + try: + with open(filename, 'rb') as f: + dump = f.read() + self.on_dump_receive(dump) + except: + logger.error("Failed to open analyzer trace", exc_info=True) + + async def save_trace(self): + if self._dump is None: + logger.error("No analyzer trace stored in dashboard, " + "try loading from file or fetching from device") + return + try: + filename = await get_save_file_name( + self, + "Save Analyzer Trace", + self._current_dir, + "All files (*.*)") + except asyncio.CancelledError: + return + self._current_dir = os.path.dirname(filename) + try: + with open(filename, 'wb') as f: + f.write(self._dump) + except: + logger.error("Failed to save analyzer trace", exc_info=True) + + async def save_vcd(self): + if self._dump is None: + logger.error("No analyzer trace stored in dashboard, " + "try loading from file or fetching from device") + return + try: + filename = await get_save_file_name( + self, + "Save VCD", + self._current_dir, + "All files (*.*)") + except asyncio.CancelledError: + return + self._current_dir = os.path.dirname(filename) + try: + decoded_dump = comm_analyzer.decode_dump(self._dump) + with open(filename, 'w') as f: + comm_analyzer.decoded_dump_to_vcd(f, self._ddb, decoded_dump) + except: + logger.error("Failed to save trace as VCD", exc_info=True) + + async def load_channels(self): + try: + filename = await get_open_file_name( + self, + "Open channel list", + self._current_dir, + "PYON files (*.pyon);;All files (*.*)") + except asyncio.CancelledError: + return + self._current_dir = os.path.dirname(filename) + try: + channel_list = pyon.load_file(filename) + self._waveform_model.import_list(channel_list) + self._waveform_model.update_all(self._waveform_data['data']) + except: + logger.error("Failed to open channel list", exc_info=True) + + async def save_channels(self): + try: + filename = await get_save_file_name( + self, + "Save channel list", + self._current_dir, + "PYON files (*.pyon);;All files (*.*)") + except asyncio.CancelledError: + return + self._current_dir = os.path.dirname(filename) + try: + channel_list = self._waveform_model.export_list() + pyon.store_file(filename, channel_list) + except: + logger.error("Failed to save channel list", exc_info=True) + + def _process_ddb(self): + channel_list = 
comm_analyzer.get_channel_list(self._ddb) + self._channel_model.clear() + self._channel_model.update(channel_list) + desc = self._ddb.get("core_analyzer") + if desc is not None: + addr = desc["host"] + port_proxy = desc.get("port_proxy", 1385) + port = desc.get("port", 1386) + self.proxy_client.update_address(addr, port, port_proxy) + else: + self.proxy_client.update_address(None, None, None) + + def init_ddb(self, ddb): + self._ddb = ddb + self._process_ddb() + return ddb + + def notify_ddb(self, mod): + self._process_ddb() + + async def stop(self): + if self.proxy_client is not None: + await self.proxy_client.close() diff --git a/artiq/examples/artiq_ipython_notebook.ipynb b/artiq/examples/artiq_ipython_notebook.ipynb index 7964cd5a4..5f988b3b9 100644 --- a/artiq/examples/artiq_ipython_notebook.ipynb +++ b/artiq/examples/artiq_ipython_notebook.ipynb @@ -127,7 +127,7 @@ "# let's connect to the master\n", "\n", "schedule, exps, datasets = [\n", - " Client(\"::1\", 3251, \"master_\" + i) for i in\n", + " Client(\"::1\", 3251, i) for i in\n", " \"schedule experiment_db dataset_db\".split()]\n", "\n", "print(\"current schedule\")\n", diff --git a/artiq/examples/kasli_shuttler/kasli_shuttler.json b/artiq/examples/kasli_shuttler/kasli_shuttler.json index 7d938ae11..2e36191b0 100644 --- a/artiq/examples/kasli_shuttler/kasli_shuttler.json +++ b/artiq/examples/kasli_shuttler/kasli_shuttler.json @@ -6,6 +6,7 @@ "peripherals": [ { "type": "shuttler", + "hw_rev": "v1.1", "ports": [0] }, { diff --git a/artiq/examples/kasli_suservo/device_db.py b/artiq/examples/kasli_suservo/device_db.py index c52b82a94..ded4ada96 100644 --- a/artiq/examples/kasli_suservo/device_db.py +++ b/artiq/examples/kasli_suservo/device_db.py @@ -5,7 +5,11 @@ device_db = { "type": "local", "module": "artiq.coredevice.core", "class": "Core", - "arguments": {"host": core_addr, "ref_period": 1e-9} + "arguments": { + "host": core_addr, + "ref_period": 1e-9, + "analyzer_proxy": "core_analyzer" + } }, "core_log": { "type": "controller", @@ -20,6 +24,13 @@ device_db = { "port": 1384, "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr }, + "core_analyzer": { + "type": "controller", + "host": "::1", + "port_proxy": 1385, + "port": 1386, + "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr + }, "core_cache": { "type": "local", "module": "artiq.coredevice.cache", diff --git a/artiq/examples/kc705_nist_clock/device_db.py b/artiq/examples/kc705_nist_clock/device_db.py index 1930f584b..f21d1c5ef 100644 --- a/artiq/examples/kc705_nist_clock/device_db.py +++ b/artiq/examples/kc705_nist_clock/device_db.py @@ -9,7 +9,11 @@ device_db = { "type": "local", "module": "artiq.coredevice.core", "class": "Core", - "arguments": {"host": core_addr, "ref_period": 1e-9} + "arguments": { + "host": core_addr, + "ref_period": 1e-9, + "analyzer_proxy": "core_analyzer" + } }, "core_log": { "type": "controller", @@ -24,6 +28,13 @@ device_db = { "port": 1384, "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr }, + "core_analyzer": { + "type": "controller", + "host": "::1", + "port_proxy": 1385, + "port": 1386, + "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr + }, "core_cache": { "type": "local", "module": "artiq.coredevice.cache", diff --git a/artiq/examples/no_hardware/repository/interactive.py 
b/artiq/examples/no_hardware/repository/interactive.py new file mode 100644 index 000000000..657721505 --- /dev/null +++ b/artiq/examples/no_hardware/repository/interactive.py @@ -0,0 +1,34 @@ +from artiq.experiment import * + + +class InteractiveDemo(EnvExperiment): + def build(self): + pass + + def run(self): + print("Waiting for user input...") + with self.interactive(title="Interactive Demo") as interactive: + interactive.setattr_argument("pyon_value", + PYONValue(self.get_dataset("foo", default=42))) + interactive.setattr_argument("number", NumberValue(42e-6, + unit="us", + precision=4)) + interactive.setattr_argument("integer", NumberValue(42, + step=1, precision=0)) + interactive.setattr_argument("string", StringValue("Hello World")) + interactive.setattr_argument("scan", Scannable(global_max=400, + default=NoScan(325), + precision=6)) + interactive.setattr_argument("boolean", BooleanValue(True), "Group") + interactive.setattr_argument("enum", + EnumerationValue(["foo", "bar", "quux"], "foo"), + "Group") + print("Done! Values:") + print(interactive.pyon_value) + print(interactive.boolean) + print(interactive.enum) + print(interactive.number, type(interactive.number)) + print(interactive.integer, type(interactive.integer)) + print(interactive.string) + for i in interactive.scan: + print(i) diff --git a/artiq/firmware/Cargo.lock b/artiq/firmware/Cargo.lock index 01d482c21..527a662ee 100644 --- a/artiq/firmware/Cargo.lock +++ b/artiq/firmware/Cargo.lock @@ -1,10 +1,13 @@ # This file is automatically @generated by Cargo. # It is not intended for manual editing. +# And yet, manual edits have been made. Crate yanking should be illegal. +version = 3 + [[package]] name = "addr2line" -version = "0.14.0" +version = "0.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7c0929d69e78dd9bf5408269919fcbcaeb2e35e5d43e5815517cdc6a8e11a423" +checksum = "3e61f2b7f93d2c7d2b08263acaa4a363b3e276806c68af6134c44f523bf1aacd" dependencies = [ "cpp_demangle", "fallible-iterator", @@ -28,15 +31,20 @@ checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" [[package]] name = "ahash" -version = "0.4.8" +version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0453232ace82dee0dd0b4c87a59bd90f7b53b314f3e0f61fe2ee7c8a16482289" +checksum = "efa60d2eadd8b12a996add391db32bd1153eac697ba4869660c0016353611426" +dependencies = [ + "getrandom", + "once_cell", + "version_check", +] [[package]] name = "aho-corasick" -version = "0.7.15" +version = "0.7.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7404febffaa47dac81aa44dba71523c9d069b1bdc50a77db41195149e17f68e5" +checksum = "cc936419f96fa211c1b9166887b38e5e40b19958e5b895be7c1f93adec7071ac" dependencies = [ "memchr", ] @@ -45,6 +53,12 @@ dependencies = [ name = "alloc_list" version = "0.0.0" +[[package]] +name = "arrayvec" +version = "0.7.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "96d30a06541fbafbc7f82ed10c06164cfbd2c401138f6addd8404629c4b16711" + [[package]] name = "bare-metal" version = "0.2.5" @@ -99,6 +113,7 @@ name = "bootloader" version = "0.0.0" dependencies = [ "addr2line", + "ahash", "board_misoc", "build_misoc", "byteorder", @@ -108,15 +123,18 @@ dependencies = [ "dlmalloc", "fortanix-sgx-abi", "getopts", + "getrandom", "hashbrown", "hermit-abi", - "libc 0.2.79", - "memchr", + "libc 0.2.99", "miniz_oxide 0.4.0", + "object", + "once_cell", "riscv", "rustc-demangle", "smoltcp", "unicode-width", + 
"version_check", "wasi", ] @@ -138,9 +156,9 @@ checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610" [[package]] name = "cc" -version = "1.0.60" +version = "1.0.69" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ef611cc68ff783f18535d77ddd080185275713d852c4f5cbb6122c462a7a825c" +checksum = "e70cc2f62c6ce1868963827bd677764c62d07c3d9a3e1fb1177ee1a9ab199eb2" [[package]] name = "cfg-if" @@ -156,9 +174,9 @@ checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "compiler_builtins" -version = "0.1.39" +version = "0.1.49" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3748f82c7d366a0b4950257d19db685d4958d2fa27c6d164a3f069fec42b748b" +checksum = "20b1438ef42c655665a8ab2c1c6d605a305f031d38d9be689ddfef41a20f3aa2" [[package]] name = "cpp_demangle" @@ -180,9 +198,9 @@ dependencies = [ [[package]] name = "crc32fast" -version = "1.4.2" +version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3" +checksum = "b3855a8a784b474f333699ef2bbca9db2c4a1f6d9088a90a2d25b1eb53111eaa" dependencies = [ "cfg-if 1.0.0", ] @@ -199,7 +217,7 @@ version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "332570860c2edf2d57914987bf9e24835425f75825086b6ba7d1e6a3e4f1f254" dependencies = [ - "libc 0.2.79", + "libc 0.2.99", ] [[package]] @@ -245,7 +263,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f54427cfd1c7829e2a139fcefea601bf088ebca651d2bf53ebc600eac295dae" dependencies = [ "crc32fast", - "miniz_oxide 0.7.3", + "miniz_oxide 0.7.2", ] [[package]] @@ -257,9 +275,9 @@ checksum = "c56c422ef86062869b2d57ae87270608dc5929969dd130a6e248979cf4fb6ca6" [[package]] name = "fringe" version = "1.2.1" -source = "git+https://git.m-labs.hk/M-Labs/libfringe.git?rev=3ecbe5#3ecbe53f7644b18ee46ebd5b2ca12c9cbceec43a" +source = "git+https://git.m-labs.hk/M-Labs/libfringe.git?rev=53a964#53a964a63d2d384b22ae1949a471a732003a30b9" dependencies = [ - "libc 0.2.79", + "libc 0.2.99", ] [[package]] @@ -272,10 +290,21 @@ dependencies = [ ] [[package]] -name = "gimli" -version = "0.23.0" +name = "getrandom" +version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f6503fe142514ca4799d4c26297c4248239fe8838d827db6bd6065c6ed29a6ce" +checksum = "ee8025cf36f917e6a52cce185b7c7177689b838b7ec138364e50cc2277a56cf4" +dependencies = [ + "cfg-if 0.1.10", + "libc 0.2.99", + "wasi", +] + +[[package]] +name = "gimli" +version = "0.25.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f0a01e0497841a3b2db4f8afa483cce65f7e96a3498bd6c541734792aeac8fe7" dependencies = [ "fallible-iterator", "stable_deref_trait", @@ -283,20 +312,20 @@ dependencies = [ [[package]] name = "hashbrown" -version = "0.9.0" +version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "00d63df3d41950fb462ed38308eea019113ad1508da725bbedcd0fa5a85ef5f7" +checksum = "362385356d610bd1e5a408ddf8d022041774b683f345a1d2cfcb4f60f8ae2db5" dependencies = [ "ahash", ] [[package]] name = "hermit-abi" -version = "0.1.17" +version = "0.1.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5aca5565f760fb5b220e499d72710ed156fdb74e631659e99377d9ebfbd13ae8" +checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33" dependencies = [ - "libc 0.2.79", + "libc 0.2.99", ] 
[[package]] @@ -337,9 +366,9 @@ version = "0.1.0" [[package]] name = "libc" -version = "0.2.79" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2448f6066e80e3bfc792e9c98bf705b4b0fc6e8ef5b43e5889aff0eaa9c58743" +checksum = "a7f823d141fe0a24df1e23b4af4e3c7ba9e5966ec514ea068c93024aa7deb765" [[package]] name = "log" @@ -379,9 +408,9 @@ checksum = "0ca88d725a0a943b096803bd34e73a4437208b6077654cc4ecb2947a5f91618d" [[package]] name = "memchr" -version = "2.3.4" +version = "2.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0ee1c47aaa256ecabcaea351eae4a9b01ef39ed810004e298d2511ed284b1525" +checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a" [[package]] name = "miniz_oxide" @@ -394,23 +423,29 @@ dependencies = [ [[package]] name = "miniz_oxide" -version = "0.7.3" +version = "0.7.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "87dfd01fe195c66b572b37921ad8803d010623c0aca821bea2302239d155cdae" +checksum = "9d811f3e15f28568be3407c8e7fdb6514c1cda3cb30683f15b6a1a1dc4ea14a7" dependencies = [ "adler 1.0.2", ] [[package]] name = "object" -version = "0.22.0" +version = "0.26.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8d3b63360ec3cb337817c2dbd47ab4a0f170d285d8e5a2064600f3def1402397" +checksum = "39f37e50073ccad23b6d09bcb5b263f4e76d3bb6038e4a3c08e52162ffa8abc2" dependencies = [ "flate2", - "wasmparser", + "memchr", ] +[[package]] +name = "once_cell" +version = "1.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "692fcb63b64b1758029e0a96ee63e049ce8c5948587f2f7208df04625e5f6b56" + [[package]] name = "proto_artiq" version = "0.0.0" @@ -433,9 +468,9 @@ checksum = "7a6e920b65c65f10b2ae65c831a81a073a89edd28c7cce89475bff467ab4167a" [[package]] name = "regex" -version = "1.4.6" +version = "1.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2a26af418b574bd56588335b3a3659a65725d4e636eb1016c2f9e3b38c7cc759" +checksum = "8b1f693b24f6ac912f4893ef08244d70b6067480d2f1a46e950c9691e6749d1d" dependencies = [ "aho-corasick", "memchr", @@ -491,14 +526,15 @@ dependencies = [ "proto_artiq", "riscv", "smoltcp", + "tar-no-std", "unwind_backtrace", ] [[package]] name = "rustc-demangle" -version = "0.1.18" +version = "0.1.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6e3bad0ee36814ca07d7968269dd4b7ec89ec2da10c4bb613928d3077083c232" +checksum = "7ef03e0a2b150c7a90d01faf6254c9c48a41e95fb2a8c2ac1c6f0d2b9aefc342" [[package]] name = "rustc_version" @@ -517,6 +553,9 @@ dependencies = [ "board_artiq", "board_misoc", "build_misoc", + "cslice", + "eh", + "io", "log", "proto_artiq", "riscv", @@ -590,6 +629,16 @@ dependencies = [ "syn", ] +[[package]] +name = "tar-no-std" +version = "0.1.8" +source = "git+https://git.m-labs.hk/M-Labs/tar-no-std?rev=2ab6dc5#2ab6dc58e5249c59c4eb03eaf3a119bcdd678d32" +dependencies = [ + "arrayvec", + "bitflags", + "log", +] + [[package]] name = "unicode-width" version = "0.1.8" @@ -618,14 +667,14 @@ dependencies = [ "unwind", ] +[[package]] +name = "version_check" +version = "0.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5fecdca9a5291cc2b8dcf7dc02453fee791a280f3743cb0905f8822ae463b3fe" + [[package]] name = "wasi" version = "0.9.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cccddf32554fecc6acb585f82a32a72e28b48f8c4c1883ddfeeeaa96f7d8e519" - 
-[[package]] -name = "wasmparser" -version = "0.57.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "32fddd575d477c6e9702484139cf9f23dcd554b06d185ed0f56c857dd3a47aa6" diff --git a/artiq/firmware/bootloader/Cargo.toml b/artiq/firmware/bootloader/Cargo.toml index 58cf68e33..911864e30 100644 --- a/artiq/firmware/bootloader/Cargo.toml +++ b/artiq/firmware/bootloader/Cargo.toml @@ -20,18 +20,23 @@ smoltcp = { version = "0.8.2", default-features = false, features = ["medium-eth riscv = { version = "0.6.0", features = ["inline-asm"] } # insanity required by using cargo build-std over xbuild with nix +# cargo update does not work thanks to ahash 0.7 problems [dev-dependencies] getopts = "=0.2.21" -libc = "=0.2.79" +libc = "=0.2.99" unicode-width = "=0.1.8" -addr2line = "=0.14.0" -memchr = "=2.3.4" -hashbrown = "=0.9.0" +addr2line = "=0.16.0" +hashbrown = "=0.11.0" # must be injected into lockfile manually +ahash = "=0.7.0" # must be injected into lockfile manually miniz_oxide = "=0.4.0" -rustc-demangle = "=0.1.18" -hermit-abi = "=0.1.17" +rustc-demangle = "=0.1.21" +hermit-abi = "=0.1.19" dlmalloc = "=0.2.1" -wasi = "=0.9.0" fortanix-sgx-abi = "=0.3.3" -cc = "=1.0.60" -compiler_builtins = "=0.1.39" \ No newline at end of file +cc = "=1.0.69" +compiler_builtins = "=0.1.49" +version_check = "=0.9.3" +once_cell = "=1.8.0" +wasi = "=0.9.0" +getrandom = "=0.2.0" +object = "=0.26.2" \ No newline at end of file diff --git a/artiq/firmware/bootloader/main.rs b/artiq/firmware/bootloader/main.rs index 50772d8f7..a7ac7fd7c 100644 --- a/artiq/firmware/bootloader/main.rs +++ b/artiq/firmware/bootloader/main.rs @@ -500,7 +500,7 @@ pub extern fn main() -> i32 { println!(r"|_| |_|_|____/ \___/ \____|"); println!(""); println!("MiSoC Bootloader"); - println!("Copyright (c) 2017-2023 M-Labs Limited"); + println!("Copyright (c) 2017-2024 M-Labs Limited"); println!(""); #[cfg(has_ethmac)] diff --git a/artiq/firmware/runtime/runtime.ld b/artiq/firmware/firmware.ld similarity index 79% rename from artiq/firmware/runtime/runtime.ld rename to artiq/firmware/firmware.ld index 9f60bf3ac..9778cb541 100644 --- a/artiq/firmware/runtime/runtime.ld +++ b/artiq/firmware/firmware.ld @@ -6,7 +6,7 @@ ENTRY(_reset_handler) * ld does not allow this expression here. */ MEMORY { - runtime (RWX) : ORIGIN = 0x40000000, LENGTH = 0x4000000 /* 64M */ + firmware (RWX) : ORIGIN = 0x40000000, LENGTH = 0x4000000 /* 64M */ } SECTIONS @@ -14,24 +14,24 @@ SECTIONS .vectors : { *(.vectors) - } > runtime + } > firmware .text : { *(.text .text.*) - } > runtime + } > firmware .eh_frame : { __eh_frame_start = .; KEEP(*(.eh_frame)) __eh_frame_end = .; - } > runtime + } > firmware .eh_frame_hdr : { KEEP(*(.eh_frame_hdr)) - } > runtime + } > firmware __eh_frame_hdr_start = SIZEOF(.eh_frame_hdr) > 0 ? ADDR(.eh_frame_hdr) : 0; __eh_frame_hdr_end = SIZEOF(.eh_frame_hdr) > 0 ? . : 0; @@ -39,35 +39,35 @@ SECTIONS .gcc_except_table : { *(.gcc_except_table) - } > runtime + } > firmware /* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */ .got : { *(.got) - } > runtime + } > firmware .got.plt : { *(.got.plt) - } > runtime + } > firmware .rodata : { *(.rodata .rodata.*) - } > runtime + } > firmware .data : { *(.data .data.*) - } > runtime + } > firmware .bss (NOLOAD) : ALIGN(4) { _fbss = .; *(.sbss .sbss.* .bss .bss.*); _ebss = .; - } > runtime + } > firmware .stack (NOLOAD) : ALIGN(0x1000) { @@ -76,12 +76,12 @@ SECTIONS _estack = .; . += 0x10000; _fstack = . 
- 16; - } > runtime + } > firmware .heap (NOLOAD) : ALIGN(16) { _fheap = .; - . = ORIGIN(runtime) + LENGTH(runtime); + . = ORIGIN(firmware) + LENGTH(firmware); _eheap = .; - } > runtime + } > firmware } diff --git a/artiq/firmware/ksupport/api.rs b/artiq/firmware/ksupport/api.rs index 1682f818b..e4d7b2482 100644 --- a/artiq/firmware/ksupport/api.rs +++ b/artiq/firmware/ksupport/api.rs @@ -157,6 +157,11 @@ static mut API: &'static [(&'static str, *const ())] = &[ api!(dma_retrieve = ::dma_retrieve), api!(dma_playback = ::dma_playback), + api!(subkernel_load_run = ::subkernel_load_run), + api!(subkernel_send_message = ::subkernel_send_message), + api!(subkernel_await_message = ::subkernel_await_message), + api!(subkernel_await_finish = ::subkernel_await_finish), + api!(i2c_start = ::nrt_bus::i2c::start), api!(i2c_restart = ::nrt_bus::i2c::restart), api!(i2c_stop = ::nrt_bus::i2c::stop), diff --git a/artiq/firmware/ksupport/eh_artiq.rs b/artiq/firmware/ksupport/eh_artiq.rs index d69608e9d..04c6e723e 100644 --- a/artiq/firmware/ksupport/eh_artiq.rs +++ b/artiq/firmware/ksupport/eh_artiq.rs @@ -55,12 +55,14 @@ struct ExceptionBuffer { exception_count: usize, } +const EXCEPTION: uw::_Unwind_Exception = uw::_Unwind_Exception { + exception_class: EXCEPTION_CLASS, + exception_cleanup: cleanup, + private: [0; uw::unwinder_private_data_size], +}; + static mut EXCEPTION_BUFFER: ExceptionBuffer = ExceptionBuffer { - uw_exceptions: [uw::_Unwind_Exception { - exception_class: EXCEPTION_CLASS, - exception_cleanup: cleanup, - private: [0; uw::unwinder_private_data_size], - }; MAX_INFLIGHT_EXCEPTIONS], + uw_exceptions: [EXCEPTION; MAX_INFLIGHT_EXCEPTIONS], exceptions: [None; MAX_INFLIGHT_EXCEPTIONS + 1], exception_stack: [-1; MAX_INFLIGHT_EXCEPTIONS + 1], backtrace: [(0, 0); MAX_BACKTRACE_SIZE], @@ -74,11 +76,7 @@ static mut EXCEPTION_BUFFER: ExceptionBuffer = ExceptionBuffer { }; pub unsafe extern fn reset_exception_buffer(payload_addr: usize) { - EXCEPTION_BUFFER.uw_exceptions = [uw::_Unwind_Exception { - exception_class: EXCEPTION_CLASS, - exception_cleanup: cleanup, - private: [0; uw::unwinder_private_data_size], - }; MAX_INFLIGHT_EXCEPTIONS]; + EXCEPTION_BUFFER.uw_exceptions = [EXCEPTION; MAX_INFLIGHT_EXCEPTIONS]; EXCEPTION_BUFFER.exceptions = [None; MAX_INFLIGHT_EXCEPTIONS + 1]; EXCEPTION_BUFFER.exception_stack = [-1; MAX_INFLIGHT_EXCEPTIONS + 1]; EXCEPTION_BUFFER.backtrace_size = 0; @@ -151,8 +149,7 @@ pub extern fn personality(version: c_int, } #[export_name="__artiq_raise"] -#[unwind(allowed)] -pub unsafe extern fn raise(exception: *const Exception) -> ! { +pub unsafe extern "C-unwind" fn raise(exception: *const Exception) -> ! { let count = EXCEPTION_BUFFER.exception_count; let stack = &mut EXCEPTION_BUFFER.exception_stack; let diff = exception as isize - EXCEPTION_BUFFER.exceptions.as_ptr() as isize; @@ -222,8 +219,7 @@ pub unsafe extern fn raise(exception: *const Exception) -> ! { #[export_name="__artiq_resume"] -#[unwind(allowed)] -pub unsafe extern fn resume() -> ! { +pub unsafe extern "C-unwind" fn resume() -> ! { assert!(EXCEPTION_BUFFER.exception_count != 0); let i = EXCEPTION_BUFFER.exception_stack[EXCEPTION_BUFFER.exception_count - 1]; assert!(i != -1); @@ -233,8 +229,7 @@ pub unsafe extern fn resume() -> ! 
{ } #[export_name="__artiq_end_catch"] -#[unwind(allowed)] -pub unsafe extern fn end_catch() { +pub unsafe extern "C-unwind" fn end_catch() { let mut count = EXCEPTION_BUFFER.exception_count; assert!(count != 0); // we remove all exceptions with SP <= current exception SP @@ -333,8 +328,7 @@ extern fn stop_fn(_version: c_int, } } -// Must be kept in sync with preallocate_runtime_exception_names() in artiq/language/embedding_map.py -static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [ +static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [ ("RuntimeError", 0), ("RTIOUnderflow", 1), ("RTIOOverflow", 2), @@ -346,6 +340,7 @@ static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [ ("ZeroDivisionError", 8), ("IndexError", 9), ("UnwrapNoneError", 10), + ("SubkernelError", 11) ]; pub fn get_exception_id(name: &str) -> u32 { diff --git a/artiq/firmware/ksupport/lib.rs b/artiq/firmware/ksupport/lib.rs index 685516e2b..42f9de3e5 100644 --- a/artiq/firmware/ksupport/lib.rs +++ b/artiq/firmware/ksupport/lib.rs @@ -1,5 +1,5 @@ -#![feature(lang_items, llvm_asm, panic_unwind, libc, unwind_attributes, - panic_info_message, nll, const_in_array_repeat_expressions)] +#![feature(lang_items, asm, panic_unwind, libc, + panic_info_message, nll, c_unwind)] #![no_std] extern crate libc; @@ -21,7 +21,6 @@ use dyld::Library; use board_artiq::{mailbox, rpc_queue}; use proto_artiq::{kernel_proto, rpc_proto}; use kernel_proto::*; -#[cfg(has_rtio_dma)] use board_misoc::csr; use riscv::register::{mcause, mepc, mtval}; @@ -31,8 +30,9 @@ fn send(request: &Message) { } fn recv R>(f: F) -> R { - while mailbox::receive() == 0 {} - let result = f(unsafe { &*(mailbox::receive() as *const Message) }); + let mut msg_ptr = 0; + while msg_ptr == 0 { msg_ptr = mailbox::receive(); } + let result = f(unsafe { &*(msg_ptr as *const Message) }); mailbox::acknowledge(); result } @@ -122,7 +122,6 @@ pub extern fn send_to_rtio_log(text: CSlice) { rtio::log(text.as_ref()) } -#[unwind(aborts)] extern fn rpc_send(service: u32, tag: &CSlice, data: *const *const ()) { while !rpc_queue::empty() {} send(&RpcSend { @@ -133,13 +132,12 @@ extern fn rpc_send(service: u32, tag: &CSlice, data: *const *const ()) { }) } -#[unwind(aborts)] extern fn rpc_send_async(service: u32, tag: &CSlice, data: *const *const ()) { while rpc_queue::full() {} rpc_queue::enqueue(|mut slice| { let length = { let mut writer = Cursor::new(&mut slice[4..]); - rpc_proto::send_args(&mut writer, service, tag.as_ref(), data)?; + rpc_proto::send_args(&mut writer, service, tag.as_ref(), data, true)?; writer.position() }; io::ProtoWrite::write_u32(&mut slice, length as u32) @@ -171,8 +169,7 @@ extern fn rpc_send_async(service: u32, tag: &CSlice, data: *const *const ()) /// to the maximum required for any of the possible types according to the target ABI). /// /// If the RPC call resulted in an exception, it is reconstructed and raised. 
-#[unwind(allowed)] -extern fn rpc_recv(slot: *mut ()) -> usize { +extern "C-unwind" fn rpc_recv(slot: *mut ()) -> usize { send(&RpcRecvRequest(slot)); recv!(&RpcRecvReply(ref result) => { match result { @@ -204,7 +201,6 @@ fn terminate(exceptions: &'static [Option>], loop {} } -#[unwind(aborts)] extern fn cache_get<'a>(key: &CSlice) -> *const CSlice<'a, i32> { send(&CacheGetRequest { key: str::from_utf8(key.as_ref()).unwrap() @@ -214,8 +210,7 @@ extern fn cache_get<'a>(key: &CSlice) -> *const CSlice<'a, i32> { }) } -#[unwind(allowed)] -extern fn cache_put(key: &CSlice, list: &CSlice) { +extern "C-unwind" fn cache_put(key: &CSlice, list: &CSlice) { send(&CachePutRequest { key: str::from_utf8(key.as_ref()).unwrap(), value: list.as_ref() @@ -248,8 +243,7 @@ fn dma_record_flush() { } } -#[unwind(allowed)] -extern fn dma_record_start(name: &CSlice) { +extern "C-unwind" fn dma_record_start(name: &CSlice) { let name = str::from_utf8(name.as_ref()).unwrap(); unsafe { @@ -268,8 +262,7 @@ extern fn dma_record_start(name: &CSlice) { } } -#[unwind(allowed)] -extern fn dma_record_stop(duration: i64, enable_ddma: bool) { +extern "C-unwind" fn dma_record_stop(duration: i64, enable_ddma: bool) { unsafe { dma_record_flush(); @@ -291,7 +284,6 @@ extern fn dma_record_stop(duration: i64, enable_ddma: bool) { } } -#[unwind(aborts)] #[inline(always)] unsafe fn dma_record_output_prepare(timestamp: i64, target: i32, words: usize) -> &'static mut [u8] { @@ -328,7 +320,6 @@ unsafe fn dma_record_output_prepare(timestamp: i64, target: i32, data } -#[unwind(aborts)] extern fn dma_record_output(target: i32, word: i32) { unsafe { let timestamp = ((csr::rtio::now_hi_read() as i64) << 32) | (csr::rtio::now_lo_read() as i64); @@ -342,7 +333,6 @@ extern fn dma_record_output(target: i32, word: i32) { } } -#[unwind(aborts)] extern fn dma_record_output_wide(target: i32, words: &CSlice) { assert!(words.len() <= 16); // enforce the hardware limit @@ -361,7 +351,6 @@ extern fn dma_record_output_wide(target: i32, words: &CSlice) { } } -#[unwind(aborts)] extern fn dma_erase(name: &CSlice) { let name = str::from_utf8(name.as_ref()).unwrap(); @@ -375,8 +364,7 @@ struct DmaTrace { uses_ddma: bool, } -#[unwind(allowed)] -extern fn dma_retrieve(name: &CSlice) -> DmaTrace { +extern "C-unwind" fn dma_retrieve(name: &CSlice) -> DmaTrace { let name = str::from_utf8(name.as_ref()).unwrap(); send(&DmaRetrieveRequest { name: name }); @@ -396,9 +384,8 @@ extern fn dma_retrieve(name: &CSlice) -> DmaTrace { }) } -#[cfg(has_rtio_dma)] -#[unwind(allowed)] -extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) { +#[cfg(kernel_has_rtio_dma)] +extern "C-unwind" fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) { assert!(ptr % 64 == 0); unsafe { @@ -454,10 +441,97 @@ extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) { } } -#[cfg(not(has_rtio_dma))] -#[unwind(allowed)] -extern fn dma_playback(_timestamp: i64, _ptr: i32, _uses_ddma: bool) { - unimplemented!("not(has_rtio_dma)") +#[cfg(all(not(kernel_has_rtio_dma), not(has_rtio_dma)))] +extern "C-unwind" fn dma_playback(_timestamp: i64, _ptr: i32, _uses_ddma: bool) { + unimplemented!("not(kernel_has_rtio_dma)") +} + +// for satellite (has_rtio_dma but not in kernel) +#[cfg(all(not(kernel_has_rtio_dma), has_rtio_dma))] +extern "C-unwind" fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) { + // DDMA is always used on satellites, so the `uses_ddma` setting is ignored + // StartRemoteRequest reused as "normal" start request + send(&DmaStartRemoteRequest { id: 
ptr as i32, timestamp: timestamp }); + // skip awaitremoterequest - it's a given + recv!(&DmaAwaitRemoteReply { timeout, error, channel, timestamp } => { + if timeout { + raise!("DMAError", + "Error running DMA on satellite device, timed out waiting for results"); + } + if error & 1 != 0 { + raise!("RTIOUnderflow", + "RTIO underflow at channel {rtio_channel_info:0}, {1} mu", + channel as i64, timestamp as i64, 0); + } + if error & 2 != 0 { + raise!("RTIODestinationUnreachable", + "RTIO destination unreachable, output, at channel {rtio_channel_info:0}, {1} mu", + channel as i64, timestamp as i64, 0); + } + }); +} + + +extern "C-unwind" fn subkernel_load_run(id: u32, destination: u8, run: bool) { + send(&SubkernelLoadRunRequest { id: id, destination: destination, run: run }); + recv!(&SubkernelLoadRunReply { succeeded } => { + if !succeeded { + raise!("SubkernelError", + "Error loading or running the subkernel"); + } + }); +} + +extern "C-unwind" fn subkernel_await_finish(id: u32, timeout: i64) { + send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout }); + recv!(SubkernelAwaitFinishReply { status } => { + match status { + SubkernelStatus::NoError => (), + SubkernelStatus::IncorrectState => raise!("SubkernelError", + "Subkernel not running"), + SubkernelStatus::Timeout => raise!("SubkernelError", + "Subkernel timed out"), + SubkernelStatus::CommLost => raise!("SubkernelError", + "Lost communication with satellite"), + SubkernelStatus::OtherError => raise!("SubkernelError", + "An error occurred during subkernel operation") + } + }) +} + +extern fn subkernel_send_message(id: u32, is_return: bool, destination: u8, + count: u8, tag: &CSlice, data: *const *const ()) { + send(&SubkernelMsgSend { + id: id, + destination: if is_return { None } else { Some(destination) }, + count: count, + tag: tag.as_ref(), + data: data + }); +} + +extern "C-unwind" fn subkernel_await_message(id: i32, timeout: i64, tags: &CSlice, min: u8, max: u8) -> u8 { + send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() }); + recv!(SubkernelMsgRecvReply { status, count } => { + match status { + SubkernelStatus::NoError => { + if count < &min || count > &max { + raise!("SubkernelError", + "Received less or more arguments than expected"); + } + *count + } + SubkernelStatus::IncorrectState => raise!("SubkernelError", + "Subkernel not running"), + SubkernelStatus::Timeout => raise!("SubkernelError", + "Subkernel timed out"), + SubkernelStatus::CommLost => raise!("SubkernelError", + "Lost communication with satellite"), + SubkernelStatus::OtherError => raise!("SubkernelError", + "An error occurred during subkernel operation") + } + }) + // RpcRecvRequest should be called `count` times after this to receive message data } unsafe fn attribute_writeback(typeinfo: *const ()) { @@ -561,8 +635,7 @@ pub unsafe fn main() { } #[no_mangle] -#[unwind(allowed)] -pub unsafe extern fn exception(_regs: *const u32) { +pub unsafe extern "C-unwind" fn exception(_regs: *const u32) { let pc = mepc::read(); let cause = mcause::read().cause(); let mtval = mtval::read(); @@ -580,7 +653,6 @@ pub unsafe extern fn exception(_regs: *const u32) { } #[no_mangle] -#[unwind(allowed)] -pub extern fn abort() { +pub extern "C-unwind" fn abort() { panic!("aborted") } diff --git a/artiq/firmware/libboard_artiq/Cargo.toml b/artiq/firmware/libboard_artiq/Cargo.toml index 8405ee892..f884b0fa0 100644 --- a/artiq/firmware/libboard_artiq/Cargo.toml +++ b/artiq/firmware/libboard_artiq/Cargo.toml @@ -25,3 +25,4 @@ proto_artiq = { path = 
"../libproto_artiq" } [features] uart_console = [] alloc = [] +calibrate_wrpll_skew = [] diff --git a/artiq/firmware/libboard_artiq/drtioaux.rs b/artiq/firmware/libboard_artiq/drtioaux.rs index 818775d7b..40cc8bb54 100644 --- a/artiq/firmware/libboard_artiq/drtioaux.rs +++ b/artiq/firmware/libboard_artiq/drtioaux.rs @@ -69,9 +69,9 @@ fn receive(linkno: u8, f: F) -> Result, Error> let linkidx = linkno as usize; unsafe { if (DRTIOAUX[linkidx].aux_rx_present_read)() == 1 { - let ptr = DRTIOAUX_MEM[linkidx].base + DRTIOAUX_MEM[linkidx].size / 2; - let len = (DRTIOAUX[linkidx].aux_rx_length_read)(); - let result = f(slice::from_raw_parts(ptr as *mut u8, len as usize)); + let read_ptr = (DRTIOAUX[linkidx].aux_read_pointer_read)() as usize; + let ptr = DRTIOAUX_MEM[linkidx].base + (DRTIOAUX_MEM[linkidx].size / 2) + (read_ptr * 0x400); + let result = f(slice::from_raw_parts(ptr as *mut u8, 0x400 as usize)); (DRTIOAUX[linkidx].aux_rx_present_write)(1); Ok(Some(result?)) } else { @@ -86,21 +86,17 @@ pub fn recv(linkno: u8) -> Result, Error> { } receive(linkno, |buffer| { - if buffer.len() < 8 { - return Err(IoError::UnexpectedEnd.into()) - } - let mut reader = Cursor::new(buffer); - let checksum_at = buffer.len() - 4; + let packet = Packet::read_from(&mut reader)?; + let padding = (12 - (reader.position() % 8)) % 8; + let checksum_at = reader.position() + padding; let checksum = crc::crc32::checksum_ieee(&reader.get_ref()[0..checksum_at]); reader.set_position(checksum_at); if reader.read_u32()? != checksum { return Err(Error::CorruptedPacket) } - reader.set_position(0); - - Ok(Packet::read_from(&mut reader)?) + Ok(packet) }) } diff --git a/artiq/firmware/libboard_artiq/lib.rs b/artiq/firmware/libboard_artiq/lib.rs index f564c98e4..906a84e1a 100644 --- a/artiq/firmware/libboard_artiq/lib.rs +++ b/artiq/firmware/libboard_artiq/lib.rs @@ -1,4 +1,4 @@ -#![feature(lang_items, never_type)] +#![feature(asm, lang_items, never_type)] #![no_std] extern crate failure; @@ -24,6 +24,8 @@ pub mod rpc_queue; #[cfg(has_si5324)] pub mod si5324; +#[cfg(has_si549)] +pub mod si549; #[cfg(has_grabber)] pub mod grabber; diff --git a/artiq/firmware/libboard_artiq/mailbox.rs b/artiq/firmware/libboard_artiq/mailbox.rs index 9c1f374f6..fd4ed7d67 100644 --- a/artiq/firmware/libboard_artiq/mailbox.rs +++ b/artiq/firmware/libboard_artiq/mailbox.rs @@ -6,7 +6,11 @@ static mut LAST: usize = 0; pub unsafe fn send(data: usize) { LAST = data; - write_volatile(MAILBOX, data) + // after Rust toolchain update to LLVM12, this empty asm! 
block is required + // to ensure that the compiler doesn't take any shortcuts + // otherwise, the comm CPU will read garbage data and crash + asm!("", options(preserves_flags, readonly, nostack)); + write_volatile(MAILBOX, data); } pub fn acknowledged() -> bool { diff --git a/artiq/firmware/libboard_artiq/si549.rs b/artiq/firmware/libboard_artiq/si549.rs new file mode 100644 index 000000000..f610e9a6d --- /dev/null +++ b/artiq/firmware/libboard_artiq/si549.rs @@ -0,0 +1,862 @@ +use board_misoc::{clock, csr}; +use log::info; + +const ADDRESS: u8 = 0x67; + +const ADPLL_MAX: i32 = (950.0 / 0.0001164) as i32; + +pub struct DividerConfig { + pub hsdiv: u16, + pub lsdiv: u8, + pub fbdiv: u64, +} + +pub struct FrequencySetting { + pub main: DividerConfig, + pub helper: DividerConfig, +} + +mod i2c { + use super::*; + + #[derive(Clone, Copy)] + pub enum DCXO { + Main, + Helper, + } + + fn half_period() { + clock::spin_us(1) + } + + fn sda_i(dcxo: DCXO) -> bool { + match dcxo { + DCXO::Main => unsafe { csr::wrpll::main_dcxo_sda_in_read() == 1 }, + DCXO::Helper => unsafe { csr::wrpll::helper_dcxo_sda_in_read() == 1 }, + } + } + + fn sda_oe(dcxo: DCXO, oe: bool) { + let val = if oe { 1 } else { 0 }; + match dcxo { + DCXO::Main => unsafe { csr::wrpll::main_dcxo_sda_oe_write(val) }, + DCXO::Helper => unsafe { csr::wrpll::helper_dcxo_sda_oe_write(val) }, + }; + } + + fn sda_o(dcxo: DCXO, o: bool) { + let val = if o { 1 } else { 0 }; + match dcxo { + DCXO::Main => unsafe { csr::wrpll::main_dcxo_sda_out_write(val) }, + DCXO::Helper => unsafe { csr::wrpll::helper_dcxo_sda_out_write(val) }, + }; + } + + fn scl_oe(dcxo: DCXO, oe: bool) { + let val = if oe { 1 } else { 0 }; + match dcxo { + DCXO::Main => unsafe { csr::wrpll::main_dcxo_scl_oe_write(val) }, + DCXO::Helper => unsafe { csr::wrpll::helper_dcxo_scl_oe_write(val) }, + }; + } + + fn scl_o(dcxo: DCXO, o: bool) { + let val = if o { 1 } else { 0 }; + match dcxo { + DCXO::Main => unsafe { csr::wrpll::main_dcxo_scl_out_write(val) }, + DCXO::Helper => unsafe { csr::wrpll::helper_dcxo_scl_out_write(val) }, + }; + } + + pub fn init(dcxo: DCXO) -> Result<(), &'static str> { + // Set SCL as output, and high level + scl_o(dcxo, true); + scl_oe(dcxo, true); + // Prepare a zero level on SDA so that sda_oe pulls it down + sda_o(dcxo, false); + // Release SDA + sda_oe(dcxo, false); + + // Check the I2C bus is ready + half_period(); + half_period(); + if !sda_i(dcxo) { + // Try toggling SCL a few times + for _bit in 0..8 { + scl_o(dcxo, false); + half_period(); + scl_o(dcxo, true); + half_period(); + } + } + + if !sda_i(dcxo) { + return Err("SDA is stuck low and doesn't get unstuck"); + } + Ok(()) + } + + pub fn start(dcxo: DCXO) { + // Set SCL high then SDA low + scl_o(dcxo, true); + half_period(); + sda_oe(dcxo, true); + half_period(); + } + + pub fn stop(dcxo: DCXO) { + // First, make sure SCL is low, so that the target releases the SDA line + scl_o(dcxo, false); + half_period(); + // Set SCL high then SDA high + sda_oe(dcxo, true); + scl_o(dcxo, true); + half_period(); + sda_oe(dcxo, false); + half_period(); + } + + pub fn write(dcxo: DCXO, data: u8) -> bool { + // MSB first + for bit in (0..8).rev() { + // Set SCL low and set our bit on SDA + scl_o(dcxo, false); + sda_oe(dcxo, data & (1 << bit) == 0); + half_period(); + // Set SCL high ; data is shifted on the rising edge of SCL + scl_o(dcxo, true); + half_period(); + } + // Check ack + // Set SCL low, then release SDA so that the I2C target can respond + scl_o(dcxo, false); + half_period(); + sda_oe(dcxo, 
false); + // Set SCL high and check for ack + scl_o(dcxo, true); + half_period(); + // returns true if acked (I2C target pulled SDA low) + !sda_i(dcxo) + } + + pub fn read(dcxo: DCXO, ack: bool) -> u8 { + // Set SCL low first, otherwise setting SDA as input may cause a transition + // on SDA with SCL high which will be interpreted as START/STOP condition. + scl_o(dcxo, false); + half_period(); // make sure SCL has settled low + sda_oe(dcxo, false); + + let mut data: u8 = 0; + + // MSB first + for bit in (0..8).rev() { + scl_o(dcxo, false); + half_period(); + // Set SCL high and shift data + scl_o(dcxo, true); + half_period(); + if sda_i(dcxo) { + data |= 1 << bit + } + } + // Send ack + // Set SCL low and pull SDA low when acking + scl_o(dcxo, false); + if ack { + sda_oe(dcxo, true) + } + half_period(); + // then set SCL high + scl_o(dcxo, true); + half_period(); + + data + } +} + +fn write(dcxo: i2c::DCXO, reg: u8, val: u8) -> Result<(), &'static str> { + i2c::start(dcxo); + if !i2c::write(dcxo, ADDRESS << 1) { + return Err("Si549 failed to ack write address"); + } + if !i2c::write(dcxo, reg) { + return Err("Si549 failed to ack register"); + } + if !i2c::write(dcxo, val) { + return Err("Si549 failed to ack value"); + } + i2c::stop(dcxo); + Ok(()) +} + +fn read(dcxo: i2c::DCXO, reg: u8) -> Result { + i2c::start(dcxo); + if !i2c::write(dcxo, ADDRESS << 1) { + return Err("Si549 failed to ack write address"); + } + if !i2c::write(dcxo, reg) { + return Err("Si549 failed to ack register"); + } + i2c::stop(dcxo); + + i2c::start(dcxo); + if !i2c::write(dcxo, (ADDRESS << 1) | 1) { + return Err("Si549 failed to ack read address"); + } + let val = i2c::read(dcxo, false); + i2c::stop(dcxo); + Ok(val) +} + +fn setup(dcxo: i2c::DCXO, config: &DividerConfig) -> Result<(), &'static str> { + i2c::init(dcxo)?; + + write(dcxo, 255, 0x00)?; // PAGE + write(dcxo, 69, 0x00)?; // Disable FCAL override. + write(dcxo, 17, 0x00)?; // Synchronously disable output + + // The Si549 has no ID register, so we check that it responds correctly + // by writing values to a RAM-like register and reading them back. + for test_value in 0..255 { + write(dcxo, 23, test_value)?; + let readback = read(dcxo, 23)?; + if readback != test_value { + return Err("Si549 detection failed"); + } + } + + write(dcxo, 23, config.hsdiv as u8)?; + write(dcxo, 24, (config.hsdiv >> 8) as u8 | (config.lsdiv << 4))?; + write(dcxo, 26, config.fbdiv as u8)?; + write(dcxo, 27, (config.fbdiv >> 8) as u8)?; + write(dcxo, 28, (config.fbdiv >> 16) as u8)?; + write(dcxo, 29, (config.fbdiv >> 24) as u8)?; + write(dcxo, 30, (config.fbdiv >> 32) as u8)?; + write(dcxo, 31, (config.fbdiv >> 40) as u8)?; + + write(dcxo, 7, 0x08)?; // Start FCAL + clock::spin_us(30_000); // Internal FCAL VCO calibration + write(dcxo, 17, 0x01)?; // Synchronously enable output + + Ok(()) +} + +pub fn main_setup(settings: &FrequencySetting) -> Result<(), &'static str> { + unsafe { + csr::wrpll::main_dcxo_bitbang_enable_write(1); + csr::wrpll::main_dcxo_i2c_address_write(ADDRESS); + } + + setup(i2c::DCXO::Main, &settings.main)?; + + // Si549 maximum settling time for large frequency change. 
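// Illustrative sketch (editor's addition, not part of the driver code above): the
// register layout used by `setup()`. HSDIV occupies registers 23-24 with LSDIV packed
// into the upper nibble of register 24, and the low 48 bits of FBDIV are written
// LSB-first into registers 26-31. Register numbers are taken from the code above and
// have not been re-checked against the Si549 datasheet.
fn divider_register_bytes(config: &DividerConfig) -> [(u8, u8); 8] {
    [
        (23, config.hsdiv as u8),
        (24, (config.hsdiv >> 8) as u8 | (config.lsdiv << 4)),
        (26, config.fbdiv as u8),
        (27, (config.fbdiv >> 8) as u8),
        (28, (config.fbdiv >> 16) as u8),
        (29, (config.fbdiv >> 24) as u8),
        (30, (config.fbdiv >> 32) as u8),
        (31, (config.fbdiv >> 40) as u8),
    ]
}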
+ clock::spin_us(40_000); + + unsafe { + csr::wrpll::main_dcxo_bitbang_enable_write(0); + } + + info!("Main Si549 started"); + Ok(()) +} + +pub fn helper_setup(settings: &FrequencySetting) -> Result<(), &'static str> { + unsafe { + csr::wrpll::helper_reset_write(1); + csr::wrpll::helper_dcxo_bitbang_enable_write(1); + csr::wrpll::helper_dcxo_i2c_address_write(ADDRESS); + } + + setup(i2c::DCXO::Helper, &settings.helper)?; + + // Si549 maximum settling time for large frequency change. + clock::spin_us(40_000); + + unsafe { + csr::wrpll::helper_reset_write(0); + csr::wrpll::helper_dcxo_bitbang_enable_write(0); + } + info!("Helper Si549 started"); + Ok(()) +} + +fn set_adpll(dcxo: i2c::DCXO, adpll: i32) -> Result<(), &'static str> { + if adpll.abs() > ADPLL_MAX { + return Err("adpll is too large"); + } + + match dcxo { + i2c::DCXO::Main => unsafe { + if csr::wrpll::main_dcxo_bitbang_enable_read() == 1 { + return Err("Main si549 bitbang mode is active when using gateware i2c"); + } + + while csr::wrpll::main_dcxo_adpll_busy_read() == 1 {} + if csr::wrpll::main_dcxo_nack_read() == 1 { + return Err("Main si549 failed to ack adpll write"); + } + + csr::wrpll::main_dcxo_i2c_address_write(ADDRESS); + csr::wrpll::main_dcxo_adpll_write(adpll as u32); + + csr::wrpll::main_dcxo_adpll_stb_write(1); + }, + i2c::DCXO::Helper => unsafe { + if csr::wrpll::helper_dcxo_bitbang_enable_read() == 1 { + return Err("Helper si549 bitbang mode is active when using gateware i2c"); + } + + while csr::wrpll::helper_dcxo_adpll_busy_read() == 1 {} + if csr::wrpll::helper_dcxo_nack_read() == 1 { + return Err("Helper si549 failed to ack adpll write"); + } + + csr::wrpll::helper_dcxo_i2c_address_write(ADDRESS); + csr::wrpll::helper_dcxo_adpll_write(adpll as u32); + + csr::wrpll::helper_dcxo_adpll_stb_write(1); + }, + }; + + Ok(()) +} + +#[cfg(has_wrpll)] +pub mod wrpll { + + use super::*; + + const BEATING_PERIOD: i32 = 0x8000; + const BEATING_HALFPERIOD: i32 = 0x4000; + const COUNTER_WIDTH: u32 = 24; + const DIV_WIDTH: u32 = 2; + + // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2] + struct FilterParameters { + pub b0: i64, + pub b1: i64, + pub b2: i64, + pub a1: i64, + pub a2: i64, + } + + #[cfg(rtio_frequency = "100.0")] + const LPF: FilterParameters = FilterParameters { + b0: 10905723400, // 0.03967479060647884 * 1 << 38 + b1: 21811446800, // 0.07934958121295768 * 1 << 38 + b2: 10905723400, // 0.03967479060647884 * 1 << 38 + a1: -381134538612, // -1.3865593741228928 * 1 << 38 + a2: 149879525269, // 0.5452585365488082 * 1 << 38 + }; + + #[cfg(rtio_frequency = "125.0")] + const LPF: FilterParameters = FilterParameters { + b0: 19816511911, // 0.07209205036273991 * 1 << 38 + b1: 39633023822, // 0.14418410072547982 * 1 << 38 + b2: 19816511911, // 0.07209205036273991 * 1 << 38 + a1: -168062510414, // -0.6114078511562919 * 1 << 38 + a2: -27549348884, // -0.10022394739274834 * 1 << 38 + }; + + static mut H_ADPLL1: i32 = 0; + static mut H_ADPLL2: i32 = 0; + static mut PERIOD_ERR1: i32 = 0; + static mut PERIOD_ERR2: i32 = 0; + + static mut M_ADPLL1: i32 = 0; + static mut M_ADPLL2: i32 = 0; + static mut PHASE_ERR1: i32 = 0; + static mut PHASE_ERR2: i32 = 0; + + static mut BASE_ADPLL: i32 = 0; + + #[derive(Clone, Copy)] + pub enum ISR { + RefTag, + MainTag, + } + + mod tag_collector { + use super::*; + + #[cfg(wrpll_ref_clk = "GT_CDR")] + static mut TAG_OFFSET: u32 = 23890; + #[cfg(wrpll_ref_clk = "SMA_CLKIN")] + static mut TAG_OFFSET: u32 = 0; + static mut REF_TAG: u32 = 0; + static mut REF_TAG_READY: bool = false; + 
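// Illustrative sketch (editor's addition, not part of the WRPLL code above): the LPF
// coefficients are the biquad from the comment
//     y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
// stored as signed Q38 fixed point, i.e. round(coefficient * 2^38); for instance
// 0.03967479060647884 * (1 << 38) ~= 10_905_723_400, the b0/b2 value of the 100 MHz set.
// helper_pll()/main_pll() below evaluate the same expression in i64 and shift right by
// 38 bits to get back to integer ADPLL units.
fn q38(x: f64) -> i64 {
    (x * (1u64 << 38) as f64) as i64
}

fn biquad_step_q38(b: [i64; 3], a: [i64; 2], x: [i64; 3], y: [i64; 2]) -> i64 {
    // direct-form I update matching the difference equation above
    (b[0] * x[0] + b[1] * x[1] + b[2] * x[2] - a[0] * y[0] - a[1] * y[1]) >> 38
}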
static mut MAIN_TAG: u32 = 0; + static mut MAIN_TAG_READY: bool = false; + + pub fn reset() { + clear_phase_diff_ready(); + unsafe { + REF_TAG = 0; + MAIN_TAG = 0; + } + } + + pub fn clear_phase_diff_ready() { + unsafe { + REF_TAG_READY = false; + MAIN_TAG_READY = false; + } + } + + pub fn collect_tags(interrupt: ISR) { + match interrupt { + ISR::RefTag => unsafe { + REF_TAG = csr::wrpll::ref_tag_read(); + REF_TAG_READY = true; + }, + ISR::MainTag => unsafe { + MAIN_TAG = csr::wrpll::main_tag_read(); + MAIN_TAG_READY = true; + }, + } + } + + pub fn phase_diff_ready() -> bool { + unsafe { REF_TAG_READY && MAIN_TAG_READY } + } + + #[cfg(feature = "calibrate_wrpll_skew")] + pub fn set_tag_offset(offset: u32) { + unsafe { + TAG_OFFSET = offset; + } + } + + #[cfg(feature = "calibrate_wrpll_skew")] + pub fn get_tag_offset() -> u32 { + unsafe { TAG_OFFSET } + } + + pub fn get_period_error() -> i32 { + // n * BEATING_PERIOD - REF_TAG(n) mod BEATING_PERIOD + let mut period_error = unsafe { + REF_TAG + .overflowing_neg() + .0 + .rem_euclid(BEATING_PERIOD as u32) as i32 + }; + // mapping tags from [0, 2π] -> [-π, π] + if period_error > BEATING_HALFPERIOD { + period_error -= BEATING_PERIOD + } + period_error + } + + pub fn get_phase_error() -> i32 { + // MAIN_TAG(n) - REF_TAG(n) - TAG_OFFSET mod BEATING_PERIOD + let mut phase_error = unsafe { + MAIN_TAG + .overflowing_sub(REF_TAG + TAG_OFFSET) + .0 + .rem_euclid(BEATING_PERIOD as u32) as i32 + }; + + // mapping tags from [0, 2π] -> [-π, π] + if phase_error > BEATING_HALFPERIOD { + phase_error -= BEATING_PERIOD + } + phase_error + } + } + + fn set_isr(en: bool) { + let val = if en { 1 } else { 0 }; + unsafe { + csr::wrpll::ref_tag_ev_enable_write(val); + csr::wrpll::main_tag_ev_enable_write(val); + } + } + + fn set_base_adpll() -> Result<(), &'static str> { + let count2adpll = |error: i32| { + ((error as f64 * 1e6) / (0.0001164 * (1 << (COUNTER_WIDTH - DIV_WIDTH)) as f64)) as i32 + }; + + let (ref_count, main_count) = get_freq_counts(); + unsafe { + BASE_ADPLL = count2adpll(ref_count as i32 - main_count as i32); + set_adpll(i2c::DCXO::Main, BASE_ADPLL)?; + set_adpll(i2c::DCXO::Helper, BASE_ADPLL)?; + } + Ok(()) + } + + fn get_freq_counts() -> (u32, u32) { + unsafe { + csr::wrpll::frequency_counter_update_write(1); + while csr::wrpll::frequency_counter_busy_read() == 1 {} + #[cfg(wrpll_ref_clk = "GT_CDR")] + let ref_count = csr::wrpll::frequency_counter_counter_rtio_rx0_read(); + #[cfg(wrpll_ref_clk = "SMA_CLKIN")] + let ref_count = csr::wrpll::frequency_counter_counter_ref_read(); + let main_count = csr::wrpll::frequency_counter_counter_sys_read(); + + (ref_count, main_count) + } + } + + fn reset_plls() -> Result<(), &'static str> { + unsafe { + H_ADPLL1 = 0; + H_ADPLL2 = 0; + PERIOD_ERR1 = 0; + PERIOD_ERR2 = 0; + M_ADPLL1 = 0; + M_ADPLL2 = 0; + PHASE_ERR1 = 0; + PHASE_ERR2 = 0; + } + set_adpll(i2c::DCXO::Main, 0)?; + set_adpll(i2c::DCXO::Helper, 0)?; + // wait for adpll to transfer and DCXO to settle + clock::spin_us(200); + Ok(()) + } + + fn clear_pending(interrupt: ISR) { + match interrupt { + ISR::RefTag => unsafe { csr::wrpll::ref_tag_ev_pending_write(1) }, + ISR::MainTag => unsafe { csr::wrpll::main_tag_ev_pending_write(1) }, + }; + } + + fn is_pending(interrupt: ISR) -> bool { + match interrupt { + ISR::RefTag => unsafe { csr::wrpll::ref_tag_ev_pending_read() == 1 }, + ISR::MainTag => unsafe { csr::wrpll::main_tag_ev_pending_read() == 1 }, + } + } + + pub fn interrupt_handler() { + if is_pending(ISR::RefTag) { + 
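// Illustrative sketch (editor's addition, not part of the WRPLL code above): the unit
// conversion performed by count2adpll() inside set_base_adpll(). Dividing the count
// difference by 2^(COUNTER_WIDTH - DIV_WIDTH) = 2^22 (which suggests the frequency
// counters accumulate over 2^22 cycles) gives a fractional frequency error; multiplying
// by 1e6 expresses it in ppm, and dividing by the 0.0001164 ppm step implied by
// ADPLL_MAX = (950.0 / 0.0001164) converts ppm into ADPLL LSBs. The step size is read
// off the constants above and has not been re-checked against the Si549 datasheet.
fn count2adpll_sketch(count_error: i32) -> i32 {
    let gate = (1u64 << 22) as f64; // 2^(COUNTER_WIDTH - DIV_WIDTH)
    let ppm = count_error as f64 * 1e6 / gate;
    (ppm / 0.0001164) as i32
}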
tag_collector::collect_tags(ISR::RefTag); + clear_pending(ISR::RefTag); + helper_pll().expect("failed to run helper DCXO PLL"); + } + + if is_pending(ISR::MainTag) { + tag_collector::collect_tags(ISR::MainTag); + clear_pending(ISR::MainTag); + } + + if tag_collector::phase_diff_ready() { + main_pll().expect("failed to run main DCXO PLL"); + tag_collector::clear_phase_diff_ready(); + } + } + + fn helper_pll() -> Result<(), &'static str> { + let period_err = tag_collector::get_period_error(); + unsafe { + let adpll = (((LPF.b0 * period_err as i64) + + (LPF.b1 * PERIOD_ERR1 as i64) + + (LPF.b2 * PERIOD_ERR2 as i64) + - (LPF.a1 * H_ADPLL1 as i64) + - (LPF.a2 * H_ADPLL2 as i64)) + >> 38) as i32; + set_adpll(i2c::DCXO::Helper, BASE_ADPLL + adpll)?; + H_ADPLL2 = H_ADPLL1; + PERIOD_ERR2 = PERIOD_ERR1; + H_ADPLL1 = adpll; + PERIOD_ERR1 = period_err; + }; + Ok(()) + } + + fn main_pll() -> Result<(), &'static str> { + let phase_err = tag_collector::get_phase_error(); + unsafe { + let adpll = (((LPF.b0 * phase_err as i64) + + (LPF.b1 * PHASE_ERR1 as i64) + + (LPF.b2 * PHASE_ERR2 as i64) + - (LPF.a1 * M_ADPLL1 as i64) + - (LPF.a2 * M_ADPLL2 as i64)) + >> 38) as i32; + set_adpll(i2c::DCXO::Main, BASE_ADPLL + adpll)?; + M_ADPLL2 = M_ADPLL1; + PHASE_ERR2 = PHASE_ERR1; + M_ADPLL1 = adpll; + PHASE_ERR1 = phase_err; + }; + Ok(()) + } + + #[cfg(wrpll_ref_clk = "GT_CDR")] + fn test_skew() -> Result<(), &'static str> { + // wait for PLL to stabilize + clock::spin_us(20_000); + + info!("testing the skew of SYS CLK..."); + if has_timing_error() { + return Err("the skew cannot satisfy setup/hold time constraint of RX synchronizer"); + } + info!("the skew of SYS CLK met the timing constraint"); + Ok(()) + } + + #[cfg(wrpll_ref_clk = "GT_CDR")] + fn has_timing_error() -> bool { + unsafe { + csr::wrpll_skewtester::error_write(1); + } + clock::spin_us(5_000); + unsafe { csr::wrpll_skewtester::error_read() == 1 } + } + + #[cfg(feature = "calibrate_wrpll_skew")] + fn find_edge(target: bool) -> Result { + const STEP: u32 = 8; + const STABLE_THRESHOLD: u32 = 10; + + enum FSM { + Init, + WaitEdge, + GotEdge, + } + + let mut state: FSM = FSM::Init; + let mut offset: u32 = tag_collector::get_tag_offset(); + let mut median_edge: u32 = 0; + let mut stable_counter: u32 = 0; + + for _ in 0..(BEATING_PERIOD as u32 / STEP) as usize { + tag_collector::set_tag_offset(offset); + offset += STEP; + // wait for PLL to stabilize + clock::spin_us(20_000); + + let error = has_timing_error(); + // A median edge deglitcher + match state { + FSM::Init => { + if error != target { + stable_counter += 1; + } else { + stable_counter = 0; + } + + if stable_counter >= STABLE_THRESHOLD { + state = FSM::WaitEdge; + stable_counter = 0; + } + } + FSM::WaitEdge => { + if error == target { + state = FSM::GotEdge; + median_edge = offset; + } + } + FSM::GotEdge => { + if error != target { + median_edge += STEP; + stable_counter = 0; + } else { + stable_counter += 1; + } + + if stable_counter >= STABLE_THRESHOLD { + return Ok(median_edge); + } + } + } + } + return Err("failed to find timing error edge"); + } + + #[cfg(feature = "calibrate_wrpll_skew")] + fn calibrate_skew() -> Result<(), &'static str> { + info!("calibrating skew to meet timing constraint..."); + + // clear calibrated value + tag_collector::set_tag_offset(0); + let rising = find_edge(true)? as i32; + let falling = find_edge(false)? 
as i32; + + let width = BEATING_PERIOD - (falling - rising); + let result = falling + width / 2; + tag_collector::set_tag_offset(result as u32); + + info!( + "calibration successful, error zone: {} -> {}, width: {} ({}deg), middle of working region: {}", + rising, + falling, + width, + 360 * width / BEATING_PERIOD, + result, + ); + + Ok(()) + } + + pub fn select_recovered_clock(rc: bool) { + set_isr(false); + + if rc { + tag_collector::reset(); + reset_plls().expect("failed to reset main and helper PLL"); + + // get within capture range + set_base_adpll().expect("failed to set base adpll"); + + // clear gateware pending flag + clear_pending(ISR::RefTag); + clear_pending(ISR::MainTag); + + // use nFIQ to avoid IRQ being disabled by mutex lock and mess up PLL + set_isr(true); + info!("WRPLL interrupt enabled"); + + #[cfg(feature = "calibrate_wrpll_skew")] + calibrate_skew().expect("failed to set the correct skew"); + + #[cfg(wrpll_ref_clk = "GT_CDR")] + test_skew().expect("skew test failed"); + } + } +} + +#[cfg(has_wrpll_refclk)] +pub mod wrpll_refclk { + use super::*; + + pub struct MmcmSetting { + pub clkout0_reg1: u16, //0x08 + pub clkout0_reg2: u16, //0x09 + pub clkfbout_reg1: u16, //0x14 + pub clkfbout_reg2: u16, //0x15 + pub div_reg: u16, //0x16 + pub lock_reg1: u16, //0x18 + pub lock_reg2: u16, //0x19 + pub lock_reg3: u16, //0x1A + pub power_reg: u16, //0x28 + pub filt_reg1: u16, //0x4E + pub filt_reg2: u16, //0x4F + } + + fn one_clock_cycle() { + unsafe { + csr::wrpll_refclk::mmcm_dclk_write(1); + csr::wrpll_refclk::mmcm_dclk_write(0); + } + } + + fn set_addr(address: u8) { + unsafe { + csr::wrpll_refclk::mmcm_daddr_write(address); + } + } + + fn set_data(value: u16) { + unsafe { + csr::wrpll_refclk::mmcm_din_write(value); + } + } + + fn set_enable(en: bool) { + let val = if en { 1 } else { 0 }; + unsafe { + csr::wrpll_refclk::mmcm_den_write(val); + } + } + + fn set_write_enable(en: bool) { + let val = if en { 1 } else { 0 }; + unsafe { + csr::wrpll_refclk::mmcm_dwen_write(val); + } + } + + fn get_data() -> u16 { + unsafe { csr::wrpll_refclk::mmcm_dout_read() } + } + + fn drp_ready() -> bool { + unsafe { csr::wrpll_refclk::mmcm_dready_read() == 1 } + } + + #[allow(dead_code)] + fn read(address: u8) -> u16 { + set_addr(address); + set_enable(true); + // Set DADDR on the mmcm and assert DEN for one clock cycle + one_clock_cycle(); + + set_enable(false); + while !drp_ready() { + // keep the clock signal until data is ready + one_clock_cycle(); + } + get_data() + } + + fn write(address: u8, value: u16) { + set_addr(address); + set_data(value); + set_write_enable(true); + set_enable(true); + // Set DADDR, DI on the mmcm and assert DWE, DEN for one clock cycle + one_clock_cycle(); + + set_write_enable(false); + set_enable(false); + while !drp_ready() { + // keep the clock signal until write is finished + one_clock_cycle(); + } + } + + fn reset(rst: bool) { + let val = if rst { 1 } else { 0 }; + unsafe { + csr::wrpll_refclk::mmcm_reset_write(val) + } + } + + pub fn setup(settings: MmcmSetting, mmcm_bypass: bool) -> Result<(), &'static str> { + unsafe { + csr::wrpll_refclk::refclk_reset_write(1); + } + + if mmcm_bypass { + info!("Bypassing mmcm"); + unsafe { + csr::wrpll_refclk::mmcm_bypass_write(1); + } + } else { + // Based on "DRP State Machine" from XAPP888 + // hold reset HIGH during mmcm config + reset(true); + write(0x08, settings.clkout0_reg1); + write(0x09, settings.clkout0_reg2); + write(0x14, settings.clkfbout_reg1); + write(0x15, settings.clkfbout_reg2); + write(0x16, 
settings.div_reg); + write(0x18, settings.lock_reg1); + write(0x19, settings.lock_reg2); + write(0x1A, settings.lock_reg3); + write(0x28, settings.power_reg); + write(0x4E, settings.filt_reg1); + write(0x4F, settings.filt_reg2); + reset(false); + + // wait for the mmcm to lock + clock::spin_us(100); + + let locked = unsafe { csr::wrpll_refclk::mmcm_locked_read() == 1 }; + if !locked { + return Err("mmcm failed to generate 125MHz ref clock from SMA CLKIN"); + } + } + + unsafe { + csr::wrpll_refclk::refclk_reset_write(0); + } + + Ok(()) + } +} diff --git a/artiq/firmware/libboard_misoc/io_expander.rs b/artiq/firmware/libboard_misoc/io_expander.rs index b86f4bd80..bce02cf94 100644 --- a/artiq/firmware/libboard_misoc/io_expander.rs +++ b/artiq/firmware/libboard_misoc/io_expander.rs @@ -27,6 +27,26 @@ impl IoExpander { const VIRTUAL_LED_MAPPING0: [(u8, u8, u8); 2] = [(0, 0, 6), (1, 1, 6)]; const VIRTUAL_LED_MAPPING1: [(u8, u8, u8); 2] = [(2, 0, 6), (3, 1, 6)]; + #[cfg(has_si549)] + const IODIR_CLK_SEL: u8 = 0x80; // out + #[cfg(has_si5324)] + const IODIR_CLK_SEL: u8 = 0x00; // in + + #[cfg(has_si549)] + const CLK_SEL_OUT: u8 = 1 << 7; + #[cfg(has_si5324)] + const CLK_SEL_OUT: u8 = 0; + + const IODIR0 : [u8; 2] = [ + 0xFF, + 0xFF & !IODIR_CLK_SEL + ]; + + const OUT_TAR0 : [u8; 2] = [ + 0, + CLK_SEL_OUT + ]; + // Both expanders on SHARED I2C bus let mut io_expander = match index { 0 => IoExpander { @@ -34,9 +54,9 @@ impl IoExpander { port: 11, address: 0x40, virtual_led_mapping: &VIRTUAL_LED_MAPPING0, - iodir: [0xff; 2], + iodir: IODIR0, out_current: [0; 2], - out_target: [0; 2], + out_target: OUT_TAR0, registers: Registers { iodira: 0x00, iodirb: 0x01, @@ -153,10 +173,10 @@ impl IoExpander { } self.update_iodir()?; - self.out_current[0] = 0x00; - self.write(self.registers.gpioa, 0x00)?; - self.out_current[1] = 0x00; - self.write(self.registers.gpiob, 0x00)?; + self.write(self.registers.gpioa, self.out_target[0])?; + self.out_current[0] = self.out_target[0]; + self.write(self.registers.gpiob, self.out_target[1])?; + self.out_current[1] = self.out_target[1]; Ok(()) } diff --git a/artiq/firmware/libboard_misoc/lib.rs b/artiq/firmware/libboard_misoc/lib.rs index 4374a44f9..8f56e3f65 100644 --- a/artiq/firmware/libboard_misoc/lib.rs +++ b/artiq/firmware/libboard_misoc/lib.rs @@ -1,5 +1,4 @@ #![no_std] -#![feature(llvm_asm)] #![feature(asm)] extern crate byteorder; diff --git a/artiq/firmware/libboard_misoc/riscv32/boot.rs b/artiq/firmware/libboard_misoc/riscv32/boot.rs index 0d2254da1..1ef0bd44f 100644 --- a/artiq/firmware/libboard_misoc/riscv32/boot.rs +++ b/artiq/firmware/libboard_misoc/riscv32/boot.rs @@ -2,29 +2,26 @@ use super::{cache, pmp}; use riscv::register::*; pub unsafe fn reset() -> ! { - llvm_asm!(r#" - j _reset_handler - nop - "# : : : : "volatile"); - loop {} + asm!("j _reset_handler", + "nop", + options(nomem, nostack, noreturn) + ); } pub unsafe fn jump(addr: usize) -> ! { cache::flush_cpu_icache(); - llvm_asm!(r#" - jalr x0, 0($0) - nop - "# : : "r"(addr) : : "volatile"); - loop {} + asm!("jalr x0, 0({0})", + "nop", + in(reg) addr, + options(nomem, nostack, noreturn) + ); } pub unsafe fn start_user(addr: usize) -> ! 
{ pmp::enable_user_memory(); mstatus::set_mpp(mstatus::MPP::User); mepc::write(addr); - llvm_asm!( - "mret" - : : : : "volatile" + asm!("mret", + options(nomem, nostack, noreturn) ); - unreachable!() } diff --git a/artiq/firmware/libboard_misoc/riscv32/cache.rs b/artiq/firmware/libboard_misoc/riscv32/cache.rs index 12fc9f3bb..ac0b4ed12 100644 --- a/artiq/firmware/libboard_misoc/riscv32/cache.rs +++ b/artiq/firmware/libboard_misoc/riscv32/cache.rs @@ -7,20 +7,24 @@ use mem; pub fn flush_cpu_icache() { unsafe { - llvm_asm!(r#" - fence.i - nop - nop - nop - nop - nop - "# : : : : "volatile"); + asm!( + "fence.i", + "nop", + "nop", + "nop", + "nop", + "nop", + options(preserves_flags, nostack) + ); } } pub fn flush_cpu_dcache() { unsafe { - llvm_asm!(".word(0x500F)" : : : : "volatile"); + asm!( + ".word(0x500F)", + options(preserves_flags) + ); } } diff --git a/artiq/firmware/libboard_misoc/riscv32/irq.rs b/artiq/firmware/libboard_misoc/riscv32/irq.rs new file mode 100644 index 000000000..3c632e952 --- /dev/null +++ b/artiq/firmware/libboard_misoc/riscv32/irq.rs @@ -0,0 +1,49 @@ +use riscv::register::{mie, mstatus}; + +fn vmim_write(val: usize) { + unsafe { + asm!("csrw {csr}, {rs}", rs = in(reg) val, csr = const 0xBC0); + } +} + +fn vmim_read() -> usize { + let r: usize; + unsafe { + asm!("csrr {rd}, {csr}", rd = out(reg) r, csr = const 0xBC0); + } + r +} + +fn vmip_read() -> usize { + let r: usize; + unsafe { + asm!("csrr {rd}, {csr}", rd = out(reg) r, csr = const 0xFC0); + } + r +} + +pub fn enable_interrupts() { + unsafe { + mstatus::set_mie(); + mie::set_mext(); + } +} + +pub fn disable_interrupts() { + unsafe { + mstatus::clear_mie(); + mie::clear_mext(); + } +} + +pub fn enable(id: u32) { + vmim_write(vmim_read() | (1 << id)); +} + +pub fn disable(id: u32) { + vmim_write(vmim_read() & !(1 << id)); +} + +pub fn is_pending(id: u32) -> bool { + (vmip_read() >> id) & 1 == 1 +} diff --git a/artiq/firmware/libboard_misoc/riscv32/mod.rs b/artiq/firmware/libboard_misoc/riscv32/mod.rs index a8f498adc..2408c8664 100644 --- a/artiq/firmware/libboard_misoc/riscv32/mod.rs +++ b/artiq/firmware/libboard_misoc/riscv32/mod.rs @@ -1,3 +1,4 @@ pub mod cache; pub mod boot; +pub mod irq; pub mod pmp; diff --git a/artiq/firmware/libeh/lib.rs b/artiq/firmware/libeh/lib.rs index 4fdc4b880..dfdbf6034 100644 --- a/artiq/firmware/libeh/lib.rs +++ b/artiq/firmware/libeh/lib.rs @@ -1,4 +1,4 @@ -#![feature(lang_items, panic_unwind, libc, unwind_attributes, int_bits_const)] +#![feature(lang_items, panic_unwind, libc)] #![no_std] extern crate cslice; diff --git a/artiq/firmware/libproto_artiq/drtioaux_proto.rs b/artiq/firmware/libproto_artiq/drtioaux_proto.rs index 94f26dca7..a2e51ca65 100644 --- a/artiq/firmware/libproto_artiq/drtioaux_proto.rs +++ b/artiq/firmware/libproto_artiq/drtioaux_proto.rs @@ -14,8 +14,51 @@ impl From> for Error { } } -pub const DMA_TRACE_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*trace ID*/4 - /*last*/1 -/*length*/2; -pub const ANALYZER_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*last*/1 - /*length*/2; +// maximum size of arbitrary payloads +// used by satellite -> master analyzer, subkernel exceptions +pub const SAT_PAYLOAD_MAX_SIZE: usize = /*max size*/1024 - /*CRC*/4 - /*packet ID*/1 - /*last*/1 - /*length*/2; +// used by DDMA, subkernel program data (need to provide extra ID and destination) +pub const MASTER_PAYLOAD_MAX_SIZE: usize = SAT_PAYLOAD_MAX_SIZE - /*source*/1 - /*destination*/1 - /*ID*/4; + +#[derive(PartialEq, Clone, Copy, Debug)] 
+#[repr(u8)] +pub enum PayloadStatus { + Middle = 0, + First = 1, + Last = 2, + FirstAndLast = 3, +} + +impl From for PayloadStatus { + fn from(value: u8) -> PayloadStatus { + match value { + 0 => PayloadStatus::Middle, + 1 => PayloadStatus::First, + 2 => PayloadStatus::Last, + 3 => PayloadStatus::FirstAndLast, + _ => unreachable!(), + } + } +} + +impl PayloadStatus { + pub fn is_first(self) -> bool { + self == PayloadStatus::First || self == PayloadStatus::FirstAndLast + } + + pub fn is_last(self) -> bool { + self == PayloadStatus::Last || self == PayloadStatus::FirstAndLast + } + + pub fn from_status(first: bool, last: bool) -> PayloadStatus { + match (first, last) { + (true, true) => PayloadStatus::FirstAndLast, + (true, false) => PayloadStatus::First, + (false, true) => PayloadStatus::Last, + (false, false) => PayloadStatus::Middle + } + } +} #[derive(PartialEq, Debug)] pub enum Packet { @@ -61,15 +104,29 @@ pub enum Packet { AnalyzerHeaderRequest { destination: u8 }, AnalyzerHeader { sent_bytes: u32, total_byte_count: u64, overflow_occurred: bool }, AnalyzerDataRequest { destination: u8 }, - AnalyzerData { last: bool, length: u16, data: [u8; ANALYZER_MAX_SIZE]}, + AnalyzerData { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE]}, - DmaAddTraceRequest { destination: u8, id: u32, last: bool, length: u16, trace: [u8; DMA_TRACE_MAX_SIZE] }, - DmaAddTraceReply { succeeded: bool }, - DmaRemoveTraceRequest { destination: u8, id: u32 }, - DmaRemoveTraceReply { succeeded: bool }, - DmaPlaybackRequest { destination: u8, id: u32, timestamp: u64 }, - DmaPlaybackReply { succeeded: bool }, - DmaPlaybackStatus { destination: u8, id: u32, error: u8, channel: u32, timestamp: u64 } + DmaAddTraceRequest { + source: u8, destination: u8, + id: u32, status: PayloadStatus, + length: u16, trace: [u8; MASTER_PAYLOAD_MAX_SIZE] + }, + DmaAddTraceReply { source: u8, destination: u8, id: u32, succeeded: bool }, + DmaRemoveTraceRequest { source: u8, destination: u8, id: u32 }, + DmaRemoveTraceReply { destination: u8, succeeded: bool }, + DmaPlaybackRequest { source: u8, destination: u8, id: u32, timestamp: u64 }, + DmaPlaybackReply { destination: u8, succeeded: bool }, + DmaPlaybackStatus { source: u8, destination: u8, id: u32, error: u8, channel: u32, timestamp: u64 }, + + SubkernelAddDataRequest { destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] }, + SubkernelAddDataReply { succeeded: bool }, + SubkernelLoadRunRequest { source: u8, destination: u8, id: u32, run: bool }, + SubkernelLoadRunReply { destination: u8, succeeded: bool }, + SubkernelFinished { destination: u8, id: u32, with_exception: bool, exception_src: u8 }, + SubkernelExceptionRequest { destination: u8 }, + SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] }, + SubkernelMessage { source: u8, destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] }, + SubkernelMessageAck { destination: u8 }, } impl Packet { @@ -215,7 +272,7 @@ impl Packet { 0xa3 => { let last = reader.read_bool()?; let length = reader.read_u16()?; - let mut data: [u8; ANALYZER_MAX_SIZE] = [0; ANALYZER_MAX_SIZE]; + let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE]; reader.read_exact(&mut data[0..length as usize])?; Packet::AnalyzerData { last: last, @@ -224,40 +281,50 @@ impl Packet { } }, - 0xb0 => { + 0xb0 => { + let source = reader.read_u8()?; let destination = reader.read_u8()?; let id = reader.read_u32()?; - let last = 
reader.read_bool()?; + let status = reader.read_u8()?; let length = reader.read_u16()?; - let mut trace: [u8; DMA_TRACE_MAX_SIZE] = [0; DMA_TRACE_MAX_SIZE]; + let mut trace: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; reader.read_exact(&mut trace[0..length as usize])?; Packet::DmaAddTraceRequest { + source: source, destination: destination, id: id, - last: last, + status: PayloadStatus::from(status), length: length as u16, trace: trace, } }, 0xb1 => Packet::DmaAddTraceReply { + source: reader.read_u8()?, + destination: reader.read_u8()?, + id: reader.read_u32()?, succeeded: reader.read_bool()? }, 0xb2 => Packet::DmaRemoveTraceRequest { + source: reader.read_u8()?, destination: reader.read_u8()?, id: reader.read_u32()? }, 0xb3 => Packet::DmaRemoveTraceReply { + destination: reader.read_u8()?, succeeded: reader.read_bool()? }, 0xb4 => Packet::DmaPlaybackRequest { + source: reader.read_u8()?, destination: reader.read_u8()?, id: reader.read_u32()?, timestamp: reader.read_u64()? }, 0xb5 => Packet::DmaPlaybackReply { + destination: reader.read_u8()?, succeeded: reader.read_bool()? }, 0xb6 => Packet::DmaPlaybackStatus { + source: reader.read_u8()?, destination: reader.read_u8()?, id: reader.read_u32()?, error: reader.read_u8()?, @@ -265,6 +332,75 @@ impl Packet { timestamp: reader.read_u64()? }, + 0xc0 => { + let destination = reader.read_u8()?; + let id = reader.read_u32()?; + let status = reader.read_u8()?; + let length = reader.read_u16()?; + let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + reader.read_exact(&mut data[0..length as usize])?; + Packet::SubkernelAddDataRequest { + destination: destination, + id: id, + status: PayloadStatus::from(status), + length: length as u16, + data: data, + } + }, + 0xc1 => Packet::SubkernelAddDataReply { + succeeded: reader.read_bool()? + }, + 0xc4 => Packet::SubkernelLoadRunRequest { + source: reader.read_u8()?, + destination: reader.read_u8()?, + id: reader.read_u32()?, + run: reader.read_bool()? + }, + 0xc5 => Packet::SubkernelLoadRunReply { + destination: reader.read_u8()?, + succeeded: reader.read_bool()? + }, + 0xc8 => Packet::SubkernelFinished { + destination: reader.read_u8()?, + id: reader.read_u32()?, + with_exception: reader.read_bool()?, + exception_src: reader.read_u8()? + }, + 0xc9 => Packet::SubkernelExceptionRequest { + destination: reader.read_u8()? + }, + 0xca => { + let last = reader.read_bool()?; + let length = reader.read_u16()?; + let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE]; + reader.read_exact(&mut data[0..length as usize])?; + Packet::SubkernelException { + last: last, + length: length, + data: data + } + }, + 0xcb => { + let source = reader.read_u8()?; + let destination = reader.read_u8()?; + let id = reader.read_u32()?; + let status = reader.read_u8()?; + let length = reader.read_u16()?; + let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + reader.read_exact(&mut data[0..length as usize])?; + Packet::SubkernelMessage { + source: source, + destination: destination, + id: id, + status: PayloadStatus::from(status), + length: length as u16, + data: data, + } + }, + 0xcc => Packet::SubkernelMessageAck { + destination: reader.read_u8()? 
+ }, + ty => return Err(Error::UnknownPacket(ty)) }) } @@ -445,48 +581,144 @@ impl Packet { writer.write_all(&data[0..length as usize])?; }, - Packet::DmaAddTraceRequest { destination, id, last, trace, length } => { + Packet::DmaAddTraceRequest { source, destination, id, status, trace, length } => { writer.write_u8(0xb0)?; + writer.write_u8(source)?; writer.write_u8(destination)?; writer.write_u32(id)?; - writer.write_bool(last)?; + writer.write_u8(status as u8)?; // trace may be broken down to fit within drtio aux memory limit // will be reconstructed by satellite writer.write_u16(length)?; writer.write_all(&trace[0..length as usize])?; }, - Packet::DmaAddTraceReply { succeeded } => { + Packet::DmaAddTraceReply { source, destination, id, succeeded } => { writer.write_u8(0xb1)?; + writer.write_u8(source)?; + writer.write_u8(destination)?; + writer.write_u32(id)?; writer.write_bool(succeeded)?; }, - Packet::DmaRemoveTraceRequest { destination, id } => { + Packet::DmaRemoveTraceRequest { source, destination, id } => { writer.write_u8(0xb2)?; + writer.write_u8(source)?; writer.write_u8(destination)?; writer.write_u32(id)?; }, - Packet::DmaRemoveTraceReply { succeeded } => { + Packet::DmaRemoveTraceReply { destination, succeeded } => { writer.write_u8(0xb3)?; + writer.write_u8(destination)?; writer.write_bool(succeeded)?; }, - Packet::DmaPlaybackRequest { destination, id, timestamp } => { + Packet::DmaPlaybackRequest { source, destination, id, timestamp } => { writer.write_u8(0xb4)?; + writer.write_u8(source)?; writer.write_u8(destination)?; writer.write_u32(id)?; writer.write_u64(timestamp)?; }, - Packet::DmaPlaybackReply { succeeded } => { + Packet::DmaPlaybackReply { destination, succeeded } => { writer.write_u8(0xb5)?; + writer.write_u8(destination)?; writer.write_bool(succeeded)?; }, - Packet::DmaPlaybackStatus { destination, id, error, channel, timestamp } => { + Packet::DmaPlaybackStatus { source, destination, id, error, channel, timestamp } => { writer.write_u8(0xb6)?; + writer.write_u8(source)?; writer.write_u8(destination)?; writer.write_u32(id)?; writer.write_u8(error)?; writer.write_u32(channel)?; writer.write_u64(timestamp)?; - } + }, + + Packet::SubkernelAddDataRequest { destination, id, status, data, length } => { + writer.write_u8(0xc0)?; + writer.write_u8(destination)?; + writer.write_u32(id)?; + writer.write_u8(status as u8)?; + writer.write_u16(length)?; + writer.write_all(&data[0..length as usize])?; + }, + Packet::SubkernelAddDataReply { succeeded } => { + writer.write_u8(0xc1)?; + writer.write_bool(succeeded)?; + }, + Packet::SubkernelLoadRunRequest { source, destination, id, run } => { + writer.write_u8(0xc4)?; + writer.write_u8(source)?; + writer.write_u8(destination)?; + writer.write_u32(id)?; + writer.write_bool(run)?; + }, + Packet::SubkernelLoadRunReply { destination, succeeded } => { + writer.write_u8(0xc5)?; + writer.write_u8(destination)?; + writer.write_bool(succeeded)?; + }, + Packet::SubkernelFinished { destination, id, with_exception, exception_src } => { + writer.write_u8(0xc8)?; + writer.write_u8(destination)?; + writer.write_u32(id)?; + writer.write_bool(with_exception)?; + writer.write_u8(exception_src)?; + }, + Packet::SubkernelExceptionRequest { destination } => { + writer.write_u8(0xc9)?; + writer.write_u8(destination)?; + }, + Packet::SubkernelException { last, length, data } => { + writer.write_u8(0xca)?; + writer.write_bool(last)?; + writer.write_u16(length)?; + writer.write_all(&data[0..length as usize])?; + }, + Packet::SubkernelMessage { source, 
destination, id, status, data, length } => { + writer.write_u8(0xcb)?; + writer.write_u8(source)?; + writer.write_u8(destination)?; + writer.write_u32(id)?; + writer.write_u8(status as u8)?; + writer.write_u16(length)?; + writer.write_all(&data[0..length as usize])?; + }, + Packet::SubkernelMessageAck { destination } => { + writer.write_u8(0xcc)?; + writer.write_u8(destination)?; + }, } Ok(()) } + + pub fn routable_destination(&self) -> Option { + // only for packets that could be re-routed, not only forwarded + match self { + Packet::DmaAddTraceRequest { destination, .. } => Some(*destination), + Packet::DmaAddTraceReply { destination, .. } => Some(*destination), + Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination), + Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination), + Packet::DmaPlaybackRequest { destination, .. } => Some(*destination), + Packet::DmaPlaybackReply { destination, .. } => Some(*destination), + Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination), + Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination), + Packet::SubkernelMessage { destination, .. } => Some(*destination), + Packet::SubkernelMessageAck { destination, .. } => Some(*destination), + Packet::DmaPlaybackStatus { destination, .. } => Some(*destination), + Packet::SubkernelFinished { destination, .. } => Some(*destination), + _ => None + } + } + + pub fn expects_response(&self) -> bool { + // returns true if the routable packet should elicit a response + // e.g. reply, ACK packets end a conversation, + // and firmware should not wait for response + match self { + Packet::DmaAddTraceReply { .. } | Packet::DmaRemoveTraceReply { .. } | + Packet::DmaPlaybackReply { .. } | Packet::SubkernelLoadRunReply { .. } | + Packet::SubkernelMessageAck { .. } | Packet::DmaPlaybackStatus { .. } | + Packet::SubkernelFinished { .. } => false, + _ => true + } + } } diff --git a/artiq/firmware/libproto_artiq/kernel_proto.rs b/artiq/firmware/libproto_artiq/kernel_proto.rs index 2ecb39c3b..1a2f057b7 100644 --- a/artiq/firmware/libproto_artiq/kernel_proto.rs +++ b/artiq/firmware/libproto_artiq/kernel_proto.rs @@ -10,6 +10,15 @@ pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff; // section in ksupport.elf. 
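// Illustrative sketch (editor's addition): one hypothetical way the two helpers added at
// the end of drtioaux_proto.rs above could be used by a routing node. `forward` and
// `await_reply` are stand-ins for firmware facilities, not actual APIs.
fn forward(_destination: u8, _packet: &Packet) { /* hypothetical transport hop */ }
fn await_reply(_destination: u8) { /* hypothetical wait for the routed reply */ }

fn route_sketch(packet: &Packet, local_destination: u8) {
    if let Some(destination) = packet.routable_destination() {
        if destination != local_destination {
            forward(destination, packet);
            // replies, ACKs and status packets end a conversation, so only wait
            // when the packet still expects a response
            if packet.expects_response() {
                await_reply(destination);
            }
        }
    }
}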
pub const KSUPPORT_HEADER_SIZE: usize = 0x74; +#[derive(Debug)] +pub enum SubkernelStatus { + NoError, + Timeout, + IncorrectState, + CommLost, + OtherError +} + #[derive(Debug)] pub enum Message<'a> { LoadRequest(&'a [u8]), @@ -94,6 +103,14 @@ pub enum Message<'a> { SpiReadReply { succeeded: bool, data: u32 }, SpiBasicReply { succeeded: bool }, + SubkernelLoadRunRequest { id: u32, destination: u8, run: bool }, + SubkernelLoadRunReply { succeeded: bool }, + SubkernelAwaitFinishRequest { id: u32, timeout: i64 }, + SubkernelAwaitFinishReply { status: SubkernelStatus }, + SubkernelMsgSend { id: u32, destination: Option, count: u8, tag: &'a [u8], data: *const *const () }, + SubkernelMsgRecvRequest { id: i32, timeout: i64, tags: &'a [u8] }, + SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 }, + Log(fmt::Arguments<'a>), LogSlice(&'a str) } diff --git a/artiq/firmware/libproto_artiq/rpc_proto.rs b/artiq/firmware/libproto_artiq/rpc_proto.rs index cc567d7fb..80284c39e 100644 --- a/artiq/firmware/libproto_artiq/rpc_proto.rs +++ b/artiq/firmware/libproto_artiq/rpc_proto.rs @@ -190,9 +190,9 @@ unsafe fn recv_value(reader: &mut R, tag: Tag, data: &mut *mut (), } } -pub fn recv_return(reader: &mut R, tag_bytes: &[u8], data: *mut (), +pub fn recv_return<'a, R, E>(reader: &mut R, tag_bytes: &'a [u8], data: *mut (), alloc: &dyn Fn(usize) -> Result<*mut (), E>) - -> Result<(), E> + -> Result<&'a [u8], E> where R: Read + ?Sized, E: From> { @@ -204,14 +204,16 @@ pub fn recv_return(reader: &mut R, tag_bytes: &[u8], data: *mut (), let mut data = data; unsafe { recv_value(reader, tag, &mut data, alloc)? }; - Ok(()) + Ok(it.data) } -unsafe fn send_elements(writer: &mut W, elt_tag: Tag, length: usize, data: *const ()) +unsafe fn send_elements(writer: &mut W, elt_tag: Tag, length: usize, data: *const (), write_tags: bool) -> Result<(), Error> where W: Write + ?Sized { - writer.write_u8(elt_tag.as_u8())?; + if write_tags { + writer.write_u8(elt_tag.as_u8())?; + } match elt_tag { // we cannot use NativeEndian::from_slice_i32 as the data is not mutable, // and that is not needed as the data is already in native endian @@ -230,14 +232,14 @@ unsafe fn send_elements(writer: &mut W, elt_tag: Tag, length: usize, data: *c _ => { let mut data = data; for _ in 0..length { - send_value(writer, elt_tag, &mut data)?; + send_value(writer, elt_tag, &mut data, write_tags)?; } } } Ok(()) } -unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) +unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const (), write_tags: bool) -> Result<(), Error> where W: Write + ?Sized { @@ -248,8 +250,9 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) $map }) } - - writer.write_u8(tag.as_u8())?; + if write_tags { + writer.write_u8(tag.as_u8())?; + } match tag { Tag::None => Ok(()), Tag::Bool => @@ -269,12 +272,14 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) writer.write_bytes((*ptr).as_ref())), Tag::Tuple(it, arity) => { let mut it = it.clone(); - writer.write_u8(arity)?; + if write_tags { + writer.write_u8(arity)?; + } let mut max_alignment = 0; for _ in 0..arity { let tag = it.next().expect("truncated tag"); max_alignment = core::cmp::max(max_alignment, tag.alignment()); - send_value(writer, tag, data)? + send_value(writer, tag, data, write_tags)? 
} *data = round_up_const(*data, max_alignment); Ok(()) @@ -286,11 +291,13 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) let length = (**ptr).length as usize; writer.write_u32((**ptr).length)?; let tag = it.clone().next().expect("truncated tag"); - send_elements(writer, tag, length, (**ptr).elements) + send_elements(writer, tag, length, (**ptr).elements, write_tags) }) } Tag::Array(it, num_dims) => { - writer.write_u8(num_dims)?; + if write_tags { + writer.write_u8(num_dims)?; + } consume_value!(*const(), |buffer| { let elt_tag = it.clone().next().expect("truncated tag"); @@ -302,14 +309,14 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) }) } let length = total_len as usize; - send_elements(writer, elt_tag, length, *buffer) + send_elements(writer, elt_tag, length, *buffer, write_tags) }) } Tag::Range(it) => { let tag = it.clone().next().expect("truncated tag"); - send_value(writer, tag, data)?; - send_value(writer, tag, data)?; - send_value(writer, tag, data)?; + send_value(writer, tag, data, write_tags)?; + send_value(writer, tag, data, write_tags)?; + send_value(writer, tag, data, write_tags)?; Ok(()) } Tag::Keyword(it) => { @@ -319,7 +326,7 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) writer.write_string(str::from_utf8((*ptr).name.as_ref()).unwrap())?; let tag = it.clone().next().expect("truncated tag"); let mut data = ptr.offset(1) as *const (); - send_value(writer, tag, &mut data) + send_value(writer, tag, &mut data, write_tags) }) // Tag::Keyword never appears in composite types, so we don't have // to accurately advance data. @@ -333,7 +340,7 @@ unsafe fn send_value(writer: &mut W, tag: Tag, data: &mut *const ()) } } -pub fn send_args(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const ()) +pub fn send_args(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const (), write_tags: bool) -> Result<(), Error> where W: Write + ?Sized { @@ -350,7 +357,7 @@ pub fn send_args(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const for index in 0.. { if let Some(arg_tag) = args_it.next() { let mut data = unsafe { *data.offset(index) }; - unsafe { send_value(writer, arg_tag, &mut data)? }; + unsafe { send_value(writer, arg_tag, &mut data, write_tags)? }; } else { break } @@ -482,7 +489,7 @@ mod tag { #[derive(Debug, Clone, Copy)] pub struct TagIterator<'a> { - data: &'a [u8] + pub data: &'a [u8] } impl<'a> TagIterator<'a> { diff --git a/artiq/firmware/libproto_artiq/session_proto.rs b/artiq/firmware/libproto_artiq/session_proto.rs index d4277c26c..d5266ec3d 100644 --- a/artiq/firmware/libproto_artiq/session_proto.rs +++ b/artiq/firmware/libproto_artiq/session_proto.rs @@ -84,6 +84,8 @@ pub enum Request { column: u32, function: u32, }, + + UploadSubkernel { id: u32, destination: u8, kernel: Vec }, } #[derive(Debug)] @@ -137,6 +139,11 @@ impl Request { column: reader.read_u32()?, function: reader.read_u32()? }, + 9 => Request::UploadSubkernel { + id: reader.read_u32()?, + destination: reader.read_u8()?, + kernel: reader.read_bytes()? 
+ }, ty => return Err(Error::UnknownPacket(ty)) }) diff --git a/artiq/firmware/libunwind/src/lib.rs b/artiq/firmware/libunwind/src/lib.rs index a087a59ec..5567320c3 100644 --- a/artiq/firmware/libunwind/src/lib.rs +++ b/artiq/firmware/libunwind/src/lib.rs @@ -3,8 +3,8 @@ #![feature(link_cfg)] #![feature(nll)] #![feature(staged_api)] -#![feature(unwind_attributes)] #![feature(static_nobundle)] +#![feature(c_unwind)] #![cfg_attr(not(target_env = "msvc"), feature(libc))] mod libunwind; diff --git a/artiq/firmware/libunwind/src/libunwind.rs b/artiq/firmware/libunwind/src/libunwind.rs index 646bf912e..35a5ed51f 100644 --- a/artiq/firmware/libunwind/src/libunwind.rs +++ b/artiq/firmware/libunwind/src/libunwind.rs @@ -81,9 +81,14 @@ pub type _Unwind_Exception_Cleanup_Fn = all(feature = "llvm-libunwind", any(target_os = "fuchsia", target_os = "linux")), link(name = "unwind", kind = "static") )] -extern "C" { - #[unwind(allowed)] +extern "C-unwind" { pub fn _Unwind_Resume(exception: *mut _Unwind_Exception) -> !; +} +#[cfg_attr( + all(feature = "llvm-libunwind", any(target_os = "fuchsia", target_os = "linux")), + link(name = "unwind", kind = "static") +)] +extern "C" { pub fn _Unwind_DeleteException(exception: *mut _Unwind_Exception); pub fn _Unwind_GetLanguageSpecificData(ctx: *mut _Unwind_Context) -> *mut c_void; pub fn _Unwind_GetRegionStart(ctx: *mut _Unwind_Context) -> _Unwind_Ptr; @@ -230,9 +235,13 @@ if #[cfg(not(all(target_os = "ios", target_arch = "arm")))] { #[cfg_attr(all(feature = "llvm-libunwind", any(target_os = "fuchsia", target_os = "linux")), link(name = "unwind", kind = "static"))] - extern "C" { - #[unwind(allowed)] + extern "C-unwind" { pub fn _Unwind_RaiseException(exception: *mut _Unwind_Exception) -> _Unwind_Reason_Code; + } + #[cfg_attr(all(feature = "llvm-libunwind", + any(target_os = "fuchsia", target_os = "linux")), + link(name = "unwind", kind = "static"))] + extern "C" { pub fn _Unwind_Backtrace(trace: _Unwind_Trace_Fn, trace_argument: *mut c_void) -> _Unwind_Reason_Code; @@ -242,8 +251,7 @@ if #[cfg(not(all(target_os = "ios", target_arch = "arm")))] { #[cfg_attr(all(feature = "llvm-libunwind", any(target_os = "fuchsia", target_os = "linux")), link(name = "unwind", kind = "static"))] - extern "C" { - #[unwind(allowed)] + extern "C-unwind" { pub fn _Unwind_SjLj_RaiseException(e: *mut _Unwind_Exception) -> _Unwind_Reason_Code; } diff --git a/artiq/firmware/riscv32g-unknown-none-elf.json b/artiq/firmware/riscv32g-unknown-none-elf.json index 2a3fb8bfb..5e980f11f 100644 --- a/artiq/firmware/riscv32g-unknown-none-elf.json +++ b/artiq/firmware/riscv32g-unknown-none-elf.json @@ -21,20 +21,6 @@ }, "relro-level": "full", "target-family": "unix", - "target-pointer-width": "32", - "unsupported-abis": [ - "cdecl", - "stdcall", - "fastcall", - "vectorcall", - "thiscall", - "aapcs", - "win64", - "sysv64", - "ptx-kernel", - "msp430-interrupt", - "x86-interrupt", - "amdgpu-kernel" - ] + "target-pointer-width": "32" } \ No newline at end of file diff --git a/artiq/firmware/riscv32ima-unknown-none-elf.json b/artiq/firmware/riscv32ima-unknown-none-elf.json index e41408842..064347edb 100644 --- a/artiq/firmware/riscv32ima-unknown-none-elf.json +++ b/artiq/firmware/riscv32ima-unknown-none-elf.json @@ -13,19 +13,5 @@ "max-atomic-width": 32, "panic-strategy": "unwind", "relocation-model": "static", - "target-pointer-width": "32", - "unsupported-abis": [ - "cdecl", - "stdcall", - "fastcall", - "vectorcall", - "thiscall", - "aapcs", - "win64", - "sysv64", - "ptx-kernel", - "msp430-interrupt", - 
"x86-interrupt", - "amdgpu-kernel" - ] + "target-pointer-width": "32" } diff --git a/artiq/firmware/runtime/Cargo.toml b/artiq/firmware/runtime/Cargo.toml index d0223a47c..0d132d5a9 100644 --- a/artiq/firmware/runtime/Cargo.toml +++ b/artiq/firmware/runtime/Cargo.toml @@ -37,6 +37,10 @@ features = ["alloc", "medium-ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp" [dependencies.fringe] git = "https://git.m-labs.hk/M-Labs/libfringe.git" -rev = "3ecbe5" +rev = "53a964" default-features = false features = ["alloc"] + +[dependencies.tar-no-std] +git = "https://git.m-labs.hk/M-Labs/tar-no-std" +rev = "2ab6dc5" diff --git a/artiq/firmware/runtime/Makefile b/artiq/firmware/runtime/Makefile index f7914e85d..17b587f88 100644 --- a/artiq/firmware/runtime/Makefile +++ b/artiq/firmware/runtime/Makefile @@ -19,7 +19,7 @@ $(RUSTOUT)/libruntime.a: --target $(RUNTIME_DIRECTORY)/../$(CARGO_TRIPLE).json runtime.elf: $(RUSTOUT)/libruntime.a ksupport_data.o - $(link) -T $(RUNTIME_DIRECTORY)/runtime.ld \ + $(link) -T $(RUNTIME_DIRECTORY)/../firmware.ld \ -lunwind-vexriscv-bare -m elf32lriscv ksupport_data.o: ../ksupport/ksupport.elf diff --git a/artiq/firmware/runtime/analyzer.rs b/artiq/firmware/runtime/analyzer.rs index fa62a535c..355da1977 100644 --- a/artiq/firmware/runtime/analyzer.rs +++ b/artiq/firmware/runtime/analyzer.rs @@ -52,21 +52,18 @@ pub mod remote_analyzer { pub data: Vec } - pub fn get_data(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, + pub fn get_data(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc> - ) -> Result { + ) -> Result { // gets data from satellites and returns consolidated data let mut remote_data: Vec = Vec::new(); let mut remote_overflow = false; let mut remote_sent_bytes = 0; let mut remote_total_bytes = 0; - let data_vec = match drtio::analyzer_query( - io, aux_mutex, routing_table, up_destinations - ) { - Ok(data_vec) => data_vec, - Err(e) => return Err(e) - }; + let data_vec = drtio::analyzer_query( + io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, up_destinations + )?; for data in data_vec { remote_total_bytes += data.total_byte_count; remote_sent_bytes += data.sent_bytes; @@ -85,7 +82,8 @@ pub mod remote_analyzer { -fn worker(stream: &mut TcpStream, _io: &Io, _aux_mutex: &Mutex, +fn worker(stream: &mut TcpStream, _io: &Io, _aux_mutex: &Mutex, + _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, _routing_table: &drtio_routing::RoutingTable, _up_destinations: &Urc> ) -> Result<(), IoError> { @@ -99,7 +97,7 @@ fn worker(stream: &mut TcpStream, _io: &Io, _aux_mutex: &Mutex, #[cfg(has_drtio)] let remote = remote_analyzer::get_data( - _io, _aux_mutex, _routing_table, _up_destinations); + _io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, _up_destinations); #[cfg(has_drtio)] let (header, remote_data) = match remote { Ok(remote) => (Header { @@ -146,7 +144,7 @@ fn worker(stream: &mut TcpStream, _io: &Io, _aux_mutex: &Mutex, Ok(()) } -pub fn thread(io: Io, aux_mutex: &Mutex, +pub fn thread(io: Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &Urc>, up_destinations: &Urc>) { let listener = TcpListener::new(&io, 65535); @@ -161,7 +159,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex, disarm(); let routing_table = routing_table.borrow(); - match worker(&mut stream, &io, aux_mutex, &routing_table, up_destinations) { + match worker(&mut stream, &io, aux_mutex, ddma_mutex, subkernel_mutex, &routing_table, 
up_destinations) { Ok(()) => (), Err(err) => error!("analyzer aborted: {}", err) } diff --git a/artiq/firmware/runtime/kern_hwreq.rs b/artiq/firmware/runtime/kern_hwreq.rs index 7fd0b379c..9dea387d5 100644 --- a/artiq/firmware/runtime/kern_hwreq.rs +++ b/artiq/firmware/runtime/kern_hwreq.rs @@ -11,13 +11,15 @@ use board_artiq::spi as local_spi; #[cfg(has_drtio)] mod remote_i2c { use drtioaux; + use drtio_routing; use rtio_mgt::drtio; use sched::{Io, Mutex}; - pub fn start(io: &Io, aux_mutex: &Mutex, + pub fn start(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8 ) -> Result<(), &'static str> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cStartRequest { destination: destination, busno: busno @@ -32,15 +34,16 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } - pub fn restart(io: &Io, aux_mutex: &Mutex, + pub fn restart(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8 ) -> Result<(), &'static str> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cRestartRequest { destination: destination, busno: busno @@ -55,15 +58,16 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } - pub fn stop(io: &Io, aux_mutex: &Mutex, + pub fn stop(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8 ) -> Result<(), &'static str> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cStopRequest { destination: destination, busno: busno @@ -78,15 +82,16 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } - pub fn write(io: &Io, aux_mutex: &Mutex, + pub fn write(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8, data: u8 ) -> Result { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cWriteRequest { destination: destination, busno: busno, @@ -102,15 +107,16 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } - pub fn read(io: &Io, aux_mutex: &Mutex, + pub fn read(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8, ack: bool ) -> Result { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cReadRequest { destination: destination, busno: busno, @@ -126,15 +132,16 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } - pub fn switch_select(io: &Io, aux_mutex: 
&Mutex, + pub fn switch_select(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8, address: u8, mask: u8 ) -> Result<(), &'static str> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::I2cSwitchSelectRequest { destination: destination, busno: busno, @@ -151,7 +158,7 @@ mod remote_i2c { } Err(e) => { error!("aux packet error ({})", e); - Err(e) + Err("aux packet error") } } } @@ -160,13 +167,15 @@ mod remote_i2c { #[cfg(has_drtio)] mod remote_spi { use drtioaux; + use drtio_routing; use rtio_mgt::drtio; use sched::{Io, Mutex}; - pub fn set_config(io: &Io, aux_mutex: &Mutex, + pub fn set_config(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8, flags: u8, length: u8, div: u8, cs: u8 ) -> Result<(), ()> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::SpiSetConfigRequest { + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::SpiSetConfigRequest { destination: destination, busno: busno, flags: flags, @@ -189,10 +198,11 @@ mod remote_spi { } } - pub fn write(io: &Io, aux_mutex: &Mutex, + pub fn write(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8, data: u32 ) -> Result<(), ()> { - let reply = drtio::aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::SpiWriteRequest { + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::SpiWriteRequest { destination: destination, busno: busno, data: data @@ -212,9 +222,10 @@ mod remote_spi { } } - pub fn read(io: &Io, aux_mutex: &Mutex, linkno: u8, destination: u8, busno: u8 + pub fn read(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, busno: u8 ) -> Result { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::SpiReadRequest { destination: destination, busno: busno @@ -238,7 +249,7 @@ mod remote_spi { #[cfg(has_drtio)] macro_rules! dispatch { - ($io:ident, $aux_mutex:ident, $mod_local:ident, $mod_remote:ident, $routing_table:ident, $busno:expr, $func:ident $(, $param:expr)*) => {{ + ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $mod_local:ident, $mod_remote:ident, $routing_table:ident, $busno:expr, $func:ident $(, $param:expr)*) => {{ let destination = ($busno >> 16) as u8; let busno = $busno as u8; let hop = $routing_table.0[destination as usize][0]; @@ -246,27 +257,27 @@ macro_rules! dispatch { $mod_local::$func(busno, $($param, )*) } else { let linkno = hop - 1; - $mod_remote::$func($io, $aux_mutex, linkno, destination, busno, $($param, )*) + $mod_remote::$func($io, $aux_mutex, $ddma_mutex, $subkernel_mutex, $routing_table, linkno, destination, busno, $($param, )*) } }} } #[cfg(not(has_drtio))] macro_rules! 
dispatch { - ($io:ident, $aux_mutex:ident, $mod_local:ident, $mod_remote:ident, $routing_table:ident, $busno:expr, $func:ident $(, $param:expr)*) => {{ + ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $mod_local:ident, $mod_remote:ident, $routing_table:ident, $busno:expr, $func:ident $(, $param:expr)*) => {{ let busno = $busno as u8; $mod_local::$func(busno, $($param, )*) }} } -pub fn process_kern_hwreq(io: &Io, aux_mutex: &Mutex, - _routing_table: &drtio_routing::RoutingTable, +pub fn process_kern_hwreq(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, _up_destinations: &Urc>, request: &kern::Message) -> Result> { match request { &kern::RtioInitRequest => { info!("resetting RTIO"); - rtio_mgt::reset(io, aux_mutex); + rtio_mgt::reset(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table); kern_acknowledge() } @@ -282,47 +293,47 @@ pub fn process_kern_hwreq(io: &Io, aux_mutex: &Mutex, } &kern::I2cStartRequest { busno } => { - let succeeded = dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, start).is_ok(); + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, start).is_ok(); kern_send(io, &kern::I2cBasicReply { succeeded: succeeded }) } &kern::I2cRestartRequest { busno } => { - let succeeded = dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, restart).is_ok(); + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, restart).is_ok(); kern_send(io, &kern::I2cBasicReply { succeeded: succeeded }) } &kern::I2cStopRequest { busno } => { - let succeeded = dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, stop).is_ok(); + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, stop).is_ok(); kern_send(io, &kern::I2cBasicReply { succeeded: succeeded }) } &kern::I2cWriteRequest { busno, data } => { - match dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, write, data) { + match dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, write, data) { Ok(ack) => kern_send(io, &kern::I2cWriteReply { succeeded: true, ack: ack }), Err(_) => kern_send(io, &kern::I2cWriteReply { succeeded: false, ack: false }) } } &kern::I2cReadRequest { busno, ack } => { - match dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, read, ack) { + match dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, read, ack) { Ok(data) => kern_send(io, &kern::I2cReadReply { succeeded: true, data: data }), Err(_) => kern_send(io, &kern::I2cReadReply { succeeded: false, data: 0xff }) } } &kern::I2cSwitchSelectRequest { busno, address, mask } => { - let succeeded = dispatch!(io, aux_mutex, local_i2c, remote_i2c, _routing_table, busno, + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_i2c, remote_i2c, routing_table, busno, switch_select, address, mask).is_ok(); kern_send(io, &kern::I2cBasicReply { succeeded: succeeded }) } &kern::SpiSetConfigRequest { busno, flags, length, div, cs } => { - let succeeded = dispatch!(io, aux_mutex, local_spi, remote_spi, _routing_table, busno, + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_spi, remote_spi, routing_table, busno, set_config, flags, length, div, cs).is_ok(); kern_send(io, 
&kern::SpiBasicReply { succeeded: succeeded }) }, &kern::SpiWriteRequest { busno, data } => { - let succeeded = dispatch!(io, aux_mutex, local_spi, remote_spi, _routing_table, busno, + let succeeded = dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_spi, remote_spi, routing_table, busno, write, data).is_ok(); kern_send(io, &kern::SpiBasicReply { succeeded: succeeded }) } &kern::SpiReadRequest { busno } => { - match dispatch!(io, aux_mutex, local_spi, remote_spi, _routing_table, busno, read) { + match dispatch!(io, aux_mutex, ddma_mutex, subkernel_mutex, local_spi, remote_spi, routing_table, busno, read) { Ok(data) => kern_send(io, &kern::SpiReadReply { succeeded: true, data: data }), Err(_) => kern_send(io, &kern::SpiReadReply { succeeded: false, data: 0 }) } diff --git a/artiq/firmware/runtime/kernel.rs b/artiq/firmware/runtime/kernel.rs index 42c1f2f05..dda190cd9 100644 --- a/artiq/firmware/runtime/kernel.rs +++ b/artiq/firmware/runtime/kernel.rs @@ -87,3 +87,353 @@ unsafe fn load_image(image: &[u8]) -> Result<(), &'static str> { pub fn validate(ptr: usize) -> bool { ptr >= KERNELCPU_EXEC_ADDRESS && ptr <= KERNELCPU_LAST_ADDRESS } + + +#[cfg(has_drtio)] +pub mod subkernel { + use alloc::{vec::Vec, collections::btree_map::BTreeMap}; + use board_artiq::drtio_routing::RoutingTable; + use board_misoc::clock; + use proto_artiq::{drtioaux_proto::{PayloadStatus, MASTER_PAYLOAD_MAX_SIZE}, rpc_proto as rpc}; + use io::Cursor; + use rtio_mgt::drtio; + use sched::{Io, Mutex, Error as SchedError}; + + #[derive(Debug, PartialEq, Clone, Copy)] + pub enum FinishStatus { + Ok, + CommLost, + Exception(u8) // exception source + } + + #[derive(Debug, PartialEq, Clone, Copy)] + pub enum SubkernelState { + NotLoaded, + Uploaded, + Running, + Finished { status: FinishStatus }, + } + + #[derive(Fail, Debug)] + pub enum Error { + #[fail(display = "Timed out waiting for subkernel")] + Timeout, + #[fail(display = "Subkernel is in incorrect state for the given operation")] + IncorrectState, + #[fail(display = "DRTIO error: {}", _0)] + DrtioError(#[cause] drtio::Error), + #[fail(display = "scheduler error: {}", _0)] + SchedError(#[cause] SchedError), + #[fail(display = "rpc io error")] + RpcIoError, + #[fail(display = "subkernel finished prematurely")] + SubkernelFinished, + } + + impl From for Error { + fn from(value: drtio::Error) -> Error { + match value { + drtio::Error::SchedError(x) => Error::SchedError(x), + x => Error::DrtioError(x), + } + } + } + + impl From for Error { + fn from(value: SchedError) -> Error { + Error::SchedError(value) + } + } + + impl From> for Error { + fn from(_value: io::Error) -> Error { + Error::RpcIoError + } + } + + pub struct SubkernelFinished { + pub id: u32, + pub comm_lost: bool, + pub exception: Option> + } + + struct Subkernel { + pub destination: u8, + pub data: Vec, + pub state: SubkernelState + } + + impl Subkernel { + pub fn new(destination: u8, data: Vec) -> Self { + Subkernel { + destination: destination, + data: data, + state: SubkernelState::NotLoaded + } + } + } + + static mut SUBKERNELS: BTreeMap = BTreeMap::new(); + + pub fn add_subkernel(io: &Io, subkernel_mutex: &Mutex, id: u32, destination: u8, kernel: Vec) -> Result<(), Error> { + let _lock = subkernel_mutex.lock(io)?; + unsafe { SUBKERNELS.insert(id, Subkernel::new(destination, kernel)); } + Ok(()) + } + + pub fn upload(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32) -> Result<(), Error> { + let _lock = subkernel_mutex.lock(io)?; + 
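The subkernel module added above keeps one entry per subkernel in a mutex-guarded map and moves it through a small state machine: NotLoaded after registration, Uploaded once the binary reaches its satellite, Running after a load-and-run request, and Finished when SubkernelRunDone arrives or the destination drops. A rough standalone illustration of those transitions, without the scheduler, mutexes or DRTIO traffic:

use std::collections::BTreeMap;

#[derive(Debug)]
enum SubkernelState {
    NotLoaded,
    Uploaded,
    Running,
    Finished { comm_lost: bool },
}

struct Subkernel {
    destination: u8,
    state: SubkernelState,
}

fn main() {
    let mut subkernels: BTreeMap<u32, Subkernel> = BTreeMap::new();
    subkernels.insert(1, Subkernel { destination: 3, state: SubkernelState::NotLoaded });

    // upload(): NotLoaded -> Uploaded; load(run = true): Uploaded -> Running;
    // SubkernelRunDone (or a lost destination): Running -> Finished.
    let sk = subkernels.get_mut(&1).unwrap();
    sk.state = SubkernelState::Uploaded;
    sk.state = SubkernelState::Running;
    sk.state = SubkernelState::Finished { comm_lost: false };
    println!("subkernel 1 on destination {}: {:?}", sk.destination, sk.state);
}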
let subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() }; + drtio::subkernel_upload(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, + subkernel.destination, &subkernel.data)?; + subkernel.state = SubkernelState::Uploaded; + Ok(()) + } + + pub fn load(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &RoutingTable, + id: u32, run: bool) -> Result<(), Error> { + let _lock = subkernel_mutex.lock(io)?; + let subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() }; + if subkernel.state != SubkernelState::Uploaded { + error!("for id: {} expected Uploaded, got: {:?}", id, subkernel.state); + return Err(Error::IncorrectState); + } + drtio::subkernel_load(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, subkernel.destination, run)?; + if run { + subkernel.state = SubkernelState::Running; + } + Ok(()) + } + + pub fn clear_subkernels(io: &Io, subkernel_mutex: &Mutex) -> Result<(), Error> { + let _lock = subkernel_mutex.lock(io)?; + unsafe { + SUBKERNELS = BTreeMap::new(); + MESSAGE_QUEUE = Vec::new(); + CURRENT_MESSAGES = BTreeMap::new(); + } + Ok(()) + } + + pub fn subkernel_finished(io: &Io, subkernel_mutex: &Mutex, id: u32, with_exception: bool, exception_src: u8) { + // called upon receiving DRTIO SubkernelRunDone + let _lock = subkernel_mutex.lock(io).unwrap(); + let subkernel = unsafe { SUBKERNELS.get_mut(&id) }; + // may be None if session ends and is cleared + if let Some(subkernel) = subkernel { + // ignore other messages, could be a late finish reported + if subkernel.state == SubkernelState::Running { + subkernel.state = SubkernelState::Finished { + status: match with_exception { + true => FinishStatus::Exception(exception_src), + false => FinishStatus::Ok, + } + } + } + } + } + + pub fn destination_changed(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, destination: u8, up: bool) { + let _lock = subkernel_mutex.lock(io).unwrap(); + let subkernels_iter = unsafe { SUBKERNELS.iter_mut() }; + for (id, subkernel) in subkernels_iter { + if subkernel.destination == destination { + if up { + match drtio::subkernel_upload(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, *id, destination, &subkernel.data) + { + Ok(_) => subkernel.state = SubkernelState::Uploaded, + Err(e) => error!("Error adding subkernel on destination {}: {}", destination, e) + } + } else { + subkernel.state = match subkernel.state { + SubkernelState::Running => SubkernelState::Finished { status: FinishStatus::CommLost }, + _ => SubkernelState::NotLoaded, + } + } + } + } + } + + pub fn retrieve_finish_status(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32) -> Result { + let _lock = subkernel_mutex.lock(io)?; + let mut subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() }; + match subkernel.state { + SubkernelState::Finished { status } => { + subkernel.state = SubkernelState::Uploaded; + Ok(SubkernelFinished { + id: id, + comm_lost: status == FinishStatus::CommLost, + exception: if let FinishStatus::Exception(dest) = status { + Some(drtio::subkernel_retrieve_exception(io, aux_mutex, ddma_mutex, subkernel_mutex, + routing_table, dest)?) 
+ } else { None } + }) + }, + _ => { + Err(Error::IncorrectState) + } + } + } + + pub fn await_finish(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32, timeout: i64) -> Result { + { + let _lock = subkernel_mutex.lock(io)?; + match unsafe { SUBKERNELS.get(&id).unwrap().state } { + SubkernelState::Running | SubkernelState::Finished { .. } => (), + _ => { + return Err(Error::IncorrectState); + } + } + } + let max_time = clock::get_ms() + timeout as u64; + let _res = io.until(|| { + if timeout > 0 && clock::get_ms() > max_time { + return true; + } + if subkernel_mutex.test_lock() { + // cannot lock again within io.until - scheduler guarantees + // that it will not be interrupted - so only test the lock + return false; + } + let subkernel = unsafe { SUBKERNELS.get(&id).unwrap() }; + match subkernel.state { + SubkernelState::Finished { .. } => true, + _ => false + } + })?; + if timeout > 0 && clock::get_ms() > max_time { + error!("Remote subkernel finish await timed out"); + return Err(Error::Timeout); + } + retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id) + } + + pub struct Message { + from_id: u32, + pub count: u8, + pub data: Vec + } + + // FIFO queue of messages + static mut MESSAGE_QUEUE: Vec = Vec::new(); + // currently under construction message(s) (can be from multiple sources) + static mut CURRENT_MESSAGES: BTreeMap = BTreeMap::new(); + + pub fn message_handle_incoming(io: &Io, subkernel_mutex: &Mutex, + id: u32, status: PayloadStatus, length: usize, data: &[u8; MASTER_PAYLOAD_MAX_SIZE]) { + // called when receiving a message from satellite + let _lock = match subkernel_mutex.lock(io) { + Ok(lock) => lock, + // may get interrupted, when session is cancelled or main kernel finishes without await + Err(_) => return, + }; + let subkernel = unsafe { SUBKERNELS.get(&id) }; + if subkernel.is_some() && subkernel.unwrap().state != SubkernelState::Running { + warn!("received a message for a non-running subkernel #{}", id); + // do not add messages for non-running or deleted subkernels + return + } + if status.is_first() { + unsafe { + CURRENT_MESSAGES.remove(&id); + } + } + match unsafe { CURRENT_MESSAGES.get_mut(&id) } { + Some(message) => message.data.extend(&data[..length]), + None => unsafe { + CURRENT_MESSAGES.insert(id, Message { + from_id: id, + count: data[0], + data: data[1..length].to_vec() + }); + } + }; + if status.is_last() { + unsafe { + // when done, remove from working queue + MESSAGE_QUEUE.push(CURRENT_MESSAGES.remove(&id).unwrap()); + }; + } + } + + pub fn message_await(io: &Io, subkernel_mutex: &Mutex, id: u32, timeout: i64 + ) -> Result { + let is_subkernel = { + let _lock = subkernel_mutex.lock(io)?; + let is_subkernel = unsafe { SUBKERNELS.get(&id).is_some() }; + if is_subkernel { + match unsafe { SUBKERNELS.get(&id).unwrap().state } { + SubkernelState::Finished { status: FinishStatus::Ok } | + SubkernelState::Running => (), + SubkernelState::Finished { + status: FinishStatus::CommLost, + } => return Err(Error::SubkernelFinished), + _ => return Err(Error::IncorrectState) + } + } + is_subkernel + }; + let max_time = clock::get_ms() + timeout as u64; + let message = io.until_ok(|| { + if timeout > 0 && clock::get_ms() > max_time { + return Ok(None); + } + if subkernel_mutex.test_lock() { + return Err(()); + } + let msg_len = unsafe { MESSAGE_QUEUE.len() }; + for i in 0..msg_len { + let msg = unsafe { &MESSAGE_QUEUE[i] }; + if msg.from_id == id { + return Ok(Some(unsafe { 
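message_handle_incoming above reassembles messages that arrive split over several aux packets: a chunk flagged as first starts a fresh buffer for that subkernel id, later chunks are appended, and a chunk flagged as last moves the completed message onto the FIFO that message_await drains. A simplified sketch of the reassembly, with a stand-in PayloadStatus type instead of the firmware's protocol definitions:

use std::collections::BTreeMap;

// Stand-in for the first/last flags carried by each aux payload chunk.
#[derive(Clone, Copy)]
struct PayloadStatus { first: bool, last: bool }

struct Message { from_id: u32, data: Vec<u8> }

fn handle_incoming(
    current: &mut BTreeMap<u32, Message>,
    queue: &mut Vec<Message>,
    id: u32,
    status: PayloadStatus,
    chunk: &[u8],
) {
    if status.first {
        current.remove(&id); // drop any stale partial message for this id
    }
    current
        .entry(id)
        .or_insert_with(|| Message { from_id: id, data: Vec::new() })
        .data
        .extend_from_slice(chunk);
    if status.last {
        // complete: move from the reassembly map to the FIFO awaited by message_await
        queue.push(current.remove(&id).unwrap());
    }
}

fn main() {
    let (mut current, mut queue) = (BTreeMap::new(), Vec::new());
    handle_incoming(&mut current, &mut queue, 7, PayloadStatus { first: true, last: false }, b"hel");
    handle_incoming(&mut current, &mut queue, 7, PayloadStatus { first: false, last: true }, b"lo");
    println!("message from #{}: {:?}", queue[0].from_id, queue[0].data);
}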
MESSAGE_QUEUE.remove(i) })); + } + } + if is_subkernel { + match unsafe { SUBKERNELS.get(&id).unwrap().state } { + SubkernelState::Finished { status: FinishStatus::CommLost } | + SubkernelState::Finished { status: FinishStatus::Exception(_) } => return Ok(None), + _ => () + } + } + Err(()) + }); + match message { + Ok(Some(message)) => Ok(message), + Ok(None) => { + if clock::get_ms() > max_time { + Err(Error::Timeout) + } else { + let _lock = subkernel_mutex.lock(io)?; + match unsafe { SUBKERNELS.get(&id).unwrap().state } { + SubkernelState::Finished { .. } => Err(Error::SubkernelFinished), + _ => Err(Error::IncorrectState) + } + } + } + Err(e) => Err(Error::SchedError(e)), + } + } + + pub fn message_send<'a>(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32, destination: Option, count: u8, tag: &'a [u8], message: *const *const () + ) -> Result<(), Error> { + let mut writer = Cursor::new(Vec::new()); + // reuse rpc code for sending arbitrary data + rpc::send_args(&mut writer, 0, tag, message, false)?; + // skip service tag, but overwrite first byte with tag count + let destination = destination.unwrap_or_else(|| { + let _lock = subkernel_mutex.lock(io).unwrap(); + unsafe { SUBKERNELS.get(&id).unwrap().destination } + } + ); + let data = &mut writer.into_inner()[3..]; + data[0] = count; + Ok(drtio::subkernel_send_message( + io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, destination, data + )?) + } +} \ No newline at end of file diff --git a/artiq/firmware/runtime/main.rs b/artiq/firmware/runtime/main.rs index 08367375a..ca942caa5 100644 --- a/artiq/firmware/runtime/main.rs +++ b/artiq/firmware/runtime/main.rs @@ -1,4 +1,4 @@ -#![feature(lang_items, panic_info_message, const_btree_new, iter_advance_by)] +#![feature(lang_items, panic_info_message, const_btree_new, iter_advance_by, never_type)] #![no_std] extern crate dyld; @@ -25,6 +25,8 @@ extern crate board_artiq; extern crate logger_artiq; extern crate proto_artiq; extern crate riscv; +#[cfg(has_drtio)] +extern crate tar_no_std; use alloc::collections::BTreeMap; use core::cell::RefCell; @@ -34,12 +36,16 @@ use smoltcp::wire::HardwareAddress; use board_misoc::{csr, ident, clock, spiflash, config, net_settings, pmp, boot}; #[cfg(has_ethmac)] use board_misoc::ethmac; +#[cfg(soc_platform = "kasli")] +use board_misoc::irq; use board_misoc::net_settings::{Ipv4AddrConfig}; #[cfg(has_drtio)] use board_artiq::drtioaux; use board_artiq::drtio_routing; use board_artiq::{mailbox, rpc_queue}; use proto_artiq::{mgmt_proto, moninj_proto, rpc_proto, session_proto, kernel_proto}; +#[cfg(has_wrpll)] +use board_artiq::si549; #[cfg(has_drtio_eem)] use board_artiq::drtio_eem; #[cfg(has_rtio_analyzer)] @@ -189,6 +195,7 @@ fn startup() { let aux_mutex = sched::Mutex::new(); let ddma_mutex = sched::Mutex::new(); + let subkernel_mutex = sched::Mutex::new(); let mut scheduler = sched::Scheduler::new(interface); let io = scheduler.io(); @@ -197,7 +204,7 @@ fn startup() { io.spawn(4096, dhcp::dhcp_thread); } - rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex); + rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex); io.spawn(4096, mgmt::thread); { @@ -205,20 +212,25 @@ fn startup() { let drtio_routing_table = drtio_routing_table.clone(); let up_destinations = up_destinations.clone(); let ddma_mutex = ddma_mutex.clone(); - io.spawn(16384, move |io| { session::thread(io, &aux_mutex, 
&drtio_routing_table, &up_destinations, &ddma_mutex) }); + let subkernel_mutex = subkernel_mutex.clone(); + io.spawn(32768, move |io| { session::thread(io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex) }); } #[cfg(any(has_rtio_moninj, has_drtio))] { let aux_mutex = aux_mutex.clone(); + let ddma_mutex = ddma_mutex.clone(); + let subkernel_mutex = subkernel_mutex.clone(); let drtio_routing_table = drtio_routing_table.clone(); - io.spawn(4096, move |io| { moninj::thread(io, &aux_mutex, &drtio_routing_table) }); + io.spawn(4096, move |io| { moninj::thread(io, &aux_mutex, &ddma_mutex, &subkernel_mutex, &drtio_routing_table) }); } #[cfg(has_rtio_analyzer)] { let aux_mutex = aux_mutex.clone(); + let ddma_mutex = ddma_mutex.clone(); + let subkernel_mutex = subkernel_mutex.clone(); let drtio_routing_table = drtio_routing_table.clone(); let up_destinations = up_destinations.clone(); - io.spawn(8192, move |io| { analyzer::thread(io, &aux_mutex, &drtio_routing_table, &up_destinations) }); + io.spawn(8192, move |io| { analyzer::thread(io, &aux_mutex, &ddma_mutex, &subkernel_mutex, &drtio_routing_table, &up_destinations) }); } #[cfg(has_grabber)] @@ -257,6 +269,11 @@ pub extern fn main() -> i32 { pmp::init_stack_guard(&_sstack_guard as *const u8 as usize); + #[cfg(soc_platform = "kasli")] + irq::enable_interrupts(); + #[cfg(has_wrpll)] + irq::enable(csr::WRPLL_INTERRUPT); + logger_artiq::BufferLogger::new(&mut LOG_BUFFER[..]).register(|| boot::start_user(startup as usize) ); @@ -291,8 +308,11 @@ pub extern fn exception(regs: *const TrapFrame) { let pc = mepc::read(); let cause = mcause::read().cause(); match cause { - mcause::Trap::Interrupt(source) => { - info!("Called interrupt with {:?}", source); + mcause::Trap::Interrupt(_source) => { + #[cfg(has_wrpll)] + if irq::is_pending(csr::WRPLL_INTERRUPT) { + si549::wrpll::interrupt_handler(); + } }, mcause::Trap::Exception(mcause::Exception::UserEnvCall) => { diff --git a/artiq/firmware/runtime/moninj.rs b/artiq/firmware/runtime/moninj.rs index 1ef6b4215..b66b65d95 100644 --- a/artiq/firmware/runtime/moninj.rs +++ b/artiq/firmware/runtime/moninj.rs @@ -50,12 +50,15 @@ mod local_moninj { #[cfg(has_drtio)] mod remote_moninj { use drtioaux; + use drtio_routing; use rtio_mgt::drtio; use sched::{Io, Mutex}; - pub fn read_probe(io: &Io, aux_mutex: &Mutex, linkno: u8, + pub fn read_probe(io: &Io, aux_mutex: &Mutex, + ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, channel: u16, probe: u8) -> u64 { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::MonitorRequest { destination: destination, channel: channel, @@ -69,7 +72,9 @@ mod remote_moninj { 0 } - pub fn inject(io: &Io, aux_mutex: &Mutex, linkno: u8, + pub fn inject(io: &Io, aux_mutex: &Mutex, + _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, + _routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, channel: u16, overrd: u8, value: u8) { let _lock = aux_mutex.lock(io).unwrap(); drtioaux::send(linkno, &drtioaux::Packet::InjectionRequest { @@ -80,9 +85,11 @@ mod remote_moninj { }).unwrap(); } - pub fn read_injection_status(io: &Io, aux_mutex: &Mutex, linkno: u8, + pub fn read_injection_status(io: &Io, aux_mutex: &Mutex, + ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, destination: u8, channel: u16, overrd: 
u8) -> u8 { - let reply = drtio::aux_transact(io, aux_mutex, linkno, + let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::InjectionStatusRequest { destination: destination, channel: channel, @@ -99,7 +106,7 @@ mod remote_moninj { #[cfg(has_drtio)] macro_rules! dispatch { - ($io:ident, $aux_mutex:ident, $routing_table:ident, $channel:expr, $func:ident $(, $param:expr)*) => {{ + ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $routing_table:ident, $channel:expr, $func:ident $(, $param:expr)*) => {{ let destination = ($channel >> 16) as u8; let channel = $channel as u16; let hop = $routing_table.0[destination as usize][0]; @@ -107,21 +114,21 @@ macro_rules! dispatch { local_moninj::$func(channel, $($param, )*) } else { let linkno = hop - 1; - remote_moninj::$func($io, $aux_mutex, linkno, destination, channel, $($param, )*) + remote_moninj::$func($io, $aux_mutex, $ddma_mutex, $subkernel_mutex, $routing_table, linkno, destination, channel, $($param, )*) } }} } #[cfg(not(has_drtio))] macro_rules! dispatch { - ($io:ident, $aux_mutex:ident, $routing_table:ident, $channel:expr, $func:ident $(, $param:expr)*) => {{ + ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $routing_table:ident, $channel:expr, $func:ident $(, $param:expr)*) => {{ let channel = $channel as u16; local_moninj::$func(channel, $($param, )*) }} } -fn connection_worker(io: &Io, _aux_mutex: &Mutex, _routing_table: &drtio_routing::RoutingTable, - mut stream: &mut TcpStream) -> Result<(), Error> { +fn connection_worker(io: &Io, _aux_mutex: &Mutex, _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, + _routing_table: &drtio_routing::RoutingTable, mut stream: &mut TcpStream) -> Result<(), Error> { let mut probe_watch_list = BTreeMap::new(); let mut inject_watch_list = BTreeMap::new(); let mut next_check = 0; @@ -150,9 +157,9 @@ fn connection_worker(io: &Io, _aux_mutex: &Mutex, _routing_table: &drtio_routing } }, HostMessage::Inject { channel, overrd, value } => dispatch!( - io, _aux_mutex, _routing_table, channel, inject, overrd, value), + io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, channel, inject, overrd, value), HostMessage::GetInjectionStatus { channel, overrd } => { - let value = dispatch!(io, _aux_mutex, _routing_table, channel, read_injection_status, overrd); + let value = dispatch!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, channel, read_injection_status, overrd); let reply = DeviceMessage::InjectionStatus { channel: channel, overrd: overrd, @@ -169,7 +176,7 @@ fn connection_worker(io: &Io, _aux_mutex: &Mutex, _routing_table: &drtio_routing if clock::get_ms() > next_check { for (&(channel, probe), previous) in probe_watch_list.iter_mut() { - let current = dispatch!(io, _aux_mutex, _routing_table, channel, read_probe, probe); + let current = dispatch!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, channel, read_probe, probe); if previous.is_none() || previous.unwrap() != current { let message = DeviceMessage::MonitorStatus { channel: channel, @@ -184,7 +191,7 @@ fn connection_worker(io: &Io, _aux_mutex: &Mutex, _routing_table: &drtio_routing } } for (&(channel, overrd), previous) in inject_watch_list.iter_mut() { - let current = dispatch!(io, _aux_mutex, _routing_table, channel, read_injection_status, overrd); + let current = dispatch!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, channel, read_injection_status, overrd); if previous.is_none() || 
previous.unwrap() != current { let message = DeviceMessage::InjectionStatus { channel: channel, @@ -205,18 +212,20 @@ fn connection_worker(io: &Io, _aux_mutex: &Mutex, _routing_table: &drtio_routing } } -pub fn thread(io: Io, aux_mutex: &Mutex, routing_table: &Urc>) { +pub fn thread(io: Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &Urc>) { let listener = TcpListener::new(&io, 2047); listener.listen(1383).expect("moninj: cannot listen"); loop { let aux_mutex = aux_mutex.clone(); + let ddma_mutex = ddma_mutex.clone(); + let subkernel_mutex = subkernel_mutex.clone(); let routing_table = routing_table.clone(); let stream = listener.accept().expect("moninj: cannot accept").into_handle(); io.spawn(16384, move |io| { let routing_table = routing_table.borrow(); let mut stream = TcpStream::from_handle(&io, stream); - match connection_worker(&io, &aux_mutex, &routing_table, &mut stream) { + match connection_worker(&io, &aux_mutex, &ddma_mutex, &subkernel_mutex, &routing_table, &mut stream) { Ok(()) => {}, Err(err) => error!("moninj aborted: {}", err) } diff --git a/artiq/firmware/runtime/rtio_clocking.rs b/artiq/firmware/runtime/rtio_clocking.rs index edb1b74e8..64236c749 100644 --- a/artiq/firmware/runtime/rtio_clocking.rs +++ b/artiq/firmware/runtime/rtio_clocking.rs @@ -1,8 +1,11 @@ use board_misoc::config; +#[cfg(has_si5324)] use board_artiq::si5324; +#[cfg(has_si549)] +use board_artiq::si549; use board_misoc::{csr, clock}; -#[derive(Debug, PartialEq)] +#[derive(Debug, PartialEq, Copy, Clone)] #[allow(non_camel_case_types)] pub enum RtioClock { Default, @@ -89,13 +92,14 @@ pub mod crg { // Si5324 input to select for locking to an external clock (as opposed to // a recovered link clock in DRTIO satellites, which is handled elsewhere). 
-#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))] +#[cfg(all(has_si5324, soc_platform = "kasli", hw_rev = "v2.0"))] const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin1; -#[cfg(all(soc_platform = "kasli", not(hw_rev = "v2.0")))] +#[cfg(all(has_si5324, soc_platform = "kasli", not(hw_rev = "v2.0")))] const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin2; -#[cfg(all(soc_platform = "kc705"))] +#[cfg(all(has_si5324, soc_platform = "kc705"))] const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin2; +#[cfg(has_si5324)] fn setup_si5324_pll(cfg: RtioClock) { let (si5324_settings, si5324_ref_input) = match cfg { RtioClock::Ext0_Synth0_10to125 => { // 125 MHz output from 10 MHz CLKINx reference, 504 Hz BW @@ -214,7 +218,7 @@ fn setup_si5324_pll(cfg: RtioClock) { si5324::setup(&si5324_settings, si5324_ref_input).expect("cannot initialize Si5324"); } -fn setup_si5324(clock_cfg: RtioClock) { +fn sysclk_setup(clock_cfg: RtioClock) { let switched = unsafe { csr::crg::switch_done_read() }; @@ -222,6 +226,8 @@ fn setup_si5324(clock_cfg: RtioClock) { info!("Clocking has already been set up."); return; } + + #[cfg(has_si5324)] match clock_cfg { RtioClock::Ext0_Bypass => { info!("using external RTIO clock with PLL bypass"); @@ -230,7 +236,10 @@ fn setup_si5324(clock_cfg: RtioClock) { _ => setup_si5324_pll(clock_cfg), } - // switch sysclk source to si5324 + #[cfg(has_si549)] + si549::main_setup(&get_si549_setting(clock_cfg)).expect("cannot initialize main Si549"); + + // switch sysclk source #[cfg(not(has_drtio))] { info!("Switching sys clock, rebooting..."); @@ -244,9 +253,153 @@ fn setup_si5324(clock_cfg: RtioClock) { } +#[cfg(all(has_si549, has_wrpll))] +fn wrpll_setup(clk: RtioClock, si549_settings: &si549::FrequencySetting) { + // register values are directly copied from preconfigured mmcm + let (mmcm_setting, mmcm_bypass) = match clk { + RtioClock::Ext0_Synth0_10to125 => ( + si549::wrpll_refclk::MmcmSetting { + // CLKFBOUT_MULT = 62.5, DIVCLK_DIVIDE = 1 , CLKOUT0_DIVIDE = 5 + clkout0_reg1: 0x1083, + clkout0_reg2: 0x0080, + clkfbout_reg1: 0x179e, + clkfbout_reg2: 0x4c00, + div_reg: 0x1041, + lock_reg1: 0x00fa, + lock_reg2: 0x7c01, + lock_reg3: 0xffe9, + power_reg: 0x9900, + filt_reg1: 0x1008, + filt_reg2: 0x8800, + }, + false, + ), + RtioClock::Ext0_Synth0_80to125 => ( + si549::wrpll_refclk::MmcmSetting { + // CLKFBOUT_MULT = 15.625, DIVCLK_DIVIDE = 1 , CLKOUT0_DIVIDE = 10 + clkout0_reg1: 0x1145, + clkout0_reg2: 0x0000, + clkfbout_reg1: 0x11c7, + clkfbout_reg2: 0x5880, + div_reg: 0x1041, + lock_reg1: 0x028a, + lock_reg2: 0x7c01, + lock_reg3: 0xffe9, + power_reg: 0x9900, + filt_reg1: 0x9908, + filt_reg2: 0x8100, + }, + false, + ), + RtioClock::Ext0_Synth0_100to125 => ( + si549::wrpll_refclk::MmcmSetting { + // CLKFBOUT_MULT = 12.5, DIVCLK_DIVIDE = 1 , CLKOUT0_DIVIDE = 10 + clkout0_reg1: 0x1145, + clkout0_reg2: 0x0000, + clkfbout_reg1: 0x1145, + clkfbout_reg2: 0x4c00, + div_reg: 0x1041, + lock_reg1: 0x0339, + lock_reg2: 0x7c01, + lock_reg3: 0xffe9, + power_reg: 0x9900, + filt_reg1: 0x9108, + filt_reg2: 0x0100, + }, + false, + ), + RtioClock::Ext0_Synth0_125to125 => ( + si549::wrpll_refclk::MmcmSetting { + // CLKFBOUT_MULT = 10, DIVCLK_DIVIDE = 1 , CLKOUT0_DIVIDE = 10 + clkout0_reg1: 0x1145, + clkout0_reg2: 0x0000, + clkfbout_reg1: 0x1145, + clkfbout_reg2: 0x0000, + div_reg: 0x1041, + lock_reg1: 0x03e8, + lock_reg2: 0x7001, + lock_reg3: 0xf3e9, + power_reg: 0x0100, + filt_reg1: 0x9908, + filt_reg2: 0x1100, + }, + true, + ), + _ => unreachable!(), + }; + + 
si549::helper_setup(&si549_settings).expect("cannot initialize helper Si549"); + si549::wrpll_refclk::setup(mmcm_setting, mmcm_bypass).expect("cannot initialize ref clk for wrpll"); + si549::wrpll::select_recovered_clock(true); +} + +#[cfg(has_si549)] +fn get_si549_setting(clk: RtioClock) -> si549::FrequencySetting { + match clk { + RtioClock::Ext0_Synth0_10to125 => { + info!("using 10MHz reference to make 125MHz RTIO clock with WRPLL"); + } + RtioClock::Ext0_Synth0_80to125 => { + info!("using 80MHz reference to make 125MHz RTIO clock with WRPLL"); + } + RtioClock::Ext0_Synth0_100to125 => { + info!("using 100MHz reference to make 125MHz RTIO clock with WRPLL"); + } + RtioClock::Ext0_Synth0_125to125 => { + info!("using 125MHz reference to make 125MHz RTIO clock with WRPLL"); + } + RtioClock::Int_100 => { + info!("using internal 100MHz RTIO clock"); + } + RtioClock::Int_125 => { + info!("using internal 125MHz RTIO clock"); + } + _ => { + warn!( + "rtio_clock setting '{:?}' is unsupported. Falling back to default internal 125MHz RTIO clock.", + clk + ); + } + }; + + match clk { + RtioClock::Int_100 => { + si549::FrequencySetting { + main: si549::DividerConfig { + hsdiv: 0x06C, + lsdiv: 0, + fbdiv: 0x046C5F49797, + }, + helper: si549::DividerConfig { + // 100MHz*32767/32768 + hsdiv: 0x06C, + lsdiv: 0, + fbdiv: 0x046C5670BBD, + }, + } + } + _ => { + // Everything else use 125MHz + si549::FrequencySetting { + main: si549::DividerConfig { + hsdiv: 0x058, + lsdiv: 0, + fbdiv: 0x04815791F25, + }, + helper: si549::DividerConfig { + // 125MHz*32767/32768 + hsdiv: 0x058, + lsdiv: 0, + fbdiv: 0x04814E8F442, + }, + } + } + } +} + pub fn init() { let clock_cfg = get_rtio_clock_cfg(); - setup_si5324(clock_cfg); + sysclk_setup(clock_cfg); #[cfg(has_drtio)] { @@ -255,7 +408,7 @@ pub fn init() { }; if switched == 0 { info!("Switching sys clock, rebooting..."); - clock::spin_us(500); // delay for clean UART log + clock::spin_us(3000); // delay for clean UART log unsafe { // clock switch and reboot will begin after TX is initialized // and TX will be initialized after this @@ -282,4 +435,18 @@ pub fn init() { error!("RTIO clock failed"); } } + + #[cfg(all(has_si549, has_wrpll))] + { + // SYS CLK switch will reset CSRs that are used by WRPLL + match clock_cfg { + RtioClock::Ext0_Synth0_10to125 + | RtioClock::Ext0_Synth0_80to125 + | RtioClock::Ext0_Synth0_100to125 + | RtioClock::Ext0_Synth0_125to125 => { + wrpll_setup(clock_cfg, &get_si549_setting(clock_cfg)); + } + _ => {} + } + } } diff --git a/artiq/firmware/runtime/rtio_dma.rs b/artiq/firmware/runtime/rtio_dma.rs index ce0fd4088..e1efa5567 100644 --- a/artiq/firmware/runtime/rtio_dma.rs +++ b/artiq/firmware/runtime/rtio_dma.rs @@ -1,6 +1,6 @@ use core::mem; use alloc::{vec::Vec, string::String, collections::btree_map::BTreeMap}; -use sched::{Io, Mutex}; +use sched::{Io, Mutex, Error as SchedError}; const ALIGNMENT: usize = 64; @@ -39,19 +39,48 @@ pub mod remote_dma { } } + #[derive(Fail, Debug)] + pub enum Error { + #[fail(display = "Timed out waiting for DMA results")] + Timeout, + #[fail(display = "DDMA trace is in incorrect state for the given operation")] + IncorrectState, + #[fail(display = "scheduler error: {}", _0)] + SchedError(#[cause] SchedError), + #[fail(display = "DRTIO error: {}", _0)] + DrtioError(#[cause] drtio::Error), + } + + impl From for Error { + fn from(value: drtio::Error) -> Error { + match value { + drtio::Error::SchedError(x) => Error::SchedError(x), + x => Error::DrtioError(x), + } + } + } + + impl From for Error { + fn from(value: 
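get_si549_setting above selects the main and helper divider words for the Si549 pair: Int_100 gets its own set, every other option generates a 125 MHz RTIO clock, and per the comments in the diff the helper output is offset from the main one by a factor of 32767/32768. A standalone sketch of that selection reusing the divider values from the diff (the struct shapes are simplified stand-ins, not the firmware's si549 module):

#[allow(non_camel_case_types)]
enum RtioClock { Int_100, Ext0_Synth0_10to125 }

#[derive(Debug)]
struct DividerConfig { hsdiv: u16, lsdiv: u8, fbdiv: u64 }

#[derive(Debug)]
struct FrequencySetting { main: DividerConfig, helper: DividerConfig }

fn get_si549_setting(clk: RtioClock) -> FrequencySetting {
    match clk {
        RtioClock::Int_100 => FrequencySetting {
            main: DividerConfig { hsdiv: 0x06C, lsdiv: 0, fbdiv: 0x046C5F49797 },
            // helper = 100 MHz * 32767/32768
            helper: DividerConfig { hsdiv: 0x06C, lsdiv: 0, fbdiv: 0x046C5670BBD },
        },
        _ => FrequencySetting {
            // everything else runs the RTIO clock at 125 MHz
            main: DividerConfig { hsdiv: 0x058, lsdiv: 0, fbdiv: 0x04815791F25 },
            // helper = 125 MHz * 32767/32768
            helper: DividerConfig { hsdiv: 0x058, lsdiv: 0, fbdiv: 0x04814E8F442 },
        },
    }
}

fn main() {
    println!("{:#?}", get_si549_setting(RtioClock::Int_100));
    println!("{:#?}", get_si549_setting(RtioClock::Ext0_Synth0_10to125));
}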
SchedError) -> Error { + Error::SchedError(value) + } + } + // remote traces map. ID -> destination, trace pair static mut TRACES: BTreeMap> = BTreeMap::new(); - pub fn add_traces(io: &Io, ddma_mutex: &Mutex, id: u32, traces: BTreeMap>) { - let _lock = ddma_mutex.lock(io); + pub fn add_traces(io: &Io, ddma_mutex: &Mutex, id: u32, traces: BTreeMap> + ) -> Result<(), SchedError> { + let _lock = ddma_mutex.lock(io)?; let mut trace_map: BTreeMap = BTreeMap::new(); for (destination, trace) in traces { trace_map.insert(destination, trace.into()); } unsafe { TRACES.insert(id, trace_map); } + Ok(()) } - pub fn await_done(io: &Io, ddma_mutex: &Mutex, id: u32, timeout: u64) -> Result { + pub fn await_done(io: &Io, ddma_mutex: &Mutex, id: u32, timeout: u64) -> Result { let max_time = clock::get_ms() + timeout as u64; io.until(|| { if clock::get_ms() > max_time { @@ -70,15 +99,15 @@ pub mod remote_dma { } } true - }).unwrap(); + })?; if clock::get_ms() > max_time { error!("Remote DMA await done timed out"); - return Err("Timed out waiting for results."); + return Err(Error::Timeout); } // clear the internal state, and if there have been any errors, return one of them let mut playback_state: RemoteState = RemoteState::PlaybackEnded { error: 0, channel: 0, timestamp: 0 }; { - let _lock = ddma_mutex.lock(io).unwrap(); + let _lock = ddma_mutex.lock(io)?; let traces = unsafe { TRACES.get_mut(&id).unwrap() }; for (_dest, trace) in traces { match trace.state { @@ -91,60 +120,57 @@ pub mod remote_dma { Ok(playback_state) } - pub fn erase(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, - routing_table: &RoutingTable, id: u32) { - let _lock = ddma_mutex.lock(io).unwrap(); + pub fn erase(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32) -> Result<(), Error> { + let _lock = ddma_mutex.lock(io)?; let destinations = unsafe { TRACES.get(&id).unwrap() }; for destination in destinations.keys() { - match drtio::ddma_send_erase(io, aux_mutex, routing_table, id, *destination) { + match drtio::ddma_send_erase(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, *destination) { Ok(_) => (), Err(e) => error!("Error erasing trace on DMA: {}", e) } } unsafe { TRACES.remove(&id); } + Ok(()) } - pub fn upload_traces(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, - routing_table: &RoutingTable, id: u32) { - let _lock = ddma_mutex.lock(io); + pub fn upload_traces(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32) -> Result<(), Error> { + let _lock = ddma_mutex.lock(io)?; let traces = unsafe { TRACES.get_mut(&id).unwrap() }; for (destination, mut trace) in traces { - match drtio::ddma_upload_trace(io, aux_mutex, routing_table, id, *destination, trace.get_trace()) - { - Ok(_) => trace.state = RemoteState::Loaded, - Err(e) => error!("Error adding DMA trace on destination {}: {}", destination, e) - } + drtio::ddma_upload_trace(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, *destination, trace.get_trace())?; + trace.state = RemoteState::Loaded; } + Ok(()) } - pub fn playback(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, - routing_table: &RoutingTable, id: u32, timestamp: u64) { + pub fn playback(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &RoutingTable, id: u32, timestamp: u64) -> Result<(), Error>{ // triggers playback on satellites let destinations = unsafe { - let _lock = ddma_mutex.lock(io).unwrap(); + let _lock = 
ddma_mutex.lock(io)?; TRACES.get(&id).unwrap() }; for (destination, trace) in destinations { { // need to drop the lock before sending the playback request to avoid a deadlock // if a PlaybackStatus is returned from another satellite in the meanwhile. - let _lock = ddma_mutex.lock(io).unwrap(); + let _lock = ddma_mutex.lock(io)?; if trace.state != RemoteState::Loaded { error!("Destination {} not ready for DMA, state: {:?}", *destination, trace.state); - continue; + return Err(Error::IncorrectState); } } - match drtio::ddma_send_playback(io, aux_mutex, routing_table, id, *destination, timestamp) { - Ok(_) => (), - Err(e) => error!("Error during remote DMA playback: {}", e) - } + drtio::ddma_send_playback(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, *destination, timestamp)?; } + Ok(()) } pub fn playback_done(io: &Io, ddma_mutex: &Mutex, - id: u32, destination: u8, error: u8, channel: u32, timestamp: u64) { + id: u32, source: u8, error: u8, channel: u32, timestamp: u64) { // called upon receiving PlaybackDone aux packet let _lock = ddma_mutex.lock(io).unwrap(); - let mut trace = unsafe { TRACES.get_mut(&id).unwrap().get_mut(&destination).unwrap() }; + let mut trace = unsafe { TRACES.get_mut(&id).unwrap().get_mut(&source).unwrap() }; trace.state = RemoteState::PlaybackEnded { error: error, channel: channel, @@ -152,7 +178,7 @@ pub mod remote_dma { }; } - pub fn destination_changed(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, + pub fn destination_changed(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &RoutingTable, destination: u8, up: bool) { // update state of the destination, resend traces if it's up let _lock = ddma_mutex.lock(io).unwrap(); @@ -160,7 +186,7 @@ pub mod remote_dma { for (id, dest_traces) in traces_iter { if let Some(trace) = dest_traces.get_mut(&destination) { if up { - match drtio::ddma_upload_trace(io, aux_mutex, routing_table, *id, destination, trace.get_trace()) + match drtio::ddma_upload_trace(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, *id, destination, trace.get_trace()) { Ok(_) => trace.state = RemoteState::Loaded, Err(e) => error!("Error adding DMA trace on destination {}: {}", destination, e) @@ -172,10 +198,10 @@ pub mod remote_dma { } } - pub fn has_remote_traces(io: &Io, ddma_mutex: &Mutex, id: u32) -> bool { - let _lock = ddma_mutex.lock(io).unwrap(); + pub fn has_remote_traces(io: &Io, ddma_mutex: &Mutex, id: u32) -> Result { + let _lock = ddma_mutex.lock(io)?; let trace_list = unsafe { TRACES.get(&id).unwrap() }; - !trace_list.is_empty() + Ok(!trace_list.is_empty()) } } @@ -225,11 +251,11 @@ impl Manager { } pub fn record_stop(&mut self, duration: u64, _enable_ddma: bool, - _io: &Io, _ddma_mutex: &Mutex) -> u32 { + _io: &Io, _ddma_mutex: &Mutex) -> Result { let mut local_trace = Vec::new(); let mut _remote_traces: BTreeMap> = BTreeMap::new(); - if _enable_ddma & cfg!(has_drtio) { + if _enable_ddma && cfg!(has_drtio) { let mut trace = Vec::new(); mem::swap(&mut self.recording_trace, &mut trace); trace.push(0); @@ -284,9 +310,9 @@ impl Manager { self.name_map.insert(name, id); #[cfg(has_drtio)] - remote_dma::add_traces(_io, _ddma_mutex, id, _remote_traces); + remote_dma::add_traces(_io, _ddma_mutex, id, _remote_traces)?; - id + Ok(id) } pub fn erase(&mut self, name: &str) { diff --git a/artiq/firmware/runtime/rtio_mgt.rs b/artiq/firmware/runtime/rtio_mgt.rs index 3ead188b5..2d0cbad55 100644 --- a/artiq/firmware/runtime/rtio_mgt.rs +++ b/artiq/firmware/runtime/rtio_mgt.rs @@ -17,22 
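await_done above uses the same wait-with-deadline pattern as the subkernel awaits: compute a deadline from the timeout, repeatedly re-check the completion condition, and report a timeout error if the deadline passes first. A minimal stand-in for that pattern (the real code yields to the cooperative scheduler via io.until/io.relinquish instead of sleeping):

use std::time::{Duration, Instant};

// Poll a condition until it holds or a deadline passes.
fn await_done<F: FnMut() -> bool>(timeout: Duration, mut done: F) -> Result<(), &'static str> {
    let deadline = Instant::now() + timeout;
    while !done() {
        if Instant::now() > deadline {
            return Err("timed out waiting for DMA results");
        }
        std::thread::sleep(Duration::from_millis(1)); // placeholder for a scheduler yield
    }
    Ok(())
}

fn main() {
    let started = Instant::now();
    // Pretend the remote playback completes after ~5 ms.
    let result = await_done(Duration::from_millis(100),
                            || started.elapsed() > Duration::from_millis(5));
    println!("{:?}", result);
}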
+17,57 @@ pub mod drtio { use super::*; use alloc::vec::Vec; use drtioaux; - use proto_artiq::drtioaux_proto::DMA_TRACE_MAX_SIZE; + use proto_artiq::drtioaux_proto::{MASTER_PAYLOAD_MAX_SIZE, PayloadStatus}; use rtio_dma::remote_dma; #[cfg(has_rtio_analyzer)] use analyzer::remote_analyzer::RemoteBuffer; + use kernel::subkernel; + use sched::Error as SchedError; + + #[derive(Fail, Debug)] + pub enum Error { + #[fail(display = "timed out")] + Timeout, + #[fail(display = "unexpected packet: {:?}", _0)] + UnexpectedPacket(drtioaux::Packet), + #[fail(display = "aux packet error")] + AuxError, + #[fail(display = "link down")] + LinkDown, + #[fail(display = "unexpected reply")] + UnexpectedReply, + #[fail(display = "error adding DMA trace on satellite #{}", _0)] + DmaAddTraceFail(u8), + #[fail(display = "error erasing DMA trace on satellite #{}", _0)] + DmaEraseFail(u8), + #[fail(display = "error playing back DMA trace on satellite #{}", _0)] + DmaPlaybackFail(u8), + #[fail(display = "error adding subkernel on satellite #{}", _0)] + SubkernelAddFail(u8), + #[fail(display = "error on subkernel run request on satellite #{}", _0)] + SubkernelRunFail(u8), + #[fail(display = "sched error: {}", _0)] + SchedError(#[cause] SchedError), + } + + impl From for Error { + fn from(value: SchedError) -> Error { + Error::SchedError(value) + } + } pub fn startup(io: &Io, aux_mutex: &Mutex, routing_table: &Urc>, up_destinations: &Urc>, - ddma_mutex: &Mutex) { + ddma_mutex: &Mutex, subkernel_mutex: &Mutex) { let aux_mutex = aux_mutex.clone(); let routing_table = routing_table.clone(); let up_destinations = up_destinations.clone(); let ddma_mutex = ddma_mutex.clone(); - io.spawn(8192, move |io| { + let subkernel_mutex = subkernel_mutex.clone(); + io.spawn(16384, move |io| { let routing_table = routing_table.borrow(); - link_thread(io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex); + link_thread(io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex); }); } @@ -43,42 +78,104 @@ pub mod drtio { } } - fn recv_aux_timeout(io: &Io, linkno: u8, timeout: u32) -> Result { + fn recv_aux_timeout(io: &Io, linkno: u8, timeout: u32) -> Result { let max_time = clock::get_ms() + timeout as u64; loop { if !link_rx_up(linkno) { - return Err("link went down"); + return Err(Error::LinkDown); } if clock::get_ms() > max_time { - return Err("timeout"); + return Err(Error::Timeout); } match drtioaux::recv(linkno) { Ok(Some(packet)) => return Ok(packet), Ok(None) => (), - Err(_) => return Err("aux packet error") + Err(_) => return Err(Error::AuxError) } - io.relinquish().unwrap(); + io.relinquish()?; } } - fn process_async_packets(io: &Io, ddma_mutex: &Mutex, packet: drtioaux::Packet - ) -> Option { - // returns None if an async packet has been consumed + fn process_async_packets(io: &Io, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, packet: &drtioaux::Packet + ) -> bool { match packet { - drtioaux::Packet::DmaPlaybackStatus { id, destination, error, channel, timestamp } => { - remote_dma::playback_done(io, ddma_mutex, id, destination, error, channel, timestamp); - None + // packets to be consumed locally + drtioaux::Packet::DmaPlaybackStatus { id, source, destination: 0, error, channel, timestamp } => { + remote_dma::playback_done(io, ddma_mutex, *id, *source, *error, *channel, *timestamp); + true }, - other => Some(other) + drtioaux::Packet::SubkernelFinished { id, destination: 0, with_exception, exception_src } => { + 
subkernel::subkernel_finished(io, subkernel_mutex, *id, *with_exception, *exception_src); + true + }, + drtioaux::Packet::SubkernelMessage { id, source: from, destination: 0, status, length, data } => { + subkernel::message_handle_incoming(io, subkernel_mutex, *id, *status, *length as usize, data); + // acknowledge receiving part of the message + drtioaux::send(linkno, + &drtioaux::Packet::SubkernelMessageAck { destination: *from } + ).unwrap(); + true + }, + // (potentially) routable packets + drtioaux::Packet::DmaAddTraceRequest { destination, .. } | + drtioaux::Packet::DmaAddTraceReply { destination, .. } | + drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } | + drtioaux::Packet::DmaRemoveTraceReply { destination, .. } | + drtioaux::Packet::DmaPlaybackRequest { destination, .. } | + drtioaux::Packet::DmaPlaybackReply { destination, .. } | + drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } | + drtioaux::Packet::SubkernelLoadRunReply { destination, .. } | + drtioaux::Packet::SubkernelMessage { destination, .. } | + drtioaux::Packet::SubkernelMessageAck { destination, .. } | + drtioaux::Packet::DmaPlaybackStatus { destination, .. } | + drtioaux::Packet::SubkernelFinished { destination, .. } => { + if *destination == 0 { + false + } else { + let dest_link = routing_table.0[*destination as usize][0] - 1; + if dest_link == linkno { + warn!("[LINK#{}] Re-routed packet would return to the same link, dropping: {:?}", linkno, packet); + } else { + drtioaux::send(dest_link, &packet).unwrap(); + } + true + } + } + _ => false } } - pub fn aux_transact(io: &Io, aux_mutex: &Mutex, linkno: u8, request: &drtioaux::Packet - ) -> Result { - let _lock = aux_mutex.lock(io).unwrap(); + pub fn aux_transact(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8, request: &drtioaux::Packet + ) -> Result { + let _lock = aux_mutex.lock(io)?; drtioaux::send(linkno, request).unwrap(); - let reply = recv_aux_timeout(io, linkno, 200)?; - Ok(reply) + loop { + let reply = recv_aux_timeout(io, linkno, 200)?; + if !process_async_packets(io, ddma_mutex, subkernel_mutex, routing_table, linkno, &reply) { + // returns none if it was an async packet + return Ok(reply); + } + } + } + + fn setup_transact(io: &Io, aux_mutex: &Mutex, linkno: u8, request: &drtioaux::Packet) -> Result { + /* shorter aux_transact for setup purposes, no checking for async packets, + as they should not be generated yet */ + let _lock = aux_mutex.lock(io)?; + drtioaux::send(linkno, request).unwrap(); + recv_aux_timeout(io, linkno, 200) + } + + pub fn clear_buffers(io: &Io, aux_mutex: &Mutex) { + let _lock = aux_mutex.lock(io).unwrap(); + for linkno in 0..(csr::DRTIO.len() as u8) { + if !link_rx_up(linkno) { + continue; + } + let _ = recv_aux_timeout(io, linkno, 200); + } } fn ping_remote(io: &Io, aux_mutex: &Mutex, linkno: u8) -> u32 { @@ -91,7 +188,7 @@ pub mod drtio { if count > 100 { return 0; } - let reply = aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::EchoRequest); + let reply = setup_transact(io, aux_mutex, linkno, &drtioaux::Packet::EchoRequest); match reply { Ok(drtioaux::Packet::EchoReply) => { // make sure receive buffer is drained @@ -110,7 +207,7 @@ pub mod drtio { } } - fn sync_tsc(io: &Io, aux_mutex: &Mutex, linkno: u8) -> Result<(), &'static str> { + fn sync_tsc(io: &Io, aux_mutex: &Mutex, linkno: u8) -> Result<(), Error> { let _lock = aux_mutex.lock(io).unwrap(); unsafe { @@ -123,32 +220,32 @@ pub mod drtio { if reply == 
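process_async_packets above gives the master three choices for every packet it pulls off a link: consume it locally when the destination field is 0 (DMA playback status, subkernel finished, subkernel messages), forward it over the link named by the routing table when it is addressed to another destination, or hand it back to aux_transact as the reply being waited for. A simplified sketch of that decision, with stand-in packet and routing types:

// Illustrative packet with just the fields needed for the routing decision.
#[derive(Debug)]
enum Packet {
    DmaPlaybackStatus { destination: u8 },
    SubkernelFinished { destination: u8 },
    EchoReply, // example of a packet that is always a direct reply
}

enum Disposition {
    ConsumedLocally,
    Forwarded { linkno: u8 },
    ReplyForCaller,
}

fn process_async(routing_first_hop: &[u8; 256], rx_link: u8, packet: &Packet) -> Disposition {
    let destination = match packet {
        Packet::DmaPlaybackStatus { destination }
        | Packet::SubkernelFinished { destination } => *destination,
        _ => return Disposition::ReplyForCaller,
    };
    if destination == 0 {
        Disposition::ConsumedLocally // addressed to the master itself
    } else {
        let dest_link = routing_first_hop[destination as usize] - 1;
        if dest_link == rx_link {
            // would bounce straight back out of the link it came from: warn and drop
            Disposition::ConsumedLocally
        } else {
            Disposition::Forwarded { linkno: dest_link }
        }
    }
}

fn main() {
    let mut routing = [0u8; 256];
    routing[2] = 2; // destination 2 reached via link 1
    for packet in [Packet::SubkernelFinished { destination: 2 },
                   Packet::DmaPlaybackStatus { destination: 0 },
                   Packet::EchoReply] {
        match process_async(&routing, 0, &packet) {
            Disposition::ConsumedLocally => println!("{:?}: consumed locally", packet),
            Disposition::Forwarded { linkno } => println!("{:?}: forward over link {}", packet, linkno),
            Disposition::ReplyForCaller => println!("{:?}: return to caller", packet),
        }
    }
}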
drtioaux::Packet::TSCAck { return Ok(()); } else { - return Err("unexpected reply"); + return Err(Error::UnexpectedReply); } } fn load_routing_table(io: &Io, aux_mutex: &Mutex, - linkno: u8, routing_table: &drtio_routing::RoutingTable) -> Result<(), &'static str> { + linkno: u8, routing_table: &drtio_routing::RoutingTable) -> Result<(), Error> { for i in 0..drtio_routing::DEST_COUNT { - let reply = aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::RoutingSetPath { + let reply = setup_transact(io, aux_mutex, linkno, &drtioaux::Packet::RoutingSetPath { destination: i as u8, hops: routing_table.0[i] })?; if reply != drtioaux::Packet::RoutingAck { - return Err("unexpected reply"); + return Err(Error::UnexpectedReply); } } Ok(()) } fn set_rank(io: &Io, aux_mutex: &Mutex, - linkno: u8, rank: u8) -> Result<(), &'static str> { - let reply = aux_transact(io, aux_mutex, linkno, + linkno: u8, rank: u8) -> Result<(), Error> { + let reply = setup_transact(io, aux_mutex, linkno, &drtioaux::Packet::RoutingSetRank { rank: rank })?; if reply != drtioaux::Packet::RoutingAck { - return Err("unexpected reply"); + return Err(Error::UnexpectedReply); } Ok(()) } @@ -166,15 +263,19 @@ pub mod drtio { } } - fn process_unsolicited_aux(io: &Io, aux_mutex: &Mutex, linkno: u8, ddma_mutex: &Mutex) { + fn process_unsolicited_aux(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, linkno: u8) { let _lock = aux_mutex.lock(io).unwrap(); - match drtioaux::recv(linkno) { - Ok(Some(drtioaux::Packet::DmaPlaybackStatus { id, destination, error, channel, timestamp })) => { - remote_dma::playback_done(io, ddma_mutex, id, destination, error, channel, timestamp); + loop { + match drtioaux::recv(linkno) { + Ok(Some(packet)) => { + if !process_async_packets(&io, ddma_mutex, subkernel_mutex, routing_table, linkno, &packet) { + warn!("[LINK#{}] unsolicited aux packet: {:?}", linkno, packet); + } + }, + Ok(None) => return, + Err(_) => { warn!("[LINK#{}] aux packet error", linkno); return } } - Ok(Some(packet)) => warn!("[LINK#{}] unsolicited aux packet: {:?}", linkno, packet), - Ok(None) => (), - Err(_) => warn!("[LINK#{}] aux packet error", linkno) } } @@ -221,7 +322,7 @@ pub mod drtio { fn destination_survey(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_links: &[bool], up_destinations: &Urc>, - ddma_mutex: &Mutex) { + ddma_mutex: &Mutex, subkernel_mutex: &Mutex) { for destination in 0..drtio_routing::DEST_COUNT { let hop = routing_table.0[destination][0]; let destination = destination as u8; @@ -235,51 +336,46 @@ pub mod drtio { let linkno = hop - 1; if destination_up(up_destinations, destination) { if up_links[linkno as usize] { - loop { - let reply = aux_transact(io, aux_mutex, linkno, - &drtioaux::Packet::DestinationStatusRequest { - destination: destination - }); - if let Ok(reply) = reply { - let reply = process_async_packets(io, ddma_mutex, reply); - match reply { - Some(drtioaux::Packet::DestinationDownReply) => { - destination_set_up(routing_table, up_destinations, destination, false); - remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false); - } - Some(drtioaux::Packet::DestinationOkReply) => (), - Some(drtioaux::Packet::DestinationSequenceErrorReply { channel }) => { - error!("[DEST#{}] RTIO sequence error involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); - unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_SEQUENCE_ERROR }; - } - 
Some(drtioaux::Packet::DestinationCollisionReply { channel }) => { - error!("[DEST#{}] RTIO collision involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); - unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_COLLISION }; - } - Some(drtioaux::Packet::DestinationBusyReply { channel }) => { - error!("[DEST#{}] RTIO busy error involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); - unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_BUSY }; - } - Some(packet) => error!("[DEST#{}] received unexpected aux packet: {:?}", destination, packet), - None => { - // continue asking until we get Destination...Reply or error out - // wait a bit not to overwhelm the receiver causing gateway errors - io.sleep(10).unwrap(); - continue; - } + let reply = aux_transact(io, aux_mutex, + ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::DestinationStatusRequest { + destination: destination + }); + if let Ok(reply) = reply { + match reply { + drtioaux::Packet::DestinationDownReply => { + destination_set_up(routing_table, up_destinations, destination, false); + remote_dma::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, false); + subkernel::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, false); } - } else { - error!("[DEST#{}] communication failed ({})", destination, reply.unwrap_err()); + drtioaux::Packet::DestinationOkReply => (), + drtioaux::Packet::DestinationSequenceErrorReply { channel } => { + error!("[DEST#{}] RTIO sequence error involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); + unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_SEQUENCE_ERROR }; + } + drtioaux::Packet::DestinationCollisionReply { channel } => { + error!("[DEST#{}] RTIO collision involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); + unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_COLLISION }; + } + drtioaux::Packet::DestinationBusyReply { channel } => { + error!("[DEST#{}] RTIO busy error involving channel 0x{:04x}:{}", destination, channel, resolve_channel_name(channel as u32)); + unsafe { SEEN_ASYNC_ERRORS |= ASYNC_ERROR_BUSY }; + } + packet => error!("[DEST#{}] received unexpected aux packet: {:?}", destination, packet), + } - break; + } else { + error!("[DEST#{}] communication failed ({:?})", destination, reply.unwrap_err()); } } else { destination_set_up(routing_table, up_destinations, destination, false); - remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false); + remote_dma::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, false); + subkernel::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, false); } } else { if up_links[linkno as usize] { - let reply = aux_transact(io, aux_mutex, linkno, + let reply = aux_transact(io, aux_mutex, ddma_mutex, + subkernel_mutex, routing_table, linkno, &drtioaux::Packet::DestinationStatusRequest { destination: destination }); @@ -288,10 +384,11 @@ pub mod drtio { Ok(drtioaux::Packet::DestinationOkReply) => { destination_set_up(routing_table, up_destinations, destination, true); init_buffer_space(destination as u8, linkno); - remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, true); + remote_dma::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, true); + 
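destination_survey above polls every routed destination and, when one changes state, tells both the remote DMA and subkernel bookkeeping about it: on loss, loaded traces drop back to NotLoaded and running subkernels are marked as having lost communication; on recovery, traces and subkernel binaries are re-uploaded. A rough sketch of the "destination went down" side of that bookkeeping, with simplified types:

use std::collections::BTreeMap;

#[derive(Debug)]
enum TraceState { NotLoaded, Loaded }

#[derive(Debug)]
enum SubkernelState { NotLoaded, Running, Finished { comm_lost: bool } }

// id -> (destination, state), standing in for the firmware's trace and subkernel maps.
fn destination_down(
    traces: &mut BTreeMap<u32, (u8, TraceState)>,
    subkernels: &mut BTreeMap<u32, (u8, SubkernelState)>,
    destination: u8,
) {
    for (dest, state) in traces.values_mut() {
        if *dest == destination {
            *state = TraceState::NotLoaded; // must be re-uploaded once the destination is back
        }
    }
    for (dest, state) in subkernels.values_mut() {
        if *dest == destination {
            let next = match state {
                SubkernelState::Running => SubkernelState::Finished { comm_lost: true },
                _ => SubkernelState::NotLoaded,
            };
            *state = next;
        }
    }
}

fn main() {
    let mut traces = BTreeMap::from([(1u32, (3u8, TraceState::Loaded))]);
    let mut subkernels = BTreeMap::from([(1u32, (3u8, SubkernelState::Running))]);
    destination_down(&mut traces, &mut subkernels, 3);
    println!("{:?}\n{:?}", traces, subkernels);
}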
subkernel::destination_changed(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, destination, true); }, Ok(packet) => error!("[DEST#{}] received unexpected aux packet: {:?}", destination, packet), - Err(e) => error!("[DEST#{}] communication failed ({})", destination, e) + Err(e) => error!("[DEST#{}] communication failed ({:?})", destination, e) } } } @@ -302,7 +399,7 @@ pub mod drtio { pub fn link_thread(io: Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc>, - ddma_mutex: &Mutex) { + ddma_mutex: &Mutex, subkernel_mutex: &Mutex) { let mut up_links = [false; csr::DRTIO.len()]; loop { for linkno in 0..csr::DRTIO.len() { @@ -310,7 +407,7 @@ pub mod drtio { if up_links[linkno as usize] { /* link was previously up */ if link_rx_up(linkno) { - process_unsolicited_aux(&io, aux_mutex, linkno, ddma_mutex); + process_unsolicited_aux(&io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno); process_local_errors(linkno); } else { info!("[LINK#{}] link is down", linkno); @@ -325,13 +422,13 @@ pub mod drtio { info!("[LINK#{}] remote replied after {} packets", linkno, ping_count); up_links[linkno as usize] = true; if let Err(e) = sync_tsc(&io, aux_mutex, linkno) { - error!("[LINK#{}] failed to sync TSC ({})", linkno, e); + error!("[LINK#{}] failed to sync TSC ({:?})", linkno, e); } if let Err(e) = load_routing_table(&io, aux_mutex, linkno, routing_table) { - error!("[LINK#{}] failed to load routing table ({})", linkno, e); + error!("[LINK#{}] failed to load routing table ({:?})", linkno, e); } if let Err(e) = set_rank(&io, aux_mutex, linkno, 1) { - error!("[LINK#{}] failed to set rank ({})", linkno, e); + error!("[LINK#{}] failed to set rank ({:?})", linkno, e); } info!("[LINK#{}] link initialization completed", linkno); } else { @@ -340,12 +437,12 @@ pub mod drtio { } } } - destination_survey(&io, aux_mutex, routing_table, &up_links, up_destinations, ddma_mutex); + destination_survey(&io, aux_mutex, routing_table, &up_links, up_destinations, ddma_mutex, subkernel_mutex); io.sleep(200).unwrap(); } } - pub fn reset(io: &Io, aux_mutex: &Mutex) { + pub fn reset(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable) { for linkno in 0..csr::DRTIO.len() { unsafe { (csr::DRTIO[linkno].reset_write)(1); @@ -361,95 +458,98 @@ pub mod drtio { for linkno in 0..csr::DRTIO.len() { let linkno = linkno as u8; if link_rx_up(linkno) { - let reply = aux_transact(io, aux_mutex, linkno, + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, &drtioaux::Packet::ResetRequest); match reply { Ok(drtioaux::Packet::ResetAck) => (), Ok(_) => error!("[LINK#{}] reset failed, received unexpected aux packet", linkno), - Err(e) => error!("[LINK#{}] reset failed, aux packet error ({})", linkno, e) + Err(e) => error!("[LINK#{}] reset failed, aux packet error ({:?})", linkno, e) } } } } - pub fn ddma_upload_trace(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, - id: u32, destination: u8, trace: &Vec) -> Result<(), &'static str> { - let linkno = routing_table.0[destination as usize][0] - 1; - let mut i = 0; - while i < trace.len() { - let mut trace_slice: [u8; DMA_TRACE_MAX_SIZE] = [0; DMA_TRACE_MAX_SIZE]; - let len: usize = if i + DMA_TRACE_MAX_SIZE < trace.len() { DMA_TRACE_MAX_SIZE } else { trace.len() - i } as usize; - let last = i + len == trace.len(); - trace_slice[..len].clone_from_slice(&trace[i..i+len]); - i += len; - let reply = aux_transact(io, 
aux_mutex, linkno, - &drtioaux::Packet::DmaAddTraceRequest { - id: id, destination: destination, last: last, length: len as u16, trace: trace_slice}); - match reply { - Ok(drtioaux::Packet::DmaAddTraceReply { succeeded: true }) => (), - Ok(drtioaux::Packet::DmaAddTraceReply { succeeded: false }) => { - return Err("error adding trace on satellite"); }, - Ok(_) => { return Err("adding DMA trace failed, unexpected aux packet"); }, - Err(_) => { return Err("adding DMA trace failed, aux error"); } + fn partition_data(data: &[u8], send_f: F) -> Result<(), Error> + where F: Fn(&[u8; MASTER_PAYLOAD_MAX_SIZE], PayloadStatus, usize) -> Result<(), Error> { + let mut i = 0; + while i < data.len() { + let mut slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + let len: usize = if i + MASTER_PAYLOAD_MAX_SIZE < data.len() { MASTER_PAYLOAD_MAX_SIZE } else { data.len() - i } as usize; + let first = i == 0; + let last = i + len == data.len(); + let status = PayloadStatus::from_status(first, last); + slice[..len].clone_from_slice(&data[i..i+len]); + i += len; + send_f(&slice, status, len)?; } + Ok(()) } - Ok(()) + + pub fn ddma_upload_trace(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, trace: &[u8] + ) -> Result<(), Error> { + let linkno = routing_table.0[destination as usize][0] - 1; + partition_data(trace, |slice, status, len: usize| { + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::DmaAddTraceRequest { + id: id, source: 0, destination: destination, status: status, length: len as u16, trace: *slice})?; + match reply { + drtioaux::Packet::DmaAddTraceReply { destination: 0, succeeded: true, .. } => Ok(()), + drtioaux::Packet::DmaAddTraceReply { destination: 0, succeeded: false, .. 
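The ``partition_data`` helper introduced here simply walks the payload in fixed-size windows and tags each slice with a first/last status so the receiving side can reassemble it. A minimal standalone sketch of the same chunking idea follows; the 512-byte payload size and the simplified ``PayloadStatus`` enum are illustrative assumptions, not the actual ``drtioaux_proto`` definitions::

    // Illustrative sketch of the chunking behind `partition_data`; PAYLOAD_MAX
    // and this PayloadStatus are stand-ins for the real protocol definitions.
    const PAYLOAD_MAX: usize = 512;

    #[derive(Debug, Clone, Copy)]
    enum PayloadStatus {
        Middle,
        First,
        Last,
        FirstAndLast,
    }

    impl PayloadStatus {
        fn from_flags(first: bool, last: bool) -> Self {
            match (first, last) {
                (true, true) => PayloadStatus::FirstAndLast,
                (true, false) => PayloadStatus::First,
                (false, true) => PayloadStatus::Last,
                (false, false) => PayloadStatus::Middle,
            }
        }
    }

    /// Split `data` into at most PAYLOAD_MAX-byte slices and pass each slice,
    /// tagged with its position in the stream, to `send`.
    fn partition_data<E>(
        data: &[u8],
        mut send: impl FnMut(&[u8], PayloadStatus) -> Result<(), E>,
    ) -> Result<(), E> {
        let mut i = 0;
        while i < data.len() {
            let len = PAYLOAD_MAX.min(data.len() - i);
            let status = PayloadStatus::from_flags(i == 0, i + len == data.len());
            send(&data[i..i + len], status)?;
            i += len;
        }
        Ok(())
    }

    fn main() {
        let trace = vec![0u8; 1200]; // e.g. a DMA trace to be uploaded
        let mut n = 0;
        partition_data::<()>(&trace, |slice, status| {
            n += 1;
            println!("slice {}: {} bytes, {:?}", n, slice.len(), status);
            Ok(())
        })
        .unwrap();
    }

The first/last tagging is what lets the receiving side detect a retransmitted first slice and restart reassembly cleanly.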
} => Err(Error::DmaAddTraceFail(destination)), + packet => Err(Error::UnexpectedPacket(packet)), + } + }) } - pub fn ddma_send_erase(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, - id: u32, destination: u8) -> Result<(), &'static str> { + pub fn ddma_send_erase(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8) -> Result<(), Error> { let linkno = routing_table.0[destination as usize][0] - 1; - let reply = aux_transact(io, aux_mutex, linkno, - &drtioaux::Packet::DmaRemoveTraceRequest { id: id, destination: destination }); + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::DmaRemoveTraceRequest { id: id, source: 0, destination: destination })?; match reply { - Ok(drtioaux::Packet::DmaRemoveTraceReply { succeeded: true }) => Ok(()), - Ok(drtioaux::Packet::DmaRemoveTraceReply { succeeded: false }) => Err("satellite DMA erase error"), - Ok(_) => Err("erasing trace failed, unexpected aux packet"), - Err(_) => Err("erasing trace failed, aux error") + drtioaux::Packet::DmaRemoveTraceReply { destination: 0, succeeded: true } => Ok(()), + drtioaux::Packet::DmaRemoveTraceReply { destination: 0, succeeded: false } => Err(Error::DmaEraseFail(destination)), + packet => Err(Error::UnexpectedPacket(packet)), } } - pub fn ddma_send_playback(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, - id: u32, destination: u8, timestamp: u64) -> Result<(), &'static str> { + pub fn ddma_send_playback(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, timestamp: u64) -> Result<(), Error> { let linkno = routing_table.0[destination as usize][0] - 1; - let reply = aux_transact(io, aux_mutex, linkno, - &drtioaux::Packet::DmaPlaybackRequest{ id: id, destination: destination, timestamp: timestamp }); + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::DmaPlaybackRequest{ id: id, source: 0, destination: destination, timestamp: timestamp })?; match reply { - Ok(drtioaux::Packet::DmaPlaybackReply { succeeded: true }) => return Ok(()), - Ok(drtioaux::Packet::DmaPlaybackReply { succeeded: false }) => - return Err("error on DMA playback request"), - Ok(_) => return Err("received unexpected aux packet during DMA playback"), - Err(_) => return Err("aux error on DMA playback") + drtioaux::Packet::DmaPlaybackReply { destination: 0, succeeded: true } => Ok(()), + drtioaux::Packet::DmaPlaybackReply { destination: 0, succeeded: false } => + Err(Error::DmaPlaybackFail(destination)), + packet => Err(Error::UnexpectedPacket(packet)), } } #[cfg(has_rtio_analyzer)] - fn analyzer_get_data(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, - destination: u8) -> Result { + fn analyzer_get_data(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, destination: u8) -> Result { let linkno = routing_table.0[destination as usize][0] - 1; - let reply = aux_transact(io, aux_mutex, linkno, - &drtioaux::Packet::AnalyzerHeaderRequest { destination: destination }); + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::AnalyzerHeaderRequest { destination: destination })?; let (sent, total, overflow) = match reply { - 
Ok(drtioaux::Packet::AnalyzerHeader { - sent_bytes, total_byte_count, overflow_occurred } - ) => (sent_bytes, total_byte_count, overflow_occurred), - Ok(_) => return Err("received unexpected aux packet during remote analyzer header request"), - Err(e) => return Err(e) + drtioaux::Packet::AnalyzerHeader { sent_bytes, total_byte_count, overflow_occurred } => + (sent_bytes, total_byte_count, overflow_occurred), + packet => return Err(Error::UnexpectedPacket(packet)), }; let mut remote_data: Vec = Vec::new(); if sent > 0 { let mut last_packet = false; while !last_packet { - let reply = aux_transact(io, aux_mutex, linkno, - &drtioaux::Packet::AnalyzerDataRequest { destination: destination }); + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::AnalyzerDataRequest { destination: destination })?; match reply { - Ok(drtioaux::Packet::AnalyzerData { last, length, data }) => { + drtioaux::Packet::AnalyzerData { last, length, data } => { last_packet = last; remote_data.extend(&data[0..length as usize]); }, - Ok(_) => return Err("received unexpected aux packet during remote analyzer data request"), - Err(e) => return Err(e) + packet => return Err(Error::UnexpectedPacket(packet)), } } } @@ -463,18 +563,83 @@ pub mod drtio { } #[cfg(has_rtio_analyzer)] - pub fn analyzer_query(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, - up_destinations: &Urc> - ) -> Result, &'static str> { + pub fn analyzer_query(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc> + ) -> Result, Error> { let mut remote_buffers: Vec = Vec::new(); for i in 1..drtio_routing::DEST_COUNT { if destination_up(up_destinations, i as u8) { - remote_buffers.push(analyzer_get_data(io, aux_mutex, routing_table, i as u8)?); + remote_buffers.push(analyzer_get_data(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, i as u8)?); } } Ok(remote_buffers) } + pub fn subkernel_upload(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, data: &Vec) -> Result<(), Error> { + let linkno = routing_table.0[destination as usize][0] - 1; + partition_data(data, |slice, status, len: usize| { + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::SubkernelAddDataRequest { + id: id, destination: destination, status: status, length: len as u16, data: *slice})?; + match reply { + drtioaux::Packet::SubkernelAddDataReply { succeeded: true } => Ok(()), + drtioaux::Packet::SubkernelAddDataReply { succeeded: false } => + Err(Error::SubkernelAddFail(destination)), + packet => Err(Error::UnexpectedPacket(packet)), + } + }) + } + + pub fn subkernel_load(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, run: bool + ) -> Result<(), Error> { + let linkno = routing_table.0[destination as usize][0] - 1; + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::SubkernelLoadRunRequest{ id: id, source: 0, destination: destination, run: run })?; + match reply { + drtioaux::Packet::SubkernelLoadRunReply { destination: 0, succeeded: true } => Ok(()), + drtioaux::Packet::SubkernelLoadRunReply { destination: 0, succeeded: false } => + Err(Error::SubkernelRunFail(destination)), + packet => 
Err(Error::UnexpectedPacket(packet)), + } + } + + pub fn subkernel_retrieve_exception(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, destination: u8 + ) -> Result, Error> { + let linkno = routing_table.0[destination as usize][0] - 1; + let mut remote_data: Vec = Vec::new(); + loop { + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::SubkernelExceptionRequest { destination: destination })?; + match reply { + drtioaux::Packet::SubkernelException { last, length, data } => { + remote_data.extend(&data[0..length as usize]); + if last { + return Ok(remote_data); + } + }, + packet => return Err(Error::UnexpectedPacket(packet)), + } + } + } + + pub fn subkernel_send_message(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, message: &[u8] + ) -> Result<(), Error> { + let linkno = routing_table.0[destination as usize][0] - 1; + partition_data(message, |slice, status, len: usize| { + let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno, + &drtioaux::Packet::SubkernelMessage { + source: 0, destination: destination, + id: id, status: status, length: len as u16, data: *slice})?; + match reply { + drtioaux::Packet::SubkernelMessageAck { .. } => Ok(()), + packet => Err(Error::UnexpectedPacket(packet)), + } + }) + } } #[cfg(not(has_drtio))] @@ -484,8 +649,8 @@ pub mod drtio { pub fn startup(_io: &Io, _aux_mutex: &Mutex, _routing_table: &Urc>, _up_destinations: &Urc>, - _ddma_mutex: &Mutex) {} - pub fn reset(_io: &Io, _aux_mutex: &Mutex) {} + _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex) {} + pub fn reset(_io: &Io, _aux_mutex: &Mutex, _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, _routing_table: &drtio_routing::RoutingTable) {} } static mut SEEN_ASYNC_ERRORS: u8 = 0; @@ -548,18 +713,19 @@ fn read_device_map() -> DeviceMap { pub fn startup(io: &Io, aux_mutex: &Mutex, routing_table: &Urc>, up_destinations: &Urc>, - ddma_mutex: &Mutex) { + ddma_mutex: &Mutex, subkernel_mutex: &Mutex) { set_device_map(read_device_map()); - drtio::startup(io, aux_mutex, routing_table, up_destinations, ddma_mutex); + drtio::startup(io, aux_mutex, routing_table, up_destinations, ddma_mutex, subkernel_mutex); unsafe { csr::rtio_core::reset_phy_write(1); } io.spawn(4096, async_error_thread); } -pub fn reset(io: &Io, aux_mutex: &Mutex) { +pub fn reset(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + routing_table: &drtio_routing::RoutingTable) { unsafe { csr::rtio_core::reset_write(1); } - drtio::reset(io, aux_mutex) + drtio::reset(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table) } diff --git a/artiq/firmware/runtime/session.rs b/artiq/firmware/runtime/session.rs index 64e015038..d972414bd 100644 --- a/artiq/firmware/runtime/session.rs +++ b/artiq/firmware/runtime/session.rs @@ -1,9 +1,14 @@ use core::{mem, str, cell::{Cell, RefCell}, fmt::Write as FmtWrite}; -use alloc::{vec::Vec, string::String}; +use alloc::{vec::Vec, string::{String, ToString}}; use byteorder::{ByteOrder, NativeEndian}; use cslice::CSlice; +#[cfg(has_drtio)] +use tar_no_std::TarArchiveRef; +use dyld::elf; use io::{Read, Write, Error as IoError}; +#[cfg(has_drtio)] +use io::Cursor; use board_misoc::{ident, cache, config}; use {mailbox, rpc_queue, kernel}; use urc::Urc; @@ -12,6 +17,10 @@ use rtio_clocking; use rtio_dma::Manager as DmaManager; #[cfg(has_drtio)] 
use rtio_dma::remote_dma; +#[cfg(has_drtio)] +use kernel::{subkernel, subkernel::Error as SubkernelError}; +#[cfg(has_drtio)] +use rtio_mgt::drtio; use rtio_mgt::get_async_errors; use cache::Cache; use kern_hwreq; @@ -33,6 +42,20 @@ pub enum Error { ClockFailure, #[fail(display = "protocol error: {}", _0)] Protocol(#[cause] host::Error), + #[fail(display = "subkernel io error")] + SubkernelIoError, + #[cfg(has_drtio)] + #[fail(display = "DDMA error: {}", _0)] + Ddma(#[cause] remote_dma::Error), + #[cfg(has_drtio)] + #[fail(display = "subkernel destination is down")] + DestinationDown, + #[cfg(has_drtio)] + #[fail(display = "subkernel error: {}", _0)] + Subkernel(#[cause] SubkernelError), + #[cfg(has_drtio)] + #[fail(display = "drtio aux error: {}", _0)] + DrtioAux(#[cause] drtio::Error), #[fail(display = "{}", _0)] Unexpected(String), } @@ -43,6 +66,16 @@ impl From> for Error { } } +#[cfg(has_drtio)] +impl From for Error { + fn from(value: drtio::Error) -> Error { + match value { + drtio::Error::SchedError(x) => Error::from(x), + x => Error::DrtioAux(x), + } + } +} + impl From for Error { fn from(value: SchedError) -> Error { Error::Protocol(host::Error::Io(IoError::Other(value))) @@ -55,10 +88,57 @@ impl From> for Error { } } +impl From<&str> for Error { + fn from(value: &str) -> Error { + Error::Unexpected(value.to_string()) + } +} + +impl From> for Error { + fn from(_value: io::Error) -> Error { + Error::SubkernelIoError + } +} + +#[cfg(has_drtio)] +impl From for Error { + fn from(value: SubkernelError) -> Error { + match value { + SubkernelError::SchedError(x) => Error::from(x), + SubkernelError::DrtioError(x) => Error::from(x), + x => Error::Subkernel(x), + } + } +} + +#[cfg(has_drtio)] +impl From for Error { + fn from(value: remote_dma::Error) -> Error { + match value { + remote_dma::Error::SchedError(x) => Error::from(x), + remote_dma::Error::DrtioError(x) => Error::from(x), + x => Error::Ddma(x), + } + } +} + macro_rules! unexpected { ($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*)))); } +#[cfg(has_drtio)] +macro_rules! 
propagate_subkernel_exception { + ( $exception:ident, $stream:ident ) => { + error!("Exception in subkernel"); + match $stream { + None => return Ok(true), + Some(ref mut $stream) => { + $stream.write_all($exception)?; + } + } + } +} + // Persistent state #[derive(Debug)] struct Congress { @@ -131,6 +211,8 @@ fn host_read(reader: &mut R) -> Result> let request = host::Request::read_from(reader)?; match &request { &host::Request::LoadKernel(_) => debug!("comm<-host LoadLibrary(...)"), + &host::Request::UploadSubkernel { id, destination, kernel: _} => debug!( + "comm<-host UploadSubkernel(id: {}, destination: {}, ...)", id, destination), _ => debug!("comm<-host {:?}", request) } Ok(request) @@ -161,12 +243,13 @@ pub fn kern_send(io: &Io, request: &kern::Message) -> Result<(), Error(io: &Io, f: F) -> Result> where F: FnOnce(&kern::Message) -> Result> { - io.until(|| mailbox::receive() != 0)?; - if !kernel::validate(mailbox::receive()) { - return Err(Error::InvalidPointer(mailbox::receive())) + let mut msg_ptr = 0; + io.until(|| { msg_ptr = mailbox::receive(); msg_ptr != 0 })?; + if !kernel::validate(msg_ptr) { + return Err(Error::InvalidPointer(msg_ptr)) } - f(unsafe { &*(mailbox::receive() as *const kern::Message) }) + f(unsafe { &*(msg_ptr as *const kern::Message) }) } fn kern_recv_dotrace(reply: &kern::Message) { @@ -233,8 +316,65 @@ fn kern_run(session: &mut Session) -> Result<(), Error> { kern_acknowledge() } -fn process_host_message(io: &Io, - stream: &mut TcpStream, + +fn process_flash_kernel(io: &Io, _aux_mutex: &Mutex, _subkernel_mutex: &Mutex, _ddma_mutex: &Mutex, + _routing_table: &drtio_routing::RoutingTable, + _up_destinations: &Urc>, + session: &mut Session, kernel: &[u8] +) -> Result<(), Error> { + // handle ELF and TAR files + if kernel[0] == elf::ELFMAG0 && kernel[1] == elf::ELFMAG1 && + kernel[2] == elf::ELFMAG2 && kernel[3] == elf::ELFMAG3 { + // assume ELF file, proceed as before + unsafe { + // make a copy as kernel CPU cannot read SPI directly + kern_load(io, session, Vec::from(kernel).as_ref()) + } + } else { + #[cfg(has_drtio)] + { + let archive = TarArchiveRef::new(kernel); + let entries = archive.entries(); + let mut main_lib: Option<&[u8]> = None; + for entry in entries { + if entry.filename().as_str() == "main.elf" { + main_lib = Some(entry.data()); + } else { + // subkernel filename must be in format: + // " .elf" + let filename = entry.filename(); + let mut iter = filename.as_str().split_whitespace(); + let sid: u32 = iter.next().unwrap() + .parse().unwrap(); + let dest: u8 = iter.next().unwrap() + .strip_suffix(".elf").unwrap() + .parse().unwrap(); + let up = { + let up_destinations = _up_destinations.borrow(); + up_destinations[dest as usize] + }; + if up { + let subkernel_lib = entry.data().to_vec(); + subkernel::add_subkernel(io, _subkernel_mutex, sid, dest, subkernel_lib)?; + subkernel::upload(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, sid)?; + } else { + return Err(Error::DestinationDown); + } + } + } + unsafe { + kern_load(io, session, Vec::from(main_lib.unwrap()).as_ref()) + } + } + #[cfg(not(has_drtio))] + { + unexpected!("multi-kernel libraries are not supported in standalone systems") + } + } +} + +fn process_host_message(io: &Io, _aux_mutex: &Mutex, _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, + _routing_table: &drtio_routing::RoutingTable, stream: &mut TcpStream, session: &mut Session) -> Result<(), Error> { match host_read(stream)? 
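The flash-kernel path above expects bundled subkernels to be stored in the TAR archive under names of the form ``"<subkernel id> <destination>.elf"``. A hedged sketch of that naming convention, independent of the firmware types and with simplified error handling::

    // Illustrative sketch: parse a bundled subkernel filename of the form
    // "<subkernel id> <destination>.elf"; error handling is simplified.
    fn parse_subkernel_name(filename: &str) -> Option<(u32, u8)> {
        let mut parts = filename.split_whitespace();
        let id: u32 = parts.next()?.parse().ok()?;
        let destination: u8 = parts.next()?.strip_suffix(".elf")?.parse().ok()?;
        Some((id, destination))
    }

    fn main() {
        assert_eq!(parse_subkernel_name("1 3.elf"), Some((1, 3)));
        assert_eq!(parse_subkernel_name("main.elf"), None);
    }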
{ host::Request::SystemInfo => { @@ -245,7 +385,7 @@ fn process_host_message(io: &Io, session.congress.finished_cleanly.set(true) } - host::Request::LoadKernel(kernel) => + host::Request::LoadKernel(kernel) => { match unsafe { kern_load(io, session, &kernel) } { Ok(()) => host_write(stream, host::Reply::LoadCompleted)?, Err(error) => { @@ -254,7 +394,8 @@ fn process_host_message(io: &Io, host_write(stream, host::Reply::LoadFailed(&description))?; kern_acknowledge()?; } - }, + } + }, host::Request::RunKernel => match kern_run(session) { Ok(()) => (), @@ -323,6 +464,24 @@ fn process_host_message(io: &Io, session.kernel_state = KernelState::Running } + + host::Request::UploadSubkernel { id: _id, destination: _dest, kernel: _kernel } => { + #[cfg(has_drtio)] + { + subkernel::add_subkernel(io, _subkernel_mutex, _id, _dest, _kernel)?; + match subkernel::upload(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, _id) { + Ok(_) => host_write(stream, host::Reply::LoadCompleted)?, + Err(error) => { + subkernel::clear_subkernels(io, _subkernel_mutex)?; + let mut description = String::new(); + write!(&mut description, "{}", error).unwrap(); + host_write(stream, host::Reply::LoadFailed(&description))? + } + } + } + #[cfg(not(has_drtio))] + host_write(stream, host::Reply::LoadFailed("No DRTIO on this system, subkernels are not supported"))? + } } Ok(()) @@ -331,7 +490,7 @@ fn process_host_message(io: &Io, fn process_kern_message(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc>, - ddma_mutex: &Mutex, mut stream: Option<&mut TcpStream>, + ddma_mutex: &Mutex, subkernel_mutex: &Mutex, mut stream: Option<&mut TcpStream>, session: &mut Session) -> Result> { kern_recv_notrace(io, |request| { match (request, session.kernel_state) { @@ -349,7 +508,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, kern_recv_dotrace(request); - if kern_hwreq::process_kern_hwreq(io, aux_mutex, routing_table, up_destinations, request)? { + if kern_hwreq::process_kern_hwreq(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, up_destinations, request)? 
{ return Ok(false) } @@ -373,7 +532,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, if let Some(_id) = session.congress.dma_manager.record_start(name) { // replace the record #[cfg(has_drtio)] - remote_dma::erase(io, aux_mutex, ddma_mutex, routing_table, _id); + remote_dma::erase(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, _id)?; } kern_acknowledge() } @@ -382,10 +541,10 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, kern_acknowledge() } &kern::DmaRecordStop { duration, enable_ddma } => { - let _id = session.congress.dma_manager.record_stop(duration, enable_ddma, io, ddma_mutex); + let _id = session.congress.dma_manager.record_stop(duration, enable_ddma, io, ddma_mutex)?; #[cfg(has_drtio)] if enable_ddma { - remote_dma::upload_traces(io, aux_mutex, ddma_mutex, routing_table, _id); + remote_dma::upload_traces(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, _id)?; } cache::flush_l2_cache(); kern_acknowledge() @@ -393,7 +552,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, &kern::DmaEraseRequest { name } => { #[cfg(has_drtio)] if let Some(id) = session.congress.dma_manager.get_id(name) { - remote_dma::erase(io, aux_mutex, ddma_mutex, routing_table, *id); + remote_dma::erase(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, *id)?; } session.congress.dma_manager.erase(name); kern_acknowledge() @@ -402,7 +561,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, session.congress.dma_manager.with_trace(name, |trace, duration| { #[cfg(has_drtio)] let uses_ddma = match trace { - Some(trace) => remote_dma::has_remote_traces(io, aux_mutex, trace.as_ptr() as u32), + Some(trace) => remote_dma::has_remote_traces(io, aux_mutex, trace.as_ptr() as u32)?, None => false }; #[cfg(not(has_drtio))] @@ -416,7 +575,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, } &kern::DmaStartRemoteRequest { id: _id, timestamp: _timestamp } => { #[cfg(has_drtio)] - remote_dma::playback(io, aux_mutex, ddma_mutex, routing_table, _id as u32, _timestamp as u64); + remote_dma::playback(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, _id as u32, _timestamp as u64)?; kern_acknowledge() } &kern::DmaAwaitRemoteRequest { id: _id } => { @@ -441,7 +600,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, None => unexpected!("unexpected RPC in flash kernel"), Some(ref mut stream) => { host_write(stream, host::Reply::RpcRequest { async: async })?; - rpc::send_args(stream, service, tag, data)?; + rpc::send_args(stream, service, tag, data, true)?; if !async { session.kernel_state = KernelState::RpcWait } @@ -474,6 +633,8 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, unsafe { kernel::stop() } session.kernel_state = KernelState::Absent; unsafe { session.congress.cache.unborrow() } + #[cfg(has_drtio)] + subkernel::clear_subkernels(io, subkernel_mutex)?; match stream { None => return Ok(true), @@ -491,6 +652,8 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, unsafe { kernel::stop() } session.kernel_state = KernelState::Absent; unsafe { session.congress.cache.unborrow() } + #[cfg(has_drtio)] + subkernel::clear_subkernels(io, subkernel_mutex)?; match stream { None => { @@ -510,6 +673,113 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex, } } } + #[cfg(has_drtio)] + &kern::SubkernelLoadRunRequest { id, destination: _, run } => { + let succeeded = match subkernel::load( + io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, run) { + Ok(()) => true, + Err(e) => { error!("Error loading subkernel: {}", e); false } + }; + kern_send(io, 
&kern::SubkernelLoadRunReply { succeeded: succeeded }) + } + #[cfg(has_drtio)] + &kern::SubkernelAwaitFinishRequest{ id, timeout } => { + let res = subkernel::await_finish(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, + id, timeout); + let status = match res { + Ok(ref res) => { + if res.comm_lost { + kern::SubkernelStatus::CommLost + } else if let Some(exception) = &res.exception { + propagate_subkernel_exception!(exception, stream); + // will not be called after exception is served + kern::SubkernelStatus::OtherError + } else { + kern::SubkernelStatus::NoError + } + }, + Err(SubkernelError::Timeout) => kern::SubkernelStatus::Timeout, + Err(SubkernelError::IncorrectState) => kern::SubkernelStatus::IncorrectState, + Err(_) => kern::SubkernelStatus::OtherError + }; + kern_send(io, &kern::SubkernelAwaitFinishReply { status: status }) + } + #[cfg(has_drtio)] + &kern::SubkernelMsgSend { id, destination, count, tag, data } => { + subkernel::message_send(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, id, destination, count, tag, data)?; + kern_acknowledge() + } + #[cfg(has_drtio)] + &kern::SubkernelMsgRecvRequest { id, timeout, tags } => { + let message_received = subkernel::message_await(io, subkernel_mutex, id as u32, timeout); + let (status, count) = match message_received { + Ok(ref message) => (kern::SubkernelStatus::NoError, message.count), + Err(SubkernelError::Timeout) => (kern::SubkernelStatus::Timeout, 0), + Err(SubkernelError::IncorrectState) => (kern::SubkernelStatus::IncorrectState, 0), + Err(SubkernelError::SubkernelFinished) => { + let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex, + routing_table, id as u32)?; + if res.comm_lost { + (kern::SubkernelStatus::CommLost, 0) + } else if let Some(exception) = &res.exception { + propagate_subkernel_exception!(exception, stream); + (kern::SubkernelStatus::OtherError, 0) + } else { + (kern::SubkernelStatus::OtherError, 0) + } + } + Err(_) => (kern::SubkernelStatus::OtherError, 0) + }; + kern_send(io, &kern::SubkernelMsgRecvReply { status: status, count: count})?; + if let Ok(message) = message_received { + // receive code almost identical to RPC recv, except we are not reading from a stream + let mut reader = Cursor::new(message.data); + let mut current_tags = tags; + let mut i = 0; + loop { + // kernel has to consume all arguments in the whole message + let slot = kern_recv(io, |reply| { + match reply { + &kern::RpcRecvRequest(slot) => Ok(slot), + other => unexpected!( + "expected root value slot from kernel CPU, not {:?}", other) + } + })?; + let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error> { + if size == 0 { + return Ok(0 as *mut ()) + } + kern_send(io, &kern::RpcRecvReply(Ok(size)))?; + Ok(kern_recv(io, |reply| { + match reply { + &kern::RpcRecvRequest(slot) => Ok(slot), + other => unexpected!( + "expected nested value slot from kernel CPU, not {:?}", other) + } + })?) 
+ }); + match res { + Ok(new_tags) => { + kern_send(io, &kern::RpcRecvReply(Ok(0)))?; + i += 1; + if i < message.count { + // update the tag for next read + current_tags = new_tags; + } else { + // should be done by then + break; + } + }, + Err(_) => unexpected!("expected valid subkernel message data") + }; + } + Ok(()) + } else { + // if timed out, no data has been received, exception should be raised by kernel + Ok(()) + } + }, + request => unexpected!("unexpected request {:?} from kernel CPU", request) }.and(Ok(false)) }) @@ -530,13 +800,17 @@ fn process_kern_queued_rpc(stream: &mut TcpStream, fn host_kernel_worker(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc>, - ddma_mutex: &Mutex, stream: &mut TcpStream, + ddma_mutex: &Mutex, subkernel_mutex: &Mutex, + stream: &mut TcpStream, congress: &mut Congress) -> Result<(), Error> { let mut session = Session::new(congress); + #[cfg(has_drtio)] + subkernel::clear_subkernels(&io, &subkernel_mutex)?; loop { if stream.can_recv() { - process_host_message(io, stream, &mut session)? + process_host_message(io, aux_mutex, ddma_mutex, subkernel_mutex, + routing_table, stream, &mut session)? } else if !stream.may_recv() { return Ok(()) } @@ -548,7 +822,7 @@ fn host_kernel_worker(io: &Io, aux_mutex: &Mutex, if mailbox::receive() != 0 { process_kern_message(io, aux_mutex, routing_table, up_destinations, - ddma_mutex, + ddma_mutex, subkernel_mutex, Some(stream), &mut session)?; } @@ -566,17 +840,23 @@ fn host_kernel_worker(io: &Io, aux_mutex: &Mutex, fn flash_kernel_worker(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, up_destinations: &Urc>, - ddma_mutex: &Mutex, congress: &mut Congress, + ddma_mutex: &Mutex, subkernel_mutex: &Mutex, congress: &mut Congress, config_key: &str) -> Result<(), Error> { let mut session = Session::new(congress); config::read(config_key, |result| { match result { - Ok(kernel) => unsafe { - // kernel CPU cannot access the SPI flash address space directly, - // so make a copy. - kern_load(io, &mut session, Vec::from(kernel).as_ref()) - }, + Ok(kernel) => { + // process .ELF or .TAR kernels + let res = process_flash_kernel(io, aux_mutex, subkernel_mutex, ddma_mutex, routing_table, up_destinations, &mut session, kernel); + #[cfg(has_drtio)] + match res { + // wait to establish the DRTIO connection + Err(Error::DestinationDown) => io.sleep(500)?, + _ => () + } + res + } _ => Err(Error::KernelNotFound) } })?; @@ -588,7 +868,7 @@ fn flash_kernel_worker(io: &Io, aux_mutex: &Mutex, } if mailbox::receive() != 0 { - if process_kern_message(io, aux_mutex, routing_table, up_destinations, ddma_mutex, None, &mut session)? { + if process_kern_message(io, aux_mutex, routing_table, up_destinations, ddma_mutex, subkernel_mutex, None, &mut session)? 
{ return Ok(()) } } @@ -613,13 +893,13 @@ fn respawn(io: &Io, handle: &mut Option, f: F) } } - *handle = Some(io.spawn(16384, f)) + *handle = Some(io.spawn(32768, f)) } pub fn thread(io: Io, aux_mutex: &Mutex, routing_table: &Urc>, up_destinations: &Urc>, - ddma_mutex: &Mutex) { + ddma_mutex: &Mutex, subkernel_mutex: &Mutex) { let listener = TcpListener::new(&io, 65535); listener.listen(1381).expect("session: cannot listen"); info!("accepting network sessions"); @@ -628,9 +908,11 @@ pub fn thread(io: Io, aux_mutex: &Mutex, let mut kernel_thread = None; { + let routing_table = routing_table.borrow(); let mut congress = congress.borrow_mut(); info!("running startup kernel"); - match flash_kernel_worker(&io, &aux_mutex, &routing_table.borrow(), &up_destinations, ddma_mutex, &mut congress, "startup_kernel") { + match flash_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations, + ddma_mutex, subkernel_mutex, &mut congress, "startup_kernel") { Ok(()) => info!("startup kernel finished"), Err(Error::KernelNotFound) => @@ -671,24 +953,37 @@ pub fn thread(io: Io, aux_mutex: &Mutex, let up_destinations = up_destinations.clone(); let congress = congress.clone(); let ddma_mutex = ddma_mutex.clone(); + let subkernel_mutex = subkernel_mutex.clone(); let stream = stream.into_handle(); respawn(&io, &mut kernel_thread, move |io| { let routing_table = routing_table.borrow(); let mut congress = congress.borrow_mut(); let mut stream = TcpStream::from_handle(&io, stream); - match host_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex, &mut stream, &mut *congress) { + match host_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations, + &ddma_mutex, &subkernel_mutex, &mut stream, &mut *congress) { Ok(()) => (), Err(Error::Protocol(host::Error::Io(IoError::UnexpectedEnd))) => info!("connection closed"), Err(Error::Protocol(host::Error::Io( - IoError::Other(SchedError::Interrupted)))) => - info!("kernel interrupted"), + IoError::Other(SchedError::Interrupted)))) => { + info!("kernel interrupted"); + #[cfg(has_drtio)] + drtio::clear_buffers(&io, &aux_mutex); + } Err(err) => { congress.finished_cleanly.set(false); error!("session aborted: {}", err); + #[cfg(has_drtio)] + drtio::clear_buffers(&io, &aux_mutex); } } - stream.close().expect("session: close socket"); + loop { + match stream.close() { + Ok(_) => break, + Err(SchedError::Interrupted) => (), + Err(e) => panic!("session: close socket: {:?}", e) + }; + } }); } @@ -700,22 +995,32 @@ pub fn thread(io: Io, aux_mutex: &Mutex, let up_destinations = up_destinations.clone(); let congress = congress.clone(); let ddma_mutex = ddma_mutex.clone(); + let subkernel_mutex = subkernel_mutex.clone(); respawn(&io, &mut kernel_thread, move |io| { let routing_table = routing_table.borrow(); let mut congress = congress.borrow_mut(); - match flash_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex, &mut *congress, "idle_kernel") { + match flash_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations, + &ddma_mutex, &subkernel_mutex, &mut *congress, "idle_kernel") { Ok(()) => info!("idle kernel finished, standing by"), Err(Error::Protocol(host::Error::Io( - IoError::Other(SchedError::Interrupted)))) => - info!("idle kernel interrupted"), + IoError::Other(SchedError::Interrupted)))) => { + info!("idle kernel interrupted"); + // clear state for regular kernel + #[cfg(has_drtio)] + drtio::clear_buffers(&io, &aux_mutex); + } Err(Error::KernelNotFound) => { info!("no idle kernel found"); while 
io.relinquish().is_ok() {} } - Err(err) => - error!("idle kernel aborted: {}", err) + Err(err) => { + error!("idle kernel aborted: {}", err); + #[cfg(has_drtio)] + drtio::clear_buffers(&io, &aux_mutex); + } } + }) } diff --git a/artiq/firmware/satman/Cargo.toml b/artiq/firmware/satman/Cargo.toml index 20dec311f..f8016f576 100644 --- a/artiq/firmware/satman/Cargo.toml +++ b/artiq/firmware/satman/Cargo.toml @@ -14,8 +14,11 @@ build_misoc = { path = "../libbuild_misoc" } [dependencies] log = { version = "0.4", default-features = false } +io = { path = "../libio", features = ["byteorder", "alloc"] } +cslice = { version = "0.3" } board_misoc = { path = "../libboard_misoc", features = ["uart_console", "log"] } board_artiq = { path = "../libboard_artiq", features = ["alloc"] } alloc_list = { path = "../liballoc_list" } riscv = { version = "0.6.0", features = ["inline-asm"] } proto_artiq = { path = "../libproto_artiq", features = ["log", "alloc"] } +eh = { path = "../libeh" } \ No newline at end of file diff --git a/artiq/firmware/satman/Makefile b/artiq/firmware/satman/Makefile index 41f1f961a..fdd867288 100644 --- a/artiq/firmware/satman/Makefile +++ b/artiq/firmware/satman/Makefile @@ -1,9 +1,14 @@ include ../include/generated/variables.mak include $(MISOC_DIRECTORY)/software/common.mak -LDFLAGS += -L../libbase +CFLAGS += \ + -I$(LIBUNWIND_DIRECTORY) \ + -I$(LIBUNWIND_DIRECTORY)/../unwinder/include -RUSTFLAGS += -Cpanic=abort +LDFLAGS += \ + -L../libunwind + +RUSTFLAGS += -Cpanic=unwind all:: satman.bin satman.fbi @@ -13,8 +18,12 @@ $(RUSTOUT)/libsatman.a: --manifest-path $(SATMAN_DIRECTORY)/Cargo.toml \ --target $(SATMAN_DIRECTORY)/../$(CARGO_TRIPLE).json -satman.elf: $(RUSTOUT)/libsatman.a - $(link) -T $(SATMAN_DIRECTORY)/satman.ld +satman.elf: $(RUSTOUT)/libsatman.a ksupport_data.o + $(link) -T $(SATMAN_DIRECTORY)/../firmware.ld \ + -lunwind-vexriscv-bare -m elf32lriscv + +ksupport_data.o: ../ksupport/ksupport.elf + $(LD) -r -m elf32lriscv -b binary -o $@ $< %.bin: %.elf $(objcopy) -O binary diff --git a/artiq/firmware/satman/analyzer.rs b/artiq/firmware/satman/analyzer.rs index e2a43ba22..72f9e8c2b 100644 --- a/artiq/firmware/satman/analyzer.rs +++ b/artiq/firmware/satman/analyzer.rs @@ -1,6 +1,6 @@ use core::cmp::min; use board_misoc::{csr, cache}; -use proto_artiq::drtioaux_proto::ANALYZER_MAX_SIZE; +use proto_artiq::drtioaux_proto::SAT_PAYLOAD_MAX_SIZE; const BUFFER_SIZE: usize = 512 * 1024; @@ -86,10 +86,10 @@ impl Analyzer { } } - pub fn get_data(&mut self, data_slice: &mut [u8; ANALYZER_MAX_SIZE]) -> AnalyzerSliceMeta { + pub fn get_data(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> AnalyzerSliceMeta { let data = unsafe { &BUFFER.data[..] 
}; let i = (self.data_pointer + self.sent_bytes) % BUFFER_SIZE; - let len = min(ANALYZER_MAX_SIZE, self.data_len - self.sent_bytes); + let len = min(SAT_PAYLOAD_MAX_SIZE, self.data_len - self.sent_bytes); let last = self.sent_bytes + len == self.data_len; if i + len >= BUFFER_SIZE { diff --git a/artiq/firmware/satman/cache.rs b/artiq/firmware/satman/cache.rs new file mode 100644 index 000000000..fa73364cb --- /dev/null +++ b/artiq/firmware/satman/cache.rs @@ -0,0 +1,83 @@ +use alloc::{vec, vec::Vec, string::String, collections::btree_map::BTreeMap}; +use cslice::{CSlice, AsCSlice}; +use core::mem::transmute; + +struct Entry { + data: Vec, + slice: CSlice<'static, i32>, + borrowed: bool +} + +impl core::fmt::Debug for Entry { + fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result { + f.debug_struct("Entry") + .field("data", &self.data) + .field("borrowed", &self.borrowed) + .finish() + } +} + +pub struct Cache { + entries: BTreeMap, + empty: CSlice<'static, i32>, +} + +impl core::fmt::Debug for Cache { + fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result { + f.debug_struct("Cache") + .field("entries", &self.entries) + .finish() + } +} + +impl Cache { + pub fn new() -> Cache { + let empty_vec = vec![]; + let empty = unsafe { + transmute::, CSlice<'static, i32>>(empty_vec.as_c_slice()) + }; + Cache { entries: BTreeMap::new(), empty } + } + + pub fn get(&mut self, key: &str) -> *const CSlice<'static, i32> { + match self.entries.get_mut(key) { + None => &self.empty, + Some(ref mut entry) => { + entry.borrowed = true; + &entry.slice + } + } + } + + pub fn put(&mut self, key: &str, data: &[i32]) -> Result<(), ()> { + match self.entries.get_mut(key) { + None => (), + Some(ref mut entry) => { + if entry.borrowed { return Err(()) } + entry.data = Vec::from(data); + unsafe { + entry.slice = transmute::, CSlice<'static, i32>>( + entry.data.as_c_slice()); + } + return Ok(()) + } + } + + let data = Vec::from(data); + let slice = unsafe { + transmute::, CSlice<'static, i32>>(data.as_c_slice()) + }; + self.entries.insert(String::from(key), Entry { + data, + slice, + borrowed: false + }); + Ok(()) + } + + pub unsafe fn unborrow(&mut self) { + for (_key, entry) in self.entries.iter_mut() { + entry.borrowed = false; + } + } +} diff --git a/artiq/firmware/satman/dma.rs b/artiq/firmware/satman/dma.rs index 133bfd3c0..97aea721d 100644 --- a/artiq/firmware/satman/dma.rs +++ b/artiq/firmware/satman/dma.rs @@ -1,39 +1,190 @@ +use alloc::{vec::Vec, collections::btree_map::BTreeMap, string::String}; +use core::mem; +use board_artiq::{drtioaux, drtio_routing::RoutingTable}; use board_misoc::{csr, cache::flush_l2_cache}; -use alloc::{vec::Vec, collections::btree_map::BTreeMap}; +use proto_artiq::drtioaux_proto::PayloadStatus; +use routing::{Router, Sliceable}; +use kernel::Manager as KernelManager; +use ::{cricon_select, cricon_read, RtioMaster, MASTER_PAYLOAD_MAX_SIZE}; const ALIGNMENT: usize = 64; -#[derive(Debug, PartialEq)] +#[derive(PartialEq)] enum ManagerState { Idle, Playback } pub struct RtioStatus { + pub source: u8, pub id: u32, - pub error: u8, + pub error: u8, pub channel: u32, pub timestamp: u64 } +#[derive(Debug)] pub enum Error { IdNotFound, PlaybackInProgress, - EntryNotComplete + EntryNotComplete, + MasterDmaFound, + UploadFail, } -#[derive(Debug)] struct Entry { trace: Vec, padding_len: usize, - complete: bool + complete: bool, + duration: u64, // relevant for locally ran DMA +} + +impl Entry { + pub fn from_vec(data: Vec, duration: u64) -> Entry { + let mut entry = 
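The new satellite ``cache.rs`` keeps one borrow flag per key: ``get`` marks an entry as borrowed by the kernel, ``put`` refuses to overwrite a borrowed entry, and ``unborrow`` clears the flags once the kernel stops. A simplified sketch of that borrow-guarded map, leaving out the ``CSlice`` plumbing the real code needs to hand data to the kernel CPU::

    use std::collections::BTreeMap;

    // Illustrative sketch of a borrow-guarded cache; the firmware additionally
    // keeps a CSlice view of each entry for the kernel CPU.
    struct Entry {
        data: Vec<i32>,
        borrowed: bool,
    }

    #[derive(Default)]
    struct Cache {
        entries: BTreeMap<String, Entry>,
    }

    impl Cache {
        fn get(&mut self, key: &str) -> &[i32] {
            match self.entries.get_mut(key) {
                Some(entry) => {
                    entry.borrowed = true;
                    &entry.data
                }
                None => &[],
            }
        }

        fn put(&mut self, key: &str, data: &[i32]) -> Result<(), ()> {
            if let Some(entry) = self.entries.get_mut(key) {
                if entry.borrowed {
                    return Err(()); // still lent out to a running kernel
                }
                entry.data = data.to_vec();
                return Ok(());
            }
            self.entries.insert(key.into(), Entry { data: data.to_vec(), borrowed: false });
            Ok(())
        }

        fn unborrow(&mut self) {
            // called once the kernel has stopped and cannot hold references anymore
            for entry in self.entries.values_mut() {
                entry.borrowed = false;
            }
        }
    }

    fn main() {
        let mut cache = Cache::default();
        cache.put("x", &[1, 2, 3]).unwrap();
        assert_eq!(cache.get("x").to_vec(), vec![1, 2, 3]); // get() marks "x" as borrowed
        assert!(cache.put("x", &[4]).is_err());
        cache.unborrow();
        assert!(cache.put("x", &[4]).is_ok());
    }

Failing the ``put`` instead of mutating keeps a slice already handed to a running kernel valid, which mirrors the contract of the runtime-side cache.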
Entry { + trace: data, + padding_len: 0, + complete: true, + duration: duration, + }; + entry.realign(); + entry + } + + pub fn id(&self) -> u32 { + self.trace[self.padding_len..].as_ptr() as u32 + } + + pub fn realign(&mut self) { + self.trace.push(0); + let data_len = self.trace.len(); + + self.trace.reserve(ALIGNMENT - 1); + let padding = ALIGNMENT - self.trace.as_ptr() as usize % ALIGNMENT; + let padding = if padding == ALIGNMENT { 0 } else { padding }; + for _ in 0..padding { + // Vec guarantees that this will not reallocate + self.trace.push(0) + } + for i in 1..data_len + 1 { + self.trace[data_len + padding - i] = self.trace[data_len - i] + } + self.complete = true; + self.padding_len = padding; + } +} + +enum RemoteTraceState { + Unsent, + Sending(usize), + Ready, + Running(usize), +} + +struct RemoteTraces { + remote_traces: BTreeMap, + state: RemoteTraceState, +} + +impl RemoteTraces { + pub fn new(traces: BTreeMap) -> RemoteTraces { + RemoteTraces { + remote_traces: traces, + state: RemoteTraceState::Unsent + } + } + + // on subkernel request + pub fn upload_traces(&mut self, id: u32, router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) -> usize { + let len = self.remote_traces.len(); + if len > 0 { + self.state = RemoteTraceState::Sending(self.remote_traces.len()); + for (dest, trace) in self.remote_traces.iter_mut() { + // queue up the first packet for all destinations, rest will be sent after first ACK + let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + let meta = trace.get_slice_master(&mut data_slice); + router.route(drtioaux::Packet::DmaAddTraceRequest { + source: self_destination, destination: *dest, id: id, + status: meta.status, length: meta.len, trace: data_slice + }, routing_table, rank, self_destination); + } + } + len + } + + // on incoming Packet::DmaAddTraceReply + pub fn ack_upload(&mut self, kernel_manager: &mut KernelManager, source: u8, id: u32, succeeded: bool, router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) { + if let RemoteTraceState::Sending(count) = self.state { + if let Some(trace) = self.remote_traces.get_mut(&source) { + if trace.at_end() { + if count - 1 == 0 { + self.state = RemoteTraceState::Ready; + kernel_manager.ddma_remote_uploaded(succeeded); + } else { + self.state = RemoteTraceState::Sending(count - 1); + } + } else { + // send next slice + let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + let meta = trace.get_slice_master(&mut data_slice); + router.route(drtioaux::Packet::DmaAddTraceRequest { + source: self_destination, destination: meta.destination, id: id, + status: meta.status, length: meta.len, trace: data_slice + }, routing_table, rank, self_destination); + } + } + } + + } + + // on subkernel request + pub fn playback(&mut self, id: u32, timestamp: u64, router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) { + // route all the playback requests + // remote traces + local trace + self.state = RemoteTraceState::Running(self.remote_traces.len() + 1); + for (dest, _) in self.remote_traces.iter() { + router.route(drtioaux::Packet::DmaPlaybackRequest { + source: self_destination, destination: *dest, id: id, timestamp: timestamp + }, routing_table, rank, self_destination); + // response will be ignored (succeeded = false handled by the main thread) + } + } + + // on incoming Packet::DmaPlaybackDone + pub fn remote_finished(&mut self, kernel_manager: &mut KernelManager, error: u8, 
channel: u32, timestamp: u64) { + if let RemoteTraceState::Running(count) = self.state { + if error != 0 || count - 1 == 0 { + // notify the kernel about a DDMA error or finish + kernel_manager.ddma_finished(error, channel, timestamp); + self.state = RemoteTraceState::Ready; + // further messages will be ignored (if there was an error) + } else { // no error and not the last one awaited + self.state = RemoteTraceState::Running(count - 1); + } + } + } + + pub fn erase(&mut self, id: u32, router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) { + for (dest, _) in self.remote_traces.iter() { + router.route(drtioaux::Packet::DmaRemoveTraceRequest { + source: self_destination, destination: *dest, id: id + }, routing_table, rank, self_destination); + // response will be ignored as this object will stop existing too + } + } } -#[derive(Debug)] pub struct Manager { - entries: BTreeMap, + entries: BTreeMap<(u8, u32), Entry>, state: ManagerState, - currentid: u32 + current_id: u32, + current_source: u8, + previous_cri_master: RtioMaster, + + remote_entries: BTreeMap, + name_map: BTreeMap, + recording_trace: Vec, + recording_name: String } impl Manager { @@ -45,71 +196,197 @@ impl Manager { } Manager { entries: BTreeMap::new(), - currentid: 0, + current_id: 0, + current_source: 0, + previous_cri_master: RtioMaster::Drtio, state: ManagerState::Idle, + remote_entries: BTreeMap::new(), + name_map: BTreeMap::new(), + recording_trace: Vec::new(), + recording_name: String::new(), } } - pub fn add(&mut self, id: u32, last: bool, trace: &[u8], trace_len: usize) -> Result<(), Error> { - let entry = match self.entries.get_mut(&id) { + pub fn add(&mut self, source: u8, id: u32, status: PayloadStatus, trace: &[u8], trace_len: usize) -> Result<(), Error> { + if status.is_first() { + self.entries.remove(&(source, id)); + } + let entry = match self.entries.get_mut(&(source, id)) { Some(entry) => { if entry.complete { // replace entry - self.entries.remove(&id); - self.entries.insert(id, Entry { + self.entries.remove(&(source, id)); + self.entries.insert((source, id), Entry { trace: Vec::new(), padding_len: 0, - complete: false }); - self.entries.get_mut(&id).unwrap() + complete: false, + duration: 0 + }); + self.entries.get_mut(&(source, id)).unwrap() } else { entry } }, None => { - self.entries.insert(id, Entry { + self.entries.insert((source, id), Entry { trace: Vec::new(), padding_len: 0, - complete: false }); - self.entries.get_mut(&id).unwrap() + complete: false, + duration: 0, + }); + self.entries.get_mut(&(source, id)).unwrap() }, }; entry.trace.extend(&trace[0..trace_len]); - if last { - entry.trace.push(0); - let data_len = entry.trace.len(); - - // Realign. 
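Both the old inline code and the new ``Entry::realign`` shift the stored trace so that its first payload byte lands on a 64-byte boundary; the playback path later asserts ``ptr % 64 == 0`` before programming the DMA base address. A minimal sketch of that padding computation (same ``ALIGNMENT`` constant, other details such as the trailing terminator byte omitted)::

    // Illustrative sketch of the realignment: pad the front of the buffer so the
    // payload starts on a 64-byte boundary, and return the amount of padding.
    const ALIGNMENT: usize = 64;

    fn realign(trace: &mut Vec<u8>) -> usize {
        let data_len = trace.len();
        // Reserve up front so the pushes below cannot reallocate (and move) the buffer.
        trace.reserve(ALIGNMENT - 1);
        let padding = (ALIGNMENT - trace.as_ptr() as usize % ALIGNMENT) % ALIGNMENT;
        for _ in 0..padding {
            trace.push(0);
        }
        // Shift the payload behind the padding, copying back to front.
        for i in (0..data_len).rev() {
            trace[padding + i] = trace[i];
        }
        padding
    }

    fn main() {
        let mut trace = vec![1u8, 2, 3, 4];
        let padding = realign(&mut trace);
        let aligned = &trace[padding..];
        assert_eq!(aligned.as_ptr() as usize % ALIGNMENT, 0);
        assert_eq!(aligned[..4].to_vec(), vec![1u8, 2, 3, 4]);
    }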
- entry.trace.reserve(ALIGNMENT - 1); - let padding = ALIGNMENT - entry.trace.as_ptr() as usize % ALIGNMENT; - let padding = if padding == ALIGNMENT { 0 } else { padding }; - for _ in 0..padding { - // Vec guarantees that this will not reallocate - entry.trace.push(0) - } - for i in 1..data_len + 1 { - entry.trace[data_len + padding - i] = entry.trace[data_len - i] - } - entry.complete = true; - entry.padding_len = padding; + if status.is_last() { + entry.realign(); flush_l2_cache(); } Ok(()) } + // API for subkernel + pub fn record_start(&mut self, name: &str) { + self.recording_name = String::from(name); + self.recording_trace = Vec::new(); + } - pub fn erase(&mut self, id: u32) -> Result<(), Error> { - match self.entries.remove(&id) { + // API for subkernel + pub fn record_append(&mut self, data: &[u8]) { + self.recording_trace.extend_from_slice(data); + } + + // API for subkernel + pub fn record_stop(&mut self, duration: u64, self_destination: u8) -> Result { + let mut trace = Vec::new(); + mem::swap(&mut self.recording_trace, &mut trace); + trace.push(0); + let mut local_trace = Vec::new(); + let mut remote_traces: BTreeMap = BTreeMap::new(); + // analyze each entry and put in proper buckets, as the kernel core + // sends whole chunks, to limit comms/kernel CPU communication, + // and as only comms core has access to varios DMA buffers. + let mut ptr = 0; + while trace[ptr] != 0 { + // ptr + 3 = tgt >> 24 (destination) + let len = trace[ptr] as usize; + let destination = trace[ptr+3]; + if destination == 0 { + return Err(Error::MasterDmaFound); + } else if destination == self_destination { + local_trace.extend(&trace[ptr..ptr+len]); + } + else { + if let Some(remote_trace) = remote_traces.get_mut(&destination) { + remote_trace.extend(&trace[ptr..ptr+len]); + } else { + remote_traces.insert(destination, Sliceable::new(destination, trace[ptr..ptr+len].to_vec())); + } + } + // and jump to the next event + ptr += len; + } + let local_entry = Entry::from_vec(local_trace, duration); + let id = local_entry.id(); + + self.entries.insert((self_destination, id), local_entry); + self.remote_entries.insert(id, RemoteTraces::new(remote_traces)); + let mut name = String::new(); + mem::swap(&mut self.recording_name, &mut name); + self.name_map.insert(name, id); + + flush_l2_cache(); + + Ok(id) + } + + pub fn upload_traces(&mut self, id: u32, router: &mut Router, rank: u8, self_destination: u8, + routing_table: &RoutingTable) -> Result { + let remote_traces = self.remote_entries.get_mut(&id); + let mut len = 0; + if let Some(traces) = remote_traces { + len = traces.upload_traces(id, router, rank, self_destination, routing_table); + } + Ok(len) + } + + pub fn with_trace(&self, self_destination: u8, name: &str, f: F) -> R + where F: FnOnce(Option<&[u8]>, u64) -> R { + if let Some(ptr) = self.name_map.get(name) { + match self.entries.get(&(self_destination, *ptr)) { + Some(entry) => f(Some(&entry.trace[entry.padding_len..]), entry.duration), + None => f(None, 0) + } + } else { + f(None, 0) + } + } + + // API for subkernel + pub fn playback_remote(&mut self, id: u32, timestamp: u64, + router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable + ) -> Result<(), Error> { + if let Some(traces) = self.remote_entries.get_mut(&id) { + traces.playback(id, timestamp, router, rank, self_destination, routing_table); + Ok(()) + } else { + Err(Error::IdNotFound) + } + } + + // API for subkernel + pub fn erase_name(&mut self, name: &str, router: &mut Router, rank: u8, self_destination: u8, 
routing_table: &RoutingTable) { + if let Some(id) = self.name_map.get(name) { + if let Some(traces) = self.remote_entries.get_mut(&id) { + traces.erase(*id, router, rank, self_destination, routing_table); + self.remote_entries.remove(&id); + } + self.entries.remove(&(self_destination, *id)); + self.name_map.remove(name); + } + } + + // API for incoming DDMA (drtio) + pub fn erase(&mut self, source: u8, id: u32) -> Result<(), Error> { + match self.entries.remove(&(source, id)) { Some(_) => Ok(()), None => Err(Error::IdNotFound) } } - pub fn playback(&mut self, id: u32, timestamp: u64) -> Result<(), Error> { + pub fn remote_finished(&mut self, kernel_manager: &mut KernelManager, + id: u32, error: u8, channel: u32, timestamp: u64) { + if let Some(entry) = self.remote_entries.get_mut(&id) { + entry.remote_finished(kernel_manager, error, channel, timestamp); + } + } + + pub fn ack_upload(&mut self, kernel_manager: &mut KernelManager, source: u8, id: u32, succeeded: bool, + router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) { + if let Some(entry) = self.remote_entries.get_mut(&id) { + entry.ack_upload(kernel_manager, source, id, succeeded, router, rank, self_destination, routing_table); + } + } + + pub fn cleanup(&mut self, router: &mut Router, rank: u8, self_destination: u8, routing_table: &RoutingTable) { + // after subkernel ends, remove all self-generated traces + for (_, id) in self.name_map.iter_mut() { + if let Some(traces) = self.remote_entries.get_mut(&id) { + traces.erase(*id, router, rank, self_destination, routing_table); + self.remote_entries.remove(&id); + } + self.entries.remove(&(self_destination, *id)); + } + self.name_map.clear(); + } + + // API for both incoming DDMA (drtio) and subkernel + pub fn playback(&mut self, source: u8, id: u32, timestamp: u64) -> Result<(), Error> { if self.state != ManagerState::Idle { return Err(Error::PlaybackInProgress); } - let entry = match self.entries.get(&id){ + let entry = match self.entries.get(&(source, id)){ Some(entry) => entry, None => { return Err(Error::IdNotFound); } }; @@ -120,20 +397,22 @@ impl Manager { assert!(ptr as u32 % 64 == 0); self.state = ManagerState::Playback; - self.currentid = id; + self.current_id = id; + self.current_source = source; + self.previous_cri_master = cricon_read(); unsafe { csr::rtio_dma::base_address_write(ptr as u64); csr::rtio_dma::time_offset_write(timestamp as u64); - csr::cri_con::selected_write(1); + cricon_select(RtioMaster::Dma); csr::rtio_dma::enable_write(1); // playback has begun here, for status call check_state } Ok(()) } - pub fn check_state(&mut self) -> Option { + pub fn get_status(&mut self) -> Option { if self.state != ManagerState::Playback { // nothing to report return None; @@ -141,19 +420,19 @@ impl Manager { let dma_enable = unsafe { csr::rtio_dma::enable_read() }; if dma_enable != 0 { return None; - } - else { + } else { self.state = ManagerState::Idle; unsafe { - csr::cri_con::selected_write(0); - let error = csr::rtio_dma::error_read(); + cricon_select(self.previous_cri_master); + let error = csr::rtio_dma::error_read(); let channel = csr::rtio_dma::error_channel_read(); let timestamp = csr::rtio_dma::error_timestamp_read(); if error != 0 { csr::rtio_dma::error_write(1); } - return Some(RtioStatus { - id: self.currentid, + return Some(RtioStatus { + source: self.current_source, + id: self.current_id, error: error, channel: channel, timestamp: timestamp }); @@ -161,4 +440,8 @@ impl Manager { } } + pub fn running(&self) -> bool { + self.state == 
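For context, ``record_stop`` above walks the recorded byte stream event by event: byte 0 of each event is its length, byte 3 carries the destination, and each event is appended either to the local trace or to a per-destination remote bucket. A simplified sketch of that bucketing pass, with the master-destination error check and the ``Sliceable`` wrapper omitted::

    use std::collections::BTreeMap;

    // Illustrative sketch: split a flat DMA trace into a local part and
    // per-destination remote parts. Event layout as described above: byte 0 is
    // the event length, byte 3 the destination; a zero length byte terminates.
    fn split_trace(trace: &[u8], self_destination: u8) -> (Vec<u8>, BTreeMap<u8, Vec<u8>>) {
        let mut local = Vec::new();
        let mut remote: BTreeMap<u8, Vec<u8>> = BTreeMap::new();
        let mut ptr = 0;
        while ptr < trace.len() && trace[ptr] != 0 {
            let len = trace[ptr] as usize;
            let destination = trace[ptr + 3];
            if destination == self_destination {
                local.extend_from_slice(&trace[ptr..ptr + len]);
            } else {
                remote.entry(destination).or_default().extend_from_slice(&trace[ptr..ptr + len]);
            }
            ptr += len; // jump to the next event
        }
        (local, remote)
    }

    fn main() {
        // Two fake 8-byte events: one for destination 1 (local), one for destination 2.
        let mut trace = vec![8, 0, 0, 1, 0, 0, 0, 0, 8, 0, 0, 2, 0, 0, 0, 0];
        trace.push(0); // terminator
        let (local, remote) = split_trace(&trace, 1);
        assert_eq!(local.len(), 8);
        assert_eq!(remote[&2].len(), 8);
    }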
ManagerState::Playback + } + } \ No newline at end of file diff --git a/artiq/firmware/satman/kernel.rs b/artiq/firmware/satman/kernel.rs new file mode 100644 index 000000000..b00861abb --- /dev/null +++ b/artiq/firmware/satman/kernel.rs @@ -0,0 +1,1013 @@ +use core::mem; +use alloc::{string::String, format, vec::Vec, collections::btree_map::BTreeMap}; +use cslice::AsCSlice; + +use board_artiq::{drtioaux, drtio_routing::RoutingTable, mailbox, spi}; +use board_misoc::{csr, clock, i2c}; +use proto_artiq::{ + drtioaux_proto::PayloadStatus, + kernel_proto as kern, + session_proto::Reply::KernelException as HostKernelException, + rpc_proto as rpc}; +use eh::eh_artiq; +use io::Cursor; +use kernel::eh_artiq::StackPointerBacktrace; + +use ::{cricon_select, RtioMaster}; +use cache::Cache; +use dma::{Manager as DmaManager, Error as DmaError}; +use routing::{Router, Sliceable, SliceMeta}; +use SAT_PAYLOAD_MAX_SIZE; +use MASTER_PAYLOAD_MAX_SIZE; + +mod kernel_cpu { + use super::*; + use core::ptr; + + use proto_artiq::kernel_proto::{KERNELCPU_EXEC_ADDRESS, KERNELCPU_LAST_ADDRESS, KSUPPORT_HEADER_SIZE}; + + pub unsafe fn start() { + if csr::kernel_cpu::reset_read() == 0 { + panic!("attempted to start kernel CPU when it is already running") + } + + stop(); + + extern { + static _binary____ksupport_ksupport_elf_start: u8; + static _binary____ksupport_ksupport_elf_end: u8; + } + let ksupport_start = &_binary____ksupport_ksupport_elf_start as *const _; + let ksupport_end = &_binary____ksupport_ksupport_elf_end as *const _; + ptr::copy_nonoverlapping(ksupport_start, + (KERNELCPU_EXEC_ADDRESS - KSUPPORT_HEADER_SIZE) as *mut u8, + ksupport_end as usize - ksupport_start as usize); + + csr::kernel_cpu::reset_write(0); + } + + pub unsafe fn stop() { + csr::kernel_cpu::reset_write(1); + cricon_select(RtioMaster::Drtio); + + mailbox::acknowledge(); + } + + pub fn validate(ptr: usize) -> bool { + ptr >= KERNELCPU_EXEC_ADDRESS && ptr <= KERNELCPU_LAST_ADDRESS + } +} + +#[derive(Debug, Clone, PartialEq, Eq)] +enum KernelState { + Absent, + Loaded, + Running, + MsgAwait { id: u32, max_time: i64, tags: Vec }, + MsgSending, + SubkernelAwaitLoad, + SubkernelAwaitFinish { max_time: i64, id: u32 }, + DmaUploading { max_time: u64 }, + DmaAwait { max_time: u64 }, +} + +#[derive(Debug)] +pub enum Error { + Load(String), + KernelNotFound, + InvalidPointer(usize), + Unexpected(String), + NoMessage, + AwaitingMessage, + SubkernelIoError, + DrtioError, + KernelException(Sliceable), + DmaError(DmaError), +} + +impl From> for Error { + fn from(_value: io::Error) -> Error { + Error::SubkernelIoError + } +} + +impl From> for Error { + fn from(_value: drtioaux::Error) -> Error { + Error::DrtioError + } +} + +impl From for Error { + fn from(value: DmaError) -> Error { + Error::DmaError(value) + } +} + +macro_rules! 
unexpected { + ($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*)))); +} + +/* represents interkernel messages */ +struct Message { + id: u32, + count: u8, + data: Vec +} + +#[derive(PartialEq)] +enum OutMessageState { + NoMessage, + MessageBeingSent, + MessageSent, + MessageAcknowledged +} + +/* for dealing with incoming and outgoing interkernel messages */ +struct MessageManager { + out_message: Option, + out_state: OutMessageState, + in_queue: Vec, + in_buffer: Option, +} + +// Per-run state +struct Session { + kernel_state: KernelState, + log_buffer: String, + last_exception: Option, + source: u8, // which destination requested running the kernel + messages: MessageManager, + subkernels_finished: Vec // ids of subkernels finished +} + +#[derive(Debug)] +struct KernelLibrary { + library: Vec, + complete: bool +} + +pub struct Manager { + kernels: BTreeMap, + current_id: u32, + session: Session, + cache: Cache, + last_finished: Option +} + +pub struct SubkernelFinished { + pub id: u32, + pub with_exception: bool, + pub exception_source: u8, + pub source: u8 +} + +impl MessageManager { + pub fn new() -> MessageManager { + MessageManager { + out_message: None, + out_state: OutMessageState::NoMessage, + in_queue: Vec::new(), + in_buffer: None + } + } + + pub fn handle_incoming(&mut self, status: PayloadStatus, length: usize, id: u32, data: &[u8; MASTER_PAYLOAD_MAX_SIZE]) { + // called when receiving a message from master + if status.is_first() { + // clear the buffer for first message + self.in_buffer = None; + } + match self.in_buffer.as_mut() { + Some(message) => message.data.extend(&data[..length]), + None => { + self.in_buffer = Some(Message { + id: id, + count: data[0], + data: data[1..length].to_vec() + }); + } + }; + if status.is_last() { + // when done, remove from working queue + self.in_queue.push(self.in_buffer.take().unwrap()); + } + } + + pub fn was_message_acknowledged(&mut self) -> bool { + match self.out_state { + OutMessageState::MessageAcknowledged => { + self.out_state = OutMessageState::NoMessage; + true + }, + _ => false + } + } + + pub fn get_outgoing_slice(&mut self, data_slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> Option { + if self.out_state != OutMessageState::MessageBeingSent { + return None; + } + let meta = self.out_message.as_mut()?.get_slice_master(data_slice); + if meta.status.is_last() { + // clear the message slot + self.out_message = None; + // notify kernel with a flag that message is sent + self.out_state = OutMessageState::MessageSent; + } + Some(meta) + } + + pub fn ack_slice(&mut self) -> bool { + // returns whether or not there's more to be sent + match self.out_state { + OutMessageState::MessageBeingSent => true, + OutMessageState::MessageSent => { + self.out_state = OutMessageState::MessageAcknowledged; + false + }, + _ => { + warn!("received unsolicited SubkernelMessageAck"); + false + } + } + } + + pub fn accept_outgoing(&mut self, id: u32, self_destination: u8, destination: u8, + count: u8, tag: &[u8], data: *const *const (), + routing_table: &RoutingTable, rank: u8, router: &mut Router + ) -> Result<(), Error> { + let mut writer = Cursor::new(Vec::new()); + rpc::send_args(&mut writer, 0, tag, data, false)?; + // skip service tag, but write the count + let mut data = writer.into_inner().split_off(3); + data[0] = count; + self.out_message = Some(Sliceable::new(destination, data)); + + let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + self.out_state = OutMessageState::MessageBeingSent; + let meta = 
self.get_outgoing_slice(&mut data_slice).unwrap(); + router.route(drtioaux::Packet::SubkernelMessage { + source: self_destination, destination: destination, id: id, + status: meta.status, length: meta.len as u16, data: data_slice + }, routing_table, rank, self_destination); + Ok(()) + } + + pub fn get_incoming(&mut self, id: u32) -> Option { + for i in 0..self.in_queue.len() { + if self.in_queue[i].id == id { + return Some(self.in_queue.remove(i)); + } + } + None + } + + pub fn pending_ids(&self) -> Vec { + let mut pending_ids: Vec = Vec::new(); + for msg in self.in_queue.iter() { + pending_ids.push(msg.id); + } + pending_ids + } +} + +impl Session { + pub fn new() -> Session { + Session { + kernel_state: KernelState::Absent, + log_buffer: String::new(), + last_exception: None, + source: 0, + messages: MessageManager::new(), + subkernels_finished: Vec::new() + } + } + + fn running(&self) -> bool { + match self.kernel_state { + KernelState::Absent | KernelState::Loaded => false, + _ => true + } + } + + fn flush_log_buffer(&mut self) { + if &self.log_buffer[self.log_buffer.len() - 1..] == "\n" { + for line in self.log_buffer.lines() { + info!(target: "kernel", "{}", line); + } + self.log_buffer.clear() + } + } +} + +impl Manager { + pub fn new() -> Manager { + Manager { + kernels: BTreeMap::new(), + current_id: 0, + session: Session::new(), + cache: Cache::new(), + last_finished: None, + } + } + + pub fn add(&mut self, id: u32, status: PayloadStatus, data: &[u8], data_len: usize) -> Result<(), Error> { + if status.is_first() { + // in case master is interrupted, and subkernel is sent again, clean the state + self.kernels.remove(&id); + } + let kernel = match self.kernels.get_mut(&id) { + Some(kernel) => { + if kernel.complete { + // replace entry + self.kernels.remove(&id); + self.kernels.insert(id, KernelLibrary { + library: Vec::new(), + complete: false }); + self.kernels.get_mut(&id).unwrap() + } else { + kernel + } + }, + None => { + self.kernels.insert(id, KernelLibrary { + library: Vec::new(), + complete: false }); + self.kernels.get_mut(&id).unwrap() + }, + }; + kernel.library.extend(&data[0..data_len]); + + kernel.complete = status.is_last(); + Ok(()) + } + + pub fn is_running(&self) -> bool { + self.session.running() + } + + pub fn get_current_id(&self) -> Option { + match self.is_running() { + true => Some(self.current_id), + false => None + } + } + + pub fn stop(&mut self) { + unsafe { kernel_cpu::stop() } + self.session.kernel_state = KernelState::Absent; + unsafe { self.cache.unborrow() } + } + + pub fn run(&mut self, source: u8, id: u32) -> Result<(), Error> { + info!("starting subkernel #{}", id); + if self.session.kernel_state != KernelState::Loaded + || self.current_id != id { + self.load(id)?; + } + self.session.source = source; + self.session.kernel_state = KernelState::Running; + cricon_select(RtioMaster::Kernel); + + kern_acknowledge() + } + + pub fn message_handle_incoming(&mut self, status: PayloadStatus, length: usize, id: u32, slice: &[u8; MASTER_PAYLOAD_MAX_SIZE]) { + if !self.is_running() { + return; + } + self.session.messages.handle_incoming(status, length, id, slice); + } + + pub fn message_get_slice(&mut self, slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> Option { + if !self.is_running() { + return None; + } + self.session.messages.get_outgoing_slice(slice) + } + + pub fn message_ack_slice(&mut self) -> bool { + if !self.is_running() { + warn!("received unsolicited SubkernelMessageAck"); + return false; + } + self.session.messages.ack_slice() + } + + pub fn 
load(&mut self, id: u32) -> Result<(), Error> { + if self.current_id == id && self.session.kernel_state == KernelState::Loaded { + return Ok(()) + } + if !self.kernels.get(&id).ok_or(Error::KernelNotFound)?.complete { + return Err(Error::KernelNotFound) + } + self.current_id = id; + self.session = Session::new(); + self.stop(); + + unsafe { + kernel_cpu::start(); + + kern_send(&kern::LoadRequest(&self.kernels.get(&id).unwrap().library)).unwrap(); + kern_recv(|reply| { + match reply { + kern::LoadReply(Ok(())) => { + self.session.kernel_state = KernelState::Loaded; + Ok(()) + } + kern::LoadReply(Err(error)) => { + kernel_cpu::stop(); + error!("load error: {:?}", error); + Err(Error::Load(format!("{}", error))) + } + other => { + unexpected!("unexpected kernel CPU reply to load request: {:?}", other) + } + } + }) + } + } + + pub fn exception_get_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta { + match self.session.last_exception.as_mut() { + Some(exception) => exception.get_slice_sat(data_slice), + None => SliceMeta { destination: 0, len: 0, status: PayloadStatus::FirstAndLast } + } + } + + fn runtime_exception(&mut self, cause: Error) { + let raw_exception: Vec = Vec::new(); + let mut writer = Cursor::new(raw_exception); + match (HostKernelException { + exceptions: &[Some(eh_artiq::Exception { + id: 11, // SubkernelError, defined in ksupport + message: format!("in subkernel id {}: {:?}", self.current_id, cause).as_c_slice(), + param: [0, 0, 0], + file: file!().as_c_slice(), + line: line!(), + column: column!(), + function: format!("subkernel id {}", self.current_id).as_c_slice(), + })], + stack_pointers: &[StackPointerBacktrace { + stack_pointer: 0, + initial_backtrace_size: 0, + current_backtrace_size: 0 + }], + backtrace: &[], + async_errors: 0 + }).write_to(&mut writer) { + Ok(_) => self.session.last_exception = Some(Sliceable::new(0, writer.into_inner())), + Err(_) => error!("Error writing exception data") + } + } + + pub fn ddma_finished(&mut self, error: u8, channel: u32, timestamp: u64) { + if let KernelState::DmaAwait { .. } = self.session.kernel_state { + kern_send(&kern::DmaAwaitRemoteReply { + timeout: false, error: error, channel: channel, timestamp: timestamp + }).unwrap(); + self.session.kernel_state = KernelState::Running; + } + } + + pub fn ddma_nack(&mut self) { + // for simplicity treat it as a timeout for now... + if let KernelState::DmaAwait { .. } = self.session.kernel_state { + kern_send(&kern::DmaAwaitRemoteReply { + timeout: true, error: 0, channel: 0, timestamp: 0 + }).unwrap(); + self.session.kernel_state = KernelState::Running; + } + } + + pub fn ddma_remote_uploaded(&mut self, succeeded: bool) { + if let KernelState::DmaUploading { .. } = self.session.kernel_state { + if succeeded { + self.session.kernel_state = KernelState::Running; + kern_acknowledge().unwrap(); + } else { + self.stop(); + self.runtime_exception(Error::DmaError(DmaError::UploadFail)); + } + } + } + + pub fn process_kern_requests(&mut self, router: &mut Router, routing_table: &RoutingTable, rank: u8, destination: u8, dma_manager: &mut DmaManager) { + macro_rules! 
finished { + ($with_exception:expr) => {{ Some(SubkernelFinished { + source: self.session.source, id: self.current_id, + with_exception: $with_exception, exception_source: destination + }) }} + } + + if let Some(subkernel_finished) = self.last_finished.take() { + info!("subkernel {} finished, with exception: {}", subkernel_finished.id, subkernel_finished.with_exception); + let pending = self.session.messages.pending_ids(); + if pending.len() > 0 { + warn!("subkernel terminated with messages still pending: {:?}", pending); + } + router.route(drtioaux::Packet::SubkernelFinished { + destination: subkernel_finished.source, id: subkernel_finished.id, + with_exception: subkernel_finished.with_exception, exception_src: subkernel_finished.exception_source + }, &routing_table, rank, destination); + dma_manager.cleanup(router, rank, destination, routing_table); + } + + if !self.is_running() { + return; + } + + match self.process_external_messages() { + Ok(()) => (), + Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages + Err(Error::KernelException(exception)) => { + unsafe { kernel_cpu::stop() } + self.session.kernel_state = KernelState::Absent; + unsafe { self.cache.unborrow() } + self.session.last_exception = Some(exception); + self.last_finished = finished!(true); + }, + Err(e) => { + error!("Error while running processing external messages: {:?}", e); + self.stop(); + self.runtime_exception(e); + self.last_finished = finished!(true); + } + } + + match self.process_kern_message(router, routing_table, rank, destination, dma_manager) { + Ok(Some(with_exception)) => { + self.last_finished = finished!(with_exception) + }, + Ok(None) | Err(Error::NoMessage) => (), + Err(e) => { + error!("Error while running kernel: {:?}", e); + self.stop(); + self.runtime_exception(e); + self.last_finished = finished!(true); + } + } + } + + fn process_external_messages(&mut self) -> Result<(), Error> { + match &self.session.kernel_state { + KernelState::MsgAwait { id, max_time, tags } => { + if *max_time > 0 && clock::get_ms() > *max_time as u64 { + kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::Timeout, count: 0 })?; + self.session.kernel_state = KernelState::Running; + return Ok(()) + } + if let Some(message) = self.session.messages.get_incoming(*id) { + kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::NoError, count: message.count })?; + let tags = tags.clone(); + self.session.kernel_state = KernelState::Running; + pass_message_to_kernel(&message, &tags) + } else { + Err(Error::AwaitingMessage) + } + }, + KernelState::MsgSending => { + if self.session.messages.was_message_acknowledged() { + self.session.kernel_state = KernelState::Running; + kern_acknowledge() + } else { + Err(Error::AwaitingMessage) + } + }, + KernelState::SubkernelAwaitFinish { max_time, id } => { + if *max_time > 0 && clock::get_ms() > *max_time as u64 { + kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::Timeout })?; + self.session.kernel_state = KernelState::Running; + } else { + let mut i = 0; + for status in &self.session.subkernels_finished { + if *status == *id { + kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::NoError })?; + self.session.kernel_state = KernelState::Running; + self.session.subkernels_finished.swap_remove(i); + break; + } + i += 1; + } + } + Ok(()) + } + KernelState::DmaAwait { max_time } => { + if clock::get_ms() > *max_time { + kern_send(&kern::DmaAwaitRemoteReply { timeout: 
true, error: 0, channel: 0, timestamp: 0 })?; + self.session.kernel_state = KernelState::Running; + } + // ddma_finished() and nack() covers the other case + Ok(()) + } + KernelState::DmaUploading { max_time } => { + if clock::get_ms() > *max_time { + unexpected!("DMAError: Timed out sending traces to remote"); + } + Ok(()) + } + _ => Ok(()) + } + } + + pub fn subkernel_load_run_reply(&mut self, succeeded: bool, self_destination: u8) { + if self.session.kernel_state == KernelState::SubkernelAwaitLoad { + if let Err(e) = kern_send(&kern::SubkernelLoadRunReply { succeeded: succeeded }) { + self.stop(); + self.runtime_exception(e); + self.last_finished = Some(SubkernelFinished { + source: self.session.source, id: self.current_id, + with_exception: true, exception_source: self_destination + }) + } else { + self.session.kernel_state = KernelState::Running; + } + } else { + warn!("received unsolicited SubkernelLoadRunReply"); + } + } + + pub fn remote_subkernel_finished(&mut self, id: u32, with_exception: bool, exception_source: u8) { + if with_exception { + unsafe { kernel_cpu::stop() } + self.session.kernel_state = KernelState::Absent; + unsafe { self.cache.unborrow() } + self.last_finished = Some(SubkernelFinished { + source: self.session.source, id: self.current_id, + with_exception: true, exception_source: exception_source + }) + } else { + self.session.subkernels_finished.push(id); + } + } + + fn process_kern_message(&mut self, router: &mut Router, + routing_table: &RoutingTable, + rank: u8, destination: u8, + dma_manager: &mut DmaManager + ) -> Result, Error> { + // returns Ok(with_exception) on finish + // None if the kernel is still running + kern_recv(|request| { + match (request, &self.session.kernel_state) { + (&kern::LoadReply(_), KernelState::Loaded) | + (_, KernelState::DmaUploading { .. }) | + (_, KernelState::DmaAwait { .. }) | + (_, KernelState::MsgSending) | + (_, KernelState::SubkernelAwaitLoad) | + (_, KernelState::SubkernelAwaitFinish { .. }) => { + // We're standing by; ignore the message. + return Ok(None) + } + (_, KernelState::Running) => (), + _ => { + unexpected!("unexpected request {:?} from kernel CPU in {:?} state", + request, self.session.kernel_state) + }, + } + + if process_kern_hwreq(request, destination)? 
{ + return Ok(None) + } + + match request { + &kern::Log(args) => { + use core::fmt::Write; + self.session.log_buffer + .write_fmt(args) + .unwrap_or_else(|_| warn!("cannot append to session log buffer")); + self.session.flush_log_buffer(); + kern_acknowledge() + } + + &kern::LogSlice(arg) => { + self.session.log_buffer += arg; + self.session.flush_log_buffer(); + kern_acknowledge() + } + + &kern::RpcFlush => { + // we do not have to do anything about this request, + // it is sent by the kernel firmware regardless of RPC being used + kern_acknowledge() + } + + &kern::CacheGetRequest { key } => { + let value = self.cache.get(key); + kern_send(&kern::CacheGetReply { + value: unsafe { mem::transmute(value) } + }) + } + + &kern::CachePutRequest { key, value } => { + let succeeded = self.cache.put(key, value).is_ok(); + kern_send(&kern::CachePutReply { succeeded: succeeded }) + } + + &kern::RunFinished => { + unsafe { kernel_cpu::stop() } + self.session.kernel_state = KernelState::Absent; + unsafe { self.cache.unborrow() } + + return Ok(Some(false)) + } + &kern::RunException { exceptions, stack_pointers, backtrace } => { + unsafe { kernel_cpu::stop() } + self.session.kernel_state = KernelState::Absent; + unsafe { self.cache.unborrow() } + let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?; + self.session.last_exception = Some(exception); + return Ok(Some(true)) + } + + &kern::DmaRecordStart(name) => { + dma_manager.record_start(name); + kern_acknowledge() + } + &kern::DmaRecordAppend(data) => { + dma_manager.record_append(data); + kern_acknowledge() + } + &kern::DmaRecordStop { duration, enable_ddma: _ } => { + // ddma is always used on satellites + if let Ok(id) = dma_manager.record_stop(duration, destination) { + let remote_count = dma_manager.upload_traces(id, router, rank, destination, routing_table)?; + if remote_count > 0 { + let max_time = clock::get_ms() + 10_000 as u64; + self.session.kernel_state = KernelState::DmaUploading { max_time: max_time }; + Ok(()) + } else { + kern_acknowledge() + } + } else { + unexpected!("DMAError: found an unsupported call to RTIO devices on master") + } + } + &kern::DmaEraseRequest { name } => { + dma_manager.erase_name(name, router, rank, destination, routing_table); + kern_acknowledge() + } + &kern::DmaRetrieveRequest { name } => { + dma_manager.with_trace(destination, name, |trace, duration| { + kern_send(&kern::DmaRetrieveReply { + trace: trace, + duration: duration, + uses_ddma: true, + }) + }) + } + &kern::DmaStartRemoteRequest { id, timestamp } => { + let max_time = clock::get_ms() + 10_000 as u64; + self.session.kernel_state = KernelState::DmaAwait { max_time: max_time }; + dma_manager.playback_remote(id as u32, timestamp as u64, router, rank, destination, routing_table)?; + dma_manager.playback(destination, id as u32, timestamp as u64)?; + Ok(()) + } + + &kern::SubkernelMsgSend { id, destination: msg_dest, count, tag, data } => { + let message_destination; + let message_id; + if let Some(dest) = msg_dest { + message_destination = dest; + message_id = id; + } else { + // return message, return to source + message_destination = self.session.source; + message_id = self.current_id; + } + self.session.messages.accept_outgoing(message_id, destination, + message_destination, count, tag, data, + routing_table, rank, router)?; + // acknowledge after the message is sent + self.session.kernel_state = KernelState::MsgSending; + Ok(()) + } + + &kern::SubkernelMsgRecvRequest { id, timeout, tags } => { + // negative timeout value 
means no timeout + let max_time = if timeout > 0 { clock::get_ms() as i64 + timeout } else { timeout }; + // ID equal to -1 indicates wildcard for receiving arguments + let id = if id == -1 { self.current_id } else { id as u32 }; + self.session.kernel_state = KernelState::MsgAwait { + id: id, max_time: max_time, tags: tags.to_vec() }; + Ok(()) + }, + + &kern::SubkernelLoadRunRequest { id, destination: sk_destination, run } => { + self.session.kernel_state = KernelState::SubkernelAwaitLoad; + router.route(drtioaux::Packet::SubkernelLoadRunRequest { + source: destination, destination: sk_destination, id: id, run: run + }, routing_table, rank, destination); + Ok(()) + } + + &kern::SubkernelAwaitFinishRequest{ id, timeout } => { + let max_time = if timeout > 0 { clock::get_ms() as i64 + timeout } else { timeout }; + self.session.kernel_state = KernelState::SubkernelAwaitFinish { max_time: max_time, id: id }; + Ok(()) + } + + request => unexpected!("unexpected request {:?} from kernel CPU", request) + }.and(Ok(None)) + }) + } +} + +impl Drop for Manager { + fn drop(&mut self) { + cricon_select(RtioMaster::Drtio); + unsafe { + kernel_cpu::stop() + }; + } +} + +fn kern_recv(f: F) -> Result + where F: FnOnce(&kern::Message) -> Result { + if mailbox::receive() == 0 { + return Err(Error::NoMessage); + }; + if !kernel_cpu::validate(mailbox::receive()) { + return Err(Error::InvalidPointer(mailbox::receive())) + } + f(unsafe { &*(mailbox::receive() as *const kern::Message) }) +} + +fn kern_recv_w_timeout(timeout: u64, f: F) -> Result + where F: FnOnce(&kern::Message) -> Result + Copy { + // sometimes kernel may be too slow to respond immediately + // (e.g. when receiving external messages) + // we cannot wait indefinitely to keep the satellite responsive + // so a timeout is used instead + let max_time = clock::get_ms() + timeout; + while clock::get_ms() < max_time { + match kern_recv(f) { + Err(Error::NoMessage) => continue, + anything_else => return anything_else + } + } + Err(Error::NoMessage) +} + +fn kern_acknowledge() -> Result<(), Error> { + mailbox::acknowledge(); + Ok(()) +} + +fn kern_send(request: &kern::Message) -> Result<(), Error> { + unsafe { mailbox::send(request as *const _ as usize) } + while !mailbox::acknowledged() {} + Ok(()) +} + +fn slice_kernel_exception(exceptions: &[Option], + stack_pointers: &[eh_artiq::StackPointerBacktrace], + backtrace: &[(usize, usize)] +) -> Result { + error!("exception in kernel"); + for exception in exceptions { + error!("{:?}", exception.unwrap()); + } + error!("stack pointers: {:?}", stack_pointers); + error!("backtrace: {:?}", backtrace); + // master will only pass the exception data back to the host: + let raw_exception: Vec = Vec::new(); + let mut writer = Cursor::new(raw_exception); + match (HostKernelException { + exceptions: exceptions, + stack_pointers: stack_pointers, + backtrace: backtrace, + async_errors: 0 + }).write_to(&mut writer) { + // save last exception data to be received by master + Ok(_) => Ok(Sliceable::new(0, writer.into_inner())), + Err(_) => Err(Error::SubkernelIoError) + } +} + +fn pass_message_to_kernel(message: &Message, tags: &[u8]) -> Result<(), Error> { + let mut reader = Cursor::new(&message.data); + let mut current_tags = tags; + let mut i = 0; + loop { + let slot = kern_recv_w_timeout(100, |reply| { + match reply { + &kern::RpcRecvRequest(slot) => Ok(slot), + &kern::RunException { exceptions, stack_pointers, backtrace } => { + let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?; + 
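
The comment on kern_recv_w_timeout above explains why the satellite polls the kernel mailbox against a deadline rather than blocking: the kernel may be slow to answer, but the satellite must stay responsive. Below is a minimal host-side sketch of that poll-with-timeout pattern, using std::time in place of the firmware's millisecond clock and a plain Option in place of the mailbox; try_recv, RecvError and the concrete values are illustrative, not firmware API:

    use std::time::{Duration, Instant};

    #[derive(Debug, PartialEq)]
    enum RecvError {
        NoMessage,
    }

    fn try_recv(mailbox: &mut Option<u32>) -> Result<u32, RecvError> {
        // non-blocking check, standing in for mailbox::receive() returning 0 when empty
        mailbox.take().ok_or(RecvError::NoMessage)
    }

    fn recv_with_timeout(mailbox: &mut Option<u32>, timeout: Duration) -> Result<u32, RecvError> {
        // keep polling until the deadline; the firmware compares clock::get_ms()
        // against a precomputed max_time instead of using Instant
        let deadline = Instant::now() + timeout;
        while Instant::now() < deadline {
            match try_recv(mailbox) {
                Err(RecvError::NoMessage) => continue,
                other => return other,
            }
        }
        Err(RecvError::NoMessage)
    }

    fn main() {
        let mut mailbox = Some(42);
        assert_eq!(recv_with_timeout(&mut mailbox, Duration::from_millis(100)), Ok(42));
        assert!(recv_with_timeout(&mut mailbox, Duration::from_millis(10)).is_err());
        println!("poll-with-timeout sketch ran");
    }
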
Err(Error::KernelException(exception)) + }, + other => unexpected!( + "expected root value slot from kernel CPU, not {:?}", other) + } + })?; + let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error> { + if size == 0 { + return Ok(0 as *mut ()) + } + kern_send(&kern::RpcRecvReply(Ok(size)))?; + Ok(kern_recv_w_timeout(100, |reply| { + match reply { + &kern::RpcRecvRequest(slot) => Ok(slot), + &kern::RunException { + exceptions, + stack_pointers, + backtrace + }=> { + let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?; + Err(Error::KernelException(exception)) + }, + other => unexpected!( + "expected nested value slot from kernel CPU, not {:?}", other) + } + })?) + }); + match res { + Ok(new_tags) => { + kern_send(&kern::RpcRecvReply(Ok(0)))?; + i += 1; + if i < message.count { + // update the tag for next read + current_tags = new_tags; + } else { + // should be done by then + break; + } + }, + Err(_) => unexpected!("expected valid subkernel message data") + }; + } + Ok(()) +} + +fn process_kern_hwreq(request: &kern::Message, self_destination: u8) -> Result { + match request { + &kern::RtioInitRequest => { + unsafe { + csr::drtiosat::reset_write(1); + clock::spin_us(100); + csr::drtiosat::reset_write(0); + } + kern_acknowledge() + } + + &kern::RtioDestinationStatusRequest { destination } => { + // only local destination is considered "up" + // no access to other DRTIO destinations + kern_send(&kern::RtioDestinationStatusReply { + up: destination == self_destination }) + } + + &kern::I2cStartRequest { busno } => { + let succeeded = i2c::start(busno as u8).is_ok(); + kern_send(&kern::I2cBasicReply { succeeded: succeeded }) + } + &kern::I2cRestartRequest { busno } => { + let succeeded = i2c::restart(busno as u8).is_ok(); + kern_send(&kern::I2cBasicReply { succeeded: succeeded }) + } + &kern::I2cStopRequest { busno } => { + let succeeded = i2c::stop(busno as u8).is_ok(); + kern_send(&kern::I2cBasicReply { succeeded: succeeded }) + } + &kern::I2cWriteRequest { busno, data } => { + match i2c::write(busno as u8, data) { + Ok(ack) => kern_send( + &kern::I2cWriteReply { succeeded: true, ack: ack }), + Err(_) => kern_send( + &kern::I2cWriteReply { succeeded: false, ack: false }) + } + } + &kern::I2cReadRequest { busno, ack } => { + match i2c::read(busno as u8, ack) { + Ok(data) => kern_send( + &kern::I2cReadReply { succeeded: true, data: data }), + Err(_) => kern_send( + &kern::I2cReadReply { succeeded: false, data: 0xff }) + } + } + &kern::I2cSwitchSelectRequest { busno, address, mask } => { + let succeeded = i2c::switch_select(busno as u8, address, mask).is_ok(); + kern_send(&kern::I2cBasicReply { succeeded: succeeded }) + } + + &kern::SpiSetConfigRequest { busno, flags, length, div, cs } => { + let succeeded = spi::set_config(busno as u8, flags, length, div, cs).is_ok(); + kern_send(&kern::SpiBasicReply { succeeded: succeeded }) + }, + &kern::SpiWriteRequest { busno, data } => { + let succeeded = spi::write(busno as u8, data).is_ok(); + kern_send(&kern::SpiBasicReply { succeeded: succeeded }) + } + &kern::SpiReadRequest { busno } => { + match spi::read(busno as u8) { + Ok(data) => kern_send( + &kern::SpiReadReply { succeeded: true, data: data }), + Err(_) => kern_send( + &kern::SpiReadReply { succeeded: false, data: 0 }) + } + } + + _ => return Ok(false) + }.and(Ok(true)) +} \ No newline at end of file diff --git a/artiq/firmware/satman/main.rs b/artiq/firmware/satman/main.rs index c524483fc..335c353df 100644 --- 
a/artiq/firmware/satman/main.rs +++ b/artiq/firmware/satman/main.rs @@ -1,4 +1,4 @@ -#![feature(never_type, panic_info_message, llvm_asm, default_alloc_error_handler)] +#![feature(never_type, panic_info_message, asm, default_alloc_error_handler)] #![no_std] #[macro_use] @@ -9,28 +9,38 @@ extern crate board_artiq; extern crate riscv; extern crate alloc; extern crate proto_artiq; +extern crate cslice; +extern crate io; +extern crate eh; use core::convert::TryFrom; use board_misoc::{csr, ident, clock, uart_logger, i2c, pmp}; #[cfg(has_si5324)] use board_artiq::si5324; -use board_artiq::{spi, drtioaux}; +#[cfg(has_si549)] +use board_artiq::si549; +#[cfg(soc_platform = "kasli")] +use board_misoc::irq; +use board_artiq::{spi, drtioaux, drtio_routing}; #[cfg(soc_platform = "efc")] use board_artiq::ad9117; -use board_artiq::drtio_routing; -use proto_artiq::drtioaux_proto::ANALYZER_MAX_SIZE; +use proto_artiq::drtioaux_proto::{SAT_PAYLOAD_MAX_SIZE, MASTER_PAYLOAD_MAX_SIZE}; #[cfg(has_drtio_eem)] use board_artiq::drtio_eem; use riscv::register::{mcause, mepc, mtval}; use dma::Manager as DmaManager; +use kernel::Manager as KernelManager; use analyzer::Analyzer; #[global_allocator] static mut ALLOC: alloc_list::ListAlloc = alloc_list::EMPTY; mod repeater; +mod routing; mod dma; mod analyzer; +mod kernel; +mod cache; fn drtiosat_reset(reset: bool) { unsafe { @@ -60,6 +70,33 @@ fn drtiosat_tsc_loaded() -> bool { } } +#[derive(Clone, Copy)] +pub enum RtioMaster { + Drtio, + Dma, + Kernel +} + +pub fn cricon_select(master: RtioMaster) { + let val = match master { + RtioMaster::Drtio => 0, + RtioMaster::Dma => 1, + RtioMaster::Kernel => 2 + }; + unsafe { + csr::cri_con::selected_write(val); + } +} + +pub fn cricon_read() -> RtioMaster { + let val = unsafe { csr::cri_con::selected_read() }; + match val { + 0 => RtioMaster::Drtio, + 1 => RtioMaster::Dma, + 2 => RtioMaster::Kernel, + _ => unreachable!() + } +} #[cfg(has_drtio_routing)] macro_rules! forward { @@ -68,7 +105,14 @@ macro_rules! forward { if hop != 0 { let repno = (hop - 1) as usize; if repno < $repeaters.len() { - return $repeaters[repno].aux_forward($packet); + if $packet.expects_response() { + return $repeaters[repno].aux_forward($packet); + } else { + let res = $repeaters[repno].aux_send($packet); + // allow the satellite to parse the packet before next + clock::spin_us(10_000); + return res; + } } else { return Err(drtioaux::Error::RoutingError); } @@ -81,9 +125,10 @@ macro_rules! forward { ($routing_table:expr, $destination:expr, $rank:expr, $repeaters:expr, $packet:expr) => {} } -fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repeaters: &mut [repeater::Repeater], - _routing_table: &mut drtio_routing::RoutingTable, _rank: &mut u8, - packet: drtioaux::Packet) -> Result<(), drtioaux::Error> { +fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmgr: &mut KernelManager, + _repeaters: &mut [repeater::Repeater], _routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8, + router: &mut routing::Router, self_destination: &mut u8, packet: drtioaux::Packet +) -> Result<(), drtioaux::Error> { // In the code below, *_chan_sel_write takes an u8 if there are fewer than 256 channels, // and u16 otherwise; hence the `as _` conversion. 
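
The new cricon_select/cricon_read helpers above multiplex the satellite's local RTIO bus between the DRTIO core, the DMA engine and the kernel CPU through the cri_con CSR. A stand-alone sketch of that mapping follows, with the CSR replaced by a plain variable so it can run on a host; encode/decode are illustrative names, not the firmware's:

    #[derive(Clone, Copy, Debug, PartialEq)]
    enum RtioMaster {
        Drtio,
        Dma,
        Kernel,
    }

    fn encode(master: RtioMaster) -> u8 {
        // same encoding that cricon_select writes to csr::cri_con::selected_write
        match master {
            RtioMaster::Drtio => 0,
            RtioMaster::Dma => 1,
            RtioMaster::Kernel => 2,
        }
    }

    fn decode(val: u8) -> RtioMaster {
        match val {
            0 => RtioMaster::Drtio,
            1 => RtioMaster::Dma,
            2 => RtioMaster::Kernel,
            _ => unreachable!(), // the register only ever holds 0..=2
        }
    }

    fn main() {
        let mut selected = encode(RtioMaster::Drtio); // default owner after link-up
        assert_eq!(decode(selected), RtioMaster::Drtio);
        selected = encode(RtioMaster::Kernel); // hand the bus to a subkernel
        assert_eq!(decode(selected), RtioMaster::Kernel);
    }
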
match packet { @@ -102,51 +147,45 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea drtioaux::send(0, &drtioaux::Packet::ResetAck) }, - drtioaux::Packet::DestinationStatusRequest { destination: _destination } => { + drtioaux::Packet::DestinationStatusRequest { destination } => { #[cfg(has_drtio_routing)] - let hop = _routing_table.0[_destination as usize][*_rank as usize]; + let hop = _routing_table.0[destination as usize][*rank as usize]; #[cfg(not(has_drtio_routing))] let hop = 0; if hop == 0 { - // async messages - if let Some(status) = _manager.check_state() { - info!("playback done, error: {}, channel: {}, timestamp: {}", status.error, status.channel, status.timestamp); - drtioaux::send(0, &drtioaux::Packet::DmaPlaybackStatus { - destination: _destination, id: status.id, error: status.error, channel: status.channel, timestamp: status.timestamp })?; - } else { - let errors; + *self_destination = destination; + let errors; + unsafe { + errors = csr::drtiosat::rtio_error_read(); + } + if errors & 1 != 0 { + let channel; unsafe { - errors = csr::drtiosat::rtio_error_read(); + channel = csr::drtiosat::sequence_error_channel_read(); + csr::drtiosat::rtio_error_write(1); } - if errors & 1 != 0 { - let channel; - unsafe { - channel = csr::drtiosat::sequence_error_channel_read(); - csr::drtiosat::rtio_error_write(1); - } - drtioaux::send(0, - &drtioaux::Packet::DestinationSequenceErrorReply { channel })?; - } else if errors & 2 != 0 { - let channel; - unsafe { - channel = csr::drtiosat::collision_channel_read(); - csr::drtiosat::rtio_error_write(2); - } - drtioaux::send(0, - &drtioaux::Packet::DestinationCollisionReply { channel })?; - } else if errors & 4 != 0 { - let channel; - unsafe { - channel = csr::drtiosat::busy_channel_read(); - csr::drtiosat::rtio_error_write(4); - } - drtioaux::send(0, - &drtioaux::Packet::DestinationBusyReply { channel })?; + drtioaux::send(0, + &drtioaux::Packet::DestinationSequenceErrorReply { channel })?; + } else if errors & 2 != 0 { + let channel; + unsafe { + channel = csr::drtiosat::collision_channel_read(); + csr::drtiosat::rtio_error_write(2); } - else { - drtioaux::send(0, &drtioaux::Packet::DestinationOkReply)?; + drtioaux::send(0, + &drtioaux::Packet::DestinationCollisionReply { channel })?; + } else if errors & 4 != 0 { + let channel; + unsafe { + channel = csr::drtiosat::busy_channel_read(); + csr::drtiosat::rtio_error_write(4); } + drtioaux::send(0, + &drtioaux::Packet::DestinationBusyReply { channel })?; + } + else { + drtioaux::send(0, &drtioaux::Packet::DestinationOkReply)?; } } @@ -157,7 +196,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea if hop <= csr::DRTIOREP.len() { let repno = hop - 1; match _repeaters[repno].aux_forward(&drtioaux::Packet::DestinationStatusRequest { - destination: _destination + destination: destination }) { Ok(()) => (), Err(drtioaux::Error::LinkDown) => drtioaux::send(0, &drtioaux::Packet::DestinationDownReply)?, @@ -171,7 +210,6 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } } } - Ok(()) } @@ -186,18 +224,18 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea drtioaux::send(0, &drtioaux::Packet::RoutingAck) } #[cfg(has_drtio_routing)] - drtioaux::Packet::RoutingSetRank { rank } => { - *_rank = rank; - drtio_routing::interconnect_enable_all(_routing_table, rank); + drtioaux::Packet::RoutingSetRank { rank: new_rank } => { + *rank = new_rank; + 
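
A sketch of the rtio_error decoding done for DestinationStatusRequest above: bits 0, 1 and 2 of the error register flag a sequence error, a collision and a busy channel respectively; the handler reads the offending channel, clears the flag by writing it back, and answers with the matching reply, or DestinationOkReply when nothing is pending. The CSR reads and the per-error channel registers are collapsed into plain arguments in this host-side version:

    #[derive(Debug, PartialEq)]
    enum DestinationReply {
        SequenceError { channel: u16 },
        Collision { channel: u16 },
        Busy { channel: u16 },
        Ok,
    }

    fn decode_rtio_errors(errors: &mut u8, channel: u16) -> DestinationReply {
        if *errors & 1 != 0 {
            *errors &= !1; // clear the flag, as the firmware does via rtio_error_write(1)
            DestinationReply::SequenceError { channel }
        } else if *errors & 2 != 0 {
            *errors &= !2;
            DestinationReply::Collision { channel }
        } else if *errors & 4 != 0 {
            *errors &= !4;
            DestinationReply::Busy { channel }
        } else {
            DestinationReply::Ok
        }
    }

    fn main() {
        let mut errors = 0b110; // collision and busy pending
        assert_eq!(decode_rtio_errors(&mut errors, 7), DestinationReply::Collision { channel: 7 });
        assert_eq!(decode_rtio_errors(&mut errors, 7), DestinationReply::Busy { channel: 7 });
        assert_eq!(decode_rtio_errors(&mut errors, 7), DestinationReply::Ok);
    }

Only one error is reported per request; repeated DestinationStatusRequest polls drain the remaining flags, mirroring the if/else chain in the handler.
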
drtio_routing::interconnect_enable_all(_routing_table, new_rank); - let rep_rank = rank + 1; + let rep_rank = new_rank + 1; for rep in _repeaters.iter() { if let Err(e) = rep.set_rank(rep_rank) { error!("failed to set rank ({})", e); } } - info!("rank: {}", rank); + info!("rank: {}", new_rank); info!("routing table: {}", _routing_table); drtioaux::send(0, &drtioaux::Packet::RoutingAck) @@ -213,7 +251,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } drtioaux::Packet::MonitorRequest { destination: _destination, channel, probe } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let value; #[cfg(has_rtio_moninj)] unsafe { @@ -230,7 +268,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea drtioaux::send(0, &reply) }, drtioaux::Packet::InjectionRequest { destination: _destination, channel, overrd, value } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); #[cfg(has_rtio_moninj)] unsafe { csr::rtio_moninj::inj_chan_sel_write(channel as _); @@ -240,7 +278,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea Ok(()) }, drtioaux::Packet::InjectionStatusRequest { destination: _destination, channel, overrd } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let value; #[cfg(has_rtio_moninj)] unsafe { @@ -256,22 +294,22 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea }, drtioaux::Packet::I2cStartRequest { destination: _destination, busno } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = i2c::start(busno).is_ok(); drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded }) } drtioaux::Packet::I2cRestartRequest { destination: _destination, busno } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = i2c::restart(busno).is_ok(); drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded }) } drtioaux::Packet::I2cStopRequest { destination: _destination, busno } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = i2c::stop(busno).is_ok(); drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded }) } drtioaux::Packet::I2cWriteRequest { destination: _destination, busno, data } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); match i2c::write(busno, data) { Ok(ack) => drtioaux::send(0, &drtioaux::Packet::I2cWriteReply { succeeded: true, ack: ack }), @@ -280,7 +318,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } } drtioaux::Packet::I2cReadRequest { destination: _destination, busno, ack } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); match i2c::read(busno, ack) { Ok(data) => drtioaux::send(0, &drtioaux::Packet::I2cReadReply { succeeded: true, data: data }), @@ -289,25 +327,25 @@ fn 
process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } } drtioaux::Packet::I2cSwitchSelectRequest { destination: _destination, busno, address, mask } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = i2c::switch_select(busno, address, mask).is_ok(); drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded }) } drtioaux::Packet::SpiSetConfigRequest { destination: _destination, busno, flags, length, div, cs } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = spi::set_config(busno, flags, length, div, cs).is_ok(); drtioaux::send(0, &drtioaux::Packet::SpiBasicReply { succeeded: succeeded }) }, drtioaux::Packet::SpiWriteRequest { destination: _destination, busno, data } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let succeeded = spi::write(busno, data).is_ok(); drtioaux::send(0, &drtioaux::Packet::SpiBasicReply { succeeded: succeeded }) } drtioaux::Packet::SpiReadRequest { destination: _destination, busno } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); match spi::read(busno) { Ok(data) => drtioaux::send(0, &drtioaux::Packet::SpiReadReply { succeeded: true, data: data }), @@ -317,7 +355,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } drtioaux::Packet::AnalyzerHeaderRequest { destination: _destination } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); + forward!(_routing_table, _destination, *rank, _repeaters, &packet); let header = analyzer.get_header(); drtioaux::send(0, &drtioaux::Packet::AnalyzerHeader { total_byte_count: header.total_byte_count, @@ -327,8 +365,8 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } drtioaux::Packet::AnalyzerDataRequest { destination: _destination } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); - let mut data_slice: [u8; ANALYZER_MAX_SIZE] = [0; ANALYZER_MAX_SIZE]; + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE]; let meta = analyzer.get_data(&mut data_slice); drtioaux::send(0, &drtioaux::Packet::AnalyzerData { last: meta.last, @@ -337,26 +375,114 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea }) } - #[cfg(has_rtio_dma)] - drtioaux::Packet::DmaAddTraceRequest { destination: _destination, id, last, length, trace } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); - let succeeded = _manager.add(id, last, &trace, length as usize).is_ok(); - drtioaux::send(0, - &drtioaux::Packet::DmaAddTraceReply { succeeded: succeeded }) + drtioaux::Packet::DmaAddTraceRequest { source, destination, id, status, length, trace } => { + forward!(_routing_table, destination, *rank, _repeaters, &packet); + *self_destination = destination; + let succeeded = dmamgr.add(source, id, status, &trace, length as usize).is_ok(); + router.send(drtioaux::Packet::DmaAddTraceReply { + source: *self_destination, destination: source, id: id, succeeded: succeeded + }, _routing_table, *rank, *self_destination) } - #[cfg(has_rtio_dma)] - 
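
Both the forward! macro used throughout these handlers and the new routing::Router introduced later in this patch make the same basic decision: look up the hop for a destination at the local rank, keep the packet if it is addressed to this satellite, hand it to the corresponding repeater if the hop points into the subtree, and push it upstream otherwise. A simplified sketch of that lookup, with the routing table flattened to nested Vecs and all numbers purely illustrative:

    #[derive(Debug, PartialEq)]
    enum Route {
        Local,
        Downstream { repno: usize },
        Upstream,
    }

    fn route_for(
        routing_table: &[Vec<u8>],
        rank: u8,
        self_destination: u8,
        n_repeaters: usize,
        destination: u8,
    ) -> Route {
        if destination == self_destination {
            return Route::Local;
        }
        // hop 0 means "handle here", hop n > 0 means "repeater n-1"
        let hop = routing_table[destination as usize][rank as usize] as usize;
        if hop > 0 && hop <= n_repeaters {
            Route::Downstream { repno: hop - 1 }
        } else {
            Route::Upstream
        }
    }

    fn main() {
        // destination 0 is the master (upstream), 1 is this satellite,
        // 2 sits behind the first repeater -- purely illustrative numbers
        let table = vec![vec![0u8], vec![0], vec![1]];
        assert_eq!(route_for(&table, 0, 1, 2, 1), Route::Local);
        assert_eq!(route_for(&table, 0, 1, 2, 2), Route::Downstream { repno: 0 });
        assert_eq!(route_for(&table, 0, 1, 2, 0), Route::Upstream);
    }

Replies such as DmaAddTraceReply follow the same path in reverse: the request's source becomes the reply's destination, with the local self_destination filled in as the new source.
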
drtioaux::Packet::DmaRemoveTraceRequest { destination: _destination, id } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); - let succeeded = _manager.erase(id).is_ok(); - drtioaux::send(0, - &drtioaux::Packet::DmaRemoveTraceReply { succeeded: succeeded }) + drtioaux::Packet::DmaAddTraceReply { source, destination: _destination, id, succeeded } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + dmamgr.ack_upload(kernelmgr, source, id, succeeded, router, *rank, *self_destination, _routing_table); + Ok(()) } - #[cfg(has_rtio_dma)] - drtioaux::Packet::DmaPlaybackRequest { destination: _destination, id, timestamp } => { - forward!(_routing_table, _destination, *_rank, _repeaters, &packet); - let succeeded = _manager.playback(id, timestamp).is_ok(); + drtioaux::Packet::DmaRemoveTraceRequest { source, destination: _destination, id } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + let succeeded = dmamgr.erase(source, id).is_ok(); + router.send(drtioaux::Packet::DmaRemoveTraceReply { + destination: source, succeeded: succeeded + }, _routing_table, *rank, *self_destination) + } + drtioaux::Packet::DmaPlaybackRequest { source, destination: _destination, id, timestamp } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + // no DMA with a running kernel + let succeeded = !kernelmgr.is_running() && dmamgr.playback(source, id, timestamp).is_ok(); + router.send(drtioaux::Packet::DmaPlaybackReply { + destination: source, succeeded: succeeded + }, _routing_table, *rank, *self_destination) + } + drtioaux::Packet::DmaPlaybackReply { destination: _destination, succeeded } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + if !succeeded { + kernelmgr.ddma_nack(); + } + Ok(()) + } + drtioaux::Packet::DmaPlaybackStatus { source: _, destination: _destination, id, error, channel, timestamp } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + dmamgr.remote_finished(kernelmgr, id, error, channel, timestamp); + Ok(()) + } + + drtioaux::Packet::SubkernelAddDataRequest { destination, id, status, length, data } => { + forward!(_routing_table, destination, *rank, _repeaters, &packet); + *self_destination = destination; + let succeeded = kernelmgr.add(id, status, &data, length as usize).is_ok(); drtioaux::send(0, - &drtioaux::Packet::DmaPlaybackReply { succeeded: succeeded }) + &drtioaux::Packet::SubkernelAddDataReply { succeeded: succeeded }) + } + drtioaux::Packet::SubkernelLoadRunRequest { source, destination: _destination, id, run } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + let mut succeeded = kernelmgr.load(id).is_ok(); + // allow preloading a kernel with delayed run + if run { + if dmamgr.running() { + // cannot run kernel while DDMA is running + succeeded = false; + } else { + succeeded |= kernelmgr.run(source, id).is_ok(); + } + } + router.send(drtioaux::Packet::SubkernelLoadRunReply { + destination: source, succeeded: succeeded + }, + _routing_table, *rank, *self_destination) + } + drtioaux::Packet::SubkernelLoadRunReply { destination: _destination, succeeded } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + // received if local subkernel started another, remote subkernel + kernelmgr.subkernel_load_run_reply(succeeded, *self_destination); + Ok(()) + } + drtioaux::Packet::SubkernelFinished { destination: _destination, id, with_exception, exception_src } => { + forward!(_routing_table, 
_destination, *rank, _repeaters, &packet); + kernelmgr.remote_subkernel_finished(id, with_exception, exception_src); + Ok(()) + } + drtioaux::Packet::SubkernelExceptionRequest { destination: _destination } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE]; + let meta = kernelmgr.exception_get_slice(&mut data_slice); + drtioaux::send(0, &drtioaux::Packet::SubkernelException { + last: meta.status.is_last(), + length: meta.len, + data: data_slice, + }) + } + drtioaux::Packet::SubkernelMessage { source, destination: _destination, id, status, length, data } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + kernelmgr.message_handle_incoming(status, length as usize, id, &data); + router.send(drtioaux::Packet::SubkernelMessageAck { + destination: source + }, _routing_table, *rank, *self_destination) + } + drtioaux::Packet::SubkernelMessageAck { destination: _destination } => { + forward!(_routing_table, _destination, *rank, _repeaters, &packet); + if kernelmgr.message_ack_slice() { + let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE]; + if let Some(meta) = kernelmgr.message_get_slice(&mut data_slice) { + // route and not send immediately as ACKs are not a beginning of a transaction + router.route(drtioaux::Packet::SubkernelMessage { + source: *self_destination, destination: meta.destination, id: kernelmgr.get_current_id().unwrap(), + status: meta.status, length: meta.len as u16, data: data_slice + }, _routing_table, *rank, *self_destination); + } else { + error!("Error receiving message slice"); + } + } + Ok(()) } _ => { @@ -367,27 +493,25 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea } fn process_aux_packets(dma_manager: &mut DmaManager, analyzer: &mut Analyzer, - repeaters: &mut [repeater::Repeater], - routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8) { + kernelmgr: &mut KernelManager, repeaters: &mut [repeater::Repeater], + routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8, router: &mut routing::Router, + destination: &mut u8) { let result = drtioaux::recv(0).and_then(|packet| { - if let Some(packet) = packet { - process_aux_packet(dma_manager, analyzer, repeaters, routing_table, rank, packet) + if let Some(packet) = packet.or_else(|| router.get_local_packet()) { + process_aux_packet(dma_manager, analyzer, kernelmgr, + repeaters, routing_table, rank, router, destination, packet) } else { Ok(()) } }); - match result { - Ok(()) => (), - Err(e) => warn!("aux packet error ({})", e) + if let Err(e) = result { + warn!("aux packet error ({})", e); } } fn drtiosat_process_errors() { - let errors; - unsafe { - errors = csr::drtiosat::protocol_error_read(); - } + let errors = unsafe { csr::drtiosat::protocol_error_read() }; if errors & 1 != 0 { error!("received packet of an unknown type"); } @@ -395,26 +519,29 @@ fn drtiosat_process_errors() { error!("received truncated packet"); } if errors & 4 != 0 { - let destination; - unsafe { - destination = csr::drtiosat::buffer_space_timeout_dest_read(); - } + let destination = unsafe { + csr::drtiosat::buffer_space_timeout_dest_read() + }; error!("timeout attempting to get buffer space from CRI, destination=0x{:02x}", destination) } - if errors & 8 != 0 { - let channel; - let timestamp_event; - let timestamp_counter; - unsafe { - channel = csr::drtiosat::underflow_channel_read(); - timestamp_event = 
csr::drtiosat::underflow_timestamp_event_read() as i64; - timestamp_counter = csr::drtiosat::underflow_timestamp_counter_read() as i64; + let drtiosat_active = unsafe { csr::cri_con::selected_read() == 0 }; + if drtiosat_active { + // RTIO errors are handled by ksupport and dma manager + if errors & 8 != 0 { + let channel; + let timestamp_event; + let timestamp_counter; + unsafe { + channel = csr::drtiosat::underflow_channel_read(); + timestamp_event = csr::drtiosat::underflow_timestamp_event_read() as i64; + timestamp_counter = csr::drtiosat::underflow_timestamp_counter_read() as i64; + } + error!("write underflow, channel={}, timestamp={}, counter={}, slack={}", + channel, timestamp_event, timestamp_counter, timestamp_event-timestamp_counter); + } + if errors & 16 != 0 { + error!("write overflow"); } - error!("write underflow, channel={}, timestamp={}, counter={}, slack={}", - channel, timestamp_event, timestamp_counter, timestamp_event-timestamp_counter); - } - if errors & 16 != 0 { - error!("write overflow"); } unsafe { csr::drtiosat::protocol_error_write(errors); @@ -472,6 +599,36 @@ const SI5324_SETTINGS: si5324::FrequencySettings crystal_as_ckin2: true }; +#[cfg(all(has_si549, rtio_frequency = "125.0"))] +const SI549_SETTINGS: si549::FrequencySetting = si549::FrequencySetting { + main: si549::DividerConfig { + hsdiv: 0x058, + lsdiv: 0, + fbdiv: 0x04815791F25, + }, + helper: si549::DividerConfig { + // 125MHz*32767/32768 + hsdiv: 0x058, + lsdiv: 0, + fbdiv: 0x04814E8F442, + }, +}; + +#[cfg(all(has_si549, rtio_frequency = "100.0"))] +const SI549_SETTINGS: si549::FrequencySetting = si549::FrequencySetting { + main: si549::DividerConfig { + hsdiv: 0x06C, + lsdiv: 0, + fbdiv: 0x046C5F49797, + }, + helper: si549::DividerConfig { + // 100MHz*32767/32768 + hsdiv: 0x06C, + lsdiv: 0, + fbdiv: 0x046C5670BBD, + }, +}; + #[cfg(not(soc_platform = "efc"))] fn sysclk_setup() { let switched = unsafe { @@ -484,9 +641,12 @@ fn sysclk_setup() { else { #[cfg(has_si5324)] si5324::setup(&SI5324_SETTINGS, si5324::Input::Ckin1).expect("cannot initialize Si5324"); + #[cfg(has_si549)] + si549::main_setup(&SI549_SETTINGS).expect("cannot initialize main Si549"); + info!("Switching sys clock, rebooting..."); // delay for clean UART log, wait until UART FIFO is empty - clock::spin_us(1300); + clock::spin_us(3000); unsafe { csr::gt_drtio::stable_clkin_write(1); } @@ -507,6 +667,10 @@ pub extern fn main() -> i32 { ALLOC.add_range(&mut _fheap, &mut _eheap); pmp::init_stack_guard(&_sstack_guard as *const u8 as usize); } + #[cfg(soc_platform = "kasli")] + irq::enable_interrupts(); + #[cfg(has_wrpll)] + irq::enable(csr::WRPLL_INTERRUPT); clock::init(); uart_logger::ConsoleLogger::register(); @@ -542,25 +706,43 @@ pub extern fn main() -> i32 { #[cfg(not(soc_platform = "efc"))] sysclk_setup(); + #[cfg(has_si549)] + si549::helper_setup(&SI549_SETTINGS).expect("cannot initialize helper Si549"); + #[cfg(soc_platform = "efc")] let mut io_expander; #[cfg(soc_platform = "efc")] { + let p3v3_fmc_en_pin; + let vadj_fmc_en_pin; + + #[cfg(hw_rev = "v1.0")] + { + p3v3_fmc_en_pin = 0; + vadj_fmc_en_pin = 1; + } + #[cfg(hw_rev = "v1.1")] + { + p3v3_fmc_en_pin = 1; + vadj_fmc_en_pin = 7; + } + io_expander = board_misoc::io_expander::IoExpander::new().unwrap(); - + io_expander.init().expect("I2C I/O expander initialization failed"); + // Enable LEDs io_expander.set_oe(0, 1 << 5 | 1 << 6 | 1 << 7).unwrap(); // Enable VADJ and P3V3_FMC - io_expander.set_oe(1, 1 << 0 | 1 << 1).unwrap(); + io_expander.set_oe(1, 1 << p3v3_fmc_en_pin | 1 << 
vadj_fmc_en_pin).unwrap(); - io_expander.set(1, 0, true); - io_expander.set(1, 1, true); + io_expander.set(1, p3v3_fmc_en_pin, true); + io_expander.set(1, vadj_fmc_en_pin, true); io_expander.service().unwrap(); } - #[cfg(not(has_drtio_eem))] + #[cfg(not(soc_platform = "efc"))] unsafe { csr::gt_drtio::txenable_write(0xffffffffu32 as _); } @@ -584,6 +766,7 @@ pub extern fn main() -> i32 { } let mut routing_table = drtio_routing::RoutingTable::default_empty(); let mut rank = 1; + let mut destination = 1; let mut hardware_tick_ts = 0; @@ -591,10 +774,12 @@ pub extern fn main() -> i32 { ad9117::init().expect("AD9117 initialization failed"); loop { + let mut router = routing::Router::new(); + while !drtiosat_link_rx_up() { drtiosat_process_errors(); for rep in repeaters.iter_mut() { - rep.service(&routing_table, rank); + rep.service(&routing_table, rank, destination, &mut router); } #[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))] { @@ -613,23 +798,28 @@ pub extern fn main() -> i32 { si5324::siphaser::calibrate_skew().expect("failed to calibrate skew"); } - // DMA manager created here, so when link is dropped, all DMA traces - // are cleared out for a clean slate on subsequent connections, - // without a manual intervention. + #[cfg(has_wrpll)] + si549::wrpll::select_recovered_clock(true); + + // various managers created here, so when link is dropped, DMA traces, + // analyzer logs, kernels are cleared and/or stopped for a clean slate + // on subsequent connections, without a manual intervention. let mut dma_manager = DmaManager::new(); - - // Reset the analyzer as well. let mut analyzer = Analyzer::new(); + let mut kernelmgr = KernelManager::new(); + cricon_select(RtioMaster::Drtio); drtioaux::reset(0); drtiosat_reset(false); drtiosat_reset_phy(false); while drtiosat_link_rx_up() { drtiosat_process_errors(); - process_aux_packets(&mut dma_manager, &mut analyzer, &mut repeaters, &mut routing_table, &mut rank); + process_aux_packets(&mut dma_manager, &mut analyzer, + &mut kernelmgr, &mut repeaters, &mut routing_table, + &mut rank, &mut router, &mut destination); for rep in repeaters.iter_mut() { - rep.service(&routing_table, rank); + rep.service(&routing_table, rank, destination, &mut router); } #[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))] { @@ -650,6 +840,26 @@ pub extern fn main() -> i32 { error!("aux packet error: {}", e); } } + if let Some(status) = dma_manager.get_status() { + info!("playback done, error: {}, channel: {}, timestamp: {}", status.error, status.channel, status.timestamp); + router.route(drtioaux::Packet::DmaPlaybackStatus { + source: destination, destination: status.source, id: status.id, + error: status.error, channel: status.channel, timestamp: status.timestamp + }, &routing_table, rank, destination); + } + + kernelmgr.process_kern_requests(&mut router, &routing_table, rank, destination, &mut dma_manager); + + #[cfg(has_drtio_routing)] + if let Some((repno, packet)) = router.get_downstream_packet() { + if let Err(e) = repeaters[repno].aux_send(&packet) { + warn!("[REP#{}] Error when sending packet to satellite ({:?})", repno, e) + } + } + + if let Some(packet) = router.get_upstream_packet() { + drtioaux::send(0, &packet).unwrap(); + } } drtiosat_reset_phy(true); @@ -658,11 +868,27 @@ pub extern fn main() -> i32 { info!("uplink is down, switching to local oscillator clock"); #[cfg(has_si5324)] si5324::siphaser::select_recovered_clock(false).expect("failed to switch clocks"); + #[cfg(has_wrpll)] + si549::wrpll::select_recovered_clock(false); } } #[cfg(soc_platform = 
"efc")] fn enable_error_led() { + let p3v3_fmc_en_pin; + let vadj_fmc_en_pin; + + #[cfg(hw_rev = "v1.0")] + { + p3v3_fmc_en_pin = 0; + vadj_fmc_en_pin = 1; + } + #[cfg(hw_rev = "v1.1")] + { + p3v3_fmc_en_pin = 1; + vadj_fmc_en_pin = 7; + } + let mut io_expander = board_misoc::io_expander::IoExpander::new().unwrap(); // Keep LEDs enabled @@ -671,10 +897,10 @@ fn enable_error_led() { io_expander.set(0, 7, true); // Keep VADJ and P3V3_FMC enabled - io_expander.set_oe(1, 1 << 0 | 1 << 1).unwrap(); + io_expander.set_oe(1, 1 << p3v3_fmc_en_pin | 1 << vadj_fmc_en_pin).unwrap(); - io_expander.set(1, 0, true); - io_expander.set(1, 1, true); + io_expander.set(1, p3v3_fmc_en_pin, true); + io_expander.set(1, vadj_fmc_en_pin, true); io_expander.service().unwrap(); } @@ -683,23 +909,33 @@ fn enable_error_led() { pub extern fn exception(_regs: *const u32) { let pc = mepc::read(); let cause = mcause::read().cause(); - - fn hexdump(addr: u32) { - let addr = (addr - addr % 4) as *const u32; - let mut ptr = addr; - println!("@ {:08p}", ptr); - for _ in 0..4 { - print!("+{:04x}: ", ptr as usize - addr as usize); - print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); - print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); - print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); - print!("{:08x}\n", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); + match cause { + mcause::Trap::Interrupt(_source) => { + #[cfg(has_wrpll)] + if irq::is_pending(csr::WRPLL_INTERRUPT) { + si549::wrpll::interrupt_handler(); + } + }, + + mcause::Trap::Exception(e) => { + fn hexdump(addr: u32) { + let addr = (addr - addr % 4) as *const u32; + let mut ptr = addr; + println!("@ {:08p}", ptr); + for _ in 0..4 { + print!("+{:04x}: ", ptr as usize - addr as usize); + print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); + print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); + print!("{:08x} ", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); + print!("{:08x}\n", unsafe { *ptr }); ptr = ptr.wrapping_offset(1); + } + } + + hexdump(u32::try_from(pc).unwrap()); + let mtval = mtval::read(); + panic!("exception {:?} at PC 0x{:x}, trap value 0x{:x}", e, u32::try_from(pc).unwrap(), mtval) } } - - hexdump(u32::try_from(pc).unwrap()); - let mtval = mtval::read(); - panic!("exception {:?} at PC 0x{:x}, trap value 0x{:x}", cause, u32::try_from(pc).unwrap(), mtval) } #[no_mangle] diff --git a/artiq/firmware/satman/repeater.rs b/artiq/firmware/satman/repeater.rs index 9969d5099..cd18badc2 100644 --- a/artiq/firmware/satman/repeater.rs +++ b/artiq/firmware/satman/repeater.rs @@ -1,6 +1,7 @@ use board_artiq::{drtioaux, drtio_routing}; #[cfg(has_drtio_routing)] use board_misoc::{csr, clock}; +use routing::Router; #[cfg(has_drtio_routing)] fn rep_link_rx_up(repno: u8) -> bool { @@ -48,7 +49,7 @@ impl Repeater { self.state == RepeaterState::Up } - pub fn service(&mut self, routing_table: &drtio_routing::RoutingTable, rank: u8) { + pub fn service(&mut self, routing_table: &drtio_routing::RoutingTable, rank: u8, self_destination: u8, router: &mut Router) { self.process_local_errors(); match self.state { @@ -106,7 +107,7 @@ impl Repeater { } } RepeaterState::Up => { - self.process_unsolicited_aux(); + self.process_unsolicited_aux(routing_table, rank, self_destination, router); if !rep_link_rx_up(self.repno) { info!("[REP#{}] link is down", self.repno); self.state = RepeaterState::Down; @@ -121,9 +122,10 @@ impl Repeater { } } - fn process_unsolicited_aux(&self) { + fn process_unsolicited_aux(&self, 
routing_table: &drtio_routing::RoutingTable, + rank: u8, self_destination: u8, router: &mut Router) { match drtioaux::recv(self.auxno) { - Ok(Some(packet)) => warn!("[REP#{}] unsolicited aux packet: {:?}", self.repno, packet), + Ok(Some(packet)) => router.route(packet, routing_table, rank, self_destination), Ok(None) => (), Err(_) => warn!("[REP#{}] aux packet error", self.repno) } @@ -180,15 +182,19 @@ impl Repeater { } pub fn aux_forward(&self, request: &drtioaux::Packet) -> Result<(), drtioaux::Error> { - if self.state != RepeaterState::Up { - return Err(drtioaux::Error::LinkDown); - } - drtioaux::send(self.auxno, request).unwrap(); + self.aux_send(request)?; let reply = self.recv_aux_timeout(200)?; drtioaux::send(0, &reply).unwrap(); Ok(()) } + pub fn aux_send(&self, request: &drtioaux::Packet) -> Result<(), drtioaux::Error> { + if self.state != RepeaterState::Up { + return Err(drtioaux::Error::LinkDown); + } + drtioaux::send(self.auxno, request) + } + pub fn sync_tsc(&self) -> Result<(), drtioaux::Error> { if self.state != RepeaterState::Up { return Ok(()); @@ -199,7 +205,6 @@ impl Repeater { (csr::DRTIOREP[repno].set_time_write)(1); while (csr::DRTIOREP[repno].set_time_read)() == 1 {} } - // TSCAck is the only aux packet that is sent spontaneously // by the satellite, in response to a TSC set on the RT link. let reply = self.recv_aux_timeout(10000)?; @@ -275,7 +280,7 @@ pub struct Repeater { impl Repeater { pub fn new(_repno: u8) -> Repeater { Repeater::default() } - pub fn service(&self, _routing_table: &drtio_routing::RoutingTable, _rank: u8) { } + pub fn service(&self, _routing_table: &drtio_routing::RoutingTable, _rank: u8, _destination: u8, _router: &mut Router) { } pub fn sync_tsc(&self) -> Result<(), drtioaux::Error> { Ok(()) } diff --git a/artiq/firmware/satman/routing.rs b/artiq/firmware/satman/routing.rs new file mode 100644 index 000000000..cb17d6822 --- /dev/null +++ b/artiq/firmware/satman/routing.rs @@ -0,0 +1,169 @@ +use alloc::{vec::Vec, collections::vec_deque::VecDeque}; +use board_artiq::{drtioaux, drtio_routing}; +#[cfg(has_drtio_routing)] +use board_misoc::csr; +use core::cmp::min; +use proto_artiq::drtioaux_proto::PayloadStatus; +use SAT_PAYLOAD_MAX_SIZE; +use MASTER_PAYLOAD_MAX_SIZE; + +/* represents data that has to be sent with the aux protocol */ +#[derive(Debug)] +pub struct Sliceable { + it: usize, + data: Vec, + destination: u8 +} + +pub struct SliceMeta { + pub destination: u8, + pub len: u16, + pub status: PayloadStatus +} + +macro_rules! get_slice_fn { + ( $name:tt, $size:expr ) => { + pub fn $name(&mut self, data_slice: &mut [u8; $size]) -> SliceMeta { + let first = self.it == 0; + let len = min($size, self.data.len() - self.it); + let last = self.it + len == self.data.len(); + let status = PayloadStatus::from_status(first, last); + data_slice[..len].clone_from_slice(&self.data[self.it..self.it+len]); + self.it += len; + + SliceMeta { + destination: self.destination, + len: len as u16, + status: status + } + } + }; +} + +impl Sliceable { + pub fn new(destination: u8, data: Vec) -> Sliceable { + Sliceable { + it: 0, + data: data, + destination: destination + } + } + + pub fn at_end(&self) -> bool { + self.it == self.data.len() + } + + pub fn extend(&mut self, data: &[u8]) { + self.data.extend(data); + } + + get_slice_fn!(get_slice_sat, SAT_PAYLOAD_MAX_SIZE); + get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE); +} + +// Packets from downstream (further satellites) are received and routed appropriately. 
+// they're passed as soon as possible downstream (within the subtree), or sent upstream, +// which is notified about pending packets. +// for rank 1 (connected to master) satellites, these packets are passed as an answer to DestinationStatusRequest; +// for higher ranks, after getting a notification, it will transact with downstream to get the pending packets. + +// forward! macro is not deprecated, as routable packets are only these that can originate +// from both master and satellite, e.g. DDMA and Subkernel. + +pub struct Router { + upstream_queue: VecDeque, + local_queue: VecDeque, + #[cfg(has_drtio_routing)] + downstream_queue: VecDeque<(usize, drtioaux::Packet)>, +} + +impl Router { + pub fn new() -> Router { + Router { + upstream_queue: VecDeque::new(), + local_queue: VecDeque::new(), + #[cfg(has_drtio_routing)] + downstream_queue: VecDeque::new(), + } + } + + // called by local sources (DDMA, kernel) and by repeaters on receiving async data + // messages are always buffered for both upstream and downstream + pub fn route(&mut self, packet: drtioaux::Packet, + _routing_table: &drtio_routing::RoutingTable, _rank: u8, + self_destination: u8 + ) { + let destination = packet.routable_destination(); + #[cfg(has_drtio_routing)] + { + if let Some(destination) = destination { + let hop = _routing_table.0[destination as usize][_rank as usize] as usize; + if destination == self_destination { + self.local_queue.push_back(packet); + } else if hop > 0 && hop < csr::DRTIOREP.len() { + let repno = (hop - 1) as usize; + self.downstream_queue.push_back((repno, packet)); + } else { + self.upstream_queue.push_back(packet); + } + } else { + error!("Received an unroutable packet: {:?}", packet); + } + } + #[cfg(not(has_drtio_routing))] + { + if destination == Some(self_destination) { + self.local_queue.push_back(packet); + } else { + self.upstream_queue.push_back(packet); + } + } + } + + // Sends a packet to a required destination, routing if it's necessary + pub fn send(&mut self, packet: drtioaux::Packet, + _routing_table: &drtio_routing::RoutingTable, + _rank: u8, _destination: u8 + ) -> Result<(), drtioaux::Error> { + #[cfg(has_drtio_routing)] + { + let destination = packet.routable_destination(); + if let Some(destination) = destination { + let hop = _routing_table.0[destination as usize][_rank as usize] as usize; + if destination == 0 { + // response is needed immediately if master required it + drtioaux::send(0, &packet)?; + } else if !(hop > 0 && hop < csr::DRTIOREP.len()) { + // higher rank can wait + self.upstream_queue.push_back(packet); + } else { + let repno = (hop - 1) as usize; + // transaction will occur at closest possible opportunity + self.downstream_queue.push_back((repno, packet)); + } + Ok(()) + } else { + // packet not supported in routing, fallback - sent directly + drtioaux::send(0, &packet) + } + } + #[cfg(not(has_drtio_routing))] + { + drtioaux::send(0, &packet) + } + } + + pub fn get_upstream_packet(&mut self) -> Option { + let packet = self.upstream_queue.pop_front(); + packet + } + + #[cfg(has_drtio_routing)] + pub fn get_downstream_packet(&mut self) -> Option<(usize, drtioaux::Packet)> { + self.downstream_queue.pop_front() + } + + pub fn get_local_packet(&mut self) -> Option { + self.local_queue.pop_front() + } +} \ No newline at end of file diff --git a/artiq/firmware/satman/satman.ld b/artiq/firmware/satman/satman.ld deleted file mode 100644 index f58ef38d8..000000000 --- a/artiq/firmware/satman/satman.ld +++ /dev/null @@ -1,78 +0,0 @@ -INCLUDE 
generated/output_format.ld -INCLUDE generated/regions.ld -ENTRY(_reset_handler) - -SECTIONS -{ - .vectors : - { - *(.vectors) - } > main_ram - - .text : - { - *(.text .text.*) - . = ALIGN(0x40000); - } > main_ram - - .eh_frame : - { - __eh_frame_start = .; - KEEP(*(.eh_frame)) - __eh_frame_end = .; - } > main_ram - - /* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */ - .got : - { - PROVIDE(_GLOBAL_OFFSET_TABLE_ = .); - *(.got) - } > main_ram - - .got.plt : - { - *(.got.plt) - } > main_ram - - .rodata : - { - _frodata = .; - *(.rodata .rodata.*) - _erodata = .; - } > main_ram - - .data : - { - *(.data .data.*) - } > main_ram - - .sdata : - { - *(.sdata .sdata.*) - } > main_ram - - .bss (NOLOAD) : ALIGN(4) - { - _fbss = .; - *(.sbss .sbss.* .bss .bss.*); - . = ALIGN(4); - _ebss = .; - } > main_ram - - .stack (NOLOAD) : ALIGN(0x1000) - { - _sstack_guard = .; - . += 0x1000; - _estack = .; - . += 0x10000; - _fstack = . - 16; - } > main_ram - - /* 64MB heap for alloc use */ - .heap (NOLOAD) : ALIGN(16) - { - _fheap = .; - . = . + 0x4000000; - _eheap = .; - } > main_ram -} diff --git a/artiq/frontend/afws_client.py b/artiq/frontend/afws_client.py index 49ce122c8..2faaf560b 100755 --- a/artiq/frontend/afws_client.py +++ b/artiq/frontend/afws_client.py @@ -146,7 +146,8 @@ class Client: print(error_msg) table = PrettyTable() table.field_names = ["Variant", "Expiry date"] - table.add_rows(variants) + for variant in variants: + table.add_row(variant) print(table) sys.exit(1) return variants[0][0] @@ -244,10 +245,11 @@ def main(): sys.exit(1) zip_unarchive(contents, args.directory) elif args.action == "get_variants": - data = client.get_variants() + variants = client.get_variants() table = PrettyTable() table.field_names = ["Variant", "Expiry date"] - table.add_rows(data) + for variant in variants: + table.add_row(variant) print(table) elif args.action == "get_json": if args.variant: diff --git a/artiq/frontend/aqctl_coreanalyzer_proxy.py b/artiq/frontend/aqctl_coreanalyzer_proxy.py new file mode 100755 index 000000000..34726e47a --- /dev/null +++ b/artiq/frontend/aqctl_coreanalyzer_proxy.py @@ -0,0 +1,112 @@ +#!/usr/bin/env python3 + +import argparse +import asyncio +import atexit +import logging + +from sipyco.asyncio_tools import AsyncioServer, SignalHandler, atexit_register_coroutine +from sipyco.pc_rpc import Server +from sipyco import common_args + +from artiq.coredevice.comm_analyzer import get_analyzer_dump, ANALYZER_MAGIC + + +logger = logging.getLogger(__name__) + + +# simplified version of sipyco Broadcaster +class ProxyServer(AsyncioServer): + def __init__(self, queue_limit=8): + AsyncioServer.__init__(self) + self._recipients = set() + self._queue_limit = queue_limit + + async def _handle_connection_cr(self, reader, writer): + try: + writer.write(ANALYZER_MAGIC) + queue = asyncio.Queue(self._queue_limit) + self._recipients.add(queue) + try: + while True: + dump = await queue.get() + writer.write(dump) + # raise exception on connection error + await writer.drain() + finally: + self._recipients.remove(queue) + except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError): + # receivers disconnecting are a normal occurence + pass + finally: + writer.close() + + def distribute(self, dump): + for recipient in self._recipients: + recipient.put_nowait(dump) + + +class ProxyControl: + def __init__(self, distribute_cb, core_addr, core_port=1382): + self.distribute_cb = distribute_cb + self.core_addr = core_addr + self.core_port = core_port + + def ping(self): + return True + + def 
trigger(self): + try: + dump = get_analyzer_dump(self.core_addr, self.core_port) + self.distribute_cb(dump) + except: + logger.warning("Trigger failed:", exc_info=True) + raise + + +def get_argparser(): + parser = argparse.ArgumentParser( + description="ARTIQ core analyzer proxy") + common_args.verbosity_args(parser) + common_args.simple_network_args(parser, [ + ("proxy", "proxying", 1385), + ("control", "control", 1386) + ]) + parser.add_argument("core_addr", metavar="CORE_ADDR", + help="hostname or IP address of the core device") + return parser + + +def main(): + args = get_argparser().parse_args() + common_args.init_logger_from_args(args) + + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + atexit.register(loop.close) + + signal_handler = SignalHandler() + signal_handler.setup() + atexit.register(signal_handler.teardown) + + bind_address = common_args.bind_address_from_args(args) + + proxy_server = ProxyServer() + loop.run_until_complete(proxy_server.start(bind_address, args.port_proxy)) + atexit_register_coroutine(proxy_server.stop, loop=loop) + + controller = ProxyControl(proxy_server.distribute, args.core_addr) + server = Server({"coreanalyzer_proxy_control": controller}, None, True) + loop.run_until_complete(server.start(bind_address, args.port_control)) + atexit_register_coroutine(server.stop, loop=loop) + + _, pending = loop.run_until_complete(asyncio.wait( + [loop.create_task(signal_handler.wait_terminate()), + loop.create_task(server.wait_terminate())], + return_when=asyncio.FIRST_COMPLETED)) + for task in pending: + task.cancel() + + +if __name__ == "__main__": + main() diff --git a/artiq/frontend/artiq_browser.py b/artiq/frontend/artiq_browser.py index 5cb7ef090..751c57d2e 100755 --- a/artiq/frontend/artiq_browser.py +++ b/artiq/frontend/artiq_browser.py @@ -81,7 +81,7 @@ class Browser(QtWidgets.QMainWindow): self.files.dataset_changed.connect( self.experiments.dataset_changed) - self.applets = applets.AppletsDock(self, dataset_sub, dataset_ctl, loop=loop) + self.applets = applets.AppletsDock(self, dataset_sub, dataset_ctl, self.experiments, loop=loop) smgr.register(self.applets) atexit_register_coroutine(self.applets.stop, loop=loop) diff --git a/artiq/frontend/artiq_client.py b/artiq/frontend/artiq_client.py index 4e7667465..39502aad9 100755 --- a/artiq/frontend/artiq_client.py +++ b/artiq/frontend/artiq_client.py @@ -22,8 +22,10 @@ from sipyco.pc_rpc import Client from sipyco.sync_struct import Subscriber from sipyco.broadcast import Receiver from sipyco import common_args, pyon +from sipyco.asyncio_tools import SignalHandler -from artiq.tools import scale_from_metadata, short_format, parse_arguments +from artiq.tools import (scale_from_metadata, short_format, parse_arguments, + parse_devarg_override) from artiq import __version__ as artiq_version @@ -68,6 +70,8 @@ def get_argparser(): parser_add.add_argument("-r", "--revision", default=None, help="use a specific repository revision " "(defaults to head, ignored without -R)") + parser_add.add_argument("--devarg-override", default="", + help="specify device arguments to override") parser_add.add_argument("--content", default=False, action="store_true", help="submit by content") @@ -109,11 +113,25 @@ def get_argparser(): "del-dataset", help="delete a dataset") parser_del_dataset.add_argument("name", help="name of the dataset") + parser_supply_interactive = subparsers.add_parser( + "supply-interactive", help="supply interactive arguments") + parser_supply_interactive.add_argument( + "rid", metavar="RID", 
type=int, help="RID of target experiment") + parser_supply_interactive.add_argument( + "arguments", metavar="ARGUMENTS", nargs="*", + help="interactive arguments") + + parser_cancel_interactive = subparsers.add_parser( + "cancel-interactive", help="cancel interactive arguments") + parser_cancel_interactive.add_argument( + "rid", metavar="RID", type=int, help="RID of target experiment") + parser_show = subparsers.add_parser( "show", help="show schedule, log, devices or datasets") parser_show.add_argument( "what", metavar="WHAT", - choices=["schedule", "log", "ccb", "devices", "datasets"], + choices=["schedule", "log", "ccb", "devices", "datasets", + "interactive-args"], help="select object to show: %(choices)s") subparsers.add_parser( @@ -132,8 +150,7 @@ def get_argparser(): "ls", help="list a directory on the master") parser_ls.add_argument("directory", default="", nargs="?") - subparsers.add_parser( - "terminate", help="terminate the ARTIQ master") + subparsers.add_parser("terminate", help="terminate the ARTIQ master") common_args.verbosity_args(parser) return parser @@ -146,6 +163,7 @@ def _action_submit(remote, args): raise ValueError("Failed to parse run arguments") from err expid = { + "devarg_override": parse_devarg_override(args.devarg_override), "log_level": logging.WARNING + args.quiet*10 - args.verbose*10, "class_name": args.class_name, "arguments": arguments, @@ -204,6 +222,15 @@ def _action_scan_devices(remote, args): remote.scan() +def _action_supply_interactive(remote, args): + arguments = parse_arguments(args.arguments) + remote.supply(args.rid, arguments) + + +def _action_cancel_interactive(remote, args): + remote.cancel(args.rid) + + def _action_scan_repository(remote, args): if getattr(args, "async"): remote.scan_repository_async(args.revision) @@ -270,17 +297,34 @@ def _show_datasets(datasets): print(table) +def _show_interactive_args(interactive_args): + clear_screen() + table = PrettyTable(["RID", "Title", "Key", "Type", "Group", "Tooltip"]) + for rid, input_request in sorted(interactive_args.items(), key=itemgetter(0)): + title = input_request["title"] + for key, procdesc, group, tooltip in input_request["arglist_desc"]: + table.add_row([rid, title, key, procdesc["ty"], group, tooltip]) + print(table) + + def _run_subscriber(host, port, subscriber): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) try: - loop.run_until_complete(subscriber.connect(host, port)) + signal_handler = SignalHandler() + signal_handler.setup() try: - loop.run_until_complete(asyncio.wait_for(subscriber.receive_task, - None)) - print("Connection to master lost") + loop.run_until_complete(subscriber.connect(host, port)) + try: + _, pending = loop.run_until_complete(asyncio.wait( + [loop.create_task(signal_handler.wait_terminate()), subscriber.receive_task], + return_when=asyncio.FIRST_COMPLETED)) + for task in pending: + task.cancel() + finally: + loop.run_until_complete(subscriber.close()) finally: - loop.run_until_complete(subscriber.close()) + signal_handler.teardown() finally: loop.close() @@ -334,18 +378,22 @@ def main(): _show_dict(args, "devices", _show_devices) elif args.what == "datasets": _show_dict(args, "datasets", _show_datasets) + elif args.what == "interactive-args": + _show_dict(args, "interactive_args", _show_interactive_args) else: raise ValueError else: port = 3251 if args.port is None else args.port target_name = { - "submit": "master_schedule", - "delete": "master_schedule", - "set_dataset": "master_dataset_db", - "del_dataset": "master_dataset_db", - 
"scan_devices": "master_device_db", - "scan_repository": "master_experiment_db", - "ls": "master_experiment_db", + "submit": "schedule", + "delete": "schedule", + "set_dataset": "dataset_db", + "del_dataset": "dataset_db", + "scan_devices": "device_db", + "supply_interactive": "interactive_arg_db", + "cancel_interactive": "interactive_arg_db", + "scan_repository": "experiment_db", + "ls": "experiment_db", "terminate": "master_management", }[action] remote = Client(args.server, port, target_name) diff --git a/artiq/frontend/artiq_compile.py b/artiq/frontend/artiq_compile.py index 5df540f86..1f202f861 100755 --- a/artiq/frontend/artiq_compile.py +++ b/artiq/frontend/artiq_compile.py @@ -1,6 +1,6 @@ #!/usr/bin/env python3 -import os, sys, logging, argparse +import os, sys, io, tarfile, logging, argparse from sipyco import common_args @@ -71,6 +71,5 @@ def main(): finally: device_mgr.close_devices() - if __name__ == "__main__": main() diff --git a/artiq/frontend/artiq_dashboard.py b/artiq/frontend/artiq_dashboard.py index 8bcf9546c..1d0d35284 100755 --- a/artiq/frontend/artiq_dashboard.py +++ b/artiq/frontend/artiq_dashboard.py @@ -6,7 +6,6 @@ import atexit import importlib import os import logging -import sys from PyQt5 import QtCore, QtGui, QtWidgets from qasync import QEventLoop @@ -15,13 +14,15 @@ from sipyco.pc_rpc import AsyncioClient, Client from sipyco.broadcast import Receiver from sipyco import common_args from sipyco.asyncio_tools import atexit_register_coroutine +from sipyco.sync_struct import Subscriber from artiq import __artiq_dir__ as artiq_dir, __version__ as artiq_version from artiq.tools import get_user_config_dir from artiq.gui.models import ModelSubscriber from artiq.gui import state, log from artiq.dashboard import (experiments, shortcuts, explorer, - moninj, datasets, schedule, applets_ccb) + moninj, datasets, schedule, applets_ccb, + waveform, interactive_args) def get_argparser(): @@ -47,6 +48,15 @@ def get_argparser(): parser.add_argument( "-p", "--load-plugin", dest="plugin_modules", action="append", help="Python module to load on startup") + parser.add_argument( + "--analyzer-proxy-timeout", default=5, type=float, + help="connection timeout to core analyzer proxy") + parser.add_argument( + "--analyzer-proxy-timer", default=5, type=float, + help="retry timer to core analyzer proxy") + parser.add_argument( + "--analyzer-proxy-timer-backoff", default=1.1, type=float, + help="retry timer backoff multiplier to core analyzer proxy") common_args.verbosity_args(parser) return parser @@ -60,7 +70,7 @@ class MainWindow(QtWidgets.QMainWindow): self.setWindowTitle("ARTIQ Dashboard - {}".format(server)) qfm = QtGui.QFontMetrics(self.font()) - self.resize(140*qfm.averageCharWidth(), 38*qfm.lineSpacing()) + self.resize(140 * qfm.averageCharWidth(), 38 * qfm.lineSpacing()) self.exit_request = asyncio.Event() @@ -85,11 +95,23 @@ class MdiArea(QtWidgets.QMdiArea): self.pixmap = QtGui.QPixmap(os.path.join( artiq_dir, "gui", "logo_ver.svg")) + self.setActivationOrder(self.ActivationHistoryOrder) + + self.tile = QtWidgets.QShortcut( + QtGui.QKeySequence('Ctrl+Shift+T'), self) + self.tile.activated.connect( + lambda: self.tileSubWindows()) + + self.cascade = QtWidgets.QShortcut( + QtGui.QKeySequence('Ctrl+Shift+C'), self) + self.cascade.activated.connect( + lambda: self.cascadeSubWindows()) + def paintEvent(self, event): QtWidgets.QMdiArea.paintEvent(self, event) painter = QtGui.QPainter(self.viewport()) - x = (self.width() - self.pixmap.width())//2 - y = (self.height() - 
self.pixmap.height())//2 + x = (self.width() - self.pixmap.width()) // 2 + y = (self.height() - self.pixmap.height()) // 2 painter.setOpacity(0.5) painter.drawPixmap(x, y, self.pixmap) @@ -106,9 +128,9 @@ def main(): if args.db_file is None: args.db_file = os.path.join(get_user_config_dir(), - "artiq_dashboard_{server}_{port}.pyon".format( - server=args.server.replace(":","."), - port=args.port_notify)) + "artiq_dashboard_{server}_{port}.pyon".format( + server=args.server.replace(":", "."), + port=args.port_notify)) app = QtWidgets.QApplication(["ARTIQ Dashboard"]) loop = QEventLoop(app) @@ -118,10 +140,10 @@ def main(): # create connections to master rpc_clients = dict() - for target in "schedule", "experiment_db", "dataset_db", "device_db": + for target in "schedule", "experiment_db", "dataset_db", "device_db", "interactive_arg_db": client = AsyncioClient() loop.run_until_complete(client.connect_rpc( - args.server, args.port_control, "master_" + target)) + args.server, args.port_control, target)) atexit.register(client.close_rpc) rpc_clients[target] = client @@ -132,6 +154,7 @@ def main(): master_management.close_rpc() disconnect_reported = False + def report_disconnect(): nonlocal disconnect_reported if not disconnect_reported: @@ -143,9 +166,9 @@ def main(): for notifier_name, modelf in (("explist", explorer.Model), ("explist_status", explorer.StatusUpdater), ("datasets", datasets.Model), - ("schedule", schedule.Model)): - subscriber = ModelSubscriber(notifier_name, modelf, - report_disconnect) + ("schedule", schedule.Model), + ("interactive_args", interactive_args.Model)): + subscriber = ModelSubscriber(notifier_name, modelf, report_disconnect) loop.run_until_complete(subscriber.connect( args.server, args.port_notify)) atexit_register_coroutine(subscriber.close, loop=loop) @@ -203,10 +226,22 @@ def main(): smgr.register(d_applets) broadcast_clients["ccb"].notify_cbs.append(d_applets.ccb_notify) - d_ttl_dds = moninj.MonInj(rpc_clients["schedule"]) - loop.run_until_complete(d_ttl_dds.start(args.server, args.port_notify)) + d_ttl_dds = moninj.MonInj(rpc_clients["schedule"], main_window) + smgr.register(d_ttl_dds) atexit_register_coroutine(d_ttl_dds.stop, loop=loop) + d_waveform = waveform.WaveformDock( + args.analyzer_proxy_timeout, + args.analyzer_proxy_timer, + args.analyzer_proxy_timer_backoff + ) + atexit_register_coroutine(d_waveform.stop, loop=loop) + + d_interactive_args = interactive_args.InteractiveArgsDock( + sub_clients["interactive_args"], + rpc_clients["interactive_arg_db"] + ) + d_schedule = schedule.ScheduleDock( rpc_clients["schedule"], sub_clients["schedule"]) smgr.register(d_schedule) @@ -219,8 +254,8 @@ def main(): # lay out docks right_docks = [ d_explorer, d_shortcuts, - d_ttl_dds.ttl_dock, d_ttl_dds.dds_dock, d_ttl_dds.dac_dock, - d_datasets, d_applets + d_datasets, d_applets, + d_waveform, d_interactive_args ] main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, right_docks[0]) for d1, d2 in zip(right_docks, right_docks[1:]): @@ -234,18 +269,25 @@ def main(): # QDockWidgets fail to be embedded. 
main_window.show() smgr.load() + + def init_cbs(ddb): + d_ttl_dds.dm.init_ddb(ddb) + d_waveform.init_ddb(ddb) + return ddb + devices_sub = Subscriber("devices", init_cbs, [d_ttl_dds.dm.notify_ddb, d_waveform.notify_ddb]) + loop.run_until_complete(devices_sub.connect(args.server, args.port_notify)) + atexit_register_coroutine(devices_sub.close, loop=loop) + smgr.start(loop=loop) atexit_register_coroutine(smgr.stop, loop=loop) - # work around for https://github.com/m-labs/artiq/issues/1307 - d_ttl_dds.ttl_dock.show() - d_ttl_dds.dds_dock.show() - # create first log dock if not already in state d_log0 = logmgr.first_log_dock() if d_log0 is not None: main_window.tabifyDockWidget(d_schedule, d_log0) - + d_moninj0 = d_ttl_dds.first_moninj_dock() + if d_moninj0 is not None: + main_window.tabifyDockWidget(right_docks[-1], d_moninj0) if server_name is not None: server_description = server_name + " ({})".format(args.server) @@ -257,5 +299,6 @@ def main(): main_window.show() loop.run_until_complete(main_window.exit_request.wait()) + if __name__ == "__main__": main() diff --git a/artiq/frontend/artiq_ddb_template.py b/artiq/frontend/artiq_ddb_template.py index c11ccbb88..94e4fc2d7 100755 --- a/artiq/frontend/artiq_ddb_template.py +++ b/artiq/frontend/artiq_ddb_template.py @@ -24,6 +24,27 @@ def get_cpu_target(description): else: raise NotImplementedError + +def get_num_leds(description): + drtio_role = description["drtio_role"] + target = description["target"] + hw_rev = description["hw_rev"] + kasli_board_leds = { + "v1.0": 4, + "v1.1": 6, + "v2.0": 3 + } + if target == "kasli": + if hw_rev in ("v1.0", "v1.1") and drtio_role != "standalone": + # LEDs are used for DRTIO status on v1.0 and v1.1 + return kasli_board_leds[hw_rev] - 3 + return kasli_board_leds[hw_rev] + elif target == "kasli_soc": + return 2 + else: + raise ValueError + + def process_header(output, description): print(textwrap.dedent(""" # Autogenerated for the {variant} variant @@ -34,7 +55,13 @@ def process_header(output, description): "type": "local", "module": "artiq.coredevice.core", "class": "Core", - "arguments": {{"host": core_addr, "ref_period": {ref_period}, "target": "{cpu_target}"}}, + "arguments": {{ + "host": core_addr, + "ref_period": {ref_period}, + "analyzer_proxy": "core_analyzer", + "target": "{cpu_target}", + "satellite_cpu_targets": {{}} + }}, }}, "core_log": {{ "type": "controller", @@ -49,6 +76,13 @@ def process_header(output, description): "port": 1384, "command": "aqctl_moninj_proxy --port-proxy {{port_proxy}} --port-control {{port}} --bind {{bind}} " + core_addr }}, + "core_analyzer": {{ + "type": "controller", + "host": "::1", + "port_proxy": 1385, + "port": 1386, + "command": "aqctl_coreanalyzer_proxy --port-proxy {{port_proxy}} --port-control {{port}} --bind {{bind}} " + core_addr + }}, "core_cache": {{ "type": "local", "module": "artiq.coredevice.cache", @@ -60,8 +94,6 @@ def process_header(output, description): "class": "CoreDMA" }}, - "satellite_cpu_targets": {{}}, - "i2c_switch0": {{ "type": "local", "module": "artiq.coredevice.i2c", @@ -179,6 +211,11 @@ class PeripheralManager: urukul_name = self.get_name("urukul") synchronization = peripheral["synchronization"] channel = count(0) + pll_en = peripheral["pll_en"] + clk_div = peripheral.get("clk_div") + if clk_div is None: + clk_div = 0 if pll_en else 1 + self.gen(""" device_db["eeprom_{name}"] = {{ "type": "local", @@ -245,7 +282,7 @@ class PeripheralManager: sync_device="\"ttl_{name}_sync\"".format(name=urukul_name) if synchronization else "None", 
refclk=peripheral.get("refclk", self.primary_description["rtio_frequency"]), clk_sel=peripheral["clk_sel"], - clk_div=peripheral["clk_div"]) + clk_div=clk_div) dds = peripheral["dds"] pll_vco = peripheral.get("pll_vco") for i in range(4): @@ -267,7 +304,7 @@ class PeripheralManager: uchn=i, sw=",\n \"sw_device\": \"ttl_{name}_sw{uchn}\"".format(name=urukul_name, uchn=i) if len(peripheral["ports"]) > 1 else "", pll_vco=",\n \"pll_vco\": {}".format(pll_vco) if pll_vco is not None else "", - pll_n=peripheral.get("pll_n", 32), pll_en=peripheral["pll_en"], + pll_n=peripheral.get("pll_n", 32), pll_en=pll_en, sync_delay_seed=",\n \"sync_delay_seed\": \"eeprom_{}:{}\"".format(urukul_name, 64 + 4*i) if synchronization else "", io_update_delay=",\n \"io_update_delay\": \"eeprom_{}:{}\"".format(urukul_name, 64 + 4*i) if synchronization else "") elif dds == "ad9912": @@ -288,7 +325,7 @@ class PeripheralManager: uchn=i, sw=",\n \"sw_device\": \"ttl_{name}_sw{uchn}\"".format(name=urukul_name, uchn=i) if len(peripheral["ports"]) > 1 else "", pll_vco=",\n \"pll_vco\": {}".format(pll_vco) if pll_vco is not None else "", - pll_n=peripheral.get("pll_n", 8), pll_en=peripheral["pll_en"]) + pll_n=peripheral.get("pll_n", 8), pll_en=pll_en) else: raise ValueError return next(channel) @@ -675,8 +712,8 @@ class PeripheralManager: processor = getattr(self, "process_"+str(peripheral["type"])) return processor(rtio_offset, peripheral) - def add_board_leds(self, rtio_offset, board_name=None): - for i in range(2): + def add_board_leds(self, rtio_offset, board_name=None, num_leds=2): + for i in range(num_leds): if board_name is None: led_name = self.get_name("led") else: @@ -690,7 +727,7 @@ class PeripheralManager: }}""", name=led_name, channel=rtio_offset+i) - return 2 + return num_leds def split_drtio_eem(peripherals): @@ -719,9 +756,10 @@ def process(output, primary_description, satellites): for peripheral in local_peripherals: n_channels = pm.process(rtio_offset, peripheral) rtio_offset += n_channels - if drtio_role == "standalone": - n_channels = pm.add_board_leds(rtio_offset) - rtio_offset += n_channels + + num_leds = get_num_leds(primary_description) + pm.add_board_leds(rtio_offset, num_leds=num_leds) + rtio_offset += num_leds for destination, description in satellites: if description["drtio_role"] != "satellite": @@ -732,7 +770,7 @@ def process(output, primary_description, satellites): print(textwrap.dedent(""" # DEST#{dest} peripherals - device_db["satellite_cpu_targets"][{dest}] = \"{target}\"""").format( + device_db["core"]["arguments"]["satellite_cpu_targets"][{dest}] = \"{target}\"""").format( dest=destination, target=get_cpu_target(description)), file=output) @@ -740,12 +778,26 @@ def process(output, primary_description, satellites): for peripheral in peripherals: n_channels = pm.process(rtio_offset, peripheral) rtio_offset += n_channels - - for peripheral in drtio_peripherals: + + num_leds = get_num_leds(description) + pm.add_board_leds(rtio_offset, num_leds=num_leds) + rtio_offset += num_leds + + for i, peripheral in enumerate(drtio_peripherals): + if not("drtio_destination" in peripheral): + if primary_description["target"] == "kasli": + if primary_description["hw_rev"] in ("v1.0", "v1.1"): + peripheral["drtio_destination"] = 3 + i + else: + peripheral["drtio_destination"] = 4 + i + elif primary_description["target"] == "kasli_soc": + peripheral["drtio_destination"] = 5 + i + else: + raise NotImplementedError print(textwrap.dedent(""" # DEST#{dest} peripherals - device_db["satellite_cpu_targets"][{dest}] 
= \"{target}\"""").format( + device_db["core"]["arguments"]["satellite_cpu_targets"][{dest}] = \"{target}\"""").format( dest=peripheral["drtio_destination"], target=get_cpu_target(peripheral)), file=output) diff --git a/artiq/frontend/artiq_flash.py b/artiq/frontend/artiq_flash.py index 6c2037ca1..9a36d6ce8 100755 --- a/artiq/frontend/artiq_flash.py +++ b/artiq/frontend/artiq_flash.py @@ -261,12 +261,19 @@ def main(): "storage": ("spi0", 0x440000), "firmware": ("spi0", 0x450000), }, - "efc": { + "efc1v0": { "programmer": partial(ProgrammerXC7, board="efc", proxy="bscan_spi_xc7a100t.bit"), "gateware": ("spi0", 0x000000), - "bootloader": ("spi0", 0x400000), - "storage": ("spi0", 0x440000), - "firmware": ("spi0", 0x450000), + "bootloader": ("spi0", 0x600000), + "storage": ("spi0", 0x640000), + "firmware": ("spi0", 0x650000), + }, + "efc1v1": { + "programmer": partial(ProgrammerXC7, board="efc", proxy="bscan_spi_xc7a200t.bit"), + "gateware": ("spi0", 0x000000), + "bootloader": ("spi0", 0x600000), + "storage": ("spi0", 0x640000), + "firmware": ("spi0", 0x650000), }, "kc705": { "programmer": partial(ProgrammerXC7, board="kc705", proxy="bscan_spi_xc7k325t.bit"), diff --git a/artiq/frontend/artiq_master.py b/artiq/frontend/artiq_master.py index f61d6cf52..26debb1a2 100755 --- a/artiq/frontend/artiq_master.py +++ b/artiq/frontend/artiq_master.py @@ -15,7 +15,8 @@ from sipyco.asyncio_tools import atexit_register_coroutine, SignalHandler from artiq import __version__ as artiq_version from artiq.master.log import log_args, init_log -from artiq.master.databases import DeviceDB, DatasetDB +from artiq.master.databases import (DeviceDB, DatasetDB, + InteractiveArgDB) from artiq.master.scheduler import Scheduler from artiq.master.rid_counter import RIDCounter from artiq.master.experiments import (FilesystemBackend, GitBackend, @@ -57,10 +58,10 @@ def get_argparser(): log_args(parser) parser.add_argument("--name", - help="friendly name, displayed in dashboards " - "to identify master instead of server address") - parser.add_argument("--log-submissions", default=None, - help="set the filename to create the experiment subimission") + help="friendly name, displayed in dashboards " + "to identify master instead of server address") + parser.add_argument("--log-submissions", default=None, + help="log experiment submissions to specified file") return parser @@ -81,8 +82,7 @@ def main(): bind, args.port_broadcast)) atexit_register_coroutine(server_broadcast.stop, loop=loop) - log_forwarder.callback = (lambda msg: - server_broadcast.broadcast("log", msg)) + log_forwarder.callback = lambda msg: server_broadcast.broadcast("log", msg) def ccb_issue(service, *args, **kwargs): msg = { "service": service, @@ -96,6 +96,7 @@ def main(): atexit.register(dataset_db.close_db) dataset_db.start(loop=loop) atexit_register_coroutine(dataset_db.stop, loop=loop) + interactive_arg_db = InteractiveArgDB() worker_handlers = dict() if args.git: @@ -106,15 +107,22 @@ def main(): repo_backend, worker_handlers, args.experiment_subdir) atexit.register(experiment_db.close) - scheduler = Scheduler(RIDCounter(), worker_handlers, experiment_db, args.log_submissions) + scheduler = Scheduler(RIDCounter(), worker_handlers, experiment_db, + args.log_submissions) scheduler.start(loop=loop) atexit_register_coroutine(scheduler.stop, loop=loop) + # Python doesn't allow writing attributes to bound methods. 
+ def get_interactive_arguments(*args, **kwargs): + return interactive_arg_db.get(*args, **kwargs) + get_interactive_arguments._worker_pass_rid = True worker_handlers.update({ "get_device_db": device_db.get_device_db, "get_device": device_db.get, "get_dataset": dataset_db.get, + "get_dataset_metadata": dataset_db.get_metadata, "update_dataset": dataset_db.update, + "get_interactive_arguments": get_interactive_arguments, "scheduler_submit": scheduler.submit, "scheduler_delete": scheduler.delete, "scheduler_request_termination": scheduler.request_termination, @@ -133,10 +141,11 @@ def main(): server_control = RPCServer({ "master_management": master_management, - "master_device_db": device_db, - "master_dataset_db": dataset_db, - "master_schedule": scheduler, - "master_experiment_db": experiment_db, + "device_db": device_db, + "dataset_db": dataset_db, + "interactive_arg_db": interactive_arg_db, + "schedule": scheduler, + "experiment_db": experiment_db, }, allow_parallel=True) loop.run_until_complete(server_control.start( bind, args.port_control)) @@ -146,8 +155,9 @@ def main(): "schedule": scheduler.notifier, "devices": device_db.data, "datasets": dataset_db.data, + "interactive_args": interactive_arg_db.pending, "explist": experiment_db.explist, - "explist_status": experiment_db.status + "explist_status": experiment_db.status, }) loop.run_until_complete(server_notify.start( bind, args.port_notify)) diff --git a/artiq/frontend/artiq_run.py b/artiq/frontend/artiq_run.py index 5f8c66efd..7905a301e 100755 --- a/artiq/frontend/artiq_run.py +++ b/artiq/frontend/artiq_run.py @@ -4,13 +4,14 @@ import argparse import sys +import tarfile from operator import itemgetter import logging from collections import defaultdict import h5py -from sipyco import common_args +from sipyco import common_args, pyon from artiq import __version__ as artiq_version from artiq.language.environment import EnvExperiment, ProcessArgumentManager @@ -42,6 +43,20 @@ class ELFRunner(FileRunner): return f.read() +class TARRunner(FileRunner): + def compile(self): + with tarfile.open(self.file, "r:") as tar: + for entry in tar: + if entry.name == 'main.elf': + main_lib = tar.extractfile(entry).read() + else: + subkernel_name = entry.name.removesuffix(".elf") + sid, dest = tuple(map(lambda x: int(x), subkernel_name.split(" "))) + subkernel_lib = tar.extractfile(entry).read() + self.core.comm.upload_subkernel(subkernel_lib, sid, dest) + return main_lib + + class DummyScheduler: def __init__(self): self.rid = 0 @@ -106,11 +121,33 @@ def get_argparser(with_file=True): return parser +class ArgumentManager(ProcessArgumentManager): + def get_interactive(self, interactive_arglist, title): + print(title) + result = dict() + for key, processor, group, tooltip in interactive_arglist: + success = False + while not success: + user_input = input("{}:{} (group={}, tooltip={}): ".format( + key, type(processor).__name__, group, tooltip)) + try: + user_input_deser = pyon.decode(user_input) + value = processor.process(user_input_deser) + except: + logger.error("failed to process user input, retrying", + exc_info=True) + else: + success = True + result[key] = value + return result + + def _build_experiment(device_mgr, dataset_mgr, args): arguments = parse_arguments(args.arguments) - argument_mgr = ProcessArgumentManager(arguments) + argument_mgr = ArgumentManager(arguments) managers = (device_mgr, dataset_mgr, argument_mgr, {}) if hasattr(args, "file"): + is_tar = tarfile.is_tarfile(args.file) is_elf = args.file.endswith(".elf") if is_elf: if 
args.arguments: @@ -118,7 +155,9 @@ def _build_experiment(device_mgr, dataset_mgr, args): if args.class_name: raise ValueError("class-name not supported " "for precompiled kernels") - if is_elf: + if is_tar: + return TARRunner(managers, file=args.file) + elif is_elf: return ELFRunner(managers, file=args.file) else: import_cache.install_hook() @@ -152,6 +191,7 @@ def run(with_file=False): exp_inst = _build_experiment(device_mgr, dataset_mgr, args) exp_inst.prepare() exp_inst.run() + device_mgr.notify_run_end() exp_inst.analyze() except Exception as exn: if hasattr(exn, "artiq_core_exception"): diff --git a/artiq/frontend/artiq_sinara_tester.py b/artiq/frontend/artiq_sinara_tester.py index d8f445885..5d85db1d2 100755 --- a/artiq/frontend/artiq_sinara_tester.py +++ b/artiq/frontend/artiq_sinara_tester.py @@ -14,7 +14,7 @@ from artiq.coredevice.ttl import TTLOut, TTLInOut from artiq.coredevice.urukul import CPLD as UrukulCPLD from artiq.coredevice.ad9910 import AD9910, SyncDataEeprom from artiq.coredevice.mirny import Mirny -from artiq.coredevice.almazny import AlmaznyLegacy +from artiq.coredevice.almazny import AlmaznyLegacy, AlmaznyChannel from artiq.coredevice.adf5356 import ADF5356 from artiq.coredevice.sampler import Sampler from artiq.coredevice.zotino import Zotino @@ -22,7 +22,14 @@ from artiq.coredevice.fastino import Fastino from artiq.coredevice.phaser import Phaser, PHASER_GW_BASE, PHASER_GW_MIQRO from artiq.coredevice.grabber import Grabber from artiq.coredevice.suservo import SUServo, Channel as SUServoChannel - +from artiq.coredevice.shuttler import ( + Config as ShuttlerConfig, + Trigger as ShuttlerTrigger, + DCBias as ShuttlerDCBias, + DDS as ShuttlerDDS, + Relay as ShuttlerRelay, + ADC as ShuttlerADC, + shuttler_volt_to_mu) from artiq.master.databases import DeviceDB from artiq.master.worker_db import DeviceManager @@ -78,7 +85,9 @@ class SinaraTester(EnvExperiment): self.mirnies = dict() self.suservos = dict() self.suschannels = dict() + self.legacy_almaznys = dict() self.almaznys = dict() + self.shuttler = dict() ddb = self.get_device_db() for name, desc in ddb.items(): @@ -117,7 +126,20 @@ class SinaraTester(EnvExperiment): elif (module, cls) == ("artiq.coredevice.suservo", "Channel"): self.suschannels[name] = self.get_device(name) elif (module, cls) == ("artiq.coredevice.almazny", "AlmaznyLegacy"): + self.legacy_almaznys[name] = self.get_device(name) + elif (module, cls) == ("artiq.coredevice.almazny", "AlmaznyChannel"): self.almaznys[name] = self.get_device(name) + elif (module, cls) == ("artiq.coredevice.shuttler", "Config"): + shuttler_name = name.replace("_config", "") + self.shuttler[shuttler_name] = ({ + "config": self.get_device(name), + "trigger": self.get_device("{}_trigger".format(shuttler_name)), + "leds": [self.get_device("{}_led{}".format(shuttler_name, i)) for i in range(2)], + "dcbias": [self.get_device("{}_dcbias{}".format(shuttler_name, i)) for i in range(16)], + "dds": [self.get_device("{}_dds{}".format(shuttler_name, i)) for i in range(16)], + "relay": self.get_device("{}_relay".format(shuttler_name)), + "adc": self.get_device("{}_adc".format(shuttler_name)), + }) # Remove Urukul, Sampler, Zotino and Mirny control signals # from TTL outs (tested separately) and remove Urukuls covered by @@ -166,6 +188,7 @@ class SinaraTester(EnvExperiment): self.mirnies = sorted(self.mirnies.items(), key=lambda x: (x[1].cpld.bus.channel, x[1].channel)) self.suservos = sorted(self.suservos.items(), key=lambda x: x[1].channel) self.suschannels = 
sorted(self.suschannels.items(), key=lambda x: x[1].channel) + self.shuttler = sorted(self.shuttler.items(), key=lambda x: x[1]["leds"][0].channel) @kernel def test_led(self, led: TTLOut): @@ -377,67 +400,111 @@ class SinaraTester(EnvExperiment): channel.pulse(100.*ms) self.core.delay(100.*ms) @kernel - def init_almazny(self, almazny: AlmaznyLegacy): + def init_legacy_almazny(self, almazny: AlmaznyLegacy): self.core.break_realtime() almazny.init() almazny.output_toggle(True) @kernel - def almazny_set_attenuators_mu(self, almazny: AlmaznyLegacy, ch: int32, atts: int32): + def legacy_almazny_set_attenuators_mu(self, almazny: AlmaznyLegacy, ch: int32, atts: int32): self.core.break_realtime() almazny.set_att_mu(ch, atts) @kernel - def almazny_set_attenuators(self, almazny: AlmaznyLegacy, ch: int32, atts: float): - self.core.break_realtime() - almazny.set_att(ch, atts) - + def legacy_almazny_att_test(self, almazny: AlmaznyLegacy): + # change attenuation bit by bit over time for all channels + att_mu = 0 + while not is_enter_pressed(): + self.core.break_realtime() + t = now_mu() - self.core.seconds_to_mu(0.5) + while self.core.get_rtio_counter_mu() < t: + pass + for ch in range(4): + almazny.set_att_mu(ch, att_mu) + self.core.delay(250.*ms) + if att_mu == 0: + att_mu = 1 + else: + att_mu = (att_mu << 1) & 0x3F + @kernel - def almazny_toggle_output(self, almazny: AlmaznyLegacy, rf_on: bool): + def legacy_almazny_toggle_output(self, almazny: AlmaznyLegacy, rf_on: bool): self.core.break_realtime() almazny.output_toggle(rf_on) - def test_almaznys(self): - print("*** Testing Almaznys.") - for name, almazny in sorted(self.almaznys.items(), key=lambda x: x[0]): + def test_legacy_almaznys(self): + print("*** Testing legacy Almaznys (v1.1 or older).") + for name, almazny in sorted(self.legacy_almaznys.items(), key=lambda x: x[0]): print(name + "...") print("Initializing Mirny CPLDs...") for name, cpld in sorted(self.mirny_cplds.items(), key=lambda x: x[0]): print(name + "...") self.init_mirny(cpld) print("...done") - print("Testing attenuators. Frequencies:") for card_n, channels in enumerate(chunker(self.mirnies, 4)): for channel_n, (channel_name, channel_dev) in enumerate(channels): frequency = 2000. + card_n * 250 + channel_n * 50 print("{}\t{}MHz".format(channel_name, frequency*2)) self.setup_mirny(channel_dev, frequency) - print("{} info: {}".format(channel_name, channel_dev.info())) - self.init_almazny(almazny) - print("RF ON, all attenuators ON. Press ENTER when done.") - for i in range(4): - self.almazny_set_attenuators_mu(almazny, i, 63) - input() - print("RF ON, half power attenuators ON. Press ENTER when done.") - for i in range(4): - self.almazny_set_attenuators(almazny, i, 15.5) - input() - print("RF ON, all attenuators OFF. Press ENTER when done.") - for i in range(4): - self.almazny_set_attenuators(almazny, i, 0.) - input() + self.init_legacy_almazny(almazny) print("SR outputs are OFF. Press ENTER when done.") - self.almazny_toggle_output(almazny, False) - input() - print("RF ON, all attenuators are ON. Press ENTER when done.") - for i in range(4): - self.almazny_set_attenuators(almazny, i, 31.5) - self.almazny_toggle_output(almazny, True) - input() - print("RF OFF. Press ENTER when done.") - self.almazny_toggle_output(almazny, False) + self.legacy_almazny_toggle_output(almazny, False) input() + print("RF ON, attenuators are tested. 
Press ENTER when done.") + self.legacy_almazny_toggle_output(almazny, True) + self.legacy_almazny_att_test(almazny) + self.legacy_almazny_toggle_output(almazny, False) + + @kernel + def almazny_led_wave(self, almaznys: list[AlmaznyChannel]): + while not is_enter_pressed(): + self.core.break_realtime() + # do not fill the FIFOs too much to avoid long response times + t = now_mu() - self.core.seconds_to_mu(0.2) + while self.core.get_rtio_counter_mu() < t: + pass + for ch in almaznys: + ch.set(31.5, False, True) + self.core.delay(100*ms) + ch.set(31.5, False, False) + + @kernel + def almazny_att_test(self, almaznys: list[AlmaznyChannel]): + rf_en = 1 + led = 1 + att_mu = 0 + while not is_enter_pressed(): + self.core.break_realtime() + t = now_mu() - self.core.seconds_to_mu(0.2) + while self.core.get_rtio_counter_mu() < t: + pass + setting = led << 7 | rf_en << 6 | (att_mu & 0x3F) + for ch in almaznys: + ch.set_mu(setting) + self.core.delay(250.*ms) + if att_mu == 0: + att_mu = 1 + else: + att_mu = (att_mu << 1) & 0x3F + + def test_almaznys(self): + print("*** Testing Almaznys (v1.2+).") + print("Initializing Mirny CPLDs...") + for name, cpld in sorted(self.mirny_cplds.items(), key=lambda x: x[0]): + print(name + "...") + self.init_mirny(cpld) + print("...done") + print("Frequencies:") + for card_n, channels in enumerate(chunker(self.mirnies, 4)): + for channel_n, (channel_name, channel_dev) in enumerate(channels): + frequency = 2000 + card_n * 250 + channel_n * 50 + print("{}\t{}MHz".format(channel_name, frequency*2)) + self.setup_mirny(channel_dev, frequency) + print("RF ON, attenuators are tested. Press ENTER when done.") + self.almazny_att_test([ch for _, ch in self.almaznys.items()]) + print("RF OFF, testing LEDs. Press ENTER when done.") + self.almazny_led_wave([ch for _, ch in self.almaznys.items()]) def test_mirnies(self): print("*** Testing Mirny PLLs.") @@ -768,6 +835,124 @@ class SinaraTester(EnvExperiment): print("Press ENTER when done.") input() + @kernel + def setup_shuttler_init(self, relay: ShuttlerRelay, adc: ShuttlerADC, dcbias: ShuttlerDCBias, + dds: ShuttlerDDS, trigger: TTLOut, config: ShuttlerConfig): + self.core.break_realtime() + # Reset Shuttler Output Relay + relay.init() + delay_mu(int64(self.core.ref_multiplier)) + + relay.enable(0x0000) + delay_mu(int64(self.core.ref_multiplier)) + + # Setup ADC and and Calibration + delay_mu(int64(self.core.ref_multiplier)) + adc.power_up() + + delay_mu(int64(self.core.ref_multiplier)) + if adc.read_id() >> 4 != 0x038d: + print("Remote AFE Board's ADC is not found. 
Check Remote AFE Board's Cables Connections") + assert adc.read_id() >> 4 == 0x038d + + delay_mu(int64(self.core.ref_multiplier)) + adc.calibrate(dcbias, trigger, config) + + #Reset Shuttler DAC Output + for ch in range(16): + self.setup_shuttler_set_output(dcbias, dds, trigger, ch, 0.0) + + @kernel + def set_shuttler_relay(self, relay: ShuttlerRelay, val: int32): + self.core.break_realtime() + relay.enable(val) + + # NAC3TODO https://git.m-labs.hk/M-Labs/nac3/issues/101 + @kernel + def get_shuttler_output_voltage(self, adc: ShuttlerADC, ch: int32) -> float: + self.core.break_realtime() + return adc.read_ch(ch) + + @kernel + def setup_shuttler_set_output(self, dcbias: ShuttlerDCBias, dds: ShuttlerDDS, trigger: ShuttlerTrigger, ch: int32, volt: float): + self.core.break_realtime() + dcbias[ch].set_waveform( + a0=shuttler_volt_to_mu(volt), + a1=0, + a2=0, + a3=0, + ) + delay_mu(int64(self.core.ref_multiplier)) + + dds[ch].set_waveform( + b0=0, + b1=0, + b2=0, + b3=0, + c0=0, + c1=0, + c2=0, + ) + delay_mu(int64(self.core.ref_multiplier)) + + trigger.trigger(1 << ch) + delay_mu(int64(self.core.ref_multiplier)) + + @kernel + def shuttler_relay_led_wave(self, relay: ShuttlerRelay): + while not is_enter_pressed(): + self.core.break_realtime() + # do not fill the FIFOs too much to avoid long response times + t = now_mu() - self.core.seconds_to_mu(.2) + while self.core.get_rtio_counter_mu() < t: + pass + for ch in range(16): + relay.enable(1 << ch) + self.core.delay(100.*ms) + relay.enable(0x0000) + self.core.delay(100.*ms) + + def test_shuttler(self): + print("*** Testing Shuttler.") + + for card_n, (card_name, card_dev) in enumerate(self.shuttler): + print("Testing: ", card_name) + + output_voltage = 0.0 + + self.setup_shuttler_init(card_dev["relay"], card_dev["adc"], card_dev["dcbias"], card_dev["dds"], card_dev["trigger"], card_dev["config"]) + + print("Check Remote AFE Board Relay LED Indicators.") + print("Press Enter to Continue.") + self.shuttler_relay_led_wave(card_dev["relay"]) + + self.set_shuttler_relay(card_dev["relay"], 0xFFFF) + + passed = True + adc_readings = [] + volt_set = [(-1)**i*(2.*card_n + .1*(i//2 + 1)) for i in range(16)] + + print("Testing Shuttler DAC") + print("Voltages:", " ".join(["{:.1f}".format(x) for x in volt_set])) + + for ch, volt in enumerate(volt_set): + self.setup_shuttler_set_output(card_dev["dcbias"], card_dev["dds"], card_dev["trigger"], ch, volt) + output_voltage = self.get_shuttler_output_voltage(card_dev["adc"], ch) + if (abs(volt) - abs(output_voltage)) > 0.1: + passed = False + adc_readings.append(output_voltage) + + print("Press Enter to Continue.") + input() + self.set_shuttler_relay(card_dev["relay"], 0x0000) + + if passed: + print("PASSED") + else: + print("FAILED") + print("Shuttler Remote AFE Board ADC has abnormal readings.") + print(f"ADC Readings:", " ".join(["{:.2f}".format(x) for x in adc_readings])) + def run(self, tests): print("****** Sinara system tester ******") print("") @@ -825,6 +1010,7 @@ def main(): experiment = SinaraTester((device_mgr, None, None, None)) experiment.prepare() experiment.run(tests) + device_mgr.notify_run_end() experiment.analyze() finally: device_mgr.close_devices() diff --git a/artiq/gateware/drtio/aux_controller.py b/artiq/gateware/drtio/aux_controller.py index 5f91b1f45..27ca2a15d 100644 --- a/artiq/gateware/drtio/aux_controller.py +++ b/artiq/gateware/drtio/aux_controller.py @@ -10,6 +10,7 @@ from misoc.interconnect import wishbone max_packet = 1024 +aux_buffer_count = 8 class Transmitter(Module, AutoCSR): 
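The Receiver hunk below replaces the single aux frame buffer with a ring of aux_buffer_count frames, tracked by a CPU-side read pointer and a gateware-side write pointer. A minimal Python sketch of the occupancy rules those pointers imply (illustrative only; the gateware uses narrow wrapping signals rather than an explicit modulo):

def aux_rx_state(read_ptr, write_ptr, buffer_count=8):
    # read_ptr: next frame the CPU will consume; write_ptr: frame the gateware is filling
    if read_ptr == write_ptr:
        return "empty"            # aux_rx_present stays deasserted
    if (write_ptr + 1) % buffer_count == read_ptr:
        return "full"             # a further frame would overwrite -> aux_rx_error
    return "frames pending"       # CPU still has frames to drain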
@@ -95,13 +96,13 @@ class Transmitter(Module, AutoCSR): class Receiver(Module, AutoCSR): def __init__(self, link_layer, min_mem_dw): - self.aux_rx_length = CSRStatus(bits_for(max_packet)) self.aux_rx_present = CSR() self.aux_rx_error = CSR() + self.aux_read_pointer = CSR(log2_int(aux_buffer_count)) ll_dw = len(link_layer.rx_aux_data) mem_dw = max(min_mem_dw, ll_dw) - self.specials.mem = Memory(mem_dw, max_packet//(mem_dw//8)) + self.specials.mem = Memory(mem_dw, aux_buffer_count*max_packet//(mem_dw//8)) converter = ClockDomainsRenamer("rtio_rx")( stream.Converter(ll_dw, mem_dw)) @@ -123,30 +124,45 @@ class Receiver(Module, AutoCSR): mem_port = self.mem.get_port(write_capable=True, clock_domain="rtio_rx") self.specials += mem_port + # write pointer represents where the gateware is + write_pointer = Signal(log2_int(aux_buffer_count)) + write_pointer_sys = Signal.like(write_pointer) + # read pointer represents where CPU is + # write reaching read is an error, read reaching write is buffer clear + read_pointer = Signal.like(write_pointer) + read_pointer_rx = Signal.like(write_pointer) + + read_pointer.attr.add("no_retiming") + write_pointer.attr.add("no_retiming") + signal_error = PulseSynchronizer("rtio_rx", "sys") + + self.specials += [ + MultiReg(read_pointer, read_pointer_rx, "rtio_rx"), + MultiReg(write_pointer, write_pointer_sys) + ] + frame_counter_nbits = bits_for(max_packet) - log2_int(mem_dw//8) frame_counter = Signal(frame_counter_nbits) + # frame counter requires one more bit to represent overflow (bits_for) + # actual valid packet will be addressed with one bit less + packet_nbits = frame_counter_nbits - 1 self.comb += [ - mem_port.adr.eq(frame_counter), + mem_port.adr[:packet_nbits].eq(frame_counter), + # bits above the frame counter point to current frame + mem_port.adr[packet_nbits:].eq(write_pointer), mem_port.dat_w.eq(converter.source.data), - converter.source.ack.eq(1) + converter.source.ack.eq(1), + self.aux_read_pointer.w.eq(read_pointer) ] - frame_counter.attr.add("no_retiming") - frame_counter_sys = Signal(frame_counter_nbits) - self.specials += MultiReg(frame_counter, frame_counter_sys) - self.comb += self.aux_rx_length.status.eq(frame_counter_sys << log2_int(mem_dw//8)) - - signal_frame = PulseSynchronizer("rtio_rx", "sys") - frame_ack = PulseSynchronizer("sys", "rtio_rx") - signal_error = PulseSynchronizer("rtio_rx", "sys") - self.submodules += signal_frame, frame_ack, signal_error + self.submodules += signal_error self.sync += [ - If(self.aux_rx_present.re, self.aux_rx_present.w.eq(0)), - If(signal_frame.o, self.aux_rx_present.w.eq(1)), If(self.aux_rx_error.re, self.aux_rx_error.w.eq(0)), - If(signal_error.o, self.aux_rx_error.w.eq(1)) + If(signal_error.o, self.aux_rx_error.w.eq(1)), + self.aux_rx_present.w.eq(~(read_pointer == write_pointer_sys)), + If(self.aux_rx_present.re & self.aux_rx_present.w, + read_pointer.eq(read_pointer + 1)), ] - self.comb += frame_ack.i.eq(self.aux_rx_present.re) fsm = ClockDomainsRenamer("rtio_rx")(FSM(reset_state="IDLE")) self.submodules += fsm @@ -171,7 +187,7 @@ class Receiver(Module, AutoCSR): NextState("FRAME") ) ).Else( - NextValue(frame_counter, 0) + NextValue(frame_counter, 0), ) ) fsm.act("FRAME", @@ -189,14 +205,12 @@ class Receiver(Module, AutoCSR): ) ) fsm.act("SIGNAL_FRAME", - signal_frame.i.eq(1), - NextState("WAIT_ACK"), - If(converter.source.stb, signal_error.i.eq(1)) - ) - fsm.act("WAIT_ACK", - If(frame_ack.o, - NextValue(frame_counter, 0), - NextState("IDLE") + NextState("IDLE"), + If((write_pointer + 1) == 
read_pointer_rx, + # on full buffer, overwrite only current frame + signal_error.i.eq(1) + ).Else( + NextValue(write_pointer, write_pointer + 1) ), If(converter.source.stb, signal_error.i.eq(1)) ) @@ -215,8 +229,8 @@ class DRTIOAuxController(Module): tx_sdram_if = wishbone.SRAM(self.transmitter.mem, read_only=False, data_width=dw) rx_sdram_if = wishbone.SRAM(self.receiver.mem, read_only=True, data_width=dw) decoder = wishbone.Decoder(self.bus, - [(lambda a: a[log2_int(max_packet)-wsb] == 0, tx_sdram_if.bus), - (lambda a: a[log2_int(max_packet)-wsb] == 1, rx_sdram_if.bus)], + [(lambda a: a[log2_int(max_packet*aux_buffer_count)-wsb] == 0, tx_sdram_if.bus), + (lambda a: a[log2_int(max_packet*aux_buffer_count)-wsb] == 1, rx_sdram_if.bus)], register=True) self.submodules += tx_sdram_if, rx_sdram_if, decoder diff --git a/artiq/gateware/drtio/core.py b/artiq/gateware/drtio/core.py index b2a108237..8928f833e 100644 --- a/artiq/gateware/drtio/core.py +++ b/artiq/gateware/drtio/core.py @@ -61,8 +61,8 @@ class SyncRTIO(Module): self.submodules.outputs = ClockDomainsRenamer("rio")( SED(channels, tsc.glbl_fine_ts_width, lane_count=lane_count, fifo_depth=fifo_depth, - enable_spread=False, report_buffer_space=True, - interface=self.cri)) + enable_spread=True, fifo_high_watermark=0.75, + report_buffer_space=True, interface=self.cri)) self.comb += self.outputs.coarse_timestamp.eq(tsc.coarse_ts) self.sync += self.outputs.minimum_coarse_timestamp.eq(tsc.coarse_ts + 16) @@ -136,7 +136,7 @@ class DRTIOSatellite(Module): self.sync += [ If(self.tsc_loaded.re, self.tsc_loaded.w.eq(0)), - If(self.rt_packet.tsc_load, self.tsc_loaded.w.eq(1)) + If(self.rt_packet.tsc_load, self.tsc_loaded.w.eq(1)), ] self.submodules.rt_errors = rt_errors_satellite.RTErrorsSatellite( diff --git a/artiq/gateware/drtio/rt_controller_repeater.py b/artiq/gateware/drtio/rt_controller_repeater.py index 53b2b1b07..79b9559eb 100644 --- a/artiq/gateware/drtio/rt_controller_repeater.py +++ b/artiq/gateware/drtio/rt_controller_repeater.py @@ -17,12 +17,11 @@ class RTController(Module, AutoCSR): self.sync += rt_packet.reset.eq(self.reset.storage) - set_time_stb = Signal() self.sync += [ - If(rt_packet.set_time_stb, set_time_stb.eq(0)), - If(self.set_time.re, set_time_stb.eq(1)) + If(rt_packet.set_time_ack, rt_packet.set_time_stb.eq(0)), + If(self.set_time.re, rt_packet.set_time_stb.eq(1)) ] - self.comb += self.set_time.w.eq(set_time_stb) + self.comb += self.set_time.w.eq(rt_packet.set_time_stb) errors = [ (rt_packet.err_unknown_packet_type, "rtio_rx", None, None), diff --git a/artiq/gateware/drtio/transceiver/gtp_7series.py b/artiq/gateware/drtio/transceiver/gtp_7series.py index 75c31a486..a5c0d0c00 100644 --- a/artiq/gateware/drtio/transceiver/gtp_7series.py +++ b/artiq/gateware/drtio/transceiver/gtp_7series.py @@ -18,7 +18,7 @@ class GTPSingle(Module): # # # - self.stable_clkin = Signal() + self.clk_path_ready = Signal() self.txenable = Signal() self.submodules.encoder = encoder = Encoder(2, True) self.submodules.decoders = decoders = [ClockDomainsRenamer("rtio_rx")( @@ -40,7 +40,7 @@ class GTPSingle(Module): self.submodules += rx_init self.comb += [ - tx_init.stable_clkin.eq(self.stable_clkin), + tx_init.clk_path_ready.eq(self.clk_path_ready), qpll_channel.reset.eq(tx_init.pllreset), tx_init.plllock.eq(qpll_channel.lock) ] @@ -715,7 +715,7 @@ class GTP(Module, TransceiverInterface): def __init__(self, qpll_channel, data_pads, sys_clk_freq, rtio_clk_freq, master=0): self.nchannels = nchannels = len(data_pads) self.gtps = [] - + 
self.clk_path_ready = Signal() # # # channel_interfaces = [] @@ -736,7 +736,7 @@ class GTP(Module, TransceiverInterface): TransceiverInterface.__init__(self, channel_interfaces) for n, gtp in enumerate(self.gtps): self.comb += [ - gtp.stable_clkin.eq(self.stable_clkin.storage), + gtp.clk_path_ready.eq(self.clk_path_ready), gtp.txenable.eq(self.txenable.storage[n]) ] diff --git a/artiq/gateware/drtio/transceiver/gtp_7series_init.py b/artiq/gateware/drtio/transceiver/gtp_7series_init.py index 8916c2c36..d7d86c459 100644 --- a/artiq/gateware/drtio/transceiver/gtp_7series_init.py +++ b/artiq/gateware/drtio/transceiver/gtp_7series_init.py @@ -10,7 +10,7 @@ __all__ = ["GTPTXInit", "GTPRXInit"] class GTPTXInit(Module): def __init__(self, sys_clk_freq, mode="single"): - self.stable_clkin = Signal() + self.clk_path_ready = Signal() self.done = Signal() self.restart = Signal() @@ -87,7 +87,7 @@ class GTPTXInit(Module): startup_fsm.act("PLL_RESET", self.pllreset.eq(1), pll_reset_timer.wait.eq(1), - If(pll_reset_timer.done & self.stable_clkin, + If(pll_reset_timer.done & self.clk_path_ready, NextState("GTP_RESET") ) ) diff --git a/artiq/gateware/drtio/transceiver/gtx_7series.py b/artiq/gateware/drtio/transceiver/gtx_7series.py index f62663895..693e91ea4 100644 --- a/artiq/gateware/drtio/transceiver/gtx_7series.py +++ b/artiq/gateware/drtio/transceiver/gtx_7series.py @@ -279,14 +279,13 @@ class GTX(Module, TransceiverInterface): self.nchannels = nchannels = len(pads) self.gtxs = [] self.rtio_clk_freq = clk_freq - + self.clk_path_ready = Signal() # # # refclk = Signal() - clk_enable = Signal() self.specials += Instance("IBUFDS_GTE2", - i_CEB=~clk_enable, + i_CEB=0, i_I=clock_pads.p, i_IB=clock_pads.n, o_O=refclk, @@ -315,14 +314,10 @@ class GTX(Module, TransceiverInterface): for n, gtx in enumerate(self.gtxs): self.comb += [ gtx.txenable.eq(self.txenable.storage[n]), - gtx.tx_init.stable_clkin.eq(clk_enable) + gtx.tx_init.clk_path_ready.eq(self.clk_path_ready) ] # rx_init is in SYS domain, rather than bootstrap - self.specials += MultiReg(clk_enable, gtx.rx_init.stable_clkin) - - # stable_clkin resets after reboot since it's in SYS domain - # still need to keep clk_enable high after this - self.sync.bootstrap += clk_enable.eq(self.stable_clkin.storage | self.gtxs[0].tx_init.cplllock) + self.specials += MultiReg(self.clk_path_ready, gtx.rx_init.clk_path_ready) # Connect slave i's `rtio_rx` clock to `rtio_rxi` clock for i in range(nchannels): diff --git a/artiq/gateware/drtio/transceiver/gtx_7series_init.py b/artiq/gateware/drtio/transceiver/gtx_7series_init.py index 6f1bff15e..e5b67f125 100644 --- a/artiq/gateware/drtio/transceiver/gtx_7series_init.py +++ b/artiq/gateware/drtio/transceiver/gtx_7series_init.py @@ -16,7 +16,7 @@ class GTXInit(Module): assert mode in ["single", "master", "slave"] self.mode = mode - self.stable_clkin = Signal() + self.clk_path_ready = Signal() self.done = Signal() self.restart = Signal() @@ -110,7 +110,7 @@ class GTXInit(Module): startup_fsm.act("INITIAL", startup_timer.wait.eq(1), - If(startup_timer.done & self.stable_clkin, NextState("RESET_PLL")) + If(startup_timer.done & self.clk_path_ready, NextState("RESET_PLL")) ) startup_fsm.act("RESET_PLL", gtXxreset.eq(1), diff --git a/artiq/gateware/eem_7series.py b/artiq/gateware/eem_7series.py index 9a20e5ae7..683646c02 100644 --- a/artiq/gateware/eem_7series.py +++ b/artiq/gateware/eem_7series.py @@ -134,7 +134,7 @@ def peripheral_shuttler(module, peripheral, **kwargs): port, port_aux = peripheral["ports"] else: raise 
ValueError("wrong number of ports") - eem.Shuttler.add_std(module, port, port_aux) + eem.Shuttler.add_std(module, port, port_aux, **kwargs) peripheral_processors = { "dio": peripheral_dio, diff --git a/artiq/gateware/rtio/sed/core.py b/artiq/gateware/rtio/sed/core.py index 8953ff9d2..32ed90996 100644 --- a/artiq/gateware/rtio/sed/core.py +++ b/artiq/gateware/rtio/sed/core.py @@ -12,10 +12,13 @@ __all__ = ["SED"] class SED(Module): def __init__(self, channels, glbl_fine_ts_width, - lane_count=8, fifo_depth=128, enable_spread=True, + lane_count=8, fifo_depth=128, fifo_high_watermark=1.0, enable_spread=True, quash_channels=[], report_buffer_space=False, interface=None): seqn_width = layouts.seqn_width(lane_count, fifo_depth) + fifo_high_watermark = int(fifo_high_watermark * fifo_depth) + assert fifo_depth >= fifo_high_watermark + self.submodules.lane_dist = LaneDistributor(lane_count, seqn_width, layouts.fifo_payload(channels), [channel.interface.o.delay for channel in channels], @@ -23,7 +26,7 @@ class SED(Module): enable_spread=enable_spread, quash_channels=quash_channels, interface=interface) - self.submodules.fifos = FIFOs(lane_count, fifo_depth, + self.submodules.fifos = FIFOs(lane_count, fifo_depth, fifo_high_watermark, layouts.fifo_payload(channels), report_buffer_space) self.submodules.gates = Gates(lane_count, seqn_width, layouts.fifo_payload(channels), diff --git a/artiq/gateware/rtio/sed/fifos.py b/artiq/gateware/rtio/sed/fifos.py index 3bf1bfc5a..81a260678 100644 --- a/artiq/gateware/rtio/sed/fifos.py +++ b/artiq/gateware/rtio/sed/fifos.py @@ -11,7 +11,7 @@ __all__ = ["FIFOs"] class FIFOs(Module): - def __init__(self, lane_count, fifo_depth, layout_payload, report_buffer_space=False): + def __init__(self, lane_count, fifo_depth, high_watermark, layout_payload, report_buffer_space=False): seqn_width = layouts.seqn_width(lane_count, fifo_depth) self.input = [Record(layouts.fifo_ingress(seqn_width, layout_payload)) for _ in range(lane_count)] @@ -33,6 +33,7 @@ class FIFOs(Module): fifo.din.eq(Cat(input.seqn, input.payload.raw_bits())), fifo.we.eq(input.we), input.writable.eq(fifo.writable), + input.high_watermark.eq(fifo.level >= high_watermark), Cat(output.seqn, output.payload.raw_bits()).eq(fifo.dout), output.readable.eq(fifo.readable), diff --git a/artiq/gateware/rtio/sed/lane_distributor.py b/artiq/gateware/rtio/sed/lane_distributor.py index 08a3fa716..f14010362 100644 --- a/artiq/gateware/rtio/sed/lane_distributor.py +++ b/artiq/gateware/rtio/sed/lane_distributor.py @@ -154,8 +154,10 @@ class LaneDistributor(Module): self.comb += lio.payload.timestamp.eq(compensated_timestamp) # cycle #3, read status + current_lane_high_watermark = Signal() current_lane_writable = Signal() self.comb += [ + current_lane_high_watermark.eq(Array(lio.high_watermark for lio in self.output)[current_lane]), current_lane_writable.eq(Array(lio.writable for lio in self.output)[current_lane]), o_status_wait.eq(~current_lane_writable) ] @@ -170,12 +172,10 @@ class LaneDistributor(Module): self.sequence_error_channel.eq(self.cri.chan_sel[:16]) ] - # current lane has been full, spread events by switching to the next. + # current lane has reached high watermark, spread events by switching to the next. 
if enable_spread: - current_lane_writable_r = Signal(reset=1) self.sync += [ - current_lane_writable_r.eq(current_lane_writable), - If(~current_lane_writable_r & current_lane_writable, + If(current_lane_high_watermark | ~current_lane_writable, force_laneB.eq(1) ), If(do_write, diff --git a/artiq/gateware/rtio/sed/layouts.py b/artiq/gateware/rtio/sed/layouts.py index 1fbb8f6ec..c3b69d64b 100644 --- a/artiq/gateware/rtio/sed/layouts.py +++ b/artiq/gateware/rtio/sed/layouts.py @@ -31,6 +31,7 @@ def fifo_ingress(seqn_width, layout_payload): return [ ("we", 1, DIR_M_TO_S), ("writable", 1, DIR_S_TO_M), + ("high_watermark", 1, DIR_S_TO_M), ("seqn", seqn_width, DIR_M_TO_S), ("payload", [(a, b, DIR_M_TO_S) for a, b in layout_payload]) ] diff --git a/artiq/gateware/targets/efc.py b/artiq/gateware/targets/efc.py index 42a4f031b..5410eafa3 100644 --- a/artiq/gateware/targets/efc.py +++ b/artiq/gateware/targets/efc.py @@ -29,9 +29,10 @@ class Satellite(BaseSoC, AMPSoC): } mem_map.update(BaseSoC.mem_map) - def __init__(self, gateware_identifier_str=None, **kwargs): + def __init__(self, gateware_identifier_str=None, hw_rev="v1.1", **kwargs): BaseSoC.__init__(self, cpu_type="vexriscv", + hw_rev=hw_rev, cpu_bus_width=64, sdram_controller_type="minicon", l2_size=128*1024, @@ -82,9 +83,10 @@ class Satellite(BaseSoC, AMPSoC): core.link_layer, self.cpu_dw)) self.csr_devices.append("drtioaux0") + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) memory_address = self.mem_map["drtioaux"] - self.add_wb_slave(memory_address, 0x800, self.drtioaux0.bus) - self.add_memory_region("drtioaux0_mem", memory_address | self.shadow_base, 0x800) + self.add_wb_slave(memory_address, drtio_aux_mem_size, self.drtioaux0.bus) + self.add_memory_region("drtioaux0_mem", memory_address | self.shadow_base, drtio_aux_mem_size) self.config["HAS_DRTIO"] = None self.add_csr_group("drtioaux", ["drtioaux0"]) @@ -242,12 +244,15 @@ def main(): builder_args(parser) parser.set_defaults(output_dir="artiq_efc") parser.add_argument("-V", "--variant", default="shuttler") + parser.add_argument("--hw-rev", choices=["v1.0", "v1.1"], default="v1.1", + help="Hardware revision") parser.add_argument("--gateware-identifier-str", default=None, help="Override ROM identifier") args = parser.parse_args() argdict = dict() argdict["gateware_identifier_str"] = args.gateware_identifier_str + argdict["hw_rev"] = args.hw_rev soc = Satellite(**argdict) build_artiq_soc(soc, builder_argdict(args)) diff --git a/artiq/gateware/targets/kasli.py b/artiq/gateware/targets/kasli.py index ca3364398..0bdc50f28 100755 --- a/artiq/gateware/targets/kasli.py +++ b/artiq/gateware/targets/kasli.py @@ -1,6 +1,8 @@ #!/usr/bin/env python3 import argparse +import logging +from packaging.version import Version from migen import * from migen.genlib.resetsync import AsyncResetSynchronizer @@ -14,16 +16,21 @@ from misoc.targets.kasli import ( BaseSoC, MiniSoC, soc_kasli_args, soc_kasli_argdict) from misoc.integration.builder import builder_args, builder_argdict +from artiq import __version__ as artiq_version from artiq.gateware.amp import AMPSoC from artiq.gateware import rtio from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_7series, edge_counter from artiq.gateware.rtio.xilinx_clocking import fix_serdes_timing_path -from artiq.gateware import eem +from artiq.gateware import rtio, eem, eem_7series from artiq.gateware.drtio.transceiver import gtp_7series, eem_serdes from artiq.gateware.drtio.siphaser import SiPhaser7Series from 
artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer from artiq.gateware.drtio import * +from artiq.gateware.wrpll import wrpll from artiq.build_soc import * +from artiq.coredevice import jsondesc + +logger = logging.getLogger(__name__) class SMAClkinForward(Module): @@ -50,7 +57,7 @@ class StandaloneBase(MiniSoC, AMPSoC): } mem_map.update(MiniSoC.mem_map) - def __init__(self, gateware_identifier_str=None, hw_rev="v2.0", **kwargs): + def __init__(self, gateware_identifier_str=None, with_wrpll=False, hw_rev="v2.0", **kwargs): if hw_rev in ("v1.0", "v1.1"): cpu_bus_width = 32 else: @@ -76,7 +83,6 @@ class StandaloneBase(MiniSoC, AMPSoC): self.submodules.error_led = gpio.GPIOOut(Cat( self.platform.request("error_led"))) self.csr_devices.append("error_led") - self.submodules += SMAClkinForward(self.platform) cdr_clk_out = self.platform.request("cdr_clk_clean") else: cdr_clk_out = self.platform.request("si5324_clkout") @@ -98,12 +104,31 @@ class StandaloneBase(MiniSoC, AMPSoC): self.crg.configure(cdr_clk_buf) + if with_wrpll: + clk_synth = self.platform.request("cdr_clk_clean_fabric") + clk_synth_se = Signal() + self.platform.add_period_constraint(clk_synth.p, 8.0) + self.specials += Instance("IBUFGDS", p_DIFF_TERM="TRUE", p_IBUF_LOW_PWR="FALSE", i_I=clk_synth.p, i_IB=clk_synth.n, o_O=clk_synth_se) + self.submodules.wrpll_refclk = wrpll.FrequencyMultiplier(self.platform.request("sma_clkin")) + self.submodules.wrpll = wrpll.WRPLL( + platform=self.platform, + cd_ref=self.wrpll_refclk.cd_ref, + main_clk_se=clk_synth_se) + self.csr_devices.append("wrpll_refclk") + self.csr_devices.append("wrpll") + self.interrupt_devices.append("wrpll") + self.config["HAS_SI549"] = None + self.config["WRPLL_REF_CLK"] = "SMA_CLKIN" + else: + if self.platform.hw_rev == "v2.0": + self.submodules += SMAClkinForward(self.platform) + self.config["HAS_SI5324"] = None + self.config["SI5324_SOFT_RESET"] = None + i2c = self.platform.request("i2c") self.submodules.i2c = gpio.GPIOTristate([i2c.scl, i2c.sda]) self.csr_devices.append("i2c") self.config["I2C_BUS_COUNT"] = 1 - self.config["HAS_SI5324"] = None - self.config["SI5324_SOFT_RESET"] = None def add_rtio(self, rtio_channels, sed_lanes=8): fix_serdes_timing_path(self.platform) @@ -130,89 +155,6 @@ class StandaloneBase(MiniSoC, AMPSoC): self.csr_devices.append("rtio_analyzer") -class Tester(StandaloneBase): - """ - Configuration for CI tests. Contains the maximum number of different EEMs. 
- """ - def __init__(self, hw_rev=None, dds=None, **kwargs): - if hw_rev is None: - hw_rev = "v2.0" - if dds is None: - dds = "ad9910" - StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs) - - # self.config["SI5324_EXT_REF"] = None - self.config["RTIO_FREQUENCY"] = "125.0" - if hw_rev == "v1.0": - # EEM clock fan-out from Si5324, not MMCX - self.comb += self.platform.request("clk_sel").eq(1) - - self.rtio_channels = [] - eem.DIO.add_std(self, 5, - ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X, - edge_counter_cls=edge_counter.SimpleEdgeCounter) - eem.Urukul.add_std(self, 0, 1, ttl_serdes_7series.Output_8X, dds, - ttl_simple.ClockGen) - eem.Sampler.add_std(self, 3, 2, ttl_serdes_7series.Output_8X) - eem.Zotino.add_std(self, 4, ttl_serdes_7series.Output_8X) - - if hw_rev in ("v1.0", "v1.1"): - for i in (1, 2): - sfp_ctl = self.platform.request("sfp_ctl", i) - phy = ttl_simple.Output(sfp_ctl.led) - self.submodules += phy - self.rtio_channels.append(rtio.Channel.from_phy(phy)) - - self.config["HAS_RTIO_LOG"] = None - self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) - self.rtio_channels.append(rtio.LogChannel()) - self.add_rtio(self.rtio_channels) - - -class SUServo(StandaloneBase): - """ - SUServo (Sampler-Urukul-Servo) extension variant configuration - """ - def __init__(self, hw_rev=None, **kwargs): - if hw_rev is None: - hw_rev = "v2.0" - StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs) - - # self.config["SI5324_EXT_REF"] = None - self.config["RTIO_FREQUENCY"] = "125.0" - if hw_rev == "v1.0": - # EEM clock fan-out from Si5324, not MMCX - self.comb += self.platform.request("clk_sel").eq(1) - - self.rtio_channels = [] - # EEM0, EEM1: DIO - eem.DIO.add_std(self, 0, - ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X) - eem.DIO.add_std(self, 1, - ttl_serdes_7series.Output_8X, ttl_serdes_7series.Output_8X) - - # EEM3/2: Sampler, EEM5/4: Urukul, EEM7/6: Urukul - eem.SUServo.add_std(self, - eems_sampler=(3, 2), - eems_urukul=[[5, 4], [7, 6]]) - - for i in (1, 2): - sfp_ctl = self.platform.request("sfp_ctl", i) - phy = ttl_simple.Output(sfp_ctl.led) - self.submodules += phy - self.rtio_channels.append(rtio.Channel.from_phy(phy)) - - self.config["HAS_RTIO_LOG"] = None - self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) - self.rtio_channels.append(rtio.LogChannel()) - - self.add_rtio(self.rtio_channels) - - pads = self.platform.lookup_request("sampler3_adc_data_p") - self.platform.add_false_path_constraints( - pads.clkout, self.crg.cd_sys.clk) - - class MasterBase(MiniSoC, AMPSoC): mem_map = { "cri_con": 0x10000000, @@ -223,7 +165,7 @@ class MasterBase(MiniSoC, AMPSoC): } mem_map.update(MiniSoC.mem_map) - def __init__(self, rtio_clk_freq=125e6, enable_sata=False, gateware_identifier_str=None, hw_rev="v2.0", **kwargs): + def __init__(self, rtio_clk_freq=125e6, enable_sata=False, with_wrpll=False, gateware_identifier_str=None, hw_rev="v2.0", **kwargs): if hw_rev in ("v1.0", "v1.1"): cpu_bus_width = 32 else: @@ -249,14 +191,33 @@ class MasterBase(MiniSoC, AMPSoC): self.submodules.error_led = gpio.GPIOOut(Cat( self.platform.request("error_led"))) self.csr_devices.append("error_led") - self.submodules += SMAClkinForward(platform) i2c = self.platform.request("i2c") self.submodules.i2c = gpio.GPIOTristate([i2c.scl, i2c.sda]) self.csr_devices.append("i2c") self.config["I2C_BUS_COUNT"] = 1 - self.config["HAS_SI5324"] = None - self.config["SI5324_SOFT_RESET"] = None + + if with_wrpll: + clk_synth = platform.request("cdr_clk_clean_fabric") + clk_synth_se = Signal() + 
platform.add_period_constraint(clk_synth.p, 8.0) + self.specials += Instance("IBUFGDS", p_DIFF_TERM="TRUE", p_IBUF_LOW_PWR="FALSE", i_I=clk_synth.p, i_IB=clk_synth.n, o_O=clk_synth_se) + self.submodules.wrpll_refclk = wrpll.FrequencyMultiplier(platform.request("sma_clkin")) + self.submodules.wrpll = wrpll.WRPLL( + platform=self.platform, + cd_ref=self.wrpll_refclk.cd_ref, + main_clk_se=clk_synth_se) + self.csr_devices.append("wrpll_refclk") + self.csr_devices.append("wrpll") + self.interrupt_devices.append("wrpll") + self.config["HAS_SI549"] = None + self.config["WRPLL_REF_CLK"] = "SMA_CLKIN" + else: + if platform.hw_rev == "v2.0": + self.submodules += SMAClkinForward(self.platform) + self.config["HAS_SI5324"] = None + self.config["SI5324_SOFT_RESET"] = None + self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6) drtio_data_pads = [] @@ -313,10 +274,11 @@ class MasterBase(MiniSoC, AMPSoC): setattr(self.submodules, coreaux_name, coreaux) self.csr_devices.append(coreaux_name) - memory_address = self.mem_map["drtioaux"] + 0x800*i - self.add_wb_slave(memory_address, 0x800, + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size*i + self.add_wb_slave(memory_address, drtio_aux_mem_size, coreaux.bus) - self.add_memory_region(memory_name, memory_address | self.shadow_base, 0x800) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) self.config["HAS_DRTIO"] = None self.config["HAS_DRTIO_ROUTING"] = None self.config["DRTIO_ROLE"] = "master" @@ -326,7 +288,8 @@ class MasterBase(MiniSoC, AMPSoC): txout_buf = Signal() self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf) - self.crg.configure(txout_buf, clk_sw=gtp.tx_init.done) + self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtp.tx_init.done) + self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap") platform.add_period_constraint(gtp.txoutclk, rtio_clk_period) platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period) @@ -394,10 +357,11 @@ class MasterBase(MiniSoC, AMPSoC): setattr(self.submodules, coreaux_name, coreaux) self.csr_devices.append(coreaux_name) - memory_address = self.mem_map["drtioaux"] + 0x800*channel - self.add_wb_slave(memory_address, 0x800, + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size*channel + self.add_wb_slave(memory_address, drtio_aux_mem_size, coreaux.bus) - self.add_memory_region(memory_name, memory_address | self.shadow_base, 0x800) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) def add_drtio_cpuif_groups(self): self.add_csr_group("drtio", self.drtio_csr_group) @@ -451,7 +415,7 @@ class SatelliteBase(BaseSoC, AMPSoC): } mem_map.update(BaseSoC.mem_map) - def __init__(self, rtio_clk_freq=125e6, enable_sata=False, *, gateware_identifier_str=None, hw_rev="v2.0", **kwargs): + def __init__(self, rtio_clk_freq=125e6, enable_sata=False, with_wrpll=False, *, gateware_identifier_str=None, hw_rev="v2.0", **kwargs): if hw_rev in ("v1.0", "v1.1"): cpu_bus_width = 32 else: @@ -529,15 +493,15 @@ class SatelliteBase(BaseSoC, AMPSoC): self.submodules.rtio_tsc = rtio.TSC(glbl_fine_ts_width=3) - drtioaux_csr_group = [] - drtioaux_memory_group = [] - drtiorep_csr_group = [] + self.drtioaux_csr_group = [] + 
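Review note (illustrative aside, not part of the patch): the enlarged aux memory gives each DRTIO link a 16 KiB window, split into a TX and an RX half of eight 1 KiB buffers each. A sketch of the resulting layout; the base argument stands in for mem_map["drtioaux"] and is hypothetical here:

    DRTIO_AUX_MEM_SIZE = 1024 * 16                 # max_packet * 8 buffers * 2 halves

    def aux_window(base, link):
        start = base + DRTIO_AUX_MEM_SIZE * link
        half = DRTIO_AUX_MEM_SIZE // 2
        return {"tx": (start, start + half),                          # 8 x 1 KiB TX buffers
                "rx": (start + half, start + DRTIO_AUX_MEM_SIZE)}     # 8 x 1 KiB RX buffers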
self.drtioaux_memory_group = [] + self.drtiorep_csr_group = [] self.drtio_cri = [] for i in range(len(self.gt_drtio.channels)): coreaux_name = "drtioaux" + str(i) memory_name = "drtioaux" + str(i) + "_mem" - drtioaux_csr_group.append(coreaux_name) - drtioaux_memory_group.append(memory_name) + self.drtioaux_csr_group.append(coreaux_name) + self.drtioaux_memory_group.append(memory_name) cdr = ClockDomainsRenamer({"rtio_rx": "rtio_rx" + str(i)}) @@ -550,7 +514,7 @@ class SatelliteBase(BaseSoC, AMPSoC): self.csr_devices.append("drtiosat") else: corerep_name = "drtiorep" + str(i-1) - drtiorep_csr_group.append(corerep_name) + self.drtiorep_csr_group.append(corerep_name) core = cdr(DRTIORepeater( self.rtio_tsc, self.gt_drtio.channels[i])) @@ -562,16 +526,14 @@ class SatelliteBase(BaseSoC, AMPSoC): setattr(self.submodules, coreaux_name, coreaux) self.csr_devices.append(coreaux_name) - memory_address = self.mem_map["drtioaux"] + 0x800*i - self.add_wb_slave(memory_address, 0x800, + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size * i + self.add_wb_slave(memory_address, drtio_aux_mem_size, coreaux.bus) - self.add_memory_region(memory_name, memory_address | self.shadow_base, 0x800) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) self.config["HAS_DRTIO"] = None self.config["HAS_DRTIO_ROUTING"] = None self.config["DRTIO_ROLE"] = "satellite" - self.add_csr_group("drtioaux", drtioaux_csr_group) - self.add_memory_group("drtioaux_mem", drtioaux_memory_group) - self.add_csr_group("drtiorep", drtiorep_csr_group) i2c = self.platform.request("i2c") self.submodules.i2c = gpio.GPIOTristate([i2c.scl, i2c.sda]) @@ -581,22 +543,39 @@ class SatelliteBase(BaseSoC, AMPSoC): rtio_clk_period = 1e9/rtio_clk_freq self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6) - self.submodules.siphaser = SiPhaser7Series( - si5324_clkin=platform.request("cdr_clk") if platform.hw_rev == "v2.0" - else platform.request("si5324_clkin"), - rx_synchronizer=self.rx_synchronizer, - ref_clk=self.crg.clk125_div2, ref_div2=True, - rtio_clk_freq=rtio_clk_freq) - platform.add_false_path_constraints( - self.crg.cd_sys.clk, self.siphaser.mmcm_freerun_output) - self.csr_devices.append("siphaser") - self.config["HAS_SI5324"] = None - self.config["SI5324_SOFT_RESET"] = None + if with_wrpll: + clk_synth = platform.request("cdr_clk_clean_fabric") + clk_synth_se = Signal() + platform.add_period_constraint(clk_synth.p, 8.0) + self.specials += Instance("IBUFGDS", p_DIFF_TERM="TRUE", p_IBUF_LOW_PWR="FALSE", i_I=clk_synth.p, i_IB=clk_synth.n, o_O=clk_synth_se) + self.submodules.wrpll = wrpll.WRPLL( + platform=self.platform, + cd_ref=self.gt_drtio.cd_rtio_rx0, + main_clk_se=clk_synth_se) + self.submodules.wrpll_skewtester = wrpll.SkewTester(self.rx_synchronizer) + self.csr_devices.append("wrpll_skewtester") + self.csr_devices.append("wrpll") + self.interrupt_devices.append("wrpll") + self.config["HAS_SI549"] = None + self.config["WRPLL_REF_CLK"] = "GT_CDR" + else: + self.submodules.siphaser = SiPhaser7Series( + si5324_clkin=platform.request("cdr_clk") if platform.hw_rev == "v2.0" + else platform.request("si5324_clkin"), + rx_synchronizer=self.rx_synchronizer, + ref_clk=self.crg.clk125_div2, ref_div2=True, + rtio_clk_freq=rtio_clk_freq) + platform.add_false_path_constraints( + self.crg.cd_sys.clk, self.siphaser.mmcm_freerun_output) + self.csr_devices.append("siphaser") + self.config["HAS_SI5324"] = None + 
self.config["SI5324_SOFT_RESET"] = None gtp = self.gt_drtio.gtps[0] txout_buf = Signal() self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf) - self.crg.configure(txout_buf, clk_sw=gtp.tx_init.done) + self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtp.tx_init.done) + self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap") platform.add_period_constraint(gtp.txoutclk, rtio_clk_period) platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period) @@ -638,77 +617,237 @@ class SatelliteBase(BaseSoC, AMPSoC): self.get_native_sdram_if(), cpu_dw=self.cpu_dw) self.csr_devices.append("rtio_analyzer") + def add_eem_drtio(self, eem_drtio_channels): + # Must be called before invoking add_rtio() to construct the CRI + # interconnect properly + self.submodules.eem_transceiver = eem_serdes.EEMSerdes(self.platform, eem_drtio_channels) + self.csr_devices.append("eem_transceiver") + self.config["HAS_DRTIO_EEM"] = None + self.config["EEM_DRTIO_COUNT"] = len(eem_drtio_channels) -class Master(MasterBase): - def __init__(self, hw_rev=None, **kwargs): + cdr = ClockDomainsRenamer({"rtio_rx": "sys"}) + for i in range(len(self.eem_transceiver.channels)): + channel = i + len(self.gt_drtio.channels) + corerep_name = "drtiorep" + str(channel-1) + coreaux_name = "drtioaux" + str(channel) + memory_name = "drtioaux" + str(channel) + "_mem" + self.drtiorep_csr_group.append(corerep_name) + self.drtioaux_csr_group.append(coreaux_name) + self.drtioaux_memory_group.append(memory_name) + + core = cdr(DRTIORepeater( + self.rtio_tsc, self.eem_transceiver.channels[i])) + setattr(self.submodules, corerep_name, core) + self.drtio_cri.append(core.cri) + self.csr_devices.append(corerep_name) + + coreaux = cdr(DRTIOAuxController(core.link_layer, self.cpu_dw)) + setattr(self.submodules, coreaux_name, coreaux) + self.csr_devices.append(coreaux_name) + + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size*channel + self.add_wb_slave(memory_address, drtio_aux_mem_size, + coreaux.bus) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) + + def add_drtio_cpuif_groups(self): + self.add_csr_group("drtiorep", self.drtiorep_csr_group) + self.add_csr_group("drtioaux", self.drtioaux_csr_group) + self.add_memory_group("drtioaux_mem", self.drtioaux_memory_group) + +class GenericStandalone(StandaloneBase): + def __init__(self, description, hw_rev=None,**kwargs): if hw_rev is None: - hw_rev = "v2.0" - MasterBase.__init__(self, hw_rev=hw_rev, **kwargs) + hw_rev = description["hw_rev"] + self.class_name_override = description["variant"] + StandaloneBase.__init__(self, + hw_rev=hw_rev, + with_wrpll=description["enable_wrpll"], + **kwargs) + self.config["RTIO_FREQUENCY"] = "{:.1f}".format(description["rtio_frequency"]/1e6) + if "ext_ref_frequency" in description: + self.config["SI5324_EXT_REF"] = None + self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format( + description["ext_ref_frequency"]/1e6) + if hw_rev == "v1.0": + # EEM clock fan-out from Si5324, not MMCX + self.comb += self.platform.request("clk_sel").eq(1) + + has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) + if has_grabber: + self.grabber_csr_group = [] self.rtio_channels = [] - - phy = ttl_simple.Output(self.platform.request("user_led", 0)) - self.submodules += phy 
- self.rtio_channels.append(rtio.Channel.from_phy(phy)) - # matches Tester EEM numbers - eem.DIO.add_std(self, 5, - ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X) - eem.Urukul.add_std(self, 0, 1, ttl_serdes_7series.Output_8X) + eem_7series.add_peripherals(self, description["peripherals"]) + if hw_rev in ("v1.0", "v1.1"): + for i in (1, 2): + print("SFP LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) + sfp_ctl = self.platform.request("sfp_ctl", i) + phy = ttl_simple.Output(sfp_ctl.led) + self.submodules += phy + self.rtio_channels.append(rtio.Channel.from_phy(phy)) + if hw_rev in ("v1.1", "v2.0"): + for i in range(3): + print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) + phy = ttl_simple.Output(self.platform.request("user_led", i)) + self.submodules += phy + self.rtio_channels.append(rtio.Channel.from_phy(phy)) self.config["HAS_RTIO_LOG"] = None self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) self.rtio_channels.append(rtio.LogChannel()) - self.add_rtio(self.rtio_channels) + self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"]) + + if has_grabber: + self.config["HAS_GRABBER"] = None + self.add_csr_group("grabber", self.grabber_csr_group) + for grabber in self.grabber_csr_group: + self.platform.add_false_path_constraints( + self.crg.cd_sys.clk, getattr(self, grabber).deserializer.cd_cl.clk) -class Satellite(SatelliteBase): - def __init__(self, hw_rev=None, **kwargs): +class GenericMaster(MasterBase): + def __init__(self, description, hw_rev=None, **kwargs): if hw_rev is None: - hw_rev = "v2.0" - SatelliteBase.__init__(self, hw_rev=hw_rev, **kwargs) + hw_rev = description["hw_rev"] + self.class_name_override = description["variant"] + has_drtio_over_eem = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"]) + MasterBase.__init__(self, + hw_rev=hw_rev, + rtio_clk_freq=description["rtio_frequency"], + enable_sata=description["enable_sata_drtio"], + enable_sys5x=has_drtio_over_eem, + with_wrpll=description["enable_wrpll"], + **kwargs) + if "ext_ref_frequency" in description: + self.config["SI5324_EXT_REF"] = None + self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format( + description["ext_ref_frequency"]/1e6) + if hw_rev == "v1.0": + # EEM clock fan-out from Si5324, not MMCX + self.comb += self.platform.request("clk_sel").eq(1) + + if has_drtio_over_eem: + self.eem_drtio_channels = [] + has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) + if has_grabber: + self.grabber_csr_group = [] self.rtio_channels = [] - phy = ttl_simple.Output(self.platform.request("user_led", 0)) - self.submodules += phy - self.rtio_channels.append(rtio.Channel.from_phy(phy)) - # matches Tester EEM numbers - eem.DIO.add_std(self, 5, - ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X) + eem_7series.add_peripherals(self, description["peripherals"]) + if hw_rev in ("v1.1", "v2.0"): + for i in range(3): + print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) + phy = ttl_simple.Output(self.platform.request("user_led", i)) + self.submodules += phy + self.rtio_channels.append(rtio.Channel.from_phy(phy)) - self.add_rtio(self.rtio_channels) + self.config["HAS_RTIO_LOG"] = None + self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) + self.rtio_channels.append(rtio.LogChannel()) + + if has_drtio_over_eem: + self.add_eem_drtio(self.eem_drtio_channels) + self.add_drtio_cpuif_groups() + + self.add_rtio(self.rtio_channels, 
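Review note (illustrative aside, not part of the patch): the user LEDs added to the RTIO channel list above are plain TTL outputs, so kernels can drive them once they appear in the device database. A hedged sketch of such an entry; the channel number is hypothetical and would normally be taken from the build log printout above or generated by artiq_ddb_template:

    device_db["led0"] = {
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 0x000038},   # hypothetical channel, use the printed value
    }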
sed_lanes=description["sed_lanes"]) + + if has_grabber: + self.config["HAS_GRABBER"] = None + self.add_csr_group("grabber", self.grabber_csr_group) + for grabber in self.grabber_csr_group: + self.platform.add_false_path_constraints( + self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk) -VARIANTS = {cls.__name__.lower(): cls for cls in [Tester, SUServo, Master, Satellite]} +class GenericSatellite(SatelliteBase): + def __init__(self, description, hw_rev=None, **kwargs): + if hw_rev is None: + hw_rev = description["hw_rev"] + self.class_name_override = description["variant"] + has_drtio_over_eem = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"]) + SatelliteBase.__init__(self, + hw_rev=hw_rev, + rtio_clk_freq=description["rtio_frequency"], + enable_sata=description["enable_sata_drtio"], + enable_sys5x=has_drtio_over_eem, + with_wrpll=description["enable_wrpll"], + **kwargs) + if hw_rev == "v1.0": + # EEM clock fan-out from Si5324, not MMCX + self.comb += self.platform.request("clk_sel").eq(1) + if has_drtio_over_eem: + self.eem_drtio_channels = [] + has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) + if has_grabber: + self.grabber_csr_group = [] + + self.rtio_channels = [] + eem_7series.add_peripherals(self, description["peripherals"]) + if hw_rev in ("v1.1", "v2.0"): + for i in range(3): + print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) + phy = ttl_simple.Output(self.platform.request("user_led", i)) + self.submodules += phy + self.rtio_channels.append(rtio.Channel.from_phy(phy)) + + self.config["HAS_RTIO_LOG"] = None + self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) + self.rtio_channels.append(rtio.LogChannel()) + + if has_drtio_over_eem: + self.add_eem_drtio(self.eem_drtio_channels) + self.add_drtio_cpuif_groups() + + self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"]) + if has_grabber: + self.config["HAS_GRABBER"] = None + self.add_csr_group("grabber", self.grabber_csr_group) + for grabber in self.grabber_csr_group: + self.platform.add_false_path_constraints( + self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk) def main(): parser = argparse.ArgumentParser( - description="ARTIQ device binary builder for Kasli systems") + description="ARTIQ device binary builder for generic Kasli systems") builder_args(parser) soc_kasli_args(parser) parser.set_defaults(output_dir="artiq_kasli") - parser.add_argument("-V", "--variant", default="tester", - help="variant: {} (default: %(default)s)".format( - "/".join(sorted(VARIANTS.keys())))) - parser.add_argument("--tester-dds", default=None, - help="Tester variant DDS type: ad9910/ad9912 " - "(default: ad9910)") + parser.add_argument("description", metavar="DESCRIPTION", + help="JSON system description file") parser.add_argument("--gateware-identifier-str", default=None, help="Override ROM identifier") args = parser.parse_args() + description = jsondesc.load(args.description) - argdict = dict() - argdict["gateware_identifier_str"] = args.gateware_identifier_str - argdict["dds"] = args.tester_dds + min_artiq_version = description["min_artiq_version"] + if Version(artiq_version) < Version(min_artiq_version): + logger.warning("ARTIQ version mismatch: current %s < %s minimum", + artiq_version, min_artiq_version) - variant = args.variant.lower() - try: - cls = VARIANTS[variant] - except KeyError: - raise SystemExit("Invalid variant (-V/--variant)") + if description["target"] != 
"kasli": + raise ValueError("Description is for a different target") - soc = cls(**soc_kasli_argdict(args), **argdict) + if description["drtio_role"] == "standalone": + cls = GenericStandalone + elif description["drtio_role"] == "master": + cls = GenericMaster + elif description["drtio_role"] == "satellite": + cls = GenericSatellite + else: + raise ValueError("Invalid DRTIO role") + + has_shuttler = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"]) + if has_shuttler and (description["drtio_role"] == "standalone"): + raise ValueError("Shuttler requires DRTIO, please switch role to master") + if description["enable_wrpll"] and description["hw_rev"] in ["v1.0", "v1.1"]: + raise ValueError("Kasli {} does not support WRPLL".format(description["hw_rev"])) + + soc = cls(description, gateware_identifier_str=args.gateware_identifier_str, **soc_kasli_argdict(args)) + args.variant = description["variant"] build_artiq_soc(soc, builder_argdict(args)) diff --git a/artiq/gateware/targets/kasli_generic.py b/artiq/gateware/targets/kasli_generic.py deleted file mode 100755 index 2f91de1b1..000000000 --- a/artiq/gateware/targets/kasli_generic.py +++ /dev/null @@ -1,187 +0,0 @@ -#!/usr/bin/env python3 - -import argparse -import logging -from packaging.version import Version - -from misoc.integration.builder import builder_args, builder_argdict -from misoc.targets.kasli import soc_kasli_args, soc_kasli_argdict - -from artiq import __version__ as artiq_version -from artiq.coredevice import jsondesc -from artiq.gateware import rtio, eem_7series -from artiq.gateware.rtio.phy import ttl_simple -from artiq.gateware.targets.kasli import StandaloneBase, MasterBase, SatelliteBase -from artiq.build_soc import * - -logger = logging.getLogger(__name__) - -class GenericStandalone(StandaloneBase): - def __init__(self, description, hw_rev=None,**kwargs): - if hw_rev is None: - hw_rev = description["hw_rev"] - self.class_name_override = description["variant"] - StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs) - self.config["RTIO_FREQUENCY"] = "{:.1f}".format(description["rtio_frequency"]/1e6) - if "ext_ref_frequency" in description: - self.config["SI5324_EXT_REF"] = None - self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format( - description["ext_ref_frequency"]/1e6) - if hw_rev == "v1.0": - # EEM clock fan-out from Si5324, not MMCX - self.comb += self.platform.request("clk_sel").eq(1) - - has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) - if has_grabber: - self.grabber_csr_group = [] - - self.rtio_channels = [] - eem_7series.add_peripherals(self, description["peripherals"]) - if hw_rev in ("v1.0", "v1.1"): - for i in (1, 2): - print("SFP LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) - sfp_ctl = self.platform.request("sfp_ctl", i) - phy = ttl_simple.Output(sfp_ctl.led) - self.submodules += phy - self.rtio_channels.append(rtio.Channel.from_phy(phy)) - if hw_rev == "v2.0": - for i in (1, 2): - print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels))) - phy = ttl_simple.Output(self.platform.request("user_led", i)) - self.submodules += phy - self.rtio_channels.append(rtio.Channel.from_phy(phy)) - - self.config["HAS_RTIO_LOG"] = None - self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) - self.rtio_channels.append(rtio.LogChannel()) - - self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"]) - - if has_grabber: - self.config["HAS_GRABBER"] = None - self.add_csr_group("grabber", 
self.grabber_csr_group) - for grabber in self.grabber_csr_group: - self.platform.add_false_path_constraints( - self.crg.cd_sys.clk, getattr(self, grabber).deserializer.cd_cl.clk) - - -class GenericMaster(MasterBase): - def __init__(self, description, hw_rev=None, **kwargs): - if hw_rev is None: - hw_rev = description["hw_rev"] - self.class_name_override = description["variant"] - has_drtio_over_eem = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"]) - MasterBase.__init__(self, - hw_rev=hw_rev, - rtio_clk_freq=description["rtio_frequency"], - enable_sata=description["enable_sata_drtio"], - enable_sys5x=has_drtio_over_eem, - **kwargs) - if "ext_ref_frequency" in description: - self.config["SI5324_EXT_REF"] = None - self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format( - description["ext_ref_frequency"]/1e6) - if hw_rev == "v1.0": - # EEM clock fan-out from Si5324, not MMCX - self.comb += self.platform.request("clk_sel").eq(1) - - if has_drtio_over_eem: - self.eem_drtio_channels = [] - has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) - if has_grabber: - self.grabber_csr_group = [] - - self.rtio_channels = [] - eem_7series.add_peripherals(self, description["peripherals"]) - self.config["HAS_RTIO_LOG"] = None - self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) - self.rtio_channels.append(rtio.LogChannel()) - - if has_drtio_over_eem: - self.add_eem_drtio(self.eem_drtio_channels) - self.add_drtio_cpuif_groups() - - self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"]) - - if has_grabber: - self.config["HAS_GRABBER"] = None - self.add_csr_group("grabber", self.grabber_csr_group) - for grabber in self.grabber_csr_group: - self.platform.add_false_path_constraints( - self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk) - - -class GenericSatellite(SatelliteBase): - def __init__(self, description, hw_rev=None, **kwargs): - if hw_rev is None: - hw_rev = description["hw_rev"] - self.class_name_override = description["variant"] - SatelliteBase.__init__(self, - hw_rev=hw_rev, - rtio_clk_freq=description["rtio_frequency"], - enable_sata=description["enable_sata_drtio"], - **kwargs) - if hw_rev == "v1.0": - # EEM clock fan-out from Si5324, not MMCX - self.comb += self.platform.request("clk_sel").eq(1) - - has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"]) - if has_grabber: - self.grabber_csr_group = [] - - self.rtio_channels = [] - eem_7series.add_peripherals(self, description["peripherals"]) - self.config["HAS_RTIO_LOG"] = None - self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) - self.rtio_channels.append(rtio.LogChannel()) - - self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"]) - if has_grabber: - self.config["HAS_GRABBER"] = None - self.add_csr_group("grabber", self.grabber_csr_group) - for grabber in self.grabber_csr_group: - self.platform.add_false_path_constraints( - self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk) - - -def main(): - parser = argparse.ArgumentParser( - description="ARTIQ device binary builder for generic Kasli systems") - builder_args(parser) - soc_kasli_args(parser) - parser.set_defaults(output_dir="artiq_kasli") - parser.add_argument("description", metavar="DESCRIPTION", - help="JSON system description file") - parser.add_argument("--gateware-identifier-str", default=None, - help="Override ROM identifier") - args = parser.parse_args() - description = 
jsondesc.load(args.description) - - min_artiq_version = description["min_artiq_version"] - if Version(artiq_version) < Version(min_artiq_version): - logger.warning("ARTIQ version mismatch: current %s < %s minimum", - artiq_version, min_artiq_version) - - if description["target"] != "kasli": - raise ValueError("Description is for a different target") - - if description["drtio_role"] == "standalone": - cls = GenericStandalone - elif description["drtio_role"] == "master": - cls = GenericMaster - elif description["drtio_role"] == "satellite": - cls = GenericSatellite - else: - raise ValueError("Invalid DRTIO role") - - has_shuttler = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"]) - if has_shuttler and (description["drtio_role"] == "standalone"): - raise ValueError("Shuttler requires DRTIO, please switch role to master") - - soc = cls(description, gateware_identifier_str=args.gateware_identifier_str, **soc_kasli_argdict(args)) - args.variant = description["variant"] - build_artiq_soc(soc, builder_argdict(args)) - - -if __name__ == "__main__": - main() diff --git a/artiq/gateware/targets/kc705.py b/artiq/gateware/targets/kc705.py index 1c62476e5..4200919ae 100755 --- a/artiq/gateware/targets/kc705.py +++ b/artiq/gateware/targets/kc705.py @@ -247,10 +247,11 @@ class _MasterBase(MiniSoC, AMPSoC): setattr(self.submodules, coreaux_name, coreaux) self.csr_devices.append(coreaux_name) - memory_address = self.mem_map["drtioaux"] + 0x800*i - self.add_wb_slave(memory_address, 0x800, + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size*i + self.add_wb_slave(memory_address, drtio_aux_mem_size, coreaux.bus) - self.add_memory_region(memory_name, memory_address | self.shadow_base, 0x800) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) self.config["HAS_DRTIO"] = None self.config["HAS_DRTIO_ROUTING"] = None self.add_csr_group("drtio", drtio_csr_group) @@ -273,7 +274,8 @@ class _MasterBase(MiniSoC, AMPSoC): txout_buf = Signal() self.specials += Instance("BUFG", i_I=gtx0.txoutclk, o_O=txout_buf) - self.crg.configure(txout_buf, clk_sw=gtx0.tx_init.done) + self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtx0.tx_init.done) + self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap") self.comb += [ platform.request("user_sma_clock_p").eq(ClockSignal("rtio_rx0")), @@ -404,10 +406,11 @@ class _SatelliteBase(BaseSoC, AMPSoC): setattr(self.submodules, coreaux_name, coreaux) self.csr_devices.append(coreaux_name) - memory_address = self.mem_map["drtioaux"] + 0x800*i - self.add_wb_slave(memory_address, 0x800, + drtio_aux_mem_size = 1024 * 16 # max_packet * 8 buffers * 2 (tx, rx halves) + memory_address = self.mem_map["drtioaux"] + drtio_aux_mem_size*i + self.add_wb_slave(memory_address, drtio_aux_mem_size, coreaux.bus) - self.add_memory_region(memory_name, memory_address | self.shadow_base, 0x800) + self.add_memory_region(memory_name, memory_address | self.shadow_base, drtio_aux_mem_size) self.config["HAS_DRTIO"] = None self.config["HAS_DRTIO_ROUTING"] = None self.add_csr_group("drtioaux", drtioaux_csr_group) @@ -440,7 +443,8 @@ class _SatelliteBase(BaseSoC, AMPSoC): txout_buf = Signal() self.specials += Instance("BUFG", i_I=gtx0.txoutclk, o_O=txout_buf) - self.crg.configure(txout_buf, clk_sw=gtx0.tx_init.done) + 
self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtx0.tx_init.done) + self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap") self.comb += [ platform.request("user_sma_clock_p").eq(ClockSignal("rtio_rx0")), diff --git a/artiq/gateware/test/drtio/test_aux_controller.py b/artiq/gateware/test/drtio/test_aux_controller.py index 6c65307ed..1ea809c0b 100644 --- a/artiq/gateware/test/drtio/test_aux_controller.py +++ b/artiq/gateware/test/drtio/test_aux_controller.py @@ -60,28 +60,36 @@ class TestAuxController(unittest.TestCase): while (yield from dut[dw].aux_controller.transmitter.aux_tx.read()): yield - def receive_packet(dw): + def receive_packet(pkt_no, len, dw): while not (yield from dut[dw].aux_controller.receiver.aux_rx_present.read()): yield - length = yield from dut[dw].aux_controller.receiver.aux_rx_length.read() + p = yield from dut[dw].aux_controller.receiver.aux_read_pointer.read() + self.assertEqual(pkt_no, p) r = [] - for i in range(length//(dw//8)): + # packet length is derived by software now, so we pass it for the test + for i in range(len): + if dw == 64: + offset = 2048 + if dw == 32: + offset = 1024 + print(dw) + max_packet = 1024 r.append((yield from dut[dw].aux_controller.bus.read(256+i))) yield from dut[dw].aux_controller.receiver.aux_rx_present.write(1) return r prng = random.Random(0) - def send_and_check_packet(dw): + def send_and_check_packet(i, dw): data = [prng.randrange(2**dw-1) for _ in range(prng.randrange(1, 16))] yield from send_packet(data, dw) - received = yield from receive_packet(dw) + received = yield from receive_packet(i, len(data), dw) self.assertEqual(data, received) def sim(dw): yield from link_init(dw) for i in range(8): - yield from send_and_check_packet(dw) + yield from send_and_check_packet(i, dw) @passive def rt_traffic(dw): @@ -95,5 +103,5 @@ class TestAuxController(unittest.TestCase): yield dut[dw].link_layer.tx_rt_frame.eq(0) yield - run_simulation(dut[32], [sim(32), rt_traffic(32)]) - run_simulation(dut[64], [sim(64), rt_traffic(64)]) + run_simulation(dut[32], [sim(32), rt_traffic(32)], vcd_name="32bit.vcd") + run_simulation(dut[64], [sim(64), rt_traffic(64)], vcd_name="64bit.vcd") diff --git a/artiq/gateware/wrpll/__init__.py b/artiq/gateware/wrpll/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/artiq/gateware/wrpll/ddmtd.py b/artiq/gateware/wrpll/ddmtd.py new file mode 100644 index 000000000..2d0380810 --- /dev/null +++ b/artiq/gateware/wrpll/ddmtd.py @@ -0,0 +1,119 @@ +from migen import * +from migen.genlib.cdc import PulseSynchronizer, MultiReg +from misoc.interconnect.csr import * + + +class DDMTDSampler(Module): + def __init__(self, cd_ref, main_clk_se): + self.ref_beating = Signal() + self.main_beating = Signal() + + # # # + + ref_clk = Signal() + self.specials +=[ + # ISERDESE2 can only be driven from fabric via IDELAYE2 (see UG471) + Instance("IDELAYE2", + p_DELAY_SRC="DATAIN", + p_HIGH_PERFORMANCE_MODE="TRUE", + p_REFCLK_FREQUENCY=208.3, # REFCLK frequency from IDELAYCTRL + p_IDELAY_VALUE=0, + + i_DATAIN=cd_ref.clk, + + o_DATAOUT=ref_clk + ), + Instance("ISERDESE2", + p_IOBDELAY="IFD", # use DDLY as input + p_DATA_RATE="SDR", + p_DATA_WIDTH=2, # min is 2 + p_NUM_CE=1, + + i_DDLY=ref_clk, + i_CE1=1, + i_CLK=ClockSignal("helper"), + i_CLKDIV=ClockSignal("helper"), + + o_Q1=self.ref_beating + ), + Instance("ISERDESE2", + p_DATA_RATE="SDR", + p_DATA_WIDTH=2, # min is 2 + 
p_NUM_CE=1, + + i_D=main_clk_se, + i_CE1=1, + i_CLK=ClockSignal("helper"), + i_CLKDIV=ClockSignal("helper"), + + o_Q1=self.main_beating, + ), + ] + + +class DDMTDDeglitcherMedianEdge(Module): + def __init__(self, counter, input_signal, stable_0_period=100, stable_1_period=100): + self.tag = Signal(len(counter)) + self.detect = Signal() + + stable_0_counter = Signal(reset=stable_0_period - 1, max=stable_0_period) + stable_1_counter = Signal(reset=stable_1_period - 1, max=stable_1_period) + + # # # + + # Based on CERN's median edge deglitcher FSM + # https://white-rabbit.web.cern.ch/documents/Precise_time_and_frequency_transfer_in_a_White_Rabbit_network.pdf (p.72) + fsm = ClockDomainsRenamer("helper")(FSM(reset_state="WAIT_STABLE_0")) + self.submodules += fsm + + fsm.act("WAIT_STABLE_0", + If(stable_0_counter != 0, + NextValue(stable_0_counter, stable_0_counter - 1) + ).Else( + NextValue(stable_0_counter, stable_0_period - 1), + NextState("WAIT_EDGE") + ), + If(input_signal, + NextValue(stable_0_counter, stable_0_period - 1) + ), + ) + fsm.act("WAIT_EDGE", + If(input_signal, + NextValue(self.tag, counter), + NextState("GOT_EDGE") + ) + ) + fsm.act("GOT_EDGE", + If(stable_1_counter != 0, + NextValue(stable_1_counter, stable_1_counter - 1) + ).Else( + NextValue(stable_1_counter, stable_1_period - 1), + self.detect.eq(1), + NextState("WAIT_STABLE_0") + ), + If(~input_signal, + NextValue(self.tag, self.tag + 1), + NextValue(stable_1_counter, stable_1_period - 1) + ), + ) + + +class DDMTD(Module): + def __init__(self, counter, input_signal): + + # in helper clock domain + self.h_tag = Signal(len(counter)) + self.h_tag_update = Signal() + + # # # + + deglitcher = DDMTDDeglitcherMedianEdge(counter, input_signal) + self.submodules += deglitcher + + self.sync.helper += [ + self.h_tag_update.eq(0), + If(deglitcher.detect, + self.h_tag_update.eq(1), + self.h_tag.eq(deglitcher.tag) + ) + ] \ No newline at end of file diff --git a/artiq/gateware/wrpll/si549.py b/artiq/gateware/wrpll/si549.py new file mode 100644 index 000000000..3ae894832 --- /dev/null +++ b/artiq/gateware/wrpll/si549.py @@ -0,0 +1,277 @@ +from migen import * +from migen.genlib.fsm import * + +from misoc.interconnect.csr import * + + +class I2CClockGen(Module): + def __init__(self, width): + self.load = Signal(width) + self.clk2x = Signal() + + cnt = Signal.like(self.load) + self.comb += [ + self.clk2x.eq(cnt == 0), + ] + self.sync += [ + If(self.clk2x, + cnt.eq(self.load), + ).Else( + cnt.eq(cnt - 1), + ) + ] + + +class I2CMasterMachine(Module): + def __init__(self, clock_width): + self.scl = Signal(reset=1) + self.sda_o = Signal(reset=1) + self.sda_i = Signal() + + self.submodules.cg = CEInserter()(I2CClockGen(clock_width)) + self.start = Signal() + self.stop = Signal() + self.write = Signal() + self.ack = Signal() + self.data = Signal(8) + self.ready = Signal() + + # # # + + bits = Signal(4) + data = Signal(8) + + fsm = CEInserter()(FSM("IDLE")) + self.submodules += fsm + + fsm.act("IDLE", + self.ready.eq(1), + If(self.start, + NextState("START0"), + ).Elif(self.stop, + NextState("STOP0"), + ).Elif(self.write, + NextValue(bits, 8), + NextValue(data, self.data), + NextState("WRITE0") + ) + ) + + fsm.act("START0", + NextValue(self.scl, 1), + NextState("START1") + ) + fsm.act("START1", + NextValue(self.sda_o, 0), + NextState("IDLE") + ) + + fsm.act("STOP0", + NextValue(self.scl, 0), + NextState("STOP1") + ) + fsm.act("STOP1", + NextValue(self.sda_o, 0), + NextState("STOP2") + ) + fsm.act("STOP2", + NextValue(self.scl, 1), + 
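Review note (illustrative aside, not part of the patch): rough numbers behind the DDMTD blocks above. The helper DCXO runs slightly offset from the 125 MHz clocks being compared, so the sampled outputs beat at a low frequency and the helper-domain counter tags measure phase with a large magnification. The offset factor below is an assumed example only; the actual value is set by firmware through the Si549:

    f_rtio = 125e6
    N = 2**15                               # assumed offset factor, for illustration only
    f_helper = f_rtio * N / (N + 1)         # ~124.996 MHz
    f_beat = f_rtio - f_helper              # ~3.8 kHz beat seen by the deglitcher
    tag_lsb = 1 / (f_rtio * N)              # ~0.24 ps of input phase per counter step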
NextState("STOP3") + ) + fsm.act("STOP3", + NextValue(self.sda_o, 1), + NextState("IDLE") + ) + + fsm.act("WRITE0", + NextValue(self.scl, 0), + NextState("WRITE1") + ) + fsm.act("WRITE1", + If(bits == 0, + NextValue(self.sda_o, 1), + NextState("READACK0"), + ).Else( + NextValue(self.sda_o, data[7]), + NextState("WRITE2"), + ) + ) + fsm.act("WRITE2", + NextValue(self.scl, 1), + NextValue(data[1:], data[:-1]), + NextValue(bits, bits - 1), + NextState("WRITE0"), + ) + fsm.act("READACK0", + NextValue(self.scl, 1), + NextState("READACK1"), + ) + fsm.act("READACK1", + NextValue(self.ack, ~self.sda_i), + NextState("IDLE") + ) + + run = Signal() + idle = Signal() + self.comb += [ + run.eq((self.start | self.stop | self.write) & self.ready), + idle.eq(~run & fsm.ongoing("IDLE")), + self.cg.ce.eq(~idle), + fsm.ce.eq(run | self.cg.clk2x), + ] + + +class ADPLLProgrammer(Module): + def __init__(self): + self.i2c_divider = Signal(16) + self.i2c_address = Signal(7) + + self.adpll = Signal(24) + self.stb = Signal() + self.busy = Signal() + self.nack = Signal() + + self.scl = Signal() + self.sda_i = Signal() + self.sda_o = Signal() + + # # # + + master = I2CMasterMachine(16) + self.submodules += master + + self.comb += [ + master.cg.load.eq(self.i2c_divider), + self.scl.eq(master.scl), + master.sda_i.eq(self.sda_i), + self.sda_o.eq(master.sda_o) + ] + + fsm = FSM() + self.submodules += fsm + + fsm.act("IDLE", + If(self.stb, + NextValue(self.nack, 0), + NextState("START") + ) + ) + fsm.act("START", + master.start.eq(1), + If(master.ready, NextState("DEVADDRESS")) + ) + fsm.act("DEVADDRESS", + master.data.eq(self.i2c_address << 1), + master.write.eq(1), + If(master.ready, NextState("REGADRESS")) + ) + fsm.act("REGADRESS", + master.data.eq(231), + master.write.eq(1), + If(master.ready, + If(master.ack, + NextState("DATA0") + ).Else( + NextValue(self.nack, 1), + NextState("STOP") + ) + ) + ) + fsm.act("DATA0", + master.data.eq(self.adpll[0:8]), + master.write.eq(1), + If(master.ready, + If(master.ack, + NextState("DATA1") + ).Else( + NextValue(self.nack, 1), + NextState("STOP") + ) + ) + ) + fsm.act("DATA1", + master.data.eq(self.adpll[8:16]), + master.write.eq(1), + If(master.ready, + If(master.ack, + NextState("DATA2") + ).Else( + NextValue(self.nack, 1), + NextState("STOP") + ) + ) + ) + fsm.act("DATA2", + master.data.eq(self.adpll[16:24]), + master.write.eq(1), + If(master.ready, + If(~master.ack, NextValue(self.nack, 1)), + NextState("STOP") + ) + ) + fsm.act("STOP", + master.stop.eq(1), + If(master.ready, + If(~master.ack, NextValue(self.nack, 1)), + NextState("IDLE") + ) + ) + + self.comb += self.busy.eq(~fsm.ongoing("IDLE")) + + +class Si549(Module, AutoCSR): + def __init__(self, pads): + self.i2c_divider = CSRStorage(16, reset=75) + self.i2c_address = CSRStorage(7) + + self.adpll = CSRStorage(24) + self.adpll_stb = CSR() + self.adpll_busy = CSRStatus() + self.nack = CSRStatus() + + self.bitbang_enable = CSRStorage() + + self.sda_oe = CSRStorage() + self.sda_out = CSRStorage() + self.sda_in = CSRStatus() + self.scl_oe = CSRStorage() + self.scl_out = CSRStorage() + + # # # + + self.submodules.programmer = ADPLLProgrammer() + + self.sync += self.programmer.stb.eq(self.adpll_stb.re) + + self.comb += [ + self.programmer.i2c_divider.eq(self.i2c_divider.storage), + self.programmer.i2c_address.eq(self.i2c_address.storage), + self.programmer.adpll.eq(self.adpll.storage), + self.adpll_busy.status.eq(self.programmer.busy), + self.nack.status.eq(self.programmer.nack) + ] + + # I2C with bitbang/gateware mode 
select + sda_t = TSTriple(1) + scl_t = TSTriple(1) + self.specials += [ + sda_t.get_tristate(pads.sda), + scl_t.get_tristate(pads.scl) + ] + + self.comb += [ + If(self.bitbang_enable.storage, + sda_t.oe.eq(self.sda_oe.storage), + sda_t.o.eq(self.sda_out.storage), + self.sda_in.status.eq(sda_t.i), + scl_t.oe.eq(self.scl_oe.storage), + scl_t.o.eq(self.scl_out.storage) + ).Else( + sda_t.oe.eq(~self.programmer.sda_o), + sda_t.o.eq(0), + self.programmer.sda_i.eq(sda_t.i), + scl_t.oe.eq(~self.programmer.scl), + scl_t.o.eq(0), + ) + ] \ No newline at end of file diff --git a/artiq/gateware/wrpll/wrpll.py b/artiq/gateware/wrpll/wrpll.py new file mode 100644 index 000000000..992f41612 --- /dev/null +++ b/artiq/gateware/wrpll/wrpll.py @@ -0,0 +1,236 @@ +from migen import * +from migen.genlib.cdc import MultiReg, AsyncResetSynchronizer, PulseSynchronizer +from misoc.interconnect.csr import * +from misoc.interconnect.csr_eventmanager import * + +from artiq.gateware.wrpll.ddmtd import DDMTDSampler, DDMTD +from artiq.gateware.wrpll.si549 import Si549 + +class FrequencyCounter(Module, AutoCSR): + def __init__(self, domains, counter_width=24): + self.update = CSR() + self.busy = CSRStatus() + + counter_reset = Signal() + counter_stb = Signal() + timer = Signal(counter_width) + + # # # + + fsm = FSM() + self.submodules += fsm + + fsm.act("IDLE", + counter_reset.eq(1), + If(self.update.re, + NextValue(timer, 2**counter_width - 1), + NextState("COUNTING") + ) + ) + fsm.act("COUNTING", + self.busy.status.eq(1), + If(timer != 0, + NextValue(timer, timer - 1) + ).Else( + counter_stb.eq(1), + NextState("IDLE") + ) + ) + + for domain in domains: + name = "counter_" + domain + counter_csr = CSRStatus(counter_width, name=name) + setattr(self, name, counter_csr) + + divider = Signal(2) + divided = Signal() + divided_sys = Signal() + divided_sys_r = Signal() + divided_tick = Signal() + counter = Signal(counter_width) + + # # # + + sync_domain = getattr(self.sync, domain) + sync_domain +=[ + divider.eq(divider + 1), + divided.eq(divider[-1]) + ] + self.specials += MultiReg(divided, divided_sys) + self.sync += divided_sys_r.eq(divided_sys) + self.comb += divided_tick.eq(divided_sys & ~divided_sys_r) + + self.sync += [ + If(counter_stb, counter_csr.status.eq(counter)), + If(divided_tick, counter.eq(counter + 1)), + If(counter_reset, counter.eq(0)) + ] + +class SkewTester(Module, AutoCSR): + def __init__(self, rx_synchronizer): + self.error = CSR() + + # # # + + # The RX synchronizer is tested for setup/hold violations by feeding it a + # toggling pattern and checking that the same toggling pattern comes out. 
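Review note (illustrative aside, not part of the patch): converting a FrequencyCounter readout above back to Hz. The gate is 2**24 sys cycles and each counted tick corresponds to four cycles of the measured domain (2-bit divider, rising edge detected in the sys domain); the 125 MHz sys clock is an assumption here:

    def counter_to_hz(count, f_sys=125e6, counter_width=24):
        gate_time = (2**counter_width) / f_sys     # ~134 ms at 125 MHz
        return count * 4 / gate_time

    # a domain also running at 125 MHz reads back about 2**24 // 4 == 4194304 counts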
+ toggle_in = Signal() + self.sync.rtio_rx0 += toggle_in.eq(~toggle_in) + toggle_out = rx_synchronizer.resync(toggle_in) + + toggle_out_expected = Signal() + self.sync += toggle_out_expected.eq(~toggle_out) + + error = Signal() + self.sync += [ + If(toggle_out != toggle_out_expected, error.eq(1)), + If(self.error.re, error.eq(0)) + ] + self.specials += MultiReg(error, self.error.w) + + +class WRPLL(Module, AutoCSR): + def __init__(self, platform, cd_ref, main_clk_se, COUNTER_BIT=32): + self.helper_reset = CSRStorage(reset=1) + self.ref_tag = CSRStatus(COUNTER_BIT) + self.main_tag = CSRStatus(COUNTER_BIT) + + ddmtd_counter = Signal(COUNTER_BIT) + + ref_tag_sys = Signal(COUNTER_BIT) + main_tag_sys = Signal(COUNTER_BIT) + ref_tag_stb_sys = Signal() + main_tag_stb_sys = Signal() + + # # # + + self.submodules.main_dcxo = Si549(platform.request("ddmtd_main_dcxo_i2c")) + self.submodules.helper_dcxo = Si549(platform.request("ddmtd_helper_dcxo_i2c")) + + helper_dcxo_pads = platform.request("ddmtd_helper_clk") + self.clock_domains.cd_helper = ClockDomain() + self.specials += [ + Instance("IBUFGDS", + i_I=helper_dcxo_pads.p, i_IB=helper_dcxo_pads.n, + o_O=self.cd_helper.clk), + AsyncResetSynchronizer(self.cd_helper, self.helper_reset.storage) + ] + + self.submodules.frequency_counter = FrequencyCounter(["sys", cd_ref.name]) + + self.submodules.ddmtd_sampler = DDMTDSampler(cd_ref, main_clk_se) + + self.sync.helper += ddmtd_counter.eq(ddmtd_counter + 1) + self.submodules.ddmtd_ref = DDMTD(ddmtd_counter, self.ddmtd_sampler.ref_beating) + self.submodules.ddmtd_main = DDMTD(ddmtd_counter, self.ddmtd_sampler.main_beating) + + # DDMTD tags collection + + self.specials += [ + MultiReg(self.ddmtd_ref.h_tag, ref_tag_sys), + MultiReg(self.ddmtd_main.h_tag, main_tag_sys) + ] + + ref_tag_stb_ps = PulseSynchronizer("helper", "sys") + main_tag_stb_ps = PulseSynchronizer("helper", "sys") + self.submodules += [ + ref_tag_stb_ps, + main_tag_stb_ps + ] + self.sync.helper += [ + ref_tag_stb_ps.i.eq(self.ddmtd_ref.h_tag_update), + main_tag_stb_ps.i.eq(self.ddmtd_main.h_tag_update) + ] + self.sync += [ + ref_tag_stb_sys.eq(ref_tag_stb_ps.o), + main_tag_stb_sys.eq(main_tag_stb_ps.o) + ] + + self.sync += [ + If(ref_tag_stb_sys, + self.ref_tag.status.eq(ref_tag_sys), + ), + If(main_tag_stb_sys, + self.main_tag.status.eq(main_tag_sys) + ) + ] + + # EventMangers for firmware interrupt + + self.submodules.ref_tag_ev = EventManager() + self.ref_tag_ev.stb = EventSourcePulse() + self.ref_tag_ev.finalize() + + self.submodules.main_tag_ev = EventManager() + self.main_tag_ev.stb = EventSourcePulse() + self.main_tag_ev.finalize() + + self.sync += [ + self.ref_tag_ev.stb.trigger.eq(ref_tag_stb_sys), + self.main_tag_ev.stb.trigger.eq(main_tag_stb_sys) + ] + + self.submodules.ev = SharedIRQ(self.ref_tag_ev, self.main_tag_ev) + +class FrequencyMultiplier(Module, AutoCSR): + def __init__(self, clkin): + clkin_se = Signal() + mmcm_locked = Signal() + mmcm_fb_clk = Signal() + ref_clk = Signal() + self.clock_domains.cd_ref = ClockDomain() + self.refclk_reset = CSRStorage(reset=1) + + self.mmcm_bypass = CSRStorage() + self.mmcm_locked = CSRStatus() + self.mmcm_reset = CSRStorage(reset=1) + + self.mmcm_daddr = CSRStorage(7) + self.mmcm_din = CSRStorage(16) + self.mmcm_dwen = CSRStorage() + self.mmcm_den = CSRStorage() + self.mmcm_dclk = CSRStorage() + self.mmcm_dout = CSRStatus(16) + self.mmcm_dready = CSRStatus() + + # # # + + self.specials += [ + Instance("IBUFDS", + i_I=clkin.p, i_IB=clkin.n, + o_O=clkin_se), + # MMCME2 is capable to accept 
10MHz input while PLLE2 only support down to 19MHz input (DS191) + # The MMCME2 can be reconfiged during runtime using the Dynamic Reconfiguration Ports + Instance("MMCME2_ADV", + p_BANDWIDTH="HIGH", # lower output jitter (see https://support.xilinx.com/s/question/0D52E00006iHqRqSAK) + o_LOCKED=self.mmcm_locked.status, + i_RST=self.mmcm_reset.storage, + + p_CLKIN1_PERIOD=8, # ns + i_CLKIN1=clkin_se, + i_CLKINSEL=1, # 1=CLKIN1 0=CLKIN2 + + # VCO @ 1.25GHz + p_CLKFBOUT_MULT_F=10, p_DIVCLK_DIVIDE=1, + i_CLKFBIN=mmcm_fb_clk, o_CLKFBOUT=mmcm_fb_clk, + + # 125MHz for WRPLL + p_CLKOUT0_DIVIDE_F=10, p_CLKOUT0_PHASE=0.0, o_CLKOUT0=ref_clk, + + # Dynamic Reconfiguration Ports + i_DADDR = self.mmcm_daddr.storage, + i_DI = self.mmcm_din.storage, + i_DWE = self.mmcm_dwen.storage, + i_DEN = self.mmcm_den.storage, + i_DCLK = self.mmcm_dclk.storage, + o_DO = self.mmcm_dout.status, + o_DRDY = self.mmcm_dready.status + ), + Instance("BUFGMUX", + i_I0=ref_clk, + i_I1=clkin_se, + i_S=self.mmcm_bypass.storage, + o_O=self.cd_ref.clk + ), + AsyncResetSynchronizer(self.cd_ref, self.refclk_reset.storage), + ] diff --git a/artiq/gui/applets.py b/artiq/gui/applets.py index 411075ec6..9609fbac0 100644 --- a/artiq/gui/applets.py +++ b/artiq/gui/applets.py @@ -7,6 +7,7 @@ import os import subprocess from functools import partial from itertools import count +from types import SimpleNamespace from PyQt5 import QtCore, QtGui, QtWidgets @@ -14,43 +15,16 @@ from sipyco.pipe_ipc import AsyncioParentComm from sipyco.logging_tools import LogParser from sipyco import pyon -from artiq.gui.entries import procdesc_to_entry -from artiq.gui.tools import (QDockWidgetCloseDetect, LayoutWidget, - WheelFilter) +from artiq.gui.entries import procdesc_to_entry, EntryTreeWidget +from artiq.gui.tools import QDockWidgetCloseDetect, LayoutWidget logger = logging.getLogger(__name__) -class EntryArea(QtWidgets.QTreeWidget): + +class EntryArea(EntryTreeWidget): def __init__(self): - QtWidgets.QTreeWidget.__init__(self) - self.setColumnCount(3) - self.header().setStretchLastSection(False) - if hasattr(self.header(), "setSectionResizeMode"): - set_resize_mode = self.header().setSectionResizeMode - else: - set_resize_mode = self.header().setResizeMode - set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents) - set_resize_mode(1, QtWidgets.QHeaderView.Stretch) - self.header().setVisible(False) - self.setSelectionMode(self.NoSelection) - self.setHorizontalScrollMode(self.ScrollPerPixel) - self.setVerticalScrollMode(self.ScrollPerPixel) - - self.setStyleSheet("QTreeWidget {background: " + - self.palette().midlight().color().name() + " ;}") - - self.viewport().installEventFilter(WheelFilter(self.viewport(), True)) - - self._groups = dict() - self._arg_to_widgets = dict() - self._arguments = dict() - - self.gradient = QtGui.QLinearGradient( - 0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing()*2.5) - self.gradient.setColorAt(0, self.palette().base().color()) - self.gradient.setColorAt(1, self.palette().midlight().color()) - + EntryTreeWidget.__init__(self) reset_all_button = QtWidgets.QPushButton("Restore defaults") reset_all_button.setToolTip("Reset all to default values") reset_all_button.setIcon( @@ -62,125 +36,57 @@ class EntryArea(QtWidgets.QTreeWidget): buttons.layout.setColumnStretch(1, 0) buttons.layout.setColumnStretch(2, 1) buttons.addWidget(reset_all_button, 0, 1) - self.bottom_item = QtWidgets.QTreeWidgetItem() - self.addTopLevelItem(self.bottom_item) self.setItemWidget(self.bottom_item, 1, buttons) - self.bottom_item.setHidden(True) - 
- def setattr_argument(self, name, proc, group=None, tooltip=None): + self._processors = dict() + + def setattr_argument(self, key, processor, group=None, tooltip=None): argument = dict() - desc = proc.describe() + desc = processor.describe() + self._processors[key] = processor argument["desc"] = desc argument["group"] = group argument["tooltip"] = tooltip - self._arguments[name] = argument - widgets = dict() - self._arg_to_widgets[name] = widgets - entry_class = procdesc_to_entry(argument["desc"]) - argument["state"] = entry_class.default_state(argument["desc"]) - entry = entry_class(argument) - widget_item = QtWidgets.QTreeWidgetItem([name]) - if argument["tooltip"]: - widget_item.setToolTip(0, argument["tooltip"]) - widgets["entry"] = entry - widgets["widget_item"] = widget_item + self.set_argument(key, argument) - if len(self._arguments) > 1: - self.bottom_item.setHidden(False) + def __getattr__(self, key): + return self.get_value(key) - for col in range(3): - widget_item.setBackground(col, self.gradient) - font = widget_item.font(0) - font.setBold(True) - widget_item.setFont(0, font) + def get_value(self, key): + entry = self._arg_to_widgets[key]["entry"] + argument = self._arguments[key] + processor = self._processors[key] + return processor.process(entry.state_to_value(argument["state"])) - if argument["group"] is None: - self.insertTopLevelItem(self.indexFromItem(self.bottom_item).row(), widget_item) - else: - self._get_group(argument["group"]).addChild(widget_item) - self.bottom_item.setHidden(False) - fix_layout = LayoutWidget() - widgets["fix_layout"] = fix_layout - fix_layout.addWidget(entry) - self.setItemWidget(widget_item, 1, fix_layout) - - reset_value = QtWidgets.QToolButton() - reset_value.setToolTip("Reset to default value") - reset_value.setIcon( - QtWidgets.QApplication.style().standardIcon( - QtWidgets.QStyle.SP_BrowserReload)) - reset_value.clicked.connect(partial(self.reset_value, name)) - - tool_buttons = LayoutWidget() - tool_buttons.addWidget(reset_value, 0) - self.setItemWidget(widget_item, 2, tool_buttons) - - def _get_group(self, name): - if name in self._groups: - return self._groups[name] - group = QtWidgets.QTreeWidgetItem([name]) - for col in range(3): - group.setBackground(col, self.palette().mid()) - group.setForeground(col, self.palette().brightText()) - font = group.font(col) - font.setBold(True) - group.setFont(col, font) - self.insertTopLevelItem(self.indexFromItem(self.bottom_item).row(), group) - self._groups[name] = group - return group - - def __getattr__(self, name): - return self.get_value(name) - - def get_value(self, name): - entry = self._arg_to_widgets[name]["entry"] - argument = self._arguments[name] - return entry.state_to_value(argument["state"]) - - def set_value(self, name, value): - ty = self._arguments[name]["desc"]["ty"] + def set_value(self, key, value): + ty = self._arguments[key]["desc"]["ty"] if ty == "Scannable": desc = value.describe() - self._arguments[name]["state"][desc["ty"]] = desc - self._arguments[name]["state"]["selected"] = desc["ty"] + self._arguments[key]["state"][desc["ty"]] = desc + self._arguments[key]["state"]["selected"] = desc["ty"] else: - self._arguments[name]["state"] = value - self.update_value(name) + self._arguments[key]["state"] = value + self.update_value(key) def get_values(self): - d = dict() - for name in self._arguments.keys(): - d[name] = self.get_value(name) + d = SimpleNamespace() + for key in self._arguments.keys(): + setattr(d, key, self.get_value(key)) return d def set_values(self, values): - 
for name, value in values.items(): - self.set_value(name, value) + for key, value in values.items(): + self.set_value(key, value) - def update_value(self, name): - widgets = self._arg_to_widgets[name] - argument = self._arguments[name] + def update_value(self, key): + argument = self._arguments[key] + self.update_argument(key, argument) - # Qt needs a setItemWidget() to handle layout correctly, - # simply replacing the entry inside the LayoutWidget - # results in a bug. - - widgets["entry"].deleteLater() - widgets["entry"] = procdesc_to_entry(argument["desc"])(argument) - widgets["fix_layout"].deleteLater() - widgets["fix_layout"] = LayoutWidget() - widgets["fix_layout"].addWidget(widgets["entry"]) - self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"]) - self.updateGeometries() - - def reset_value(self, name): - procdesc = self._arguments[name]["desc"] - self._arguments[name]["state"] = procdesc_to_entry(procdesc).default_state(procdesc) - self.update_value(name) + def reset_value(self, key): + self.reset_entry(key) def reset_all(self): - for name in self._arguments.keys(): - self.reset_value(name) + for key in self._arguments.keys(): + self.reset_entry(key) class AppletIPCServer(AsyncioParentComm): @@ -255,7 +161,7 @@ class AppletIPCServer(AsyncioParentComm): elif action == "update_dataset": await self.dataset_ctl.update(obj["mod"]) elif action == "set_argument_value": - self.expmgr.set_argument_value(obj["expurl"], obj["name"], obj["value"]) + self.expmgr.set_argument_value(obj["expurl"], obj["key"], obj["value"]) else: raise ValueError("unknown action in applet message") except: diff --git a/artiq/gui/dndwidgets.py b/artiq/gui/dndwidgets.py new file mode 100644 index 000000000..1f7b3fc07 --- /dev/null +++ b/artiq/gui/dndwidgets.py @@ -0,0 +1,164 @@ +from PyQt5 import QtCore, QtWidgets, QtGui + +from artiq.gui.flowlayout import FlowLayout + + +class VDragDropSplitter(QtWidgets.QSplitter): + dropped = QtCore.pyqtSignal(int, int) + + def __init__(self, parent): + QtWidgets.QSplitter.__init__(self, parent=parent) + self.setAcceptDrops(True) + self.setContentsMargins(0, 0, 0, 0) + self.setOrientation(QtCore.Qt.Vertical) + self.setChildrenCollapsible(False) + + def resetSizes(self): + self.setSizes(self.count() * [1]) + + def dragEnterEvent(self, e): + e.accept() + + def dragLeaveEvent(self, e): + self.setRubberBand(-1) + e.accept() + + def dragMoveEvent(self, e): + pos = e.pos() + src = e.source() + src_i = self.indexOf(src) + self.setRubberBand(self.height()) + # case 0: smaller than source widget + if pos.y() < src.y(): + for n in range(src_i): + w = self.widget(n) + if pos.y() < w.y() + w.size().height(): + self.setRubberBand(w.y()) + break + # case 2: greater than source widget + elif pos.y() > src.y() + src.size().height(): + for n in range(src_i + 1, self.count()): + w = self.widget(n) + if pos.y() < w.y(): + self.setRubberBand(w.y()) + break + else: + self.setRubberBand(-1) + e.accept() + + def dropEvent(self, e): + self.setRubberBand(-1) + pos = e.pos() + src = e.source() + src_i = self.indexOf(src) + for n in range(self.count()): + w = self.widget(n) + if pos.y() < w.y() + w.size().height(): + self.dropped.emit(src_i, n) + break + e.accept() + + +# Scroll area with auto-scroll on vertical drag +class VDragScrollArea(QtWidgets.QScrollArea): + def __init__(self, parent): + QtWidgets.QScrollArea.__init__(self, parent) + self.installEventFilter(self) + self._margin = 40 + self._timer = QtCore.QTimer(self) + self._timer.setInterval(20) + 
self._timer.timeout.connect(self._on_auto_scroll) + self._direction = 0 + self._speed = 10 + + def setAutoScrollMargin(self, margin): + self._margin = margin + + def setAutoScrollSpeed(self, speed): + self._speed = speed + + def eventFilter(self, obj, e): + if e.type() == QtCore.QEvent.DragMove: + val = self.verticalScrollBar().value() + height = self.viewport().height() + y = e.pos().y() + self._direction = 0 + if y < val + self._margin: + self._direction = -1 + elif y > height + val - self._margin: + self._direction = 1 + if not self._timer.isActive(): + self._timer.start() + elif e.type() in (QtCore.QEvent.Drop, QtCore.QEvent.DragLeave): + self._timer.stop() + return False + + def _on_auto_scroll(self): + val = self.verticalScrollBar().value() + min_ = self.verticalScrollBar().minimum() + max_ = self.verticalScrollBar().maximum() + dy = self._direction * self._speed + new_val = min(max_, max(min_, val + dy)) + self.verticalScrollBar().setValue(new_val) + + +# Widget with FlowLayout and drag and drop support between widgets +class DragDropFlowLayoutWidget(QtWidgets.QWidget): + def __init__(self): + QtWidgets.QWidget.__init__(self) + self.layout = FlowLayout() + self.setLayout(self.layout) + self.setAcceptDrops(True) + + def _get_index(self, pos): + for i in range(self.layout.count()): + if self.itemAt(i).geometry().contains(pos): + return i + return -1 + + def mousePressEvent(self, event): + if event.buttons() == QtCore.Qt.LeftButton \ + and event.modifiers() == QtCore.Qt.ShiftModifier: + index = self._get_index(event.pos()) + if index == -1: + return + drag = QtGui.QDrag(self) + mime = QtCore.QMimeData() + mime.setData("index", str(index).encode()) + drag.setMimeData(mime) + pixmapi = QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_FileIcon) + drag.setPixmap(pixmapi.pixmap(32)) + drag.exec_(QtCore.Qt.MoveAction) + event.accept() + + def dragEnterEvent(self, event): + event.accept() + + def dropEvent(self, event): + index = self._get_index(event.pos()) + source_layout = event.source() + source_index = int(bytes(event.mimeData().data("index")).decode()) + if source_layout == self: + if index == source_index: + return + widget = self.layout.itemAt(source_index).widget() + self.layout.removeWidget(widget) + self.layout.addWidget(widget) + self.layout.itemList.insert(index, self.layout.itemList.pop()) + else: + widget = source_layout.layout.itemAt(source_index).widget() + source_layout.layout.removeWidget(widget) + self.layout.addWidget(widget) + if index != -1: + self.layout.itemList.insert(index, self.layout.itemList.pop()) + event.accept() + + def addWidget(self, widget): + self.layout.addWidget(widget) + + def count(self): + return self.layout.count() + + def itemAt(self, i): + return self.layout.itemAt(i) diff --git a/artiq/gui/entries.py b/artiq/gui/entries.py index 8b9bf4788..43daa7a5b 100644 --- a/artiq/gui/entries.py +++ b/artiq/gui/entries.py @@ -1,9 +1,10 @@ import logging from collections import OrderedDict +from functools import partial from PyQt5 import QtCore, QtGui, QtWidgets -from artiq.gui.tools import LayoutWidget, disable_scroll_wheel +from artiq.gui.tools import LayoutWidget, disable_scroll_wheel, WheelFilter from artiq.gui.scanwidget import ScanWidget from artiq.gui.scientific_spinbox import ScientificSpinBox @@ -11,6 +12,157 @@ from artiq.gui.scientific_spinbox import ScientificSpinBox logger = logging.getLogger(__name__) +class EntryTreeWidget(QtWidgets.QTreeWidget): + quickStyleClicked = QtCore.pyqtSignal() + + def __init__(self): + 
QtWidgets.QTreeWidget.__init__(self) + self.setColumnCount(3) + self.header().setStretchLastSection(False) + if hasattr(self.header(), "setSectionResizeMode"): + set_resize_mode = self.header().setSectionResizeMode + else: + set_resize_mode = self.header().setResizeMode + set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents) + set_resize_mode(1, QtWidgets.QHeaderView.Stretch) + set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents) + self.header().setVisible(False) + self.setSelectionMode(self.NoSelection) + self.setHorizontalScrollMode(self.ScrollPerPixel) + self.setVerticalScrollMode(self.ScrollPerPixel) + + self.setStyleSheet("QTreeWidget {background: " + + self.palette().midlight().color().name() + " ;}") + + self.viewport().installEventFilter(WheelFilter(self.viewport(), True)) + + self._groups = dict() + self._arg_to_widgets = dict() + self._arguments = dict() + + self.gradient = QtGui.QLinearGradient( + 0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing() * 2.5) + self.gradient.setColorAt(0, self.palette().base().color()) + self.gradient.setColorAt(1, self.palette().midlight().color()) + + self.bottom_item = QtWidgets.QTreeWidgetItem() + self.addTopLevelItem(self.bottom_item) + + def set_argument(self, key, argument): + self._arguments[key] = argument + widgets = dict() + self._arg_to_widgets[key] = widgets + entry_class = procdesc_to_entry(argument["desc"]) + argument["state"] = entry_class.default_state(argument["desc"]) + entry = entry_class(argument) + if argument["desc"].get("quickstyle"): + entry.quickStyleClicked.connect(self.quickStyleClicked) + widget_item = QtWidgets.QTreeWidgetItem([key]) + if argument["tooltip"]: + widget_item.setToolTip(0, argument["tooltip"]) + widgets["entry"] = entry + widgets["widget_item"] = widget_item + + for col in range(3): + widget_item.setBackground(col, self.gradient) + font = widget_item.font(0) + font.setBold(True) + widget_item.setFont(0, font) + + if argument["group"] is None: + self.insertTopLevelItem(self.indexFromItem(self.bottom_item).row(), widget_item) + else: + self._get_group(argument["group"]).addChild(widget_item) + fix_layout = LayoutWidget() + widgets["fix_layout"] = fix_layout + fix_layout.addWidget(entry) + self.setItemWidget(widget_item, 1, fix_layout) + + reset_entry = QtWidgets.QToolButton() + reset_entry.setToolTip("Reset to default value") + reset_entry.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_BrowserReload)) + reset_entry.clicked.connect(partial(self.reset_entry, key)) + + disable_other_scans = QtWidgets.QToolButton() + widgets["disable_other_scans"] = disable_other_scans + disable_other_scans.setIcon( + QtWidgets.QApplication.style().standardIcon( + QtWidgets.QStyle.SP_DialogResetButton)) + disable_other_scans.setToolTip("Disable other scans") + disable_other_scans.clicked.connect( + partial(self._disable_other_scans, key)) + if not isinstance(entry, ScanEntry): + disable_other_scans.setVisible(False) + + tool_buttons = LayoutWidget() + tool_buttons.layout.setRowStretch(0, 1) + tool_buttons.layout.setRowStretch(3, 1) + tool_buttons.addWidget(reset_entry, 1) + tool_buttons.addWidget(disable_other_scans, 2) + self.setItemWidget(widget_item, 2, tool_buttons) + + def _get_group(self, key): + if key in self._groups: + return self._groups[key] + group = QtWidgets.QTreeWidgetItem([key]) + for col in range(3): + group.setBackground(col, self.palette().mid()) + group.setForeground(col, self.palette().brightText()) + font = group.font(col) + font.setBold(True) + 
group.setFont(col, font) + self.insertTopLevelItem(self.indexFromItem(self.bottom_item).row(), group) + self._groups[key] = group + return group + + def _disable_other_scans(self, current_key): + for key, widgets in self._arg_to_widgets.items(): + if (key != current_key and isinstance(widgets["entry"], ScanEntry)): + widgets["entry"].disable() + + def update_argument(self, key, argument): + widgets = self._arg_to_widgets[key] + + # Qt needs a setItemWidget() to handle layout correctly, + # simply replacing the entry inside the LayoutWidget + # results in a bug. + + widgets["entry"].deleteLater() + widgets["entry"] = procdesc_to_entry(argument["desc"])(argument) + widgets["disable_other_scans"].setVisible( + isinstance(widgets["entry"], ScanEntry)) + widgets["fix_layout"].deleteLater() + widgets["fix_layout"] = LayoutWidget() + widgets["fix_layout"].addWidget(widgets["entry"]) + self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"]) + self.updateGeometries() + + def reset_entry(self, key): + procdesc = self._arguments[key]["desc"] + self._arguments[key]["state"] = procdesc_to_entry(procdesc).default_state(procdesc) + self.update_argument(key, self._arguments[key]) + + def save_state(self): + expanded = [] + for k, v in self._groups.items(): + if v.isExpanded(): + expanded.append(k) + return { + "expanded": expanded, + "scroll": self.verticalScrollBar().value() + } + + def restore_state(self, state): + for e in state["expanded"]: + try: + self._groups[e].setExpanded(True) + except KeyError: + pass + self.verticalScrollBar().setValue(state["scroll"]) + + class StringEntry(QtWidgets.QLineEdit): def __init__(self, argument): QtWidgets.QLineEdit.__init__(self) @@ -45,17 +197,38 @@ class BooleanEntry(QtWidgets.QCheckBox): return procdesc.get("default", False) -class EnumerationEntry(QtWidgets.QComboBox): +class EnumerationEntry(QtWidgets.QWidget): + quickStyleClicked = QtCore.pyqtSignal() + def __init__(self, argument): - QtWidgets.QComboBox.__init__(self) - disable_scroll_wheel(self) - choices = argument["desc"]["choices"] - self.addItems(choices) - idx = choices.index(argument["state"]) - self.setCurrentIndex(idx) - def update(index): - argument["state"] = choices[index] - self.currentIndexChanged.connect(update) + QtWidgets.QWidget.__init__(self) + layout = QtWidgets.QHBoxLayout() + self.setLayout(layout) + procdesc = argument["desc"] + choices = procdesc["choices"] + if procdesc["quickstyle"]: + self.btn_group = QtWidgets.QButtonGroup() + for i, choice in enumerate(choices): + button = QtWidgets.QPushButton(choice) + self.btn_group.addButton(button) + self.btn_group.setId(button, i) + layout.addWidget(button) + + def submit(index): + argument["state"] = choices[index] + self.quickStyleClicked.emit() + self.btn_group.idClicked.connect(submit) + else: + self.combo_box = QtWidgets.QComboBox() + disable_scroll_wheel(self.combo_box) + self.combo_box.addItems(choices) + idx = choices.index(argument["state"]) + self.combo_box.setCurrentIndex(idx) + layout.addWidget(self.combo_box) + + def update(index): + argument["state"] = choices[index] + self.combo_box.currentIndexChanged.connect(update) @staticmethod def state_to_value(state): diff --git a/artiq/gui/logo_ver.svg b/artiq/gui/logo_ver.svg index 4ebd46eed..b98176a18 100644 --- a/artiq/gui/logo_ver.svg +++ b/artiq/gui/logo_ver.svg @@ -1,7 +1,7 @@ 8 - + aria-label="7"> diff --git a/artiq/gui/tools.py b/artiq/gui/tools.py index 441da9b81..957823780 100644 --- a/artiq/gui/tools.py +++ b/artiq/gui/tools.py @@ -4,6 +4,40 @@ import logging 
from PyQt5 import QtCore, QtWidgets +class DoubleClickLineEdit(QtWidgets.QLineEdit): + finished = QtCore.pyqtSignal() + + def __init__(self, init): + QtWidgets.QLineEdit.__init__(self, init) + self.setFrame(False) + self.setReadOnly(True) + self.returnPressed.connect(self._return_pressed) + self.editingFinished.connect(self._editing_finished) + self._text = init + + def mouseDoubleClickEvent(self, event): + if self.isReadOnly(): + self.setReadOnly(False) + self.setFrame(True) + QtWidgets.QLineEdit.mouseDoubleClickEvent(self, event) + + def _return_pressed(self): + self._text = self.text() + + def _editing_finished(self): + self.setReadOnly(True) + self.setFrame(False) + self.setText(self._text) + self.finished.emit() + + def keyPressEvent(self, event): + key = event.key() + if key == QtCore.Qt.Key_Escape and not self.isReadOnly(): + self.editingFinished.emit() + else: + QtWidgets.QLineEdit.keyPressEvent(self, event) + + def log_level_to_name(level): if level >= logging.CRITICAL: return "CRITICAL" @@ -69,6 +103,23 @@ async def get_open_file_name(parent, caption, dir, filter): return await fut +async def get_save_file_name(parent, caption, dir, filter, suffix=None): + """like QtWidgets.QFileDialog.getSaveFileName(), but a coroutine""" + dialog = QtWidgets.QFileDialog(parent, caption, dir, filter) + dialog.setFileMode(dialog.AnyFile) + dialog.setAcceptMode(dialog.AcceptSave) + if suffix is not None: + dialog.setDefaultSuffix(suffix) + fut = asyncio.Future() + + def on_accept(): + fut.set_result(dialog.selectedFiles()[0]) + dialog.accepted.connect(on_accept) + dialog.rejected.connect(fut.cancel) + dialog.open() + return await fut + + # Based on: # http://stackoverflow.com/questions/250890/using-qsortfilterproxymodel-with-a-tree-model class QRecursiveFilterProxyModel(QtCore.QSortFilterProxyModel): diff --git a/artiq/language/environment.py b/artiq/language/environment.py index ba42b2791..a80701c78 100644 --- a/artiq/language/environment.py +++ b/artiq/language/environment.py @@ -1,5 +1,7 @@ from collections import OrderedDict from inspect import isclass +from contextlib import contextmanager +from types import SimpleNamespace from sipyco import pyon @@ -10,7 +12,8 @@ from artiq.language.core import rpc __all__ = ["NoDefault", "DefaultMissing", "PYONValue", "BooleanValue", "EnumerationValue", "NumberValue", "StringValue", - "HasEnvironment", "Experiment", "EnvExperiment"] + "HasEnvironment", "Experiment", "EnvExperiment", + "CancelledArgsError"] class NoDefault: @@ -24,6 +27,12 @@ class DefaultMissing(Exception): pass +class CancelledArgsError(Exception): + """Raised by the ``interactive`` context manager when an interactive + arguments request is cancelled.""" + pass + + class _SimpleArgProcessor: def __init__(self, default=NoDefault): # If default is a list, it means multiple defaults are specified, with @@ -80,9 +89,12 @@ class EnumerationValue(_SimpleArgProcessor): :param choices: A list of string representing the possible values of the argument. + :param quickstyle: Enables the choices to be displayed in the GUI as a + list of buttons that submit the experiment when clicked. 
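+
+    A minimal usage sketch (the argument name and choices are illustrative)::
+
+        self.setattr_argument("mode",
+            EnumerationValue(["Run", "Calibrate"], quickstyle=True))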
""" - def __init__(self, choices, default=NoDefault): + def __init__(self, choices, default=NoDefault, quickstyle=False): self.choices = choices + self.quickstyle = quickstyle super().__init__(default) def process(self, x): @@ -93,6 +105,7 @@ class EnumerationValue(_SimpleArgProcessor): def describe(self): d = _SimpleArgProcessor.describe(self) d["choices"] = self.choices + d["quickstyle"] = self.quickstyle return d @@ -212,6 +225,9 @@ class TraceArgumentManager: self.requested_args[key] = processor, group, tooltip return None + def get_interactive(self, interactive_arglist, title): + raise NotImplementedError + class ProcessArgumentManager: def __init__(self, unprocessed_arguments): @@ -233,6 +249,10 @@ class ProcessArgumentManager: raise AttributeError("Supplied argument(s) not queried in experiment: " + ", ".join(unprocessed)) + def get_interactive(self, interactive_arglist, title): + raise NotImplementedError + + class HasEnvironment: """Provides methods to manage the environment of an experiment (arguments, devices, datasets).""" @@ -322,6 +342,33 @@ class HasEnvironment: kernel_invariants = getattr(self, "kernel_invariants", set()) self.kernel_invariants = kernel_invariants | {key} + @contextmanager + def interactive(self, title=""): + """Request arguments from the user interactively. + + This context manager returns a namespace object on which the method + `setattr_argument` should be called, with the usual semantics. + + When the context manager terminates, the experiment is blocked + and the user is presented with the requested argument widgets. + After the user enters values, the experiment is resumed and + the namespace contains the values of the arguments. + + If the interactive arguments request is cancelled, raises + ``CancelledArgsError``.""" + interactive_arglist = [] + namespace = SimpleNamespace() + def setattr_argument(key, processor=None, group=None, tooltip=None): + interactive_arglist.append((key, processor, group, tooltip)) + namespace.setattr_argument = setattr_argument + yield namespace + del namespace.setattr_argument + argdict = self.__argument_mgr.get_interactive(interactive_arglist, title) + if argdict is None: + raise CancelledArgsError + for key, value in argdict.items(): + setattr(namespace, key, value) + def get_device_db(self): """Returns the full contents of the device database.""" return self.__device_mgr.get_device_db() diff --git a/artiq/master/databases.py b/artiq/master/databases.py index db5760d31..d889933d7 100644 --- a/artiq/master/databases.py +++ b/artiq/master/databases.py @@ -2,7 +2,8 @@ import asyncio import lmdb -from sipyco.sync_struct import Notifier, process_mod, ModAction, update_from_dict +from sipyco.sync_struct import (Notifier, process_mod, ModAction, + update_from_dict) from sipyco import pyon from sipyco.asyncio_tools import TaskObject @@ -36,6 +37,9 @@ class DeviceDB: desc = self.data.raw_view[desc] return desc + def get_satellite_cpu_target(self, destination): + return self.data.raw_view["satellite_cpu_targets"][destination] + class DatasetDB(TaskObject): def __init__(self, persist_file, autosave_period=30): @@ -57,12 +61,13 @@ class DatasetDB(TaskObject): def save(self): with self.lmdb.begin(write=True) as txn: for key in self.pending_keys: - if key not in self.data.raw_view or not self.data.raw_view[key][0]: + if (key not in self.data.raw_view + or not self.data.raw_view[key][0]): txn.delete(key.encode()) else: value_and_metadata = (self.data.raw_view[key][1], self.data.raw_view[key][2]) - txn.put(key.encode(), + 
txn.put(key.encode(), pyon.encode(value_and_metadata).encode()) self.pending_keys.clear() @@ -84,7 +89,8 @@ class DatasetDB(TaskObject): if mod["path"]: key = mod["path"][0] else: - assert(mod["action"] == ModAction.setitem.value or mod["action"] == ModAction.delitem.value) + assert (mod["action"] == ModAction.setitem.value + or mod["action"] == ModAction.delitem.value) key = mod["key"] self.pending_keys.add(key) process_mod(self.data, mod) @@ -108,3 +114,34 @@ class DatasetDB(TaskObject): del self.data[key] self.pending_keys.add(key) # + + +class InteractiveArgDB: + def __init__(self): + self.pending = Notifier(dict()) + self.futures = dict() + + async def get(self, rid, arglist_desc, title): + self.pending[rid] = {"title": title, "arglist_desc": arglist_desc} + self.futures[rid] = asyncio.get_running_loop().create_future() + try: + value = await self.futures[rid] + finally: + del self.pending[rid] + del self.futures[rid] + return value + + def supply(self, rid, values): + # quick sanity checks + if rid not in self.futures or self.futures[rid].done(): + raise ValueError("no experiment with this RID is " + "waiting for interactive arguments") + if {i[0] for i in self.pending.raw_view[rid]["arglist_desc"]} != set(values.keys()): + raise ValueError("supplied and requested keys do not match") + self.futures[rid].set_result(values) + + def cancel(self, rid): + if rid not in self.futures or self.futures[rid].done(): + raise ValueError("no experiment with this RID is " + "waiting for interactive arguments") + self.futures[rid].set_result(None) diff --git a/artiq/master/experiments.py b/artiq/master/experiments.py index df29f2a6b..935872fb0 100644 --- a/artiq/master/experiments.py +++ b/artiq/master/experiments.py @@ -111,7 +111,7 @@ class ExperimentDB: try: if new_cur_rev is None: new_cur_rev = self.repo_backend.get_head_rev() - wd, _ = self.repo_backend.request_rev(new_cur_rev) + wd, _, _ = self.repo_backend.request_rev(new_cur_rev) self.repo_backend.release_rev(self.cur_rev) self.cur_rev = new_cur_rev self.status["cur_rev"] = new_cur_rev @@ -132,7 +132,7 @@ class ExperimentDB: if use_repository: if revision is None: revision = self.cur_rev - wd, _ = self.repo_backend.request_rev(revision) + wd, _, revision = self.repo_backend.request_rev(revision) filename = os.path.join(wd, filename) worker = Worker(self.worker_handlers) try: @@ -169,7 +169,7 @@ class FilesystemBackend: return "N/A" def request_rev(self, rev): - return self.root, None + return self.root, None, "N/A" def release_rev(self, rev): pass @@ -200,14 +200,26 @@ class GitBackend: def get_head_rev(self): return str(self.git.head.target) + def _get_pinned_rev(self, rev): + """ + Resolve a git reference (e.g. 
"HEAD", "master", "abcdef123456...") into + a git hash + """ + commit, _ = self.git.resolve_refish(rev) + + logger.debug('Resolved git ref "%s" into "%s"', rev, commit.hex) + + return commit.hex + def request_rev(self, rev): + rev = self._get_pinned_rev(rev) if rev in self.checkouts: co = self.checkouts[rev] co.ref_count += 1 else: co = _GitCheckout(self.git, rev) self.checkouts[rev] = co - return co.path, co.message + return co.path, co.message, rev def release_rev(self, rev): co = self.checkouts[rev] diff --git a/artiq/master/scheduler.py b/artiq/master/scheduler.py index bc264964e..18e16f90d 100644 --- a/artiq/master/scheduler.py +++ b/artiq/master/scheduler.py @@ -132,15 +132,23 @@ class RunPool: writer.writerow([rid, start_time, expid["file"]]) def submit(self, expid, priority, due_date, flush, pipeline_name): + """ + Submits an experiment to be run by this pool + + If expid has the attribute `repo_rev`, treat it as a git revision or + reference and resolve into a unique git hash before submission + """ # mutates expid to insert head repository revision if None and # replaces relative path with the absolute one. # called through scheduler. rid = self.ridc.get() if "repo_rev" in expid: - if expid["repo_rev"] is None: - expid["repo_rev"] = self.experiment_db.cur_rev - wd, repo_msg = self.experiment_db.repo_backend.request_rev( - expid["repo_rev"]) + repo_rev_or_ref = expid["repo_rev"] or self.experiment_db.cur_rev + wd, repo_msg, repo_rev = self.experiment_db.repo_backend.request_rev(repo_rev_or_ref) + + # Mutate expid's repo_rev to that returned from request_rev, in case + # a branch was passed instead of a hash + expid["repo_rev"] = repo_rev else: if "file" in expid: expid["file"] = os.path.abspath(expid["file"]) @@ -148,7 +156,7 @@ class RunPool: run = Run(rid, pipeline_name, wd, expid, priority, due_date, flush, self, repo_msg=repo_msg) - if self.log_submissions is not None: + if self.log_submissions is not None: self.log_submission(rid, expid) self.runs[rid] = run self.state_changed.notify() @@ -231,7 +239,7 @@ class PrepareStage(TaskObject): try: await run.build() await run.prepare() - except: + except Exception: logger.error("got worker exception in prepare stage, " "deleting RID %d", run.rid) log_worker_exception() @@ -281,7 +289,7 @@ class RunStage(TaskObject): else: run.status = RunStatus.running completed = await run.run() - except: + except Exception: logger.error("got worker exception in run stage, " "deleting RID %d", run.rid) log_worker_exception() @@ -318,7 +326,7 @@ class AnalyzeStage(TaskObject): run.status = RunStatus.analyzing try: await run.analyze() - except: + except Exception: logger.error("got worker exception in analyze stage of RID %d.", run.rid) log_worker_exception() @@ -502,8 +510,7 @@ class Scheduler: """Returns ``True`` if termination is requested.""" for pipeline in self._pipelines.values(): if rid in pipeline.pool.runs: - run = pipeline.pool.runs[rid] + run = pipeline.pool.runs[rid] if run.termination_requested: return True return False - \ No newline at end of file diff --git a/artiq/master/worker.py b/artiq/master/worker.py index ca83dc7db..0353f9731 100644 --- a/artiq/master/worker.py +++ b/artiq/master/worker.py @@ -226,9 +226,15 @@ class Worker: else: func = self.handlers[action] try: - data = func(*obj["args"], **obj["kwargs"]) + if getattr(func, "_worker_pass_rid", False): + args = [self.rid] + list(obj["args"]) + else: + args = obj["args"] + data = func(*args, **obj["kwargs"]) + if asyncio.iscoroutine(data): + data = await data reply = 
{"status": "ok", "data": data} - except: + except Exception: reply = { "status": "failed", "exception": current_exc_packed() diff --git a/artiq/master/worker_db.py b/artiq/master/worker_db.py index 5da202b2f..af316e4cf 100644 --- a/artiq/master/worker_db.py +++ b/artiq/master/worker_db.py @@ -19,12 +19,13 @@ class DummyDevice: pass -def _create_device(desc, device_mgr): +def _create_device(desc, device_mgr, argument_overrides): ty = desc["type"] if ty == "local": module = importlib.import_module(desc["module"]) device_class = getattr(module, desc["class"]) - return device_class(device_mgr, **desc.get("arguments", {})) + arguments = desc.get("arguments", {}) | argument_overrides + return device_class(device_mgr, **arguments) elif ty == "controller": if desc.get("best_effort", False): cls = BestEffortClient @@ -60,6 +61,7 @@ class DeviceManager: self.ddb = ddb self.virtual_devices = virtual_devices self.active_devices = [] + self.devarg_override = {} def get_device_db(self): """Returns the full contents of the device database.""" @@ -85,13 +87,20 @@ class DeviceManager: return existing_dev try: - dev = _create_device(desc, self) + dev = _create_device(desc, self, self.devarg_override.get(name, {})) except Exception as e: raise DeviceError("Failed to create device '{}'" .format(name)) from e self.active_devices.append((desc, dev)) return dev + def notify_run_end(self): + """Sends a "end of Experiment run stage" notification to + all active devices.""" + for _desc, dev in self.active_devices: + if hasattr(dev, "notify_run_end"): + dev.notify_run_end() + def close_devices(self): """Closes all active devices, in the opposite order as they were requested.""" @@ -101,8 +110,9 @@ class DeviceManager: dev.close_rpc() elif hasattr(dev, "close"): dev.close() - except Exception as e: - logger.warning("Exception %r when closing device %r", e, dev) + except: + logger.warning("Exception raised when closing device %r:", + dev, exc_info=True) self.active_devices.clear() diff --git a/artiq/master/worker_impl.py b/artiq/master/worker_impl.py index defa37ab2..40afab2e0 100644 --- a/artiq/master/worker_impl.py +++ b/artiq/master/worker_impl.py @@ -73,6 +73,7 @@ class ParentDeviceDB: class ParentDatasetDB: get = make_parent_action("get_dataset") update = make_parent_action("update_dataset") + get_metadata = make_parent_action("get_dataset_metadata") class Watchdog: @@ -186,6 +187,10 @@ class ExamineDatasetMgr: def get(key, archive=False): return ParentDatasetDB.get(key) + @staticmethod + def get_metadata(key): + return ParentDatasetDB.get_metadata(key) + def examine(device_mgr, dataset_mgr, file): previous_keys = set(sys.modules.keys()) @@ -200,9 +205,7 @@ def examine(device_mgr, dataset_mgr, file): name = name[:-1] argument_mgr = TraceArgumentManager() scheduler_defaults = {} - cls = exp_class( # noqa: F841 (fill argument_mgr) - (device_mgr, dataset_mgr, argument_mgr, scheduler_defaults) - ) + exp_class((device_mgr, dataset_mgr, argument_mgr, scheduler_defaults)) arginfo = OrderedDict( (k, (proc.describe(), group, tooltip)) for k, (proc, group, tooltip) in argument_mgr.requested_args.items() @@ -217,6 +220,18 @@ def examine(device_mgr, dataset_mgr, file): del sys.modules[key] +class ArgumentManager(ProcessArgumentManager): + _get_interactive = make_parent_action("get_interactive_arguments") + + def get_interactive(self, interactive_arglist, title): + arglist_desc = [(k, p.describe(), g, t) + for k, p, g, t in interactive_arglist] + arguments = ArgumentManager._get_interactive(arglist_desc, title) + if arguments 
is not None: + for key, processor, _, _ in interactive_arglist: + arguments[key] = processor.process(arguments[key]) + return arguments + def put_completed(): put_object({"action": "completed"}) @@ -281,6 +296,8 @@ def main(): start_time = time.time() rid = obj["rid"] expid = obj["expid"] + if "devarg_override" in expid: + device_mgr.devarg_override = expid["devarg_override"] if "file" in expid: if obj["wd"] is not None: # Using repository @@ -296,11 +313,11 @@ def main(): rid, obj["pipeline_name"], expid, obj["priority"]) start_local_time = time.localtime(start_time) dirname = os.path.join("results", - time.strftime("%Y-%m-%d", start_local_time), - time.strftime("%H", start_local_time)) + time.strftime("%Y-%m-%d", start_local_time), + time.strftime("%H", start_local_time)) os.makedirs(dirname, exist_ok=True) os.chdir(dirname) - argument_mgr = ProcessArgumentManager(expid["arguments"]) + argument_mgr = ArgumentManager(expid["arguments"]) exp_inst = exp((device_mgr, dataset_mgr, argument_mgr, {})) argument_mgr.check_unprocessed_arguments() put_completed() @@ -316,6 +333,8 @@ def main(): # for end of analyze stage. write_results() raise + finally: + device_mgr.notify_run_end() put_completed() elif action == "analyze": try: diff --git a/artiq/test/coredevice/test_compile.py b/artiq/test/coredevice/test_compile.py index 0213ef0cf..a1a02bef5 100644 --- a/artiq/test/coredevice/test_compile.py +++ b/artiq/test/coredevice/test_compile.py @@ -39,9 +39,9 @@ class TestCompile(ExperimentCase): mgmt.clear_log() with tempfile.TemporaryDirectory() as tmp: db_path = os.path.join(artiq_root, "device_db.py") - subprocess.call([sys.executable, "-m", "artiq.frontend.artiq_compile", "--device-db", db_path, + subprocess.check_call([sys.executable, "-m", "artiq.frontend.artiq_compile", "--device-db", db_path, "-c", "CheckLog", "-o", os.path.join(tmp, "check_log.elf"), __file__]) - subprocess.call([sys.executable, "-m", "artiq.frontend.artiq_run", "--device-db", db_path, + subprocess.check_call([sys.executable, "-m", "artiq.frontend.artiq_run", "--device-db", db_path, os.path.join(tmp, "check_log.elf")]) log = mgmt.get_log() self.assertIn("test_artiq_compile", log) diff --git a/artiq/test/coredevice/test_embedding.py b/artiq/test/coredevice/test_embedding.py index cde3fa3dc..9883dc67b 100644 --- a/artiq/test/coredevice/test_embedding.py +++ b/artiq/test/coredevice/test_embedding.py @@ -574,3 +574,29 @@ class NumpyQuotingTest(ExperimentCase): def test_issue_1871(self): """Ensure numpy.array() does not break NumPy math functions""" self.create(_NumpyQuoting).run() + + +class _IntBoundary(EnvExperiment): + def build(self): + self.setattr_device("core") + self.int32_min = numpy.iinfo(numpy.int32).min + self.int32_max = numpy.iinfo(numpy.int32).max + self.int64_min = numpy.iinfo(numpy.int64).min + self.int64_max = numpy.iinfo(numpy.int64).max + + @kernel + def test_int32_bounds(self, min_val: TInt32, max_val: TInt32): + return min_val == self.int32_min and max_val == self.int32_max + + @kernel + def test_int64_bounds(self, min_val: TInt64, max_val: TInt64): + return min_val == self.int64_min and max_val == self.int64_max + + @kernel + def run(self): + self.test_int32_bounds(self.int32_min, self.int32_max) + self.test_int64_bounds(self.int64_min, self.int64_max) + +class IntBoundaryTest(ExperimentCase): + def test_int_boundary(self): + self.create(_IntBoundary).run() diff --git a/artiq/test/hardware_testbench.py b/artiq/test/hardware_testbench.py index ee58cfcdc..2e82f0916 100644 --- a/artiq/test/hardware_testbench.py 
+++ b/artiq/test/hardware_testbench.py @@ -56,6 +56,7 @@ class ExperimentCase(unittest.TestCase): try: exp = self.create(cls, *args, **kwargs) exp.run() + self.device_mgr.notify_run_end() exp.analyze() return exp except CompileError as error: diff --git a/artiq/test/lit/embedding/error_subkernel_annot.py b/artiq/test/lit/embedding/error_subkernel_annot.py new file mode 100644 index 000000000..3f4bc1c5c --- /dev/null +++ b/artiq/test/lit/embedding/error_subkernel_annot.py @@ -0,0 +1,15 @@ +# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t +# RUN: OutputCheck %s --file-to-check=%t + +from artiq.language.core import * +from artiq.language.types import * + +# CHECK-L: ${LINE:+2}: error: type annotation for argument 'x', '1', is not an ARTIQ type +@subkernel(destination=1) +def foo(x: 1) -> TNone: + pass + +@kernel +def entrypoint(): + # CHECK-L: ${LINE:+1}: note: in subkernel call here + foo() diff --git a/artiq/test/lit/embedding/error_subkernel_annot_return.py b/artiq/test/lit/embedding/error_subkernel_annot_return.py new file mode 100644 index 000000000..51977ae00 --- /dev/null +++ b/artiq/test/lit/embedding/error_subkernel_annot_return.py @@ -0,0 +1,15 @@ +# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t +# RUN: OutputCheck %s --file-to-check=%t + +from artiq.language.core import * +from artiq.language.types import * + +# CHECK-L: ${LINE:+2}: error: type annotation for return type, '1', is not an ARTIQ type +@subkernel(destination=1) +def foo() -> 1: + pass + +@kernel +def entrypoint(): + # CHECK-L: ${LINE:+1}: note: in subkernel call here + foo() diff --git a/artiq/test/lit/embedding/subkernel_message_recv.py b/artiq/test/lit/embedding/subkernel_message_recv.py new file mode 100644 index 000000000..35e094aa6 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_message_recv.py @@ -0,0 +1,22 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + message_pass() + # CHECK-NOT: call void @subkernel_send_message\(i32 ., i1 false, i8 1, i8 1, .*\), !dbg !. + # CHECK: call i8 @subkernel_await_message\(i32 2, i64 -1, { i8\*, i32 }\* nonnull .*, i8 1, i8 1\), !dbg !. + subkernel_recv("message", TInt32) + + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr +# CHECK-L: declare i8 @subkernel_await_message(i32, i64, { i8*, i32 }*, i8, i8) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_await_finish(i32, i64) local_unnamed_addr +@subkernel(destination=1) +def message_pass() -> TNone: + subkernel_send(0, "message", 15) diff --git a/artiq/test/lit/embedding/subkernel_message_send.py b/artiq/test/lit/embedding/subkernel_message_send.py new file mode 100644 index 000000000..3f0f77e35 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_message_send.py @@ -0,0 +1,20 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. 
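+    # calling the subkernel first loads and starts it on its destination
+    # (hence the subkernel_load_run call checked above)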
+ message_pass() + # CHECK: call void @subkernel_send_message\(i32 2, i1 false, i8 1, i8 1, .*\), !dbg !. + # CHECK-NOT: call i8 @subkernel_await_message\(i32 1, i64 10000, { i8\*, i32 }\* nonnull .*, i8 1, i8 1\), !dbg !. + subkernel_send(1, "message", 15) + + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr +@subkernel(destination=1) +def message_pass() -> TNone: + subkernel_recv("message", TInt32) diff --git a/artiq/test/lit/embedding/subkernel_no_arg.py b/artiq/test/lit/embedding/subkernel_no_arg.py new file mode 100644 index 000000000..605fbbfaa --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_no_arg.py @@ -0,0 +1,18 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !. + no_arg() + + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_send_message(i32, i1, i8, { i8*, i32 }*, i8**) local_unnamed_addr +@subkernel(destination=1) +def no_arg() -> TStr: + pass diff --git a/artiq/test/lit/embedding/subkernel_return.py b/artiq/test/lit/embedding/subkernel_return.py new file mode 100644 index 000000000..3c9d1169a --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_return.py @@ -0,0 +1,22 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !. + returning() + # CHECK: call i8 @subkernel_await_message\(i32 1, i64 -1, { i8\*, i32 }\* nonnull .*, i8 1, i8 1\), !dbg !. + # CHECK: call void @subkernel_await_finish\(i32 1, i64 -1\), !dbg !. + subkernel_await(returning) + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr +# CHECK-L: declare i8 @subkernel_await_message(i32, i64, { i8*, i32 }*, i8, i8) local_unnamed_addr +# CHECK-L: declare void @subkernel_await_finish(i32, i64) local_unnamed_addr +@subkernel(destination=1) +def returning() -> TInt32: + return 1 diff --git a/artiq/test/lit/embedding/subkernel_return_none.py b/artiq/test/lit/embedding/subkernel_return_none.py new file mode 100644 index 000000000..a7795f785 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_return_none.py @@ -0,0 +1,22 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !. + returning_none() + # CHECK: call void @subkernel_await_finish\(i32 1, i64 -1\), !dbg !. + # CHECK-NOT: call i8 @subkernel_await_message\(i32 1, i64 -1\, .*\), !dbg !. 
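+    # a subkernel returning TNone is only awaited for completion;
+    # no return-value message is retrieved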
+ subkernel_await(returning_none) + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_send_message(i32, i1, i8, { i8*, i32 }*, i8**) local_unnamed_addr +# CHECK-L: declare void @subkernel_await_finish(i32, i64) local_unnamed_addr +# CHECK-NOT-L: declare i8 @subkernel_await_message(i32, i64, { i8*, i32 }*, i8, i8) local_unnamed_addr +@subkernel(destination=1) +def returning_none() -> TNone: + pass diff --git a/artiq/test/lit/embedding/subkernel_self.py b/artiq/test/lit/embedding/subkernel_self.py new file mode 100644 index 000000000..7bf9cbafd --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_self.py @@ -0,0 +1,25 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +class A: + @subkernel(destination=1) + def sk(self): + pass + + @kernel + def kernel_entrypoint(self): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !. + self.sk() + +a = A() + +@kernel +def entrypoint(): + a.kernel_entrypoint() + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-NOT-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr diff --git a/artiq/test/lit/embedding/subkernel_self_args.py b/artiq/test/lit/embedding/subkernel_self_args.py new file mode 100644 index 000000000..5aebed2e9 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_self_args.py @@ -0,0 +1,25 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +class A: + @subkernel(destination=1) + def sk(self, a): + pass + + @kernel + def kernel_entrypoint(self): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK: call void @subkernel_send_message\(i32 1, i1 false, i8 1, i8 1, .*\), !dbg !. + self.sk(1) + +a = A() + +@kernel +def entrypoint(): + a.kernel_entrypoint() + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr diff --git a/artiq/test/lit/embedding/subkernel_with_arg.py b/artiq/test/lit/embedding/subkernel_with_arg.py new file mode 100644 index 000000000..114516586 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_with_arg.py @@ -0,0 +1,18 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK: call void @subkernel_send_message\(i32 ., i1 false, i8 1, i8 1, .*\), !dbg !. 
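+    # the positional argument is forwarded to the subkernel through a
+    # subkernel_send_message call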
+ accept_arg(1) + + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr +@subkernel(destination=1) +def accept_arg(arg: TInt32) -> TNone: + pass diff --git a/artiq/test/lit/embedding/subkernel_with_opt_arg.py b/artiq/test/lit/embedding/subkernel_with_opt_arg.py new file mode 100644 index 000000000..fb5cc3df1 --- /dev/null +++ b/artiq/test/lit/embedding/subkernel_with_opt_arg.py @@ -0,0 +1,21 @@ +# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s +# RUN: OutputCheck %s --file-to-check=%t.ll + +from artiq.language.core import * +from artiq.language.types import * + +@kernel +def entrypoint(): + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK: call void @subkernel_send_message\(i32 ., i1 false, i8 1, i8 1, .*\), !dbg !. + accept_arg(1) + # CHECK: call void @subkernel_load_run\(i32 1, i8 1, i1 true\), !dbg !. + # CHECK: call void @subkernel_send_message\(i32 ., i1 false, i8 1, i8 2, .*\), !dbg !. + accept_arg(1, 2) + + +# CHECK-L: declare void @subkernel_load_run(i32, i8, i1) local_unnamed_addr +# CHECK-L: declare void @subkernel_send_message(i32, i1, i8, i8, { i8*, i32 }*, i8**) local_unnamed_addr +@subkernel(destination=1) +def accept_arg(arg_a, arg_b=5) -> TNone: + pass diff --git a/artiq/test/lit/escape/error_numpy_array.py b/artiq/test/lit/escape/error_numpy_array.py new file mode 100644 index 000000000..1ebdeda81 --- /dev/null +++ b/artiq/test/lit/escape/error_numpy_array.py @@ -0,0 +1,15 @@ +# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t +# RUN: OutputCheck %s --file-to-check=%t + +from artiq.experiment import * +import numpy as np + +@kernel +def a(): + # CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever + # CHECK-L: ${LINE:+1}: note: ... to this point + return np.array([0, 1]) + +@kernel +def entrypoint(): + a() diff --git a/artiq/test/lit/escape/error_numpy_full.py b/artiq/test/lit/escape/error_numpy_full.py new file mode 100644 index 000000000..66158d8ca --- /dev/null +++ b/artiq/test/lit/escape/error_numpy_full.py @@ -0,0 +1,16 @@ +# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t +# RUN: OutputCheck %s --file-to-check=%t + +from artiq.experiment import * +import numpy as np + +@kernel +def a(): + # CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever + # CHECK-L: ${LINE:+1}: note: ... to this point + return np.full(10, 42.0) + + +@kernel +def entrypoint(): + a() diff --git a/artiq/test/lit/escape/error_numpy_transpose.py b/artiq/test/lit/escape/error_numpy_transpose.py new file mode 100644 index 000000000..e1dc32d51 --- /dev/null +++ b/artiq/test/lit/escape/error_numpy_transpose.py @@ -0,0 +1,17 @@ +# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t +# RUN: OutputCheck %s --file-to-check=%t + +from artiq.experiment import * +import numpy as np + +data = np.array([[0, 1], [2, 3]]) + +@kernel +def a(): + # CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever + # CHECK-L: ${LINE:+1}: note: ... 
to this point + return np.transpose(data) + +@kernel +def entrypoint(): + a() diff --git a/artiq/test/test_frontends.py b/artiq/test/test_frontends.py index caef4839c..3dc1a6b87 100644 --- a/artiq/test/test_frontends.py +++ b/artiq/test/test_frontends.py @@ -9,7 +9,7 @@ class TestFrontends(unittest.TestCase): """Test --help as a simple smoke test against catastrophic breakage.""" commands = { "aqctl": [ - "corelog", "moninj_proxy" + "corelog", "moninj_proxy", "coreanalyzer_proxy" ], "artiq": [ "client", "compile", "coreanalyzer", "coremgmt", diff --git a/artiq/tools.py b/artiq/tools.py index 35fa3bbb5..c051e104d 100644 --- a/artiq/tools.py +++ b/artiq/tools.py @@ -18,7 +18,9 @@ from artiq.language.environment import is_public_experiment from artiq.language import units -__all__ = ["parse_arguments", "elide", "scale_from_metadata", +__all__ = ["parse_arguments", + "parse_devarg_override", "unparse_devarg_override", + "elide", "scale_from_metadata", "short_format", "file_import", "get_experiment", "exc_to_warning", "asyncio_wait_or_cancel", @@ -36,6 +38,29 @@ def parse_arguments(arguments): return d +def parse_devarg_override(devarg_override): + devarg_override_dict = {} + for item in devarg_override.split(): + device, _, override = item.partition(":") + if not override: + raise ValueError + if device not in devarg_override_dict: + devarg_override_dict[device] = {} + argument, _, value = override.partition("=") + if not value: + raise ValueError + devarg_override_dict[device][argument] = pyon.decode(value) + return devarg_override_dict + + +def unparse_devarg_override(devarg_override): + devarg_override_strs = [ + "{}:{}={}".format(device, argument, pyon.encode(value)) + for device, overrides in devarg_override.items() + for argument, value in overrides.items()] + return " ".join(devarg_override_strs) + + def elide(s, maxlen): elided = False if len(s) > maxlen: diff --git a/doc/manual/compiler.rst b/doc/manual/compiler.rst index 0a72b38f3..88b57d814 100644 --- a/doc/manual/compiler.rst +++ b/doc/manual/compiler.rst @@ -79,6 +79,25 @@ interpreter. Empty lists do not have valid list element types, so they cannot be used in the kernel. +In kernels, lifetime of allocated values (e.g. lists or numpy arrays) might not be correctly +tracked across function calls (see `#1497 `_, +`#1677 `_) like in this example :: + + @kernel + def func(a): + return a + + class ProblemReturn1(EnvExperiment): + def build(self): + self.setattr_device("core") + + @kernel + def run(self): + # results in memory corruption + return func([1, 2, 3]) + +This results in memory corruption at runtime. + Asynchronous RPCs ----------------- diff --git a/doc/manual/conf.py b/doc/manual/conf.py index c004b4d85..3e5e83784 100644 --- a/doc/manual/conf.py +++ b/doc/manual/conf.py @@ -28,7 +28,9 @@ mock_modules = ["artiq.gui.waitingspinnerwidget", "artiq.gui.models", "artiq.compiler.module", "artiq.compiler.embedding", - "qasync", "pyqtgraph", "matplotlib", + "artiq.dashboard.waveform", + "artiq.dashboard.interactive_args", + "qasync", "pyqtgraph", "matplotlib", "lmdb", "numpy", "dateutil", "dateutil.parser", "prettytable", "PyQt5", "h5py", "serial", "scipy", "scipy.interpolate", "nac3artiq", @@ -71,6 +73,7 @@ extensions = [ 'sphinx.ext.napoleon', 'sphinxarg.ext', 'sphinxcontrib.wavedrom', # see also below for config + "sphinxcontrib.jquery", ] mathjax_path = "https://m-labs.hk/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML.js" @@ -89,7 +92,7 @@ master_doc = 'index' # General information about the project. 
project = 'ARTIQ' -copyright = '2014-2023, M-Labs Limited' +copyright = '2014-2024, M-Labs Limited' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the diff --git a/doc/manual/core_device.rst b/doc/manual/core_device.rst index 24e9768c0..764884f45 100644 --- a/doc/manual/core_device.rst +++ b/doc/manual/core_device.rst @@ -1,80 +1,53 @@ Core device =========== -The core device is a FPGA-based hardware component that contains a softcore CPU tightly coupled with the so-called RTIO core that provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler, and communicates with the core device peripherals (TTL, DDS, etc.) over the RTIO core. This architecture provides high timing resolution, low latency, low jitter, high level programming capabilities, and good integration with the rest of the Python experiment code. - -While it is possible to use all the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, many experiments require it. +The core device is a FPGA-based hardware component that contains a softcore or hardcore CPU tightly coupled with the so-called RTIO core, which runs in gateware and provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler and communicates with peripherals (TTL, DDS, etc.) through the RTIO core, as described in :ref:`artiq-real-time-i-o-concepts`. This architecture provides high timing resolution, low latency, low jitter, high-level programming capabilities, and good integration with the rest of the Python experiment code. +While it is possible to use the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, many experiments require it. .. _core-device-flash-storage: Flash storage -************* +^^^^^^^^^^^^^ -The core device contains some flash space that can be used to store configuration data. - -This storage area is used to store the core device MAC address, IP address and even the idle kernel. - -The flash storage area is one sector (typically 64 kB) large and is organized as a list of key-value records. - -This flash storage space can be accessed by using ``artiq_coremgmt`` (see: :ref:`core-device-management-tool`). +The core device contains some flash storage space which is used to store configuration data. It is one sector (typically 64 kB) large and organized as a list of key-value records, accessible by using ``artiq_coremgmt`` (see :ref:`core-device-management-tool`). The core device IP and MAC addresses, as well as, if present, the startup and/or idle kernels (see :ref:`miscellaneous_config_core_device`) are stored here. .. _board-ports: FPGA board ports -**************** +^^^^^^^^^^^^^^^^ All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging. -Kasli ------ +Kasli and Kasli SoC +^^^^^^^^^^^^^^^^^^^ -`Kasli `_ is a versatile core device designed for ARTIQ as part of the `Sinara `_ family of boards. All variants support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) connected directly to it. +`Kasli `_ and `Kasli-SoC `_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara `_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. 
Kasli-SoC, which runs on a separate `Zynq port `_ of the ARTIQ firmware, is architecturally separate, among other things being capable of performing much heavier software computations quickly locally to the board, but provides generally similar features to Kasli. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original Kasli v1.0 remains supported by ARTIQ. -Standalone variants -+++++++++++++++++++ +Kasli can be connected to the network using a 10000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC). -Kasli is connected to the network using a 1000Base-X SFP module. `No-name `_ BiDi (1000Base-BX) modules have been used successfully. The SFP module for the network should be installed into the SFP0 cage. -The other SFP cages are not used. - -The RTIO clock frequency is 125MHz or 150MHz, which is generated by the Si5324. - -DRTIO master variants -+++++++++++++++++++++ - -Kasli can be used as a DRTIO master that provides local RTIO channels and can additionally control one DRTIO satellite. - -The RTIO clock frequency is 125MHz or 150MHz, which is generated by the Si5324. The DRTIO line rate is 2.5Gbps or 3Gbps. - -As with the standalone configuration, the SFP module for the Ethernet network should be installed into the SFP0 cage. The DRTIO connections are on SFP1 and SFP2, and optionally on the SATA connector. - -DRTIO satellite/repeater variants -+++++++++++++++++++++++++++++++++ - -Kasli can be used as a DRTIO satellite with a 125MHz or 150MHz RTIO clock and a 2.5Gbps or 3Gbps DRTIO line rate. - -The DRTIO upstream connection is on SFP0 or optionally on the SATA connector, and the remaining SFPs are downstream ports. +The DRTIO line rate depends upon the RTIO clock frequency running, e.g., at 125MHz the line rate is 2.5Gbps, at 150MHz 3.0Gbps, etc. See below for information on RTIO clocks. KC705 ------ +^^^^^ An alternative target board for the ARTIQ core device is the KC705 development board from Xilinx. It supports the NIST CLOCK and QC2 hardware (FMC). Common problems -+++++++++++++++ +--------------- * The SW13 switches on the board need to be set to 00001. * When connected, the CLOCK adapter breaks the JTAG chain due to TDI not being connected to TDO on the FMC mezzanine. * On some boards, the JTAG USB connector is not correctly soldered. VADJ -++++ +---- With the NIST CLOCK and QC2 adapters, for safe operation of the DDS buses (to prevent damage to the IO banks of the FPGA), the FMC VADJ rail of the KC705 should be changed to 3.3V. Plug the Texas Instruments USB-TO-GPIO PMBus adapter into the PMBus connector in the corner of the KC705 and use the Fusion Digital Power Designer software to configure (requires Windows). Write to chip number U55 (address 52), channel 4, which is the VADJ rail, to make it 3.3V instead of 2.5V. Power cycle the KC705 board to check that the startup voltage on the VADJ rail is now 3.3V. NIST CLOCK -++++++++++ +---------- With the CLOCK hardware, the TTL lines are mapped as follows: @@ -118,7 +91,7 @@ The DDS bus is on channel 27. 
NIST QC2 -++++++++ +-------- With the QC2 hardware, the TTL lines are mapped as follows: @@ -161,25 +134,31 @@ To avoid I/O contention, the startup kernel should first program the TCA6424A ex See :mod:`artiq.coredevice.i2c` for more details. +.. _core-device-clocking: + Clocking -++++++++ +^^^^^^^^ -The KC705 in standalone variants supports an internal 125 MHz RTIO clock (based on its crystal oscillator, or external reference for PLL for DRTIO variants) and an external clock, that can be selected using the ``rtio_clock`` configuration entry. Valid values are: +The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If choosing the latter, external reference must be provided (via front panel SMA input on Kasli boards). Valid configuration options include: - * ``int_125`` - internal crystal oscillator, 125 MHz output (default), - * ``ext0_bypass`` - external clock. + * ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock, + * ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option), + * ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock. + * ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock, + * ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock, + * ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock, + * ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock. -KC705 in DRTIO variants and Kasli generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. Valid values are: +The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``. - * ``int_125`` - internal crystal oscillator using PLL, 125 MHz output (default), - * ``int_100`` - internal crystal oscillator using PLL, 100 MHz output, - * ``int_150`` - internal crystal oscillator using PLL, 150 MHz output, - * ``ext0_synth0_10to125`` - external 10 MHz reference using PLL, 125 MHz output, - * ``ext0_synth0_80to125`` - external 80 MHz reference using PLL, 125 MHz output, - * ``ext0_synth0_100to125`` - external 100 MHz reference using PLL, 125 MHz output, - * ``ext0_synth0_125to125`` - external 125 MHz reference using PLL, 125 MHz output, - * ``ext0_bypass``, ``ext0_bypass_125``, ``ext0_bypass_100`` - external clock - with explicit aliases available. +As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD `_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the JSON description file. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through ``awfs_client``, write to the helpdesk@ email. -The selected option can be observed in the core device boot logs. +If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM. 
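+
+For instance, a minimal sketch of checking and then changing this setting from the host (assuming the core device is already reachable over the network, whether through the device database or the ``-D`` option) might be: ::
+
+    $ artiq_coremgmt config read rtio_clock
+    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125
+
+The core device needs to be rebooted for a new clock setting to take effect.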
-Options ``rtio_clock=int_XXX`` and ``rtio_clock=ext0_synth0_XXXXX`` generate the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference (depending on exact option). ``rtio_clock=ext0_bypass`` bypasses that PLL and the user must supply the RTIO clock (typically 125 MHz) at the Kasli front panel SMA input. Bypassing the PLL ensures the skews between input clock, Kasli downstream clock outputs, and RTIO clock are deterministic accross reboots of the system. This is useful when phase determinism is required in situtations where the reference clock fans out to other devices before reaching Kasli. +If not using WRPLL, PLL can also be bypassed entirely with the options + + * ``ext0_bypass`` (input clock used directly) + * ``ext0_bypass_125`` (explicit alias) + * ``ext0_bypass_100`` (explicit alias) + +Bypassing the PLL ensures the skews between input clock, downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching the master. \ No newline at end of file diff --git a/doc/manual/default_network_ports.rst b/doc/manual/default_network_ports.rst index 6ed9b4fbf..ddd8f0712 100644 --- a/doc/manual/default_network_ports.rst +++ b/doc/manual/default_network_ports.rst @@ -14,6 +14,10 @@ Default network ports +---------------------------------+--------------+ | Moninj (proxy control) | 1384 | +---------------------------------+--------------+ +| Core analyzer proxy (proxy) | 1385 | ++---------------------------------+--------------+ +| Core analyzer proxy (control) | 1386 | ++---------------------------------+--------------+ | Master (logging input) | 1066 | +---------------------------------+--------------+ | Master (broadcasts) | 1067 | diff --git a/doc/manual/developing.rst b/doc/manual/developing.rst index 9079f1adf..3697685c3 100644 --- a/doc/manual/developing.rst +++ b/doc/manual/developing.rst @@ -9,15 +9,15 @@ Developing ARTIQ The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server `_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler. ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier. -* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra `_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``. 
+* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment, which can be entered with the command `vivado-env`). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra `_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``. * During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives). * Install the `Nix package manager `_, version 2.4 or later. Prefer a single-user installation for simplicity. * If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly. * Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``). * Clone the ARTIQ Git repository and run ``nix develop`` at the root (where ``flake.nix`` is). * Make the current source code of ARTIQ available to the Python interpreter by running ``export PYTHONPATH=`pwd`:$PYTHONPATH``. -* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli``. If you are using a JSON system description file, use ``$ python -m artiq.gateware.targets.kasli_generic file.json``. -* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/``. You need to configure OpenOCD as explained :ref:`in the user section `. OpenOCD is already part of the flake's development environment. +* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli .json``, using a JSON system description file. +* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/``. You need to configure OpenOCD as explained :ref:`in the user section `. OpenOCD is already part of the flake's development environment. * Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``. * The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group). diff --git a/doc/manual/faq.rst b/doc/manual/faq.rst index bd5d4ac89..2e48f9e5c 100644 --- a/doc/manual/faq.rst +++ b/doc/manual/faq.rst @@ -27,6 +27,16 @@ organize datasets in folders? Use the dot (".") in dataset names to separate folders. The GUI will automatically create and delete folders in the dataset tree display. +organize experiment windows in the dashboard? 
+--------------------------------------------- + +Experiment windows can be organized by using the following hotkeys: + +* CTRL+SHIFT+T to tile experiment windows +* CTRL+SHIFT+C to cascade experiment windows + +The windows will be organized in the order they were last interacted with. + write a generator feeding a kernel feeding an analyze function? --------------------------------------------------------------- diff --git a/doc/manual/getting_started_core.rst b/doc/manual/getting_started_core.rst index 5b7cab0e5..72c7bfa0e 100644 --- a/doc/manual/getting_started_core.rst +++ b/doc/manual/getting_started_core.rst @@ -21,16 +21,18 @@ As a very first step, we will turn on a LED on the core device. Create a file `` self.core.reset() self.led.on() -The central part of our code is our ``LED`` class, which derives from :class:`artiq.language.environment.EnvExperiment`. Among other features, :class:`~artiq.language.environment.EnvExperiment` calls our :meth:`~artiq.language.environment.Experiment.build` method and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` method that interfaces to the device database to create the appropriate device drivers and make those drivers accessible as ``self.core`` and ``self.led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host). The decorator uses ``self.core`` internally, which is why we request the core device using :meth:`~artiq.language.environment.HasEnvironment.setattr_device` like any other. +The central part of our code is our ``LED`` class, which derives from :class:`artiq.language.environment.EnvExperiment`. Among other features, :class:`~artiq.language.environment.EnvExperiment` calls our :meth:`~artiq.language.environment.Experiment.build` method and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` method that interfaces with the device database to create the appropriate device drivers and make those drivers accessible as ``self.core`` and ``self.led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host). The decorator uses ``self.core`` internally, which is why we request the core device using :meth:`~artiq.language.environment.HasEnvironment.setattr_device` like any other. -Copy the file ``device_db.py`` (containing the device database) from the ``examples/master`` folder of ARTIQ into the same directory as ``led.py`` (alternatively, you can use the ``--device-db`` option of ``artiq_run``). You will probably want to set the IP address of the core device in ``device_db.py`` so that the computer can connect to it (it is the ``host`` parameter of the ``comm`` entry). See :ref:`device-db` for more information. The example device database is designed for the ``nist_clock`` hardware adapter on the KC705; see :ref:`board-ports` for RTIO channel assignments if you need to adapt the device database to a different hardware platform. +It is important that you supply the correct device database for your system configuration; it is generated by a Python script typically called ``device_db.py`` (see also :ref:`the device database `). 
If you purchased a system from M-Labs, the ``device_db.py`` for your system will normally already have been provided to you (either on the USB stick, inside ``~/artiq`` on the NUC, or by email). If you have the JSON description file for your system on hand, you can use the ARTIQ front-end tool ``artiq_ddb_template`` to generate a matching device database file. Otherwise, you can also find examples in the ``examples`` folder of ARTIQ (sorted in corresponding subfolders per core device) which you can edit to match your system. .. note:: - To obtain the examples, you can find where the ARTIQ package is installed on your machine with: :: + To access the examples, you can find where the ARTIQ package is installed on your machine with: :: python3 -c "import artiq; print(artiq.__path__[0])" -Run your code using ``artiq_run``, which is part of the ARTIQ front-end tools: :: +Make sure ``device_db.py`` is in the same directory as ``led.py``. The field ``core_addr``, placed at the top of the file, needs to match the current IP address of your core device in order for your host machine to contact it. If you purchased a pre-assembled system and haven't changed the IP address it is normally already set correctly. + +Run your code using ``artiq_run``, which is one of the ARTIQ front-end tools: :: $ artiq_run led.py @@ -39,9 +41,9 @@ The process should terminate quietly and the LED of the device should turn on. C Host/core device interaction (RPC) ---------------------------------- -A method or function running on the core device (which we call a "kernel") may communicate with the host by calling non-kernel functions that may accept parameters and may return a value. The "remote procedure call" (RPC) mechanisms handle automatically the communication between the host and the device of which function to call, with which parameters, and what the returned value is. +A method or function running on the core device (which we call a "kernel") may communicate with the host by calling non-kernel functions that may accept parameters and may return a value. The "remote procedure call" (RPC) mechanisms automatically handle the communication between the host and the device, conveying between them what function to call, what parameters to call it with, and the resulting value, once returned. -Modify the code as follows: :: +Modify ``led.py`` as follows: :: def input_led_state() -> TBool: return input("Enter desired LED state: ") == "1" @@ -69,15 +71,11 @@ You can then turn the LED off and on by entering 0 or 1 at the prompt that appea $ artiq_run led.py Enter desired LED state: 0 -What happens is the ARTIQ compiler notices that the :meth:`input_led_state` function does not have a ``@kernel`` decorator (:func:`~artiq.language.core.kernel`) and thus must be executed on the host. When the core device calls it, it sends a request to the host to execute it. The host displays the prompt, collects user input, and sends the result back to the core device, which sets the LED state accordingly. +What happens is that the ARTIQ compiler notices that the :meth:`input_led_state` function does not have a ``@kernel`` decorator (:func:`~artiq.language.core.kernel`) and thus must be executed on the host. When the function is called on the core device, it sends a request to the host, which executes it. The core device waits until the host returns, and then continues the kernel; in this case, the host displays the prompt, collects user input, and the core device sets the LED state accordingly. 
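+
+RPCs may also accept parameters passed from the kernel; only the return type needs an annotation. A brief sketch (the function, class, and values here are invented for illustration and are not part of the LED example): ::
+
+    from artiq.experiment import *
+
+    def get_setpoint(channel) -> TFloat:
+        # executed on the host; could e.g. look up a calibration value
+        return 1.25
+
+    class SetpointDemo(EnvExperiment):
+        def build(self):
+            self.setattr_device("core")
+
+        @kernel
+        def run(self):
+            self.core.reset()
+            setpoint = get_setpoint(0)   # RPC to the host
+            self.core.break_realtime()   # catch the timeline up after the RPC
+            # ... use setpoint to program RTIO outputs ...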
-RPC functions must always return a value of the same type. When they return a value that is not ``None``, the compiler should be informed in advance of the type of the value, which is what the ``-> TBool`` annotation is for. - -Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :func:`self.led.on()` or :func:`self.led.off()` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. -These events would fail because the RPC to :meth:`input_led_state()` can take an arbitrary amount of time and therefore the deadline for submission of RTIO events would have long passed when :func:`self.led.on()` or :func:`self.led.off()` are called. -The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. -It advances the timeline far enough to ensure that events can meet the submission deadline. +The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. +Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :func:`self.led.on()` or :func:`self.led.off()` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to :meth:`input_led_state()` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :func:`self.led.on()` or :func:`self.led.off()` are called (that is, the ``rtio_counter`` wall clock will have advanced far ahead of the timeline cursor ``now``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :ref:`artiq-real-time-i-o-concepts` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter`` and moves up the ``now`` cursor far enough to ensure it's once again safely ahead of the wall clock. Real-time Input/Output (RTIO) ----------------------------- @@ -112,13 +110,13 @@ There are no input-only TTL channels. The experiment then drives one million 2 µs long pulses separated by 2 µs each. Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run.py rtio.py``. Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%. -This is not what you would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. -The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc, all of which are hard to predict and variable. +This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. +The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable. Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle. 
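+
+For ease of reference, the loop under discussion looks roughly like this (a sketch of the ``rtio.py`` example introduced above): ::
+
+    for i in range(1000000):
+        delay(2*us)
+        self.ttl0.pulse(2*us)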
Instead, inside the core device, output timing is generated by the gateware and the CPU only programs switching commands with certain timestamps that the CPU computes. -This guarantees precise timing as long as the CPU can keep generating timestamps that are increasing fast enough. In case it fails to do that (and attempts to program an event with a timestamp smaller than the current RTIO clock timestamp), a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` exception is raised. The kernel causing it may catch it (using a regular ``try... except...`` construct), or it will be propagated to the host. +This guarantees precise timing as long as the CPU can keep generating timestamps that are increasing fast enough. In the case that it fails to do so (and attempts to program an event with a timestamp smaller than the current RTIO clock timestamp), :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` is raised. The kernel causing it may catch it (using a regular ``try... except...`` construct), or allow it to propagate to the host. Try reducing the period of the generated waveform until the CPU cannot keep up with the generation of switching events and the underflow exception is raised. Then try catching it: :: @@ -147,23 +145,33 @@ Try reducing the period of the generated waveform until the CPU cannot keep up w Parallel and sequential blocks ------------------------------ -It is often necessary that several pulses overlap one another. This can be expressed through the use of ``with parallel`` constructs, in which the events generated by the individual statements are executed at the same time. The duration of the ``parallel`` block is the duration of its longest statement. +It is often necessary for several pulses to overlap one another. This can be expressed through the use of ``with parallel`` constructs, in which the events generated by the individual statements are executed at the same time. The duration of the ``parallel`` block is the duration of its longest statement. Try the following code and observe the generated pulses on a 2-channel oscilloscope or logic analyzer: :: - for i in range(1000000): - with parallel: - self.ttl0.pulse(2*us) - self.ttl1.pulse(4*us) - delay(4*us) + from artiq.experiment import * + + class Tutorial(EnvExperiment): + def build(self): + self.setattr_device("core") + self.setattr_device("ttl0") + self.setattr_device("ttl1") + + @kernel + def run(self): + self.core.reset() + for i in range(1000000): + with parallel: + self.ttl0.pulse(2*us) + self.ttl1.pulse(4*us) + delay(4*us) ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches. It simply remembers the position on the timeline when entering the ``parallel`` block and then seeks back to that position after submitting the events generated by each statement. In other words, the statements in the ``parallel`` block are actually executed sequentially, only the RTIO events generated by them are scheduled to be executed in parallel. -Note that if a statement takes a lot of CPU time to execute (this different from the events scheduled by a statement taking a long time), it may cause a subsequent statement to miss the deadline for timely submission of its events. -This then causes a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` exception to be raised. +Note that accordingly if a statement takes a lot of CPU time to execute (which is different from -- and has nothing to do with! 
-- the events *scheduled* by the statement taking a long time), it may cause a subsequent statement to miss the deadline for timely submission of its events (and raise :exc:`~artiq.coredevice.exceptions.RTIOUnderflow`), while earlier statements in the parallel block would have submitted their events without problems. -Within a parallel block, some statements can be made sequential again using a ``with sequential`` construct. Observe the pulses generated by this code: :: +Within a parallel block, some statements can be scheduled sequentially again using a ``with sequential`` block. Observe the pulses generated by this code: :: for i in range(1000000): with parallel: @@ -174,11 +182,11 @@ Within a parallel block, some statements can be made sequential again using a `` self.ttl1.pulse(4*us) delay(4*us) -Particular care needs to be taken when working with ``parallel`` blocks in cases where a large number of RTIO events are generated as it possible to create sequencing errors (`RTIO sequence error`). Sequence errors do not halt execution of the kernel for performance reasons and instead are reported in the core log. If the ``aqctl_corelog`` process has been started with ``artiq_ctlmgr``, then these errors will be posted to the master log. However, if an experiment is executed through ``artiq_run``, these errors will not be visible outside of the core log. +Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to create sequence errors. A sequence error is caused when the scalable event dispatcher (SED) cannot queue an RTIO event due to its timestamp being the same as or earlier than another event in its queue. By default, the SED has 8 lanes, which suffice in most cases to avoid sequence errors; however, if many (>8) events are queued with interlaced timestamps the problem can still surface. See :ref:`sequence-errors`. -A sequence error is caused when the scalable event dispatcher (SED) cannot queue an RTIO event due to its timestamp being the same as or earlier than another event in its queue. By default, the SED has 8 lanes which allows ``parallel`` events to work without sequence errors in most cases, however if many (>8) events are queued with conflicting timestamps this error can surface. +Note that for performance reasons sequence errors do not halt execution of the kernel. Instead, they are reported in the core log. If the ``aqctl_corelog`` process has been started with ``artiq_ctlmgr``, then these errors will be posted to the master log. If an experiment is executed through ``artiq_run``, the errors will only be visible in the core log. -These errors can usually be overcome by reordering the generation of the events. Alternatively, the number of SED lanes can be increased in the gateware. +Sequence errors can usually be overcome by reordering the generation of the events (again, different from and unrelated to reordering the events themselves). Alternatively, the number of SED lanes can be increased in the gateware. .. _rtio-analyzer-example: @@ -203,13 +211,12 @@ The core device records the real-time I/O waveforms into a circular buffer. It i rtio_log("ttl0", "i", i) delay(...) -Afterwards, the recorded data can be extracted and written to a VCD file using ``artiq_coreanalyzer -w rtio.vcd`` (see: :ref:`core-device-rtio-analyzer-tool`). VCD files can be viewed using third-party tools such as GtkWave. 
- +Afterwards, the recorded data can be extracted and written to a VCD file using ``artiq_coreanalyzer -w rtio.vcd`` (see :ref:`core-device-rtio-analyzer-tool`). VCD files can be viewed using third-party tools such as GtkWave. Direct Memory Access (DMA) -------------------------- -DMA allows you to store fixed sequences of pulses in system memory, and have the DMA core in the FPGA play them back at high speed. Pulse sequences that are too fast for the CPU (i.e. would cause RTIO underflows) can still be generated using DMA. The only modification of the sequence that the DMA core supports is shifting it in time (so it can be played back at any position of the timeline), everything else is fixed at the time of recording the sequence. +DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences are always possible to generate and execute. However, if they are time-consuming for the CPU to generate, they may require very large amounts of positive slack in order to allow the CPU enough time to complete the generation before the wall clock 'catches up' (that is, without running into RTIO underflows). A better option is to record these sequences to the DMA core. Once recorded, events sequences are fixed and cannot be modified, but can be safely replayed at any position in the timeline, potentially repeatedly. Try this: :: @@ -244,12 +251,14 @@ Try this: :: # each playback advances the timeline by 50*(100+100) ns self.core_dma.playback_handle(pulses_handle) +For more documentation on the methods used, see the :mod:`artiq.coredevice.dma` reference. + Distributed Direct Memory Access (DDMA) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ By default on DRTIO systems, all events recorded by the DMA core are kept and played back on the master. -With distributed DMA, RTIO events that should be played back on remote destinations, are distributed to the corresponding satellites. In some cases (typically, large buffers on several satellites with high event throughput), it allows for better performance and higher bandwidth, as the RTIO events do not have to be sent over the DRTIO link(s) during playback. +With distributed DMA, RTIO events that should be played back on remote destinations are distributed to the corresponding satellites. In some cases (typically, large buffers on several satellites with high event throughput), it allows for better performance and higher bandwidth, as the RTIO events do not have to be sent over the DRTIO link(s) during playback. To enable distributed DMA, simply provide an ``enable_ddma=True`` argument for the :meth:`~artiq.coredevice.dma.CoreDMA.record` method - taking a snippet from the previous example: :: @@ -262,8 +271,117 @@ To enable distributed DMA, simply provide an ``enable_ddma=True`` argument for t self.ttl0.pulse(100*ns) delay(100*ns) -This argument is ignored on standalone systems, as it does not apply there. +In standalone systems this argument is ignored and has no effect. -Enabling DDMA on a purely local sequence on a DRTIO system introduces an overhead during trace recording which comes from additional processing done on the record, so careful use is advised. 
+Enabling DDMA on a purely local sequence on a DRTIO system introduces an overhead during trace recording, which comes from the additional processing done on the record, so careful use is advised. Due to the extra time that communicating with the relevant satellites takes, an additional delay before playback may be necessary to prevent a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` when playing back a DDMA-enabled sequence.
 
-Due to the extra time that communicating with relevant satellites takes, an additional delay before playback may be necessary to prevent a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` when playing back a DDMA-enabled sequence.
\ No newline at end of file
+
+Subkernels
+----------
+
+Subkernels are kernels running on a satellite device. They allow some processing and control of remote RTIO devices to be offloaded, freeing up resources on the master.
+
+Subkernels behave for the most part like regular kernels; they accept arguments and can return values. However, there are a few caveats:
+
+    - they do not support RPCs,
+    - they do not support DRTIO,
+    - their return value must be fully annotated with an ARTIQ type,
+    - their arguments should be annotated, and only basic ARTIQ types are supported,
+    - while ``self`` is allowed, there is no attribute writeback - any changes will be discarded when the subkernel is completed,
+    - they can raise exceptions, but the exceptions cannot be caught by the master (rather, they are propagated directly to the host),
+    - they begin execution as soon as possible when called, and can be awaited.
+
+To define a subkernel, use the subkernel decorator (``@subkernel(destination=X)``). The destination is the satellite number as defined in the routing table, and must be between 1 and 255. To call a subkernel, call it like a normal function; to await its result, use ``subkernel_await(function, [timeout])``.
+
+For example, a subkernel performing integer addition: ::
+
+    from artiq.experiment import *
+
+
+    @subkernel(destination=1)
+    def subkernel_add(a: TInt32, b: TInt32) -> TInt32:
+        return a + b
+
+    class SubkernelExperiment(EnvExperiment):
+        def build(self):
+            self.setattr_device("core")
+
+        @kernel
+        def run(self):
+            subkernel_add(2, 2)
+            result = subkernel_await(subkernel_add)
+            assert result == 4
+
+Sometimes subkernel execution may take a long time. By default, the await function will wait as long as necessary. If a timeout is needed, it can be set using the optional argument of ``subkernel_await()``. The value given is interpreted in milliseconds. If a negative value is given, the timeout is disabled.
+
+Subkernels are compiled after the main kernel and immediately uploaded to satellites. When called, the master instructs the appropriate satellite to load the subkernel into its kernel core and run it. If the subkernel is complex and its binary relatively large, the delay between the call and actually running the subkernel may be substantial; if it is necessary to minimize this delay, ``subkernel_preload(function)`` should be used before the call.
+
+While ``self`` is accepted as an argument for subkernels, it is embedded into the compiled data. Any changes made by the main kernel or other subkernels will not be available.
+
+Subkernels can call other kernels and subkernels. For a more complex example: ::
+
+    from artiq.experiment import *
+
+    class SubkernelExperiment(EnvExperiment):
+        def build(self):
+            self.setattr_device("core")
+            self.setattr_device("ttl0")
+            self.setattr_device("ttl8")  # assuming it's on a satellite
+
+        @subkernel(destination=1)
+        def add_and_pulse(self, a: TInt32, b: TInt32) -> TInt32:
+            c = a + b
+            self.pulse_ttl(c)
+            return c
+
+        @subkernel(destination=1)
+        def pulse_ttl(self, delay: TInt32) -> TNone:
+            self.ttl8.pulse(delay*us)
+
+        @kernel
+        def run(self):
+            subkernel_preload(self.add_and_pulse)
+            self.core.reset()
+            delay(10*ms)
+            self.add_and_pulse(2, 2)
+            self.ttl0.pulse(15*us)
+            result = subkernel_await(self.add_and_pulse)
+            assert result == 4
+            self.pulse_ttl(20)
+
+Without the preload, the delay after the core reset would need to be longer. The operation may still take some time, depending on the connection. Notice that the method ``pulse_ttl()`` can be called both within a subkernel and on its own.
+
+It is not necessary for subkernels to always be awaited, but awaiting is required to retrieve returned values and exceptions.
+
+.. note::
+    While a subkernel is running, regardless of what devices it makes use of, none of the RTIO devices on that satellite (or on any satellites downstream) will be available to the master. Control is returned to the master after the subkernel completes - to be certain a device is usable, await the subkernel before performing any RTIO operations on the affected satellites.
+
+Message passing
+^^^^^^^^^^^^^^^
+
+Apart from arguments and returns, subkernels can also pass messages to each other or to the master with the built-in ``subkernel_send()`` and ``subkernel_recv()`` functions. This can be used for communication between subkernels, to pass additional data, or to send partially computed data. Consider the following example: ::
+
+    from artiq.experiment import *
+
+    @subkernel(destination=1)
+    def simple_message() -> TInt32:
+        data = subkernel_recv("message", TInt32)
+        return data + 20
+
+    class MessagePassing(EnvExperiment):
+        def build(self):
+            self.setattr_device("core")
+
+        @kernel
+        def run(self):
+            simple_message()
+            subkernel_send(1, "message", 150)
+            result = subkernel_await(simple_message)
+            assert result == 170
+
+The ``subkernel_send(destination, name, value)`` function takes three arguments: a destination, a name for the message (to be used for identification in the corresponding ``subkernel_recv()``), and the passed value.
+
+The ``subkernel_recv(name, type, [timeout])`` function requires two arguments: the message name (matching exactly the name provided in ``subkernel_send``) and the expected type. Optionally, it accepts a third argument, a timeout for the operation in milliseconds. If this value is negative, the timeout is disabled. By default, it waits as long as necessary.
+
+To avoid misinterpretation of the data, the compiler type-checks the value sent by ``subkernel_send`` against the type declared in ``subkernel_recv``. To guard against common errors, it also checks that all message names are used in both a sending and a receiving function.
+
+A message can only be received while a subkernel is running, and is placed into a buffer to be retrieved when required; therefore send executes independently of any receive and never deadlocks. However, a receive function may time out or wait forever if no message with the correct name and destination is ever sent.
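+
+The preload and timeout features described above can be combined with message passing; a brief sketch (the 5000 ms timeout is an arbitrary choice): ::
+
+    @kernel
+    def run(self):
+        subkernel_preload(simple_message)
+        self.core.reset()
+        simple_message()
+        subkernel_send(1, "message", 150)
+        # wait at most 5000 ms for the subkernel to return
+        result = subkernel_await(simple_message, 5000)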
\ No newline at end of file diff --git a/doc/manual/getting_started_mgmt.rst b/doc/manual/getting_started_mgmt.rst index e9398c7bc..334d6389c 100644 --- a/doc/manual/getting_started_mgmt.rst +++ b/doc/manual/getting_started_mgmt.rst @@ -10,7 +10,7 @@ Starting your first experiment with the master In the previous tutorial, we used the ``artiq_run`` utility to execute our experiments, which is a simple stand-alone tool that bypasses the ARTIQ management system. We will now see how to run an experiment using the master (the central program in the management system that schedules and executes experiments) and the dashboard (that connects to the master and controls it). -First, create a folder ``~/artiq-master`` and copy the file ``device_db.py`` (containing the device database) found in the ``examples/master`` directory from the ARTIQ sources. The master uses those files in the same way as ``artiq_run``. +First, create a folder ``~/artiq-master`` and copy your ``device_db.py`` into it (the file containing the device database, as described in :ref:`connecting-to-the-core-device`).The master uses those files in the same way as ``artiq_run``. Then create a ``~/artiq-master/repository`` sub-folder to contain experiments. The master scans this ``repository`` folder to determine what experiments are available (the name of the folder can be changed using ``-r``). diff --git a/doc/manual/index.rst b/doc/manual/index.rst index 76b0b8f61..6207838c1 100644 --- a/doc/manual/index.rst +++ b/doc/manual/index.rst @@ -6,21 +6,21 @@ ARTIQ documentation :maxdepth: 2 introduction + release_notes installing developing - release_notes rtio getting_started_core - compiler getting_started_mgmt - core_device - management_system environment - drtio + compiler + management_system + drtio + core_device core_language_reference - core_drivers_reference - list_of_ndsps - developing_a_ndsp + core_drivers_reference utilities + developing_a_ndsp + list_of_ndsps default_network_ports faq diff --git a/doc/manual/installing.rst b/doc/manual/installing.rst index 0302ed40c..699762f97 100644 --- a/doc/manual/installing.rst +++ b/doc/manual/installing.rst @@ -90,6 +90,35 @@ If your favorite package is not available with Nix, contact us using the helpdes Troubleshooting ^^^^^^^^^^^^^^^ +"Do you want to allow configuration setting... (y/N)?" +"""""""""""""""""""""""""""""""""""""""""""""""""""""" + +When installing and initializing ARTIQ using commands like ``nix shell``, ``nix develop``, or ``nix profile install``, you may encounter prompts to modify certain configuration settings. These settings correspond to the ``nixConfig`` flag within the ARTIQ flake: + +:: + + do you want to allow configuration setting 'extra-sandbox-paths' to be set to '/opt' (y/N)? + do you want to allow configuration setting 'extra-substituters' to be set to 'https://nixbld.m-labs.hk' (y/N)? + do you want to allow configuration setting 'extra-trusted-public-keys' to be set to 'nixbld.m-labs.hk-1:5aSRVA5b320xbNvu30tqxVPXpld73bhtOeH6uAjRyHc=' (y/N)? + +We recommend accepting these settings by responding with ``y``. If asked to permanently mark these values as trusted, choose ``y`` again. This action saves the configuration to ``~/.local/share/nix/trusted-settings.json``, allowing future prompts to be bypassed. 
+ +Alternatively, you can also use the option `accept-flake-config `_ by appending ``--accept-flake-config`` to your nix command: + +:: + + nix develop --accept-flake-config + +Or add the option to ``~/.config/nix/nix.conf`` to make the setting more permanent: + +:: + + extra-experimental-features = flakes + accept-flake-config = true + +.. note:: + Should you wish to revert to the default settings, you can do so by editing the appropriate options in the aforementioned configuration files. + "Ignoring untrusted substituter, you are not a trusted user" """""""""""""""""""""""""""""""""""""""""""""""""""""""""""" @@ -125,16 +154,19 @@ This will set your user as a trusted user, allowing the use of any untrusted sub Installing via MSYS2 (Windows) ------------------------------ -Install `MSYS2 `_, then edit ``C:\MINGW64\etc\pacman.conf`` and add at the end: :: +We recommend using our `offline installer `_, which contains all the necessary packages and no additional configuration is needed. +After installation, launch ``MSYS2 with ARTIQ`` from the Windows Start menu. + +Alternatively, you may install `MSYS2 `_, then edit ``C:\MINGW64\etc\pacman.conf`` and add at the end: :: [artiq] SigLevel = Optional TrustAll Server = https://msys2.m-labs.hk/artiq-nac3 -Launch ``MSYS2 MINGW64`` from the Windows Start menu to open the MSYS2 shell, and enter the following commands: :: +Launch ``MSYS2 CLANG64`` from the Windows Start menu to open the MSYS2 shell, and enter the following commands: :: pacman -Syy - pacman -S mingw-w64-x86_64-artiq + pacman -S mingw-w64-clang-x86_64-artiq .. note:: Some ARTIQ examples also require matplotlib and numba, and they must be installed manually for running those examples. They are available in MSYS2. @@ -144,14 +176,18 @@ If your favorite package is not available with MSYS2, contact us using the helpd Upgrading ARTIQ (with Nix) -------------------------- -Run ``$ nix profile upgrade`` if you installed ARTIQ into your user profile. If you used a ``flake.nix`` shell environment, make a back-up copy of the ``flake.lock`` file to enable rollback, then run ``$ nix flake update`` and re-enter ``$ nix shell``. +.. note:: + When you upgrade ARTIQ, as well as updating the software on your host machine, it may also be necessary to reflash the gateware and firmware of your core device to keep them compatible. New numbered release versions in particular incorporate breaking changes and are not generally compatible. See :ref:`reflashing-core-device` below for instructions on reflashing. + +Upgrading with Nix +^^^^^^^^^^^^^^^^^^ + +Run ``$ nix profile upgrade`` if you installed ARTIQ into your user profile. If you used a ``flake.nix`` shell environment, make a back-up copy of the ``flake.lock`` file to enable rollback, then run ``$ nix flake update`` and re-enter the environment with ``$ nix shell``. To rollback to the previous version, respectively use ``$ nix profile rollback`` or restore the backed-up version of the ``flake.lock`` file. -You may need to reflash the gateware and firmware of the core device to keep it synchronized with the software. - -Upgrading ARTIQ (with MSYS2) ----------------------------- +Upgrading with MSYS2 +^^^^^^^^^^^^^^^^^^^^ Run ``pacman -Syu`` to update all MSYS2 packages including ARTIQ. If you get a message telling you that the shell session must be restarted after a partial update, open the shell again after the partial update and repeat the command. See the MSYS2 and Pacman manual for information on how to update individual packages if required. 
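+
+For instance, a sketch of refreshing the package databases and then updating only the ARTIQ package (mirroring the installation commands above) might be: ::
+
+    pacman -Sy
+    pacman -S mingw-w64-clang-x86_64-artiq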
@@ -161,40 +197,43 @@ Flashing gateware and firmware into the core device
 ---------------------------------------------------
 
 .. note::
-    If you have purchased a pre-assembled system from M-Labs or QUARTIQ, the gateware and firmware are already flashed and you can skip those steps, unless you want to replace them with a different version of ARTIQ.
+    If you have purchased a pre-assembled system from M-Labs or QUARTIQ, the gateware and firmware of your device will already be flashed to the newest version of ARTIQ. These steps are only necessary if you obtained your hardware in a different way, or if you want to change or upgrade your ARTIQ version after purchase.
 
-You need to write three binary images onto the FPGA board:
-1. The FPGA gateware bitstream
-2. The bootloader
-3. The ARTIQ runtime or satellite manager
+Obtaining the board binaries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Installing OpenOCD
-^^^^^^^^^^^^^^^^^^
+If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware that corresponds to your currently installed version of ARTIQ using AFWS (ARTIQ firmware service). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email.
+
+Run the command::
+
+    $ afws_client [username] build [afws_directory] [variant]
+
+Replace ``[username]`` with the login name that was given to you with the subscription, ``[variant]`` with the name of your system variant, and ``[afws_directory]`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email.
+
+For certain configurations (KC705 or ZC706 only) it is also possible to source firmware from `the M-Labs Hydra server `_ (in ``main`` and ``zynq`` respectively).
+
+Without a subscription, you may build the firmware yourself from the open source code. See the section :ref:`Developing ARTIQ `.
+
+.. _installing-configuring-openocd:
+
+Installing and configuring OpenOCD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. note::
-    This version of OpenOCD is not applicable to Kasli-SoC.
+    These instructions are not applicable to Kasli-SoC, which does not use the utility ``artiq_flash`` to reflash. If your core device is a Kasli-SoC, skip straight to :ref:`writing-flash`.
 
-OpenOCD can be used to write the binary images into the core device FPGA board's flash memory.
+ARTIQ supplies the utility ``artiq_flash``, which uses OpenOCD to write the binary images into an FPGA board's flash memory.
 
-With Nix, add ``aqmain.openocd-bscanspi`` to the shell packages.
-Be careful not to add ``pkgs.openocd`` instead - this would install OpenOCD from the NixOS package collection, which does not contain the bscanspi bitstreams necessary to flash ARTIQ boards.
+* With Nix, add ``aqmain.openocd-bscanspi`` to the shell packages. Be careful not to add ``pkgs.openocd`` instead - this would install OpenOCD from the NixOS package collection, which does not support ARTIQ boards.
 
-With MSYS2, install ``openocd`` and ``bscan-spi-bitstreams`` as follows::
-
-    pacman -S mingw-w64-x86_64-openocd mingw-w64-x86_64-bscan-spi-bitstreams
+* With MSYS2, ``openocd`` and ``bscan-spi-bitstreams`` are included with ``artiq`` by default.
 
 .. 
_configuring-openocd: -Configuring OpenOCD -^^^^^^^^^^^^^^^^^^^ +Some additional steps are necessary to ensure that OpenOCD can communicate with the FPGA board: -.. note:: - These instructions are not applicable to Kasli-SoC. - -Some additional steps are necessary to ensure that OpenOCD can communicate with the FPGA board. - -On Linux, first ensure that the current user belongs to the ``plugdev`` group (i.e. ``plugdev`` shown when you run ``$ groups``). If it does not, run ``$ sudo adduser $USER plugdev`` and re-login. +* On Linux, first ensure that the current user belongs to the ``plugdev`` group (i.e. ``plugdev`` shown when you run ``$ groups``). If it does not, run ``$ sudo adduser $USER plugdev`` and re-login. If you installed OpenOCD on Linux using Nix, use the ``which`` command to determine the path to OpenOCD, and then copy the udev rules: :: @@ -205,7 +244,7 @@ If you installed OpenOCD on Linux using Nix, use the ``which`` command to determ NixOS users should of course configure OpenOCD through ``/etc/nixos/configuration.nix`` instead. -On Windows, a third-party tool, `Zadig `_, is necessary. Use it as follows: +* On Windows, a third-party tool, `Zadig `_, is necessary. Use it as follows: 1. Make sure the FPGA board's JTAG USB port is connected to your computer. 2. Activate Options → List All Devices. @@ -216,35 +255,22 @@ On Windows, a third-party tool, `Zadig `_, is necessary. You may need to repeat these steps every time you plug the FPGA board into a port where it has not been plugged into previously on the same system. -Obtaining the board binaries -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware that corresponds to the currently installed version of ARTIQ using AFWS (ARTIQ firmware service). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. - -Run the command:: - - $ afws_client [username] build [afws_directory] [variant] - -Replace ``[username]`` with the login name that was given to you with the subscription, ``[variant]`` with the name of your system variant, and ``[afws_directory]`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email. - -Without a subscription, you may build the firmware yourself from the open source code. See the section :ref:`Developing ARTIQ `. +.. _writing-flash: Writing the flash ^^^^^^^^^^^^^^^^^ -Then, you can write the flash: - -* For Kasli:: - - $ artiq_flash -d [afws_directory] - -The JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you simply need to connect your computer to the micro-USB connector on the Kasli front panel. +First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you simply need to connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses ``artiq_coremgmt``, an IP address supplied either with the ``-D`` option or in a correctly specified ``device_db.py`` suffices. 
* For Kasli-SoC:: $ artiq_coremgmt [-D 192.168.1.75] config write -f boot [afws_directory]/boot.bin -If the Kasli-SoC won't boot due to corrupted firmware and ``artiq_coremgmt`` cannot access it, extract the SD card and replace ``boot.bin`` manually. +If the Kasli-SoC won't boot due to nonexistent or corrupted firmware, extract the SD card and copy ``boot.bin`` onto it manually. + +* For Kasli:: + + $ artiq_flash -d [afws_directory] * For the KC705 board:: @@ -252,93 +278,94 @@ If the Kasli-SoC won't boot due to corrupted firmware and ``artiq_coremgmt`` can The SW13 switches need to be set to 00001. +Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for ``artiq_flash`` in the :ref:`Utilities ` reference. + +.. _core-device-networking: + Setting up the core device IP networking ---------------------------------------- -For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have a SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion. +For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have a SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion. -You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network. +You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network. -If you purchased a Kasli device from M-Labs, it usually comes with the IP address ``192.168.1.75``. Once you can reach this IP, it can be changed with: :: +Kasli-SoC already directly features RJ45 10/100/1000T Ethernet, but the same is still true of its SFP ports. + +If you purchased a Kasli or Kasli-SoC device from M-Labs, it usually comes with the IP address ``192.168.1.75``. Once you can reach this IP, it can be changed by running: :: $ artiq_coremgmt -D 192.168.1.75 config write -s ip [new IP] -and then reboot the device (with ``artiq_flash start`` or a power cycle). +and then rebooting the device (with ``artiq_flash start`` or a power cycle). -If the ``ip`` config field is not set, or set to ``use_dhcp`` then the device will -attempt to obtain an IP address and default gateway using DHCP. If a static IP -address is wanted, install OpenOCD as before, and flash the IP, default gateway -(and, if necessary, MAC and IPv6) addresses directly: :: +.. note:: + Kasli-SoC is not a valid target for ``artiq_flash``; it is easiest to reboot by power cycle. For a KC705, it is necessary to specify ``artiq_flash -t kc705 start``. 
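As a concrete illustration of the procedure above (the new address here is arbitrary and purely for the example), you might change the address, reboot the device, and then check that it answers at the new address: ::

    $ artiq_coremgmt -D 192.168.1.75 config write -s ip 192.168.1.80
    $ # reboot the device (power cycle, or artiq_flash start where applicable)
    $ artiq_coremgmt -D 192.168.1.80 log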
-    $ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx/xx -s ipv4_default_route xx.xx.xx.xx -s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx -s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
+* For Kasli-SoC:
+
+If the ``ip`` config is not set, Kasli-SoC firmware defaults to using the IP address ``192.168.1.56``. It can then be changed with the procedure above.
+
+* For Kasli or KC705:
+
+If the ``ip`` config field is not set or set to ``use_dhcp``, the device will attempt to obtain an IP address and default gateway using DHCP. If a static IP address is nonetheless wanted, it can be flashed directly (OpenOCD must be installed and configured, as above), along with, as necessary, default gateway, IPv6, and/or MAC address: ::
+
+    $ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]
     $ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
 
-For Kasli devices, flashing a MAC address is not necessary as they can obtain it from their EEPROM.
-If you only want to access the core device from the same subnet you may
-omit the default gateway and IPv4 prefix length: ::
-
-    $ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx
+On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, default gateway and IPv4 prefix length may also be omitted. Regardless of board, once a device is reachable by ``artiq_coremgmt``, any of these fields can be accessed using ``artiq_coremgmt config write`` and ``artiq_coremgmt config read``; see also :ref:`Utilities `.
 
 If DHCP has been used the address can be found in the console output, which can be viewed using: ::
 
     $ python -m misoc.tools.flterm /dev/ttyUSB2
-
 Check that you can ping the device. If ping fails, check that the Ethernet link LED is ON - on Kasli, it is the LED next to the SFP0 connector. As a next step, look at the messages emitted on the UART during boot. Use a program such as flterm or PuTTY to connect to the device's serial port at 115200bps 8-N-1 and reboot the device. On Kasli, the serial port is on FTDI channel 2 with v1.1 hardware (with channel 0 being JTAG) and on FTDI channel 1 with v1.0 hardware. Note that on Windows you might need to install the `FTDI drivers `_ first.
 
-If you want to use IPv6, the device also has a link-local address that corresponds to its EUI-64, and an additional arbitrary IPv6 address can be defined by using the ``ip6`` configuration key. All IPv4 and IPv6 addresses can be used at the same time.
+Regarding use of IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (arbitrary) IPv6 address defined using the ``ip6`` configuration key.
+
+.. _miscellaneous_config_core_device:
 
 Miscellaneous configuration of the core device
 ----------------------------------------------
 
-Those steps are optional. The core device usually needs to be restarted for changes to take effect.
+These steps are optional, and only need to be executed if necessary for your specific purposes. In all cases, the core device generally needs to be restarted for changes to take effect.
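Where the core device is already reachable over the network, the restart can generally be performed remotely with the core management tool, instead of a power cycle or ``artiq_flash start``: ::

    $ artiq_coremgmt reboot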
-* Load the idle kernel +Flash idle or startup kernel +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The idle kernel is the kernel (some piece of code running on the core device) which the core device runs whenever it is not connected to a PC via Ethernet. -This kernel is therefore stored in the :ref:`core device configuration flash storage `. +The idle kernel is the kernel (that is, a piece of code running on the core device; see :ref:`next topic ` for more information about kernels) which the core device runs whenever it is not connected to the host via Ethernet. This kernel is therefore stored immediately in the :ref:`core device configuration flash storage `. -To flash the idle kernel, first compile the idle experiment. The idle experiment's ``run()`` method must be a kernel: it must be decorated with the ``@kernel`` decorator (see :ref:`next topic ` for more information about kernels). Since the core device is not connected to the PC, RPCs (calling Python code running on the PC from the kernel) are forbidden in the idle experiment. Then write it into the core device configuration flash storage: :: +To flash the idle kernel, first compile an idle experiment. Since the core device is not connected to the host, RPCs (calling Python code running on the host from the kernel) are forbidden, and its ``run()`` method must be a kernel, marked correctly with the ``@kernel`` decorator. Write the compiled experiment to the core device configuration flash storage, under the key ``idle_kernel``: :: $ artiq_compile idle.py $ artiq_coremgmt config write -f idle_kernel idle.elf -.. note:: You can find more information about how to use the ``artiq_coremgmt`` utility on the :ref:`Utilities ` page. - -* Load the startup kernel - -The startup kernel is executed once when the core device powers up. It should initialize DDSes, set up TTL directions, etc. Proceed as with the idle kernel, but using the ``startup_kernel`` key in the ``artiq_coremgmt`` command. +The startup kernel is the kernel executed once immediately whenever the core device powers on. Uses include initializing DDSes, setting TTL directions etc. Proceed as with the idle kernel, but using the ``startup_kernel`` key in the ``artiq_coremgmt`` command. For DRTIO systems, the startup kernel should wait until the desired destinations (including local RTIO) are up, using :meth:`artiq.coredevice.Core.get_rtio_destination_status`. -* Load the DRTIO routing table +Load the DRTIO routing table +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you are using DRTIO and the default routing table (for a star topology) is not suitable to your needs, prepare and load a different routing table. See :ref:`Using DRTIO `. -* Select the RTIO clock source (KC705 and Kasli) +Select the RTIO clock source +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The KC705 may use either an external clock signal, or its internal clock with external frequency or internal crystal reference. The clock is selected at power-up. Setting the RTIO clock source to "ext0_bypass" would bypass the Si5324 synthesiser, requiring that an input clock be present. To select the source, use one of these commands: :: +The core device may use any of: an external clock signal, its internal clock with external frequency reference, or its internal clock with internal crystal reference. Clock source and timing are set at power-up. To find out what clock signal you are using, check startup logs with ``artiq_coremgmt log``. + +The default is to use an internal 125MHz clock. 
To select a source, use a command of the form: :: $ artiq_coremgmt config write -s rtio_clock int_125 # internal 125MHz clock (default) - $ artiq_coremgmt config write -s rtio_clock ext0_bypass # external clock (bypass) + $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125 # external 10MHz reference used to synthesize internal 125MHz -Other options include: - - ``ext0_synth0_10to125`` - external 10MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock, - - ``ext0_synth0_80to125`` - external 80MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock, - - ``ext0_synth0_100to125`` - external 100MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock, - - ``ext0_synth0_125to125`` - external 125MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock, - - ``int_100`` - internal crystal reference is used by Si5324 to synthesize a 100MHz RTIO clock, - - ``int_150`` - internal crystal reference is used by Si5324 to synthesize a 150MHz RTIO clock. - - ``ext0_bypass_125`` and ``ext0_bypass_100`` - explicit aliases for ``ext0_bypass``. +See :ref:`core-device-clocking` for availability of specific options. -Availability of these options depends on the board and their configuration - specific setting may or may not be supported. - -* Setup resolving RTIO channels to their names +Set up resolving RTIO channels to their names +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This feature allows you to print the channels' respective names alongside with their numbers in RTIO error messages. To enable it, run the ``artiq_rtiomap`` tool and write its result into the device config at the ``device_map`` key: :: $ artiq_rtiomap dev_map.bin $ artiq_coremgmt config write -f device_map dev_map.bin -.. note:: You can find more information about how to use the ``artiq_rtiomap`` utility on the :ref:`Utilities ` page. +.. note:: More information on the ``artiq_rtiomap`` utility can be found on the :ref:`Utilities ` page. diff --git a/doc/manual/introduction.rst b/doc/manual/introduction.rst index 2f3f23a0e..2601f980b 100644 --- a/doc/manual/introduction.rst +++ b/doc/manual/introduction.rst @@ -27,4 +27,4 @@ Website: https://m-labs.hk/artiq `Cite ARTIQ `_ as ``Bourdeauducq, Sébastien et al. (2016). ARTIQ 1.0. Zenodo. 10.5281/zenodo.51303``. -Copyright (C) 2014-2023 M-Labs Limited. Licensed under GNU LGPL version 3+. +Copyright (C) 2014-2024 M-Labs Limited. Licensed under GNU LGPL version 3+. diff --git a/doc/manual/list_of_ndsps.rst b/doc/manual/list_of_ndsps.rst index 65953485c..03d3a1b3d 100644 --- a/doc/manual/list_of_ndsps.rst +++ b/doc/manual/list_of_ndsps.rst @@ -29,4 +29,4 @@ The following network device support packages are available for ARTIQ. If you wo | InfluxDB database | Not available | Not available | `HTML `_ | https://gitlab.com/charlesbaynham/artiq_influx_generic | +---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+ -MSYS2 packages all start with the ``mingw-w64-x86_64-`` prefix. +MSYS2 packages all start with the ``mingw-w64-clang-x86_64-`` prefix. 
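With the renamed prefix, installing packages from an MSYS2 shell takes the form sketched below. The package name is shown for illustration only and assumes the main ARTIQ package follows the same naming scheme; check the M-Labs MSYS2 repository for the exact packages available: ::

    $ pacman -S mingw-w64-clang-x86_64-artiq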
diff --git a/doc/manual/management_system.rst b/doc/manual/management_system.rst index abb5c2d17..440b27d9d 100644 --- a/doc/manual/management_system.rst +++ b/doc/manual/management_system.rst @@ -157,11 +157,10 @@ Embedded applets should use `AppletRequestIPC` while standalone applets use `App Applet entry area ***************** -Extensions are provided to enable the use of argument widgets in applets through the `EntryArea` class. +Argument widgets can be used in applets through the `EntryArea` class. Below is a simple example code snippet using the `EntryArea` class: :: - # Create the experiment area entry_area = EntryArea() # Create a new widget diff --git a/doc/manual/rtio.rst b/doc/manual/rtio.rst index f5901ab0d..fad626c1d 100644 --- a/doc/manual/rtio.rst +++ b/doc/manual/rtio.rst @@ -1,3 +1,5 @@ +.. _artiq-real-time-i-o-concepts: + ARTIQ Real-Time I/O Concepts ============================ @@ -119,6 +121,8 @@ To track down ``RTIOUnderflows`` in an experiment there are a few approaches: code. * The :any:`integrated logic analyzer ` shows the timeline context that lead to the exception. The analyzer is always active and supports plotting of RTIO slack. RTIO slack is the difference between timeline cursor and wall clock time (``now - rtio_counter``). +.. _sequence-errors: + Sequence errors --------------- A sequence error happens when the sequence of coarse timestamps cannot be supported by the gateware. For example, there may have been too many timeline rewinds. diff --git a/doc/manual/utilities.rst b/doc/manual/utilities.rst index d7642113d..14dd2eb87 100644 --- a/doc/manual/utilities.rst +++ b/doc/manual/utilities.rst @@ -12,7 +12,6 @@ Local running tool :ref: artiq.frontend.artiq_run.get_argparser :prog: artiq_run - Static compiler --------------- @@ -32,6 +31,8 @@ This tool compiles key/value pairs into a binary image suitable for flashing int :ref: artiq.frontend.artiq_mkfs.get_argparser :prog: artiq_mkfs +.. _flashing-loading-tool: + Flashing/Loading tool --------------------- @@ -46,7 +47,7 @@ Core device management tool The artiq_coremgmt utility gives remote access to the core device logs, the :ref:`core-device-flash-storage`, and other management functions. -To use this tool, you need to specify a ``device_db.py`` device database file which contains a ``comm`` device (an example is provided in ``examples/master/device_db.py``). This tells the tool how to connect to the core device and with which parameters (e.g. IP address, TCP port). When not specified, the artiq_coremgmt utility will assume that there is a file named ``device_db.py`` in the current directory. +To use this tool, it is necessary to specify the IP address your core device can be contacted at. If no option is used, the utility will assume there is a file named ``device_db.py`` in the current directory containing the device database; otherwise, a device database file can be provided with ``--device-db`` or an address directly with ``--device`` (see also below). To read core device logs:: @@ -139,6 +140,15 @@ Core device RTIO analyzer tool .. _routing-table-tool: +Core device RTIO analyzer proxy +------------------------------- + +:mod:`~artiq.frontend.aqctl_coreanalyzer_proxy` is a tool to distribute the core analyzer dump to several clients such as the dashboard. + +.. 
argparse:: + :ref: artiq.frontend.aqctl_coreanalyzer_proxy.get_argparser + :prog: aqctl_coreanalyzer_proxy + DRTIO routing table manipulation tool ------------------------------------- diff --git a/flake.nix b/flake.nix index 9b3fd2c58..83e1ad742 100644 --- a/flake.nix +++ b/flake.nix @@ -24,8 +24,8 @@ artiqRev = self.sourceInfo.rev or "unknown"; rustManifest = pkgs.fetchurl { - url = "https://static.rust-lang.org/dist/2021-01-29/channel-rust-nightly.toml"; - sha256 = "sha256-EZKgw89AH4vxaJpUHmIMzMW/80wAFQlfcxRoBD9nz0c="; + url = "https://static.rust-lang.org/dist/2021-09-01/channel-rust-nightly.toml"; + sha256 = "sha256-KYLZHfOkotnM6BZd7CU+vBA3w/VtiWxth3ngJlmA41U="; }; targets = []; @@ -187,7 +187,8 @@ cargoDeps = rustPlatform.importCargoLock { lockFile = ./artiq/firmware/Cargo.lock; outputHashes = { - "fringe-1.2.1" = "sha256-m4rzttWXRlwx53LWYpaKuU5AZe4GSkbjHS6oINt5d3Y="; + "fringe-1.2.1" = "sha256-u7NyZBzGrMii79V+Xs4Dx9tCpiby6p8IumkUl7oGBm0="; + "tar-no-std-0.1.8" = "sha256-xm17108v4smXOqxdLvHl9CxTCJslmeogjm4Y87IXFuM="; }; }; nativeBuildInputs = [ @@ -277,23 +278,26 @@ paths = [ openocd-fixed bscan_spi_bitstreams-pkg ]; }; - sphinxcontrib-wavedrom = pkgs.python3Packages.buildPythonPackage rec { - pname = "sphinxcontrib-wavedrom"; - version = "3.0.4"; - format = "pyproject"; - src = pkgs.python3Packages.fetchPypi { - inherit pname version; - sha256 = "sha256-0zTHVBr9kXwMEo4VRTFsxdX2HI31DxdHfLUHCQmw1Ko="; - }; - nativeBuildInputs = [ pkgs.python3Packages.setuptools-scm ]; - propagatedBuildInputs = (with pkgs.python3Packages; [ wavedrom sphinx xcffib cairosvg ]); - }; latex-artiq-manual = pkgs.texlive.combine { inherit (pkgs.texlive) scheme-basic latexmk cmap collection-fontsrecommended fncychap titlesec tabulary varwidth framed fancyvrb float wrapfig parskip upquote capt-of needspace etoolbox booktabs; }; + + artiq-frontend-dev-wrappers = pkgs.runCommandNoCC "artiq-frontend-dev-wrappers" {} + '' + mkdir -p $out/bin + for program in ${self}/artiq/frontend/*.py; do + if [ -x $program ]; then + progname=`basename -s .py $program` + outname=$out/bin/$progname + echo "#!${pkgs.bash}/bin/bash" >> $outname + echo "exec python3 -m artiq.frontend.$progname \"\$@\"" >> $outname + chmod 755 $outname + fi + done + ''; in rec { packages.x86_64-linux = rec { inherit (nac3.packages.x86_64-linux) python3-mimalloc; @@ -308,14 +312,14 @@ target = "efc"; variant = "shuttler"; }; - inherit sphinxcontrib-wavedrom latex-artiq-manual; + inherit latex-artiq-manual; artiq-manual-html = pkgs.stdenvNoCC.mkDerivation rec { name = "artiq-manual-html-${version}"; version = artiqVersion; src = self; - buildInputs = [ - pkgs.python3Packages.sphinx pkgs.python3Packages.sphinx_rtd_theme - pkgs.python3Packages.sphinx-argparse sphinxcontrib-wavedrom + buildInputs = with pkgs.python3Packages; [ + sphinx sphinx_rtd_theme + sphinx-argparse sphinxcontrib-wavedrom ]; buildPhase = '' export VERSIONEER_OVERRIDE=${artiqVersion} @@ -333,11 +337,10 @@ name = "artiq-manual-pdf-${version}"; version = artiqVersion; src = self; - buildInputs = [ - pkgs.python3Packages.sphinx pkgs.python3Packages.sphinx_rtd_theme - pkgs.python3Packages.sphinx-argparse sphinxcontrib-wavedrom - latex-artiq-manual - ]; + buildInputs = with pkgs.python3Packages; [ + sphinx sphinx_rtd_theme + sphinx-argparse sphinxcontrib-wavedrom + ] ++ [ latex-artiq-manual ]; buildPhase = '' export VERSIONEER_OVERRIDE=${artiq.version} export SOURCE_DATE_EPOCH=${builtins.toString self.sourceInfo.lastModified} @@ -355,13 +358,14 @@ packages.x86_64-w64-mingw32 = import 
./windows { inherit sipyco nac3 artiq-comtools; artiq = self; }; - inherit makeArtiqBoardPackage; + inherit makeArtiqBoardPackage openocd-bscanspi-f; defaultPackage.x86_64-linux = packages.x86_64-linux.python3-mimalloc.withPackages(ps: [ packages.x86_64-linux.artiq ]); # Main development shell with everything you need to develop ARTIQ on Linux. - # ARTIQ itself is not included in the environment, you can make Python use the current sources using e.g. - # export PYTHONPATH=`pwd`:$PYTHONPATH + # The current copy of the ARTIQ sources is added to PYTHONPATH so changes can be tested instantly. + # Additionally, executable wrappers that import the current ARTIQ sources for the ARTIQ frontends + # are added to PATH. devShells.x86_64-linux.default = pkgs.mkShell { name = "artiq-dev-shell"; buildInputs = [ @@ -370,16 +374,19 @@ pkgs.llvmPackages_14.clang-unwrapped pkgs.llvm_14 pkgs.lld_14 + pkgs.git + artiq-frontend-dev-wrappers # use the vivado-env command to enter a FHS shell that lets you run the Vivado installer packages.x86_64-linux.vivadoEnv packages.x86_64-linux.vivado packages.x86_64-linux.openocd-bscanspi pkgs.python3Packages.sphinx pkgs.python3Packages.sphinx_rtd_theme - pkgs.python3Packages.sphinx-argparse sphinxcontrib-wavedrom latex-artiq-manual + pkgs.python3Packages.sphinx-argparse pkgs.python3Packages.sphinxcontrib-wavedrom latex-artiq-manual ]; shellHook = '' export QT_PLUGIN_PATH=${pkgs.qt5.qtbase}/${pkgs.qt5.qtbase.dev.qtPluginPrefix}:${pkgs.qt5.qtsvg.bin}/${pkgs.qt5.qtbase.dev.qtPluginPrefix} export QML2_IMPORT_PATH=${pkgs.qt5.qtbase}/${pkgs.qt5.qtbase.dev.qtQmlPrefix} + export PYTHONPATH=`git rev-parse --show-toplevel`:$PYTHONPATH ''; }; diff --git a/setup.py b/setup.py index 92bc68b76..8a2670938 100755 --- a/setup.py +++ b/setup.py @@ -25,6 +25,7 @@ console_scripts = [ "artiq_route = artiq.frontend.artiq_route:main", "artiq_run = artiq.frontend.artiq_run:main", "artiq_flash = artiq.frontend.artiq_flash:main", + "aqctl_coreanalyzer_proxy = artiq.frontend.aqctl_coreanalyzer_proxy:main", "aqctl_corelog = artiq.frontend.aqctl_corelog:main", "aqctl_moninj_proxy = artiq.frontend.aqctl_moninj_proxy:main", "afws_client = artiq.frontend.afws_client:main",