forked from M-Labs/artiq

Compare commits


71 Commits
nac3 ... master

Author SHA1 Message Date
Simon Renblad 76fba538b1 artiq_ddb_template: fixed missing separator 2023-12-18 13:23:39 +08:00
Sebastien Bourdeauducq 8dd8cfa6b0 master: implement devarg_override 2023-12-18 12:11:40 +08:00
Sebastien Bourdeauducq 5df0721811 dashboard,client: add device argument overrides to expid 2023-12-17 19:43:41 +08:00
Sebastien Bourdeauducq 6326051052 flake: forward cmdline arguments in devshell wrappers 2023-12-17 19:42:56 +08:00
Sebastien Bourdeauducq 44a95b5dda dashboard: add repository revision clear button 2023-12-17 16:37:02 +08:00
Sebastien Bourdeauducq 645b9b8c5f flake: add executable wrappers for frontends to devshell 2023-12-17 13:41:49 +08:00
Sebastien Bourdeauducq 858f0479ba aqctl_coreanalyzer_proxy: permissions and shebang 2023-12-17 13:27:38 +08:00
Sebastien Bourdeauducq 133b26b6ce flake: add ARTIQ sources to PYTHONPATH in devshell 2023-12-17 13:05:16 +08:00
Sebastien Bourdeauducq d96213dbbc flake: update dependencies 2023-12-17 12:55:36 +08:00
Sebastien Bourdeauducq 413d33c3d1 core: document analyzer proxy options 2023-12-13 14:29:33 +08:00
Sebastien Bourdeauducq c2b53ecb43 core: add option to trigger analyzer proxy at run end 2023-12-13 14:27:48 +08:00
Sebastien Bourdeauducq ede0b37c6e devices: introduce notify_run_end API 2023-12-13 14:27:04 +08:00
Sebastien Bourdeauducq 795c4372fa DeviceManager: fix close exception error message 2023-12-13 14:06:53 +08:00
Sebastien Bourdeauducq 402a5d3376 core: connect lazily to analyzer proxy
Otherwise artiq_compile and other uses of Core that do not access hardware/network may fail.
2023-12-13 13:46:47 +08:00
Sebastien Bourdeauducq 85850ad9e8 wavesynth: remove 2023-12-13 13:36:21 +08:00
Sebastien Bourdeauducq 7a863b4f5e core: add trigger_analyzer_proxy API 2023-12-13 13:08:54 +08:00
Sebastien Bourdeauducq a26cee6ca7 coreanalyzer_proxy: cleanups/renames 2023-12-13 13:07:35 +08:00
Sebastien Bourdeauducq be08862606 logo: text to path 2023-12-08 19:34:47 +08:00
Sebastien Bourdeauducq 05a9422e67 aqctl_coreanalyzer_proxy: cleanup 2023-12-08 18:56:10 +08:00
Simon Renblad b09a39c82e add aqctl_coreanalyzer_proxy 2023-12-08 18:55:07 +08:00
mwojcik 49267671f9 core: fix precompile 2023-12-04 12:10:11 +08:00
Sebastien Bourdeauducq 8ca75a3fb9 firmware: deal with rust nonsense
Fixes
"error: edition 2021 is unstable and only available with -Z unstable-options.
error: could not compile `alloc`"
2023-12-03 11:20:18 +08:00
Florian Agbuya 8381b34a79 flake: add new booktabs dependency for artiq-manual-pdf 2023-12-03 11:18:59 +08:00
Sebastien Bourdeauducq d458fc27bf switch to new nixpkgs release 2023-12-03 11:18:25 +08:00
mwojcik 9f4b8db2de repeater: fix setting tsc 2023-12-01 16:43:48 +08:00
Florian Agbuya 1108cebd75 flake: fix ncurses on vivado
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-28 17:36:36 +08:00
Florian Agbuya cf7cbd0c3b flake: update nixpkgs
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-28 17:36:36 +08:00
mwojcik 1a28069aa2 support for pre-compiling subkernels 2023-11-23 16:49:02 +08:00
Sebastien Bourdeauducq 56418e342e take into account VERSIONEER_REV in artiq._version.get_rev 2023-11-22 20:51:02 +08:00
Sebastien Bourdeauducq 77c6553725 always provide artiq._version.get_rev 2023-11-14 14:14:47 +08:00
Sebastien Bourdeauducq e81e8f28cf gateware: merge kasli_generic into kasli. Closes #2279 2023-11-14 14:01:17 +08:00
mwojcik de10e584f6 support .tar flashed idle/startup kernels 2023-11-13 18:14:35 +08:00
Florian Agbuya 875666f3ec doc: add section on new nix flakes config (closes #2232)
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-10 16:47:56 +08:00
Sebastien Bourdeauducq 3ad3fac828 update ARTIQ-8 release notes 2023-11-08 11:17:17 +08:00
Simon Renblad 49afa116b3 RELEASE_NOTES: artiq_ddb_template needs gateware 2023-11-08 10:51:39 +08:00
Simon Renblad 363afb5fc9 artiq_ddb_template: add support for user LEDs
Add support for additional user LEDs.
2023-11-08 10:51:39 +08:00
Simon Renblad e7af219505 kasli_generic: add support for user LEDs
Add additional LED RTIO devices.
2023-11-08 10:51:39 +08:00
linuswck ec2b86b08d kc705: fix gtx clock path during init 2023-11-07 18:36:48 +08:00
linuswck 8f7d138dbd gtx: Always enable IBUFDS_GTE2, add clk_path_ready
- Set clk_path_ready to High to start Initialization of GTP TX and RX
2023-11-07 18:36:48 +08:00
Sebastien Bourdeauducq bbe6ff8cac flake: update dependencies 2023-11-07 18:36:11 +08:00
Sebastien Bourdeauducq c0a6252e77 afws_client: improve compatibility with older versions of prettytable. Closes #2264 2023-11-07 14:06:31 +08:00
mwojcik 6640bf0e82 drtioaux/subkernel/ddma: introduce proper errors, more robust 2023-11-07 13:42:04 +08:00
mwojcik b3c0d084d4 drtio: better control state of bigger payloads 2023-11-07 13:42:04 +08:00
linuswck bb0b8a6c00 kasli: Correct the GTP TX clock path during init
- TXOUT must be fed back into TXUSRCLK during initialization
- Now, MMCM Clock Input is switched before GTP TX Init is started instead of after GTP TX Init is done
- Reset in Sys Clock domain is kept asserted when clock is switched and GTP TX Init is NOT done
2023-11-07 13:40:32 +08:00
Sebastien Bourdeauducq ce80bf5717 flake: update dependencies 2023-11-07 13:40:17 +08:00
Florian Agbuya 378dd0e5ca flake: fix and upgrade wavedrom (closes #2266)
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-10-30 08:09:00 +01:00
jfniedermeyer 9c68451cae Add hotkeys to organize experiments in dashboard
Signed-off-by: jfniedermeyer <justin.niedermeyer@colorado.edu>
2023-10-27 21:47:30 +02:00
linuswck 93c9d8bcdf artiq_ddb_template:set default Shuttler drtio_dest
- remove default Shuttler "drtio_destination" value in jsonschema
- set the default Shuttler "drtio_destination" value according to
    board "target" and "hw_rev"
2023-10-27 21:46:02 +02:00
mwojcik e480bbe8d8 artiq_ddb_template: move satellite_cpu_target to core 2023-10-27 21:45:12 +02:00
mwojcik b168f0bb4b subkernel: separate tags and data 2023-10-17 12:18:03 +02:00
Sebastien Bourdeauducq 6705c9fbfb flake: update dependencies 2023-10-17 15:37:06 +08:00
mwojcik 5f445f6b92 ad53xx: fix `load()` references in documentation 2023-10-16 13:54:38 +08:00
occheung 363f7327f1 io_expander: initialize before service 2023-10-15 07:45:20 +08:00
Sebastien Bourdeauducq f7abc156cb flake: update dependencies 2023-10-11 16:41:34 +08:00
linuswck de41bd6655 eem_7series: pass through kwargs for shuttler 2023-10-11 12:15:06 +08:00
Simon Renblad 96941d7c04 big_number: fix metadata scaling, add unit label 2023-10-09 15:35:14 +08:00
mwojcik f3c79e71e1 firmware: merge runtime and satman linker scripts 2023-10-09 15:33:29 +08:00
Simon Renblad 333b81f789 set_argument_value warning in browser 2023-10-09 10:38:17 +08:00
Sebastien Bourdeauducq d070826911 flake: update dependencies 2023-10-09 10:13:58 +08:00
Sebastien Bourdeauducq 9c90f923d2 test: check return value of subprocesses in test_compile 2023-10-09 10:07:04 +08:00
Sebastien Bourdeauducq e23e4d39d7 artiq_compile: ignore subkernel_arg_types 2023-10-09 10:03:43 +08:00
David Nadlinger 08eea09d44 compiler: Catch escaping numpy.{array, full, transpose}() results
Function calls in general can still be used to hide escaping
allocations from the compiler (issue #1497), but these calls in
particular always allocate, so we can easily and accurately handle
them.
2023-10-09 09:00:26 +08:00
mwojcik 7ab52af603 docs: subkernel support 2023-10-08 17:12:06 +08:00
mwojcik 973fd88b27 core: compile and upload subkernels 2023-10-08 17:11:51 +08:00
mwojcik 8d7194941e tests: add lit tests for subkernels 2023-10-08 17:11:51 +08:00
mwojcik 0a750c77e8 compiler: support subkernels 2023-10-08 17:11:51 +08:00
mwojcik 1a0fc317df satman: support subkernels 2023-10-08 17:11:32 +08:00
mwojcik e05be2f8e4 runtime: support subkernels 2023-10-08 17:11:32 +08:00
mwojcik 6f4b8c641e drtioaux_proto: use better payload names 2023-10-08 17:11:32 +08:00
mwojcik b42816582e ksupport: support subkernels 2023-10-08 17:11:32 +08:00
Hartmann Michael (IFAG PSS SIS SCE QSE) 76f1318bc0 doc: Extend documentation
Extend the paragraph "Pitfalls" in the documentation of "Compiler" by
problems caused by returning values from the stack.
2023-10-07 07:20:33 +08:00
108 changed files with 4369 additions and 1474 deletions

View File

@@ -8,36 +8,79 @@ ARTIQ-8 (Unreleased)
 Highlights:
 
-* Hardware support:
+* New hardware support:
+  - Support for Shuttler, a 16-channel 125MSPS DAC card intended for ion transport.
+    Waveform generator and user API are similar to the NIST PDQ.
   - Implemented Phaser-servo. This requires recent gateware on Phaser.
-  - Implemented Phaser-MIQRO support. This requires the Phaser MIQRO gateware
-    variant.
+  - Almazny v1.2 with finer RF switch control.
+  - Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking.
+  - More user LEDs are exposed to RTIO on Kasli.
+  - Implemented Phaser-MIQRO support. This requires the proprietary Phaser MIQRO gateware
+    variant from QUARTIQ.
   - Sampler: fixed ADC MU to Volt conversion factor for Sampler v2.2+.
     For earlier hardware versions, specify the hardware version in the device
     database file (e.g. ``"hw_rev": "v2.1"``) to use the correct conversion factor.
-  - Almazny v1.2. It is incompatible with the legacy versions and is the default. To use legacy
-    versions, specify ``almazny_hw_rev`` in the JSON description.
-  - Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking.
+* Support for distributed DMA, where DMA is run directly on satellites for corresponding
+  RTIO events, increasing bandwidth in scenarios with heavy satellite usage.
+* Support for subkernels, where kernels are run on satellite device CPUs to offload some
+  of the processing and RTIO operations.
 * CPU (on softcore platforms) and AXI bus (on Zynq) are now clocked synchronously with the RTIO
   clock, to facilitate implementation of local processing on DRTIO satellites, and to slightly
   reduce RTIO latency.
-* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to
-  support legacy installations, but may be removed in a future release.
-* Added channel names to RTIO errors.
-* Full Python 3.10 support.
-* Python's built-in types (such as `float`, or `List[...]`) can now be used in type annotations on
-  kernel functions.
-* Distributed DMA is now supported, allowing DMA to be run directly on satellites for corresponding
-  RTIO events, increasing bandwidth in scenarios with heavy satellite usage.
-* Applet Request Interfaces have been implemented, enabling applets to directly modify datasets
-  and temporarily set arguments in the dashboard.
-* EntryArea widget has been implemented, allowing argument entry widgets to be used in applets.
-* Dashboard:
+* Support for DRTIO-over-EEM, used with Shuttler.
+* Added channel names to RTIO error messages.
+* GUI:
+  - Implemented Applet Request Interfaces which allow applets to modify datasets and set the
+    current values of widgets in the dashboard's experiment windows.
+  - Implemented a new EntryArea widget which allows argument entry widgets to be used in applets.
   - The "Close all applets" command (shortcut: Ctrl-Alt-W) now ignores docked applets,
     making it a convenient way to clean up after exploratory work without destroying a
     carefully arranged default workspace.
-* Persistent datasets are now stored in a LMDB database for improved performance. PYON databases can
-  be converted with the script below.
+  - Hotkeys now organize experiment windows in the order they were last interacted with:
+    + CTRL+SHIFT+T tiles experiment windows
+    + CTRL+SHIFT+C cascades experiment windows
+* Persistent datasets are now stored in a LMDB database for improved performance.
+* Python's built-in types (such as ``float``, or ``List[...]``) can now be used in type annotations on
+  kernel functions.
+* Full Python 3.10 support.
+* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to
+  support legacy installations, but may be removed in a future release.
+
+Breaking changes:
+
+* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control
+  operations, which includes dataset operations. It can be ignored if not needed. For an example usage,
+  refer to the ``big_number.py`` applet.
+* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Derived applets
+  should change the function signature as below:
+
+::
+
+  # SimpleApplet
+  def data_changed(self, value, metadata, persist, mods)
+  # SimpleApplet (old version)
+  def data_changed(self, data, mods)
+  # TitleApplet
+  def data_changed(self, value, metadata, persist, mods, title)
+  # TitleApplet (old version)
+  def data_changed(self, data, mods, title)
+
+  Accesses to the data argument should be replaced as below:
+
+::
+
+  data[key][0] ==> persist[key]
+  data[key][1] ==> value[key]
+
+* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``.
+  Parameters after and including ``scale`` in both constructors are now keyword-only.
+  Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage.
+* Almazny v1.2 is incompatible with the legacy versions and is the default.
+  To use legacy versions, specify ``almazny_hw_rev`` in the JSON description.
+* kasli_generic.py has been merged into kasli.py, and the demonstration designs without JSON descriptions
+  have been removed. The base classes remain present in kasli.py to support third-party flows without
+  JSON descriptions.
+* Legacy PYON databases should be converted to LMDB with the script below:
 
 ::
@@ -51,35 +94,7 @@ Highlights:
   txn.put(key.encode(), pyon.encode((value, {})).encode())
   new.close()
 
-Breaking changes:
-
-* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control
-  operations, which includes dataset operations. It can be ignored if not needed. For an example usage,
-  refer to the ``big_number.py`` applet.
-* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Wrapped widgets
-  should refactor the function signature as seen below:
-
-::
-
-  # SimpleApplet
-  def data_changed(self, value, metadata, persist, mods)
-  # SimpleApplet (old version)
-  def data_changed(self, data, mods)
-  # TitleApplet
-  def data_changed(self, value, metadata, persist, mods, title)
-  # TitleApplet (old version)
-  def data_changed(self, data, mods, title)
-
-  Old syntax should be replaced with the form shown on the right.
-
-::
-
-  data[key][0] ==> persist[key]
-  data[key][1] ==> value[key]
-  data[key][2] ==> metadata[key]
-
-* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``.
-  Parameters after and including ``scale`` in both constructors are now keyword-only.
-  Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage.
+* ``artiq.wavesynth`` has been removed.
 
 ARTIQ-7
 -------
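Only the last two lines of the PYON-to-LMDB conversion script are visible as context in this hunk. A plausible full version, written as a minimal sketch that assumes the stock ``dataset_db.pyon`` / ``dataset_db.mdb`` file names (the exact script in the release notes may differ slightly):

::

  import lmdb
  from sipyco import pyon

  # read the legacy PYON dataset database
  old = pyon.load_file("dataset_db.pyon")
  # create the LMDB database used by ARTIQ-8 (a single file, not a directory)
  new = lmdb.open("dataset_db.mdb", subdir=False, map_size=2**30)
  with new.begin(write=True) as txn:
      for key, value in old.items():
          # ARTIQ-8 stores (value, metadata) tuples; legacy entries get empty metadata
          txn.put(key.encode(), pyon.encode((value, {})).encode())
  new.close()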

View File

@@ -1,4 +1,7 @@
 import os
 
+def get_rev():
+    return os.getenv("VERSIONEER_REV", default="unknown")
+
 def get_version():
     return os.getenv("VERSIONEER_OVERRIDE", default="8.0+unknown.beta")
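The new ``get_rev`` hook mirrors ``get_version``: a build environment (such as the Nix flake) can pin the reported source revision through an environment variable. A minimal sketch of its behaviour, using a revision hash from the commit list above purely as an illustration:

::

  import os
  os.environ["VERSIONEER_REV"] = "56418e342e"  # illustrative revision
  from artiq._version import get_rev, get_version
  print(get_rev())      # "56418e342e"; "unknown" if the variable is unset
  print(get_version())  # "8.0+unknown.beta" unless VERSIONEER_OVERRIDE is set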

View File

@ -2,6 +2,8 @@
from PyQt5 import QtWidgets, QtCore, QtGui from PyQt5 import QtWidgets, QtCore, QtGui
from artiq.applets.simple import SimpleApplet from artiq.applets.simple import SimpleApplet
from artiq.tools import scale_from_metadata
from artiq.gui.tools import LayoutWidget
class QResponsiveLCDNumber(QtWidgets.QLCDNumber): class QResponsiveLCDNumber(QtWidgets.QLCDNumber):
@ -21,29 +23,41 @@ class QCancellableLineEdit(QtWidgets.QLineEdit):
super().keyPressEvent(event) super().keyPressEvent(event)
class NumberWidget(QtWidgets.QStackedWidget): class NumberWidget(LayoutWidget):
def __init__(self, args, req): def __init__(self, args, req):
QtWidgets.QStackedWidget.__init__(self) LayoutWidget.__init__(self)
self.dataset_name = args.dataset self.dataset_name = args.dataset
self.req = req self.req = req
self.metadata = dict()
self.number_area = QtWidgets.QStackedWidget()
self.addWidget(self.number_area, 0, 0)
self.unit_area = QtWidgets.QLabel()
self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop)
self.addWidget(self.unit_area, 0, 1)
self.lcd_widget = QResponsiveLCDNumber() self.lcd_widget = QResponsiveLCDNumber()
self.lcd_widget.setDigitCount(args.digit_count) self.lcd_widget.setDigitCount(args.digit_count)
self.lcd_widget.doubleClicked.connect(self.start_edit) self.lcd_widget.doubleClicked.connect(self.start_edit)
self.addWidget(self.lcd_widget) self.number_area.addWidget(self.lcd_widget)
self.edit_widget = QCancellableLineEdit() self.edit_widget = QCancellableLineEdit()
self.edit_widget.setValidator(QtGui.QDoubleValidator()) self.edit_widget.setValidator(QtGui.QDoubleValidator())
self.edit_widget.setAlignment(QtCore.Qt.AlignRight) self.edit_widget.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter)
self.edit_widget.editCancelled.connect(self.cancel_edit) self.edit_widget.editCancelled.connect(self.cancel_edit)
self.edit_widget.returnPressed.connect(self.confirm_edit) self.edit_widget.returnPressed.connect(self.confirm_edit)
self.addWidget(self.edit_widget) self.number_area.addWidget(self.edit_widget)
font = QtGui.QFont() font = QtGui.QFont()
font.setPointSize(60) font.setPointSize(60)
self.edit_widget.setFont(font) self.edit_widget.setFont(font)
self.setCurrentWidget(self.lcd_widget) unit_font = QtGui.QFont()
unit_font.setPointSize(20)
self.unit_area.setFont(unit_font)
self.number_area.setCurrentWidget(self.lcd_widget)
def start_edit(self): def start_edit(self):
# QLCDNumber value property contains the value of zero # QLCDNumber value property contains the value of zero
@ -51,22 +65,32 @@ class NumberWidget(QtWidgets.QStackedWidget):
self.edit_widget.setText(str(self.lcd_widget.value())) self.edit_widget.setText(str(self.lcd_widget.value()))
self.edit_widget.selectAll() self.edit_widget.selectAll()
self.edit_widget.setFocus() self.edit_widget.setFocus()
self.setCurrentWidget(self.edit_widget) self.number_area.setCurrentWidget(self.edit_widget)
def confirm_edit(self): def confirm_edit(self):
value = float(self.edit_widget.text()) scale = scale_from_metadata(self.metadata)
self.req.set_dataset(self.dataset_name, value) val = float(self.edit_widget.text())
self.setCurrentWidget(self.lcd_widget) val *= scale
self.req.set_dataset(self.dataset_name, val, **self.metadata)
self.number_area.setCurrentWidget(self.lcd_widget)
def cancel_edit(self): def cancel_edit(self):
self.setCurrentWidget(self.lcd_widget) self.number_area.setCurrentWidget(self.lcd_widget)
def data_changed(self, value, metadata, persist, mods): def data_changed(self, value, metadata, persist, mods):
try: try:
n = float(value[self.dataset_name]) self.metadata = metadata[self.dataset_name]
# This applet will degenerate other scalar types to native float on edit
# Use the dashboard ChangeEditDialog for consistent type casting
val = float(value[self.dataset_name])
scale = scale_from_metadata(self.metadata)
val /= scale
except (KeyError, ValueError, TypeError): except (KeyError, ValueError, TypeError):
n = "---" val = "---"
self.lcd_widget.display(n)
unit = self.metadata.get("unit", "")
self.unit_area.setText(unit)
self.lcd_widget.display(val)
def main(): def main():
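With these changes the applet keeps the dataset's metadata, divides the stored (base-unit) value by the unit scale for display, multiplies it back on edit, and shows the unit label next to the LCD. A minimal sketch of the producing side, assuming the ARTIQ-8 dataset-metadata keyword arguments of ``set_dataset``:

::

  from artiq.experiment import EnvExperiment

  class SetScaledDataset(EnvExperiment):
      def build(self):
          pass

      def run(self):
          # stored in base SI units; big_number divides by the 1e6 scale derived
          # from unit="MHz" (scale_from_metadata) and shows "MHz" as the label
          self.set_dataset("detuning", 2.5e6, unit="MHz",
                           broadcast=True, persist=True)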

View File

@@ -508,5 +508,9 @@ class ExperimentsArea(QtWidgets.QMdiArea):
         self.open_experiments.append(dock)
         return dock
 
+    def set_argument_value(self, expurl, name, value):
+        logger.warning("Unable to set argument '%s', dropping change. "
+                       "'set_argument_value' not supported in browser.", name)
+
     def on_dock_closed(self, dock):
         self.open_experiments.remove(dock)

View File

@@ -71,7 +71,6 @@ def build_artiq_soc(soc, argdict):
     if not soc.config["DRTIO_ROLE"] == "satellite":
         builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime"))
     else:
-        # Assume DRTIO satellite.
         builder.add_software_package("satman", os.path.join(firmware_dir, "satman"))
     try:
         builder.build()

View File

@ -21,13 +21,19 @@ class scoped(object):
set of variables resolved as globals set of variables resolved as globals
""" """
class remote(object):
"""
:ivar remote_fn: (bool) whether function is ran on a remote device,
meaning arguments are received remotely and return is sent remotely
"""
# Typed versions of untyped nodes # Typed versions of untyped nodes
class argT(ast.arg, commontyped): class argT(ast.arg, commontyped):
pass pass
class ClassDefT(ast.ClassDef): class ClassDefT(ast.ClassDef):
_types = ("constructor_type",) _types = ("constructor_type",)
class FunctionDefT(ast.FunctionDef, scoped): class FunctionDefT(ast.FunctionDef, scoped, remote):
_types = ("signature_type",) _types = ("signature_type",)
class QuotedFunctionDefT(FunctionDefT): class QuotedFunctionDefT(FunctionDefT):
""" """
@ -58,7 +64,7 @@ class BinOpT(ast.BinOp, commontyped):
pass pass
class BoolOpT(ast.BoolOp, commontyped): class BoolOpT(ast.BoolOp, commontyped):
pass pass
class CallT(ast.Call, commontyped): class CallT(ast.Call, commontyped, remote):
""" """
:ivar iodelay: (:class:`iodelay.Expr`) :ivar iodelay: (:class:`iodelay.Expr`)
:ivar arg_exprs: (dict of str to :class:`iodelay.Expr`) :ivar arg_exprs: (dict of str to :class:`iodelay.Expr`)

View File

@ -38,6 +38,9 @@ class TInt(types.TMono):
def one(): def one():
return 1 return 1
def TInt8():
return TInt(types.TValue(8))
def TInt32(): def TInt32():
return TInt(types.TValue(32)) return TInt(types.TValue(32))
@ -244,6 +247,12 @@ def fn_at_mu():
def fn_rtio_log(): def fn_rtio_log():
return types.TBuiltinFunction("rtio_log") return types.TBuiltinFunction("rtio_log")
def fn_subkernel_await():
return types.TBuiltinFunction("subkernel_await")
def fn_subkernel_preload():
return types.TBuiltinFunction("subkernel_preload")
# Accessors # Accessors
def is_none(typ): def is_none(typ):
@ -326,7 +335,7 @@ def get_iterable_elt(typ):
# n-dimensional arrays, rather than the n-1 dimensional result of iterating over # n-dimensional arrays, rather than the n-1 dimensional result of iterating over
# the first axis, which makes the name a bit misleading. # the first axis, which makes the name a bit misleading.
if is_str(typ) or is_bytes(typ) or is_bytearray(typ): if is_str(typ) or is_bytes(typ) or is_bytearray(typ):
return TInt(types.TValue(8)) return TInt8()
elif types._is_pointer(typ) or is_iterable(typ): elif types._is_pointer(typ) or is_iterable(typ):
return typ.find()["elt"].find() return typ.find()["elt"].find()
else: else:
@ -342,5 +351,5 @@ def is_allocated(typ):
is_float(typ) or is_range(typ) or is_float(typ) or is_range(typ) or
types._is_pointer(typ) or types.is_function(typ) or types._is_pointer(typ) or types.is_function(typ) or
types.is_external_function(typ) or types.is_rpc(typ) or types.is_external_function(typ) or types.is_rpc(typ) or
types.is_method(typ) or types.is_tuple(typ) or types.is_subkernel(typ) or types.is_method(typ) or
types.is_value(typ)) types.is_tuple(typ) or types.is_value(typ))

View File

@ -74,7 +74,9 @@ class EmbeddingMap:
"CacheError", "CacheError",
"SPIError", "SPIError",
"0:ZeroDivisionError", "0:ZeroDivisionError",
"0:IndexError"]) "0:IndexError",
"UnwrapNoneError",
"SubkernelError"])
def preallocate_runtime_exception_names(self, names): def preallocate_runtime_exception_names(self, names):
for i, name in enumerate(names): for i, name in enumerate(names):
@ -183,7 +185,22 @@ class EmbeddingMap:
obj_typ, _ = self.type_map[type(obj_ref)] obj_typ, _ = self.type_map[type(obj_ref)]
yield obj_id, obj_ref, obj_typ yield obj_id, obj_ref, obj_typ
def subkernels(self):
subkernels = {}
for k, v in self.object_forward_map.items():
if hasattr(v, "artiq_embedded"):
if v.artiq_embedded.destination is not None:
subkernels[k] = v
return subkernels
def has_rpc(self): def has_rpc(self):
return any(filter(
lambda x: (inspect.isfunction(x) or inspect.ismethod(x)) and \
(not hasattr(x, "artiq_embedded") or x.artiq_embedded.destination is None),
self.object_forward_map.values()
))
def has_rpc_or_subkernel(self):
return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x), return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x),
self.object_forward_map.values())) self.object_forward_map.values()))
@ -469,7 +486,7 @@ class ASTSynthesizer:
return asttyped.QuoteT(value=value, type=instance_type, return asttyped.QuoteT(value=value, type=instance_type,
loc=loc) loc=loc)
def call(self, callee, args, kwargs, callback=None): def call(self, callee, args, kwargs, callback=None, remote_fn=False):
""" """
Construct an AST fragment calling a function specified by Construct an AST fragment calling a function specified by
an AST node `function_node`, with given arguments. an AST node `function_node`, with given arguments.
@ -513,7 +530,7 @@ class ASTSynthesizer:
starargs=None, kwargs=None, starargs=None, kwargs=None,
type=types.TVar(), iodelay=None, arg_exprs={}, type=types.TVar(), iodelay=None, arg_exprs={},
begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None, begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None,
loc=callee_node.loc.join(end_loc)) loc=callee_node.loc.join(end_loc), remote_fn=remote_fn)
if callback is not None: if callback is not None:
node = asttyped.CallT( node = asttyped.CallT(
@ -548,7 +565,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
arg=node.arg, annotation=None, arg=node.arg, annotation=None,
arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc) arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc)
def visit_quoted_function(self, node, function): def visit_quoted_function(self, node, function, remote_fn):
extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine) extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
extractor.visit(node) extractor.visit(node)
@ -569,7 +586,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
body=node.body, decorator_list=node.decorator_list, body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc, keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs, arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc) loc=node.loc, remote_fn=remote_fn)
try: try:
self.env_stack.append(node.typing_env) self.env_stack.append(node.typing_env)
@ -777,7 +794,7 @@ class TypedtreeHasher(algorithm.Visitor):
return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields)) return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields))
class Stitcher: class Stitcher:
def __init__(self, core, dmgr, engine=None, print_as_rpc=True): def __init__(self, core, dmgr, engine=None, print_as_rpc=True, destination=0, subkernel_arg_types=[]):
self.core = core self.core = core
self.dmgr = dmgr self.dmgr = dmgr
if engine is None: if engine is None:
@ -803,11 +820,19 @@ class Stitcher:
self.value_map = defaultdict(lambda: []) self.value_map = defaultdict(lambda: [])
self.definitely_changed = False self.definitely_changed = False
self.destination = destination
self.first_call = True
# for non-annotated subkernels:
# main kernel inferencer output with types of arguments
self.subkernel_arg_types = subkernel_arg_types
def stitch_call(self, function, args, kwargs, callback=None): def stitch_call(self, function, args, kwargs, callback=None):
# We synthesize source code for the initial call so that # We synthesize source code for the initial call so that
# diagnostics would have something meaningful to display to the user. # diagnostics would have something meaningful to display to the user.
synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function)) synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function))
call_node = synthesizer.call(function, args, kwargs, callback) # first call of a subkernel will get its arguments from remote (DRTIO)
remote_fn = self.destination != 0
call_node = synthesizer.call(function, args, kwargs, callback, remote_fn=remote_fn)
synthesizer.finalize() synthesizer.finalize()
self.typedtree.append(call_node) self.typedtree.append(call_node)
@ -919,6 +944,10 @@ class Stitcher:
return [diagnostic.Diagnostic("note", return [diagnostic.Diagnostic("note",
"in kernel function here", {}, "in kernel function here", {},
call_loc)] call_loc)]
elif fn_kind == 'subkernel':
return [diagnostic.Diagnostic("note",
"in subkernel call here", {},
call_loc)]
else: else:
assert False assert False
else: else:
@ -938,7 +967,7 @@ class Stitcher:
self._function_loc(function), self._function_loc(function),
notes=self._call_site_note(loc, fn_kind)) notes=self._call_site_note(loc, fn_kind))
self.engine.process(diag) self.engine.process(diag)
elif fn_kind == 'rpc' and param.default is not inspect.Parameter.empty: elif fn_kind == 'rpc' or fn_kind == 'subkernel' and param.default is not inspect.Parameter.empty:
notes = [] notes = []
notes.append(diagnostic.Diagnostic("note", notes.append(diagnostic.Diagnostic("note",
"expanded from here while trying to infer a type for an" "expanded from here while trying to infer a type for an"
@ -957,11 +986,18 @@ class Stitcher:
Inferencer(engine=self.engine).visit(ast) Inferencer(engine=self.engine).visit(ast)
IntMonomorphizer(engine=self.engine).visit(ast) IntMonomorphizer(engine=self.engine).visit(ast)
return ast.type return ast.type
else: elif fn_kind == 'kernel' and self.first_call and self.destination != 0:
# Let the rest of the program decide. # subkernels do not have access to the main kernel code to infer
return types.TVar() # arg types - so these are cached and passed onto subkernel
# compilation, to avoid having to annotate them fully
for name, typ in self.subkernel_arg_types:
if param.name == name:
return typ
def _quote_embedded_function(self, function, flags): # Let the rest of the program decide.
return types.TVar()
def _quote_embedded_function(self, function, flags, remote_fn=False):
# we are now parsing new functions... definitely changed the type # we are now parsing new functions... definitely changed the type
self.definitely_changed = True self.definitely_changed = True
@ -1060,7 +1096,7 @@ class Stitcher:
engine=self.engine, prelude=self.prelude, engine=self.engine, prelude=self.prelude,
globals=self.globals, host_environment=host_environment, globals=self.globals, host_environment=host_environment,
quote=self._quote) quote=self._quote)
function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function) function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function, remote_fn)
function_node.flags = flags function_node.flags = flags
# Add it into our typedtree so that it gets inferenced and codegen'd. # Add it into our typedtree so that it gets inferenced and codegen'd.
@ -1174,7 +1210,6 @@ class Stitcher:
signature = inspect.signature(function) signature = inspect.signature(function)
arg_types = OrderedDict() arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values(): for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD: if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
@ -1212,6 +1247,40 @@ class Stitcher:
self.functions[function] = function_type self.functions[function] = function_type
return function_type return function_type
def _quote_subkernel(self, function, loc):
if isinstance(function, SpecializedFunction):
host_function = function.host_function
else:
host_function = function
ret_type = builtins.TNone()
signature = inspect.signature(host_function)
if signature.return_annotation is not inspect.Signature.empty:
ret_type = self._extract_annot(host_function, signature.return_annotation,
"return type", loc, fn_kind='subkernel')
arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error",
"subkernels must only use positional arguments; '{argument}' isn't",
{"argument": param.name},
self._function_loc(function),
notes=self._call_site_note(loc, fn_kind='subkernel'))
self.engine.process(diag)
arg_type = self._type_of_param(function, loc, param, fn_kind='subkernel')
if param.default is inspect.Parameter.empty:
arg_types[param.name] = arg_type
else:
optarg_types[param.name] = arg_type
function_type = types.TSubkernel(arg_types, optarg_types, ret_type,
sid=self.embedding_map.store_object(host_function),
destination=host_function.artiq_embedded.destination)
self.functions[function] = function_type
return function_type
def _quote_rpc(self, function, loc): def _quote_rpc(self, function, loc):
if isinstance(function, SpecializedFunction): if isinstance(function, SpecializedFunction):
host_function = function.host_function host_function = function.host_function
@ -1271,8 +1340,18 @@ class Stitcher:
(host_function.artiq_embedded.core_name is None and (host_function.artiq_embedded.core_name is None and
host_function.artiq_embedded.portable is False and host_function.artiq_embedded.portable is False and
host_function.artiq_embedded.syscall is None and host_function.artiq_embedded.syscall is None and
host_function.artiq_embedded.destination is None and
host_function.artiq_embedded.forbidden is False): host_function.artiq_embedded.forbidden is False):
self._quote_rpc(function, loc) self._quote_rpc(function, loc)
elif host_function.artiq_embedded.destination is not None and \
host_function.artiq_embedded.destination != self.destination:
# treat subkernels as kernels if running on the same device
if not 0 < host_function.artiq_embedded.destination <= 255:
diag = diagnostic.Diagnostic("error",
"subkernel destination must be between 1 and 255 (inclusive)", {},
self._function_loc(host_function))
self.engine.process(diag)
self._quote_subkernel(function, loc)
elif host_function.artiq_embedded.function is not None: elif host_function.artiq_embedded.function is not None:
if host_function.__name__ == "<lambda>": if host_function.__name__ == "<lambda>":
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
@ -1296,8 +1375,13 @@ class Stitcher:
notes=[note]) notes=[note])
self.engine.process(diag) self.engine.process(diag)
destination = host_function.artiq_embedded.destination
# remote_fn only for first call in subkernels
remote_fn = destination is not None and self.first_call
self._quote_embedded_function(function, self._quote_embedded_function(function,
flags=host_function.artiq_embedded.flags) flags=host_function.artiq_embedded.flags,
remote_fn=remote_fn)
self.first_call = False
elif host_function.artiq_embedded.syscall is not None: elif host_function.artiq_embedded.syscall is not None:
# Insert a storage-less global whose type instructs the compiler # Insert a storage-less global whose type instructs the compiler
# to perform a system call instead of a regular call. # to perform a system call instead of a regular call.

View File

@ -706,6 +706,81 @@ class SetLocal(Instruction):
def value(self): def value(self):
return self.operands[1] return self.operands[1]
class GetArgFromRemote(Instruction):
"""
An instruction that receives function arguments from remote
(ie. subkernel in DRTIO context)
:ivar arg_name: (string) argument name
:ivar arg_type: argument type
"""
"""
:param arg_name: (string) argument name
:param arg_type: argument type
"""
def __init__(self, arg_name, arg_type, name=""):
assert isinstance(arg_name, str)
super().__init__([], arg_type, name)
self.arg_name = arg_name
self.arg_type = arg_type
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.arg_name = self.arg_name
self_copy.arg_type = self.arg_type
return self_copy
def opcode(self):
return "getargfromremote({})".format(repr(self.arg_name))
class GetOptArgFromRemote(GetArgFromRemote):
"""
An instruction that may or may not retrieve an optional function argument
from remote, depending on number of values received by firmware.
:ivar rcv_count: number of received values,
determined by firmware
:ivar index: (integer) index of the current argument,
in reference to remote arguments
"""
"""
:param rcv_count: number of received valuese
:param index: (integer) index of the current argument,
in reference to remote arguments
"""
def __init__(self, arg_name, arg_type, rcv_count, index, name=""):
super().__init__(arg_name, arg_type, name)
self.rcv_count = rcv_count
self.index = index
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.rcv_count = self.rcv_count
self_copy.index = self.index
return self_copy
def opcode(self):
return "getoptargfromremote({})".format(repr(self.arg_name))
class SubkernelAwaitArgs(Instruction):
"""
A builtin instruction that takes min and max received messages as operands,
and a list of received types.
:ivar arg_types: (list of types) types of passed arguments (including optional)
"""
"""
:param arg_types: (list of types) types of passed arguments (including optional)
"""
def __init__(self, operands, arg_types, name=None):
assert isinstance(arg_types, list)
self.arg_types = arg_types
super().__init__(operands, builtins.TNone(), name)
class GetAttr(Instruction): class GetAttr(Instruction):
""" """
An intruction that loads an attribute from an object, An intruction that loads an attribute from an object,
@ -728,7 +803,7 @@ class GetAttr(Instruction):
typ = obj.type.attributes[attr] typ = obj.type.attributes[attr]
else: else:
typ = obj.type.constructor.attributes[attr] typ = obj.type.constructor.attributes[attr]
if types.is_function(typ) or types.is_rpc(typ): if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
typ = types.TMethod(obj.type, typ) typ = types.TMethod(obj.type, typ)
super().__init__([obj], typ, name) super().__init__([obj], typ, name)
self.attr = attr self.attr = attr
@ -1190,14 +1265,18 @@ class IndirectBranch(Terminator):
class Return(Terminator): class Return(Terminator):
""" """
A return instruction. A return instruction.
:param remote_return: (bool)
marks a return in subkernel context,
where the return value is sent back through DRTIO
""" """
""" """
:param value: (:class:`Value`) return value :param value: (:class:`Value`) return value
""" """
def __init__(self, value, name=""): def __init__(self, value, remote_return=False, name=""):
assert isinstance(value, Value) assert isinstance(value, Value)
super().__init__([value], builtins.TNone(), name) super().__init__([value], builtins.TNone(), name)
self.remote_return = remote_return
def opcode(self): def opcode(self):
return "return" return "return"

View File

@ -84,6 +84,8 @@ class Module:
constant_hoister.process(self.artiq_ir) constant_hoister.process(self.artiq_ir)
if remarks: if remarks:
invariant_detection.process(self.artiq_ir) invariant_detection.process(self.artiq_ir)
# for subkernels: main kernel inferencer output, to be passed to further compilations
self.subkernel_arg_types = inferencer.subkernel_arg_types
def build_llvm_ir(self, target): def build_llvm_ir(self, target):
"""Compile the module to LLVM IR for the specified target.""" """Compile the module to LLVM IR for the specified target."""

View File

@@ -37,6 +37,7 @@ def globals():
         # ARTIQ decorators
         "kernel": builtins.fn_kernel(),
+        "subkernel": builtins.fn_kernel(),
         "portable": builtins.fn_kernel(),
         "rpc": builtins.fn_kernel(),
@@ -54,4 +55,8 @@ def globals():
         # ARTIQ utility functions
         "rtio_log": builtins.fn_rtio_log(),
         "core_log": builtins.fn_print(),
+
+        # ARTIQ subkernel utility functions
+        "subkernel_await": builtins.fn_subkernel_await(),
+        "subkernel_preload": builtins.fn_subkernel_preload(),
     }
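These prelude entries expose ``subkernel``, ``subkernel_await`` and ``subkernel_preload`` to kernel code. A minimal usage sketch, assuming a DRTIO satellite at destination 1; the class, method and device names are illustrative, not taken from the diff:

::

  from artiq.experiment import *

  class SubkernelDemo(EnvExperiment):
      def build(self):
          self.setattr_device("core")

      @subkernel(destination=1)
      def satellite_pulse(self, duration_mu: TInt64) -> TNone:
          # runs on the satellite CPU; arguments must be positional
          delay_mu(duration_mu)

      @kernel
      def run(self):
          self.core.reset()
          subkernel_preload(self.satellite_pulse)   # optional: load ahead of time
          self.satellite_pulse(125000)              # starts execution on destination 1
          # wait for completion; the timeout defaults to 10000 ms when omitted
          subkernel_await(self.satellite_pulse)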

View File

@ -94,8 +94,9 @@ class Target:
tool_symbolizer = "llvm-symbolizer" tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt" tool_cxxfilt = "llvm-cxxfilt"
def __init__(self): def __init__(self, subkernel_id=None):
self.llcontext = ll.Context() self.llcontext = ll.Context()
self.subkernel_id = subkernel_id
def target_machine(self): def target_machine(self):
lltarget = llvm.Target.from_triple(self.triple) lltarget = llvm.Target.from_triple(self.triple)
@ -148,7 +149,8 @@ class Target:
ir.BasicBlock._dump_loc = False ir.BasicBlock._dump_loc = False
type_printer = types.TypePrinter() type_printer = types.TypePrinter()
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", ".txt", suffix = "_subkernel_{}".format(self.subkernel_id) if self.subkernel_id is not None else ""
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", suffix + ".txt",
lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir)) lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir))
llmod = module.build_llvm_ir(self) llmod = module.build_llvm_ir(self)
@ -160,12 +162,12 @@ class Target:
_dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod)) _dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod))
raise raise
_dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll", _dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", suffix + "_unopt.ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
self.optimize(llparsedmod) self.optimize(llparsedmod)
_dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll", _dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", suffix + ".ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
return llparsedmod return llparsedmod

View File

@ -108,6 +108,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_args = None self.current_args = None
self.current_assign = None self.current_assign = None
self.current_exception = None self.current_exception = None
self.current_remote_fn = False
self.break_target = None self.break_target = None
self.continue_target = None self.continue_target = None
self.return_target = None self.return_target = None
@ -211,7 +212,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
old_priv_env, self.current_private_env = self.current_private_env, priv_env old_priv_env, self.current_private_env = self.current_private_env, priv_env
self.generic_visit(node) self.generic_visit(node)
self.terminate(ir.Return(ir.Constant(None, builtins.TNone()))) self.terminate(ir.Return(ir.Constant(None, builtins.TNone()),
remote_return=self.current_remote_fn))
return self.functions return self.functions
finally: finally:
@ -294,6 +296,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
old_block, self.current_block = self.current_block, entry old_block, self.current_block = self.current_block, entry
old_globals, self.current_globals = self.current_globals, node.globals_in_scope old_globals, self.current_globals = self.current_globals, node.globals_in_scope
old_remote_fn = self.current_remote_fn
self.current_remote_fn = getattr(node, "remote_fn", False)
env_without_globals = \ env_without_globals = \
{var: node.typing_env[var] {var: node.typing_env[var]
@ -326,7 +330,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.terminate(ir.Return(result)) self.terminate(ir.Return(result))
elif builtins.is_none(typ.ret): elif builtins.is_none(typ.ret):
if not self.current_block.is_terminated(): if not self.current_block.is_terminated():
self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone()))) self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone()),
remote_return=self.current_remote_fn))
else: else:
if not self.current_block.is_terminated(): if not self.current_block.is_terminated():
if len(self.current_block.predecessors()) != 0: if len(self.current_block.predecessors()) != 0:
@ -345,6 +350,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_block = old_block self.current_block = old_block
self.current_globals = old_globals self.current_globals = old_globals
self.current_env = old_env self.current_env = old_env
self.current_remote_fn = old_remote_fn
if not is_lambda: if not is_lambda:
self.current_private_env = old_priv_env self.current_private_env = old_priv_env
@ -367,7 +373,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
return_value = self.visit(node.value) return_value = self.visit(node.value)
if self.return_target is None: if self.return_target is None:
self.append(ir.Return(return_value)) self.append(ir.Return(return_value,
remote_return=self.current_remote_fn))
else: else:
self.append(ir.SetLocal(self.current_private_env, "$return", return_value)) self.append(ir.SetLocal(self.current_private_env, "$return", return_value))
self.append(ir.Branch(self.return_target)) self.append(ir.Branch(self.return_target))
@ -2524,6 +2531,33 @@ class ARTIQIRGenerator(algorithm.Visitor):
or types.is_builtin(typ, "at_mu"): or types.is_builtin(typ, "at_mu"):
return self.append(ir.Builtin(typ.name, return self.append(ir.Builtin(typ.name,
[self.visit(arg) for arg in node.args], node.type)) [self.visit(arg) for arg in node.args], node.type))
elif types.is_builtin(typ, "subkernel_await"):
if len(node.args) == 2 and len(node.keywords) == 0:
fn = node.args[0].type
timeout = self.visit(node.args[1])
elif len(node.args) == 1 and len(node.keywords) == 0:
fn = node.args[0].type
timeout = ir.Constant(10_000, builtins.TInt64())
else:
assert False
if types.is_method(fn):
fn = types.get_method_function(fn)
sid = ir.Constant(fn.sid, builtins.TInt32())
if not builtins.is_none(fn.ret):
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
else:
ret = ir.Constant(None, builtins.TNone())
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
return ret
elif types.is_builtin(typ, "subkernel_preload"):
if len(node.args) == 1 and len(node.keywords) == 0:
fn = node.args[0].type
else:
assert False
if types.is_method(fn):
fn = types.get_method_function(fn)
sid = ir.Constant(fn.sid, builtins.TInt32())
return self.append(ir.Builtin("subkernel_preload", [sid], builtins.TNone()))
elif types.is_exn_constructor(typ): elif types.is_exn_constructor(typ):
return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args]) return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args])
elif types.is_constructor(typ): elif types.is_constructor(typ):
@ -2535,8 +2569,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
node.loc) node.loc)
self.engine.process(diag) self.engine.process(diag)
def _user_call(self, callee, positional, keywords, arg_exprs={}): def _user_call(self, callee, positional, keywords, arg_exprs={}, remote_fn=False):
if types.is_function(callee.type) or types.is_rpc(callee.type): if types.is_function(callee.type) or types.is_rpc(callee.type) or types.is_subkernel(callee.type):
func = callee func = callee
self_arg = None self_arg = None
fn_typ = callee.type fn_typ = callee.type
@ -2551,16 +2585,51 @@ class ARTIQIRGenerator(algorithm.Visitor):
else: else:
assert False assert False
if types.is_rpc(fn_typ): if types.is_rpc(fn_typ) or types.is_subkernel(fn_typ):
if self_arg is None: if self_arg is None or types.is_subkernel(fn_typ):
# self is not passed to subkernels by remote
args = positional args = positional
else: elif self_arg is not None:
args = [self_arg] + positional args = [self_arg] + positional
for keyword in keywords: for keyword in keywords:
arg = keywords[keyword] arg = keywords[keyword]
args.append(self.append(ir.Alloc([ir.Constant(keyword, builtins.TStr()), arg], args.append(self.append(ir.Alloc([ir.Constant(keyword, builtins.TStr()), arg],
ir.TKeyword(arg.type)))) ir.TKeyword(arg.type))))
elif remote_fn:
assert self_arg is None
assert len(fn_typ.args) >= len(positional)
assert len(keywords) == 0 # no keyword support
args = [None] * fn_typ.arity()
index = 0
# fill in first available args
for arg in positional:
args[index] = arg
index += 1
# remaining args are received through DRTIO
if index < len(args):
# min/max args received remotely (minus already filled)
offset = index
min_args = ir.Constant(len(fn_typ.args)-offset, builtins.TInt8())
max_args = ir.Constant(fn_typ.arity()-offset, builtins.TInt8())
arg_types = list(fn_typ.args.items())[offset:]
arg_type_list = [a[1] for a in arg_types] + [a[1] for a in fn_typ.optargs.items()]
rcvd_count = self.append(ir.SubkernelAwaitArgs([min_args, max_args], arg_type_list))
# obligatory arguments
for arg_name, arg_type in arg_types:
args[index] = self.append(ir.GetArgFromRemote(arg_name, arg_type,
name="ARG.{}".format(arg_name)))
index += 1
# optional arguments
for optarg_name, optarg_type in fn_typ.optargs.items():
idx = ir.Constant(index-offset, builtins.TInt8())
args[index] = \
self.append(ir.GetOptArgFromRemote(optarg_name, optarg_type, rcvd_count, idx))
index += 1
else: else:
args = [None] * (len(fn_typ.args) + len(fn_typ.optargs)) args = [None] * (len(fn_typ.args) + len(fn_typ.optargs))
@ -2646,7 +2715,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
else: else:
assert False, "Broadcasting for {} arguments not implemented".format(len) assert False, "Broadcasting for {} arguments not implemented".format(len)
else: else:
insn = self._user_call(callee, args, keywords, node.arg_exprs) remote_fn = getattr(node, "remote_fn", False)
insn = self._user_call(callee, args, keywords, node.arg_exprs, remote_fn)
if isinstance(node.func, asttyped.AttributeT): if isinstance(node.func, asttyped.AttributeT):
attr_node = node.func attr_node = node.func
self.method_map[(attr_node.value.type.find(), self.method_map[(attr_node.value.type.find(),

View File

@ -238,7 +238,7 @@ class ASTTypedRewriter(algorithm.Transformer):
body=node.body, decorator_list=node.decorator_list, body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc, keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs, arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc) loc=node.loc, remote_fn=False)
try: try:
self.env_stack.append(node.typing_env) self.env_stack.append(node.typing_env)
@ -439,8 +439,9 @@ class ASTTypedRewriter(algorithm.Transformer):
def visit_Call(self, node): def visit_Call(self, node):
node = self.generic_visit(node) node = self.generic_visit(node)
node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={}, node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={},
func=node.func, args=node.args, keywords=node.keywords, remote_fn=False, func=node.func,
args=node.args, keywords=node.keywords,
starargs=node.starargs, kwargs=node.kwargs, starargs=node.starargs, kwargs=node.kwargs,
star_loc=node.star_loc, dstar_loc=node.dstar_loc, star_loc=node.star_loc, dstar_loc=node.dstar_loc,
begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc) begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc)

View File

@ -46,6 +46,7 @@ class Inferencer(algorithm.Visitor):
self.function = None # currently visited function, for Return inference self.function = None # currently visited function, for Return inference
self.in_loop = False self.in_loop = False
self.has_return = False self.has_return = False
self.subkernel_arg_types = dict()
def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""): def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""):
try: try:
@ -178,7 +179,7 @@ class Inferencer(algorithm.Visitor):
# Convert to a method. # Convert to a method.
attr_type = types.TMethod(object_type, attr_type) attr_type = types.TMethod(object_type, attr_type)
self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc) self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc)
elif types.is_rpc(attr_type): elif types.is_rpc(attr_type) or types.is_subkernel(attr_type):
# Convert to a method. We don't have to bother typechecking # Convert to a method. We don't have to bother typechecking
# the self argument, since for RPCs anything goes. # the self argument, since for RPCs anything goes.
attr_type = types.TMethod(object_type, attr_type) attr_type = types.TMethod(object_type, attr_type)
@ -1293,6 +1294,55 @@ class Inferencer(algorithm.Visitor):
# Ignored. # Ignored.
self._unify(node.type, builtins.TNone(), self._unify(node.type, builtins.TNone(),
node.loc, None) node.loc, None)
elif types.is_builtin(typ, "subkernel_await"):
valid_forms = lambda: [
valid_form("subkernel_await(f: subkernel) -> f return type"),
valid_form("subkernel_await(f: subkernel, timeout: numpy.int64) -> f return type")
]
if 1 <= len(node.args) <= 2:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
if len(node.args) == 2:
arg1 = node.args[1]
if types.is_var(arg1.type):
pass
elif builtins.is_int(arg1.type):
# promote to TInt64
self._unify(arg1.type, builtins.TInt64(),
arg1.loc, None)
else:
diagnose(valid_forms())
else:
diagnose(valid_forms())
elif types.is_builtin(typ, "subkernel_preload"):
valid_forms = lambda: [
valid_form("subkernel_preload(f: subkernel) -> None")
]
if len(node.args) == 1:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
else:
diagnose(valid_forms())
else: else:
assert False assert False
@ -1331,6 +1381,7 @@ class Inferencer(algorithm.Visitor):
typ_args = typ.args typ_args = typ.args
typ_optargs = typ.optargs typ_optargs = typ.optargs
typ_ret = typ.ret typ_ret = typ.ret
typ_func = typ
else: else:
typ_self = types.get_method_self(typ) typ_self = types.get_method_self(typ)
typ_func = types.get_method_function(typ) typ_func = types.get_method_function(typ)
@ -1388,12 +1439,23 @@ class Inferencer(algorithm.Visitor):
other_node=node.args[0]) other_node=node.args[0])
self._unify(node.type, ret, node.loc, None) self._unify(node.type, ret, node.loc, None)
return return
if types.is_subkernel(typ_func) and typ_func.sid not in self.subkernel_arg_types:
self.subkernel_arg_types[typ_func.sid] = []
for actualarg, (formalname, formaltyp) in \ for actualarg, (formalname, formaltyp) in \
zip(node.args, list(typ_args.items()) + list(typ_optargs.items())): zip(node.args, list(typ_args.items()) + list(typ_optargs.items())):
self._unify(actualarg.type, formaltyp, self._unify(actualarg.type, formaltyp,
actualarg.loc, None) actualarg.loc, None)
passed_args[formalname] = actualarg.loc passed_args[formalname] = actualarg.loc
if types.is_subkernel(typ_func):
if types.is_instance(actualarg.type):
# objects cannot be passed to subkernels, as rpc code doesn't support them
diag = diagnostic.Diagnostic("error",
"argument '{name}' of type: {typ} is not supported in subkernels",
{"name": formalname, "typ": actualarg.type},
actualarg.loc, [])
self.engine.process(diag)
self.subkernel_arg_types[typ_func.sid].append((formalname, formaltyp))
for keyword in node.keywords: for keyword in node.keywords:
if keyword.arg in passed_args: if keyword.arg in passed_args:
@ -1424,7 +1486,7 @@ class Inferencer(algorithm.Visitor):
passed_args[keyword.arg] = keyword.arg_loc passed_args[keyword.arg] = keyword.arg_loc
for formalname in typ_args: for formalname in typ_args:
if formalname not in passed_args: if formalname not in passed_args and not node.remote_fn:
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
"the called function is of type {type}", "the called function is of type {type}",
{"type": types.TypePrinter().name(node.func.type)}, {"type": types.TypePrinter().name(node.func.type)},
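The inferencer additions above encode the argument rules for subkernel calls: argument types seen at the first (main-kernel) call site are recorded in ``subkernel_arg_types`` and reused when the subkernel is compiled, missing positional arguments are tolerated on remote calls because they arrive over DRTIO, and object instances are rejected. A hedged illustration with hypothetical methods:

::

  @subkernel(destination=1)
  def accepted(self, n: TInt32, amp: TFloat) -> TNone:
      # scalar arguments pass the check and are sent over DRTIO
      pass

  @subkernel(destination=1)
  def rejected(self, dds) -> TNone:
      # passing a device driver or any other object instance fails with:
      # "argument 'dds' of type: <instance ...> is not supported in subkernels"
      pass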

View File

@ -280,7 +280,7 @@ class IODelayEstimator(algorithm.Visitor):
context="as an argument for delay_mu()") context="as an argument for delay_mu()")
call_delay = value call_delay = value
elif not types.is_builtin(typ): elif not types.is_builtin(typ):
if types.is_function(typ) or types.is_rpc(typ): if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
offset = 0 offset = 0
elif types.is_method(typ): elif types.is_method(typ):
offset = 1 offset = 1
@ -288,7 +288,7 @@ class IODelayEstimator(algorithm.Visitor):
else: else:
assert False assert False
if types.is_rpc(typ): if types.is_rpc(typ) or types.is_subkernel(typ):
call_delay = iodelay.Const(0) call_delay = iodelay.Const(0)
else: else:
delay = typ.find().delay.find() delay = typ.find().delay.find()
@ -311,13 +311,20 @@ class IODelayEstimator(algorithm.Visitor):
args[arg_name] = arg_node args[arg_name] = arg_node
free_vars = delay.duration.free_vars() free_vars = delay.duration.free_vars()
try:
    node.arg_exprs = {
        arg: self.evaluate(args[arg], abort=abort,
                           context="in the expression for argument '{}' "
                                   "that affects I/O delay".format(arg))
        for arg in free_vars
    }
    call_delay = delay.duration.fold(node.arg_exprs)
except KeyError as e:
    if getattr(node, "remote_fn", False):
        note = diagnostic.Diagnostic("note",
            "function called here", {},
            node.loc)
        self.abort("due to arguments passed remotely", node.loc, note)
else: else:
assert False assert False
else: else:

View File

@ -215,7 +215,7 @@ class LLVMIRGenerator:
typ = typ.find() typ = typ.find()
if types.is_tuple(typ): if types.is_tuple(typ):
return ll.LiteralStructType([self.llty_of_type(eltty) for eltty in typ.elts]) return ll.LiteralStructType([self.llty_of_type(eltty) for eltty in typ.elts])
elif types.is_rpc(typ) or types.is_external_function(typ): elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_subkernel(typ):
if for_return: if for_return:
return llvoid return llvoid
else: else:
@ -398,6 +398,15 @@ class LLVMIRGenerator:
elif name == "rpc_recv": elif name == "rpc_recv":
llty = ll.FunctionType(lli32, [llptr]) llty = ll.FunctionType(lli32, [llptr])
elif name == "subkernel_send_message":
llty = ll.FunctionType(llvoid, [lli32, lli8, llsliceptr, llptrptr])
elif name == "subkernel_load_run":
llty = ll.FunctionType(llvoid, [lli32, lli1])
elif name == "subkernel_await_finish":
llty = ll.FunctionType(llvoid, [lli32, lli64])
elif name == "subkernel_await_message":
llty = ll.FunctionType(lli8, [lli32, lli64, llsliceptr, lli8, lli8])
# with now-pinning # with now-pinning
elif name == "now": elif name == "now":
llty = lli64 llty = lli64
@ -874,6 +883,53 @@ class LLVMIRGenerator:
llvalue = self.llbuilder.bitcast(llvalue, llptr.type.pointee) llvalue = self.llbuilder.bitcast(llvalue, llptr.type.pointee)
return self.llbuilder.store(llvalue, llptr) return self.llbuilder.store(llvalue, llptr)
def process_GetArgFromRemote(self, insn):
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
llval = self._build_rpc_recv(insn.arg_type, llstackptr)
return llval
def process_GetOptArgFromRemote(self, insn):
# optarg = index < rcv_count ? Some(rcv_recv()) : None
llhead = self.llbuilder.basic_block
llrcv = self.llbuilder.append_basic_block(name="optarg.get.{}".format(insn.arg_name))
# argument received
self.llbuilder.position_at_end(llrcv)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
llval = self._build_rpc_recv(insn.arg_type, llstackptr)
llrpcretblock = self.llbuilder.basic_block # 'return' from rpc_recv, will be needed later
# create the tail block, needs to be after the rpc recv tail block
lltail = self.llbuilder.append_basic_block(name="optarg.tail.{}".format(insn.arg_name))
self.llbuilder.branch(lltail)
# go back to head to add a branch to the tail
self.llbuilder.position_at_end(llhead)
llargrcvd = self.llbuilder.icmp_unsigned("<", self.map(insn.index), self.map(insn.rcv_count))
self.llbuilder.cbranch(llargrcvd, llrcv, lltail)
# argument not received/after arg recvd
self.llbuilder.position_at_end(lltail)
llargtype = self.llty_of_type(insn.arg_type)
llphi_arg_present = self.llbuilder.phi(lli1, name="optarg.phi.present.{}".format(insn.arg_name))
llphi_arg = self.llbuilder.phi(llargtype, name="optarg.phi.{}".format(insn.arg_name))
llphi_arg_present.add_incoming(ll.Constant(lli1, 0), llhead)
llphi_arg.add_incoming(ll.Constant(llargtype, ll.Undefined), llhead)
llphi_arg_present.add_incoming(ll.Constant(lli1, 1), llrpcretblock)
llphi_arg.add_incoming(llval, llrpcretblock)
lloptarg = ll.Constant(ll.LiteralStructType([lli1, llargtype]), ll.Undefined)
lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg_present, 0)
lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg, 1)
return lloptarg
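The phi nodes above implement the conditional receive described in the comment ("optarg = index < rcv_count ? Some(rcv_recv()) : None"). A plain-Python rendering of that control flow, with rcv_recv standing in for the generated rpc_recv-based receive:

    def get_opt_arg_from_remote(index, rcv_count, rcv_recv):
        # returns a (present, value) pair, mirroring the {i1, T} struct built above
        if index < rcv_count:
            return (True, rcv_recv())   # the caller did send this optional argument
        return (False, None)            # not sent; the callee falls back to the default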
def attr_index(self, typ, attr): def attr_index(self, typ, attr):
return list(typ.attributes.keys()).index(attr) return list(typ.attributes.keys()).index(attr)
@ -898,8 +954,8 @@ class LLVMIRGenerator:
def get_global_closure_ptr(self, typ, attr): def get_global_closure_ptr(self, typ, attr):
closure_type = typ.attributes[attr] closure_type = typ.attributes[attr]
assert types.is_constructor(typ) assert types.is_constructor(typ)
assert types.is_function(closure_type) or types.is_rpc(closure_type) assert types.is_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type)
if types.is_external_function(closure_type) or types.is_rpc(closure_type): if types.is_external_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type):
return None return None
llty = self.llty_of_type(typ.attributes[attr]) llty = self.llty_of_type(typ.attributes[attr])
@ -1344,9 +1400,36 @@ class LLVMIRGenerator:
return self.llbuilder.call(self.llbuiltin("delay_mu"), [llinterval]) return self.llbuilder.call(self.llbuiltin("delay_mu"), [llinterval])
elif insn.op == "end_catch": elif insn.op == "end_catch":
return self.llbuilder.call(self.llbuiltin("__artiq_end_catch"), []) return self.llbuilder.call(self.llbuiltin("__artiq_end_catch"), [])
elif insn.op == "subkernel_await_finish":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
return self.llbuilder.call(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
name="subkernel.await.finish")
elif insn.op == "subkernel_retrieve_return":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags([insn.type])
self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
name="subkernel.await.message")
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
return self._build_rpc_recv(insn.type, llstackptr)
elif insn.op == "subkernel_preload":
llsid = self.map(insn.operands[0])
return self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, ll.Constant(lli1, 0)],
name="subkernel.preload")
else: else:
assert False assert False
def process_SubkernelAwaitArgs(self, insn):
llmin = self.map(insn.operands[0])
llmax = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags(insn.arg_types)
return self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
[ll.Constant(lli32, 0), ll.Constant(lli64, 10_000), lltagptr, llmin, llmax],
name="subkernel.await.args")
def process_Closure(self, insn): def process_Closure(self, insn):
llenv = self.map(insn.environment()) llenv = self.map(insn.environment())
llenv = self.llbuilder.bitcast(llenv, llptr) llenv = self.llbuilder.bitcast(llenv, llptr)
@ -1426,6 +1509,76 @@ class LLVMIRGenerator:
return llfun, list(llargs), llarg_attrs, llcallstackptr return llfun, list(llargs), llarg_attrs, llcallstackptr
def _build_subkernel_tags(self, tag_list):
def ret_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
fun_loc)
diag = diagnostic.Diagnostic("error",
"type {type} is not supported in subkernels",
{"type": printer.name(fun_type.ret)},
fun_loc, notes=[note])
self.engine.process(diag)
tag = b"".join([ir.rpc_tag(arg_type, ret_error_handler) for arg_type in tag_list])
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
return lltagptr
def _build_rpc_recv(self, ret, llstackptr, llnormalblock=None, llunwindblock=None):
# T result = {
# void *ret_ptr = alloca(sizeof(T));
# void *ptr = ret_ptr;
# loop: int size = rpc_recv(ptr);
# // Non-zero: Provide `size` bytes of extra storage for variable-length data.
# if(size) { ptr = alloca(size); goto loop; }
# else *(T*)ret_ptr
# }
llprehead = self.llbuilder.basic_block
llhead = self.llbuilder.append_basic_block(name="rpc.head")
if llunwindblock:
llheadu = self.llbuilder.append_basic_block(name="rpc.head.unwind")
llalloc = self.llbuilder.append_basic_block(name="rpc.continue")
lltail = self.llbuilder.append_basic_block(name="rpc.tail")
llretty = self.llty_of_type(ret)
llslot = self.llbuilder.alloca(llretty, name="rpc.ret.alloc")
llslotgen = self.llbuilder.bitcast(llslot, llptr, name="rpc.ret.ptr")
self.llbuilder.branch(llhead)
self.llbuilder.position_at_end(llhead)
llphi = self.llbuilder.phi(llslotgen.type, name="rpc.ptr")
llphi.add_incoming(llslotgen, llprehead)
if llunwindblock:
llsize = self.llbuilder.invoke(self.llbuiltin("rpc_recv"), [llphi],
llheadu, llunwindblock,
name="rpc.size.next")
self.llbuilder.position_at_end(llheadu)
else:
llsize = self.llbuilder.call(self.llbuiltin("rpc_recv"), [llphi],
name="rpc.size.next")
lldone = self.llbuilder.icmp_unsigned('==', llsize, ll.Constant(llsize.type, 0),
name="rpc.done")
self.llbuilder.cbranch(lldone, lltail, llalloc)
self.llbuilder.position_at_end(llalloc)
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
llalloca.align = self.max_target_alignment
llphi.add_incoming(llalloca, llalloc)
self.llbuilder.branch(llhead)
self.llbuilder.position_at_end(lltail)
llret = self.llbuilder.load(llslot, name="rpc.ret")
if not ret.fold(False, lambda r, t: r or builtins.is_allocated(t)):
# We didn't allocate anything except the slot for the value itself.
# Don't waste stack space.
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
if llnormalblock:
self.llbuilder.branch(llnormalblock)
return llret
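The C-style comment at the top of _build_rpc_recv describes the loop the generated IR implements: rpc_recv() deserializes into the current buffer and returns how many extra bytes of scratch storage the next chunk needs, zero meaning the value is complete. A plain-Python sketch of the same loop, with rpc_recv_into standing in for the runtime's rpc_recv:

    def recv_value(rpc_recv_into, ret_size):
        ret_buf = bytearray(ret_size)   # "void *ret_ptr = alloca(sizeof(T))"
        buf = ret_buf
        while True:
            size = rpc_recv_into(buf)   # 0 means the fixed-size slot now holds the value
            if size == 0:
                return ret_buf
            buf = bytearray(size)       # "ptr = alloca(size); goto loop"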
def _build_rpc(self, fun_loc, fun_type, args, llnormalblock, llunwindblock): def _build_rpc(self, fun_loc, fun_type, args, llnormalblock, llunwindblock):
llservice = ll.Constant(lli32, fun_type.service) llservice = ll.Constant(lli32, fun_type.service)
@ -1501,57 +1654,103 @@ class LLVMIRGenerator:
return ll.Undefined return ll.Undefined
llret = self._build_rpc_recv(fun_type.ret, llstackptr, llnormalblock, llunwindblock)

return llret
def _build_subkernel_call(self, fun_loc, fun_type, args):
llsid = ll.Constant(lli32, fun_type.sid)
tag = b""
for arg in args:
def arg_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
arg.loc)
diag = diagnostic.Diagnostic("error",
"type {type} is not supported in subkernel calls",
{"type": printer.name(arg.type)},
arg.loc, notes=[note])
self.engine.process(diag)
tag += ir.rpc_tag(arg.type, arg_error_handler)
tag += b":"
# run the kernel first
self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, ll.Constant(lli1, 1)])
# args are sent in the same vein as RPCs
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.stack")
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
if args:
# only send args if there's anything to send, 'self' is excluded
llargs = self.llbuilder.alloca(llptr, ll.Constant(lli32, len(args)),
name="subkernel.args")
for index, arg in enumerate(args):
if builtins.is_none(arg.type):
llargslot = self.llbuilder.alloca(llunit,
name="subkernel.arg{}".format(index))
else:
llarg = self.map(arg)
llargslot = self.llbuilder.alloca(llarg.type,
name="subkernel.arg{}".format(index))
self.llbuilder.store(llarg, llargslot)
llargslot = self.llbuilder.bitcast(llargslot, llptr)
llargptr = self.llbuilder.gep(llargs, [ll.Constant(lli32, index)])
self.llbuilder.store(llargslot, llargptr)
llargcount = ll.Constant(lli8, len(args))
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
[llsid, llargcount, lltagptr, llargs])
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
return llsid
def _build_subkernel_return(self, insn):
# builds a remote return.
# unlike args, return only sends one thing.
if builtins.is_none(insn.value().type):
# do not waste time and bandwidth on Nones
return
def ret_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
fun_loc)
diag = diagnostic.Diagnostic("error",
"return type {type} is not supported in subkernel returns",
{"type": printer.name(fun_type.ret)},
fun_loc, notes=[note])
self.engine.process(diag)
tag = ir.rpc_tag(insn.value().type, ret_error_handler)
tag += b":"
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
llrets = self.llbuilder.alloca(llptr, ll.Constant(lli32, 1),
name="subkernel.return")
llret = self.map(insn.value())
llretslot = self.llbuilder.alloca(llret.type, name="subkernel.retval")
self.llbuilder.store(llret, llretslot)
llretslot = self.llbuilder.bitcast(llretslot, llptr)
self.llbuilder.store(llretslot, llrets)
llsid = ll.Constant(lli32, 0) # return goes back to master, sid is ignored
lltagcount = ll.Constant(lli8, 1) # only one thing is returned
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
[llsid, lltagcount, lltagptr, llrets])
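Both the subkernel call and the remote return serialize values the same way RPCs do: a tag string built with ir.rpc_tag() for each value, terminated with b":", is passed to subkernel_send_message() together with an array of pointers to the value slots. A simplified sketch of the tag construction (error handling reduced to an exception):

    from artiq.compiler import ir, builtins

    def message_tag(arg_types):
        def unsupported(typ):
            raise TypeError("type not supported in subkernel messages: {}".format(typ))
        tag = b"".join(ir.rpc_tag(t, unsupported) for t in arg_types)
        return tag + b":"   # same terminator as _build_subkernel_call/_build_subkernel_return append

    # e.g. message_tag([builtins.TInt32(), builtins.TFloat()])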
def process_Call(self, insn): def process_Call(self, insn):
functiontyp = insn.target_function().type functiontyp = insn.target_function().type
if types.is_rpc(functiontyp): if types.is_rpc(functiontyp):
@ -1559,6 +1758,10 @@ class LLVMIRGenerator:
functiontyp, functiontyp,
insn.arguments(), insn.arguments(),
llnormalblock=None, llunwindblock=None) llnormalblock=None, llunwindblock=None)
elif types.is_subkernel(functiontyp):
return self._build_subkernel_call(insn.target_function().loc,
functiontyp,
insn.arguments())
elif types.is_external_function(functiontyp): elif types.is_external_function(functiontyp):
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn) llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
else: else:
@ -1595,6 +1798,11 @@ class LLVMIRGenerator:
functiontyp, functiontyp,
insn.arguments(), insn.arguments(),
llnormalblock, llunwindblock) llnormalblock, llunwindblock)
elif types.is_subkernel(functiontyp):
return self._build_subkernel_call(insn.target_function().loc,
functiontyp,
insn.arguments(),
llnormalblock, llunwindblock)
elif types.is_external_function(functiontyp): elif types.is_external_function(functiontyp):
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn) llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
else: else:
@ -1673,7 +1881,8 @@ class LLVMIRGenerator:
attrvalue = getattr(value, attr) attrvalue = getattr(value, attr)
is_class_function = (types.is_constructor(typ) and is_class_function = (types.is_constructor(typ) and
types.is_function(typ.attributes[attr]) and types.is_function(typ.attributes[attr]) and
not types.is_external_function(typ.attributes[attr])) not types.is_external_function(typ.attributes[attr]) and
not types.is_subkernel(typ.attributes[attr]))
if is_class_function: if is_class_function:
attrvalue = self.embedding_map.specialize_function(typ.instance, attrvalue) attrvalue = self.embedding_map.specialize_function(typ.instance, attrvalue)
if not (types.is_instance(typ) and attr in typ.constant_attributes): if not (types.is_instance(typ) and attr in typ.constant_attributes):
@ -1758,7 +1967,8 @@ class LLVMIRGenerator:
llelts = [self._quote(v, t, lambda: path() + [str(i)]) llelts = [self._quote(v, t, lambda: path() + [str(i)])
for i, (v, t) in enumerate(zip(value, typ.elts))] for i, (v, t) in enumerate(zip(value, typ.elts))]
return ll.Constant(llty, llelts) return ll.Constant(llty, llelts)
elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_builtin_function(typ): elif types.is_rpc(typ) or types.is_external_function(typ) or \
types.is_builtin_function(typ) or types.is_subkernel(typ):
# RPC, C and builtin functions have no runtime representation. # RPC, C and builtin functions have no runtime representation.
return ll.Constant(llty, ll.Undefined) return ll.Constant(llty, ll.Undefined)
elif types.is_function(typ): elif types.is_function(typ):
@ -1813,6 +2023,8 @@ class LLVMIRGenerator:
return llinsn return llinsn
def process_Return(self, insn): def process_Return(self, insn):
if insn.remote_return:
self._build_subkernel_return(insn)
if builtins.is_none(insn.value().type): if builtins.is_none(insn.value().type):
return self.llbuilder.ret_void() return self.llbuilder.ret_void()
else: else:

View File

@ -385,6 +385,50 @@ class TRPC(Type):
def __hash__(self): def __hash__(self):
return hash(self.service) return hash(self.service)
class TSubkernel(TFunction):
"""
A kernel to be run on a satellite.
:ivar args: (:class:`collections.OrderedDict` of string to :class:`Type`)
function arguments
:ivar ret: (:class:`Type`)
return type
:ivar sid: (int) subkernel ID number
:ivar destination: (int) satellite destination number
"""
attributes = OrderedDict()
def __init__(self, args, optargs, ret, sid, destination):
assert isinstance(ret, Type)
super().__init__(args, optargs, ret)
self.sid, self.destination = sid, destination
self.delay = TFixedDelay(iodelay.Const(0))
def unify(self, other):
if other is self:
return
if isinstance(other, TSubkernel) and \
self.sid == other.sid and \
self.destination == other.destination:
self.ret.unify(other.ret)
elif isinstance(other, TVar):
other.unify(self)
else:
raise UnificationError(self, other)
def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TSubkernel({})".format(repr(self.ret))
def __eq__(self, other):
return isinstance(other, TSubkernel) and \
self.sid == other.sid
def __hash__(self):
return hash(self.sid)
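A minimal sketch of how the new type behaves, using only the definitions above (empty argument lists for brevity):

    from collections import OrderedDict
    from artiq.compiler import types, builtins

    sk = types.TSubkernel(OrderedDict(), OrderedDict(), builtins.TInt32(),
                          sid=1, destination=2)
    tv = types.TVar()
    tv.unify(sk)                    # a type variable unifies with the subkernel type
    assert types.is_subkernel(tv)   # ...and the new predicate sees through it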
class TBuiltin(Type): class TBuiltin(Type):
""" """
An instance of builtin type. Every instance of a builtin An instance of builtin type. Every instance of a builtin
@ -644,6 +688,9 @@ def is_function(typ):
def is_rpc(typ): def is_rpc(typ):
return isinstance(typ.find(), TRPC) return isinstance(typ.find(), TRPC)
def is_subkernel(typ):
return isinstance(typ.find(), TSubkernel)
def is_external_function(typ, name=None): def is_external_function(typ, name=None):
typ = typ.find() typ = typ.find()
if name is None: if name is None:
@ -810,6 +857,10 @@ class TypePrinter(object):
return "[rpc{} #{}](...)->{}".format(typ.service, return "[rpc{} #{}](...)->{}".format(typ.service,
" async" if typ.is_async else "", " async" if typ.is_async else "",
self.name(typ.ret, depth + 1)) self.name(typ.ret, depth + 1))
elif isinstance(typ, TSubkernel):
return "<subkernel{} dest#{}>->{}".format(typ.sid,
typ.destination,
self.name(typ.ret, depth + 1))
elif isinstance(typ, TBuiltinFunction): elif isinstance(typ, TBuiltinFunction):
return "<function {}>".format(typ.name) return "<function {}>".format(typ.name)
elif isinstance(typ, (TConstructor, TExceptionConstructor)): elif isinstance(typ, (TConstructor, TExceptionConstructor)):

View File

@ -102,8 +102,20 @@ class RegionOf(algorithm.Visitor):
if types.is_external_function(node.func.type, "cache_get"): if types.is_external_function(node.func.type, "cache_get"):
# The cache is borrow checked dynamically # The cache is borrow checked dynamically
return Global() return Global()
if (types.is_builtin_function(node.func.type, "array")
        or types.is_builtin_function(node.func.type, "make_array")
        or types.is_builtin_function(node.func.type, "numpy.transpose")):
    # While lifetime tracking across function calls in general is currently
    # broken (see below), these special builtins that allocate an array on
    # the stack of the caller _always_ allocate regardless of the parameters,
    # and we can thus handle them without running into the precision issue
    # mentioned in commit ae999db.
    return self.visit_allocating(node)

# FIXME: Return statement missing here, but see m-labs/artiq#1497 and
# commit ae999db.
self.visit_sometimes_allocating(node)
# Value lives as long as the object/container, if it's mutable, # Value lives as long as the object/container, if it's mutable,
# or else forever # or else forever

View File

@ -233,7 +233,7 @@ class AD53xx:
def write_gain_mu(self, channel, gain=0xffff): def write_gain_mu(self, channel, gain=0xffff):
"""Program the gain register for a DAC channel. """Program the gain register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param gain: 16-bit gain register value (default: 0xffff) :param gain: 16-bit gain register value (default: 0xffff)
@ -245,7 +245,7 @@ class AD53xx:
def write_offset_mu(self, channel, offset=0x8000): def write_offset_mu(self, channel, offset=0x8000):
"""Program the offset register for a DAC channel. """Program the offset register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param offset: 16-bit offset register value (default: 0x8000) :param offset: 16-bit offset register value (default: 0x8000)
@ -258,7 +258,7 @@ class AD53xx:
"""Program the DAC offset voltage for a channel. """Program the DAC offset voltage for a channel.
An offset of +V can be used to trim out a DAC offset error of -V. An offset of +V can be used to trim out a DAC offset error of -V.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param voltage: the offset voltage :param voltage: the offset voltage
@ -270,7 +270,7 @@ class AD53xx:
def write_dac_mu(self, channel, value): def write_dac_mu(self, channel, value):
"""Program the DAC input register for a channel. """Program the DAC input register for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
""" """
self.bus.write( self.bus.write(
@ -280,7 +280,7 @@ class AD53xx:
def write_dac(self, channel, voltage): def write_dac(self, channel, voltage):
"""Program the DAC output voltage for a channel. """Program the DAC output voltage for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
""" """
self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs, self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs,
@ -313,7 +313,7 @@ class AD53xx:
If no LDAC device was defined, the LDAC pulse is skipped. If no LDAC device was defined, the LDAC pulse is skipped.
See :meth load:. See :meth:`load`.
:param values: list of DAC values to program :param values: list of DAC values to program
:param channels: list of DAC channels to program. If not specified, :param channels: list of DAC channels to program. If not specified,
@ -355,7 +355,7 @@ class AD53xx:
""" Two-point calibration of a DAC channel. """ Two-point calibration of a DAC channel.
Programs the offset and gain register to trim out DAC errors. Does not Programs the offset and gain register to trim out DAC errors. Does not
take effect until LDAC is pulsed (see :meth load:). take effect until LDAC is pulsed (see :meth:`load`).
Calibration consists of measuring the DAC output voltage for a channel Calibration consists of measuring the DAC output voltage for a channel
with the DAC set to zero-scale (0x0000) and full-scale (0xffff). with the DAC set to zero-scale (0x0000) and full-scale (0xffff).

View File

@ -23,6 +23,8 @@ class Request(Enum):
RPCReply = 7 RPCReply = 7
RPCException = 8 RPCException = 8
SubkernelUpload = 9
class Reply(Enum): class Reply(Enum):
SystemInfo = 2 SystemInfo = 2
@ -208,6 +210,7 @@ class CommKernel:
self.unpack_float64 = struct.Struct(self.endian + "d").unpack self.unpack_float64 = struct.Struct(self.endian + "d").unpack
self.pack_header = struct.Struct(self.endian + "lB").pack self.pack_header = struct.Struct(self.endian + "lB").pack
self.pack_int8 = struct.Struct(self.endian + "B").pack
self.pack_int32 = struct.Struct(self.endian + "l").pack self.pack_int32 = struct.Struct(self.endian + "l").pack
self.pack_int64 = struct.Struct(self.endian + "q").pack self.pack_int64 = struct.Struct(self.endian + "q").pack
self.pack_float64 = struct.Struct(self.endian + "d").pack self.pack_float64 = struct.Struct(self.endian + "d").pack
@ -322,7 +325,7 @@ class CommKernel:
self._write(chunk) self._write(chunk)
def _write_int8(self, value): def _write_int8(self, value):
self._write(value) self._write(self.pack_int8(value))
def _write_int32(self, value): def _write_int32(self, value):
self._write(self.pack_int32(value)) self._write(self.pack_int32(value))
@ -382,6 +385,19 @@ class CommKernel:
else: else:
self._read_expect(Reply.LoadCompleted) self._read_expect(Reply.LoadCompleted)
def upload_subkernel(self, kernel_library, id, destination):
self._write_header(Request.SubkernelUpload)
self._write_int32(id)
self._write_int8(destination)
self._write_bytes(kernel_library)
self._flush()
self._read_header()
if self._read_type == Reply.LoadFailed:
raise LoadError(self._read_string())
else:
self._read_expect(Reply.LoadCompleted)
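The request body written by upload_subkernel() is therefore a 32-bit subkernel ID, an 8-bit destination and the length-prefixed library. A sketch of the equivalent packing, assuming _write_bytes() prefixes its payload with an int32 length as the other CommKernel writers suggest:

    import struct

    def pack_subkernel_upload(endian, sid, destination, library):
        body = struct.pack(endian + "l", sid)           # _write_int32(id)
        body += struct.pack(endian + "B", destination)  # _write_int8(destination)
        body += struct.pack(endian + "l", len(library)) + library  # _write_bytes(kernel_library)
        return body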
def run(self): def run(self):
self._write_empty(Request.RunKernel) self._write_empty(Request.RunKernel)
self._flush() self._flush()

View File

@ -1,5 +1,6 @@
import os, sys import os, sys
import numpy import numpy
from inspect import getfullargspec
from functools import wraps from functools import wraps
from pythonparser import diagnostic from pythonparser import diagnostic
@ -77,38 +78,55 @@ class Core:
:param ref_multiplier: ratio between the RTIO fine timestamp frequency :param ref_multiplier: ratio between the RTIO fine timestamp frequency
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
factor). factor).
:param analyzer_proxy: name of the core device analyzer proxy to trigger
(optional).
:param analyze_at_run_end: automatically trigger the core device analyzer
proxy after the Experiment's run stage finishes.
""" """
kernel_invariants = { kernel_invariants = {
"core", "ref_period", "coarse_ref_period", "ref_multiplier", "core", "ref_period", "coarse_ref_period", "ref_multiplier",
} }
def __init__(self, dmgr, host, ref_period, ref_multiplier=8, target="rv32g"):
def __init__(self, dmgr,
             host, ref_period,
             analyzer_proxy=None, analyze_at_run_end=False,
             ref_multiplier=8,
             target="rv32g", satellite_cpu_targets={}):
self.ref_period = ref_period self.ref_period = ref_period
self.ref_multiplier = ref_multiplier self.ref_multiplier = ref_multiplier
self.satellite_cpu_targets = satellite_cpu_targets
self.target_cls = get_target_cls(target) self.target_cls = get_target_cls(target)
self.coarse_ref_period = ref_period*ref_multiplier self.coarse_ref_period = ref_period*ref_multiplier
if host is None: if host is None:
self.comm = CommKernelDummy() self.comm = CommKernelDummy()
else: else:
self.comm = CommKernel(host) self.comm = CommKernel(host)
self.analyzer_proxy_name = analyzer_proxy
self.analyze_at_run_end = analyze_at_run_end
self.first_run = True self.first_run = True
self.dmgr = dmgr self.dmgr = dmgr
self.core = self self.core = self
self.comm.core = self self.comm.core = self
self.analyzer_proxy = None
def notify_run_end(self):
if self.analyze_at_run_end:
self.trigger_analyzer_proxy()
def close(self): def close(self):
self.comm.close() self.comm.close()
def compile(self, function, args, kwargs, set_result=None, def compile(self, function, args, kwargs, set_result=None,
attribute_writeback=True, print_as_rpc=True, attribute_writeback=True, print_as_rpc=True,
target=None): target=None, destination=0, subkernel_arg_types=[]):
try: try:
engine = _DiagnosticEngine(all_errors_are_fatal=True) engine = _DiagnosticEngine(all_errors_are_fatal=True)
stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr, stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr,
print_as_rpc=print_as_rpc) print_as_rpc=print_as_rpc,
destination=destination, subkernel_arg_types=subkernel_arg_types)
stitcher.stitch_call(function, args, kwargs, set_result) stitcher.stitch_call(function, args, kwargs, set_result)
stitcher.finalize() stitcher.finalize()
@ -122,7 +140,8 @@ class Core:
return stitcher.embedding_map, stripped_library, \ return stitcher.embedding_map, stripped_library, \
lambda addresses: target.symbolize(library, addresses), \ lambda addresses: target.symbolize(library, addresses), \
lambda symbols: target.demangle(symbols) lambda symbols: target.demangle(symbols), \
module.subkernel_arg_types
except diagnostic.Error as error: except diagnostic.Error as error:
raise CompileError(error.diagnostic) from error raise CompileError(error.diagnostic) from error
@ -140,11 +159,38 @@ class Core:
def set_result(new_result): def set_result(new_result):
nonlocal result nonlocal result
result = new_result result = new_result
embedding_map, kernel_library, symbolizer, demangler = \ embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
self.compile(function, args, kwargs, set_result) self.compile(function, args, kwargs, set_result)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
self._run_compiled(kernel_library, embedding_map, symbolizer, demangler) self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
return result return result
def compile_subkernel(self, sid, subkernel_fn, embedding_map, args, subkernel_arg_types):
# pass self to subkernels (if applicable)
# assuming the first argument is self
subkernel_args = getfullargspec(subkernel_fn.artiq_embedded.function)
self_arg = []
if len(subkernel_args[0]) > 0:
if subkernel_args[0][0] == 'self':
self_arg = args[:1]
destination = subkernel_fn.artiq_embedded.destination
destination_tgt = self.satellite_cpu_targets[destination]
target = get_target_cls(destination_tgt)(subkernel_id=sid)
object_map, kernel_library, _, _, _ = \
self.compile(subkernel_fn, self_arg, {}, attribute_writeback=False,
print_as_rpc=False, target=target, destination=destination,
subkernel_arg_types=subkernel_arg_types.get(sid, []))
if object_map.has_rpc_or_subkernel():
raise ValueError("Subkernel must not use RPC or subkernels in other destinations")
return destination, kernel_library
def compile_and_upload_subkernels(self, embedding_map, args, subkernel_arg_types):
for sid, subkernel_fn in embedding_map.subkernels().items():
destination, kernel_library = \
self.compile_subkernel(sid, subkernel_fn, embedding_map,
args, subkernel_arg_types)
self.comm.upload_subkernel(kernel_library, sid, destination)
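compile_subkernel() resolves the satellite CPU target through self.satellite_cpu_targets[destination], so the new Core arguments naturally come together in the device database. An illustrative fragment (the destination number and target name are placeholders):

    "core": {
        "type": "local",
        "module": "artiq.coredevice.core",
        "class": "Core",
        "arguments": {
            "host": core_addr,
            "ref_period": 1e-9,
            "analyzer_proxy": "core_analyzer",
            "analyze_at_run_end": False,
            "satellite_cpu_targets": {1: "rv32g"}   # destination -> CPU target
        }
    },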
def precompile(self, function, *args, **kwargs): def precompile(self, function, *args, **kwargs):
"""Precompile a kernel and return a callable that executes it on the core device """Precompile a kernel and return a callable that executes it on the core device
at a later time. at a later time.
@ -153,7 +199,7 @@ class Core:
as additional positional and keyword arguments. as additional positional and keyword arguments.
The returned callable accepts no arguments. The returned callable accepts no arguments.
Precompiled kernels may use RPCs. Precompiled kernels may use RPCs and subkernels.
Object attributes at the beginning of a precompiled kernel execution have the Object attributes at the beginning of a precompiled kernel execution have the
values they had at precompilation time. If up-to-date values are required, values they had at precompilation time. If up-to-date values are required,
@ -178,8 +224,9 @@ class Core:
nonlocal result nonlocal result
result = new_result result = new_result
embedding_map, kernel_library, symbolizer, demangler = \ embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
self.compile(function, args, kwargs, set_result, attribute_writeback=False) self.compile(function, args, kwargs, set_result, attribute_writeback=False)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
@wraps(function) @wraps(function)
def run_precompiled(): def run_precompiled():
@ -255,3 +302,23 @@ class Core:
min_now = rtio_get_counter() + 125000 min_now = rtio_get_counter() + 125000
if now_mu() < min_now: if now_mu() < min_now:
at_mu(min_now) at_mu(min_now)
def trigger_analyzer_proxy(self):
"""Causes the core analyzer proxy to retrieve a dump from the device,
and distribute it to all connected clients (typically dashboards).
Returns only after the dump has been retrieved from the device.
Raises IOError if no analyzer proxy has been configured, or if the
analyzer proxy fails. In the latter case, more details would be
available in the proxy log.
"""
if self.analyzer_proxy is None:
if self.analyzer_proxy_name is not None:
self.analyzer_proxy = self.dmgr.get(self.analyzer_proxy_name)
if self.analyzer_proxy is None:
raise IOError("No analyzer proxy configured")
else:
success = self.analyzer_proxy.trigger()
if not success:
raise IOError("Analyzer proxy reported failure")

View File

@ -630,8 +630,7 @@
"maxItems": 2 "maxItems": 2
}, },
"drtio_destination": { "drtio_destination": {
"type": "integer", "type": "integer"
"default": 4
} }
}, },
"required": ["ports"] "required": ["ports"]

View File

@ -148,6 +148,13 @@ class DMAError(Exception):
artiq_builtin = True artiq_builtin = True
class SubkernelError(Exception):
"""Raised when an operation regarding a subkernel is invalid
or cannot be completed.
"""
artiq_builtin = True
class ClockFailure(Exception): class ClockFailure(Exception):
"""Raised when RTIO PLL has lost lock.""" """Raised when RTIO PLL has lost lock."""

View File

@ -13,6 +13,7 @@ from artiq.gui.entries import procdesc_to_entry, ScanEntry
from artiq.gui.fuzzy_select import FuzzySelectWidget from artiq.gui.fuzzy_select import FuzzySelectWidget
from artiq.gui.tools import (LayoutWidget, WheelFilter, from artiq.gui.tools import (LayoutWidget, WheelFilter,
log_level_to_name, get_open_file_name) log_level_to_name, get_open_file_name)
from artiq.tools import parse_devarg_override, unparse_devarg_override
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -305,7 +306,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
flush = self.flush flush = self.flush
flush.setToolTip("Flush the pipeline (of current- and higher-priority " flush.setToolTip("Flush the pipeline (of current- and higher-priority "
"experiments) before starting the experiment") "experiments) before starting the experiment")
self.layout.addWidget(flush, 2, 2, 1, 2) self.layout.addWidget(flush, 2, 2)
flush.setChecked(scheduling["flush"]) flush.setChecked(scheduling["flush"])
@ -313,6 +314,20 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
scheduling["flush"] = bool(checked) scheduling["flush"] = bool(checked)
flush.stateChanged.connect(update_flush) flush.stateChanged.connect(update_flush)
devarg_override = QtWidgets.QComboBox()
devarg_override.setEditable(True)
devarg_override.lineEdit().setPlaceholderText("Override device arguments")
devarg_override.lineEdit().setClearButtonEnabled(True)
devarg_override.insertItem(0, "core:analyze_at_run_end=True")
self.layout.addWidget(devarg_override, 2, 3)
devarg_override.setCurrentText(options["devarg_override"])
def update_devarg_override(text):
options["devarg_override"] = text
devarg_override.editTextChanged.connect(update_devarg_override)
self.devarg_override = devarg_override
log_level = QtWidgets.QComboBox() log_level = QtWidgets.QComboBox()
log_level.addItems(log_levels) log_level.addItems(log_levels)
log_level.setCurrentIndex(1) log_level.setCurrentIndex(1)
@ -333,6 +348,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
if "repo_rev" in options: if "repo_rev" in options:
repo_rev = QtWidgets.QLineEdit() repo_rev = QtWidgets.QLineEdit()
repo_rev.setPlaceholderText("current") repo_rev.setPlaceholderText("current")
repo_rev.setClearButtonEnabled(True)
repo_rev_label = QtWidgets.QLabel("Revision:") repo_rev_label = QtWidgets.QLabel("Revision:")
repo_rev_label.setToolTip("Experiment repository revision " repo_rev_label.setToolTip("Experiment repository revision "
"(commit ID) to use") "(commit ID) to use")
@ -465,6 +481,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
return return
try: try:
if "devarg_override" in expid:
self.devarg_override.setCurrentText(
unparse_devarg_override(expid["devarg_override"]))
self.log_level.setCurrentIndex(log_levels.index( self.log_level.setCurrentIndex(log_levels.index(
log_level_to_name(expid["log_level"]))) log_level_to_name(expid["log_level"])))
if ("repo_rev" in expid and if ("repo_rev" in expid and
@ -638,7 +657,8 @@ class ExperimentManager:
else: else:
# mutated by _ExperimentDock # mutated by _ExperimentDock
options = { options = {
"log_level": logging.WARNING "log_level": logging.WARNING,
"devarg_override": ""
} }
if expurl[:5] == "repo:": if expurl[:5] == "repo:":
options["repo_rev"] = None options["repo_rev"] = None
@ -733,7 +753,14 @@ class ExperimentManager:
entry_cls = procdesc_to_entry(argument["desc"]) entry_cls = procdesc_to_entry(argument["desc"])
argument_values[name] = entry_cls.state_to_value(argument["state"]) argument_values[name] = entry_cls.state_to_value(argument["state"])
try:
devarg_override = parse_devarg_override(options["devarg_override"])
except:
logger.error("Failed to parse device argument overrides for %s", expurl)
return
expid = { expid = {
"devarg_override": devarg_override,
"log_level": options["log_level"], "log_level": options["log_level"],
"file": file, "file": file,
"class_name": class_name, "class_name": class_name,

View File

@ -7,7 +7,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-9} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -22,6 +26,13 @@ device_db = {
"port": 1384, "port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
}, },
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",

View File

@ -5,7 +5,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1/(8*150e6)} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -20,6 +24,13 @@ device_db = {
"port": 1384, "port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
}, },
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",

View File

@ -5,7 +5,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-9} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -20,6 +24,13 @@ device_db = {
"port": 1384, "port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
}, },
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",

View File

@ -9,7 +9,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-9} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -24,6 +28,13 @@ device_db = {
"port": 1384, "port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
}, },
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",

View File

@ -13,6 +13,12 @@ dependencies = [
name = "alloc_list" name = "alloc_list"
version = "0.0.0" version = "0.0.0"
[[package]]
name = "arrayvec"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96d30a06541fbafbc7f82ed10c06164cfbd2c401138f6addd8404629c4b16711"
[[package]] [[package]]
name = "bare-metal" name = "bare-metal"
version = "0.2.5" version = "0.2.5"
@ -332,6 +338,7 @@ dependencies = [
"proto_artiq", "proto_artiq",
"riscv", "riscv",
"smoltcp", "smoltcp",
"tar-no-std",
"unwind_backtrace", "unwind_backtrace",
] ]
@ -352,6 +359,9 @@ dependencies = [
"board_artiq", "board_artiq",
"board_misoc", "board_misoc",
"build_misoc", "build_misoc",
"cslice",
"eh",
"io",
"log", "log",
"proto_artiq", "proto_artiq",
"riscv", "riscv",
@ -413,6 +423,16 @@ dependencies = [
"syn", "syn",
] ]
[[package]]
name = "tar-no-std"
version = "0.1.8"
source = "git+https://git.m-labs.hk/M-Labs/tar-no-std?rev=2ab6dc5#2ab6dc58e5249c59c4eb03eaf3a119bcdd678d32"
dependencies = [
"arrayvec",
"bitflags",
"log",
]
[[package]] [[package]]
name = "unicode-xid" name = "unicode-xid"
version = "0.0.4" version = "0.0.4"

View File

@ -6,7 +6,7 @@ ENTRY(_reset_handler)
* ld does not allow this expression here. * ld does not allow this expression here.
*/ */
MEMORY { MEMORY {
runtime (RWX) : ORIGIN = 0x40000000, LENGTH = 0x4000000 /* 64M */ firmware (RWX) : ORIGIN = 0x40000000, LENGTH = 0x4000000 /* 64M */
} }
SECTIONS SECTIONS
@ -14,24 +14,24 @@ SECTIONS
.vectors : .vectors :
{ {
*(.vectors) *(.vectors)
} > runtime } > firmware
.text : .text :
{ {
*(.text .text.*) *(.text .text.*)
} > runtime } > firmware
.eh_frame : .eh_frame :
{ {
__eh_frame_start = .; __eh_frame_start = .;
KEEP(*(.eh_frame)) KEEP(*(.eh_frame))
__eh_frame_end = .; __eh_frame_end = .;
} > runtime } > firmware
.eh_frame_hdr : .eh_frame_hdr :
{ {
KEEP(*(.eh_frame_hdr)) KEEP(*(.eh_frame_hdr))
} > runtime } > firmware
__eh_frame_hdr_start = SIZEOF(.eh_frame_hdr) > 0 ? ADDR(.eh_frame_hdr) : 0; __eh_frame_hdr_start = SIZEOF(.eh_frame_hdr) > 0 ? ADDR(.eh_frame_hdr) : 0;
__eh_frame_hdr_end = SIZEOF(.eh_frame_hdr) > 0 ? . : 0; __eh_frame_hdr_end = SIZEOF(.eh_frame_hdr) > 0 ? . : 0;
@ -39,35 +39,35 @@ SECTIONS
.gcc_except_table : .gcc_except_table :
{ {
*(.gcc_except_table) *(.gcc_except_table)
} > runtime } > firmware
/* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */ /* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */
.got : .got :
{ {
*(.got) *(.got)
} > runtime } > firmware
.got.plt : .got.plt :
{ {
*(.got.plt) *(.got.plt)
} > runtime } > firmware
.rodata : .rodata :
{ {
*(.rodata .rodata.*) *(.rodata .rodata.*)
} > runtime } > firmware
.data : .data :
{ {
*(.data .data.*) *(.data .data.*)
} > runtime } > firmware
.bss (NOLOAD) : ALIGN(4) .bss (NOLOAD) : ALIGN(4)
{ {
_fbss = .; _fbss = .;
*(.sbss .sbss.* .bss .bss.*); *(.sbss .sbss.* .bss .bss.*);
_ebss = .; _ebss = .;
} > runtime } > firmware
.stack (NOLOAD) : ALIGN(0x1000) .stack (NOLOAD) : ALIGN(0x1000)
{ {
@ -76,12 +76,12 @@ SECTIONS
_estack = .; _estack = .;
. += 0x10000; . += 0x10000;
_fstack = . - 16; _fstack = . - 16;
} > runtime } > firmware
.heap (NOLOAD) : ALIGN(16) .heap (NOLOAD) : ALIGN(16)
{ {
_fheap = .; _fheap = .;
. = ORIGIN(runtime) + LENGTH(runtime); . = ORIGIN(firmware) + LENGTH(firmware);
_eheap = .; _eheap = .;
} > runtime } > firmware
} }

View File

@ -157,6 +157,11 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(dma_retrieve = ::dma_retrieve), api!(dma_retrieve = ::dma_retrieve),
api!(dma_playback = ::dma_playback), api!(dma_playback = ::dma_playback),
api!(subkernel_load_run = ::subkernel_load_run),
api!(subkernel_send_message = ::subkernel_send_message),
api!(subkernel_await_message = ::subkernel_await_message),
api!(subkernel_await_finish = ::subkernel_await_finish),
api!(i2c_start = ::nrt_bus::i2c::start), api!(i2c_start = ::nrt_bus::i2c::start),
api!(i2c_restart = ::nrt_bus::i2c::restart), api!(i2c_restart = ::nrt_bus::i2c::restart),
api!(i2c_stop = ::nrt_bus::i2c::stop), api!(i2c_stop = ::nrt_bus::i2c::stop),

View File

@ -333,7 +333,7 @@ extern fn stop_fn(_version: c_int,
} }
} }
static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [ static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
("RuntimeError", 0), ("RuntimeError", 0),
("RTIOUnderflow", 1), ("RTIOUnderflow", 1),
("RTIOOverflow", 2), ("RTIOOverflow", 2),
@ -345,6 +345,7 @@ static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [
("ZeroDivisionError", 8), ("ZeroDivisionError", 8),
("IndexError", 9), ("IndexError", 9),
("UnwrapNoneError", 10), ("UnwrapNoneError", 10),
("SubkernelError", 11)
]; ];
pub fn get_exception_id(name: &str) -> u32 { pub fn get_exception_id(name: &str) -> u32 {

View File

@ -21,7 +21,6 @@ use dyld::Library;
use board_artiq::{mailbox, rpc_queue}; use board_artiq::{mailbox, rpc_queue};
use proto_artiq::{kernel_proto, rpc_proto}; use proto_artiq::{kernel_proto, rpc_proto};
use kernel_proto::*; use kernel_proto::*;
#[cfg(has_rtio_dma)]
use board_misoc::csr; use board_misoc::csr;
use riscv::register::{mcause, mepc, mtval}; use riscv::register::{mcause, mepc, mtval};
@ -139,7 +138,7 @@ extern fn rpc_send_async(service: u32, tag: &CSlice<u8>, data: *const *const ())
rpc_queue::enqueue(|mut slice| { rpc_queue::enqueue(|mut slice| {
let length = { let length = {
let mut writer = Cursor::new(&mut slice[4..]); let mut writer = Cursor::new(&mut slice[4..]);
rpc_proto::send_args(&mut writer, service, tag.as_ref(), data)?; rpc_proto::send_args(&mut writer, service, tag.as_ref(), data, true)?;
writer.position() writer.position()
}; };
io::ProtoWrite::write_u32(&mut slice, length as u32) io::ProtoWrite::write_u32(&mut slice, length as u32)
@ -396,7 +395,7 @@ extern fn dma_retrieve(name: &CSlice<u8>) -> DmaTrace {
}) })
} }
#[cfg(has_rtio_dma)] #[cfg(kernel_has_rtio_dma)]
#[unwind(allowed)] #[unwind(allowed)]
extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) { extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) {
assert!(ptr % 64 == 0); assert!(ptr % 64 == 0);
@ -454,10 +453,74 @@ extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) {
} }
} }
#[cfg(not(has_rtio_dma))] #[cfg(not(kernel_has_rtio_dma))]
#[unwind(allowed)] #[unwind(allowed)]
extern fn dma_playback(_timestamp: i64, _ptr: i32, _uses_ddma: bool) { extern fn dma_playback(_timestamp: i64, _ptr: i32, _uses_ddma: bool) {
unimplemented!("not(has_rtio_dma)") unimplemented!("not(kernel_has_rtio_dma)")
}
#[unwind(allowed)]
extern fn subkernel_load_run(id: u32, run: bool) {
send(&SubkernelLoadRunRequest { id: id, run: run });
recv!(&SubkernelLoadRunReply { succeeded } => {
if !succeeded {
raise!("SubkernelError",
"Error loading or running the subkernel");
}
});
}
#[unwind(allowed)]
extern fn subkernel_await_finish(id: u32, timeout: u64) {
send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout });
recv!(SubkernelAwaitFinishReply { status } => {
match status {
SubkernelStatus::NoError => (),
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
}
})
}
#[unwind(aborts)]
extern fn subkernel_send_message(id: u32, count: u8, tag: &CSlice<u8>, data: *const *const ()) {
send(&SubkernelMsgSend {
id: id,
count: count,
tag: tag.as_ref(),
data: data
});
}
#[unwind(allowed)]
extern fn subkernel_await_message(id: u32, timeout: u64, tags: &CSlice<u8>, min: u8, max: u8) -> u8 {
send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() });
recv!(SubkernelMsgRecvReply { status, count } => {
match status {
SubkernelStatus::NoError => {
if count < &min || count > &max {
raise!("SubkernelError",
"Received less or more arguments than expected");
}
*count
}
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
}
})
// RpcRecvRequest should be called `count` times after this to receive message data
} }
unsafe fn attribute_writeback(typeinfo: *const ()) { unsafe fn attribute_writeback(typeinfo: *const ()) {

View File

@ -14,8 +14,51 @@ impl<T> From<IoError<T>> for Error<T> {
} }
} }
pub const DMA_TRACE_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*trace ID*/4 - /*last*/1 -/*length*/2;
pub const ANALYZER_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*last*/1 - /*length*/2;

// maximum size of arbitrary payloads
// used by satellite -> master analyzer, subkernel exceptions
pub const SAT_PAYLOAD_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*last*/1 - /*length*/2;
// used by DDMA, subkernel program data (need to provide extra ID and destination)
pub const MASTER_PAYLOAD_MAX_SIZE: usize = SAT_PAYLOAD_MAX_SIZE - /*destination*/1 - /*ID*/4;
#[derive(PartialEq, Clone, Copy, Debug)]
#[repr(u8)]
pub enum PayloadStatus {
Middle = 0,
First = 1,
Last = 2,
FirstAndLast = 3,
}
impl From<u8> for PayloadStatus {
fn from(value: u8) -> PayloadStatus {
match value {
0 => PayloadStatus::Middle,
1 => PayloadStatus::First,
2 => PayloadStatus::Last,
3 => PayloadStatus::FirstAndLast,
_ => unreachable!(),
}
}
}
impl PayloadStatus {
pub fn is_first(self) -> bool {
self == PayloadStatus::First || self == PayloadStatus::FirstAndLast
}
pub fn is_last(self) -> bool {
self == PayloadStatus::Last || self == PayloadStatus::FirstAndLast
}
pub fn from_status(first: bool, last: bool) -> PayloadStatus {
match (first, last) {
(true, true) => PayloadStatus::FirstAndLast,
(true, false) => PayloadStatus::First,
(false, true) => PayloadStatus::Last,
(false, false) => PayloadStatus::Middle
}
}
}
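The discriminants double as a two-bit encoding, bit 0 for "first fragment" and bit 1 for "last fragment", which is what from_status() spells out with its match. The same mapping in Python, for illustration:

    def payload_status(first, last):
        # Middle=0, First=1, Last=2, FirstAndLast=3, matching the Rust enum above
        return (1 if first else 0) | (2 if last else 0)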
#[derive(PartialEq, Debug)] #[derive(PartialEq, Debug)]
pub enum Packet { pub enum Packet {
@ -61,15 +104,25 @@ pub enum Packet {
AnalyzerHeaderRequest { destination: u8 }, AnalyzerHeaderRequest { destination: u8 },
AnalyzerHeader { sent_bytes: u32, total_byte_count: u64, overflow_occurred: bool }, AnalyzerHeader { sent_bytes: u32, total_byte_count: u64, overflow_occurred: bool },
AnalyzerDataRequest { destination: u8 }, AnalyzerDataRequest { destination: u8 },
AnalyzerData { last: bool, length: u16, data: [u8; ANALYZER_MAX_SIZE]}, AnalyzerData { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE]},
DmaAddTraceRequest { destination: u8, id: u32, last: bool, length: u16, trace: [u8; DMA_TRACE_MAX_SIZE] }, DmaAddTraceRequest { destination: u8, id: u32, status: PayloadStatus, length: u16, trace: [u8; MASTER_PAYLOAD_MAX_SIZE] },
DmaAddTraceReply { succeeded: bool }, DmaAddTraceReply { succeeded: bool },
DmaRemoveTraceRequest { destination: u8, id: u32 }, DmaRemoveTraceRequest { destination: u8, id: u32 },
DmaRemoveTraceReply { succeeded: bool }, DmaRemoveTraceReply { succeeded: bool },
DmaPlaybackRequest { destination: u8, id: u32, timestamp: u64 }, DmaPlaybackRequest { destination: u8, id: u32, timestamp: u64 },
DmaPlaybackReply { succeeded: bool }, DmaPlaybackReply { succeeded: bool },
DmaPlaybackStatus { destination: u8, id: u32, error: u8, channel: u32, timestamp: u64 } DmaPlaybackStatus { destination: u8, id: u32, error: u8, channel: u32, timestamp: u64 },
SubkernelAddDataRequest { destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelAddDataReply { succeeded: bool },
SubkernelLoadRunRequest { destination: u8, id: u32, run: bool },
SubkernelLoadRunReply { succeeded: bool },
SubkernelFinished { id: u32, with_exception: bool },
SubkernelExceptionRequest { destination: u8 },
SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
SubkernelMessage { destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelMessageAck { destination: u8 },
} }
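For orientation, the new subkernel variants map onto aux opcodes 0xc0-0xcc in the read_from/write_to hunks below; a sketch of one of them:

// write_to() below encodes this as 0xc4, destination (u8), id (u32), run (bool),
// and read_from() decodes the same bytes back into the variant.
let request = Packet::SubkernelLoadRunRequest { destination: 1, id: 42, run: true };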
impl Packet { impl Packet {
@ -215,7 +268,7 @@ impl Packet {
0xa3 => { 0xa3 => {
let last = reader.read_bool()?; let last = reader.read_bool()?;
let length = reader.read_u16()?; let length = reader.read_u16()?;
let mut data: [u8; ANALYZER_MAX_SIZE] = [0; ANALYZER_MAX_SIZE]; let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?; reader.read_exact(&mut data[0..length as usize])?;
Packet::AnalyzerData { Packet::AnalyzerData {
last: last, last: last,
@ -227,14 +280,14 @@ impl Packet {
0xb0 => { 0xb0 => {
let destination = reader.read_u8()?; let destination = reader.read_u8()?;
let id = reader.read_u32()?; let id = reader.read_u32()?;
let last = reader.read_bool()?; let status = reader.read_u8()?;
let length = reader.read_u16()?; let length = reader.read_u16()?;
let mut trace: [u8; DMA_TRACE_MAX_SIZE] = [0; DMA_TRACE_MAX_SIZE]; let mut trace: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut trace[0..length as usize])?; reader.read_exact(&mut trace[0..length as usize])?;
Packet::DmaAddTraceRequest { Packet::DmaAddTraceRequest {
destination: destination, destination: destination,
id: id, id: id,
last: last, status: PayloadStatus::from(status),
length: length as u16, length: length as u16,
trace: trace, trace: trace,
} }
@ -265,6 +318,69 @@ impl Packet {
timestamp: reader.read_u64()? timestamp: reader.read_u64()?
}, },
0xc0 => {
let destination = reader.read_u8()?;
let id = reader.read_u32()?;
let status = reader.read_u8()?;
let length = reader.read_u16()?;
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelAddDataRequest {
destination: destination,
id: id,
status: PayloadStatus::from(status),
length: length as u16,
data: data,
}
},
0xc1 => Packet::SubkernelAddDataReply {
succeeded: reader.read_bool()?
},
0xc4 => Packet::SubkernelLoadRunRequest {
destination: reader.read_u8()?,
id: reader.read_u32()?,
run: reader.read_bool()?
},
0xc5 => Packet::SubkernelLoadRunReply {
succeeded: reader.read_bool()?
},
0xc8 => Packet::SubkernelFinished {
id: reader.read_u32()?,
with_exception: reader.read_bool()?,
},
0xc9 => Packet::SubkernelExceptionRequest {
destination: reader.read_u8()?
},
0xca => {
let last = reader.read_bool()?;
let length = reader.read_u16()?;
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelException {
last: last,
length: length,
data: data
}
},
0xcb => {
let destination = reader.read_u8()?;
let id = reader.read_u32()?;
let status = reader.read_u8()?;
let length = reader.read_u16()?;
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelMessage {
destination: destination,
id: id,
status: PayloadStatus::from(status),
length: length as u16,
data: data,
}
},
0xcc => Packet::SubkernelMessageAck {
destination: reader.read_u8()?
},
ty => return Err(Error::UnknownPacket(ty)) ty => return Err(Error::UnknownPacket(ty))
}) })
} }
@ -445,11 +561,11 @@ impl Packet {
writer.write_all(&data[0..length as usize])?; writer.write_all(&data[0..length as usize])?;
}, },
Packet::DmaAddTraceRequest { destination, id, last, trace, length } => { Packet::DmaAddTraceRequest { destination, id, status, trace, length } => {
writer.write_u8(0xb0)?; writer.write_u8(0xb0)?;
writer.write_u8(destination)?; writer.write_u8(destination)?;
writer.write_u32(id)?; writer.write_u32(id)?;
writer.write_bool(last)?; writer.write_u8(status as u8)?;
// trace may be broken down to fit within drtio aux memory limit // trace may be broken down to fit within drtio aux memory limit
// will be reconstructed by satellite // will be reconstructed by satellite
writer.write_u16(length)?; writer.write_u16(length)?;
@ -485,7 +601,57 @@ impl Packet {
writer.write_u8(error)?; writer.write_u8(error)?;
writer.write_u32(channel)?; writer.write_u32(channel)?;
writer.write_u64(timestamp)?; writer.write_u64(timestamp)?;
} },
Packet::SubkernelAddDataRequest { destination, id, status, data, length } => {
writer.write_u8(0xc0)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(status as u8)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelAddDataReply { succeeded } => {
writer.write_u8(0xc1)?;
writer.write_bool(succeeded)?;
},
Packet::SubkernelLoadRunRequest { destination, id, run } => {
writer.write_u8(0xc4)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_bool(run)?;
},
Packet::SubkernelLoadRunReply { succeeded } => {
writer.write_u8(0xc5)?;
writer.write_bool(succeeded)?;
},
Packet::SubkernelFinished { id, with_exception } => {
writer.write_u8(0xc8)?;
writer.write_u32(id)?;
writer.write_bool(with_exception)?;
},
Packet::SubkernelExceptionRequest { destination } => {
writer.write_u8(0xc9)?;
writer.write_u8(destination)?;
},
Packet::SubkernelException { last, length, data } => {
writer.write_u8(0xca)?;
writer.write_bool(last)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelMessage { destination, id, status, data, length } => {
writer.write_u8(0xcb)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(status as u8)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelMessageAck { destination } => {
writer.write_u8(0xcc)?;
writer.write_u8(destination)?;
},
} }
Ok(()) Ok(())
} }


@ -10,6 +10,15 @@ pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff;
// section in ksupport.elf. // section in ksupport.elf.
pub const KSUPPORT_HEADER_SIZE: usize = 0x74; pub const KSUPPORT_HEADER_SIZE: usize = 0x74;
#[derive(Debug)]
pub enum SubkernelStatus {
NoError,
Timeout,
IncorrectState,
CommLost,
OtherError
}
#[derive(Debug)] #[derive(Debug)]
pub enum Message<'a> { pub enum Message<'a> {
LoadRequest(&'a [u8]), LoadRequest(&'a [u8]),
@ -94,6 +103,14 @@ pub enum Message<'a> {
SpiReadReply { succeeded: bool, data: u32 }, SpiReadReply { succeeded: bool, data: u32 },
SpiBasicReply { succeeded: bool }, SpiBasicReply { succeeded: bool },
SubkernelLoadRunRequest { id: u32, run: bool },
SubkernelLoadRunReply { succeeded: bool },
SubkernelAwaitFinishRequest { id: u32, timeout: u64 },
SubkernelAwaitFinishReply { status: SubkernelStatus },
SubkernelMsgSend { id: u32, count: u8, tag: &'a [u8], data: *const *const () },
SubkernelMsgRecvRequest { id: u32, timeout: u64, tags: &'a [u8] },
SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 },
Log(fmt::Arguments<'a>), Log(fmt::Arguments<'a>),
LogSlice(&'a str) LogSlice(&'a str)
} }
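These kernel-CPU messages mirror the DRTIO aux packets added above; a rough mapping, inferred from the code elsewhere in this diff:

// SubkernelLoadRunRequest/Reply     <-> Packet::SubkernelLoadRunRequest/Reply, forwarded to the satellite
// SubkernelAwaitFinishRequest/Reply <-> completion signalled asynchronously by Packet::SubkernelFinished
// SubkernelMsgSend                  <-> Packet::SubkernelMessage, chunked with PayloadStatus
// SubkernelMsgRecvRequest/Reply     <-> served from the message queue fed by incoming Packet::SubkernelMessage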


@ -190,9 +190,9 @@ unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
} }
} }
pub fn recv_return<R, E>(reader: &mut R, tag_bytes: &[u8], data: *mut (), pub fn recv_return<'a, R, E>(reader: &mut R, tag_bytes: &'a [u8], data: *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>) alloc: &dyn Fn(usize) -> Result<*mut (), E>)
-> Result<(), E> -> Result<&'a [u8], E>
where R: Read + ?Sized, where R: Read + ?Sized,
E: From<Error<R::ReadError>> E: From<Error<R::ReadError>>
{ {
@ -204,14 +204,16 @@ pub fn recv_return<R, E>(reader: &mut R, tag_bytes: &[u8], data: *mut (),
let mut data = data; let mut data = data;
unsafe { recv_value(reader, tag, &mut data, alloc)? }; unsafe { recv_value(reader, tag, &mut data, alloc)? };
Ok(()) Ok(it.data)
} }
unsafe fn send_elements<W>(writer: &mut W, elt_tag: Tag, length: usize, data: *const ()) unsafe fn send_elements<W>(writer: &mut W, elt_tag: Tag, length: usize, data: *const (), write_tags: bool)
-> Result<(), Error<W::WriteError>> -> Result<(), Error<W::WriteError>>
where W: Write + ?Sized where W: Write + ?Sized
{ {
writer.write_u8(elt_tag.as_u8())?; if write_tags {
writer.write_u8(elt_tag.as_u8())?;
}
match elt_tag { match elt_tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable, // we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian // and that is not needed as the data is already in native endian
@ -230,14 +232,14 @@ unsafe fn send_elements<W>(writer: &mut W, elt_tag: Tag, length: usize, data: *c
_ => { _ => {
let mut data = data; let mut data = data;
for _ in 0..length { for _ in 0..length {
send_value(writer, elt_tag, &mut data)?; send_value(writer, elt_tag, &mut data, write_tags)?;
} }
} }
} }
Ok(()) Ok(())
} }
unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ()) unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_tags: bool)
-> Result<(), Error<W::WriteError>> -> Result<(), Error<W::WriteError>>
where W: Write + ?Sized where W: Write + ?Sized
{ {
@ -248,8 +250,9 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
$map $map
}) })
} }
if write_tags {
writer.write_u8(tag.as_u8())?; writer.write_u8(tag.as_u8())?;
}
match tag { match tag {
Tag::None => Ok(()), Tag::None => Ok(()),
Tag::Bool => Tag::Bool =>
@ -269,12 +272,14 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
writer.write_bytes((*ptr).as_ref())), writer.write_bytes((*ptr).as_ref())),
Tag::Tuple(it, arity) => { Tag::Tuple(it, arity) => {
let mut it = it.clone(); let mut it = it.clone();
writer.write_u8(arity)?; if write_tags {
writer.write_u8(arity)?;
}
let mut max_alignment = 0; let mut max_alignment = 0;
for _ in 0..arity { for _ in 0..arity {
let tag = it.next().expect("truncated tag"); let tag = it.next().expect("truncated tag");
max_alignment = core::cmp::max(max_alignment, tag.alignment()); max_alignment = core::cmp::max(max_alignment, tag.alignment());
send_value(writer, tag, data)? send_value(writer, tag, data, write_tags)?
} }
*data = round_up_const(*data, max_alignment); *data = round_up_const(*data, max_alignment);
Ok(()) Ok(())
@ -286,11 +291,13 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
let length = (**ptr).length as usize; let length = (**ptr).length as usize;
writer.write_u32((**ptr).length)?; writer.write_u32((**ptr).length)?;
let tag = it.clone().next().expect("truncated tag"); let tag = it.clone().next().expect("truncated tag");
send_elements(writer, tag, length, (**ptr).elements) send_elements(writer, tag, length, (**ptr).elements, write_tags)
}) })
} }
Tag::Array(it, num_dims) => { Tag::Array(it, num_dims) => {
writer.write_u8(num_dims)?; if write_tags {
writer.write_u8(num_dims)?;
}
consume_value!(*const(), |buffer| { consume_value!(*const(), |buffer| {
let elt_tag = it.clone().next().expect("truncated tag"); let elt_tag = it.clone().next().expect("truncated tag");
@ -302,14 +309,14 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
}) })
} }
let length = total_len as usize; let length = total_len as usize;
send_elements(writer, elt_tag, length, *buffer) send_elements(writer, elt_tag, length, *buffer, write_tags)
}) })
} }
Tag::Range(it) => { Tag::Range(it) => {
let tag = it.clone().next().expect("truncated tag"); let tag = it.clone().next().expect("truncated tag");
send_value(writer, tag, data)?; send_value(writer, tag, data, write_tags)?;
send_value(writer, tag, data)?; send_value(writer, tag, data, write_tags)?;
send_value(writer, tag, data)?; send_value(writer, tag, data, write_tags)?;
Ok(()) Ok(())
} }
Tag::Keyword(it) => { Tag::Keyword(it) => {
@ -319,7 +326,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
writer.write_string(str::from_utf8((*ptr).name.as_ref()).unwrap())?; writer.write_string(str::from_utf8((*ptr).name.as_ref()).unwrap())?;
let tag = it.clone().next().expect("truncated tag"); let tag = it.clone().next().expect("truncated tag");
let mut data = ptr.offset(1) as *const (); let mut data = ptr.offset(1) as *const ();
send_value(writer, tag, &mut data) send_value(writer, tag, &mut data, write_tags)
}) })
// Tag::Keyword never appears in composite types, so we don't have // Tag::Keyword never appears in composite types, so we don't have
// to accurately advance data. // to accurately advance data.
@ -333,7 +340,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
} }
} }
pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const ()) pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const (), write_tags: bool)
-> Result<(), Error<W::WriteError>> -> Result<(), Error<W::WriteError>>
where W: Write + ?Sized where W: Write + ?Sized
{ {
@ -350,7 +357,7 @@ pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const
for index in 0.. { for index in 0.. {
if let Some(arg_tag) = args_it.next() { if let Some(arg_tag) = args_it.next() {
let mut data = unsafe { *data.offset(index) }; let mut data = unsafe { *data.offset(index) };
unsafe { send_value(writer, arg_tag, &mut data)? }; unsafe { send_value(writer, arg_tag, &mut data, write_tags)? };
} else { } else {
break break
} }
@ -482,7 +489,7 @@ mod tag {
#[derive(Debug, Clone, Copy)] #[derive(Debug, Clone, Copy)]
pub struct TagIterator<'a> { pub struct TagIterator<'a> {
data: &'a [u8] pub data: &'a [u8]
} }
impl<'a> TagIterator<'a> { impl<'a> TagIterator<'a> {
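The net effect of the new write_tags flag: with write_tags == true the existing RPC wire format is unchanged (type tags interleaved with values), while false emits only the raw values, which is what subkernel messages need since both sides already know the tags. A hedged sketch of the two call sites:

// Ordinary RPC, tags included (existing behaviour):
// send_args(&mut writer, service, tag_bytes, data, true)?;
// Subkernel message payload, values only (see message_send further down):
// send_args(&mut writer, 0, tag_bytes, data, false)?;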


@ -84,6 +84,8 @@ pub enum Request {
column: u32, column: u32,
function: u32, function: u32,
}, },
UploadSubkernel { id: u32, destination: u8, kernel: Vec<u8> },
} }
#[derive(Debug)] #[derive(Debug)]
@ -137,6 +139,11 @@ impl Request {
column: reader.read_u32()?, column: reader.read_u32()?,
function: reader.read_u32()? function: reader.read_u32()?
}, },
9 => Request::UploadSubkernel {
id: reader.read_u32()?,
destination: reader.read_u8()?,
kernel: reader.read_bytes()?
},
ty => return Err(Error::UnknownPacket(ty)) ty => return Err(Error::UnknownPacket(ty))
}) })


@ -40,3 +40,7 @@ git = "https://git.m-labs.hk/M-Labs/libfringe.git"
rev = "3ecbe5" rev = "3ecbe5"
default-features = false default-features = false
features = ["alloc"] features = ["alloc"]
[dependencies.tar-no-std]
git = "https://git.m-labs.hk/M-Labs/tar-no-std"
rev = "2ab6dc5"


@ -21,7 +21,7 @@ $(RUSTOUT)/libruntime.a:
--target $(RUNTIME_DIRECTORY)/../$(CARGO_TRIPLE).json --target $(RUNTIME_DIRECTORY)/../$(CARGO_TRIPLE).json
runtime.elf: $(RUSTOUT)/libruntime.a ksupport_data.o runtime.elf: $(RUSTOUT)/libruntime.a ksupport_data.o
$(link) -T $(RUNTIME_DIRECTORY)/runtime.ld \ $(link) -T $(RUNTIME_DIRECTORY)/../firmware.ld \
-lunwind-vexriscv-bare -m elf32lriscv -lunwind-vexriscv-bare -m elf32lriscv
ksupport_data.o: ../ksupport/ksupport.elf ksupport_data.o: ../ksupport/ksupport.elf


@ -54,19 +54,16 @@ pub mod remote_analyzer {
pub fn get_data(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, pub fn get_data(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>> up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>
) -> Result<RemoteBuffer, &'static str> { ) -> Result<RemoteBuffer, drtio::Error> {
// gets data from satellites and returns consolidated data // gets data from satellites and returns consolidated data
let mut remote_data: Vec<u8> = Vec::new(); let mut remote_data: Vec<u8> = Vec::new();
let mut remote_overflow = false; let mut remote_overflow = false;
let mut remote_sent_bytes = 0; let mut remote_sent_bytes = 0;
let mut remote_total_bytes = 0; let mut remote_total_bytes = 0;
let data_vec = match drtio::analyzer_query( let data_vec = drtio::analyzer_query(
io, aux_mutex, routing_table, up_destinations io, aux_mutex, routing_table, up_destinations
) { )?;
Ok(data_vec) => data_vec,
Err(e) => return Err(e)
};
for data in data_vec { for data in data_vec {
remote_total_bytes += data.total_byte_count; remote_total_bytes += data.total_byte_count;
remote_sent_bytes += data.sent_bytes; remote_sent_bytes += data.sent_bytes;


@ -32,7 +32,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }
@ -55,7 +55,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }
@ -78,7 +78,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }
@ -102,7 +102,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }
@ -126,7 +126,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }
@ -151,7 +151,7 @@ mod remote_i2c {
} }
Err(e) => { Err(e) => {
error!("aux packet error ({})", e); error!("aux packet error ({})", e);
Err(e) Err("aux packet error")
} }
} }
} }


@ -87,3 +87,340 @@ unsafe fn load_image(image: &[u8]) -> Result<(), &'static str> {
pub fn validate(ptr: usize) -> bool { pub fn validate(ptr: usize) -> bool {
ptr >= KERNELCPU_EXEC_ADDRESS && ptr <= KERNELCPU_LAST_ADDRESS ptr >= KERNELCPU_EXEC_ADDRESS && ptr <= KERNELCPU_LAST_ADDRESS
} }
#[cfg(has_drtio)]
pub mod subkernel {
use alloc::{vec::Vec, collections::btree_map::BTreeMap};
use board_artiq::drtio_routing::RoutingTable;
use board_misoc::clock;
use proto_artiq::{drtioaux_proto::{PayloadStatus, MASTER_PAYLOAD_MAX_SIZE}, rpc_proto as rpc};
use io::Cursor;
use rtio_mgt::drtio;
use sched::{Io, Mutex, Error as SchedError};
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum FinishStatus {
Ok,
CommLost,
Exception
}
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum SubkernelState {
NotLoaded,
Uploaded,
Running,
Finished { status: FinishStatus },
}
#[derive(Fail, Debug)]
pub enum Error {
#[fail(display = "Timed out waiting for subkernel")]
Timeout,
#[fail(display = "Subkernel is in incorrect state for the given operation")]
IncorrectState,
#[fail(display = "DRTIO error: {}", _0)]
DrtioError(#[cause] drtio::Error),
#[fail(display = "scheduler error: {}", _0)]
SchedError(#[cause] SchedError),
#[fail(display = "rpc io error")]
RpcIoError,
#[fail(display = "subkernel finished prematurely")]
SubkernelFinished,
}
impl From<drtio::Error> for Error {
fn from(value: drtio::Error) -> Error {
match value {
drtio::Error::SchedError(x) => Error::SchedError(x),
x => Error::DrtioError(x),
}
}
}
impl From<SchedError> for Error {
fn from(value: SchedError) -> Error {
Error::SchedError(value)
}
}
impl From<io::Error<!>> for Error {
fn from(_value: io::Error<!>) -> Error {
Error::RpcIoError
}
}
pub struct SubkernelFinished {
pub id: u32,
pub comm_lost: bool,
pub exception: Option<Vec<u8>>
}
struct Subkernel {
pub destination: u8,
pub data: Vec<u8>,
pub state: SubkernelState
}
impl Subkernel {
pub fn new(destination: u8, data: Vec<u8>) -> Self {
Subkernel {
destination: destination,
data: data,
state: SubkernelState::NotLoaded
}
}
}
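// State flow implemented by the functions below:
//   NotLoaded --upload()--> Uploaded --load(run=true)--> Running
//   Running --SubkernelFinished packet--> Finished { status }
//   Finished --retrieve_finish_status()--> Uploaded (ready to be run again)
//   destination down: Running -> Finished { CommLost }, anything else -> NotLoaded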
static mut SUBKERNELS: BTreeMap<u32, Subkernel> = BTreeMap::new();
pub fn add_subkernel(io: &Io, subkernel_mutex: &Mutex, id: u32, destination: u8, kernel: Vec<u8>) -> Result<(), Error> {
let _lock = subkernel_mutex.lock(io)?;
unsafe { SUBKERNELS.insert(id, Subkernel::new(destination, kernel)); }
Ok(())
}
pub fn upload(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, id: u32) -> Result<(), Error> {
let _lock = subkernel_mutex.lock(io)?;
let subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() };
drtio::subkernel_upload(io, aux_mutex, routing_table, id,
subkernel.destination, &subkernel.data)?;
subkernel.state = SubkernelState::Uploaded;
Ok(())
}
pub fn load(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &RoutingTable,
id: u32, run: bool) -> Result<(), Error> {
let _lock = subkernel_mutex.lock(io)?;
let subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() };
if subkernel.state != SubkernelState::Uploaded {
error!("for id: {} expected Uploaded, got: {:?}", id, subkernel.state);
return Err(Error::IncorrectState);
}
drtio::subkernel_load(io, aux_mutex, routing_table, id, subkernel.destination, run)?;
if run {
subkernel.state = SubkernelState::Running;
}
Ok(())
}
pub fn clear_subkernels(io: &Io, subkernel_mutex: &Mutex) -> Result<(), Error> {
let _lock = subkernel_mutex.lock(io)?;
unsafe {
SUBKERNELS = BTreeMap::new();
MESSAGE_QUEUE = Vec::new();
CURRENT_MESSAGES = BTreeMap::new();
}
Ok(())
}
pub fn subkernel_finished(io: &Io, subkernel_mutex: &Mutex, id: u32, with_exception: bool) {
// called upon receiving DRTIO SubkernelRunDone
let _lock = subkernel_mutex.lock(io).unwrap();
let subkernel = unsafe { SUBKERNELS.get_mut(&id) };
// may be None if session ends and is cleared
if let Some(subkernel) = subkernel {
// ignore other messages, could be a late finish reported
if subkernel.state == SubkernelState::Running {
subkernel.state = SubkernelState::Finished {
status: match with_exception {
true => FinishStatus::Exception,
false => FinishStatus::Ok,
}
}
}
}
}
pub fn destination_changed(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, destination: u8, up: bool) {
let _lock = subkernel_mutex.lock(io).unwrap();
let subkernels_iter = unsafe { SUBKERNELS.iter_mut() };
for (id, subkernel) in subkernels_iter {
if subkernel.destination == destination {
if up {
match drtio::subkernel_upload(io, aux_mutex, routing_table, *id, destination, &subkernel.data)
{
Ok(_) => subkernel.state = SubkernelState::Uploaded,
Err(e) => error!("Error adding subkernel on destination {}: {}", destination, e)
}
} else {
subkernel.state = match subkernel.state {
SubkernelState::Running => SubkernelState::Finished { status: FinishStatus::CommLost },
_ => SubkernelState::NotLoaded,
}
}
}
}
}
pub fn retrieve_finish_status(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, id: u32) -> Result<SubkernelFinished, Error> {
let _lock = subkernel_mutex.lock(io)?;
let mut subkernel = unsafe { SUBKERNELS.get_mut(&id).unwrap() };
match subkernel.state {
SubkernelState::Finished { status } => {
subkernel.state = SubkernelState::Uploaded;
Ok(SubkernelFinished {
id: id,
comm_lost: status == FinishStatus::CommLost,
exception: if status == FinishStatus::Exception {
Some(drtio::subkernel_retrieve_exception(io, aux_mutex,
routing_table, subkernel.destination)?)
} else { None }
})
},
_ => {
Err(Error::IncorrectState)
}
}
}
pub fn await_finish(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, id: u32, timeout: u64) -> Result<SubkernelFinished, Error> {
{
let _lock = subkernel_mutex.lock(io)?;
match unsafe { SUBKERNELS.get(&id).unwrap().state } {
SubkernelState::Running | SubkernelState::Finished { .. } => (),
_ => {
return Err(Error::IncorrectState);
}
}
}
let max_time = clock::get_ms() + timeout as u64;
let _res = io.until(|| {
if clock::get_ms() > max_time {
return true;
}
if subkernel_mutex.test_lock() {
// cannot lock again within io.until - scheduler guarantees
// that it will not be interrupted - so only test the lock
return false;
}
let subkernel = unsafe { SUBKERNELS.get(&id).unwrap() };
match subkernel.state {
SubkernelState::Finished { .. } => true,
_ => false
}
})?;
if clock::get_ms() > max_time {
error!("Remote subkernel finish await timed out");
return Err(Error::Timeout);
}
retrieve_finish_status(io, aux_mutex, subkernel_mutex, routing_table, id)
}
pub struct Message {
from_id: u32,
pub count: u8,
pub data: Vec<u8>
}
// FIFO queue of messages
static mut MESSAGE_QUEUE: Vec<Message> = Vec::new();
// currently under construction message(s) (can be from multiple sources)
static mut CURRENT_MESSAGES: BTreeMap<u32, Message> = BTreeMap::new();
pub fn message_handle_incoming(io: &Io, subkernel_mutex: &Mutex,
id: u32, status: PayloadStatus, length: usize, data: &[u8; MASTER_PAYLOAD_MAX_SIZE]) {
// called when receiving a message from satellite
let _lock = match subkernel_mutex.lock(io) {
Ok(lock) => lock,
// may get interrupted, when session is cancelled or main kernel finishes without await
Err(_) => return,
};
let subkernel = unsafe { SUBKERNELS.get(&id) };
if subkernel.is_none() || subkernel.unwrap().state != SubkernelState::Running {
// do not add messages for non-existing, non-running or deleted subkernels
return
}
if status.is_first() {
unsafe {
CURRENT_MESSAGES.remove(&id);
}
}
match unsafe { CURRENT_MESSAGES.get_mut(&id) } {
Some(message) => message.data.extend(&data[..length]),
None => unsafe {
CURRENT_MESSAGES.insert(id, Message {
from_id: id,
count: data[0],
data: data[1..length].to_vec()
});
}
};
if status.is_last() {
unsafe {
// when done, remove from working queue
MESSAGE_QUEUE.push(CURRENT_MESSAGES.remove(&id).unwrap());
};
}
}
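// Reassembly notes: the first chunk's byte 0 carries the message's tag count and the
// rest of that chunk starts the payload; later chunks are appended verbatim. Once the
// chunk marked Last (or FirstAndLast) arrives, the message moves from CURRENT_MESSAGES
// into MESSAGE_QUEUE, where message_await() below picks it up by subkernel id.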
pub fn message_await(io: &Io, subkernel_mutex: &Mutex, id: u32, timeout: u64
) -> Result<Message, Error> {
{
let _lock = subkernel_mutex.lock(io)?;
match unsafe { SUBKERNELS.get(&id).unwrap().state } {
SubkernelState::Finished { .. } => return Err(Error::SubkernelFinished),
SubkernelState::Running => (),
_ => return Err(Error::IncorrectState)
}
}
let max_time = clock::get_ms() + timeout as u64;
let message = io.until_ok(|| {
if clock::get_ms() > max_time {
return Ok(None);
}
if subkernel_mutex.test_lock() {
return Err(());
}
let msg_len = unsafe { MESSAGE_QUEUE.len() };
for i in 0..msg_len {
let msg = unsafe { &MESSAGE_QUEUE[i] };
if msg.from_id == id {
return Ok(Some(unsafe { MESSAGE_QUEUE.remove(i) }));
}
}
match unsafe { SUBKERNELS.get(&id).unwrap().state } {
SubkernelState::Finished { .. } => return Ok(None),
_ => ()
}
Err(())
});
match message {
Ok(Some(message)) => Ok(message),
Ok(None) => {
if clock::get_ms() > max_time {
Err(Error::Timeout)
} else {
let _lock = subkernel_mutex.lock(io)?;
match unsafe { SUBKERNELS.get(&id).unwrap().state } {
SubkernelState::Finished { .. } => Err(Error::SubkernelFinished),
_ => Err(Error::IncorrectState)
}
}
}
Err(e) => Err(Error::SchedError(e)),
}
}
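// Polling notes: io.until_ok() keeps retrying the closure; Err(()) means no result yet,
// try again later; Ok(Some(..)) delivers a queued message for this id; Ok(None) ends the
// wait because the timeout expired or the subkernel finished without sending a message.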
pub fn message_send<'a>(io: &Io, aux_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, id: u32, count: u8, tag: &'a [u8], message: *const *const ()
) -> Result<(), Error> {
let mut writer = Cursor::new(Vec::new());
let _lock = subkernel_mutex.lock(io)?;
let destination = unsafe { SUBKERNELS.get(&id).unwrap().destination };
// reuse rpc code for sending arbitrary data
rpc::send_args(&mut writer, 0, tag, message, false)?;
// skip service tag, but overwrite first byte with tag count
let data = &mut writer.into_inner()[3..];
data[0] = count;
Ok(drtio::subkernel_send_message(
io, aux_mutex, routing_table, id, destination, data
)?)
}
}


@ -1,4 +1,4 @@
#![feature(lang_items, panic_info_message, const_btree_new, iter_advance_by)] #![feature(lang_items, panic_info_message, const_btree_new, iter_advance_by, never_type)]
#![no_std] #![no_std]
extern crate dyld; extern crate dyld;
@ -25,6 +25,8 @@ extern crate board_artiq;
extern crate logger_artiq; extern crate logger_artiq;
extern crate proto_artiq; extern crate proto_artiq;
extern crate riscv; extern crate riscv;
#[cfg(has_drtio)]
extern crate tar_no_std;
use alloc::collections::BTreeMap; use alloc::collections::BTreeMap;
use core::cell::RefCell; use core::cell::RefCell;
@ -189,6 +191,7 @@ fn startup() {
let aux_mutex = sched::Mutex::new(); let aux_mutex = sched::Mutex::new();
let ddma_mutex = sched::Mutex::new(); let ddma_mutex = sched::Mutex::new();
let subkernel_mutex = sched::Mutex::new();
let mut scheduler = sched::Scheduler::new(interface); let mut scheduler = sched::Scheduler::new(interface);
let io = scheduler.io(); let io = scheduler.io();
@ -197,7 +200,7 @@ fn startup() {
io.spawn(4096, dhcp::dhcp_thread); io.spawn(4096, dhcp::dhcp_thread);
} }
rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex); rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex);
io.spawn(4096, mgmt::thread); io.spawn(4096, mgmt::thread);
{ {
@ -205,7 +208,8 @@ fn startup() {
let drtio_routing_table = drtio_routing_table.clone(); let drtio_routing_table = drtio_routing_table.clone();
let up_destinations = up_destinations.clone(); let up_destinations = up_destinations.clone();
let ddma_mutex = ddma_mutex.clone(); let ddma_mutex = ddma_mutex.clone();
io.spawn(16384, move |io| { session::thread(io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex) }); let subkernel_mutex = subkernel_mutex.clone();
io.spawn(32768, move |io| { session::thread(io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex) });
} }
#[cfg(any(has_rtio_moninj, has_drtio))] #[cfg(any(has_rtio_moninj, has_drtio))]
{ {


@ -255,7 +255,7 @@ pub fn init() {
}; };
if switched == 0 { if switched == 0 {
info!("Switching sys clock, rebooting..."); info!("Switching sys clock, rebooting...");
clock::spin_us(500); // delay for clean UART log clock::spin_us(3000); // delay for clean UART log
unsafe { unsafe {
// clock switch and reboot will begin after TX is initialized // clock switch and reboot will begin after TX is initialized
// and TX will be initialized after this // and TX will be initialized after this


@ -1,6 +1,6 @@
use core::mem; use core::mem;
use alloc::{vec::Vec, string::String, collections::btree_map::BTreeMap}; use alloc::{vec::Vec, string::String, collections::btree_map::BTreeMap};
use sched::{Io, Mutex}; use sched::{Io, Mutex, Error as SchedError};
const ALIGNMENT: usize = 64; const ALIGNMENT: usize = 64;
@ -39,19 +39,48 @@ pub mod remote_dma {
} }
} }
#[derive(Fail, Debug)]
pub enum Error {
#[fail(display = "Timed out waiting for DMA results")]
Timeout,
#[fail(display = "DDMA trace is in incorrect state for the given operation")]
IncorrectState,
#[fail(display = "scheduler error: {}", _0)]
SchedError(#[cause] SchedError),
#[fail(display = "DRTIO error: {}", _0)]
DrtioError(#[cause] drtio::Error),
}
impl From<drtio::Error> for Error {
fn from(value: drtio::Error) -> Error {
match value {
drtio::Error::SchedError(x) => Error::SchedError(x),
x => Error::DrtioError(x),
}
}
}
impl From<SchedError> for Error {
fn from(value: SchedError) -> Error {
Error::SchedError(value)
}
}
// remote traces map. ID -> destination, trace pair // remote traces map. ID -> destination, trace pair
static mut TRACES: BTreeMap<u32, BTreeMap<u8, RemoteTrace>> = BTreeMap::new(); static mut TRACES: BTreeMap<u32, BTreeMap<u8, RemoteTrace>> = BTreeMap::new();
pub fn add_traces(io: &Io, ddma_mutex: &Mutex, id: u32, traces: BTreeMap<u8, Vec<u8>>) { pub fn add_traces(io: &Io, ddma_mutex: &Mutex, id: u32, traces: BTreeMap<u8, Vec<u8>>
let _lock = ddma_mutex.lock(io); ) -> Result<(), SchedError> {
let _lock = ddma_mutex.lock(io)?;
let mut trace_map: BTreeMap<u8, RemoteTrace> = BTreeMap::new(); let mut trace_map: BTreeMap<u8, RemoteTrace> = BTreeMap::new();
for (destination, trace) in traces { for (destination, trace) in traces {
trace_map.insert(destination, trace.into()); trace_map.insert(destination, trace.into());
} }
unsafe { TRACES.insert(id, trace_map); } unsafe { TRACES.insert(id, trace_map); }
Ok(())
} }
pub fn await_done(io: &Io, ddma_mutex: &Mutex, id: u32, timeout: u64) -> Result<RemoteState, &'static str> { pub fn await_done(io: &Io, ddma_mutex: &Mutex, id: u32, timeout: u64) -> Result<RemoteState, Error> {
let max_time = clock::get_ms() + timeout as u64; let max_time = clock::get_ms() + timeout as u64;
io.until(|| { io.until(|| {
if clock::get_ms() > max_time { if clock::get_ms() > max_time {
@ -70,15 +99,15 @@ pub mod remote_dma {
} }
} }
true true
}).unwrap(); })?;
if clock::get_ms() > max_time { if clock::get_ms() > max_time {
error!("Remote DMA await done timed out"); error!("Remote DMA await done timed out");
return Err("Timed out waiting for results."); return Err(Error::Timeout);
} }
// clear the internal state, and if there have been any errors, return one of them // clear the internal state, and if there have been any errors, return one of them
let mut playback_state: RemoteState = RemoteState::PlaybackEnded { error: 0, channel: 0, timestamp: 0 }; let mut playback_state: RemoteState = RemoteState::PlaybackEnded { error: 0, channel: 0, timestamp: 0 };
{ {
let _lock = ddma_mutex.lock(io).unwrap(); let _lock = ddma_mutex.lock(io)?;
let traces = unsafe { TRACES.get_mut(&id).unwrap() }; let traces = unsafe { TRACES.get_mut(&id).unwrap() };
for (_dest, trace) in traces { for (_dest, trace) in traces {
match trace.state { match trace.state {
@ -92,8 +121,8 @@ pub mod remote_dma {
} }
pub fn erase(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, pub fn erase(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex,
routing_table: &RoutingTable, id: u32) { routing_table: &RoutingTable, id: u32) -> Result<(), Error> {
let _lock = ddma_mutex.lock(io).unwrap(); let _lock = ddma_mutex.lock(io)?;
let destinations = unsafe { TRACES.get(&id).unwrap() }; let destinations = unsafe { TRACES.get(&id).unwrap() };
for destination in destinations.keys() { for destination in destinations.keys() {
match drtio::ddma_send_erase(io, aux_mutex, routing_table, id, *destination) { match drtio::ddma_send_erase(io, aux_mutex, routing_table, id, *destination) {
@ -102,42 +131,39 @@ pub mod remote_dma {
} }
} }
unsafe { TRACES.remove(&id); } unsafe { TRACES.remove(&id); }
Ok(())
} }
pub fn upload_traces(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, pub fn upload_traces(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex,
routing_table: &RoutingTable, id: u32) { routing_table: &RoutingTable, id: u32) -> Result<(), Error> {
let _lock = ddma_mutex.lock(io); let _lock = ddma_mutex.lock(io)?;
let traces = unsafe { TRACES.get_mut(&id).unwrap() }; let traces = unsafe { TRACES.get_mut(&id).unwrap() };
for (destination, mut trace) in traces { for (destination, mut trace) in traces {
match drtio::ddma_upload_trace(io, aux_mutex, routing_table, id, *destination, trace.get_trace()) drtio::ddma_upload_trace(io, aux_mutex, routing_table, id, *destination, trace.get_trace())?;
{ trace.state = RemoteState::Loaded;
Ok(_) => trace.state = RemoteState::Loaded,
Err(e) => error!("Error adding DMA trace on destination {}: {}", destination, e)
}
} }
Ok(())
} }
pub fn playback(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, pub fn playback(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex,
routing_table: &RoutingTable, id: u32, timestamp: u64) { routing_table: &RoutingTable, id: u32, timestamp: u64) -> Result<(), Error>{
// triggers playback on satellites // triggers playback on satellites
let destinations = unsafe { let destinations = unsafe {
let _lock = ddma_mutex.lock(io).unwrap(); let _lock = ddma_mutex.lock(io)?;
TRACES.get(&id).unwrap() }; TRACES.get(&id).unwrap() };
for (destination, trace) in destinations { for (destination, trace) in destinations {
{ {
// need to drop the lock before sending the playback request to avoid a deadlock // need to drop the lock before sending the playback request to avoid a deadlock
// if a PlaybackStatus is returned from another satellite in the meanwhile. // if a PlaybackStatus is returned from another satellite in the meanwhile.
let _lock = ddma_mutex.lock(io).unwrap(); let _lock = ddma_mutex.lock(io)?;
if trace.state != RemoteState::Loaded { if trace.state != RemoteState::Loaded {
error!("Destination {} not ready for DMA, state: {:?}", *destination, trace.state); error!("Destination {} not ready for DMA, state: {:?}", *destination, trace.state);
continue; return Err(Error::IncorrectState);
} }
} }
match drtio::ddma_send_playback(io, aux_mutex, routing_table, id, *destination, timestamp) { drtio::ddma_send_playback(io, aux_mutex, routing_table, id, *destination, timestamp)?;
Ok(_) => (),
Err(e) => error!("Error during remote DMA playback: {}", e)
}
} }
Ok(())
} }
pub fn playback_done(io: &Io, ddma_mutex: &Mutex, pub fn playback_done(io: &Io, ddma_mutex: &Mutex,
@ -172,10 +198,10 @@ pub mod remote_dma {
} }
} }
pub fn has_remote_traces(io: &Io, ddma_mutex: &Mutex, id: u32) -> bool { pub fn has_remote_traces(io: &Io, ddma_mutex: &Mutex, id: u32) -> Result<bool, Error> {
let _lock = ddma_mutex.lock(io).unwrap(); let _lock = ddma_mutex.lock(io)?;
let trace_list = unsafe { TRACES.get(&id).unwrap() }; let trace_list = unsafe { TRACES.get(&id).unwrap() };
!trace_list.is_empty() Ok(!trace_list.is_empty())
} }
} }
@ -225,11 +251,11 @@ impl Manager {
} }
pub fn record_stop(&mut self, duration: u64, _enable_ddma: bool, pub fn record_stop(&mut self, duration: u64, _enable_ddma: bool,
_io: &Io, _ddma_mutex: &Mutex) -> u32 { _io: &Io, _ddma_mutex: &Mutex) -> Result<u32, SchedError> {
let mut local_trace = Vec::new(); let mut local_trace = Vec::new();
let mut _remote_traces: BTreeMap<u8, Vec<u8>> = BTreeMap::new(); let mut _remote_traces: BTreeMap<u8, Vec<u8>> = BTreeMap::new();
if _enable_ddma & cfg!(has_drtio) { if _enable_ddma && cfg!(has_drtio) {
let mut trace = Vec::new(); let mut trace = Vec::new();
mem::swap(&mut self.recording_trace, &mut trace); mem::swap(&mut self.recording_trace, &mut trace);
trace.push(0); trace.push(0);
@ -284,9 +310,9 @@ impl Manager {
self.name_map.insert(name, id); self.name_map.insert(name, id);
#[cfg(has_drtio)] #[cfg(has_drtio)]
remote_dma::add_traces(_io, _ddma_mutex, id, _remote_traces); remote_dma::add_traces(_io, _ddma_mutex, id, _remote_traces)?;
id Ok(id)
} }
pub fn erase(&mut self, name: &str) { pub fn erase(&mut self, name: &str) {


@ -17,22 +17,57 @@ pub mod drtio {
use super::*; use super::*;
use alloc::vec::Vec; use alloc::vec::Vec;
use drtioaux; use drtioaux;
use proto_artiq::drtioaux_proto::DMA_TRACE_MAX_SIZE; use proto_artiq::drtioaux_proto::{MASTER_PAYLOAD_MAX_SIZE, PayloadStatus};
use rtio_dma::remote_dma; use rtio_dma::remote_dma;
#[cfg(has_rtio_analyzer)] #[cfg(has_rtio_analyzer)]
use analyzer::remote_analyzer::RemoteBuffer; use analyzer::remote_analyzer::RemoteBuffer;
use kernel::subkernel;
use sched::Error as SchedError;
#[derive(Fail, Debug)]
pub enum Error {
#[fail(display = "timed out")]
Timeout,
#[fail(display = "unexpected packet: {:?}", _0)]
UnexpectedPacket(drtioaux::Packet),
#[fail(display = "aux packet error")]
AuxError,
#[fail(display = "link down")]
LinkDown,
#[fail(display = "unexpected reply")]
UnexpectedReply,
#[fail(display = "error adding DMA trace on satellite #{}", _0)]
DmaAddTraceFail(u8),
#[fail(display = "error erasing DMA trace on satellite #{}", _0)]
DmaEraseFail(u8),
#[fail(display = "error playing back DMA trace on satellite #{}", _0)]
DmaPlaybackFail(u8),
#[fail(display = "error adding subkernel on satellite #{}", _0)]
SubkernelAddFail(u8),
#[fail(display = "error on subkernel run request on satellite #{}", _0)]
SubkernelRunFail(u8),
#[fail(display = "sched error: {}", _0)]
SchedError(#[cause] SchedError),
}
impl From<SchedError> for Error {
fn from(value: SchedError) -> Error {
Error::SchedError(value)
}
}
pub fn startup(io: &Io, aux_mutex: &Mutex, pub fn startup(io: &Io, aux_mutex: &Mutex,
routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>, routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>, up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex) { ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
let aux_mutex = aux_mutex.clone(); let aux_mutex = aux_mutex.clone();
let routing_table = routing_table.clone(); let routing_table = routing_table.clone();
let up_destinations = up_destinations.clone(); let up_destinations = up_destinations.clone();
let ddma_mutex = ddma_mutex.clone(); let ddma_mutex = ddma_mutex.clone();
let subkernel_mutex = subkernel_mutex.clone();
io.spawn(8192, move |io| { io.spawn(8192, move |io| {
let routing_table = routing_table.borrow(); let routing_table = routing_table.borrow();
link_thread(io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex); link_thread(io, &aux_mutex, &routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex);
}); });
} }
@ -43,44 +78,66 @@ pub mod drtio {
} }
} }
fn recv_aux_timeout(io: &Io, linkno: u8, timeout: u32) -> Result<drtioaux::Packet, &'static str> { fn recv_aux_timeout(io: &Io, linkno: u8, timeout: u32) -> Result<drtioaux::Packet, Error> {
let max_time = clock::get_ms() + timeout as u64; let max_time = clock::get_ms() + timeout as u64;
loop { loop {
if !link_rx_up(linkno) { if !link_rx_up(linkno) {
return Err("link went down"); return Err(Error::LinkDown);
} }
if clock::get_ms() > max_time { if clock::get_ms() > max_time {
return Err("timeout"); return Err(Error::Timeout);
} }
match drtioaux::recv(linkno) { match drtioaux::recv(linkno) {
Ok(Some(packet)) => return Ok(packet), Ok(Some(packet)) => return Ok(packet),
Ok(None) => (), Ok(None) => (),
Err(_) => return Err("aux packet error") Err(_) => return Err(Error::AuxError)
} }
io.relinquish().unwrap(); io.relinquish()?;
} }
} }
fn process_async_packets(io: &Io, ddma_mutex: &Mutex, packet: drtioaux::Packet fn process_async_packets(io: &Io, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, linkno: u8,
) -> Option<drtioaux::Packet> { packet: drtioaux::Packet) -> Option<drtioaux::Packet> {
// returns None if an async packet has been consumed // returns None if an async packet has been consumed
match packet { match packet {
drtioaux::Packet::DmaPlaybackStatus { id, destination, error, channel, timestamp } => { drtioaux::Packet::DmaPlaybackStatus { id, destination, error, channel, timestamp } => {
remote_dma::playback_done(io, ddma_mutex, id, destination, error, channel, timestamp); remote_dma::playback_done(io, ddma_mutex, id, destination, error, channel, timestamp);
None None
}, },
drtioaux::Packet::SubkernelFinished { id, with_exception } => {
subkernel::subkernel_finished(io, subkernel_mutex, id, with_exception);
None
},
drtioaux::Packet::SubkernelMessage { id, destination: from, status, length, data } => {
subkernel::message_handle_incoming(io, subkernel_mutex, id, status, length as usize, &data);
// acknowledge receiving part of the message
drtioaux::send(linkno,
&drtioaux::Packet::SubkernelMessageAck { destination: from }
).unwrap();
None
}
other => Some(other) other => Some(other)
} }
} }
pub fn aux_transact(io: &Io, aux_mutex: &Mutex, linkno: u8, request: &drtioaux::Packet pub fn aux_transact(io: &Io, aux_mutex: &Mutex, linkno: u8, request: &drtioaux::Packet
) -> Result<drtioaux::Packet, &'static str> { ) -> Result<drtioaux::Packet, Error> {
let _lock = aux_mutex.lock(io).unwrap(); let _lock = aux_mutex.lock(io)?;
drtioaux::send(linkno, request).unwrap(); drtioaux::send(linkno, request).unwrap();
let reply = recv_aux_timeout(io, linkno, 200)?; let reply = recv_aux_timeout(io, linkno, 200)?;
Ok(reply) Ok(reply)
} }
pub fn clear_buffers(io: &Io, aux_mutex: &Mutex) {
let _lock = aux_mutex.lock(io).unwrap();
for linkno in 0..(csr::DRTIO.len() as u8) {
if !link_rx_up(linkno) {
continue;
}
let _ = recv_aux_timeout(io, linkno, 200);
}
}
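// clear_buffers drains at most one pending aux packet per up link (200 ms timeout each)
// while holding the aux mutex - presumably so that stale, unsolicited replies left over
// from an earlier exchange are not mistaken for the response to the next request.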
fn ping_remote(io: &Io, aux_mutex: &Mutex, linkno: u8) -> u32 { fn ping_remote(io: &Io, aux_mutex: &Mutex, linkno: u8) -> u32 {
let mut count = 0; let mut count = 0;
loop { loop {
@ -110,7 +167,7 @@ pub mod drtio {
} }
} }
fn sync_tsc(io: &Io, aux_mutex: &Mutex, linkno: u8) -> Result<(), &'static str> { fn sync_tsc(io: &Io, aux_mutex: &Mutex, linkno: u8) -> Result<(), Error> {
let _lock = aux_mutex.lock(io).unwrap(); let _lock = aux_mutex.lock(io).unwrap();
unsafe { unsafe {
@ -123,32 +180,32 @@ pub mod drtio {
if reply == drtioaux::Packet::TSCAck { if reply == drtioaux::Packet::TSCAck {
return Ok(()); return Ok(());
} else { } else {
return Err("unexpected reply"); return Err(Error::UnexpectedReply);
} }
} }
fn load_routing_table(io: &Io, aux_mutex: &Mutex, fn load_routing_table(io: &Io, aux_mutex: &Mutex,
linkno: u8, routing_table: &drtio_routing::RoutingTable) -> Result<(), &'static str> { linkno: u8, routing_table: &drtio_routing::RoutingTable) -> Result<(), Error> {
for i in 0..drtio_routing::DEST_COUNT { for i in 0..drtio_routing::DEST_COUNT {
let reply = aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::RoutingSetPath { let reply = aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::RoutingSetPath {
destination: i as u8, destination: i as u8,
hops: routing_table.0[i] hops: routing_table.0[i]
})?; })?;
if reply != drtioaux::Packet::RoutingAck { if reply != drtioaux::Packet::RoutingAck {
return Err("unexpected reply"); return Err(Error::UnexpectedReply);
} }
} }
Ok(()) Ok(())
} }
fn set_rank(io: &Io, aux_mutex: &Mutex, fn set_rank(io: &Io, aux_mutex: &Mutex,
linkno: u8, rank: u8) -> Result<(), &'static str> { linkno: u8, rank: u8) -> Result<(), Error> {
let reply = aux_transact(io, aux_mutex, linkno, let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::RoutingSetRank { &drtioaux::Packet::RoutingSetRank {
rank: rank rank: rank
})?; })?;
if reply != drtioaux::Packet::RoutingAck { if reply != drtioaux::Packet::RoutingAck {
return Err("unexpected reply"); return Err(Error::UnexpectedReply);
} }
Ok(()) Ok(())
} }
@ -166,13 +223,14 @@ pub mod drtio {
} }
} }
fn process_unsolicited_aux(io: &Io, aux_mutex: &Mutex, linkno: u8, ddma_mutex: &Mutex) { fn process_unsolicited_aux(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, linkno: u8) {
let _lock = aux_mutex.lock(io).unwrap(); let _lock = aux_mutex.lock(io).unwrap();
match drtioaux::recv(linkno) { match drtioaux::recv(linkno) {
Ok(Some(drtioaux::Packet::DmaPlaybackStatus { id, destination, error, channel, timestamp })) => { Ok(Some(packet)) => {
remote_dma::playback_done(io, ddma_mutex, id, destination, error, channel, timestamp); if let Some(packet) = process_async_packets(io, ddma_mutex, subkernel_mutex, linkno, packet) {
warn!("[LINK#{}] unsolicited aux packet: {:?}", linkno, packet);
}
} }
Ok(Some(packet)) => warn!("[LINK#{}] unsolicited aux packet: {:?}", linkno, packet),
Ok(None) => (), Ok(None) => (),
Err(_) => warn!("[LINK#{}] aux packet error", linkno) Err(_) => warn!("[LINK#{}] aux packet error", linkno)
} }
@ -221,7 +279,7 @@ pub mod drtio {
fn destination_survey(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, fn destination_survey(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
up_links: &[bool], up_links: &[bool],
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>, up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex) { ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
for destination in 0..drtio_routing::DEST_COUNT { for destination in 0..drtio_routing::DEST_COUNT {
let hop = routing_table.0[destination][0]; let hop = routing_table.0[destination][0];
let destination = destination as u8; let destination = destination as u8;
@ -241,11 +299,12 @@ pub mod drtio {
destination: destination destination: destination
}); });
if let Ok(reply) = reply { if let Ok(reply) = reply {
let reply = process_async_packets(io, ddma_mutex, reply); let reply = process_async_packets(io, ddma_mutex, subkernel_mutex, linkno, reply);
match reply { match reply {
Some(drtioaux::Packet::DestinationDownReply) => { Some(drtioaux::Packet::DestinationDownReply) => {
destination_set_up(routing_table, up_destinations, destination, false); destination_set_up(routing_table, up_destinations, destination, false);
remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false); remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false);
subkernel::destination_changed(io, aux_mutex, subkernel_mutex, routing_table, destination, false);
} }
Some(drtioaux::Packet::DestinationOkReply) => (), Some(drtioaux::Packet::DestinationOkReply) => (),
Some(drtioaux::Packet::DestinationSequenceErrorReply { channel }) => { Some(drtioaux::Packet::DestinationSequenceErrorReply { channel }) => {
@ -269,13 +328,14 @@ pub mod drtio {
} }
} }
} else { } else {
error!("[DEST#{}] communication failed ({})", destination, reply.unwrap_err()); error!("[DEST#{}] communication failed ({:?})", destination, reply.unwrap_err());
} }
break; break;
} }
} else { } else {
destination_set_up(routing_table, up_destinations, destination, false); destination_set_up(routing_table, up_destinations, destination, false);
remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false); remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, false);
subkernel::destination_changed(io, aux_mutex, subkernel_mutex, routing_table, destination, false);
} }
} else { } else {
if up_links[linkno as usize] { if up_links[linkno as usize] {
@ -289,9 +349,10 @@ pub mod drtio {
destination_set_up(routing_table, up_destinations, destination, true); destination_set_up(routing_table, up_destinations, destination, true);
init_buffer_space(destination as u8, linkno); init_buffer_space(destination as u8, linkno);
remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, true); remote_dma::destination_changed(io, aux_mutex, ddma_mutex, routing_table, destination, true);
subkernel::destination_changed(io, aux_mutex, subkernel_mutex, routing_table, destination, true);
}, },
Ok(packet) => error!("[DEST#{}] received unexpected aux packet: {:?}", destination, packet), Ok(packet) => error!("[DEST#{}] received unexpected aux packet: {:?}", destination, packet),
Err(e) => error!("[DEST#{}] communication failed ({})", destination, e) Err(e) => error!("[DEST#{}] communication failed ({:?})", destination, e)
} }
} }
} }
@ -302,7 +363,7 @@ pub mod drtio {
pub fn link_thread(io: Io, aux_mutex: &Mutex, pub fn link_thread(io: Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable, routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>, up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex) { ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
let mut up_links = [false; csr::DRTIO.len()]; let mut up_links = [false; csr::DRTIO.len()];
loop { loop {
for linkno in 0..csr::DRTIO.len() { for linkno in 0..csr::DRTIO.len() {
@ -310,7 +371,7 @@ pub mod drtio {
if up_links[linkno as usize] { if up_links[linkno as usize] {
/* link was previously up */ /* link was previously up */
if link_rx_up(linkno) { if link_rx_up(linkno) {
process_unsolicited_aux(&io, aux_mutex, linkno, ddma_mutex); process_unsolicited_aux(&io, aux_mutex, ddma_mutex, subkernel_mutex, linkno);
process_local_errors(linkno); process_local_errors(linkno);
} else { } else {
info!("[LINK#{}] link is down", linkno); info!("[LINK#{}] link is down", linkno);
@ -325,13 +386,13 @@ pub mod drtio {
info!("[LINK#{}] remote replied after {} packets", linkno, ping_count); info!("[LINK#{}] remote replied after {} packets", linkno, ping_count);
up_links[linkno as usize] = true; up_links[linkno as usize] = true;
if let Err(e) = sync_tsc(&io, aux_mutex, linkno) { if let Err(e) = sync_tsc(&io, aux_mutex, linkno) {
error!("[LINK#{}] failed to sync TSC ({})", linkno, e); error!("[LINK#{}] failed to sync TSC ({:?})", linkno, e);
} }
if let Err(e) = load_routing_table(&io, aux_mutex, linkno, routing_table) { if let Err(e) = load_routing_table(&io, aux_mutex, linkno, routing_table) {
error!("[LINK#{}] failed to load routing table ({})", linkno, e); error!("[LINK#{}] failed to load routing table ({:?})", linkno, e);
} }
if let Err(e) = set_rank(&io, aux_mutex, linkno, 1) { if let Err(e) = set_rank(&io, aux_mutex, linkno, 1) {
error!("[LINK#{}] failed to set rank ({})", linkno, e); error!("[LINK#{}] failed to set rank ({:?})", linkno, e);
} }
info!("[LINK#{}] link initialization completed", linkno); info!("[LINK#{}] link initialization completed", linkno);
} else { } else {
@ -340,7 +401,7 @@ pub mod drtio {
} }
} }
} }
destination_survey(&io, aux_mutex, routing_table, &up_links, up_destinations, ddma_mutex); destination_survey(&io, aux_mutex, routing_table, &up_links, up_destinations, ddma_mutex, subkernel_mutex);
io.sleep(200).unwrap(); io.sleep(200).unwrap();
} }
} }
@ -366,75 +427,79 @@ pub mod drtio {
match reply { match reply {
Ok(drtioaux::Packet::ResetAck) => (), Ok(drtioaux::Packet::ResetAck) => (),
Ok(_) => error!("[LINK#{}] reset failed, received unexpected aux packet", linkno), Ok(_) => error!("[LINK#{}] reset failed, received unexpected aux packet", linkno),
Err(e) => error!("[LINK#{}] reset failed, aux packet error ({})", linkno, e) Err(e) => error!("[LINK#{}] reset failed, aux packet error ({:?})", linkno, e)
} }
} }
} }
} }
pub fn ddma_upload_trace(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, fn partition_data<F>(data: &[u8], send_f: F) -> Result<(), Error>
id: u32, destination: u8, trace: &Vec<u8>) -> Result<(), &'static str> { where F: Fn(&[u8; MASTER_PAYLOAD_MAX_SIZE], PayloadStatus, usize) -> Result<(), Error> {
let mut i = 0;
while i < data.len() {
let mut slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
let len: usize = if i + MASTER_PAYLOAD_MAX_SIZE < data.len() { MASTER_PAYLOAD_MAX_SIZE } else { data.len() - i } as usize;
let first = i == 0;
let last = i + len == data.len();
let status = PayloadStatus::from_status(first, last);
slice[..len].clone_from_slice(&data[i..i+len]);
i += len;
send_f(&slice, status, len)?;
}
Ok(())
}
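// Worked example: MASTER_PAYLOAD_MAX_SIZE is 499 bytes
// (512 - 4 CRC - 1 packet ID - 1 last - 2 length - 1 destination - 4 ID), so a
// 1200-byte DMA trace goes out as three chunks of 499 (First), 499 (Middle)
// and 202 (Last) bytes, reassembled on the satellite in that order.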
pub fn ddma_upload_trace(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable,
id: u32, destination: u8, trace: &[u8]) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1; let linkno = routing_table.0[destination as usize][0] - 1;
let mut i = 0; partition_data(trace, |slice, status, len: usize| {
while i < trace.len() {
let mut trace_slice: [u8; DMA_TRACE_MAX_SIZE] = [0; DMA_TRACE_MAX_SIZE];
let len: usize = if i + DMA_TRACE_MAX_SIZE < trace.len() { DMA_TRACE_MAX_SIZE } else { trace.len() - i } as usize;
let last = i + len == trace.len();
trace_slice[..len].clone_from_slice(&trace[i..i+len]);
i += len;
let reply = aux_transact(io, aux_mutex, linkno, let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::DmaAddTraceRequest { &drtioaux::Packet::DmaAddTraceRequest {
id: id, destination: destination, last: last, length: len as u16, trace: trace_slice}); id: id, destination: destination, status: status, length: len as u16, trace: *slice})?;
match reply { match reply {
Ok(drtioaux::Packet::DmaAddTraceReply { succeeded: true }) => (), drtioaux::Packet::DmaAddTraceReply { succeeded: true } => Ok(()),
Ok(drtioaux::Packet::DmaAddTraceReply { succeeded: false }) => { drtioaux::Packet::DmaAddTraceReply { succeeded: false } => Err(Error::DmaAddTraceFail(destination)),
return Err("error adding trace on satellite"); }, packet => Err(Error::UnexpectedPacket(packet)),
Ok(_) => { return Err("adding DMA trace failed, unexpected aux packet"); },
Err(_) => { return Err("adding DMA trace failed, aux error"); }
} }
} })
Ok(())
} }
pub fn ddma_send_erase(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable, pub fn ddma_send_erase(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
id: u32, destination: u8) -> Result<(), &'static str> { id: u32, destination: u8) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1; let linkno = routing_table.0[destination as usize][0] - 1;
let reply = aux_transact(io, aux_mutex, linkno, let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::DmaRemoveTraceRequest { id: id, destination: destination }); &drtioaux::Packet::DmaRemoveTraceRequest { id: id, destination: destination })?;
match reply { match reply {
Ok(drtioaux::Packet::DmaRemoveTraceReply { succeeded: true }) => Ok(()), drtioaux::Packet::DmaRemoveTraceReply { succeeded: true } => Ok(()),
Ok(drtioaux::Packet::DmaRemoveTraceReply { succeeded: false }) => Err("satellite DMA erase error"), drtioaux::Packet::DmaRemoveTraceReply { succeeded: false } => Err(Error::DmaEraseFail(destination)),
Ok(_) => Err("erasing trace failed, unexpected aux packet"), packet => Err(Error::UnexpectedPacket(packet)),
Err(_) => Err("erasing trace failed, aux error")
} }
} }
pub fn ddma_send_playback(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
id: u32, destination: u8, timestamp: u64) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::DmaPlaybackRequest{ id: id, destination: destination, timestamp: timestamp })?;
match reply {
drtioaux::Packet::DmaPlaybackReply { succeeded: true } => Ok(()),
drtioaux::Packet::DmaPlaybackReply { succeeded: false } =>
Err(Error::DmaPlaybackFail(destination)),
packet => Err(Error::UnexpectedPacket(packet)),
}
}
#[cfg(has_rtio_analyzer)]
fn analyzer_get_data(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
destination: u8) -> Result<RemoteBuffer, Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::AnalyzerHeaderRequest { destination: destination })?;
let (sent, total, overflow) = match reply {
drtioaux::Packet::AnalyzerHeader { sent_bytes, total_byte_count, overflow_occurred } =>
(sent_bytes, total_byte_count, overflow_occurred),
packet => return Err(Error::UnexpectedPacket(packet)),
};
let mut remote_data: Vec<u8> = Vec::new();
@ -442,14 +507,13 @@ pub mod drtio {
let mut last_packet = false;
while !last_packet {
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::AnalyzerDataRequest { destination: destination })?;
match reply {
drtioaux::Packet::AnalyzerData { last, length, data } => {
last_packet = last;
remote_data.extend(&data[0..length as usize]);
},
packet => return Err(Error::UnexpectedPacket(packet)),
}
}
}
@ -465,7 +529,7 @@ pub mod drtio {
#[cfg(has_rtio_analyzer)]
pub fn analyzer_query(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>
) -> Result<Vec<RemoteBuffer>, Error> {
let mut remote_buffers: Vec<RemoteBuffer> = Vec::new();
for i in 1..drtio_routing::DEST_COUNT {
if destination_up(up_destinations, i as u8) {
@ -475,6 +539,70 @@ pub mod drtio {
Ok(remote_buffers)
}
pub fn subkernel_upload(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
id: u32, destination: u8, data: &Vec<u8>) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
partition_data(data, |slice, status, len: usize| {
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::SubkernelAddDataRequest {
id: id, destination: destination, status: status, length: len as u16, data: *slice})?;
match reply {
drtioaux::Packet::SubkernelAddDataReply { succeeded: true } => Ok(()),
drtioaux::Packet::SubkernelAddDataReply { succeeded: false } =>
Err(Error::SubkernelAddFail(destination)),
packet => Err(Error::UnexpectedPacket(packet)),
}
})
}
pub fn subkernel_load(io: &Io, aux_mutex: &Mutex, routing_table: &drtio_routing::RoutingTable,
id: u32, destination: u8, run: bool) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::SubkernelLoadRunRequest{ id: id, destination: destination, run: run })?;
match reply {
drtioaux::Packet::SubkernelLoadRunReply { succeeded: true } => Ok(()),
drtioaux::Packet::SubkernelLoadRunReply { succeeded: false } =>
Err(Error::SubkernelRunFail(destination)),
packet => Err(Error::UnexpectedPacket(packet)),
}
}
pub fn subkernel_retrieve_exception(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable, destination: u8
) -> Result<Vec<u8>, Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
let mut remote_data: Vec<u8> = Vec::new();
loop {
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::SubkernelExceptionRequest { destination: destination })?;
match reply {
drtioaux::Packet::SubkernelException { last, length, data } => {
remote_data.extend(&data[0..length as usize]);
if last {
return Ok(remote_data);
}
},
packet => return Err(Error::UnexpectedPacket(packet)),
}
}
}
pub fn subkernel_send_message(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable, id: u32, destination: u8, message: &[u8]
) -> Result<(), Error> {
let linkno = routing_table.0[destination as usize][0] - 1;
partition_data(message, |slice, status, len: usize| {
let reply = aux_transact(io, aux_mutex, linkno,
&drtioaux::Packet::SubkernelMessage {
destination: destination, id: id, status: status, length: len as u16, data: *slice})?;
match reply {
drtioaux::Packet::SubkernelMessageAck { .. } => Ok(()),
packet => Err(Error::UnexpectedPacket(packet)),
}
})
}
}
#[cfg(not(has_drtio))]
@ -484,7 +612,7 @@ pub mod drtio {
pub fn startup(_io: &Io, _aux_mutex: &Mutex,
_routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>,
_up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
_ddma_mutex: &Mutex, _subkernel_mutex: &Mutex) {}
pub fn reset(_io: &Io, _aux_mutex: &Mutex) {}
}
@ -548,9 +676,9 @@ fn read_device_map() -> DeviceMap {
pub fn startup(io: &Io, aux_mutex: &Mutex,
routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
set_device_map(read_device_map());
drtio::startup(io, aux_mutex, routing_table, up_destinations, ddma_mutex, subkernel_mutex);
unsafe {
csr::rtio_core::reset_phy_write(1);
}

View File

@ -1,9 +1,14 @@
use core::{mem, str, cell::{Cell, RefCell}, fmt::Write as FmtWrite};
use alloc::{vec::Vec, string::{String, ToString}};
use byteorder::{ByteOrder, NativeEndian};
use cslice::CSlice;
#[cfg(has_drtio)]
use tar_no_std::TarArchiveRef;
use dyld::elf;
use io::{Read, Write, Error as IoError};
#[cfg(has_drtio)]
use io::Cursor;
use board_misoc::{ident, cache, config};
use {mailbox, rpc_queue, kernel};
use urc::Urc;
@ -12,6 +17,10 @@ use rtio_clocking;
use rtio_dma::Manager as DmaManager;
#[cfg(has_drtio)]
use rtio_dma::remote_dma;
#[cfg(has_drtio)]
use kernel::{subkernel, subkernel::Error as SubkernelError};
#[cfg(has_drtio)]
use rtio_mgt::drtio;
use rtio_mgt::get_async_errors;
use cache::Cache;
use kern_hwreq;
@ -33,6 +42,20 @@ pub enum Error<T> {
ClockFailure,
#[fail(display = "protocol error: {}", _0)]
Protocol(#[cause] host::Error<T>),
#[fail(display = "subkernel io error")]
SubkernelIoError,
#[cfg(has_drtio)]
#[fail(display = "DDMA error: {}", _0)]
Ddma(#[cause] remote_dma::Error),
#[cfg(has_drtio)]
#[fail(display = "subkernel destination is down")]
DestinationDown,
#[cfg(has_drtio)]
#[fail(display = "subkernel error: {}", _0)]
Subkernel(#[cause] SubkernelError),
#[cfg(has_drtio)]
#[fail(display = "drtio aux error: {}", _0)]
DrtioAux(#[cause] drtio::Error),
#[fail(display = "{}", _0)]
Unexpected(String),
}
@ -43,6 +66,16 @@ impl<T> From<host::Error<T>> for Error<T> {
}
}
#[cfg(has_drtio)]
impl From<drtio::Error> for Error<SchedError> {
fn from(value: drtio::Error) -> Error<SchedError> {
match value {
drtio::Error::SchedError(x) => Error::from(x),
x => Error::DrtioAux(x),
}
}
}
impl From<SchedError> for Error<SchedError> {
fn from(value: SchedError) -> Error<SchedError> {
Error::Protocol(host::Error::Io(IoError::Other(value)))
@ -55,10 +88,57 @@ impl From<IoError<SchedError>> for Error<SchedError> {
}
}
impl From<&str> for Error<SchedError> {
fn from(value: &str) -> Error<SchedError> {
Error::Unexpected(value.to_string())
}
}
impl From<io::Error<!>> for Error<SchedError> {
fn from(_value: io::Error<!>) -> Error<SchedError> {
Error::SubkernelIoError
}
}
#[cfg(has_drtio)]
impl From<SubkernelError> for Error<SchedError> {
fn from(value: SubkernelError) -> Error<SchedError> {
match value {
SubkernelError::SchedError(x) => Error::from(x),
SubkernelError::DrtioError(x) => Error::from(x),
x => Error::Subkernel(x),
}
}
}
#[cfg(has_drtio)]
impl From<remote_dma::Error> for Error<SchedError> {
fn from(value: remote_dma::Error) -> Error<SchedError> {
match value {
remote_dma::Error::SchedError(x) => Error::from(x),
remote_dma::Error::DrtioError(x) => Error::from(x),
x => Error::Ddma(x),
}
}
}
macro_rules! unexpected {
($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*))));
}
#[cfg(has_drtio)]
macro_rules! propagate_subkernel_exception {
( $exception:ident, $stream:ident ) => {
error!("Exception in subkernel");
match $stream {
None => return Ok(true),
Some(ref mut $stream) => {
$stream.write_all($exception)?;
}
}
}
}
// Persistent state
#[derive(Debug)]
struct Congress {
@ -131,6 +211,8 @@ fn host_read<R>(reader: &mut R) -> Result<host::Request, Error<R::ReadError>>
let request = host::Request::read_from(reader)?;
match &request {
&host::Request::LoadKernel(_) => debug!("comm<-host LoadLibrary(...)"),
&host::Request::UploadSubkernel { id, destination, kernel: _} => debug!(
"comm<-host UploadSubkernel(id: {}, destination: {}, ...)", id, destination),
_ => debug!("comm<-host {:?}", request)
}
Ok(request)
@ -233,8 +315,65 @@ fn kern_run(session: &mut Session) -> Result<(), Error<SchedError>> {
kern_acknowledge()
}
fn process_flash_kernel(io: &Io, _aux_mutex: &Mutex, _subkernel_mutex: &Mutex,
_routing_table: &drtio_routing::RoutingTable,
_up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
session: &mut Session, kernel: &[u8]
) -> Result<(), Error<SchedError>> {
// handle ELF and TAR files
if kernel[0] == elf::ELFMAG0 && kernel[1] == elf::ELFMAG1 &&
kernel[2] == elf::ELFMAG2 && kernel[3] == elf::ELFMAG3 {
// assume ELF file, proceed as before
unsafe {
// make a copy as kernel CPU cannot read SPI directly
kern_load(io, session, Vec::from(kernel).as_ref())
}
} else {
#[cfg(has_drtio)]
{
let archive = TarArchiveRef::new(kernel);
let entries = archive.entries();
let mut main_lib: Option<&[u8]> = None;
for entry in entries {
if entry.filename().as_str() == "main.elf" {
main_lib = Some(entry.data());
} else {
// subkernel filename must be in format:
// "<subkernel id> <destination>.elf"
let filename = entry.filename();
let mut iter = filename.as_str().split_whitespace();
let sid: u32 = iter.next().unwrap()
.parse().unwrap();
let dest: u8 = iter.next().unwrap()
.strip_suffix(".elf").unwrap()
.parse().unwrap();
let up = {
let up_destinations = _up_destinations.borrow();
up_destinations[dest as usize]
};
if up {
let subkernel_lib = entry.data().to_vec();
subkernel::add_subkernel(io, _subkernel_mutex, sid, dest, subkernel_lib)?;
subkernel::upload(io, _aux_mutex, _subkernel_mutex, _routing_table, sid)?;
} else {
return Err(Error::DestinationDown);
}
}
}
unsafe {
kern_load(io, session, Vec::from(main_lib.unwrap()).as_ref())
}
}
#[cfg(not(has_drtio))]
{
unexpected!("multi-kernel libraries are not supported in standalone systems")
}
}
}
fn process_host_message(io: &Io, _aux_mutex: &Mutex, _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex,
_routing_table: &drtio_routing::RoutingTable, stream: &mut TcpStream,
session: &mut Session) -> Result<(), Error<SchedError>> {
match host_read(stream)? {
host::Request::SystemInfo => {
@ -245,7 +384,7 @@ fn process_host_message(io: &Io,
session.congress.finished_cleanly.set(true)
}
host::Request::LoadKernel(kernel) => {
match unsafe { kern_load(io, session, &kernel) } {
Ok(()) => host_write(stream, host::Reply::LoadCompleted)?,
Err(error) => {
@ -254,7 +393,8 @@ fn process_host_message(io: &Io,
host_write(stream, host::Reply::LoadFailed(&description))?;
kern_acknowledge()?;
}
}
},
host::Request::RunKernel =>
match kern_run(session) {
Ok(()) => (),
@ -323,6 +463,23 @@ fn process_host_message(io: &Io,
session.kernel_state = KernelState::Running
}
host::Request::UploadSubkernel { id: _id, destination: _dest, kernel: _kernel } => {
#[cfg(has_drtio)]
{
subkernel::add_subkernel(io, _subkernel_mutex, _id, _dest, _kernel)?;
match subkernel::upload(io, _aux_mutex, _subkernel_mutex, _routing_table, _id) {
Ok(_) => host_write(stream, host::Reply::LoadCompleted)?,
Err(error) => {
let mut description = String::new();
write!(&mut description, "{}", error).unwrap();
host_write(stream, host::Reply::LoadFailed(&description))?
}
}
}
#[cfg(not(has_drtio))]
host_write(stream, host::Reply::LoadFailed("No DRTIO on this system, subkernels are not supported"))?
}
}
Ok(())
@ -331,7 +488,7 @@ fn process_host_message(io: &Io,
fn process_kern_message(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, _subkernel_mutex: &Mutex, mut stream: Option<&mut TcpStream>,
session: &mut Session) -> Result<bool, Error<SchedError>> {
kern_recv_notrace(io, |request| {
match (request, session.kernel_state) {
@ -373,7 +530,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
if let Some(_id) = session.congress.dma_manager.record_start(name) {
// replace the record
#[cfg(has_drtio)]
remote_dma::erase(io, aux_mutex, ddma_mutex, routing_table, _id)?;
}
kern_acknowledge()
}
@ -382,10 +539,10 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
kern_acknowledge()
}
&kern::DmaRecordStop { duration, enable_ddma } => {
let _id = session.congress.dma_manager.record_stop(duration, enable_ddma, io, ddma_mutex)?;
#[cfg(has_drtio)]
if enable_ddma {
remote_dma::upload_traces(io, aux_mutex, ddma_mutex, routing_table, _id)?;
}
cache::flush_l2_cache();
kern_acknowledge()
@ -393,7 +550,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
&kern::DmaEraseRequest { name } => {
#[cfg(has_drtio)]
if let Some(id) = session.congress.dma_manager.get_id(name) {
remote_dma::erase(io, aux_mutex, ddma_mutex, routing_table, *id)?;
}
session.congress.dma_manager.erase(name);
kern_acknowledge()
@ -402,7 +559,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
session.congress.dma_manager.with_trace(name, |trace, duration| {
#[cfg(has_drtio)]
let uses_ddma = match trace {
Some(trace) => remote_dma::has_remote_traces(io, aux_mutex, trace.as_ptr() as u32)?,
None => false
};
#[cfg(not(has_drtio))]
@ -416,7 +573,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
}
&kern::DmaStartRemoteRequest { id: _id, timestamp: _timestamp } => {
#[cfg(has_drtio)]
remote_dma::playback(io, aux_mutex, ddma_mutex, routing_table, _id as u32, _timestamp as u64)?;
kern_acknowledge()
}
&kern::DmaAwaitRemoteRequest { id: _id } => {
@ -441,7 +598,7 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
None => unexpected!("unexpected RPC in flash kernel"),
Some(ref mut stream) => {
host_write(stream, host::Reply::RpcRequest { async: async })?;
rpc::send_args(stream, service, tag, data, true)?;
if !async {
session.kernel_state = KernelState::RpcWait
}
@ -510,6 +667,113 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
}
}
}
#[cfg(has_drtio)]
&kern::SubkernelLoadRunRequest { id, run } => {
let succeeded = match subkernel::load(
io, aux_mutex, _subkernel_mutex, routing_table, id, run) {
Ok(()) => true,
Err(e) => { error!("Error loading subkernel: {}", e); false }
};
kern_send(io, &kern::SubkernelLoadRunReply { succeeded: succeeded })
}
#[cfg(has_drtio)]
&kern::SubkernelAwaitFinishRequest{ id, timeout } => {
let res = subkernel::await_finish(io, aux_mutex, _subkernel_mutex, routing_table,
id, timeout);
let status = match res {
Ok(ref res) => {
if res.comm_lost {
kern::SubkernelStatus::CommLost
} else if let Some(exception) = &res.exception {
propagate_subkernel_exception!(exception, stream);
// will not be called after exception is served
kern::SubkernelStatus::OtherError
} else {
kern::SubkernelStatus::NoError
}
},
Err(SubkernelError::Timeout) => kern::SubkernelStatus::Timeout,
Err(SubkernelError::IncorrectState) => kern::SubkernelStatus::IncorrectState,
Err(_) => kern::SubkernelStatus::OtherError
};
kern_send(io, &kern::SubkernelAwaitFinishReply { status: status })
}
#[cfg(has_drtio)]
&kern::SubkernelMsgSend { id, count, tag, data } => {
subkernel::message_send(io, aux_mutex, _subkernel_mutex, routing_table, id, count, tag, data)?;
kern_acknowledge()
}
#[cfg(has_drtio)]
&kern::SubkernelMsgRecvRequest { id, timeout, tags } => {
let message_received = subkernel::message_await(io, _subkernel_mutex, id, timeout);
let (status, count) = match message_received {
Ok(ref message) => (kern::SubkernelStatus::NoError, message.count),
Err(SubkernelError::Timeout) => (kern::SubkernelStatus::Timeout, 0),
Err(SubkernelError::IncorrectState) => (kern::SubkernelStatus::IncorrectState, 0),
Err(SubkernelError::SubkernelFinished) => {
let res = subkernel::retrieve_finish_status(io, aux_mutex, _subkernel_mutex,
routing_table, id)?;
if res.comm_lost {
(kern::SubkernelStatus::CommLost, 0)
} else if let Some(exception) = &res.exception {
propagate_subkernel_exception!(exception, stream);
(kern::SubkernelStatus::OtherError, 0)
} else {
(kern::SubkernelStatus::OtherError, 0)
}
}
Err(_) => (kern::SubkernelStatus::OtherError, 0)
};
kern_send(io, &kern::SubkernelMsgRecvReply { status: status, count: count})?;
if let Ok(message) = message_received {
// receive code almost identical to RPC recv, except we are not reading from a stream
let mut reader = Cursor::new(message.data);
let mut current_tags = tags;
let mut i = 0;
loop {
// kernel has to consume all arguments in the whole message
let slot = kern_recv(io, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected root value slot from kernel CPU, not {:?}", other)
}
})?;
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
if size == 0 {
return Ok(0 as *mut ())
}
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
Ok(kern_recv(io, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other)
}
})?)
});
match res {
Ok(new_tags) => {
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
}
},
Err(_) => unexpected!("expected valid subkernel message data")
};
}
Ok(())
} else {
// if timed out, no data has been received, exception should be raised by kernel
Ok(())
}
},
request => unexpected!("unexpected request {:?} from kernel CPU", request)
}.and(Ok(false))
})
@ -530,13 +794,17 @@ fn process_kern_queued_rpc(stream: &mut TcpStream,
fn host_kernel_worker(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
stream: &mut TcpStream,
congress: &mut Congress) -> Result<(), Error<SchedError>> {
let mut session = Session::new(congress);
#[cfg(has_drtio)]
subkernel::clear_subkernels(&io, &subkernel_mutex)?;
loop {
if stream.can_recv() {
process_host_message(io, aux_mutex, ddma_mutex, subkernel_mutex,
routing_table, stream, &mut session)?
} else if !stream.may_recv() {
return Ok(())
}
@ -548,7 +816,7 @@ fn host_kernel_worker(io: &Io, aux_mutex: &Mutex,
if mailbox::receive() != 0 {
process_kern_message(io, aux_mutex,
routing_table, up_destinations,
ddma_mutex, subkernel_mutex,
Some(stream), &mut session)?;
}
@ -566,17 +834,23 @@ fn host_kernel_worker(io: &Io, aux_mutex: &Mutex,
fn flash_kernel_worker(io: &Io, aux_mutex: &Mutex,
routing_table: &drtio_routing::RoutingTable,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, subkernel_mutex: &Mutex, congress: &mut Congress,
config_key: &str) -> Result<(), Error<SchedError>> {
let mut session = Session::new(congress);
config::read(config_key, |result| {
match result {
Ok(kernel) => {
// process .ELF or .TAR kernels
let res = process_flash_kernel(io, aux_mutex, subkernel_mutex, routing_table, up_destinations, &mut session, kernel);
#[cfg(has_drtio)]
match res {
// wait to establish the DRTIO connection
Err(Error::DestinationDown) => io.sleep(500)?,
_ => ()
}
res
}
_ => Err(Error::KernelNotFound)
}
})?;
@ -588,7 +862,7 @@ fn flash_kernel_worker(io: &Io, aux_mutex: &Mutex,
}
if mailbox::receive() != 0 {
if process_kern_message(io, aux_mutex, routing_table, up_destinations, ddma_mutex, subkernel_mutex, None, &mut session)? {
return Ok(())
}
}
@ -613,13 +887,13 @@ fn respawn<F>(io: &Io, handle: &mut Option<ThreadHandle>, f: F)
}
}
*handle = Some(io.spawn(24576, f))
}
pub fn thread(io: Io, aux_mutex: &Mutex,
routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
let listener = TcpListener::new(&io, 65535);
listener.listen(1381).expect("session: cannot listen");
info!("accepting network sessions");
@ -628,9 +902,11 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
let mut kernel_thread = None;
{
let routing_table = routing_table.borrow();
let mut congress = congress.borrow_mut();
info!("running startup kernel");
match flash_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations,
ddma_mutex, subkernel_mutex, &mut congress, "startup_kernel") {
Ok(()) =>
info!("startup kernel finished"),
Err(Error::KernelNotFound) =>
@ -671,21 +947,28 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
let up_destinations = up_destinations.clone();
let congress = congress.clone();
let ddma_mutex = ddma_mutex.clone();
let subkernel_mutex = subkernel_mutex.clone();
let stream = stream.into_handle();
respawn(&io, &mut kernel_thread, move |io| {
let routing_table = routing_table.borrow();
let mut congress = congress.borrow_mut();
let mut stream = TcpStream::from_handle(&io, stream);
match host_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations,
&ddma_mutex, &subkernel_mutex, &mut stream, &mut *congress) {
Ok(()) => (),
Err(Error::Protocol(host::Error::Io(IoError::UnexpectedEnd))) =>
info!("connection closed"),
Err(Error::Protocol(host::Error::Io(
IoError::Other(SchedError::Interrupted)))) => {
info!("kernel interrupted");
#[cfg(has_drtio)]
drtio::clear_buffers(&io, &aux_mutex);
}
Err(err) => {
congress.finished_cleanly.set(false);
error!("session aborted: {}", err);
#[cfg(has_drtio)]
drtio::clear_buffers(&io, &aux_mutex);
}
}
stream.close().expect("session: close socket");
@ -700,22 +983,32 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
let up_destinations = up_destinations.clone();
let congress = congress.clone();
let ddma_mutex = ddma_mutex.clone();
let subkernel_mutex = subkernel_mutex.clone();
respawn(&io, &mut kernel_thread, move |io| {
let routing_table = routing_table.borrow();
let mut congress = congress.borrow_mut();
match flash_kernel_worker(&io, &aux_mutex, &routing_table, &up_destinations,
&ddma_mutex, &subkernel_mutex, &mut *congress, "idle_kernel") {
Ok(()) =>
info!("idle kernel finished, standing by"),
Err(Error::Protocol(host::Error::Io(
IoError::Other(SchedError::Interrupted)))) => {
info!("idle kernel interrupted");
// clear state for regular kernel
#[cfg(has_drtio)]
drtio::clear_buffers(&io, &aux_mutex);
}
Err(Error::KernelNotFound) => {
info!("no idle kernel found");
while io.relinquish().is_ok() {}
}
Err(err) => {
error!("idle kernel aborted: {}", err);
#[cfg(has_drtio)]
drtio::clear_buffers(&io, &aux_mutex);
}
}
})
}

View File

@ -14,8 +14,11 @@ build_misoc = { path = "../libbuild_misoc" }
[dependencies]
log = { version = "0.4", default-features = false }
io = { path = "../libio", features = ["byteorder", "alloc"] }
cslice = { version = "0.3" }
board_misoc = { path = "../libboard_misoc", features = ["uart_console", "log"] }
board_artiq = { path = "../libboard_artiq", features = ["alloc"] }
alloc_list = { path = "../liballoc_list" }
riscv = { version = "0.6.0", features = ["inline-asm"] }
proto_artiq = { path = "../libproto_artiq", features = ["log", "alloc"] }
eh = { path = "../libeh" }

View File

@ -1,9 +1,14 @@
include ../include/generated/variables.mak
include $(MISOC_DIRECTORY)/software/common.mak
CFLAGS += \
-I$(LIBUNWIND_DIRECTORY) \
-I$(LIBUNWIND_DIRECTORY)/../unwinder/include
LDFLAGS += \
-L../libunwind
RUSTFLAGS += -Cpanic=unwind
export XBUILD_SYSROOT_PATH=$(BUILDINC_DIRECTORY)/../sysroot
@ -15,8 +20,12 @@ $(RUSTOUT)/libsatman.a:
--manifest-path $(SATMAN_DIRECTORY)/Cargo.toml \
--target $(SATMAN_DIRECTORY)/../$(CARGO_TRIPLE).json
satman.elf: $(RUSTOUT)/libsatman.a ksupport_data.o
$(link) -T $(SATMAN_DIRECTORY)/../firmware.ld \
-lunwind-vexriscv-bare -m elf32lriscv
ksupport_data.o: ../ksupport/ksupport.elf
$(LD) -r -m elf32lriscv -b binary -o $@ $<
%.bin: %.elf
$(objcopy) -O binary

View File

@ -1,6 +1,6 @@
use core::cmp::min;
use board_misoc::{csr, cache};
use proto_artiq::drtioaux_proto::SAT_PAYLOAD_MAX_SIZE;
const BUFFER_SIZE: usize = 512 * 1024;
@ -86,10 +86,10 @@ impl Analyzer {
}
}
pub fn get_data(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> AnalyzerSliceMeta {
let data = unsafe { &BUFFER.data[..] };
let i = (self.data_pointer + self.sent_bytes) % BUFFER_SIZE;
let len = min(SAT_PAYLOAD_MAX_SIZE, self.data_len - self.sent_bytes);
let last = self.sent_bytes + len == self.data_len;
if i + len >= BUFFER_SIZE {

View File

@ -0,0 +1,83 @@
use alloc::{vec, vec::Vec, string::String, collections::btree_map::BTreeMap};
use cslice::{CSlice, AsCSlice};
use core::mem::transmute;
struct Entry {
data: Vec<i32>,
slice: CSlice<'static, i32>,
borrowed: bool
}
impl core::fmt::Debug for Entry {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
f.debug_struct("Entry")
.field("data", &self.data)
.field("borrowed", &self.borrowed)
.finish()
}
}
pub struct Cache {
entries: BTreeMap<String, Entry>,
empty: CSlice<'static, i32>,
}
impl core::fmt::Debug for Cache {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
f.debug_struct("Cache")
.field("entries", &self.entries)
.finish()
}
}
impl Cache {
pub fn new() -> Cache {
let empty_vec = vec![];
let empty = unsafe {
transmute::<CSlice<'_, i32>, CSlice<'static, i32>>(empty_vec.as_c_slice())
};
Cache { entries: BTreeMap::new(), empty }
}
pub fn get(&mut self, key: &str) -> *const CSlice<'static, i32> {
match self.entries.get_mut(key) {
None => &self.empty,
Some(ref mut entry) => {
entry.borrowed = true;
&entry.slice
}
}
}
pub fn put(&mut self, key: &str, data: &[i32]) -> Result<(), ()> {
match self.entries.get_mut(key) {
None => (),
Some(ref mut entry) => {
if entry.borrowed { return Err(()) }
entry.data = Vec::from(data);
unsafe {
entry.slice = transmute::<CSlice<'_, i32>, CSlice<'static, i32>>(
entry.data.as_c_slice());
}
return Ok(())
}
}
let data = Vec::from(data);
let slice = unsafe {
transmute::<CSlice<'_, i32>, CSlice<'static, i32>>(data.as_c_slice())
};
self.entries.insert(String::from(key), Entry {
data,
slice,
borrowed: false
});
Ok(())
}
pub unsafe fn unborrow(&mut self) {
for (_key, entry) in self.entries.iter_mut() {
entry.borrowed = false;
}
}
}
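A rough usage sketch of the borrow discipline this cache enforces (illustrative only, not the firmware code: a plain Vec stands in for CSlice and there is no kernel CPU involved); put is refused while an entry is still handed out, and unborrow is what the manager calls once the kernel stops:
use std::collections::BTreeMap;

// Simplified model of the borrow flag: an entry cannot be replaced
// while the kernel still holds a view into it.
struct Entry { data: Vec<i32>, borrowed: bool }

#[derive(Default)]
struct MiniCache { entries: BTreeMap<String, Entry> }

impl MiniCache {
    fn get(&mut self, key: &str) -> &[i32] {
        match self.entries.get_mut(key) {
            Some(e) => { e.borrowed = true; &e.data }
            None => &[],
        }
    }
    fn put(&mut self, key: &str, data: &[i32]) -> Result<(), ()> {
        if let Some(e) = self.entries.get_mut(key) {
            if e.borrowed { return Err(()); }
            e.data = data.to_vec();
            return Ok(());
        }
        self.entries.insert(key.into(), Entry { data: data.to_vec(), borrowed: false });
        Ok(())
    }
    fn unborrow(&mut self) {
        for e in self.entries.values_mut() { e.borrowed = false; }
    }
}

fn main() {
    let mut cache = MiniCache::default();
    cache.put("cal", &[1, 2, 3]).unwrap();
    let _copy = cache.get("cal").to_vec();     // marks the entry as borrowed
    assert!(cache.put("cal", &[4]).is_err());  // replacement refused while borrowed
    cache.unborrow();                          // called once the kernel CPU stops
    assert!(cache.put("cal", &[4]).is_ok());
}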

View File

@ -1,5 +1,7 @@
use board_misoc::{csr, cache::flush_l2_cache};
use proto_artiq::drtioaux_proto::PayloadStatus;
use alloc::{vec::Vec, collections::btree_map::BTreeMap};
use ::{cricon_select, RtioMaster};
const ALIGNMENT: usize = 64;
@ -50,7 +52,10 @@ impl Manager {
}
}
pub fn add(&mut self, id: u32, status: PayloadStatus, trace: &[u8], trace_len: usize) -> Result<(), Error> {
if status.is_first() {
self.entries.remove(&id);
}
let entry = match self.entries.get_mut(&id) {
Some(entry) => {
if entry.complete {
@ -75,7 +80,7 @@ impl Manager {
};
entry.trace.extend(&trace[0..trace_len]);
if status.is_last() {
entry.trace.push(0);
let data_len = entry.trace.len();
@ -126,14 +131,14 @@ impl Manager {
csr::rtio_dma::base_address_write(ptr as u64);
csr::rtio_dma::time_offset_write(timestamp as u64);
cricon_select(RtioMaster::Dma);
csr::rtio_dma::enable_write(1);
// playback has begun here, for status call check_state
}
Ok(())
}
pub fn get_status(&mut self) -> Option<RtioStatus> {
if self.state != ManagerState::Playback {
// nothing to report
return None;
@ -141,12 +146,11 @@ impl Manager {
let dma_enable = unsafe { csr::rtio_dma::enable_read() };
if dma_enable != 0 {
return None;
} else {
self.state = ManagerState::Idle;
unsafe {
cricon_select(RtioMaster::Drtio);
let error = csr::rtio_dma::error_read();
let channel = csr::rtio_dma::error_channel_read();
let timestamp = csr::rtio_dma::error_timestamp_read();
if error != 0 {
@ -161,4 +165,8 @@ impl Manager {
}
}
pub fn running(&self) -> bool {
self.state == ManagerState::Playback
}
}

View File

@ -0,0 +1,832 @@
use core::{mem, option::NoneError, cmp::min};
use alloc::{string::String, format, vec::Vec, collections::{btree_map::BTreeMap, vec_deque::VecDeque}};
use cslice::AsCSlice;
use board_artiq::{mailbox, spi};
use board_misoc::{csr, clock, i2c};
use proto_artiq::{
drtioaux_proto::PayloadStatus,
kernel_proto as kern,
session_proto::Reply::KernelException as HostKernelException,
rpc_proto as rpc};
use eh::eh_artiq;
use io::Cursor;
use kernel::eh_artiq::StackPointerBacktrace;
use ::{cricon_select, RtioMaster};
use cache::Cache;
use SAT_PAYLOAD_MAX_SIZE;
use MASTER_PAYLOAD_MAX_SIZE;
mod kernel_cpu {
use super::*;
use core::ptr;
use proto_artiq::kernel_proto::{KERNELCPU_EXEC_ADDRESS, KERNELCPU_LAST_ADDRESS, KSUPPORT_HEADER_SIZE};
pub unsafe fn start() {
if csr::kernel_cpu::reset_read() == 0 {
panic!("attempted to start kernel CPU when it is already running")
}
stop();
extern {
static _binary____ksupport_ksupport_elf_start: u8;
static _binary____ksupport_ksupport_elf_end: u8;
}
let ksupport_start = &_binary____ksupport_ksupport_elf_start as *const _;
let ksupport_end = &_binary____ksupport_ksupport_elf_end as *const _;
ptr::copy_nonoverlapping(ksupport_start,
(KERNELCPU_EXEC_ADDRESS - KSUPPORT_HEADER_SIZE) as *mut u8,
ksupport_end as usize - ksupport_start as usize);
csr::kernel_cpu::reset_write(0);
}
pub unsafe fn stop() {
csr::kernel_cpu::reset_write(1);
cricon_select(RtioMaster::Drtio);
mailbox::acknowledge();
}
pub fn validate(ptr: usize) -> bool {
ptr >= KERNELCPU_EXEC_ADDRESS && ptr <= KERNELCPU_LAST_ADDRESS
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
enum KernelState {
Absent,
Loaded,
Running,
MsgAwait { max_time: u64, tags: Vec<u8> },
MsgSending
}
#[derive(Debug)]
pub enum Error {
Load(String),
KernelNotFound,
InvalidPointer(usize),
Unexpected(String),
NoMessage,
AwaitingMessage,
SubkernelIoError,
KernelException(Sliceable)
}
impl From<NoneError> for Error {
fn from(_: NoneError) -> Error {
Error::KernelNotFound
}
}
impl From<io::Error<!>> for Error {
fn from(_value: io::Error<!>) -> Error {
Error::SubkernelIoError
}
}
macro_rules! unexpected {
($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*))));
}
/* represents data that has to be sent to Master */
#[derive(Debug)]
pub struct Sliceable {
it: usize,
data: Vec<u8>
}
/* represents interkernel messages */
struct Message {
count: u8,
data: Vec<u8>
}
#[derive(PartialEq)]
enum OutMessageState {
NoMessage,
MessageReady,
MessageBeingSent,
MessageSent,
MessageAcknowledged
}
/* for dealing with incoming and outgoing interkernel messages */
struct MessageManager {
out_message: Option<Sliceable>,
out_state: OutMessageState,
in_queue: VecDeque<Message>,
in_buffer: Option<Message>,
}
// Per-run state
struct Session {
kernel_state: KernelState,
log_buffer: String,
last_exception: Option<Sliceable>,
messages: MessageManager
}
#[derive(Debug)]
struct KernelLibrary {
library: Vec<u8>,
complete: bool
}
pub struct Manager {
kernels: BTreeMap<u32, KernelLibrary>,
current_id: u32,
session: Session,
cache: Cache,
last_finished: Option<SubkernelFinished>
}
pub struct SubkernelFinished {
pub id: u32,
pub with_exception: bool
}
pub struct SliceMeta {
pub len: u16,
pub status: PayloadStatus
}
macro_rules! get_slice_fn {
( $name:tt, $size:expr ) => {
pub fn $name(&mut self, data_slice: &mut [u8; $size]) -> SliceMeta {
let first = self.it == 0;
let len = min($size, self.data.len() - self.it);
let last = self.it + len == self.data.len();
let status = PayloadStatus::from_status(first, last);
data_slice[..len].clone_from_slice(&self.data[self.it..self.it+len]);
self.it += len;
SliceMeta {
len: len as u16,
status: status
}
}
};
}
impl Sliceable {
pub fn new(data: Vec<u8>) -> Sliceable {
Sliceable {
it: 0,
data: data
}
}
get_slice_fn!(get_slice_sat, SAT_PAYLOAD_MAX_SIZE);
get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE);
}
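For illustration only, here is a standalone sketch of the cursor-style slicing that Sliceable performs; the fixed 8-byte window and the (len, first, last) tuple are simplified stand-ins for the SAT/MASTER payload sizes and SliceMeta:
// Simplified stand-in for Sliceable::get_slice_*: each call copies the next
// window into a caller-provided buffer and reports first/last flags.
struct MiniSliceable { it: usize, data: Vec<u8> }

impl MiniSliceable {
    fn get_slice(&mut self, out: &mut [u8; 8]) -> (usize, bool, bool) {
        let first = self.it == 0;
        let len = out.len().min(self.data.len() - self.it);
        out[..len].copy_from_slice(&self.data[self.it..self.it + len]);
        self.it += len;
        (len, first, self.it == self.data.len())
    }
}

fn main() {
    let mut s = MiniSliceable { it: 0, data: (0u8..20).collect() };
    let mut buf = [0u8; 8];
    loop {
        let (len, first, last) = s.get_slice(&mut buf);
        println!("sent {} bytes (first: {}, last: {})", len, first, last);
        if last { break; }
    }
}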
impl MessageManager {
pub fn new() -> MessageManager {
MessageManager {
out_message: None,
out_state: OutMessageState::NoMessage,
in_queue: VecDeque::new(),
in_buffer: None
}
}
pub fn handle_incoming(&mut self, status: PayloadStatus, length: usize, data: &[u8; MASTER_PAYLOAD_MAX_SIZE]) {
// called when receiving a message from master
if status.is_first() {
// clear the buffer for first message
self.in_buffer = None;
}
match self.in_buffer.as_mut() {
Some(message) => message.data.extend(&data[..length]),
None => {
self.in_buffer = Some(Message {
count: data[0],
data: data[1..length].to_vec()
});
}
};
if status.is_last() {
// when done, remove from working queue
self.in_queue.push_back(self.in_buffer.take().unwrap());
}
}
pub fn is_outgoing_ready(&mut self) -> bool {
// called by main loop, to see if there's anything to send, will send it afterwards
match self.out_state {
OutMessageState::MessageReady => {
self.out_state = OutMessageState::MessageBeingSent;
true
},
_ => false
}
}
pub fn was_message_acknowledged(&mut self) -> bool {
match self.out_state {
OutMessageState::MessageAcknowledged => {
self.out_state = OutMessageState::NoMessage;
true
},
_ => false
}
}
pub fn get_outgoing_slice(&mut self, data_slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> Option<SliceMeta> {
if self.out_state != OutMessageState::MessageBeingSent {
return None;
}
let meta = self.out_message.as_mut()?.get_slice_master(data_slice);
if meta.status.is_last() {
// clear the message slot
self.out_message = None;
// notify kernel with a flag that message is sent
self.out_state = OutMessageState::MessageSent;
}
Some(meta)
}
pub fn ack_slice(&mut self) -> bool {
// returns whether or not there's more to be sent
match self.out_state {
OutMessageState::MessageBeingSent => true,
OutMessageState::MessageSent => {
self.out_state = OutMessageState::MessageAcknowledged;
false
},
_ => {
warn!("received unsolicited SubkernelMessageAck");
false
}
}
}
pub fn accept_outgoing(&mut self, count: u8, tag: &[u8], data: *const *const ()) -> Result<(), Error> {
let mut writer = Cursor::new(Vec::new());
rpc::send_args(&mut writer, 0, tag, data, false)?;
// skip service tag, but write the count
let mut data = writer.into_inner().split_off(3);
data[0] = count;
self.out_message = Some(Sliceable::new(data));
self.out_state = OutMessageState::MessageReady;
Ok(())
}
pub fn get_incoming(&mut self) -> Option<Message> {
self.in_queue.pop_front()
}
}
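As a simplified, self-contained sketch (not the firmware code), this is the reassembly idea behind handle_incoming: byte 0 of the first slice carries the argument count, later slices append their payload, and the message completes when a slice flagged as last arrives:
// Simplified model of MessageManager::handle_incoming.
struct Incoming { count: u8, data: Vec<u8> }

fn reassemble(slices: &[(&[u8], bool, bool)]) -> Option<Incoming> {
    // each tuple is (payload, is_first, is_last)
    let mut buf: Option<Incoming> = None;
    for &(payload, first, last) in slices {
        if first {
            buf = Some(Incoming { count: payload[0], data: payload[1..].to_vec() });
        } else if let Some(msg) = buf.as_mut() {
            msg.data.extend_from_slice(payload);
        }
        if last {
            return buf.take();
        }
    }
    None
}

fn main() {
    let msg = reassemble(&[
        (&[2u8, 10, 11, 12][..], true, false), // first slice: count = 2
        (&[13u8, 14][..], false, true),        // last slice
    ]).unwrap();
    assert_eq!(msg.count, 2);
    assert_eq!(msg.data, vec![10u8, 11, 12, 13, 14]);
}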
impl Session {
pub fn new() -> Session {
Session {
kernel_state: KernelState::Absent,
log_buffer: String::new(),
last_exception: None,
messages: MessageManager::new()
}
}
fn running(&self) -> bool {
match self.kernel_state {
KernelState::Absent | KernelState::Loaded => false,
KernelState::Running | KernelState::MsgAwait { .. } |
KernelState::MsgSending => true
}
}
fn flush_log_buffer(&mut self) {
if &self.log_buffer[self.log_buffer.len() - 1..] == "\n" {
for line in self.log_buffer.lines() {
info!(target: "kernel", "{}", line);
}
self.log_buffer.clear()
}
}
}
impl Manager {
pub fn new() -> Manager {
Manager {
kernels: BTreeMap::new(),
current_id: 0,
session: Session::new(),
cache: Cache::new(),
last_finished: None,
}
}
pub fn add(&mut self, id: u32, status: PayloadStatus, data: &[u8], data_len: usize) -> Result<(), Error> {
if status.is_first() {
// in case master is interrupted, and subkernel is sent again, clean the state
self.kernels.remove(&id);
}
let kernel = match self.kernels.get_mut(&id) {
Some(kernel) => {
if kernel.complete {
// replace entry
self.kernels.remove(&id);
self.kernels.insert(id, KernelLibrary {
library: Vec::new(),
complete: false });
self.kernels.get_mut(&id)?
} else {
kernel
}
},
None => {
self.kernels.insert(id, KernelLibrary {
library: Vec::new(),
complete: false });
self.kernels.get_mut(&id)?
},
};
kernel.library.extend(&data[0..data_len]);
kernel.complete = status.is_last();
Ok(())
}
pub fn is_running(&self) -> bool {
self.session.running()
}
pub fn get_current_id(&self) -> Option<u32> {
match self.is_running() {
true => Some(self.current_id),
false => None
}
}
pub fn stop(&mut self) {
unsafe { kernel_cpu::stop() }
self.session.kernel_state = KernelState::Absent;
unsafe { self.cache.unborrow() }
}
pub fn run(&mut self, id: u32) -> Result<(), Error> {
info!("starting subkernel #{}", id);
if self.session.kernel_state != KernelState::Loaded
|| self.current_id != id {
self.load(id)?;
}
self.session.kernel_state = KernelState::Running;
cricon_select(RtioMaster::Kernel);
kern_acknowledge()
}
pub fn message_handle_incoming(&mut self, status: PayloadStatus, length: usize, slice: &[u8; MASTER_PAYLOAD_MAX_SIZE]) {
if !self.is_running() {
return;
}
self.session.messages.handle_incoming(status, length, slice);
}
pub fn message_get_slice(&mut self, slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> Option<SliceMeta> {
if !self.is_running() {
return None;
}
self.session.messages.get_outgoing_slice(slice)
}
pub fn message_ack_slice(&mut self) -> bool {
if !self.is_running() {
warn!("received unsolicited SubkernelMessageAck");
return false;
}
self.session.messages.ack_slice()
}
pub fn message_is_ready(&mut self) -> bool {
self.session.messages.is_outgoing_ready()
}
pub fn get_last_finished(&mut self) -> Option<SubkernelFinished> {
self.last_finished.take()
}
pub fn load(&mut self, id: u32) -> Result<(), Error> {
if self.current_id == id && self.session.kernel_state == KernelState::Loaded {
return Ok(())
}
if !self.kernels.get(&id)?.complete {
return Err(Error::KernelNotFound)
}
self.current_id = id;
self.session = Session::new();
self.stop();
unsafe {
kernel_cpu::start();
kern_send(&kern::LoadRequest(&self.kernels.get(&id)?.library)).unwrap();
kern_recv(|reply| {
match reply {
kern::LoadReply(Ok(())) => {
self.session.kernel_state = KernelState::Loaded;
Ok(())
}
kern::LoadReply(Err(error)) => {
kernel_cpu::stop();
Err(Error::Load(format!("{}", error)))
}
other => {
unexpected!("unexpected kernel CPU reply to load request: {:?}", other)
}
}
})
}
}
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta {
match self.session.last_exception.as_mut() {
Some(exception) => exception.get_slice_sat(data_slice),
None => SliceMeta { len: 0, status: PayloadStatus::FirstAndLast }
}
}
fn runtime_exception(&mut self, cause: Error) {
let raw_exception: Vec<u8> = Vec::new();
let mut writer = Cursor::new(raw_exception);
match (HostKernelException {
exceptions: &[Some(eh_artiq::Exception {
id: 11, // SubkernelError, defined in ksupport
message: format!("in subkernel id {}: {:?}", self.current_id, cause).as_c_slice(),
param: [0, 0, 0],
file: file!().as_c_slice(),
line: line!(),
column: column!(),
function: format!("subkernel id {}", self.current_id).as_c_slice(),
})],
stack_pointers: &[StackPointerBacktrace {
stack_pointer: 0,
initial_backtrace_size: 0,
current_backtrace_size: 0
}],
backtrace: &[],
async_errors: 0
}).write_to(&mut writer) {
Ok(_) => self.session.last_exception = Some(Sliceable::new(writer.into_inner())),
Err(_) => error!("Error writing exception data")
}
}
pub fn process_kern_requests(&mut self, rank: u8) {
if !self.is_running() {
return;
}
match self.process_external_messages() {
Ok(()) => (),
Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages
Err(Error::KernelException(exception)) => {
unsafe { kernel_cpu::stop() }
self.session.kernel_state = KernelState::Absent;
unsafe { self.cache.unborrow() }
self.session.last_exception = Some(exception);
self.last_finished = Some(SubkernelFinished { id: self.current_id, with_exception: true })
},
Err(e) => {
error!("Error while running processing external messages: {:?}", e);
self.stop();
self.runtime_exception(e);
self.last_finished = Some(SubkernelFinished { id: self.current_id, with_exception: true })
}
}
match self.process_kern_message(rank) {
Ok(Some(with_exception)) => {
self.last_finished = Some(SubkernelFinished { id: self.current_id, with_exception: with_exception })
},
Ok(None) | Err(Error::NoMessage) => (),
Err(e) => {
error!("Error while running kernel: {:?}", e);
self.stop();
self.runtime_exception(e);
self.last_finished = Some(SubkernelFinished { id: self.current_id, with_exception: true })
}
}
}
fn process_external_messages(&mut self) -> Result<(), Error> {
match &self.session.kernel_state {
KernelState::MsgAwait { max_time, tags } => {
if clock::get_ms() > *max_time {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::Timeout, count: 0 })?;
self.session.kernel_state = KernelState::Running;
return Ok(())
}
if let Some(message) = self.session.messages.get_incoming() {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::NoError, count: message.count })?;
let tags = tags.clone();
self.session.kernel_state = KernelState::Running;
pass_message_to_kernel(&message, &tags)
} else {
Err(Error::AwaitingMessage)
}
},
KernelState::MsgSending => {
if self.session.messages.was_message_acknowledged() {
self.session.kernel_state = KernelState::Running;
kern_acknowledge()
} else {
Err(Error::AwaitingMessage)
}
},
_ => Ok(())
}
}
fn process_kern_message(&mut self, rank: u8) -> Result<Option<bool>, Error> {
// returns Ok(with_exception) on finish
// None if the kernel is still running
kern_recv(|request| {
match (request, &self.session.kernel_state) {
(&kern::LoadReply(_), KernelState::Loaded) => {
// We're standing by; ignore the message.
return Ok(None)
}
(_, KernelState::Running) => (),
_ => {
unexpected!("unexpected request {:?} from kernel CPU in {:?} state",
request, self.session.kernel_state)
},
}
if process_kern_hwreq(request, rank)? {
return Ok(None)
}
match request {
&kern::Log(args) => {
use core::fmt::Write;
self.session.log_buffer
.write_fmt(args)
.unwrap_or_else(|_| warn!("cannot append to session log buffer"));
self.session.flush_log_buffer();
kern_acknowledge()
}
&kern::LogSlice(arg) => {
self.session.log_buffer += arg;
self.session.flush_log_buffer();
kern_acknowledge()
}
&kern::RpcFlush => {
// we do not have to do anything about this request,
// it is sent by the kernel firmware regardless of RPC being used
kern_acknowledge()
}
&kern::CacheGetRequest { key } => {
let value = self.cache.get(key);
kern_send(&kern::CacheGetReply {
value: unsafe { mem::transmute(value) }
})
}
&kern::CachePutRequest { key, value } => {
let succeeded = self.cache.put(key, value).is_ok();
kern_send(&kern::CachePutReply { succeeded: succeeded })
}
&kern::RunFinished => {
unsafe { kernel_cpu::stop() }
self.session.kernel_state = KernelState::Absent;
unsafe { self.cache.unborrow() }
return Ok(Some(false))
}
&kern::RunException { exceptions, stack_pointers, backtrace } => {
unsafe { kernel_cpu::stop() }
self.session.kernel_state = KernelState::Absent;
unsafe { self.cache.unborrow() }
let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?;
self.session.last_exception = Some(exception);
return Ok(Some(true))
}
&kern::SubkernelMsgSend { id: _, count, tag, data } => {
self.session.messages.accept_outgoing(count, tag, data)?;
// acknowledge after the message is sent
self.session.kernel_state = KernelState::MsgSending;
Ok(())
}
&kern::SubkernelMsgRecvRequest { id: _, timeout, tags } => {
let max_time = clock::get_ms() + timeout as u64;
self.session.kernel_state = KernelState::MsgAwait { max_time: max_time, tags: tags.to_vec() };
Ok(())
},
request => unexpected!("unexpected request {:?} from kernel CPU", request)
}.and(Ok(None))
})
}
}
impl Drop for Manager {
fn drop(&mut self) {
cricon_select(RtioMaster::Drtio);
unsafe {
kernel_cpu::stop()
};
}
}
fn kern_recv<R, F>(f: F) -> Result<R, Error>
where F: FnOnce(&kern::Message) -> Result<R, Error> {
if mailbox::receive() == 0 {
return Err(Error::NoMessage);
};
if !kernel_cpu::validate(mailbox::receive()) {
return Err(Error::InvalidPointer(mailbox::receive()))
}
f(unsafe { &*(mailbox::receive() as *const kern::Message) })
}
fn kern_recv_w_timeout<R, F>(timeout: u64, f: F) -> Result<R, Error>
where F: FnOnce(&kern::Message) -> Result<R, Error> + Copy {
// sometimes kernel may be too slow to respond immediately
// (e.g. when receiving external messages)
// we cannot wait indefinitely to keep the satellite responsive
// so a timeout is used instead
let max_time = clock::get_ms() + timeout;
while clock::get_ms() < max_time {
match kern_recv(f) {
Err(Error::NoMessage) => continue,
anything_else => return anything_else
}
}
Err(Error::NoMessage)
}
fn kern_acknowledge() -> Result<(), Error> {
mailbox::acknowledge();
Ok(())
}
fn kern_send(request: &kern::Message) -> Result<(), Error> {
unsafe { mailbox::send(request as *const _ as usize) }
while !mailbox::acknowledged() {}
Ok(())
}
fn slice_kernel_exception(exceptions: &[Option<eh_artiq::Exception>],
stack_pointers: &[eh_artiq::StackPointerBacktrace],
backtrace: &[(usize, usize)]
) -> Result<Sliceable, Error> {
error!("exception in kernel");
for exception in exceptions {
error!("{:?}", exception.unwrap());
}
error!("stack pointers: {:?}", stack_pointers);
error!("backtrace: {:?}", backtrace);
// master will only pass the exception data back to the host:
let raw_exception: Vec<u8> = Vec::new();
let mut writer = Cursor::new(raw_exception);
match (HostKernelException {
exceptions: exceptions,
stack_pointers: stack_pointers,
backtrace: backtrace,
async_errors: 0
}).write_to(&mut writer) {
// save last exception data to be received by master
Ok(_) => Ok(Sliceable::new(writer.into_inner())),
Err(_) => Err(Error::SubkernelIoError)
}
}
fn pass_message_to_kernel(message: &Message, tags: &[u8]) -> Result<(), Error> {
let mut reader = Cursor::new(&message.data);
let mut current_tags = tags;
let mut i = 0;
loop {
let slot = kern_recv_w_timeout(100, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
&kern::RunException { exceptions, stack_pointers, backtrace } => {
let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?;
Err(Error::KernelException(exception))
},
other => unexpected!(
"expected root value slot from kernel CPU, not {:?}", other)
}
})?;
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error> {
if size == 0 {
return Ok(0 as *mut ())
}
kern_send(&kern::RpcRecvReply(Ok(size)))?;
Ok(kern_recv_w_timeout(100, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
&kern::RunException {
exceptions,
stack_pointers,
backtrace
}=> {
let exception = slice_kernel_exception(&exceptions, &stack_pointers, &backtrace)?;
Err(Error::KernelException(exception))
},
other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other)
}
})?)
});
match res {
Ok(new_tags) => {
kern_send(&kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
}
},
Err(_) => unexpected!("expected valid subkernel message data")
};
}
Ok(())
}
fn process_kern_hwreq(request: &kern::Message, rank: u8) -> Result<bool, Error> {
match request {
&kern::RtioInitRequest => {
unsafe {
csr::drtiosat::reset_write(1);
clock::spin_us(100);
csr::drtiosat::reset_write(0);
}
kern_acknowledge()
}
&kern::RtioDestinationStatusRequest { destination } => {
// only local destination is considered "up"
// no access to other DRTIO destinations
kern_send(&kern::RtioDestinationStatusReply {
up: destination == rank })
}
&kern::I2cStartRequest { busno } => {
let succeeded = i2c::start(busno as u8).is_ok();
kern_send(&kern::I2cBasicReply { succeeded: succeeded })
}
&kern::I2cRestartRequest { busno } => {
let succeeded = i2c::restart(busno as u8).is_ok();
kern_send(&kern::I2cBasicReply { succeeded: succeeded })
}
&kern::I2cStopRequest { busno } => {
let succeeded = i2c::stop(busno as u8).is_ok();
kern_send(&kern::I2cBasicReply { succeeded: succeeded })
}
&kern::I2cWriteRequest { busno, data } => {
match i2c::write(busno as u8, data) {
Ok(ack) => kern_send(
&kern::I2cWriteReply { succeeded: true, ack: ack }),
Err(_) => kern_send(
&kern::I2cWriteReply { succeeded: false, ack: false })
}
}
&kern::I2cReadRequest { busno, ack } => {
match i2c::read(busno as u8, ack) {
Ok(data) => kern_send(
&kern::I2cReadReply { succeeded: true, data: data }),
Err(_) => kern_send(
&kern::I2cReadReply { succeeded: false, data: 0xff })
}
}
&kern::I2cSwitchSelectRequest { busno, address, mask } => {
let succeeded = i2c::switch_select(busno as u8, address, mask).is_ok();
kern_send(&kern::I2cBasicReply { succeeded: succeeded })
}
&kern::SpiSetConfigRequest { busno, flags, length, div, cs } => {
let succeeded = spi::set_config(busno as u8, flags, length, div, cs).is_ok();
kern_send(&kern::SpiBasicReply { succeeded: succeeded })
},
&kern::SpiWriteRequest { busno, data } => {
let succeeded = spi::write(busno as u8, data).is_ok();
kern_send(&kern::SpiBasicReply { succeeded: succeeded })
}
&kern::SpiReadRequest { busno } => {
match spi::read(busno as u8) {
Ok(data) => kern_send(
&kern::SpiReadReply { succeeded: true, data: data }),
Err(_) => kern_send(
&kern::SpiReadReply { succeeded: false, data: 0 })
}
}
_ => return Ok(false)
}.and(Ok(true))
}

View File

@@ -1,4 +1,4 @@
-#![feature(never_type, panic_info_message, llvm_asm, default_alloc_error_handler)]
+#![feature(never_type, panic_info_message, llvm_asm, default_alloc_error_handler, try_trait)]
 #![no_std]
 #[macro_use]
@@ -9,20 +9,23 @@ extern crate board_artiq;
 extern crate riscv;
 extern crate alloc;
 extern crate proto_artiq;
+extern crate cslice;
+extern crate io;
+extern crate eh;
 use core::convert::TryFrom;
 use board_misoc::{csr, ident, clock, uart_logger, i2c, pmp};
 #[cfg(has_si5324)]
 use board_artiq::si5324;
-use board_artiq::{spi, drtioaux};
+use board_artiq::{spi, drtioaux, drtio_routing};
 #[cfg(soc_platform = "efc")]
 use board_artiq::ad9117;
-use board_artiq::drtio_routing;
-use proto_artiq::drtioaux_proto::ANALYZER_MAX_SIZE;
+use proto_artiq::drtioaux_proto::{SAT_PAYLOAD_MAX_SIZE, MASTER_PAYLOAD_MAX_SIZE};
 #[cfg(has_drtio_eem)]
 use board_artiq::drtio_eem;
 use riscv::register::{mcause, mepc, mtval};
 use dma::Manager as DmaManager;
+use kernel::Manager as KernelManager;
 use analyzer::Analyzer;
 #[global_allocator]
@@ -31,6 +34,8 @@ static mut ALLOC: alloc_list::ListAlloc = alloc_list::EMPTY;
 mod repeater;
 mod dma;
 mod analyzer;
+mod kernel;
+mod cache;
 fn drtiosat_reset(reset: bool) {
     unsafe {
@@ -60,6 +65,22 @@ fn drtiosat_tsc_loaded() -> bool {
     }
 }
+pub enum RtioMaster {
+    Drtio,
+    Dma,
+    Kernel
+}
+
+pub fn cricon_select(master: RtioMaster) {
+    let val = match master {
+        RtioMaster::Drtio => 0,
+        RtioMaster::Dma => 1,
+        RtioMaster::Kernel => 2
+    };
+    unsafe {
+        csr::cri_con::selected_write(val);
+    }
+}
 #[cfg(has_drtio_routing)]
 macro_rules! forward {
@@ -81,8 +102,8 @@ macro_rules! forward {
     ($routing_table:expr, $destination:expr, $rank:expr, $repeaters:expr, $packet:expr) => {}
 }
-fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repeaters: &mut [repeater::Repeater],
-        _routing_table: &mut drtio_routing::RoutingTable, _rank: &mut u8,
+fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmgr: &mut KernelManager,
+        _repeaters: &mut [repeater::Repeater], _routing_table: &mut drtio_routing::RoutingTable, _rank: &mut u8,
         packet: drtioaux::Packet) -> Result<(), drtioaux::Error<!>> {
     // In the code below, *_chan_sel_write takes an u8 if there are fewer than 256 channels,
     // and u16 otherwise; hence the `as _` conversion.
@@ -102,18 +123,30 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea
             drtioaux::send(0, &drtioaux::Packet::ResetAck)
         },
-        drtioaux::Packet::DestinationStatusRequest { destination: _destination } => {
+        drtioaux::Packet::DestinationStatusRequest { destination } => {
             #[cfg(has_drtio_routing)]
-            let hop = _routing_table.0[_destination as usize][*_rank as usize];
+            let hop = _routing_table.0[destination as usize][*_rank as usize];
             #[cfg(not(has_drtio_routing))]
             let hop = 0;
             if hop == 0 {
                 // async messages
-                if let Some(status) = _manager.check_state() {
+                if let Some(status) = dmamgr.get_status() {
                     info!("playback done, error: {}, channel: {}, timestamp: {}", status.error, status.channel, status.timestamp);
                     drtioaux::send(0, &drtioaux::Packet::DmaPlaybackStatus {
-                        destination: _destination, id: status.id, error: status.error, channel: status.channel, timestamp: status.timestamp })?;
+                        destination: destination, id: status.id, error: status.error, channel: status.channel, timestamp: status.timestamp })?;
+                } else if let Some(subkernel_finished) = kernelmgr.get_last_finished() {
+                    info!("subkernel {} finished, with exception: {}", subkernel_finished.id, subkernel_finished.with_exception);
+                    drtioaux::send(0, &drtioaux::Packet::SubkernelFinished {
+                        id: subkernel_finished.id, with_exception: subkernel_finished.with_exception
+                    })?;
+                } else if kernelmgr.message_is_ready() {
+                    let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
+                    let meta = kernelmgr.message_get_slice(&mut data_slice).unwrap();
+                    drtioaux::send(0, &drtioaux::Packet::SubkernelMessage {
+                        destination: destination, id: kernelmgr.get_current_id().unwrap(),
+                        status: meta.status, length: meta.len as u16, data: data_slice
+                    })?;
                 } else {
                     let errors;
                     unsafe {
@@ -157,7 +190,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea
                 if hop <= csr::DRTIOREP.len() {
                     let repno = hop - 1;
                     match _repeaters[repno].aux_forward(&drtioaux::Packet::DestinationStatusRequest {
-                        destination: _destination
+                        destination: destination
                     }) {
                         Ok(()) => (),
                         Err(drtioaux::Error::LinkDown) => drtioaux::send(0, &drtioaux::Packet::DestinationDownReply)?,
@@ -328,7 +361,7 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea
         drtioaux::Packet::AnalyzerDataRequest { destination: _destination } => {
             forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
-            let mut data_slice: [u8; ANALYZER_MAX_SIZE] = [0; ANALYZER_MAX_SIZE];
+            let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
             let meta = analyzer.get_data(&mut data_slice);
             drtioaux::send(0, &drtioaux::Packet::AnalyzerData {
                 last: meta.last,
@@ -337,28 +370,80 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea
             })
         }
-        #[cfg(has_rtio_dma)]
-        drtioaux::Packet::DmaAddTraceRequest { destination: _destination, id, last, length, trace } => {
+        drtioaux::Packet::DmaAddTraceRequest { destination: _destination, id, status, length, trace } => {
             forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
-            let succeeded = _manager.add(id, last, &trace, length as usize).is_ok();
+            let succeeded = dmamgr.add(id, status, &trace, length as usize).is_ok();
             drtioaux::send(0,
                 &drtioaux::Packet::DmaAddTraceReply { succeeded: succeeded })
         }
-        #[cfg(has_rtio_dma)]
         drtioaux::Packet::DmaRemoveTraceRequest { destination: _destination, id } => {
             forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
-            let succeeded = _manager.erase(id).is_ok();
+            let succeeded = dmamgr.erase(id).is_ok();
             drtioaux::send(0,
                 &drtioaux::Packet::DmaRemoveTraceReply { succeeded: succeeded })
         }
-        #[cfg(has_rtio_dma)]
         drtioaux::Packet::DmaPlaybackRequest { destination: _destination, id, timestamp } => {
             forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
-            let succeeded = _manager.playback(id, timestamp).is_ok();
+            // no DMA with a running kernel
+            let succeeded = !kernelmgr.is_running() && dmamgr.playback(id, timestamp).is_ok();
             drtioaux::send(0,
                 &drtioaux::Packet::DmaPlaybackReply { succeeded: succeeded })
         }
+        drtioaux::Packet::SubkernelAddDataRequest { destination: _destination, id, status, length, data } => {
+            forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
+            let succeeded = kernelmgr.add(id, status, &data, length as usize).is_ok();
+            drtioaux::send(0,
+                &drtioaux::Packet::SubkernelAddDataReply { succeeded: succeeded })
+        }
+        drtioaux::Packet::SubkernelLoadRunRequest { destination: _destination, id, run } => {
+            forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
+            let mut succeeded = kernelmgr.load(id).is_ok();
+            // allow preloading a kernel with delayed run
+            if run {
+                if dmamgr.running() {
+                    // cannot run kernel while DDMA is running
+                    succeeded = false;
+                } else {
+                    succeeded |= kernelmgr.run(id).is_ok();
+                }
+            }
+            drtioaux::send(0,
+                &drtioaux::Packet::SubkernelLoadRunReply { succeeded: succeeded })
+        }
+        drtioaux::Packet::SubkernelExceptionRequest { destination: _destination } => {
+            forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
+            let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
+            let meta = kernelmgr.exception_get_slice(&mut data_slice);
+            drtioaux::send(0, &drtioaux::Packet::SubkernelException {
+                last: meta.status.is_last(),
+                length: meta.len,
+                data: data_slice,
+            })
+        }
+        drtioaux::Packet::SubkernelMessage { destination, id: _id, status, length, data } => {
+            forward!(_routing_table, destination, *_rank, _repeaters, &packet);
+            kernelmgr.message_handle_incoming(status, length as usize, &data);
+            drtioaux::send(0, &drtioaux::Packet::SubkernelMessageAck {
+                destination: destination
+            })
+        }
+        drtioaux::Packet::SubkernelMessageAck { destination: _destination } => {
+            forward!(_routing_table, _destination, *_rank, _repeaters, &packet);
+            if kernelmgr.message_ack_slice() {
+                let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
+                if let Some(meta) = kernelmgr.message_get_slice(&mut data_slice) {
+                    drtioaux::send(0, &drtioaux::Packet::SubkernelMessage {
+                        destination: *_rank, id: kernelmgr.get_current_id().unwrap(),
+                        status: meta.status, length: meta.len as u16, data: data_slice
+                    })?
+                } else {
+                    error!("Error receiving message slice");
+                }
+            }
+            Ok(())
+        }
         _ => {
             warn!("received unexpected aux packet");
             Ok(())
@@ -367,12 +452,12 @@ fn process_aux_packet(_manager: &mut DmaManager, analyzer: &mut Analyzer, _repea
 }
 fn process_aux_packets(dma_manager: &mut DmaManager, analyzer: &mut Analyzer,
-        repeaters: &mut [repeater::Repeater],
+        kernelmgr: &mut KernelManager, repeaters: &mut [repeater::Repeater],
         routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8) {
     let result =
         drtioaux::recv(0).and_then(|packet| {
             if let Some(packet) = packet {
-                process_aux_packet(dma_manager, analyzer, repeaters, routing_table, rank, packet)
+                process_aux_packet(dma_manager, analyzer, kernelmgr, repeaters, routing_table, rank, packet)
             } else {
                 Ok(())
             }
@@ -384,10 +469,7 @@ fn process_aux_packets(dma_manager: &mut DmaManager, analyzer: &mut Analyzer,
 }
 fn drtiosat_process_errors() {
-    let errors;
-    unsafe {
-        errors = csr::drtiosat::protocol_error_read();
-    }
+    let errors = unsafe { csr::drtiosat::protocol_error_read() };
     if errors & 1 != 0 {
         error!("received packet of an unknown type");
     }
@@ -395,26 +477,29 @@ fn drtiosat_process_errors() {
         error!("received truncated packet");
     }
     if errors & 4 != 0 {
-        let destination;
-        unsafe {
-            destination = csr::drtiosat::buffer_space_timeout_dest_read();
-        }
+        let destination = unsafe {
+            csr::drtiosat::buffer_space_timeout_dest_read()
+        };
         error!("timeout attempting to get buffer space from CRI, destination=0x{:02x}", destination)
     }
-    if errors & 8 != 0 {
-        let channel;
-        let timestamp_event;
-        let timestamp_counter;
-        unsafe {
-            channel = csr::drtiosat::underflow_channel_read();
-            timestamp_event = csr::drtiosat::underflow_timestamp_event_read() as i64;
-            timestamp_counter = csr::drtiosat::underflow_timestamp_counter_read() as i64;
-        }
-        error!("write underflow, channel={}, timestamp={}, counter={}, slack={}",
-            channel, timestamp_event, timestamp_counter, timestamp_event-timestamp_counter);
-    }
-    if errors & 16 != 0 {
-        error!("write overflow");
-    }
+    let drtiosat_active = unsafe { csr::cri_con::selected_read() == 0 };
+    if drtiosat_active {
+        // RTIO errors are handled by ksupport and dma manager
+        if errors & 8 != 0 {
+            let channel;
+            let timestamp_event;
+            let timestamp_counter;
+            unsafe {
+                channel = csr::drtiosat::underflow_channel_read();
+                timestamp_event = csr::drtiosat::underflow_timestamp_event_read() as i64;
+                timestamp_counter = csr::drtiosat::underflow_timestamp_counter_read() as i64;
+            }
+            error!("write underflow, channel={}, timestamp={}, counter={}, slack={}",
+                channel, timestamp_event, timestamp_counter, timestamp_event-timestamp_counter);
+        }
+        if errors & 16 != 0 {
+            error!("write overflow");
+        }
+    }
     unsafe {
         csr::drtiosat::protocol_error_write(errors);
@@ -486,7 +571,7 @@ fn sysclk_setup() {
             si5324::setup(&SI5324_SETTINGS, si5324::Input::Ckin1).expect("cannot initialize Si5324");
             info!("Switching sys clock, rebooting...");
             // delay for clean UART log, wait until UART FIFO is empty
-            clock::spin_us(1300);
+            clock::spin_us(3000);
             unsafe {
                 csr::gt_drtio::stable_clkin_write(1);
             }
@@ -547,7 +632,8 @@ pub extern fn main() -> i32 {
     #[cfg(soc_platform = "efc")]
     {
         io_expander = board_misoc::io_expander::IoExpander::new().unwrap();
+        io_expander.init().expect("I2C I/O expander initialization failed");
         // Enable LEDs
         io_expander.set_oe(0, 1 << 5 | 1 << 6 | 1 << 7).unwrap();
@@ -613,21 +699,23 @@ pub extern fn main() -> i32 {
                 si5324::siphaser::calibrate_skew().expect("failed to calibrate skew");
             }
-            // DMA manager created here, so when link is dropped, all DMA traces
-            // are cleared out for a clean slate on subsequent connections,
-            // without a manual intervention.
+            // various managers created here, so when link is dropped, DMA traces,
+            // analyzer logs, kernels are cleared and/or stopped for a clean slate
+            // on subsequent connections, without a manual intervention.
             let mut dma_manager = DmaManager::new();
-            // Reset the analyzer as well.
             let mut analyzer = Analyzer::new();
+            let mut kernelmgr = KernelManager::new();
+            cricon_select(RtioMaster::Drtio);
             drtioaux::reset(0);
             drtiosat_reset(false);
             drtiosat_reset_phy(false);
             while drtiosat_link_rx_up() {
                 drtiosat_process_errors();
-                process_aux_packets(&mut dma_manager, &mut analyzer, &mut repeaters, &mut routing_table, &mut rank);
+                process_aux_packets(&mut dma_manager, &mut analyzer,
+                    &mut kernelmgr, &mut repeaters,
+                    &mut routing_table, &mut rank);
                 for rep in repeaters.iter_mut() {
                     rep.service(&routing_table, rank);
                 }
@@ -650,6 +738,7 @@ pub extern fn main() -> i32 {
                         error!("aux packet error: {}", e);
                     }
                 }
+                kernelmgr.process_kern_requests(rank);
             }
             drtiosat_reset_phy(true);

View File

@ -1,78 +0,0 @@
INCLUDE generated/output_format.ld
INCLUDE generated/regions.ld
ENTRY(_reset_handler)
SECTIONS
{
.vectors :
{
*(.vectors)
} > main_ram
.text :
{
*(.text .text.*)
. = ALIGN(0x40000);
} > main_ram
.eh_frame :
{
__eh_frame_start = .;
KEEP(*(.eh_frame))
__eh_frame_end = .;
} > main_ram
/* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */
.got :
{
PROVIDE(_GLOBAL_OFFSET_TABLE_ = .);
*(.got)
} > main_ram
.got.plt :
{
*(.got.plt)
} > main_ram
.rodata :
{
_frodata = .;
*(.rodata .rodata.*)
_erodata = .;
} > main_ram
.data :
{
*(.data .data.*)
} > main_ram
.sdata :
{
*(.sdata .sdata.*)
} > main_ram
.bss (NOLOAD) : ALIGN(4)
{
_fbss = .;
*(.sbss .sbss.* .bss .bss.*);
. = ALIGN(4);
_ebss = .;
} > main_ram
.stack (NOLOAD) : ALIGN(0x1000)
{
_sstack_guard = .;
. += 0x1000;
_estack = .;
. += 0x10000;
_fstack = . - 16;
} > main_ram
/* 64MB heap for alloc use */
.heap (NOLOAD) : ALIGN(16)
{
_fheap = .;
. = . + 0x4000000;
_eheap = .;
} > main_ram
}

View File

@@ -146,7 +146,8 @@ class Client:
             print(error_msg)
             table = PrettyTable()
             table.field_names = ["Variant", "Expiry date"]
-            table.add_rows(variants)
+            for variant in variants:
+                table.add_row(variant)
             print(table)
             sys.exit(1)
         return variants[0][0]
@@ -244,10 +245,11 @@ def main():
             sys.exit(1)
         zip_unarchive(contents, args.directory)
     elif args.action == "get_variants":
-        data = client.get_variants()
+        variants = client.get_variants()
         table = PrettyTable()
         table.field_names = ["Variant", "Expiry date"]
-        table.add_rows(data)
+        for variant in variants:
+            table.add_row(variant)
         print(table)
     elif args.action == "get_json":
         if args.variant:

View File

@ -0,0 +1,113 @@
#!/usr/bin/env python3
import argparse
import asyncio
import atexit
import logging
from sipyco.asyncio_tools import AsyncioServer, SignalHandler, atexit_register_coroutine
from sipyco.pc_rpc import Server
from sipyco import common_args
from artiq.coredevice.comm_analyzer import get_analyzer_dump
logger = logging.getLogger(__name__)
# simplified version of sipyco Broadcaster
class ProxyServer(AsyncioServer):
def __init__(self, queue_limit=8):
AsyncioServer.__init__(self)
self._recipients = set()
self._queue_limit = queue_limit
async def _handle_connection_cr(self, reader, writer):
try:
queue = asyncio.Queue(self._queue_limit)
self._recipients.add(queue)
try:
while True:
dump = await queue.get()
writer.write(dump)
# raise exception on connection error
await writer.drain()
finally:
self._recipients.remove(queue)
except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError):
# receivers disconnecting is a normal occurrence
pass
finally:
writer.close()
def distribute(self, dump):
for recipient in self._recipients:
recipient.put_nowait(dump)
class ProxyControl:
def __init__(self, distribute_cb, core_addr, core_port=1382):
self.distribute_cb = distribute_cb
self.core_addr = core_addr
self.core_port = core_port
def ping(self):
return True
def trigger(self):
try:
dump = get_analyzer_dump(self.core_addr, self.core_port)
self.distribute_cb(dump)
except:
logger.warning("Trigger failed:", exc_info=True)
return False
else:
return True
def get_argparser():
parser = argparse.ArgumentParser(
description="ARTIQ core analyzer proxy")
common_args.verbosity_args(parser)
common_args.simple_network_args(parser, [
("proxy", "proxying", 1385),
("control", "control", 1386)
])
parser.add_argument("core_addr", metavar="CORE_ADDR",
help="hostname or IP address of the core device")
return parser
def main():
args = get_argparser().parse_args()
common_args.init_logger_from_args(args)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
atexit.register(loop.close)
signal_handler = SignalHandler()
signal_handler.setup()
atexit.register(signal_handler.teardown)
bind_address = common_args.bind_address_from_args(args)
proxy_server = ProxyServer()
loop.run_until_complete(proxy_server.start(bind_address, args.port_proxy))
atexit_register_coroutine(proxy_server.stop, loop=loop)
controller = ProxyControl(proxy_server.distribute, args.core_addr)
server = Server({"coreanalyzer_proxy_control": controller}, None, True)
loop.run_until_complete(server.start(bind_address, args.port_control))
atexit_register_coroutine(server.stop, loop=loop)
_, pending = loop.run_until_complete(asyncio.wait(
[loop.create_task(signal_handler.wait_terminate()),
loop.create_task(server.wait_terminate())],
return_when=asyncio.FIRST_COMPLETED))
for task in pending:
task.cancel()
if __name__ == "__main__":
main()
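
For reference, the proxy above can be driven over its control port with a plain sipyco RPC client. A minimal sketch, assuming the default ports declared in get_argparser() (1385 for proxying, 1386 for control) and a proxy already started locally against a core device; the local address is an example:

from sipyco.pc_rpc import Client

# connect to the control target registered above as "coreanalyzer_proxy_control"
proxy_ctl = Client("::1", 1386, "coreanalyzer_proxy_control")
try:
    assert proxy_ctl.ping()
    # pull one analyzer dump from the core device and fan it out to connected proxy clients
    ok = proxy_ctl.trigger()
    print("trigger succeeded:", ok)
finally:
    proxy_ctl.close_rpc()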

View File

@@ -81,7 +81,7 @@ class Browser(QtWidgets.QMainWindow):
         self.files.dataset_changed.connect(
             self.experiments.dataset_changed)
-        self.applets = applets.AppletsDock(self, dataset_sub, dataset_ctl, loop=loop)
+        self.applets = applets.AppletsDock(self, dataset_sub, dataset_ctl, self.experiments, loop=loop)
         smgr.register(self.applets)
         atexit_register_coroutine(self.applets.stop, loop=loop)

View File

@@ -23,7 +23,8 @@ from sipyco.sync_struct import Subscriber
 from sipyco.broadcast import Receiver
 from sipyco import common_args, pyon
-from artiq.tools import scale_from_metadata, short_format, parse_arguments
+from artiq.tools import (scale_from_metadata, short_format, parse_arguments,
+                         parse_devarg_override)
 from artiq import __version__ as artiq_version
@@ -68,6 +69,8 @@ def get_argparser():
     parser_add.add_argument("-r", "--revision", default=None,
                             help="use a specific repository revision "
                                  "(defaults to head, ignored without -R)")
+    parser_add.add_argument("--devarg-override", default="",
+                            help="specify device arguments to override")
     parser_add.add_argument("--content", default=False,
                             action="store_true",
                             help="submit by content")
@@ -146,6 +149,7 @@ def _action_submit(remote, args):
         raise ValueError("Failed to parse run arguments") from err
     expid = {
+        "devarg_override": parse_devarg_override(args.devarg_override),
         "log_level": logging.WARNING + args.quiet*10 - args.verbose*10,
         "class_name": args.class_name,
        "arguments": arguments,

View File

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3
-import os, sys, logging, argparse
+import os, sys, io, tarfile, logging, argparse
 from sipyco import common_args
@@ -63,9 +63,16 @@ def main():
         core_name = exp.run.artiq_embedded.core_name
         core = getattr(exp_inst, core_name)
-        object_map, kernel_library, _, _ = \
+        object_map, main_kernel_library, _, _, subkernel_arg_types = \
             core.compile(exp.run, [exp_inst], {},
                 attribute_writeback=False, print_as_rpc=False)
+        subkernels = {}
+        for sid, subkernel_fn in object_map.subkernels().items():
+            destination, subkernel_library = core.compile_subkernel(
+                sid, subkernel_fn, object_map,
+                [exp_inst], subkernel_arg_types)
+            subkernels[sid] = (destination, subkernel_library)
     except CompileError as error:
         return
     finally:
@@ -77,12 +84,35 @@ def main():
         raise ValueError("Experiment must not use RPC")
     output = args.output
-    if output is None:
-        basename, ext = os.path.splitext(args.file)
-        output = "{}.elf".format(basename)
-    with open(output, "wb") as f:
-        f.write(kernel_library)
+    if not subkernels:
+        # just write the ELF file
+        if output is None:
+            basename, ext = os.path.splitext(args.file)
+            output = "{}.elf".format(basename)
+        with open(output, "wb") as f:
+            f.write(main_kernel_library)
+    else:
+        # combine them in a tar archive
+        if output is None:
+            basename, ext = os.path.splitext(args.file)
+            output = "{}.tar".format(basename)
+        with tarfile.open(output, "w:") as tar:
+            # write the main lib as "main.elf"
+            main_kernel_fileobj = io.BytesIO(main_kernel_library)
+            main_kernel_info = tarfile.TarInfo(name="main.elf")
+            main_kernel_info.size = len(main_kernel_library)
+            tar.addfile(main_kernel_info, fileobj=main_kernel_fileobj)
+            # subkernels as "<sid> <destination>.elf"
+            for sid, (destination, subkernel_library) in subkernels.items():
+                subkernel_fileobj = io.BytesIO(subkernel_library)
+                subkernel_info = tarfile.TarInfo(name="{} {}.elf".format(sid, destination))
+                subkernel_info.size = len(subkernel_library)
+                tar.addfile(subkernel_info, fileobj=subkernel_fileobj)
 if __name__ == "__main__":
     main()
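
The branch above writes the main kernel as "main.elf" and each subkernel as "<subkernel id> <destination>.elf" inside the archive. A small sketch for inspecting such an archive (the file name is illustrative):

import tarfile

with tarfile.open("experiment.tar", "r:") as tar:
    for entry in tar:
        if entry.name == "main.elf":
            print("main kernel:", entry.size, "bytes")
        else:
            # subkernel members are named "<sid> <destination>.elf"
            sid, dest = entry.name.removesuffix(".elf").split(" ")
            print("subkernel", int(sid), "for destination", int(dest), ":", entry.size, "bytes")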

View File

@@ -85,6 +85,18 @@ class MdiArea(QtWidgets.QMdiArea):
         self.pixmap = QtGui.QPixmap(os.path.join(
             artiq_dir, "gui", "logo_ver.svg"))
+        self.setActivationOrder(self.ActivationHistoryOrder)
+        self.tile = QtWidgets.QShortcut(
+            QtGui.QKeySequence('Ctrl+Shift+T'), self)
+        self.tile.activated.connect(
+            lambda: self.tileSubWindows())
+        self.cascade = QtWidgets.QShortcut(
+            QtGui.QKeySequence('Ctrl+Shift+C'), self)
+        self.cascade.activated.connect(
+            lambda: self.cascadeSubWindows())
     def paintEvent(self, event):
         QtWidgets.QMdiArea.paintEvent(self, event)
         painter = QtGui.QPainter(self.viewport())

View File

@@ -24,6 +24,27 @@ def get_cpu_target(description):
     else:
         raise NotImplementedError
+def get_num_leds(description):
+    drtio_role = description["drtio_role"]
+    target = description["target"]
+    hw_rev = description["hw_rev"]
+    kasli_board_leds = {
+        "v1.0": 4,
+        "v1.1": 6,
+        "v2.0": 3
+    }
+    if target == "kasli":
+        if hw_rev in ("v1.0", "v1.1") and drtio_role != "standalone":
+            # LEDs are used for DRTIO status on v1.0 and v1.1
+            return kasli_board_leds[hw_rev] - 3
+        return kasli_board_leds[hw_rev]
+    elif target == "kasli_soc":
+        return 2
+    else:
+        raise ValueError
 def process_header(output, description):
     print(textwrap.dedent("""
         # Autogenerated for the {variant} variant
@@ -34,7 +55,13 @@ def process_header(output, description):
             "type": "local",
             "module": "artiq.coredevice.core",
             "class": "Core",
-            "arguments": {{"host": core_addr, "ref_period": {ref_period}, "target": "{cpu_target}"}},
+            "arguments": {{
+                "host": core_addr,
+                "ref_period": {ref_period},
+                "analyzer_proxy": "core_analyzer",
+                "target": "{cpu_target}",
+                "satellite_cpu_targets": {{}}
+            }},
         }},
         "core_log": {{
             "type": "controller",
@@ -49,6 +76,13 @@ def process_header(output, description):
             "port": 1384,
             "command": "aqctl_moninj_proxy --port-proxy {{port_proxy}} --port-control {{port}} --bind {{bind}} " + core_addr
         }},
+        "core_analyzer": {{
+            "type": "controller",
+            "host": "::1",
+            "port_proxy": 1385,
+            "port": 1386,
+            "command": "aqctl_coreanalyzer_proxy --port-proxy {{port_proxy}} --port-control {{port}} --bind {{bind}} " + core_addr
+        }},
         "core_cache": {{
             "type": "local",
             "module": "artiq.coredevice.cache",
@@ -60,8 +94,6 @@ def process_header(output, description):
             "class": "CoreDMA"
         }},
-        "satellite_cpu_targets": {{}},
         "i2c_switch0": {{
             "type": "local",
             "module": "artiq.coredevice.i2c",
@@ -703,8 +735,8 @@ class PeripheralManager:
         processor = getattr(self, "process_"+str(peripheral["type"]))
         return processor(rtio_offset, peripheral)
-    def add_board_leds(self, rtio_offset, board_name=None):
-        for i in range(2):
+    def add_board_leds(self, rtio_offset, board_name=None, num_leds=2):
+        for i in range(num_leds):
             if board_name is None:
                 led_name = self.get_name("led")
             else:
@@ -718,7 +750,7 @@ class PeripheralManager:
             }}""",
                 name=led_name,
                 channel=rtio_offset+i)
-        return 2
+        return num_leds
 def split_drtio_eem(peripherals):
@@ -747,9 +779,10 @@ def process(output, primary_description, satellites):
     for peripheral in local_peripherals:
         n_channels = pm.process(rtio_offset, peripheral)
         rtio_offset += n_channels
-    if drtio_role == "standalone":
-        n_channels = pm.add_board_leds(rtio_offset)
-        rtio_offset += n_channels
+    num_leds = get_num_leds(primary_description)
+    pm.add_board_leds(rtio_offset, num_leds=num_leds)
+    rtio_offset += num_leds
     for destination, description in satellites:
         if description["drtio_role"] != "satellite":
@@ -760,7 +793,7 @@ def process(output, primary_description, satellites):
         print(textwrap.dedent("""
             # DEST#{dest} peripherals
-            device_db["satellite_cpu_targets"][{dest}] = \"{target}\"""").format(
+            device_db["core"]["arguments"]["satellite_cpu_targets"][{dest}] = \"{target}\"""").format(
             dest=destination,
             target=get_cpu_target(description)),
             file=output)
@@ -768,12 +801,26 @@ def process(output, primary_description, satellites):
         for peripheral in peripherals:
             n_channels = pm.process(rtio_offset, peripheral)
             rtio_offset += n_channels
-    for peripheral in drtio_peripherals:
+        num_leds = get_num_leds(description)
+        pm.add_board_leds(rtio_offset, num_leds=num_leds)
+        rtio_offset += num_leds
+    for i, peripheral in enumerate(drtio_peripherals):
+        if not("drtio_destination" in peripheral):
+            if primary_description["target"] == "kasli":
+                if primary_description["hw_rev"] in ("v1.0", "v1.1"):
+                    peripheral["drtio_destination"] = 3 + i
+                else:
+                    peripheral["drtio_destination"] = 4 + i
+            elif primary_description["target"] == "kasli_soc":
+                peripheral["drtio_destination"] = 5 + i
+            else:
+                raise NotImplementedError
         print(textwrap.dedent("""
             # DEST#{dest} peripherals
-            device_db["satellite_cpu_targets"][{dest}] = \"{target}\"""").format(
+            device_db["core"]["arguments"]["satellite_cpu_targets"][{dest}] = \"{target}\"""").format(
             dest=peripheral["drtio_destination"],
             target=get_cpu_target(peripheral)),
             file=output)
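
With the template changes above, the generated device_db gains a core analyzer proxy controller, and satellite CPU targets move under the core device arguments. A trimmed, illustrative excerpt of the generated entries (the address, ref_period and CPU target are example values):

core_addr = "192.168.1.75"

device_db = {
    "core": {
        "type": "local",
        "module": "artiq.coredevice.core",
        "class": "Core",
        "arguments": {
            "host": core_addr,
            "ref_period": 1e-9,
            "analyzer_proxy": "core_analyzer",
            "target": "rv32g",
            "satellite_cpu_targets": {}
        },
    },
    "core_analyzer": {
        "type": "controller",
        "host": "::1",
        "port_proxy": 1385,
        "port": 1386,
        "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
    },
}

# per-destination satellite targets are then appended by process(), e.g.:
device_db["core"]["arguments"]["satellite_cpu_targets"][1] = "rv32g"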

View File

@@ -4,6 +4,7 @@
 import argparse
 import sys
+import tarfile
 from operator import itemgetter
 import logging
 from collections import defaultdict
@@ -86,6 +87,20 @@ class LLVMBitcodeRunner(FileRunner):
         return self.target.link([self.target.assemble(llmodule)])
+class TARRunner(FileRunner):
+    def compile(self):
+        with tarfile.open(self.file, "r:") as tar:
+            for entry in tar:
+                if entry.name == 'main.elf':
+                    main_lib = tar.extractfile(entry).read()
+                else:
+                    subkernel_name = entry.name.removesuffix(".elf")
+                    sid, dest = tuple(map(lambda x: int(x), subkernel_name.split(" ")))
+                    subkernel_lib = tar.extractfile(entry).read()
+                    self.core.comm.upload_subkernel(subkernel_lib, sid, dest)
+        return main_lib
 class DummyScheduler:
     def __init__(self):
         self.rid = 0
@@ -156,6 +171,7 @@ def _build_experiment(device_mgr, dataset_mgr, args):
     argument_mgr = ProcessArgumentManager(arguments)
     managers = (device_mgr, dataset_mgr, argument_mgr, {})
     if hasattr(args, "file"):
+        is_tar = tarfile.is_tarfile(args.file)
         is_elf = args.file.endswith(".elf")
         is_ll = args.file.endswith(".ll")
         is_bc = args.file.endswith(".bc")
@@ -165,7 +181,9 @@ def _build_experiment(device_mgr, dataset_mgr, args):
         if args.class_name:
             raise ValueError("class-name not supported "
                              "for precompiled kernels")
-        if is_elf:
+        if is_tar:
+            return TARRunner(managers, file=args.file)
+        elif is_elf:
             return ELFRunner(managers, file=args.file)
         elif is_ll:
             return LLVMIRRunner(managers, file=args.file)
@@ -204,6 +222,7 @@ def run(with_file=False):
         exp_inst = _build_experiment(device_mgr, dataset_mgr, args)
         exp_inst.prepare()
         exp_inst.run()
+        device_mgr.notify_run_end()
         exp_inst.analyze()
     except CompileError as error:
         return

View File

@@ -804,6 +804,7 @@ def main():
         experiment = SinaraTester((device_mgr, None, None, None))
         experiment.prepare()
         experiment.run(tests)
+        device_mgr.notify_run_end()
         experiment.analyze()
     finally:
         device_mgr.close_devices()

View File

@@ -17,12 +17,11 @@ class RTController(Module, AutoCSR):
         self.sync += rt_packet.reset.eq(self.reset.storage)
-        set_time_stb = Signal()
         self.sync += [
-            If(rt_packet.set_time_stb, set_time_stb.eq(0)),
-            If(self.set_time.re, set_time_stb.eq(1))
+            If(rt_packet.set_time_ack, rt_packet.set_time_stb.eq(0)),
+            If(self.set_time.re, rt_packet.set_time_stb.eq(1))
         ]
-        self.comb += self.set_time.w.eq(set_time_stb)
+        self.comb += self.set_time.w.eq(rt_packet.set_time_stb)
         errors = [
             (rt_packet.err_unknown_packet_type, "rtio_rx", None, None),

View File

@@ -18,7 +18,7 @@ class GTPSingle(Module):
         # # #
-        self.stable_clkin = Signal()
+        self.clk_path_ready = Signal()
         self.txenable = Signal()
         self.submodules.encoder = encoder = Encoder(2, True)
         self.submodules.decoders = decoders = [ClockDomainsRenamer("rtio_rx")(
@@ -40,7 +40,7 @@ class GTPSingle(Module):
         self.submodules += rx_init
         self.comb += [
-            tx_init.stable_clkin.eq(self.stable_clkin),
+            tx_init.clk_path_ready.eq(self.clk_path_ready),
             qpll_channel.reset.eq(tx_init.pllreset),
             tx_init.plllock.eq(qpll_channel.lock)
         ]
@@ -715,7 +715,7 @@ class GTP(Module, TransceiverInterface):
     def __init__(self, qpll_channel, data_pads, sys_clk_freq, rtio_clk_freq, master=0):
         self.nchannels = nchannels = len(data_pads)
         self.gtps = []
+        self.clk_path_ready = Signal()
         # # #
         channel_interfaces = []
@@ -736,7 +736,7 @@ class GTP(Module, TransceiverInterface):
         TransceiverInterface.__init__(self, channel_interfaces)
         for n, gtp in enumerate(self.gtps):
             self.comb += [
-                gtp.stable_clkin.eq(self.stable_clkin.storage),
+                gtp.clk_path_ready.eq(self.clk_path_ready),
                 gtp.txenable.eq(self.txenable.storage[n])
             ]

View File

@@ -10,7 +10,7 @@ __all__ = ["GTPTXInit", "GTPRXInit"]
 class GTPTXInit(Module):
     def __init__(self, sys_clk_freq, mode="single"):
-        self.stable_clkin = Signal()
+        self.clk_path_ready = Signal()
         self.done = Signal()
         self.restart = Signal()
@@ -87,7 +87,7 @@ class GTPTXInit(Module):
         startup_fsm.act("PLL_RESET",
             self.pllreset.eq(1),
             pll_reset_timer.wait.eq(1),
-            If(pll_reset_timer.done & self.stable_clkin,
+            If(pll_reset_timer.done & self.clk_path_ready,
                 NextState("GTP_RESET")
             )
         )

View File

@@ -279,14 +279,13 @@ class GTX(Module, TransceiverInterface):
         self.nchannels = nchannels = len(pads)
         self.gtxs = []
         self.rtio_clk_freq = clk_freq
+        self.clk_path_ready = Signal()
         # # #
         refclk = Signal()
-        clk_enable = Signal()
         self.specials += Instance("IBUFDS_GTE2",
-            i_CEB=~clk_enable,
+            i_CEB=0,
             i_I=clock_pads.p,
             i_IB=clock_pads.n,
             o_O=refclk,
@@ -315,14 +314,10 @@ class GTX(Module, TransceiverInterface):
         for n, gtx in enumerate(self.gtxs):
             self.comb += [
                 gtx.txenable.eq(self.txenable.storage[n]),
-                gtx.tx_init.stable_clkin.eq(clk_enable)
+                gtx.tx_init.clk_path_ready.eq(self.clk_path_ready)
             ]
             # rx_init is in SYS domain, rather than bootstrap
-            self.specials += MultiReg(clk_enable, gtx.rx_init.stable_clkin)
-        # stable_clkin resets after reboot since it's in SYS domain
-        # still need to keep clk_enable high after this
-        self.sync.bootstrap += clk_enable.eq(self.stable_clkin.storage | self.gtxs[0].tx_init.cplllock)
+            self.specials += MultiReg(self.clk_path_ready, gtx.rx_init.clk_path_ready)
         # Connect slave i's `rtio_rx` clock to `rtio_rxi` clock
         for i in range(nchannels):

View File

@@ -16,7 +16,7 @@ class GTXInit(Module):
         assert mode in ["single", "master", "slave"]
         self.mode = mode
-        self.stable_clkin = Signal()
+        self.clk_path_ready = Signal()
         self.done = Signal()
         self.restart = Signal()
@@ -110,7 +110,7 @@ class GTXInit(Module):
         startup_fsm.act("INITIAL",
             startup_timer.wait.eq(1),
-            If(startup_timer.done & self.stable_clkin, NextState("RESET_PLL"))
+            If(startup_timer.done & self.clk_path_ready, NextState("RESET_PLL"))
         )
         startup_fsm.act("RESET_PLL",
             gtXxreset.eq(1),

View File

@@ -141,7 +141,7 @@ def peripheral_shuttler(module, peripheral, **kwargs):
         port, port_aux = peripheral["ports"]
     else:
         raise ValueError("wrong number of ports")
-    eem.Shuttler.add_std(module, port, port_aux)
+    eem.Shuttler.add_std(module, port, port_aux, **kwargs)
 peripheral_processors = {
     "dio": peripheral_dio,

View File

@ -1,6 +1,8 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
import argparse import argparse
import logging
from packaging.version import Version
from migen import * from migen import *
from migen.genlib.resetsync import AsyncResetSynchronizer from migen.genlib.resetsync import AsyncResetSynchronizer
@ -14,16 +16,20 @@ from misoc.targets.kasli import (
BaseSoC, MiniSoC, soc_kasli_args, soc_kasli_argdict) BaseSoC, MiniSoC, soc_kasli_args, soc_kasli_argdict)
from misoc.integration.builder import builder_args, builder_argdict from misoc.integration.builder import builder_args, builder_argdict
from artiq import __version__ as artiq_version
from artiq.gateware.amp import AMPSoC from artiq.gateware.amp import AMPSoC
from artiq.gateware import rtio from artiq.gateware import rtio
from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_7series, edge_counter from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_7series, edge_counter
from artiq.gateware.rtio.xilinx_clocking import fix_serdes_timing_path from artiq.gateware.rtio.xilinx_clocking import fix_serdes_timing_path
from artiq.gateware import eem from artiq.gateware import rtio, eem, eem_7series
from artiq.gateware.drtio.transceiver import gtp_7series, eem_serdes from artiq.gateware.drtio.transceiver import gtp_7series, eem_serdes
from artiq.gateware.drtio.siphaser import SiPhaser7Series from artiq.gateware.drtio.siphaser import SiPhaser7Series
from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer
from artiq.gateware.drtio import * from artiq.gateware.drtio import *
from artiq.build_soc import * from artiq.build_soc import *
from artiq.coredevice import jsondesc
logger = logging.getLogger(__name__)
class SMAClkinForward(Module): class SMAClkinForward(Module):
@ -130,89 +136,6 @@ class StandaloneBase(MiniSoC, AMPSoC):
self.csr_devices.append("rtio_analyzer") self.csr_devices.append("rtio_analyzer")
class Tester(StandaloneBase):
"""
Configuration for CI tests. Contains the maximum number of different EEMs.
"""
def __init__(self, hw_rev=None, dds=None, **kwargs):
if hw_rev is None:
hw_rev = "v2.0"
if dds is None:
dds = "ad9910"
StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
# self.config["SI5324_EXT_REF"] = None
self.config["RTIO_FREQUENCY"] = "125.0"
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
self.rtio_channels = []
eem.DIO.add_std(self, 5,
ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X,
edge_counter_cls=edge_counter.SimpleEdgeCounter)
eem.Urukul.add_std(self, 0, 1, ttl_serdes_7series.Output_8X, dds,
ttl_simple.ClockGen)
eem.Sampler.add_std(self, 3, 2, ttl_serdes_7series.Output_8X)
eem.Zotino.add_std(self, 4, ttl_serdes_7series.Output_8X)
if hw_rev in ("v1.0", "v1.1"):
for i in (1, 2):
sfp_ctl = self.platform.request("sfp_ctl", i)
phy = ttl_simple.Output(sfp_ctl.led)
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels)
class SUServo(StandaloneBase):
"""
SUServo (Sampler-Urukul-Servo) extension variant configuration
"""
def __init__(self, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = "v2.0"
StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
# self.config["SI5324_EXT_REF"] = None
self.config["RTIO_FREQUENCY"] = "125.0"
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
self.rtio_channels = []
# EEM0, EEM1: DIO
eem.DIO.add_std(self, 0,
ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X)
eem.DIO.add_std(self, 1,
ttl_serdes_7series.Output_8X, ttl_serdes_7series.Output_8X)
# EEM3/2: Sampler, EEM5/4: Urukul, EEM7/6: Urukul
eem.SUServo.add_std(self,
eems_sampler=(3, 2),
eems_urukul=[[5, 4], [7, 6]])
for i in (1, 2):
sfp_ctl = self.platform.request("sfp_ctl", i)
phy = ttl_simple.Output(sfp_ctl.led)
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels)
pads = self.platform.lookup_request("sampler3_adc_data_p")
self.platform.add_false_path_constraints(
pads.clkout, self.crg.cd_sys.clk)
class MasterBase(MiniSoC, AMPSoC): class MasterBase(MiniSoC, AMPSoC):
mem_map = { mem_map = {
"cri_con": 0x10000000, "cri_con": 0x10000000,
@ -326,7 +249,8 @@ class MasterBase(MiniSoC, AMPSoC):
txout_buf = Signal() txout_buf = Signal()
self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf) self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf)
self.crg.configure(txout_buf, clk_sw=gtp.tx_init.done) self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtp.tx_init.done)
self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap")
platform.add_period_constraint(gtp.txoutclk, rtio_clk_period) platform.add_period_constraint(gtp.txoutclk, rtio_clk_period)
platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period) platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period)
@ -596,7 +520,8 @@ class SatelliteBase(BaseSoC, AMPSoC):
gtp = self.gt_drtio.gtps[0] gtp = self.gt_drtio.gtps[0]
txout_buf = Signal() txout_buf = Signal()
self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf) self.specials += Instance("BUFG", i_I=gtp.txoutclk, o_O=txout_buf)
self.crg.configure(txout_buf, clk_sw=gtp.tx_init.done) self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtp.tx_init.done)
self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap")
platform.add_period_constraint(gtp.txoutclk, rtio_clk_period) platform.add_period_constraint(gtp.txoutclk, rtio_clk_period)
platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period) platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period)
@ -638,77 +563,184 @@ class SatelliteBase(BaseSoC, AMPSoC):
self.get_native_sdram_if(), cpu_dw=self.cpu_dw) self.get_native_sdram_if(), cpu_dw=self.cpu_dw)
self.csr_devices.append("rtio_analyzer") self.csr_devices.append("rtio_analyzer")
class GenericStandalone(StandaloneBase):
class Master(MasterBase): def __init__(self, description, hw_rev=None,**kwargs):
def __init__(self, hw_rev=None, **kwargs):
if hw_rev is None: if hw_rev is None:
hw_rev = "v2.0" hw_rev = description["hw_rev"]
MasterBase.__init__(self, hw_rev=hw_rev, **kwargs) self.class_name_override = description["variant"]
StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
self.config["RTIO_FREQUENCY"] = "{:.1f}".format(description["rtio_frequency"]/1e6)
if "ext_ref_frequency" in description:
self.config["SI5324_EXT_REF"] = None
self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format(
description["ext_ref_frequency"]/1e6)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = [] self.rtio_channels = []
eem_7series.add_peripherals(self, description["peripherals"])
phy = ttl_simple.Output(self.platform.request("user_led", 0)) if hw_rev in ("v1.0", "v1.1"):
self.submodules += phy for i in (1, 2):
self.rtio_channels.append(rtio.Channel.from_phy(phy)) print("SFP LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
# matches Tester EEM numbers sfp_ctl = self.platform.request("sfp_ctl", i)
eem.DIO.add_std(self, 5, phy = ttl_simple.Output(sfp_ctl.led)
ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X) self.submodules += phy
eem.Urukul.add_std(self, 0, 1, ttl_serdes_7series.Output_8X) self.rtio_channels.append(rtio.Channel.from_phy(phy))
if hw_rev in ("v1.1", "v2.0"):
for i in range(3):
print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
phy = ttl_simple.Output(self.platform.request("user_led", i))
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels) self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel()) self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels) self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.crg.cd_sys.clk, getattr(self, grabber).deserializer.cd_cl.clk)
class Satellite(SatelliteBase):
def __init__(self, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = "v2.0"
SatelliteBase.__init__(self, hw_rev=hw_rev, **kwargs)
class GenericMaster(MasterBase):
def __init__(self, description, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = description["hw_rev"]
self.class_name_override = description["variant"]
has_drtio_over_eem = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"])
MasterBase.__init__(self,
hw_rev=hw_rev,
rtio_clk_freq=description["rtio_frequency"],
enable_sata=description["enable_sata_drtio"],
enable_sys5x=has_drtio_over_eem,
**kwargs)
if "ext_ref_frequency" in description:
self.config["SI5324_EXT_REF"] = None
self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format(
description["ext_ref_frequency"]/1e6)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
if has_drtio_over_eem:
self.eem_drtio_channels = []
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = []
phy = ttl_simple.Output(self.platform.request("user_led", 0))
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
# matches Tester EEM numbers
eem.DIO.add_std(self, 5,
ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X)
self.add_rtio(self.rtio_channels)
VARIANTS = {cls.__name__.lower(): cls for cls in [Tester, SUServo, Master, Satellite]}
eem_7series.add_peripherals(self, description["peripherals"])
if hw_rev in ("v1.1", "v2.0"):
for i in range(3):
print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
phy = ttl_simple.Output(self.platform.request("user_led", i))
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
if has_drtio_over_eem:
self.add_eem_drtio(self.eem_drtio_channels)
self.add_drtio_cpuif_groups()
self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk)
class GenericSatellite(SatelliteBase):
def __init__(self, description, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = description["hw_rev"]
self.class_name_override = description["variant"]
SatelliteBase.__init__(self,
hw_rev=hw_rev,
rtio_clk_freq=description["rtio_frequency"],
enable_sata=description["enable_sata_drtio"],
**kwargs)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = []
eem_7series.add_peripherals(self, description["peripherals"])
if hw_rev in ("v1.1", "v2.0"):
for i in range(3):
print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
phy = ttl_simple.Output(self.platform.request("user_led", i))
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk)
def main():
parser = argparse.ArgumentParser(
description="ARTIQ device binary builder for Kasli systems")
description="ARTIQ device binary builder for generic Kasli systems")
builder_args(parser)
soc_kasli_args(parser)
parser.set_defaults(output_dir="artiq_kasli")
parser.add_argument("-V", "--variant", default="tester",
help="variant: {} (default: %(default)s)".format(
"/".join(sorted(VARIANTS.keys()))))
parser.add_argument("--tester-dds", default=None,
help="Tester variant DDS type: ad9910/ad9912 "
"(default: ad9910)")
parser.add_argument("description", metavar="DESCRIPTION",
help="JSON system description file")
parser.add_argument("--gateware-identifier-str", default=None,
help="Override ROM identifier")
args = parser.parse_args()
argdict = dict()
argdict["gateware_identifier_str"] = args.gateware_identifier_str
argdict["dds"] = args.tester_dds
variant = args.variant.lower()
try:
cls = VARIANTS[variant]
except KeyError:
raise SystemExit("Invalid variant (-V/--variant)")
soc = cls(**soc_kasli_argdict(args), **argdict)
description = jsondesc.load(args.description)
min_artiq_version = description["min_artiq_version"]
if Version(artiq_version) < Version(min_artiq_version):
logger.warning("ARTIQ version mismatch: current %s < %s minimum",
artiq_version, min_artiq_version)
if description["target"] != "kasli":
raise ValueError("Description is for a different target")
if description["drtio_role"] == "standalone":
cls = GenericStandalone
elif description["drtio_role"] == "master":
cls = GenericMaster
elif description["drtio_role"] == "satellite":
cls = GenericSatellite
else:
raise ValueError("Invalid DRTIO role")
has_shuttler = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"])
if has_shuttler and (description["drtio_role"] == "standalone"):
raise ValueError("Shuttler requires DRTIO, please switch role to master")
soc = cls(description, gateware_identifier_str=args.gateware_identifier_str, **soc_kasli_argdict(args))
args.variant = description["variant"]
build_artiq_soc(soc, builder_argdict(args))
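
For orientation, the fields consumed by the new main() above (all of them appear in the code of this diff; the concrete values below are only an illustrative sketch, not a shipped system description) look roughly like this, written as a Python dict for brevity:

description = {
    "target": "kasli",
    "min_artiq_version": "8.0",        # illustrative value
    "hw_rev": "v2.0",
    "variant": "example",
    "drtio_role": "standalone",        # or "master" / "satellite"
    "rtio_frequency": 125e6,
    "enable_sata_drtio": False,
    "sed_lanes": 8,
    "peripherals": [],
}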

View File

@ -1,187 +0,0 @@
#!/usr/bin/env python3
import argparse
import logging
from packaging.version import Version
from misoc.integration.builder import builder_args, builder_argdict
from misoc.targets.kasli import soc_kasli_args, soc_kasli_argdict
from artiq import __version__ as artiq_version
from artiq.coredevice import jsondesc
from artiq.gateware import rtio, eem_7series
from artiq.gateware.rtio.phy import ttl_simple
from artiq.gateware.targets.kasli import StandaloneBase, MasterBase, SatelliteBase
from artiq.build_soc import *
logger = logging.getLogger(__name__)
class GenericStandalone(StandaloneBase):
def __init__(self, description, hw_rev=None,**kwargs):
if hw_rev is None:
hw_rev = description["hw_rev"]
self.class_name_override = description["variant"]
StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
self.config["RTIO_FREQUENCY"] = "{:.1f}".format(description["rtio_frequency"]/1e6)
if "ext_ref_frequency" in description:
self.config["SI5324_EXT_REF"] = None
self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format(
description["ext_ref_frequency"]/1e6)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = []
eem_7series.add_peripherals(self, description["peripherals"])
if hw_rev in ("v1.0", "v1.1"):
for i in (1, 2):
print("SFP LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
sfp_ctl = self.platform.request("sfp_ctl", i)
phy = ttl_simple.Output(sfp_ctl.led)
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
if hw_rev == "v2.0":
for i in (1, 2):
print("USER LED at RTIO channel 0x{:06x}".format(len(self.rtio_channels)))
phy = ttl_simple.Output(self.platform.request("user_led", i))
self.submodules += phy
self.rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.crg.cd_sys.clk, getattr(self, grabber).deserializer.cd_cl.clk)
class GenericMaster(MasterBase):
def __init__(self, description, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = description["hw_rev"]
self.class_name_override = description["variant"]
has_drtio_over_eem = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"])
MasterBase.__init__(self,
hw_rev=hw_rev,
rtio_clk_freq=description["rtio_frequency"],
enable_sata=description["enable_sata_drtio"],
enable_sys5x=has_drtio_over_eem,
**kwargs)
if "ext_ref_frequency" in description:
self.config["SI5324_EXT_REF"] = None
self.config["EXT_REF_FREQUENCY"] = "{:.1f}".format(
description["ext_ref_frequency"]/1e6)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
if has_drtio_over_eem:
self.eem_drtio_channels = []
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = []
eem_7series.add_peripherals(self, description["peripherals"])
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
if has_drtio_over_eem:
self.add_eem_drtio(self.eem_drtio_channels)
self.add_drtio_cpuif_groups()
self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk)
class GenericSatellite(SatelliteBase):
def __init__(self, description, hw_rev=None, **kwargs):
if hw_rev is None:
hw_rev = description["hw_rev"]
self.class_name_override = description["variant"]
SatelliteBase.__init__(self,
hw_rev=hw_rev,
rtio_clk_freq=description["rtio_frequency"],
enable_sata=description["enable_sata_drtio"],
**kwargs)
if hw_rev == "v1.0":
# EEM clock fan-out from Si5324, not MMCX
self.comb += self.platform.request("clk_sel").eq(1)
has_grabber = any(peripheral["type"] == "grabber" for peripheral in description["peripherals"])
if has_grabber:
self.grabber_csr_group = []
self.rtio_channels = []
eem_7series.add_peripherals(self, description["peripherals"])
self.config["HAS_RTIO_LOG"] = None
self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
self.rtio_channels.append(rtio.LogChannel())
self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
if has_grabber:
self.config["HAS_GRABBER"] = None
self.add_csr_group("grabber", self.grabber_csr_group)
for grabber in self.grabber_csr_group:
self.platform.add_false_path_constraints(
self.gt_drtio.gtps[0].txoutclk, getattr(self, grabber).deserializer.cd_cl.clk)
def main():
parser = argparse.ArgumentParser(
description="ARTIQ device binary builder for generic Kasli systems")
builder_args(parser)
soc_kasli_args(parser)
parser.set_defaults(output_dir="artiq_kasli")
parser.add_argument("description", metavar="DESCRIPTION",
help="JSON system description file")
parser.add_argument("--gateware-identifier-str", default=None,
help="Override ROM identifier")
args = parser.parse_args()
description = jsondesc.load(args.description)
min_artiq_version = description["min_artiq_version"]
if Version(artiq_version) < Version(min_artiq_version):
logger.warning("ARTIQ version mismatch: current %s < %s minimum",
artiq_version, min_artiq_version)
if description["target"] != "kasli":
raise ValueError("Description is for a different target")
if description["drtio_role"] == "standalone":
cls = GenericStandalone
elif description["drtio_role"] == "master":
cls = GenericMaster
elif description["drtio_role"] == "satellite":
cls = GenericSatellite
else:
raise ValueError("Invalid DRTIO role")
has_shuttler = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"])
if has_shuttler and (description["drtio_role"] == "standalone"):
raise ValueError("Shuttler requires DRTIO, please switch role to master")
soc = cls(description, gateware_identifier_str=args.gateware_identifier_str, **soc_kasli_argdict(args))
args.variant = description["variant"]
build_artiq_soc(soc, builder_argdict(args))
if __name__ == "__main__":
main()

View File

@ -273,7 +273,8 @@ class _MasterBase(MiniSoC, AMPSoC):
txout_buf = Signal()
self.specials += Instance("BUFG", i_I=gtx0.txoutclk, o_O=txout_buf)
self.crg.configure(txout_buf, clk_sw=gtx0.tx_init.done)
self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtx0.tx_init.done)
self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap")
self.comb += [
platform.request("user_sma_clock_p").eq(ClockSignal("rtio_rx0")),
@ -440,7 +441,8 @@ class _SatelliteBase(BaseSoC, AMPSoC):
txout_buf = Signal()
self.specials += Instance("BUFG", i_I=gtx0.txoutclk, o_O=txout_buf)
self.crg.configure(txout_buf, clk_sw=gtx0.tx_init.done)
self.crg.configure(txout_buf, clk_sw=self.gt_drtio.stable_clkin.storage, ext_async_rst=self.crg.clk_sw_fsm.o_clk_sw & ~gtx0.tx_init.done)
self.specials += MultiReg(self.crg.clk_sw_fsm.o_clk_sw & self.crg.mmcm_locked, self.gt_drtio.clk_path_ready, odomain="bootstrap")
self.comb += [
platform.request("user_sma_clock_p").eq(ClockSignal("rtio_rx0")),

View File

@ -1,7 +1,7 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg <svg
sodipodi:docname="logo_ver.svg" sodipodi:docname="logo_ver.svg"
inkscape:version="1.2.1 (9c6d41e410, 2022-07-14)" inkscape:version="1.3 (0e150ed6c4, 2023-07-21)"
id="svg2" id="svg2"
xml:space="preserve" xml:space="preserve"
enable-background="new 0 0 800 800" enable-background="new 0 0 800 800"
@ -27,9 +27,9 @@
inkscape:window-maximized="1"
inkscape:window-y="32"
inkscape:window-x="0"
inkscape:cy="136.44068"
inkscape:cy="350.81934"
inkscape:cx="-147.0339"
inkscape:cx="284.15356"
inkscape:zoom="1.18"
inkscape:zoom="13.350176"
fit-margin-bottom="0"
fit-margin-right="0"
fit-margin-left="0"
@ -158,15 +158,8 @@
transform="translate(-1.7346398,0.84745763)" transform="translate(-1.7346398,0.84745763)"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:96px;line-height:25px;font-family:'Droid Sans Thai';-inkscape-font-specification:'Droid Sans Thai, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none" style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:96px;line-height:25px;font-family:'Droid Sans Thai';-inkscape-font-specification:'Droid Sans Thai, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
id="text861" id="text861"
aria-label="7"><text aria-label="7"><path
xml:space="preserve" style="font-size:53.3333px;line-height:13.8889px;font-family:Intro;-inkscape-font-specification:Intro;fill:#ffffff"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:53.3333px;line-height:13.8889px;font-family:'Droid Sans Thai';-inkscape-font-specification:'Droid Sans Thai, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1;stroke:none" d="m 325.20597,358.80118 c 5.38667,-5.86667 2.18667,-16.74666 -9.43999,-16.74666 -11.62666,0 -14.87999,10.77333 -9.44,16.69332 -8.90666,6.4 -5.75999,21.91999 9.44,21.91999 15.09332,0 18.23999,-15.41332 9.43999,-21.86665 z m -9.43999,13.81332 c -6.4,0 -6.4,-9.27999 0,-9.27999 6.45333,0 6.45333,9.27999 0,9.27999 z m 0,-17.06665 c -4.64,0 -4.64,-5.97333 0,-5.97333 4.58666,0 4.58666,5.97333 0,5.97333 z"
x="299.60599" id="text248"
y="380.02783" aria-label="8" /></g>&#10;</g></svg>
id="text248"><tspan
sodipodi:role="line"
id="tspan246"
x="299.60599"
y="380.02783"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:53.3333px;font-family:Intro;-inkscape-font-specification:Intro;fill:#ffffff">8</tspan></text></g>
</g></svg>

Image size: 16 KiB before, 15 KiB after.

View File

@ -7,7 +7,7 @@ from functools import wraps
import numpy
__all__ = ["kernel", "portable", "rpc", "syscall", "host_only",
__all__ = ["kernel", "portable", "rpc", "subkernel", "syscall", "host_only",
"kernel_from_string", "set_time_manager", "set_watchdog_factory",
"TerminationRequested"]
@ -21,7 +21,7 @@ __all__.extend(kernel_globals)
_ARTIQEmbeddedInfo = namedtuple("_ARTIQEmbeddedInfo",
"core_name portable function syscall forbidden flags")
"core_name portable function syscall forbidden destination flags")
def kernel(arg=None, flags={}):
"""
@ -54,7 +54,7 @@ def kernel(arg=None, flags={}):
return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
run_on_core.artiq_embedded = _ARTIQEmbeddedInfo(
core_name=arg, portable=False, function=function, syscall=None,
forbidden=False, flags=set(flags))
forbidden=False, destination=None, flags=set(flags))
return run_on_core
return inner_decorator
elif arg is None:
@ -64,6 +64,50 @@ def kernel(arg=None, flags={}):
else:
return kernel("core", flags)(arg)
def subkernel(arg=None, destination=0, flags={}):
"""
This decorator marks an object's method or function for execution on a satellite device.
Destination must be given, and it must be between 1 and 255 (inclusive).
Subkernels behave similarly to kernels, with a few key differences:
- they are started from main kernels,
- they do not support RPCs, or running subsequent subkernels on other devices,
- but they can call other kernels or subkernels with the same destination.
Subkernels can accept arguments and return values. However, they must be fully
annotated with ARTIQ types.
To call a subkernel, call it like a normal function.
To wait until a subkernel finishes, call ``subkernel_await(subkernel, [timeout])``.
The timeout parameter is optional, and by default is equal to 10000 (milliseconds).
This time can be adjusted for subkernels that take a long time to execute.
The compiled subkernel is copied to the satellites, but not loaded into the kernel core
until it is called. For bigger subkernels it may therefore take some time before they
actually start running. To help with that, subkernels can be preloaded with the
``subkernel_preload(subkernel)`` function. A call to a preloaded subkernel
will take less time, but only one subkernel can be preloaded at a time.
"""
if isinstance(arg, str):
def inner_decorator(function):
@wraps(function)
def run_subkernel(self, *k_args, **k_kwargs):
sid = getattr(self, arg).prepare_subkernel(destination, run_subkernel, ((self,) + k_args), k_kwargs)
getattr(self, arg).run_subkernel(sid)
run_subkernel.artiq_embedded = _ARTIQEmbeddedInfo(
core_name=arg, portable=False, function=function, syscall=None,
forbidden=False, destination=destination, flags=set(flags))
return run_subkernel
return inner_decorator
elif arg is None:
def inner_decorator(function):
return subkernel(function, destination, flags)
return inner_decorator
else:
return subkernel("core", destination, flags)(arg)
def portable(arg=None, flags={}):
"""
This decorator marks a function for execution on the same device as its
@ -84,7 +128,7 @@ def portable(arg=None, flags={}):
else:
arg.artiq_embedded = \
_ARTIQEmbeddedInfo(core_name=None, portable=True, function=arg, syscall=None,
forbidden=False, flags=set(flags))
forbidden=False, destination=None, flags=set(flags))
return arg
def rpc(arg=None, flags={}):
@ -100,7 +144,7 @@ def rpc(arg=None, flags={}):
else:
arg.artiq_embedded = \
_ARTIQEmbeddedInfo(core_name=None, portable=False, function=arg, syscall=None,
forbidden=False, flags=set(flags))
forbidden=False, destination=None, flags=set(flags))
return arg
def syscall(arg=None, flags={}):
@ -118,7 +162,7 @@ def syscall(arg=None, flags={}):
def inner_decorator(function):
function.artiq_embedded = \
_ARTIQEmbeddedInfo(core_name=None, portable=False, function=None,
syscall=arg, forbidden=False,
syscall=arg, forbidden=False, destination=None,
flags=set(flags))
return function
return inner_decorator
@ -136,7 +180,7 @@ def host_only(function):
""" """
function.artiq_embedded = \ function.artiq_embedded = \
_ARTIQEmbeddedInfo(core_name=None, portable=False, function=None, syscall=None, _ARTIQEmbeddedInfo(core_name=None, portable=False, function=None, syscall=None,
forbidden=True, flags={}) forbidden=True, destination=None, flags={})
return function return function

View File

@ -36,6 +36,9 @@ class DeviceDB:
desc = self.data.raw_view[desc]
return desc
def get_satellite_cpu_target(self, destination):
return self.data.raw_view["satellite_cpu_targets"][destination]
class DatasetDB(TaskObject):
def __init__(self, persist_file, autosave_period=30):

View File

@ -19,12 +19,13 @@ class DummyDevice:
pass
def _create_device(desc, device_mgr):
def _create_device(desc, device_mgr, argument_overrides):
ty = desc["type"]
if ty == "local":
module = importlib.import_module(desc["module"])
device_class = getattr(module, desc["class"])
return device_class(device_mgr, **desc.get("arguments", {}))
arguments = desc.get("arguments", {}) | argument_overrides
return device_class(device_mgr, **arguments)
elif ty == "controller": elif ty == "controller":
if desc.get("best_effort", False): if desc.get("best_effort", False):
cls = BestEffortClient cls = BestEffortClient
@ -60,6 +61,7 @@ class DeviceManager:
self.ddb = ddb
self.virtual_devices = virtual_devices
self.active_devices = []
self.devarg_override = {}
def get_device_db(self):
"""Returns the full contents of the device database."""
@ -85,13 +87,20 @@ class DeviceManager:
return existing_dev
try:
dev = _create_device(desc, self)
dev = _create_device(desc, self, self.devarg_override.get(name, {}))
except Exception as e:
raise DeviceError("Failed to create device '{}'"
.format(name)) from e
self.active_devices.append((desc, dev))
return dev
def notify_run_end(self):
"""Sends a "end of Experiment run stage" notification to
all active devices."""
for _desc, dev in self.active_devices:
if hasattr(dev, "notify_run_end"):
dev.notify_run_end()
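
As a hedged sketch (the class and argument names here are illustrative, not part of this diff), a driver opts into this notification simply by exposing a notify_run_end method:

class ExampleRunEndListener:
    def __init__(self, dmgr, trigger_at_run_end=False):
        # device drivers are constructed with the DeviceManager as first argument
        self.trigger_at_run_end = trigger_at_run_end

    def notify_run_end(self):
        # called once by DeviceManager.notify_run_end() when the run stage finishes
        if self.trigger_at_run_end:
            print("run stage ended")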
def close_devices(self):
"""Closes all active devices, in the opposite order as they were
requested."""
@ -101,8 +110,9 @@ class DeviceManager:
dev.close_rpc()
elif hasattr(dev, "close"):
dev.close()
except Exception as e:
except:
logger.warning("Exception %r when closing device %r", e, dev)
logger.warning("Exception raised when closing device %r:",
dev, exc_info=True)
self.active_devices.clear()

View File

@ -306,6 +306,8 @@ def main():
start_time = time.time()
rid = obj["rid"]
expid = obj["expid"]
if "devarg_override" in expid:
device_mgr.devarg_override = expid["devarg_override"]
if "file" in expid: if "file" in expid:
if obj["wd"] is not None: if obj["wd"] is not None:
# Using repository # Using repository
@ -343,6 +345,8 @@ def main():
# for end of analyze stage.
write_results()
raise
finally:
device_mgr.notify_run_end()
put_completed()
elif action == "analyze":
try:

View File

@ -49,9 +49,9 @@ class TestCompile(ExperimentCase):
mgmt.clear_log()
with tempfile.TemporaryDirectory() as tmp:
db_path = os.path.join(artiq_root, "device_db.py")
subprocess.call([sys.executable, "-m", "artiq.frontend.artiq_compile", "--device-db", db_path,
subprocess.check_call([sys.executable, "-m", "artiq.frontend.artiq_compile", "--device-db", db_path,
"-c", "CheckLog", "-o", os.path.join(tmp, "check_log.elf"), __file__])
subprocess.call([sys.executable, "-m", "artiq.frontend.artiq_run", "--device-db", db_path,
subprocess.check_call([sys.executable, "-m", "artiq.frontend.artiq_run", "--device-db", db_path,
os.path.join(tmp, "check_log.elf")])
log = mgmt.get_log()
self.assertIn("test_artiq_compile", log)

View File

@ -55,6 +55,7 @@ class ExperimentCase(unittest.TestCase):
try:
exp = self.create(cls, *args, **kwargs)
exp.run()
self.device_mgr.notify_run_end()
exp.analyze()
return exp
except CompileError as error:

View File

@ -0,0 +1,15 @@
# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t
# RUN: OutputCheck %s --file-to-check=%t
from artiq.language.core import *
from artiq.language.types import *
# CHECK-L: ${LINE:+2}: error: type annotation for argument 'x', '1', is not an ARTIQ type
@subkernel(destination=1)
def foo(x: 1) -> TNone:
pass
@kernel
def entrypoint():
# CHECK-L: ${LINE:+1}: note: in subkernel call here
foo()

View File

@ -0,0 +1,15 @@
# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t
# RUN: OutputCheck %s --file-to-check=%t
from artiq.language.core import *
from artiq.language.types import *
# CHECK-L: ${LINE:+2}: error: type annotation for return type, '1', is not an ARTIQ type
@subkernel(destination=1)
def foo() -> 1:
pass
@kernel
def entrypoint():
# CHECK-L: ${LINE:+1}: note: in subkernel call here
foo()

View File

@ -0,0 +1,18 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
@kernel
def entrypoint():
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !.
no_arg()
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-NOT-L: declare void @subkernel_send_message(i32, { i8*, i32 }*, i8**) local_unnamed_addr
@subkernel(destination=1)
def no_arg() -> TStr:
pass

View File

@ -0,0 +1,22 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
@kernel
def entrypoint():
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !.
returning()
# CHECK: call i8 @subkernel_await_message\(i32 1, i64 10000, { i8\*, i32 }\* nonnull .*, i8 1, i8 1\), !dbg !.
# CHECK: call void @subkernel_await_finish\(i32 1, i64 10000\), !dbg !.
subkernel_await(returning)
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-NOT-L: declare void @subkernel_send_message(i32, i8, { i8*, i32 }*, i8**) local_unnamed_addr
# CHECK-L: declare i8 @subkernel_await_message(i32, i64, { i8*, i32 }*, i8, i8) local_unnamed_addr
# CHECK-L: declare void @subkernel_await_finish(i32, i64) local_unnamed_addr
@subkernel(destination=1)
def returning() -> TInt32:
return 1

View File

@ -0,0 +1,22 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
@kernel
def entrypoint():
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !.
returning_none()
# CHECK: call void @subkernel_await_finish\(i32 1, i64 10000\), !dbg !.
# CHECK-NOT: call i8 @subkernel_await_message\(i32 1, i64 10000\, .*\), !dbg !.
subkernel_await(returning_none)
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-NOT-L: declare void @subkernel_send_message(i32, { i8*, i32 }*, i8**) local_unnamed_addr
# CHECK-L: declare void @subkernel_await_finish(i32, i64) local_unnamed_addr
# CHECK-NOT-L: declare i8 @subkernel_await_message(i32, i64, { i8*, i32 }*, i8, i8) local_unnamed_addr
@subkernel(destination=1)
def returning_none() -> TNone:
pass

View File

@ -0,0 +1,25 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
class A:
@subkernel(destination=1)
def sk(self):
pass
@kernel
def kernel_entrypoint(self):
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK-NOT: call void @subkernel_send_message\(.*\), !dbg !.
self.sk()
a = A()
@kernel
def entrypoint():
a.kernel_entrypoint()
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-NOT-L: declare void @subkernel_send_message(i32, i8, { i8*, i32 }*, i8**) local_unnamed_addr

View File

@ -0,0 +1,25 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
class A:
@subkernel(destination=1)
def sk(self, a):
pass
@kernel
def kernel_entrypoint(self):
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK: call void @subkernel_send_message\(i32 1, i8 1, .*\), !dbg !.
self.sk(1)
a = A()
@kernel
def entrypoint():
a.kernel_entrypoint()
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-L: declare void @subkernel_send_message(i32, i8, { i8*, i32 }*, i8**) local_unnamed_addr

View File

@ -0,0 +1,18 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
@kernel
def entrypoint():
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK: call void @subkernel_send_message\(i32 ., i8 1, .*\), !dbg !.
accept_arg(1)
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-L: declare void @subkernel_send_message(i32, i8, { i8*, i32 }*, i8**) local_unnamed_addr
@subkernel(destination=1)
def accept_arg(arg: TInt32) -> TNone:
pass

View File

@ -0,0 +1,21 @@
# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
# RUN: OutputCheck %s --file-to-check=%t.ll
from artiq.language.core import *
from artiq.language.types import *
@kernel
def entrypoint():
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK: call void @subkernel_send_message\(i32 ., i8 1, .*\), !dbg !.
accept_arg(1)
# CHECK: call void @subkernel_load_run\(i32 1, i1 true\), !dbg !.
# CHECK: call void @subkernel_send_message\(i32 ., i8 2, .*\), !dbg !.
accept_arg(1, 2)
# CHECK-L: declare void @subkernel_load_run(i32, i1) local_unnamed_addr
# CHECK-L: declare void @subkernel_send_message(i32, i8, { i8*, i32 }*, i8**) local_unnamed_addr
@subkernel(destination=1)
def accept_arg(arg_a, arg_b=5) -> TNone:
pass

View File

@ -0,0 +1,15 @@
# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t
# RUN: OutputCheck %s --file-to-check=%t
from artiq.experiment import *
import numpy as np
@kernel
def a():
# CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever
# CHECK-L: ${LINE:+1}: note: ... to this point
return np.array([0, 1])
@kernel
def entrypoint():
a()

View File

@ -0,0 +1,16 @@
# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t
# RUN: OutputCheck %s --file-to-check=%t
from artiq.experiment import *
import numpy as np
@kernel
def a():
# CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever
# CHECK-L: ${LINE:+1}: note: ... to this point
return np.full(10, 42.0)
@kernel
def entrypoint():
a()

View File

@ -0,0 +1,17 @@
# RUN: %python -m artiq.compiler.testbench.embedding +diag %s 2>%t
# RUN: OutputCheck %s --file-to-check=%t
from artiq.experiment import *
import numpy as np
data = np.array([[0, 1], [2, 3]])
@kernel
def a():
# CHECK-L: ${LINE:+2}: error: cannot return an allocated value that does not live forever
# CHECK-L: ${LINE:+1}: note: ... to this point
return np.transpose(data)
@kernel
def entrypoint():
a()

View File

@ -1,127 +0,0 @@
# Copyright (C) 2014, 2015 Robert Jordens <jordens@gmail.com>
import unittest
from artiq.wavesynth import compute_samples
class TestSynthesizer(unittest.TestCase):
program = [
[
# frame 0
{
# frame 0, segment 0, line 0
"dac_divider": 1,
"duration": 100,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [0.0, 0.0, 0.01],
"phase": [0.0, 0.0, 0.0005],
"clear": False}
}
],
"trigger": True
},
{
# frame 0, segment 0, line 1
"dac_divider": 1,
"duration": 100,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [49.5, 1.0, -0.01],
"phase": [0.0, 0.05, 0.0005],
"clear": False}
}
],
"trigger": False
},
],
[
# frame 1
{
# frame 1, segment 0, line 0
"dac_divider": 1,
"duration": 100,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [100.0, 0.0, -0.01],
"phase": [0.0, 0.1, -0.0005],
"clear": False}
}
],
"trigger": True
},
{
# frame 1, segment 0, line 1
"dac_divider": 1,
"duration": 100,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [50.5, -1.0, 0.01],
"phase": [0.0, 0.05, -0.0005],
"clear": False}
}
],
"trigger": False
}
],
[
# frame 2
{
# frame 2, segment 0, line 0
"dac_divider": 1,
"duration": 84,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [100.0],
"phase": [0.0, 0.05],
"clear": False}
}
],
"trigger": True
},
{
# frame 2, segment 1, line 0
"dac_divider": 1,
"duration": 116,
"channel_data": [
{
# channel 0
"dds": {"amplitude": [100.0],
"phase": [0.0, 0.05],
"clear": True}
}
],
"trigger": True
}
]
]
def setUp(self):
self.dev = compute_samples.Synthesizer(1, self.program)
self.t = list(range(600))
def drive(self):
s = self.dev
y = []
for f in 0, 2, None, 1:
if f is not None:
s.select(f)
y += s.trigger()[0]
x = list(range(600))
return x, y
def test_run(self):
x, y = self.drive()
@unittest.skip("manual/visual test")
def test_plot(self):
from matplotlib import pyplot as plt
x, y = self.drive()
plt.plot(x, y)
plt.show()

View File

@ -18,7 +18,9 @@ from artiq.language.environment import is_public_experiment
from artiq.language import units
__all__ = ["parse_arguments", "elide", "scale_from_metadata",
__all__ = ["parse_arguments",
"parse_devarg_override", "unparse_devarg_override",
"elide", "scale_from_metadata",
"short_format", "file_import",
"get_experiment",
"exc_to_warning", "asyncio_wait_or_cancel",
@ -36,6 +38,29 @@ def parse_arguments(arguments):
return d
def parse_devarg_override(devarg_override):
devarg_override_dict = {}
for item in devarg_override.split():
device, _, override = item.partition(":")
if not override:
raise ValueError
if device not in devarg_override_dict:
devarg_override_dict[device] = {}
argument, _, value = override.partition("=")
if not value:
raise ValueError
devarg_override_dict[device][argument] = pyon.decode(value)
return devarg_override_dict
def unparse_devarg_override(devarg_override):
devarg_override_strs = [
"{}:{}={}".format(device, argument, pyon.encode(value))
for device, overrides in devarg_override.items()
for argument, value in overrides.items()]
return " ".join(devarg_override_strs)
def elide(s, maxlen):
elided = False
if len(s) > maxlen:

View File

@ -1,234 +0,0 @@
# Copyright (C) 2014, 2015 Robert Jordens <jordens@gmail.com>
import numpy as np
from scipy.interpolate import splrep, splev, spalde
class UnivariateMultiSpline:
"""Multidimensional wrapper around `scipy.interpolate.sp*` functions.
`scipy.inteprolate.splprep` is limited to 12 dimensions.
"""
def __init__(self, x, y, *, x0=None, order=4, **kwargs):
self.order = order
self.x = x
self.s = []
for i, yi in enumerate(y):
if x0 is not None:
yi = self.upsample_knots(x0[i], yi, x)
self.s.append(splrep(x, yi, k=order - 1, **kwargs))
def upsample_knots(self, x0, y0, x):
return splev(x, splrep(x0, y0, k=self.order - 1))
def lev(self, x, *a, **k):
return np.array([splev(x, si) for si in self.s])
def alde(self, x):
u = np.array([spalde(x, si) for si in self.s])
if len(x) == 1:
u = u[:, None, :]
return u
def __call__(self, x, use_alde=True):
if use_alde:
u = self.alde(x)[:, :, :self.order]
s = (len(self.s), len(x), self.order)
assert u.shape == s, (u.shape, s)
return u.transpose(2, 0, 1)
else:
return np.array([self.lev(x, der=i) for i in range(self.order)])
def pad_const(x, n, axis=0):
"""Prefix and postfix the array `x` by `n` repetitions of the first and
last value along `axis`.
"""
a = np.repeat(x.take([0], axis), n, axis)
b = np.repeat(x.take([-1], axis), n, axis)
xp = np.concatenate([a, x, b], axis)
s = list(x.shape)
s[axis] += 2*n
assert xp.shape == tuple(s), (x.shape, s, xp.shape)
return xp
def build_segment(durations, coefficients, target="bias",
variable="amplitude", compress=True):
"""Build a wavesynth-style segment from homogeneous duration and
coefficient data.
:param durations: 1D sequence of line durations.
:param coefficients: 3D array with shape `(n, m, len(durations))`,
with `n` being the interpolation order + 1 and `m` the number of
channels.
:param target: The target component of the channel to affect.
:param variable: The variable within the target component.
:param compress: If `True`, skip zero high order coefficients.
"""
for dxi, yi in zip(durations, coefficients.transpose()):
cd = []
for yij in yi:
cdj = []
for yijk in reversed(yij):
if cdj or abs(yijk) or not compress:
cdj.append(float(yijk))
cdj.reverse()
if not cdj:
cdj.append(float(yij[0]))
cd.append({target: {variable: cdj}})
yield {"duration": int(dxi), "channel_data": cd}
class CoefficientSource:
def crop_x(self, start, stop, num=2):
"""Return an array of valid sample positions.
This method needs to be overloaded if this `CoefficientSource`
does not support sampling at arbitrary positions or at arbitrary
density.
:param start: First sample position.
:param stop: Last sample position.
:param num: Number of samples between `start` and `stop`.
:return: Array of sample positions. `start` and `stop` should be
returned as the first and last value in the array respectively.
"""
return np.linspace(start, stop, num)
def scale_x(self, x, scale):
# TODO: This could be moved to the the Driver/Mediator code as it is
# device-specific.
"""Scale and round sample positions.
The sample times may need to be changed and/or decimated if
incompatible with hardware requirements.
:param x: Input sample positions in data space.
:param scale: Data space position to cycles conversion scale,
in units of x-units per clock cycle.
:return: `x_sample`, the rounded sample positions and `durations`, the
integer durations of the individual samples in cycles.
"""
t = np.rint(x/scale)
x_sample = t*scale
durations = np.diff(t).astype(int)
return x_sample, durations
def __call__(self, x, **kwargs):
"""Perform sampling and return coefficients.
:param x: Sample positions.
:return: `y` the array of coefficients. `y.shape == (order, n, len(x))`
with `n` being the number of channels."""
raise NotImplementedError
def get_segment(self, start, stop, scale, *, cutoff=1e-12,
target="bias", variable="amplitude"):
"""Build wavesynth segment.
:param start: see `crop_x()`.
:param stop: see `crop_x()`.
:param scale: see `scale_x()`.
:param cutoff: coefficient cutoff towards zero to compress data.
"""
x = self.crop_x(start, stop)
x_sample, durations = self.scale_x(x, scale)
coefficients = self(x_sample)
if len(x_sample) == 1 and start == stop:
coefficients = coefficients[:1]
# rescale coefficients accordingly
coefficients *= (scale*np.sign(durations))**np.arange(
coefficients.shape[0])[:, None, None]
if cutoff:
coefficients[np.fabs(coefficients) < cutoff] = 0
return build_segment(np.fabs(durations), coefficients, target=target,
variable=variable)
def extend_segment(self, segment, *args, **kwargs):
"""Extend a wavesynth segment.
See `get_segment()` for arguments.
"""
for line in self.get_segment(*args, **kwargs):
segment.add_line(**line)
class SplineSource(CoefficientSource):
def __init__(self, x, y, order=4, pad_dx=1.):
"""
:param x: 1D sample positions.
:param y: 2D sample values.
"""
self.x = np.asanyarray(x)
assert self.x.ndim == 1
self.y = np.asanyarray(y)
assert self.y.ndim == 2
if pad_dx is not None:
a = np.arange(-order, 0)*pad_dx + self.x[0]
b = self.x[-1] + np.arange(1, order + 1)*pad_dx
self.x = np.r_[a, self.x, b]
self.y = pad_const(self.y, order, axis=1)
assert self.y.shape[1] == self.x.shape[0]
self.spline = UnivariateMultiSpline(self.x, self.y, order=order)
def crop_x(self, start, stop):
ia, ib = np.searchsorted(self.x, (start, stop))
if start > stop:
x = self.x[ia - 1:ib - 1:-1]
else:
x = self.x[ia:ib]
return np.r_[start, x, stop]
def scale_x(self, x, scale, min_duration=1, min_length=20):
"""Enforce, round, and scale x to device-dependent values.
Due to minimum duration and/or minimum segment length constraints
this method may drop samples from `x_sample` to comply.
:param min_duration: Minimum duration of a line.
:param min_length: Minimum segment length to space triggers.
"""
# We want to only sample a spline at t_knot + epsilon
# where the highest order derivative has just jumped
# and is valid at least up to the next knot after t_knot.
#
# To ensure that we are on the correct side of a knot:
# * only ever increase t when rounding (for increasing t)
# * or only ever decrease it (for decreasing t)
t = x/scale
inc = np.diff(t) >= 0
inc = np.r_[inc, inc[-1]]
t = np.where(inc, np.ceil(t), np.floor(t))
dt = np.diff(t.astype(int))
valid = np.absolute(dt) >= min_duration
if not np.any(valid):
valid[0] = True
dt[0] = max(dt[0], min_length)
dt = dt[valid]
x_sample = t[:-1][valid]*scale
return x_sample, dt
def __call__(self, x):
return self.spline(x)
def discrete_compensate(c):
"""Compensate spline coefficients for discrete accumulators
Given continuous-time b-spline coefficients, this function
compensates for the effect of discrete time steps in the
target devices.
The compensation is performed in-place.
"""
l = len(c)
if l > 2:
c[1] += c[2]/2.
if l > 3:
c[1] += c[3]/6.
c[2] += c[3]
if l > 4:
raise ValueError("only third-order splines supported")

View File

@ -1,133 +0,0 @@
# Copyright (C) 2014, 2015 M-Labs Limited
# Copyright (C) 2014, 2015 Robert Jordens <jordens@gmail.com>
from copy import copy
from math import cos, pi
from artiq.wavesynth.coefficients import discrete_compensate
class Spline:
def __init__(self):
self.c = [0.0]
def set_coefficients(self, c):
if not c:
c = [0.]
self.c = copy(c)
discrete_compensate(self.c)
def next(self):
r = self.c[0]
for i in range(len(self.c) - 1):
self.c[i] += self.c[i + 1]
return r
class SplinePhase:
def __init__(self):
self.c = [0.0]
self.c0 = 0.0
def set_coefficients(self, c):
if not c:
c = [0.]
self.c0 = c[0]
c1p = c[1:]
discrete_compensate(c1p)
self.c[1:] = c1p
def clear(self):
self.c[0] = 0.0
def next(self):
r = self.c[0]
for i in range(len(self.c) - 1):
self.c[i] += self.c[i + 1]
self.c[i] %= 1.0
return r + self.c0
class DDS:
def __init__(self):
self.amplitude = Spline()
self.phase = SplinePhase()
def next(self):
return self.amplitude.next()*cos(2*pi*self.phase.next())
class Channel:
def __init__(self):
self.bias = Spline()
self.dds = DDS()
self.v = 0.
self.silence = False
def next(self):
v = self.bias.next() + self.dds.next()
if not self.silence:
self.v = v
return self.v
def set_silence(self, s):
self.silence = s
class TriggerError(Exception):
pass
class Synthesizer:
def __init__(self, nchannels, program):
self.channels = [Channel() for _ in range(nchannels)]
self.program = program
# line_iter is None: "wait for segment selection" state
# otherwise: iterator on the current position in the frame
self.line_iter = None
def select(self, selection):
if self.line_iter is not None:
raise TriggerError("a frame is already selected")
self.line_iter = iter(self.program[selection])
self.line = next(self.line_iter)
def trigger(self):
if self.line_iter is None:
raise TriggerError("no frame selected")
line = self.line
if not line.get("trigger", False):
raise TriggerError("segment is not triggered")
r = [[] for _ in self.channels]
while True:
for channel, channel_data in zip(self.channels,
line["channel_data"]):
channel.set_silence(channel_data.get("silence", False))
if "bias" in channel_data:
channel.bias.set_coefficients(
channel_data["bias"]["amplitude"])
if "dds" in channel_data:
channel.dds.amplitude.set_coefficients(
channel_data["dds"]["amplitude"])
if "phase" in channel_data["dds"]:
channel.dds.phase.set_coefficients(
channel_data["dds"]["phase"])
if channel_data["dds"].get("clear", False):
channel.dds.phase.clear()
if line.get("dac_divider", 1) != 1:
raise NotImplementedError
for channel, rc in zip(self.channels, r):
for i in range(line["duration"]):
rc.append(channel.next())
try:
self.line = line = next(self.line_iter)
if line.get("trigger", False):
return r
except StopIteration:
self.line_iter = None
return r

View File

@ -79,6 +79,25 @@ interpreter.
Empty lists do not have valid list element types, so they cannot be used in the kernel.
In kernels, the lifetime of allocated values (e.g. lists or numpy arrays) might not be correctly
tracked across function calls (see `#1497 <https://github.com/m-labs/artiq/issues/1497>`_,
`#1677 <https://github.com/m-labs/artiq/issues/1677>`_), as in this example ::
@kernel
def func(a):
return a
class ProblemReturn1(EnvExperiment):
def build(self):
self.setattr_device("core")
@kernel
def run(self):
# results in memory corruption
return func([1, 2, 3])
This results in memory corruption at runtime.
Asynchronous RPCs
-----------------

View File

@ -14,6 +14,10 @@ Default network ports
+---------------------------------+--------------+
| Moninj (proxy control) | 1384 |
+---------------------------------+--------------+
| Core analyzer proxy (proxy) | 1385 |
+---------------------------------+--------------+
| Core analyzer proxy (control) | 1386 |
+---------------------------------+--------------+
| Master (logging input) | 1066 |
+---------------------------------+--------------+
| Master (broadcasts) | 1067 |

Some files were not shown because too many files have changed in this diff.