forked from M-Labs/artiq

Compare commits


No commits in common. "master" and "syncrtio" have entirely different histories.

227 changed files with 2952 additions and 13646 deletions

3
.gitignore vendored
View File

@ -29,7 +29,6 @@ __pycache__/
/repository/
/results
/last_rid.pyon
/dataset_db.mdb
/dataset_db.mdb-lock
/dataset_db.pyon
/device_db*.py
/test*

View File

@ -26,6 +26,7 @@ report if possible:
* Operating System
* ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
* Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
* If using Conda, output of `conda list`
* Hardware involved

View File

@ -13,7 +13,7 @@ ARTIQ uses FPGA hardware to perform its time-critical tasks. The `Sinara hardwar
ARTIQ is designed to be portable to hardware platforms from different vendors and FPGA manufacturers.
Several different configurations of a `FPGA evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and of a `Zynq evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_ are also used and supported. FPGA platforms can be combined with any number of additional peripherals, either already accessible from ARTIQ or made accessible with little effort.
ARTIQ and its dependencies are available in the form of Nix packages (for Linux) and MSYS2 packages (for Windows). See `the manual <https://m-labs.hk/experiment-control/resources/>`_ for installation instructions.
ARTIQ and its dependencies are available in the form of Nix packages (for Linux) and Conda packages (for Windows and Linux). See `the manual <https://m-labs.hk/experiment-control/resources/>`_ for installation instructions.
Packages containing pre-compiled binary images to be loaded onto the hardware platforms are supplied for each configuration.
Like any open source software ARTIQ can equally be built and installed directly from `source <https://github.com/m-labs/artiq>`_.
@ -29,7 +29,7 @@ Website: https://m-labs.hk/artiq
License
=======
Copyright (C) 2014-2023 M-Labs Limited.
Copyright (C) 2014-2022 M-Labs Limited.
ARTIQ is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by

View File

@ -3,98 +3,14 @@
Release notes
=============
ARTIQ-8 (Unreleased)
--------------------
Unreleased
----------
Highlights:
* New hardware support:
- Support for Shuttler, a 16-channel 125MSPS DAC card intended for ion transport.
Waveform generator and user API are similar to the NIST PDQ.
- Implemented Phaser-servo. This requires recent gateware on Phaser.
- Almazny v1.2 with finer RF switch control.
- Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking.
- More user LEDs are exposed to RTIO on Kasli.
- Implemented Phaser-MIQRO support. This requires the proprietary Phaser MIQRO gateware
variant from QUARTIQ.
- Sampler: fixed ADC MU to Volt conversion factor for Sampler v2.2+.
For earlier hardware versions, specify the hardware version in the device
database file (e.g. ``"hw_rev": "v2.1"``) to use the correct conversion factor.
* Support for distributed DMA, where DMA is run directly on satellites for corresponding
RTIO events, increasing bandwidth in scenarios with heavy satellite usage.
* Support for subkernels, where kernels are run on satellite device CPUs to offload some
of the processing and RTIO operations (a usage sketch follows this list).
* CPU (on softcore platforms) and AXI bus (on Zynq) are now clocked synchronously with the RTIO
clock, to facilitate implementation of local processing on DRTIO satellites, and to slightly
reduce RTIO latency.
* Support for DRTIO-over-EEM, used with Shuttler.
* Added channel names to RTIO error messages.
* GUI:
- Implemented Applet Request Interfaces which allow applets to modify datasets and set the
current values of widgets in the dashboard's experiment windows.
- Implemented a new EntryArea widget which allows argument entry widgets to be used in applets.
- The "Close all applets" command (shortcut: Ctrl-Alt-W) now ignores docked applets,
making it a convenient way to clean up after exploratory work without destroying a
carefully arranged default workspace.
- Hotkeys now organize experiment windows in the order they were last interacted with:
+ CTRL+SHIFT+T tiles experiment windows
+ CTRL+SHIFT+C cascades experiment windows
* Persistent datasets are now stored in a LMDB database for improved performance.
* Python's built-in types (such as ``float``, or ``List[...]``) can now be used in type annotations on
kernel functions (see the example after this list).
* Full Python 3.10 support.
* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to
support legacy installations, but may be removed in a future release.
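A minimal sketch of the subkernel support described above; the ``subkernel`` decorator and the ``subkernel_await`` helper appear in the compiler prelude changes in this diff, while the import, the ``destination`` value and the experiment structure are illustrative assumptions rather than verbatim documentation:
::

    from artiq.experiment import *

    class SatelliteDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @subkernel(destination=1)  # destination number is an assumption for this sketch
        def on_satellite(self):
            pass  # body executes on the CPU of DRTIO satellite 1

        @kernel
        def run(self):
            self.core.reset()
            self.on_satellite()                  # starts running on the satellite
            subkernel_await(self.on_satellite)   # block until it completes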
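The built-in type annotations mentioned above can be sketched as follows (function name and body are illustrative; ``List[float]`` from ``typing`` is handled the same way as the built-in ``list[float]``):
::

    @kernel
    def total_power(self, samples: list[float]) -> float:
        # previously this signature required compiler types such as TList(TFloat)
        acc = 0.0
        for s in samples:
            acc += s
        return acc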
Breaking changes:
* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control
operations, which includes dataset operations. It can be ignored if not needed. For an example usage,
refer to the ``big_number.py`` applet; a minimal sketch is also given after this list.
* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Derived applets
should change the function signature as below:
::
# SimpleApplet
def data_changed(self, value, metadata, persist, mods)
# SimpleApplet (old version)
def data_changed(self, data, mods)
# TitleApplet
def data_changed(self, value, metadata, persist, mods, title)
# TitleApplet (old version)
def data_changed(self, data, mods, title)
Accesses to the data argument should be replaced as below:
::
data[key][0] ==> persist[key]
data[key][1] ==> value[key]
* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``.
Parameters after and including ``scale`` in both constructors are now keyword-only.
Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage (a short sketch also follows this list).
* Almazny v1.2 is incompatible with the legacy versions and is the default.
To use legacy versions, specify ``almazny_hw_rev`` in the JSON description.
* kasli_generic.py has been merged into kasli.py, and the demonstration designs without JSON descriptions
have been removed. The base classes remain present in kasli.py to support third-party flows without
JSON descriptions.
* Legacy PYON databases should be converted to LMDB with the script below:
::
from sipyco import pyon
import lmdb
old = pyon.load_file("dataset_db.pyon")
new = lmdb.open("dataset_db.mdb", subdir=False, map_size=2**30)
with new.begin(write=True) as txn:
for key, value in old.items():
txn.put(key.encode(), pyon.encode((value, {})).encode())
new.close()
* ``artiq.wavesynth`` has been removed.
* Implemented Phaser-servo. This requires recent gateware on Phaser.
* Implemented Phaser-MIQRO support. This requires the Phaser MIQRO gateware
variant.
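To illustrate the applet-related breaking changes above, here is a minimal custom applet written against the new constructor and ``data_changed`` signatures. It mirrors the updated ``big_number.py`` (which names the control argument ``req``); the dataset handling and the increment-on-double-click behaviour are only illustrative:
::

    from PyQt5 import QtWidgets
    from artiq.applets.simple import SimpleApplet

    class Counter(QtWidgets.QLCDNumber):
        def __init__(self, args, ctl):
            QtWidgets.QLCDNumber.__init__(self)
            self.dataset_name = args.dataset
            self.ctl = ctl          # Applet Request Interface (dataset/argument operations)
            self.current = 0

        def data_changed(self, value, metadata, persist, mods):
            try:
                self.current = int(value[self.dataset_name])
            except (KeyError, ValueError, TypeError):
                self.current = 0
            self.display(self.current)

        def mouseDoubleClickEvent(self, event):
            # write back through the request interface instead of a raw RPC
            self.ctl.set_dataset(self.dataset_name, self.current + 1)

    def main():
        applet = SimpleApplet(Counter)
        applet.add_dataset("dataset", "dataset to display and increment")
        applet.run()

    if __name__ == "__main__":
        main()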
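Likewise, a short sketch of the ``precision`` rename and the keyword-only parameters in ``NumberValue`` (argument values are illustrative; ``Scannable`` is affected in the same way):
::

    from artiq.experiment import EnvExperiment, NumberValue, MHz

    class FrequencyDemo(EnvExperiment):
        def build(self):
            # ``precision`` replaces ``ndecimals``; everything from ``scale``
            # onwards must now be passed by keyword.
            self.setattr_argument("frequency",
                                  NumberValue(100*MHz, unit="MHz", scale=MHz,
                                              precision=4, step=1*MHz))

        def run(self):
            print(self.frequency)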
ARTIQ-7
-------
@ -110,7 +26,7 @@ Highlights:
- Almazny mezzanine board for Mirny
- Phaser: improved documentation, exposed the DAC coarse mixer and ``sif_sync``, exposed upconverter calibration
and enabling/disabling of upconverter LO & RF outputs, added helpers to align Phaser updates to the
RTIO timeline (``get_next_frame_mu()``).
RTIO timeline (``get_next_frame_mu()``
- Urukul: ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912.
* Softcore targets now use the RISC-V architecture (VexRiscv) instead of OR1K (mor1kx).
* Gateware FPU is supported on KC705 and Kasli 2.0.
@ -160,9 +76,9 @@ Breaking changes:
generated for some configurations.
* Phaser: fixed coarse mixer frequency configuration
* Mirny: Added extra delays in ``ADF5356.sync()``. This avoids the need of an extra delay before
calling ``ADF5356.init()``.
calling `ADF5356.init()`.
* The deprecated ``set_dataset(..., save=...)`` is no longer supported.
* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accommodate support for PCA9547,
* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accomodate support for PCA9547,
and possibly other switches in future. Readback has been removed, and now only one channel per
switch is supported.

View File

@ -1,7 +1,4 @@
import os
def get_rev():
return os.getenv("VERSIONEER_REV", default="unknown")
def get_version():
return os.getenv("VERSIONEER_OVERRIDE", default="8.0+unknown.beta")
return os.getenv("VERSIONEER_OVERRIDE", default="7.0.beta")

23
artiq/afws.pem Normal file
View File

@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID0zCCArugAwIBAgIUPkNfEUx/uau3z8SD4mgMbCK/DEgwDQYJKoZIhvcNAQEL
BQAweTELMAkGA1UEBhMCSEsxEzARBgNVBAgMClNvbWUtU3RhdGUxFzAVBgNVBAoM
Dk0tTGFicyBMaW1pdGVkMRkwFwYDVQQDDBBuaXhibGQubS1sYWJzLmhrMSEwHwYJ
KoZIhvcNAQkBFhJoZWxwZGVza0BtLWxhYnMuaGswHhcNMjIwMjA2MTA1ODQ0WhcN
MjUwMjA1MTA1ODQ0WjB5MQswCQYDVQQGEwJISzETMBEGA1UECAwKU29tZS1TdGF0
ZTEXMBUGA1UECgwOTS1MYWJzIExpbWl0ZWQxGTAXBgNVBAMMEG5peGJsZC5tLWxh
YnMuaGsxITAfBgkqhkiG9w0BCQEWEmhlbHBkZXNrQG0tbGFicy5oazCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAPWetZhoggPR2ae7waGzv1AQ8NQO3noW
8DofVjusNpX5i/YB0waAr1bm1tALLJoHV2r/gTxujlXCe/L/WG1DLseCf6NO9sHg
t0FLhDpF9kPMWBgauVVLepd2Y2yU1G8eFuEVGnsiQSu0IzsZP5FQBJSyxvxJ+V/L
EW9ox91VGOP9VZR9jqdlYjGhcwClHA/nHe0q1fZq42+9rG466I5yIlNSoa7ilhTU
2C2doxy6Sr6VJYnLEMQqoIF65t3MkKi9iaqN7az0OCrj6XR0P5iKBzUhIgMUd2qs
7Id0XUdbQvaoaRI67vhGkNr+f4rdAUNCDGcbbokuBnmE7/gva6BAABUCAwEAAaNT
MFEwHQYDVR0OBBYEFM2e2FmcytXhKyfC1KEjVJ2mPSy3MB8GA1UdIwQYMBaAFM2e
2FmcytXhKyfC1KEjVJ2mPSy3MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
BQADggEBAKH0z5vlbfTghjYWwd2yEEFBbZx5XxaLHboFQpFpxu9sZoidVs047tco
MOr1py9juiNGGM8G35sw9306f+thDFwqlQfSExUwp5pRQNq+mxglMSF05HWDqBwb
wnItKi/WXpkMQXgpQJFVeflz4B4ZFNlH1UQl5bwacXOM9NM9zO7duCjVXmGE0yxi
VQyApfPQYu9whCSowDYYaA0toJeikMzGfWxhlAH79/2Qmit8KcSCbX1fK/QoRZLa
5NeUi/OlJbBpkgTrfzfMLphmsPWPAVMeUKzqd/vXfG6ZBOZZm6e6sl8RBycBezII
15WekikTE5+T54/E0xiu+zIW/Xhhk14=
-----END CERTIFICATE-----

View File

@ -1,96 +1,22 @@
#!/usr/bin/env python3
from PyQt5 import QtWidgets, QtCore, QtGui
from PyQt5 import QtWidgets
from artiq.applets.simple import SimpleApplet
from artiq.tools import scale_from_metadata
from artiq.gui.tools import LayoutWidget
class QResponsiveLCDNumber(QtWidgets.QLCDNumber):
doubleClicked = QtCore.pyqtSignal()
def mouseDoubleClickEvent(self, event):
self.doubleClicked.emit()
class QCancellableLineEdit(QtWidgets.QLineEdit):
editCancelled = QtCore.pyqtSignal()
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Escape:
self.editCancelled.emit()
else:
super().keyPressEvent(event)
class NumberWidget(LayoutWidget):
def __init__(self, args, req):
LayoutWidget.__init__(self)
class NumberWidget(QtWidgets.QLCDNumber):
def __init__(self, args):
QtWidgets.QLCDNumber.__init__(self)
self.setDigitCount(args.digit_count)
self.dataset_name = args.dataset
self.req = req
self.metadata = dict()
self.number_area = QtWidgets.QStackedWidget()
self.addWidget(self.number_area, 0, 0)
self.unit_area = QtWidgets.QLabel()
self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop)
self.addWidget(self.unit_area, 0, 1)
self.lcd_widget = QResponsiveLCDNumber()
self.lcd_widget.setDigitCount(args.digit_count)
self.lcd_widget.doubleClicked.connect(self.start_edit)
self.number_area.addWidget(self.lcd_widget)
self.edit_widget = QCancellableLineEdit()
self.edit_widget.setValidator(QtGui.QDoubleValidator())
self.edit_widget.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter)
self.edit_widget.editCancelled.connect(self.cancel_edit)
self.edit_widget.returnPressed.connect(self.confirm_edit)
self.number_area.addWidget(self.edit_widget)
font = QtGui.QFont()
font.setPointSize(60)
self.edit_widget.setFont(font)
unit_font = QtGui.QFont()
unit_font.setPointSize(20)
self.unit_area.setFont(unit_font)
self.number_area.setCurrentWidget(self.lcd_widget)
def start_edit(self):
# QLCDNumber value property contains the value of zero
# if the displayed value is not a number.
self.edit_widget.setText(str(self.lcd_widget.value()))
self.edit_widget.selectAll()
self.edit_widget.setFocus()
self.number_area.setCurrentWidget(self.edit_widget)
def confirm_edit(self):
scale = scale_from_metadata(self.metadata)
val = float(self.edit_widget.text())
val *= scale
self.req.set_dataset(self.dataset_name, val, **self.metadata)
self.number_area.setCurrentWidget(self.lcd_widget)
def cancel_edit(self):
self.number_area.setCurrentWidget(self.lcd_widget)
def data_changed(self, value, metadata, persist, mods):
def data_changed(self, data, mods):
try:
self.metadata = metadata[self.dataset_name]
# This applet will degenerate other scalar types to native float on edit
# Use the dashboard ChangeEditDialog for consistent type casting
val = float(value[self.dataset_name])
scale = scale_from_metadata(self.metadata)
val /= scale
n = float(data[self.dataset_name][1])
except (KeyError, ValueError, TypeError):
val = "---"
unit = self.metadata.get("unit", "")
self.unit_area.setText(unit)
self.lcd_widget.display(val)
n = "---"
self.display(n)
def main():

View File

@ -7,13 +7,13 @@ from artiq.applets.simple import SimpleApplet
class Image(pyqtgraph.ImageView):
def __init__(self, args, req):
def __init__(self, args):
pyqtgraph.ImageView.__init__(self)
self.args = args
def data_changed(self, value, metadata, persist, mods):
def data_changed(self, data, mods):
try:
img = value[self.args.img]
img = data[self.args.img][1]
except KeyError:
return
self.setImage(img)

View File

@ -8,20 +8,20 @@ from artiq.applets.simple import TitleApplet
class HistogramPlot(pyqtgraph.PlotWidget):
def __init__(self, args, req):
def __init__(self, args):
pyqtgraph.PlotWidget.__init__(self)
self.args = args
self.timer = QTimer()
self.timer.setSingleShot(True)
self.timer.timeout.connect(self.length_warning)
def data_changed(self, value, metadata, persist, mods, title):
def data_changed(self, data, mods, title):
try:
y = value[self.args.y]
y = data[self.args.y][1]
if self.args.x is None:
x = None
else:
x = value[self.args.x]
x = data[self.args.x][1]
except KeyError:
return
if x is None:

View File

@ -9,7 +9,7 @@ from artiq.applets.simple import TitleApplet
class XYPlot(pyqtgraph.PlotWidget):
def __init__(self, args, req):
def __init__(self, args):
pyqtgraph.PlotWidget.__init__(self)
self.args = args
self.timer = QTimer()
@ -19,16 +19,16 @@ class XYPlot(pyqtgraph.PlotWidget):
'Error bars': False,
'Fit values': False}
def data_changed(self, value, metadata, persist, mods, title):
def data_changed(self, data, mods, title):
try:
y = value[self.args.y]
y = data[self.args.y][1]
except KeyError:
return
x = value.get(self.args.x, (False, None))
x = data.get(self.args.x, (False, None))[1]
if x is None:
x = np.arange(len(y))
error = value.get(self.args.error, (False, None))
fit = value.get(self.args.fit, (False, None))
error = data.get(self.args.error, (False, None))[1]
fit = data.get(self.args.fit, (False, None))[1]
if not len(y) or len(y) != len(x):
self.mismatch['X values'] = True

View File

@ -22,7 +22,7 @@ def _compute_ys(histogram_bins, histograms_counts):
# pyqtgraph.GraphicsWindow fails to behave like a regular Qt widget
# and breaks embedding. Do not use as top widget.
class XYHistPlot(QtWidgets.QSplitter):
def __init__(self, args, req):
def __init__(self, args):
QtWidgets.QSplitter.__init__(self)
self.resize(1000, 600)
self.setWindowTitle("XY/Histogram")
@ -124,11 +124,11 @@ class XYHistPlot(QtWidgets.QSplitter):
return False
return True
def data_changed(self, value, metadata, persist, mods):
def data_changed(self, data, mods):
try:
xs = value[self.args.xs]
histogram_bins = value[self.args.histogram_bins]
histograms_counts = value[self.args.histograms_counts]
xs = data[self.args.xs][1]
histogram_bins = data[self.args.histogram_bins][1]
histograms_counts = data[self.args.histograms_counts][1]
except KeyError:
return
if len(xs) != histograms_counts.shape[0]:

View File

@ -6,18 +6,18 @@ from artiq.applets.simple import SimpleApplet
class ProgressWidget(QtWidgets.QProgressBar):
def __init__(self, args, req):
def __init__(self, args):
QtWidgets.QProgressBar.__init__(self)
self.setMinimum(args.min)
self.setMaximum(args.max)
self.dataset_value = args.value
def data_changed(self, value, metadata, persist, mods):
def data_changed(self, data, mods):
try:
val = round(value[self.dataset_value])
value = round(data[self.dataset_value][1])
except (KeyError, ValueError, TypeError):
val = 0
self.setValue(val)
value = 0
self.setValue(value)

View File

@ -7,112 +7,13 @@ import string
from qasync import QEventLoop, QtWidgets, QtCore
from sipyco.sync_struct import Subscriber, process_mod
from sipyco.pc_rpc import AsyncioClient as RPCClient
from sipyco import pyon
from sipyco.pipe_ipc import AsyncioChildComm
from artiq.language.scan import ScanObject
logger = logging.getLogger(__name__)
class _AppletRequestInterface:
def __init__(self):
raise NotImplementedError
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
"""
Set a dataset.
See documentation of ``artiq.language.environment.set_dataset``.
"""
raise NotImplementedError
def mutate_dataset(self, key, index, value):
"""
Mutate a dataset.
See documentation of ``artiq.language.environment.mutate_dataset``.
"""
raise NotImplementedError
def append_to_dataset(self, key, value):
"""
Append to a dataset.
See documentation of ``artiq.language.environment.append_to_dataset``.
"""
raise NotImplementedError
def set_argument_value(self, expurl, name, value):
"""
Temporarily set the value of an argument in a experiment in the dashboard.
The value resets to default value when recomputing the argument.
:param expurl: Experiment URL identifying the experiment in the dashboard. Example: 'repo:ArgumentsDemo'.
:param name: Name of the argument in the experiment.
:param value: Object representing the new temporary value of the argument. For ``Scannable`` arguments, this parameter
should be a ``ScanObject``. The type of the ``ScanObject`` will be set as the selected type when this function is called.
"""
raise NotImplementedError
class AppletRequestIPC(_AppletRequestInterface):
def __init__(self, ipc):
self.ipc = ipc
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
metadata = {}
if unit is not None:
metadata["unit"] = unit
if scale is not None:
metadata["scale"] = scale
if precision is not None:
metadata["precision"] = precision
self.ipc.set_dataset(key, value, metadata, persist)
def mutate_dataset(self, key, index, value):
mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
self.ipc.update_dataset(mod)
def append_to_dataset(self, key, value):
mod = {"action": "append", "path": [key, 1], "x": value}
self.ipc.update_dataset(mod)
def set_argument_value(self, expurl, name, value):
if isinstance(value, ScanObject):
value = value.describe()
self.ipc.set_argument_value(expurl, name, value)
class AppletRequestRPC(_AppletRequestInterface):
def __init__(self, loop, dataset_ctl):
self.loop = loop
self.dataset_ctl = dataset_ctl
self.background_tasks = set()
def _background(self, coro, *args, **kwargs):
task = self.loop.create_task(coro(*args, **kwargs))
self.background_tasks.add(task)
task.add_done_callback(self.background_tasks.discard)
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
metadata = {}
if unit is not None:
metadata["unit"] = unit
if scale is not None:
metadata["scale"] = scale
if precision is not None:
metadata["precision"] = precision
self._background(self.dataset_ctl.set, key, value, metadata=metadata, persist=persist)
def mutate_dataset(self, key, index, value):
mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
self._background(self.dataset_ctl.update, mod)
def append_to_dataset(self, key, value):
mod = {"action": "append", "path": [key, 1], "x": value}
self._background(self.dataset_ctl.update, mod)
class AppletIPCClient(AsyncioChildComm):
def set_close_cb(self, close_cb):
self.close_cb = close_cb
@ -163,30 +64,13 @@ class AppletIPCClient(AsyncioChildComm):
exc_info=True)
self.close_cb()
def subscribe(self, datasets, init_cb, mod_cb, dataset_prefixes=[], *, loop):
def subscribe(self, datasets, init_cb, mod_cb, dataset_prefixes=[]):
self.write_pyon({"action": "subscribe",
"datasets": datasets,
"dataset_prefixes": dataset_prefixes})
self.init_cb = init_cb
self.mod_cb = mod_cb
self.listen_task = loop.create_task(self.listen())
def set_dataset(self, key, value, metadata, persist=None):
self.write_pyon({"action": "set_dataset",
"key": key,
"value": value,
"metadata": metadata,
"persist": persist})
def update_dataset(self, mod):
self.write_pyon({"action": "update_dataset",
"mod": mod})
def set_argument_value(self, expurl, name, value):
self.write_pyon({"action": "set_argument_value",
"expurl": expurl,
"name": name,
"value": value})
asyncio.ensure_future(self.listen())
class SimpleApplet:
@ -208,11 +92,8 @@ class SimpleApplet:
"for dataset notifications "
"(ignored in embedded mode)")
group.add_argument(
"--port-notify", default=3250, type=int,
help="TCP port to connect to for notifications (ignored in embedded mode)")
group.add_argument(
"--port-control", default=3251, type=int,
help="TCP port to connect to for control (ignored in embedded mode)")
"--port", default=3250, type=int,
help="TCP port to connect to")
self._arggroup_datasets = self.argparser.add_argument_group("datasets")
@ -251,21 +132,8 @@ class SimpleApplet:
if self.embed is not None:
self.ipc.close()
def req_init(self):
if self.embed is None:
dataset_ctl = RPCClient()
self.loop.run_until_complete(dataset_ctl.connect_rpc(
self.args.server, self.args.port_control, "master_dataset_db"))
self.req = AppletRequestRPC(self.loop, dataset_ctl)
else:
self.req = AppletRequestIPC(self.ipc)
def req_close(self):
if self.embed is None:
self.req.dataset_ctl.close_rpc()
def create_main_widget(self):
self.main_widget = self.main_widget_class(self.args, self.req)
self.main_widget = self.main_widget_class(self.args)
if self.embed is not None:
self.ipc.set_close_cb(self.main_widget.close)
if os.name == "nt":
@ -321,12 +189,7 @@ class SimpleApplet:
return False
def emit_data_changed(self, data, mod_buffer):
persist = dict()
value = dict()
metadata = dict()
for k, d in data.items():
persist[k], value[k], metadata[k] = d
self.main_widget.data_changed(value, metadata, persist, mod_buffer)
self.main_widget.data_changed(data, mod_buffer)
def flush_mod_buffer(self):
self.emit_data_changed(self.data, self.mod_buffer)
@ -341,7 +204,7 @@ class SimpleApplet:
self.mod_buffer.append(mod)
else:
self.mod_buffer = [mod]
self.loop.call_later(self.args.update_delay,
asyncio.get_event_loop().call_later(self.args.update_delay,
self.flush_mod_buffer)
else:
self.emit_data_changed(self.data, [mod])
@ -351,11 +214,10 @@ class SimpleApplet:
self.subscriber = Subscriber("datasets",
self.sub_init, self.sub_mod)
self.loop.run_until_complete(self.subscriber.connect(
self.args.server, self.args.port_notify))
self.args.server, self.args.port))
else:
self.ipc.subscribe(self.datasets, self.sub_init, self.sub_mod,
dataset_prefixes=self.dataset_prefixes,
loop=self.loop)
dataset_prefixes=self.dataset_prefixes)
def unsubscribe(self):
if self.embed is None:
@ -366,8 +228,6 @@ class SimpleApplet:
self.qasync_init()
try:
self.ipc_init()
try:
self.req_init()
try:
self.create_main_widget()
self.subscribe()
@ -375,8 +235,6 @@ class SimpleApplet:
self.loop.run_forever()
finally:
self.unsubscribe()
finally:
self.req_close()
finally:
self.ipc_close()
finally:
@ -415,9 +273,4 @@ class TitleApplet(SimpleApplet):
title = self.args.title
else:
title = None
persist = dict()
value = dict()
metadata = dict()
for k, d in data.items():
persist[k], value[k], metadata[k] = d
self.main_widget.data_changed(value, metadata, persist, mod_buffer, title)
self.main_widget.data_changed(data, mod_buffer, title)

View File

@ -20,46 +20,11 @@ class Model(DictSyncTreeSepModel):
DictSyncTreeSepModel.__init__(self, ".", ["Dataset", "Value"], init)
def convert(self, k, v, column):
return short_format(v[1], v[2])
class DatasetCtl:
def __init__(self, master_host, master_port):
self.master_host = master_host
self.master_port = master_port
async def _execute_rpc(self, op_name, key_or_mod, value=None, persist=None, metadata=None):
logger.info("Starting %s operation on %s", op_name, key_or_mod)
try:
remote = RPCClient()
await remote.connect_rpc(self.master_host, self.master_port,
"master_dataset_db")
try:
if op_name == "set":
await remote.set(key_or_mod, value, persist, metadata)
elif op_name == "update":
await remote.update(key_or_mod)
else:
logger.error("Invalid operation: %s", op_name)
return
finally:
remote.close_rpc()
except:
logger.error("Failed %s operation on %s", op_name,
key_or_mod, exc_info=True)
else:
logger.info("Finished %s operation on %s", op_name,
key_or_mod)
async def set(self, key, value, persist=None, metadata=None):
await self._execute_rpc("set", key, value, persist, metadata)
async def update(self, mod):
await self._execute_rpc("update", mod)
return short_format(v[1])
class DatasetsDock(QtWidgets.QDockWidget):
def __init__(self, dataset_sub, dataset_ctl):
def __init__(self, datasets_sub, master_host, master_port):
QtWidgets.QDockWidget.__init__(self, "Datasets")
self.setObjectName("Datasets")
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
@ -97,9 +62,10 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table.addAction(upload_action)
self.set_model(Model(dict()))
dataset_sub.add_setmodel_callback(self.set_model)
datasets_sub.add_setmodel_callback(self.set_model)
self.dataset_ctl = dataset_ctl
self.master_host = master_host
self.master_port = master_port
def _search_datasets(self):
if hasattr(self, "table_model_filter"):
@ -116,14 +82,30 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table_model_filter.setSourceModel(self.table_model)
self.table.setModel(self.table_model_filter)
async def _upload_dataset(self, name, value,):
logger.info("Uploading dataset '%s' to master...", name)
try:
remote = RPCClient()
await remote.connect_rpc(self.master_host, self.master_port,
"master_dataset_db")
try:
await remote.set(name, value)
finally:
remote.close_rpc()
except:
logger.error("Failed uploading dataset '%s'",
name, exc_info=True)
else:
logger.info("Finished uploading dataset '%s'", name)
def upload_clicked(self):
idx = self.table.selectedIndexes()
if idx:
idx = self.table_model_filter.mapToSource(idx[0])
key = self.table_model.index_to_key(idx)
if key is not None:
persist, value, metadata = self.table_model.backing_store[key]
asyncio.ensure_future(self.dataset_ctl.set(key, value, metadata=metadata))
persist, value = self.table_model.backing_store[key]
asyncio.ensure_future(self._upload_dataset(key, value))
def save_state(self):
return bytes(self.table.header().saveState())

View File

@ -10,14 +10,22 @@ import h5py
from sipyco import pyon
from artiq import __artiq_dir__ as artiq_dir
from artiq.gui.tools import (LayoutWidget, WheelFilter,
log_level_to_name, get_open_file_name)
from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name
from artiq.gui.entries import procdesc_to_entry
from artiq.master.worker import Worker, log_worker_exception
logger = logging.getLogger(__name__)
class _WheelFilter(QtCore.QObject):
def eventFilter(self, obj, event):
if (event.type() == QtCore.QEvent.Wheel and
event.modifiers() != QtCore.Qt.NoModifier):
event.ignore()
return True
return False
class _ArgumentEditor(QtWidgets.QTreeWidget):
def __init__(self, dock):
QtWidgets.QTreeWidget.__init__(self)
@ -38,7 +46,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self.setStyleSheet("QTreeWidget {background: " +
self.palette().midlight().color().name() + " ;}")
self.viewport().installEventFilter(WheelFilter(self.viewport(), True))
self.viewport().installEventFilter(_WheelFilter(self.viewport()))
self._groups = dict()
self._arg_to_widgets = dict()
@ -370,9 +378,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
class LocalDatasetDB:
def __init__(self, dataset_sub):
self.dataset_sub = dataset_sub
dataset_sub.add_setmodel_callback(self.init)
def __init__(self, datasets_sub):
self.datasets_sub = datasets_sub
datasets_sub.add_setmodel_callback(self.init)
def init(self, data):
self._data = data
@ -381,11 +389,11 @@ class LocalDatasetDB:
return self._data.backing_store[key][1]
def update(self, mod):
self.dataset_sub.update(mod)
self.datasets_sub.update(mod)
class ExperimentsArea(QtWidgets.QMdiArea):
def __init__(self, root, dataset_sub):
def __init__(self, root, datasets_sub):
QtWidgets.QMdiArea.__init__(self)
self.pixmap = QtGui.QPixmap(os.path.join(
artiq_dir, "gui", "logo_ver.svg"))
@ -394,11 +402,11 @@ class ExperimentsArea(QtWidgets.QMdiArea):
self.open_experiments = []
self._ddb = LocalDatasetDB(dataset_sub)
self._ddb = LocalDatasetDB(datasets_sub)
self.worker_handlers = {
"get_device_db": lambda: {},
"get_device": lambda key, resolve_alias=False: {"type": "dummy"},
"get_device": lambda k: {"type": "dummy"},
"get_dataset": self._ddb.get,
"update_dataset": self._ddb.update,
}
@ -508,9 +516,5 @@ class ExperimentsArea(QtWidgets.QMdiArea):
self.open_experiments.append(dock)
return dock
def set_argument_value(self, expurl, name, value):
logger.warning("Unable to set argument '%s', dropping change. "
"'set_argument_value' not supported in browser.", name)
def on_dock_closed(self, dock):
self.open_experiments.remove(dock)

View File

@ -102,14 +102,13 @@ class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
h5 = open_h5(info)
if h5 is not None:
try:
expid = pyon.decode(h5["expid"][()]) if "expid" in h5 else dict()
start_time = datetime.fromtimestamp(h5["start_time"][()]) if "start_time" in h5 else "<none>"
expid = pyon.decode(h5["expid"][()])
start_time = datetime.fromtimestamp(h5["start_time"][()])
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
"class_name: {}\nrid: {}\nstart_time: {}").format(
h5["artiq_version"].asstr()[()] if "artiq_version" in h5 else "<none>",
expid.get("repo_rev", "<none>"),
expid.get("file", "<none>"), expid.get("class_name", "<none>"),
h5["rid"][()] if "rid" in h5 else "<none>", start_time)
h5["artiq_version"][()], expid["repo_rev"],
expid.get("file", "<none>"), expid["class_name"],
h5["rid"][()], start_time)
return v
except:
logger.warning("unable to read metadata from %s",
@ -175,14 +174,14 @@ class FilesDock(QtWidgets.QDockWidget):
logger.debug("loading datasets from %s", info.filePath())
with f:
try:
expid = pyon.decode(f["expid"][()]) if "expid" in f else dict()
start_time = datetime.fromtimestamp(f["start_time"][()]) if "start_time" in f else "<none>"
expid = pyon.decode(f["expid"][()])
start_time = datetime.fromtimestamp(f["start_time"][()])
v = {
"artiq_version": f["artiq_version"].asstr()[()] if "artiq_version" in f else "<none>",
"repo_rev": expid.get("repo_rev", "<none>"),
"artiq_version": f["artiq_version"][()],
"repo_rev": expid["repo_rev"],
"file": expid.get("file", "<none>"),
"class_name": expid.get("class_name", "<none>"),
"rid": f["rid"][()] if "rid" in f else "<none>",
"class_name": expid["class_name"],
"rid": f["rid"][()],
"start_time": start_time,
}
self.metadata_changed.emit(v)
@ -194,9 +193,7 @@ class FilesDock(QtWidgets.QDockWidget):
if "archive" in f:
def visitor(k, v):
if isinstance(v, h5py.Dataset):
# v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
# See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
rd[k] = (True, v[()], dict(v.attrs))
rd[k] = (True, v[()])
f["archive"].visititems(visitor)
@ -206,9 +203,7 @@ class FilesDock(QtWidgets.QDockWidget):
if k in rd:
logger.warning("dataset '%s' is both in archive "
"and outputs", k)
# v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
# See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
rd[k] = (True, v[()], dict(v.attrs))
rd[k] = (True, v[()])
f["datasets"].visititems(visitor)

View File

@ -59,6 +59,7 @@ def build_artiq_soc(soc, argdict):
builder.software_packages = []
builder.add_software_package("bootloader", os.path.join(firmware_dir, "bootloader"))
is_kasli_v1 = isinstance(soc.platform, kasli.Platform) and soc.platform.hw_rev in ("v1.0", "v1.1")
if isinstance(soc, AMPSoC):
kernel_cpu_type = "vexriscv" if is_kasli_v1 else "vexriscv-g"
builder.add_software_package("libm", cpu_type=kernel_cpu_type)
builder.add_software_package("libprintf", cpu_type=kernel_cpu_type)
@ -68,9 +69,9 @@ def build_artiq_soc(soc, argdict):
# If the kernel lacks FPU, then the runtime unwinder is already generated
if not is_kasli_v1:
builder.add_software_package("libunwind")
if not soc.config["DRTIO_ROLE"] == "satellite":
builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime"))
else:
# Assume DRTIO satellite.
builder.add_software_package("satman", os.path.join(firmware_dir, "satman"))
try:
builder.build()

View File

@ -21,19 +21,13 @@ class scoped(object):
set of variables resolved as globals
"""
class remote(object):
"""
:ivar remote_fn: (bool) whether function is ran on a remote device,
meaning arguments are received remotely and return is sent remotely
"""
# Typed versions of untyped nodes
class argT(ast.arg, commontyped):
pass
class ClassDefT(ast.ClassDef):
_types = ("constructor_type",)
class FunctionDefT(ast.FunctionDef, scoped, remote):
class FunctionDefT(ast.FunctionDef, scoped):
_types = ("signature_type",)
class QuotedFunctionDefT(FunctionDefT):
"""
@ -64,7 +58,7 @@ class BinOpT(ast.BinOp, commontyped):
pass
class BoolOpT(ast.BoolOp, commontyped):
pass
class CallT(ast.Call, commontyped, remote):
class CallT(ast.Call, commontyped):
"""
:ivar iodelay: (:class:`iodelay.Expr`)
:ivar arg_exprs: (dict of str to :class:`iodelay.Expr`)

View File

@ -38,9 +38,6 @@ class TInt(types.TMono):
def one():
return 1
def TInt8():
return TInt(types.TValue(8))
def TInt32():
return TInt(types.TValue(32))
@ -247,12 +244,6 @@ def fn_at_mu():
def fn_rtio_log():
return types.TBuiltinFunction("rtio_log")
def fn_subkernel_await():
return types.TBuiltinFunction("subkernel_await")
def fn_subkernel_preload():
return types.TBuiltinFunction("subkernel_preload")
# Accessors
def is_none(typ):
@ -335,7 +326,7 @@ def get_iterable_elt(typ):
# n-dimensional arrays, rather than the n-1 dimensional result of iterating over
# the first axis, which makes the name a bit misleading.
if is_str(typ) or is_bytes(typ) or is_bytearray(typ):
return TInt8()
return TInt(types.TValue(8))
elif types._is_pointer(typ) or is_iterable(typ):
return typ.find()["elt"].find()
else:
@ -351,5 +342,5 @@ def is_allocated(typ):
is_float(typ) or is_range(typ) or
types._is_pointer(typ) or types.is_function(typ) or
types.is_external_function(typ) or types.is_rpc(typ) or
types.is_subkernel(typ) or types.is_method(typ) or
types.is_tuple(typ) or types.is_value(typ))
types.is_method(typ) or types.is_tuple(typ) or
types.is_value(typ))

View File

@ -5,7 +5,6 @@ the references to the host objects and translates the functions
annotated as ``@kernel`` when they are referenced.
"""
import typing
import os, re, linecache, inspect, textwrap, types as pytypes, numpy
from collections import OrderedDict, defaultdict
@ -19,13 +18,6 @@ from . import types, builtins, asttyped, math_fns, prelude
from .transforms import ASTTypedRewriter, Inferencer, IntMonomorphizer, TypedtreePrinter
from .transforms.asttyped_rewriter import LocalExtractor
try:
# From numpy=1.25.0 dispatching for `__array_function__` is done via
# a C wrapper: https://github.com/numpy/numpy/pull/23020
from numpy.core._multiarray_umath import _ArrayFunctionDispatcher
except ImportError:
_ArrayFunctionDispatcher = None
class SpecializedFunction:
def __init__(self, instance_type, host_function):
@ -53,14 +45,7 @@ class EmbeddingMap:
self.object_forward_map = {}
self.object_reverse_map = {}
self.module_map = {}
# type_map connects the host Python `type` to the pair of associated
# `(TInstance, TConstructor)`s. The `used_…_names` sets cache the
# respective `.name`s for O(1) collision avoidance.
self.type_map = {}
self.used_instance_type_names = set()
self.used_constructor_type_names = set()
self.function_map = {}
self.str_forward_map = {}
self.str_reverse_map = {}
@ -74,9 +59,7 @@ class EmbeddingMap:
"CacheError",
"SPIError",
"0:ZeroDivisionError",
"0:IndexError",
"UnwrapNoneError",
"SubkernelError"])
"0:IndexError"])
def preallocate_runtime_exception_names(self, names):
for i, name in enumerate(names):
@ -108,6 +91,16 @@ class EmbeddingMap:
# Types
def store_type(self, host_type, instance_type, constructor_type):
self._rename_type(instance_type)
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
def _rename_type(self, new_instance_type):
# Generally, user-defined types that have exact same name (which is to say, classes
# defined inside functions) do not pose a problem to the compiler. The two places which
# cannot handle this are:
@ -116,29 +109,12 @@ class EmbeddingMap:
# Since handling #2 requires renaming on ARTIQ side anyway, it's more straightforward
# to do it once when embedding (since non-embedded code cannot define classes in
# functions). Also, easier to debug.
suffix = 0
new_instance_name = instance_type.name
new_constructor_name = constructor_type.name
while True:
if (new_instance_name not in self.used_instance_type_names
and new_constructor_name not in self.used_constructor_type_names):
break
suffix += 1
new_instance_name = f"{instance_type.name}.{suffix}"
new_constructor_name = f"{constructor_type.name}.{suffix}"
self.used_instance_type_names.add(new_instance_name)
instance_type.name = new_instance_name
self.used_constructor_type_names.add(new_constructor_name)
constructor_type.name = new_constructor_name
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
n = 0
for host_type in self.type_map:
instance_type, constructor_type = self.type_map[host_type]
if instance_type.name == new_instance_type.name:
n += 1
new_instance_type.name = "{}.{}".format(new_instance_type.name, n)
def attribute_count(self):
count = 0
@ -185,22 +161,7 @@ class EmbeddingMap:
obj_typ, _ = self.type_map[type(obj_ref)]
yield obj_id, obj_ref, obj_typ
def subkernels(self):
subkernels = {}
for k, v in self.object_forward_map.items():
if hasattr(v, "artiq_embedded"):
if v.artiq_embedded.destination is not None:
subkernels[k] = v
return subkernels
def has_rpc(self):
return any(filter(
lambda x: (inspect.isfunction(x) or inspect.ismethod(x)) and \
(not hasattr(x, "artiq_embedded") or x.artiq_embedded.destination is None),
self.object_forward_map.values()
))
def has_rpc_or_subkernel(self):
return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x),
self.object_forward_map.values()))
@ -208,7 +169,6 @@ class EmbeddingMap:
class ASTSynthesizer:
def __init__(self, embedding_map, value_map, quote_function=None, expanded_from=None):
self.source = ""
self.source_last_new_line = 0
self.source_buffer = source.Buffer(self.source, "<synthesized>")
self.embedding_map = embedding_map
self.value_map = value_map
@ -227,14 +187,6 @@ class ASTSynthesizer:
return source.Range(self.source_buffer, range_from, range_to,
expanded_from=self.expanded_from)
def _add_iterable(self, fragment):
# Since DILocation points on the beginning of the piece of source
# we don't care if the fragment's end will overflow LLVM's limit.
if len(self.source) - self.source_last_new_line >= 2**16:
fragment = "\\\n" + fragment
self.source_last_new_line = len(self.source) + 2
return self._add(fragment)
def fast_quote_list(self, value):
elts = [None] * len(value)
is_T = False
@ -293,7 +245,7 @@ class ASTSynthesizer:
for index, elt in enumerate(value):
elts[index] = self.quote(elt)
if index < len(value) - 1:
self._add_iterable(", ")
self._add(", ")
return elts
def quote(self, value):
@ -344,28 +296,28 @@ class ASTSynthesizer:
loc=self._add(repr(value)))
elif isinstance(value, str):
return asttyped.StrT(s=value, ctx=None, type=builtins.TStr(),
loc=self._add_iterable(repr(value)))
loc=self._add(repr(value)))
elif isinstance(value, bytes):
return asttyped.StrT(s=value, ctx=None, type=builtins.TBytes(),
loc=self._add_iterable(repr(value)))
loc=self._add(repr(value)))
elif isinstance(value, bytearray):
quote_loc = self._add_iterable('`')
repr_loc = self._add_iterable(repr(value))
unquote_loc = self._add_iterable('`')
quote_loc = self._add('`')
repr_loc = self._add(repr(value))
unquote_loc = self._add('`')
loc = quote_loc.join(unquote_loc)
return asttyped.QuoteT(value=value, type=builtins.TByteArray(), loc=loc)
elif isinstance(value, list):
begin_loc = self._add_iterable("[")
begin_loc = self._add("[")
elts = self.fast_quote_list(value)
end_loc = self._add_iterable("]")
end_loc = self._add("]")
return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(),
begin_loc=begin_loc, end_loc=end_loc,
loc=begin_loc.join(end_loc))
elif isinstance(value, tuple):
begin_loc = self._add_iterable("(")
begin_loc = self._add("(")
elts = self.fast_quote_list(value)
end_loc = self._add_iterable(")")
end_loc = self._add(")")
return asttyped.TupleT(elts=elts, ctx=None,
type=types.TTuple([e.type for e in elts]),
begin_loc=begin_loc, end_loc=end_loc,
@ -375,9 +327,7 @@ class ASTSynthesizer:
elif inspect.isfunction(value) or inspect.ismethod(value) or \
isinstance(value, pytypes.BuiltinFunctionType) or \
isinstance(value, SpecializedFunction) or \
isinstance(value, numpy.ufunc) or \
(isinstance(value, _ArrayFunctionDispatcher) if
_ArrayFunctionDispatcher is not None else False):
isinstance(value, numpy.ufunc):
if inspect.ismethod(value):
quoted_self = self.quote(value.__self__)
function_type = self.quote_function(value.__func__, self.expanded_from)
@ -486,7 +436,7 @@ class ASTSynthesizer:
return asttyped.QuoteT(value=value, type=instance_type,
loc=loc)
def call(self, callee, args, kwargs, callback=None, remote_fn=False):
def call(self, callee, args, kwargs, callback=None):
"""
Construct an AST fragment calling a function specified by
an AST node `function_node`, with given arguments.
@ -530,7 +480,7 @@ class ASTSynthesizer:
starargs=None, kwargs=None,
type=types.TVar(), iodelay=None, arg_exprs={},
begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None,
loc=callee_node.loc.join(end_loc), remote_fn=remote_fn)
loc=callee_node.loc.join(end_loc))
if callback is not None:
node = asttyped.CallT(
@ -565,7 +515,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
arg=node.arg, annotation=None,
arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc)
def visit_quoted_function(self, node, function, remote_fn):
def visit_quoted_function(self, node, function):
extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
extractor.visit(node)
@ -582,11 +532,11 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
node = asttyped.QuotedFunctionDefT(
typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
signature_type=types.TVar(), return_type=types.TVar(),
name=node.name, args=node.args, returns=None,
name=node.name, args=node.args, returns=node.returns,
body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc, remote_fn=remote_fn)
loc=node.loc)
try:
self.env_stack.append(node.typing_env)
@ -794,7 +744,7 @@ class TypedtreeHasher(algorithm.Visitor):
return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields))
class Stitcher:
def __init__(self, core, dmgr, engine=None, print_as_rpc=True, destination=0, subkernel_arg_types=[]):
def __init__(self, core, dmgr, engine=None, print_as_rpc=True):
self.core = core
self.dmgr = dmgr
if engine is None:
@ -820,19 +770,11 @@ class Stitcher:
self.value_map = defaultdict(lambda: [])
self.definitely_changed = False
self.destination = destination
self.first_call = True
# for non-annotated subkernels:
# main kernel inferencer output with types of arguments
self.subkernel_arg_types = subkernel_arg_types
def stitch_call(self, function, args, kwargs, callback=None):
# We synthesize source code for the initial call so that
# diagnostics would have something meaningful to display to the user.
synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function))
# first call of a subkernel will get its arguments from remote (DRTIO)
remote_fn = self.destination != 0
call_node = synthesizer.call(function, args, kwargs, callback, remote_fn=remote_fn)
call_node = synthesizer.call(function, args, kwargs, callback)
synthesizer.finalize()
self.typedtree.append(call_node)
@ -944,10 +886,6 @@ class Stitcher:
return [diagnostic.Diagnostic("note",
"in kernel function here", {},
call_loc)]
elif fn_kind == 'subkernel':
return [diagnostic.Diagnostic("note",
"in subkernel call here", {},
call_loc)]
else:
assert False
else:
@ -967,7 +905,7 @@ class Stitcher:
self._function_loc(function),
notes=self._call_site_note(loc, fn_kind))
self.engine.process(diag)
elif fn_kind == 'rpc' or fn_kind == 'subkernel' and param.default is not inspect.Parameter.empty:
elif fn_kind == 'rpc' and param.default is not inspect.Parameter.empty:
notes = []
notes.append(diagnostic.Diagnostic("note",
"expanded from here while trying to infer a type for an"
@ -986,18 +924,11 @@ class Stitcher:
Inferencer(engine=self.engine).visit(ast)
IntMonomorphizer(engine=self.engine).visit(ast)
return ast.type
elif fn_kind == 'kernel' and self.first_call and self.destination != 0:
# subkernels do not have access to the main kernel code to infer
# arg types - so these are cached and passed onto subkernel
# compilation, to avoid having to annotate them fully
for name, typ in self.subkernel_arg_types:
if param.name == name:
return typ
else:
# Let the rest of the program decide.
return types.TVar()
def _quote_embedded_function(self, function, flags, remote_fn=False):
def _quote_embedded_function(self, function, flags):
# we are now parsing new functions... definitely changed the type
self.definitely_changed = True
@ -1096,7 +1027,7 @@ class Stitcher:
engine=self.engine, prelude=self.prelude,
globals=self.globals, host_environment=host_environment,
quote=self._quote)
function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function, remote_fn)
function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function)
function_node.flags = flags
# Add it into our typedtree so that it gets inferenced and codegen'd.
@ -1108,6 +1039,9 @@ class Stitcher:
return function_node
def _extract_annot(self, function, annot, kind, call_loc, fn_kind):
if annot is None:
annot = builtins.TNone()
if isinstance(function, SpecializedFunction):
host_function = function.host_function
else:
@ -1121,20 +1055,9 @@ class Stitcher:
if isinstance(embedded_function, str):
embedded_function = host_function
return self._to_artiq_type(
annot,
function=function,
kind=kind,
eval_in_scope=lambda x: eval(x, embedded_function.__globals__),
call_loc=call_loc,
fn_kind=fn_kind)
def _to_artiq_type(
self, annot, *, function, kind: str, eval_in_scope, call_loc: str, fn_kind: str
) -> types.Type:
if isinstance(annot, str):
try:
annot = eval_in_scope(annot)
annot = eval(annot, embedded_function.__globals__)
except Exception:
diag = diagnostic.Diagnostic(
"error",
@ -1144,72 +1067,23 @@ class Stitcher:
notes=self._call_site_note(call_loc, fn_kind))
self.engine.process(diag)
if isinstance(annot, types.Type):
return annot
# Convert built-in Python types to ARTIQ ones.
if annot is None:
return builtins.TNone()
elif annot is numpy.int64:
return builtins.TInt64()
elif annot is numpy.int32:
return builtins.TInt32()
elif annot is float:
return builtins.TFloat()
elif annot is bool:
return builtins.TBool()
elif annot is str:
return builtins.TStr()
elif annot is bytes:
return builtins.TBytes()
elif annot is bytearray:
return builtins.TByteArray()
# Convert generic Python types to ARTIQ ones.
generic_ty = typing.get_origin(annot)
if generic_ty is not None:
type_args = typing.get_args(annot)
artiq_args = [
self._to_artiq_type(
x,
function=function,
kind=kind,
eval_in_scope=eval_in_scope,
call_loc=call_loc,
fn_kind=fn_kind)
for x in type_args
]
if generic_ty is list and len(artiq_args) == 1:
return builtins.TList(artiq_args[0])
elif generic_ty is tuple:
return types.TTuple(artiq_args)
# Otherwise report an unknown type and just use a fresh tyvar.
if annot is int:
message = (
"type annotation for {kind}, 'int' cannot be used as an ARTIQ type. "
"Use numpy's int32 or int64 instead."
)
ty = builtins.TInt()
else:
message = "type annotation for {kind}, '{annot}', is not an ARTIQ type"
ty = types.TVar()
if not isinstance(annot, types.Type):
diag = diagnostic.Diagnostic("error",
message,
"type annotation for {kind}, '{annot}', is not an ARTIQ type",
{"kind": kind, "annot": repr(annot)},
self._function_loc(function),
notes=self._call_site_note(call_loc, fn_kind))
self.engine.process(diag)
return ty
return types.TVar()
else:
return annot
def _quote_syscall(self, function, loc):
signature = inspect.signature(function)
arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error",
@ -1247,40 +1121,6 @@ class Stitcher:
self.functions[function] = function_type
return function_type
def _quote_subkernel(self, function, loc):
if isinstance(function, SpecializedFunction):
host_function = function.host_function
else:
host_function = function
ret_type = builtins.TNone()
signature = inspect.signature(host_function)
if signature.return_annotation is not inspect.Signature.empty:
ret_type = self._extract_annot(host_function, signature.return_annotation,
"return type", loc, fn_kind='subkernel')
arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error",
"subkernels must only use positional arguments; '{argument}' isn't",
{"argument": param.name},
self._function_loc(function),
notes=self._call_site_note(loc, fn_kind='subkernel'))
self.engine.process(diag)
arg_type = self._type_of_param(function, loc, param, fn_kind='subkernel')
if param.default is inspect.Parameter.empty:
arg_types[param.name] = arg_type
else:
optarg_types[param.name] = arg_type
function_type = types.TSubkernel(arg_types, optarg_types, ret_type,
sid=self.embedding_map.store_object(host_function),
destination=host_function.artiq_embedded.destination)
self.functions[function] = function_type
return function_type
def _quote_rpc(self, function, loc):
if isinstance(function, SpecializedFunction):
host_function = function.host_function
@ -1340,18 +1180,8 @@ class Stitcher:
(host_function.artiq_embedded.core_name is None and
host_function.artiq_embedded.portable is False and
host_function.artiq_embedded.syscall is None and
host_function.artiq_embedded.destination is None and
host_function.artiq_embedded.forbidden is False):
self._quote_rpc(function, loc)
elif host_function.artiq_embedded.destination is not None and \
host_function.artiq_embedded.destination != self.destination:
# treat subkernels as kernels if running on the same device
if not 0 < host_function.artiq_embedded.destination <= 255:
diag = diagnostic.Diagnostic("error",
"subkernel destination must be between 1 and 255 (inclusive)", {},
self._function_loc(host_function))
self.engine.process(diag)
self._quote_subkernel(function, loc)
elif host_function.artiq_embedded.function is not None:
if host_function.__name__ == "<lambda>":
note = diagnostic.Diagnostic("note",
@ -1375,13 +1205,8 @@ class Stitcher:
notes=[note])
self.engine.process(diag)
destination = host_function.artiq_embedded.destination
# remote_fn only for first call in subkernels
remote_fn = destination is not None and self.first_call
self._quote_embedded_function(function,
flags=host_function.artiq_embedded.flags,
remote_fn=remote_fn)
self.first_call = False
flags=host_function.artiq_embedded.flags)
elif host_function.artiq_embedded.syscall is not None:
# Insert a storage-less global whose type instructs the compiler
# to perform a system call instead of a regular call.

View File

@ -706,81 +706,6 @@ class SetLocal(Instruction):
def value(self):
return self.operands[1]
class GetArgFromRemote(Instruction):
"""
An instruction that receives function arguments from remote
(ie. subkernel in DRTIO context)
:ivar arg_name: (string) argument name
:ivar arg_type: argument type
"""
"""
:param arg_name: (string) argument name
:param arg_type: argument type
"""
def __init__(self, arg_name, arg_type, name=""):
assert isinstance(arg_name, str)
super().__init__([], arg_type, name)
self.arg_name = arg_name
self.arg_type = arg_type
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.arg_name = self.arg_name
self_copy.arg_type = self.arg_type
return self_copy
def opcode(self):
return "getargfromremote({})".format(repr(self.arg_name))
class GetOptArgFromRemote(GetArgFromRemote):
"""
An instruction that may or may not retrieve an optional function argument
from remote, depending on number of values received by firmware.
:ivar rcv_count: number of received values,
determined by firmware
:ivar index: (integer) index of the current argument,
in reference to remote arguments
"""
"""
:param rcv_count: number of received valuese
:param index: (integer) index of the current argument,
in reference to remote arguments
"""
def __init__(self, arg_name, arg_type, rcv_count, index, name=""):
super().__init__(arg_name, arg_type, name)
self.rcv_count = rcv_count
self.index = index
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.rcv_count = self.rcv_count
self_copy.index = self.index
return self_copy
def opcode(self):
return "getoptargfromremote({})".format(repr(self.arg_name))
class SubkernelAwaitArgs(Instruction):
"""
A builtin instruction that takes min and max received messages as operands,
and a list of received types.
:ivar arg_types: (list of types) types of passed arguments (including optional)
"""
"""
:param arg_types: (list of types) types of passed arguments (including optional)
"""
def __init__(self, operands, arg_types, name=None):
assert isinstance(arg_types, list)
self.arg_types = arg_types
super().__init__(operands, builtins.TNone(), name)
class GetAttr(Instruction):
"""
An intruction that loads an attribute from an object,
@ -803,7 +728,7 @@ class GetAttr(Instruction):
typ = obj.type.attributes[attr]
else:
typ = obj.type.constructor.attributes[attr]
if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
if types.is_function(typ) or types.is_rpc(typ):
typ = types.TMethod(obj.type, typ)
super().__init__([obj], typ, name)
self.attr = attr
@ -1265,18 +1190,14 @@ class IndirectBranch(Terminator):
class Return(Terminator):
"""
A return instruction.
:param remote_return: (bool)
marks a return in subkernel context,
where the return value is sent back through DRTIO
"""
"""
:param value: (:class:`Value`) return value
"""
def __init__(self, value, remote_return=False, name=""):
def __init__(self, value, name=""):
assert isinstance(value, Value)
super().__init__([value], builtins.TNone(), name)
self.remote_return = remote_return
def opcode(self):
return "return"

View File

@ -33,19 +33,9 @@ SECTIONS
KEEP(*(.eh_frame_hdr))
} : text : eh_frame
.got :
{
*(.got)
} : text
.got.plt :
{
*(.got.plt)
} : text
.data :
{
*(.data .data.*)
*(.data)
} : data
.dynamic :
@ -61,10 +51,6 @@ SECTIONS
_end = .;
}
/* Kernel stack grows downward from end of memory, so put guard page after
* all the program contents. Note: This requires all loaded sections (at
* least those accessed) to be explicitly listed in the above!
*/
. = ALIGN(0x1000);
_sstack_guard = .;
}

View File

@ -84,8 +84,6 @@ class Module:
constant_hoister.process(self.artiq_ir)
if remarks:
invariant_detection.process(self.artiq_ir)
# for subkernels: main kernel inferencer output, to be passed to further compilations
self.subkernel_arg_types = inferencer.subkernel_arg_types
def build_llvm_ir(self, target):
"""Compile the module to LLVM IR for the specified target."""

View File

@ -37,7 +37,6 @@ def globals():
# ARTIQ decorators
"kernel": builtins.fn_kernel(),
"subkernel": builtins.fn_kernel(),
"portable": builtins.fn_kernel(),
"rpc": builtins.fn_kernel(),
@ -55,8 +54,4 @@ def globals():
# ARTIQ utility functions
"rtio_log": builtins.fn_rtio_log(),
"core_log": builtins.fn_print(),
# ARTIQ subkernel utility functions
"subkernel_await": builtins.fn_subkernel_await(),
"subkernel_preload": builtins.fn_subkernel_preload(),
}

View File

@ -94,9 +94,8 @@ class Target:
tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt"
def __init__(self, subkernel_id=None):
def __init__(self):
self.llcontext = ll.Context()
self.subkernel_id = subkernel_id
def target_machine(self):
lltarget = llvm.Target.from_triple(self.triple)
@ -149,8 +148,7 @@ class Target:
ir.BasicBlock._dump_loc = False
type_printer = types.TypePrinter()
suffix = "_subkernel_{}".format(self.subkernel_id) if self.subkernel_id is not None else ""
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", suffix + ".txt",
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", ".txt",
lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir))
llmod = module.build_llvm_ir(self)
@ -162,12 +160,12 @@ class Target:
_dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod))
raise
_dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", suffix + "_unopt.ll",
_dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll",
lambda: str(llparsedmod))
self.optimize(llparsedmod)
_dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", suffix + ".ll",
_dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll",
lambda: str(llparsedmod))
return llparsedmod
@ -258,8 +256,6 @@ class Target:
return backtrace
def demangle(self, names):
if not any(names):
return names
with RunTool([self.tool_cxxfilt] + names) as results:
return results["__stdout__"].read().rstrip().split("\n")
@ -267,13 +263,13 @@ class NativeTarget(Target):
def __init__(self):
super().__init__()
self.triple = llvm.get_default_triple()
self.data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
host_data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
class RV32IMATarget(Target):
triple = "riscv32-unknown-linux"
data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
features = ["m", "a"]
additional_linker_options = ["-m", "elf32lriscv"]
additional_linker_options = []
print_function = "core_log"
now_pinning = True
@ -286,7 +282,7 @@ class RV32GTarget(Target):
triple = "riscv32-unknown-linux"
data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
features = ["m", "a", "f", "d"]
additional_linker_options = ["-m", "elf32lriscv"]
additional_linker_options = []
print_function = "core_log"
now_pinning = True
@ -299,7 +295,7 @@ class CortexA9Target(Target):
triple = "armv7-unknown-linux-gnueabihf"
data_layout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
features = ["dsp", "fp16", "neon", "vfp3"]
additional_linker_options = ["-m", "armelf_linux_eabi", "--target2=rel"]
additional_linker_options = ["--target2=rel"]
print_function = "core_log"
now_pinning = False
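# Illustration of the per-subkernel dump suffix added above: when a Target is
# constructed with a subkernel_id, intermediate representation dumps get a
# "_subkernel_<id>" suffix so they do not overwrite the main kernel's files.
# The base name below is a placeholder for whatever ARTIQ_DUMP_IR points to.
subkernel_id = 3
suffix = "_subkernel_{}".format(subkernel_id) if subkernel_id is not None else ""
print("kernel" + suffix + ".txt")   # kernel_subkernel_3.txt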

View File

@ -30,9 +30,8 @@ def main():
device_db_path = os.path.join(os.path.dirname(sys.argv[1]), "device_db.py")
device_mgr = DeviceManager(DeviceDB(device_db_path))
dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.mdb")
dataset_db = DatasetDB(dataset_db_path)
dataset_mgr = DatasetManager()
dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.pyon")
dataset_mgr = DatasetManager(DatasetDB(dataset_db_path))
argument_mgr = ProcessArgumentManager({})
@ -69,7 +68,5 @@ def main():
benchmark(lambda: target.strip(elf_shlib),
"Stripping debug information")
dataset_db.close_db()
if __name__ == "__main__":
main()

View File

@ -108,7 +108,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_args = None
self.current_assign = None
self.current_exception = None
self.current_remote_fn = False
self.break_target = None
self.continue_target = None
self.return_target = None
@ -212,8 +211,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
old_priv_env, self.current_private_env = self.current_private_env, priv_env
self.generic_visit(node)
self.terminate(ir.Return(ir.Constant(None, builtins.TNone()),
remote_return=self.current_remote_fn))
self.terminate(ir.Return(ir.Constant(None, builtins.TNone())))
return self.functions
finally:
@ -296,8 +294,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
old_block, self.current_block = self.current_block, entry
old_globals, self.current_globals = self.current_globals, node.globals_in_scope
old_remote_fn = self.current_remote_fn
self.current_remote_fn = getattr(node, "remote_fn", False)
env_without_globals = \
{var: node.typing_env[var]
@ -330,8 +326,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.terminate(ir.Return(result))
elif builtins.is_none(typ.ret):
if not self.current_block.is_terminated():
self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone()),
remote_return=self.current_remote_fn))
self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone())))
else:
if not self.current_block.is_terminated():
if len(self.current_block.predecessors()) != 0:
@ -350,7 +345,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_block = old_block
self.current_globals = old_globals
self.current_env = old_env
self.current_remote_fn = old_remote_fn
if not is_lambda:
self.current_private_env = old_priv_env
@ -373,8 +367,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
return_value = self.visit(node.value)
if self.return_target is None:
self.append(ir.Return(return_value,
remote_return=self.current_remote_fn))
self.append(ir.Return(return_value))
else:
self.append(ir.SetLocal(self.current_private_env, "$return", return_value))
self.append(ir.Branch(self.return_target))
@ -1205,27 +1198,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
finally:
self.current_assign = old_assign
if types.is_tuple(node.value.type):
assert isinstance(node.slice, ast.Index), \
"Internal compiler error: tuple index should be an Index"
assert isinstance(node.slice.value, ast.Num), \
"Internal compiler error: tuple index should be a constant"
if self.current_assign is not None:
diag = diagnostic.Diagnostic("error",
"cannot assign to a tuple element",
{}, node.loc)
self.engine.process(diag)
index = node.slice.value.n
indexed = self.append(
ir.GetAttr(value, index, name="{}.e{}".format(value.name, index)),
loc=node.loc
)
return indexed
elif isinstance(node.slice, ast.Index):
if isinstance(node.slice, ast.Index):
try:
old_assign, self.current_assign = self.current_assign, None
index = self.visit(node.slice.value)
@ -2531,33 +2504,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
or types.is_builtin(typ, "at_mu"):
return self.append(ir.Builtin(typ.name,
[self.visit(arg) for arg in node.args], node.type))
elif types.is_builtin(typ, "subkernel_await"):
if len(node.args) == 2 and len(node.keywords) == 0:
fn = node.args[0].type
timeout = self.visit(node.args[1])
elif len(node.args) == 1 and len(node.keywords) == 0:
fn = node.args[0].type
timeout = ir.Constant(10_000, builtins.TInt64())
else:
assert False
if types.is_method(fn):
fn = types.get_method_function(fn)
sid = ir.Constant(fn.sid, builtins.TInt32())
if not builtins.is_none(fn.ret):
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
else:
ret = ir.Constant(None, builtins.TNone())
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
return ret
elif types.is_builtin(typ, "subkernel_preload"):
if len(node.args) == 1 and len(node.keywords) == 0:
fn = node.args[0].type
else:
assert False
if types.is_method(fn):
fn = types.get_method_function(fn)
sid = ir.Constant(fn.sid, builtins.TInt32())
return self.append(ir.Builtin("subkernel_preload", [sid], builtins.TNone()))
elif types.is_exn_constructor(typ):
return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args])
elif types.is_constructor(typ):
@ -2569,8 +2515,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
node.loc)
self.engine.process(diag)
def _user_call(self, callee, positional, keywords, arg_exprs={}, remote_fn=False):
if types.is_function(callee.type) or types.is_rpc(callee.type) or types.is_subkernel(callee.type):
def _user_call(self, callee, positional, keywords, arg_exprs={}):
if types.is_function(callee.type) or types.is_rpc(callee.type):
func = callee
self_arg = None
fn_typ = callee.type
@ -2585,51 +2531,16 @@ class ARTIQIRGenerator(algorithm.Visitor):
else:
assert False
if types.is_rpc(fn_typ) or types.is_subkernel(fn_typ):
if self_arg is None or types.is_subkernel(fn_typ):
# self is not passed to subkernels by remote
if types.is_rpc(fn_typ):
if self_arg is None:
args = positional
elif self_arg is not None:
else:
args = [self_arg] + positional
for keyword in keywords:
arg = keywords[keyword]
args.append(self.append(ir.Alloc([ir.Constant(keyword, builtins.TStr()), arg],
ir.TKeyword(arg.type))))
elif remote_fn:
assert self_arg is None
assert len(fn_typ.args) >= len(positional)
assert len(keywords) == 0 # no keyword support
args = [None] * fn_typ.arity()
index = 0
# fill in first available args
for arg in positional:
args[index] = arg
index += 1
# remaining args are received through DRTIO
if index < len(args):
# min/max args received remotely (minus already filled)
offset = index
min_args = ir.Constant(len(fn_typ.args)-offset, builtins.TInt8())
max_args = ir.Constant(fn_typ.arity()-offset, builtins.TInt8())
arg_types = list(fn_typ.args.items())[offset:]
arg_type_list = [a[1] for a in arg_types] + [a[1] for a in fn_typ.optargs.items()]
rcvd_count = self.append(ir.SubkernelAwaitArgs([min_args, max_args], arg_type_list))
# obligatory arguments
for arg_name, arg_type in arg_types:
args[index] = self.append(ir.GetArgFromRemote(arg_name, arg_type,
name="ARG.{}".format(arg_name)))
index += 1
# optional arguments
for optarg_name, optarg_type in fn_typ.optargs.items():
idx = ir.Constant(index-offset, builtins.TInt8())
args[index] = \
self.append(ir.GetOptArgFromRemote(optarg_name, optarg_type, rcvd_count, idx))
index += 1
else:
args = [None] * (len(fn_typ.args) + len(fn_typ.optargs))
@ -2715,8 +2626,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
else:
assert False, "Broadcasting for {} arguments not implemented".format(len)
else:
remote_fn = getattr(node, "remote_fn", False)
insn = self._user_call(callee, args, keywords, node.arg_exprs, remote_fn)
insn = self._user_call(callee, args, keywords, node.arg_exprs)
if isinstance(node.func, asttyped.AttributeT):
attr_node = node.func
self.method_map[(attr_node.value.type.find(),

View File

@ -238,7 +238,7 @@ class ASTTypedRewriter(algorithm.Transformer):
body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc, remote_fn=False)
loc=node.loc)
try:
self.env_stack.append(node.typing_env)
@ -440,8 +440,7 @@ class ASTTypedRewriter(algorithm.Transformer):
def visit_Call(self, node):
node = self.generic_visit(node)
node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={},
remote_fn=False, func=node.func,
args=node.args, keywords=node.keywords,
func=node.func, args=node.args, keywords=node.keywords,
starargs=node.starargs, kwargs=node.kwargs,
star_loc=node.star_loc, dstar_loc=node.dstar_loc,
begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc)

View File

@ -46,7 +46,6 @@ class Inferencer(algorithm.Visitor):
self.function = None # currently visited function, for Return inference
self.in_loop = False
self.has_return = False
self.subkernel_arg_types = dict()
def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""):
try:
@ -179,7 +178,7 @@ class Inferencer(algorithm.Visitor):
# Convert to a method.
attr_type = types.TMethod(object_type, attr_type)
self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc)
elif types.is_rpc(attr_type) or types.is_subkernel(attr_type):
elif types.is_rpc(attr_type):
# Convert to a method. We don't have to bother typechecking
# the self argument, since for RPCs anything goes.
attr_type = types.TMethod(object_type, attr_type)
@ -260,31 +259,7 @@ class Inferencer(algorithm.Visitor):
def visit_SubscriptT(self, node):
self.generic_visit(node)
if types.is_tuple(node.value.type):
if (not isinstance(node.slice, ast.Index) or
not isinstance(node.slice.value, ast.Num)):
diag = diagnostic.Diagnostic(
"error", "tuples can only be indexed by a constant", {},
node.slice.loc, []
)
self.engine.process(diag)
return
tuple_type = node.value.type.find()
index = node.slice.value.n
if index < 0 or index >= len(tuple_type.elts):
diag = diagnostic.Diagnostic(
"error",
"index {index} is out of range for tuple of size {size}",
{"index": index, "size": len(tuple_type.elts)},
node.slice.loc, []
)
self.engine.process(diag)
return
self._unify(node.type, tuple_type.elts[index], node.loc, node.value.loc)
elif isinstance(node.slice, ast.Index):
if isinstance(node.slice, ast.Index):
if types.is_tuple(node.slice.value.type):
if types.is_var(node.value.type):
return
@ -1294,55 +1269,6 @@ class Inferencer(algorithm.Visitor):
# Ignored.
self._unify(node.type, builtins.TNone(),
node.loc, None)
elif types.is_builtin(typ, "subkernel_await"):
valid_forms = lambda: [
valid_form("subkernel_await(f: subkernel) -> f return type"),
valid_form("subkernel_await(f: subkernel, timeout: numpy.int64) -> f return type")
]
if 1 <= len(node.args) <= 2:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
if len(node.args) == 2:
arg1 = node.args[1]
if types.is_var(arg1.type):
pass
elif builtins.is_int(arg1.type):
# promote to TInt64
self._unify(arg1.type, builtins.TInt64(),
arg1.loc, None)
else:
diagnose(valid_forms())
else:
diagnose(valid_forms())
elif types.is_builtin(typ, "subkernel_preload"):
valid_forms = lambda: [
valid_form("subkernel_preload(f: subkernel) -> None")
]
if len(node.args) == 1:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
else:
diagnose(valid_forms())
else:
assert False
@ -1381,7 +1307,6 @@ class Inferencer(algorithm.Visitor):
typ_args = typ.args
typ_optargs = typ.optargs
typ_ret = typ.ret
typ_func = typ
else:
typ_self = types.get_method_self(typ)
typ_func = types.get_method_function(typ)
@ -1439,23 +1364,12 @@ class Inferencer(algorithm.Visitor):
other_node=node.args[0])
self._unify(node.type, ret, node.loc, None)
return
if types.is_subkernel(typ_func) and typ_func.sid not in self.subkernel_arg_types:
self.subkernel_arg_types[typ_func.sid] = []
for actualarg, (formalname, formaltyp) in \
zip(node.args, list(typ_args.items()) + list(typ_optargs.items())):
self._unify(actualarg.type, formaltyp,
actualarg.loc, None)
passed_args[formalname] = actualarg.loc
if types.is_subkernel(typ_func):
if types.is_instance(actualarg.type):
# objects cannot be passed to subkernels, as rpc code doesn't support them
diag = diagnostic.Diagnostic("error",
"argument '{name}' of type: {typ} is not supported in subkernels",
{"name": formalname, "typ": actualarg.type},
actualarg.loc, [])
self.engine.process(diag)
self.subkernel_arg_types[typ_func.sid].append((formalname, formaltyp))
for keyword in node.keywords:
if keyword.arg in passed_args:
@ -1486,7 +1400,7 @@ class Inferencer(algorithm.Visitor):
passed_args[keyword.arg] = keyword.arg_loc
for formalname in typ_args:
if formalname not in passed_args and not node.remote_fn:
if formalname not in passed_args:
note = diagnostic.Diagnostic("note",
"the called function is of type {type}",
{"type": types.TypePrinter().name(node.func.type)},

View File

@ -280,7 +280,7 @@ class IODelayEstimator(algorithm.Visitor):
context="as an argument for delay_mu()")
call_delay = value
elif not types.is_builtin(typ):
if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
if types.is_function(typ) or types.is_rpc(typ):
offset = 0
elif types.is_method(typ):
offset = 1
@ -288,7 +288,7 @@ class IODelayEstimator(algorithm.Visitor):
else:
assert False
if types.is_rpc(typ) or types.is_subkernel(typ):
if types.is_rpc(typ):
call_delay = iodelay.Const(0)
else:
delay = typ.find().delay.find()
@ -311,7 +311,6 @@ class IODelayEstimator(algorithm.Visitor):
args[arg_name] = arg_node
free_vars = delay.duration.free_vars()
try:
node.arg_exprs = {
arg: self.evaluate(args[arg], abort=abort,
context="in the expression for argument '{}' "
@ -319,12 +318,6 @@ class IODelayEstimator(algorithm.Visitor):
for arg in free_vars
}
call_delay = delay.duration.fold(node.arg_exprs)
except KeyError as e:
if getattr(node, "remote_fn", False):
note = diagnostic.Diagnostic("note",
"function called here", {},
node.loc)
self.abort("due to arguments passed remotely", node.loc, note)
else:
assert False
else:

View File

@ -177,15 +177,6 @@ class LLVMIRGenerator:
self.empty_metadata = self.llmodule.add_metadata([])
self.quote_fail_msg = None
# Maximum alignment required according to the target platform ABI. As this is
# not directly exposed by LLVM, just take the maximum across all the "big"
# elementary types we use. (Vector types, should we ever support them, are
# likely contenders for even larger alignment requirements.)
self.max_target_alignment = max(map(
lambda t: self.abi_layout_info.get_size_align(t)[1],
[lli64, lldouble, llptr]
))
def add_pred(self, pred, block):
if block not in self.llpred_map:
self.llpred_map[block] = set()
@ -215,7 +206,7 @@ class LLVMIRGenerator:
typ = typ.find()
if types.is_tuple(typ):
return ll.LiteralStructType([self.llty_of_type(eltty) for eltty in typ.elts])
elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_subkernel(typ):
elif types.is_rpc(typ) or types.is_external_function(typ):
if for_return:
return llvoid
else:
@ -344,8 +335,8 @@ class LLVMIRGenerator:
else:
value = const.value
llptr = self.llstr_of_str(value, linkage="private", unnamed_addr=True)
lllen = ll.Constant(lli32, len(value))
llptr = self.llstr_of_str(const.value, linkage="private", unnamed_addr=True)
lllen = ll.Constant(lli32, len(const.value))
return ll.Constant(llty, (llptr, lllen))
else:
assert False
@ -398,15 +389,6 @@ class LLVMIRGenerator:
elif name == "rpc_recv":
llty = ll.FunctionType(lli32, [llptr])
elif name == "subkernel_send_message":
llty = ll.FunctionType(llvoid, [lli32, lli8, llsliceptr, llptrptr])
elif name == "subkernel_load_run":
llty = ll.FunctionType(llvoid, [lli32, lli1])
elif name == "subkernel_await_finish":
llty = ll.FunctionType(llvoid, [lli32, lli64])
elif name == "subkernel_await_message":
llty = ll.FunctionType(lli8, [lli32, lli64, llsliceptr, lli8, lli8])
# with now-pinning
elif name == "now":
llty = lli64
@ -883,53 +865,6 @@ class LLVMIRGenerator:
llvalue = self.llbuilder.bitcast(llvalue, llptr.type.pointee)
return self.llbuilder.store(llvalue, llptr)
def process_GetArgFromRemote(self, insn):
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
llval = self._build_rpc_recv(insn.arg_type, llstackptr)
return llval
def process_GetOptArgFromRemote(self, insn):
# optarg = index < rcv_count ? Some(rcv_recv()) : None
llhead = self.llbuilder.basic_block
llrcv = self.llbuilder.append_basic_block(name="optarg.get.{}".format(insn.arg_name))
# argument received
self.llbuilder.position_at_end(llrcv)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
llval = self._build_rpc_recv(insn.arg_type, llstackptr)
llrpcretblock = self.llbuilder.basic_block # 'return' from rpc_recv, will be needed later
# create the tail block, needs to be after the rpc recv tail block
lltail = self.llbuilder.append_basic_block(name="optarg.tail.{}".format(insn.arg_name))
self.llbuilder.branch(lltail)
# go back to head to add a branch to the tail
self.llbuilder.position_at_end(llhead)
llargrcvd = self.llbuilder.icmp_unsigned("<", self.map(insn.index), self.map(insn.rcv_count))
self.llbuilder.cbranch(llargrcvd, llrcv, lltail)
# argument not received/after arg recvd
self.llbuilder.position_at_end(lltail)
llargtype = self.llty_of_type(insn.arg_type)
llphi_arg_present = self.llbuilder.phi(lli1, name="optarg.phi.present.{}".format(insn.arg_name))
llphi_arg = self.llbuilder.phi(llargtype, name="optarg.phi.{}".format(insn.arg_name))
llphi_arg_present.add_incoming(ll.Constant(lli1, 0), llhead)
llphi_arg.add_incoming(ll.Constant(llargtype, ll.Undefined), llhead)
llphi_arg_present.add_incoming(ll.Constant(lli1, 1), llrpcretblock)
llphi_arg.add_incoming(llval, llrpcretblock)
lloptarg = ll.Constant(ll.LiteralStructType([lli1, llargtype]), ll.Undefined)
lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg_present, 0)
lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg, 1)
return lloptarg
def attr_index(self, typ, attr):
return list(typ.attributes.keys()).index(attr)
@ -954,8 +889,8 @@ class LLVMIRGenerator:
def get_global_closure_ptr(self, typ, attr):
closure_type = typ.attributes[attr]
assert types.is_constructor(typ)
assert types.is_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type)
if types.is_external_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type):
assert types.is_function(closure_type) or types.is_rpc(closure_type)
if types.is_external_function(closure_type) or types.is_rpc(closure_type):
return None
llty = self.llty_of_type(typ.attributes[attr])
@ -1400,36 +1335,9 @@ class LLVMIRGenerator:
return self.llbuilder.call(self.llbuiltin("delay_mu"), [llinterval])
elif insn.op == "end_catch":
return self.llbuilder.call(self.llbuiltin("__artiq_end_catch"), [])
elif insn.op == "subkernel_await_finish":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
return self.llbuilder.call(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
name="subkernel.await.finish")
elif insn.op == "subkernel_retrieve_return":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags([insn.type])
self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
name="subkernel.await.message")
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
return self._build_rpc_recv(insn.type, llstackptr)
elif insn.op == "subkernel_preload":
llsid = self.map(insn.operands[0])
return self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, ll.Constant(lli1, 0)],
name="subkernel.preload")
else:
assert False
def process_SubkernelAwaitArgs(self, insn):
llmin = self.map(insn.operands[0])
llmax = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags(insn.arg_types)
return self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
[ll.Constant(lli32, 0), ll.Constant(lli64, 10_000), lltagptr, llmin, llmax],
name="subkernel.await.args")
def process_Closure(self, insn):
llenv = self.map(insn.environment())
llenv = self.llbuilder.bitcast(llenv, llptr)
@ -1509,76 +1417,6 @@ class LLVMIRGenerator:
return llfun, list(llargs), llarg_attrs, llcallstackptr
def _build_subkernel_tags(self, tag_list):
def ret_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
fun_loc)
diag = diagnostic.Diagnostic("error",
"type {type} is not supported in subkernels",
{"type": printer.name(fun_type.ret)},
fun_loc, notes=[note])
self.engine.process(diag)
tag = b"".join([ir.rpc_tag(arg_type, ret_error_handler) for arg_type in tag_list])
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
return lltagptr
def _build_rpc_recv(self, ret, llstackptr, llnormalblock=None, llunwindblock=None):
# T result = {
# void *ret_ptr = alloca(sizeof(T));
# void *ptr = ret_ptr;
# loop: int size = rpc_recv(ptr);
# // Non-zero: Provide `size` bytes of extra storage for variable-length data.
# if(size) { ptr = alloca(size); goto loop; }
# else *(T*)ret_ptr
# }
llprehead = self.llbuilder.basic_block
llhead = self.llbuilder.append_basic_block(name="rpc.head")
if llunwindblock:
llheadu = self.llbuilder.append_basic_block(name="rpc.head.unwind")
llalloc = self.llbuilder.append_basic_block(name="rpc.continue")
lltail = self.llbuilder.append_basic_block(name="rpc.tail")
llretty = self.llty_of_type(ret)
llslot = self.llbuilder.alloca(llretty, name="rpc.ret.alloc")
llslotgen = self.llbuilder.bitcast(llslot, llptr, name="rpc.ret.ptr")
self.llbuilder.branch(llhead)
self.llbuilder.position_at_end(llhead)
llphi = self.llbuilder.phi(llslotgen.type, name="rpc.ptr")
llphi.add_incoming(llslotgen, llprehead)
if llunwindblock:
llsize = self.llbuilder.invoke(self.llbuiltin("rpc_recv"), [llphi],
llheadu, llunwindblock,
name="rpc.size.next")
self.llbuilder.position_at_end(llheadu)
else:
llsize = self.llbuilder.call(self.llbuiltin("rpc_recv"), [llphi],
name="rpc.size.next")
lldone = self.llbuilder.icmp_unsigned('==', llsize, ll.Constant(llsize.type, 0),
name="rpc.done")
self.llbuilder.cbranch(lldone, lltail, llalloc)
self.llbuilder.position_at_end(llalloc)
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
llalloca.align = self.max_target_alignment
llphi.add_incoming(llalloca, llalloc)
self.llbuilder.branch(llhead)
self.llbuilder.position_at_end(lltail)
llret = self.llbuilder.load(llslot, name="rpc.ret")
if not ret.fold(False, lambda r, t: r or builtins.is_allocated(t)):
# We didn't allocate anything except the slot for the value itself.
# Don't waste stack space.
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
if llnormalblock:
self.llbuilder.branch(llnormalblock)
return llret
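# The receive loop built above follows the pseudocode in the comment: call
# rpc_recv repeatedly, and treat every non-zero return value as a request for
# that many extra bytes of scratch storage before retrying. A hedged
# plain-Python paraphrase, with recv() and allocate() as hypothetical
# stand-ins for rpc_recv and the stack allocation:
def recv_with_scratch(recv, allocate):
    result_slot = allocate(0)       # slot for the value itself
    ptr = result_slot
    while True:
        size = recv(ptr)            # 0 means the value is complete
        if size == 0:
            return result_slot
        ptr = allocate(size)        # provide `size` bytes of extra storage

# Example: a value that needs one 16-byte auxiliary buffer before completing.
sizes = iter([16, 0])
recv_with_scratch(lambda ptr: next(sizes), lambda n: bytearray(n))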
def _build_rpc(self, fun_loc, fun_type, args, llnormalblock, llunwindblock):
llservice = ll.Constant(lli32, fun_type.service)
@ -1654,102 +1492,56 @@ class LLVMIRGenerator:
return ll.Undefined
llret = self._build_rpc_recv(fun_type.ret, llstackptr, llnormalblock, llunwindblock)
# T result = {
# void *ret_ptr = alloca(sizeof(T));
# void *ptr = ret_ptr;
# loop: int size = rpc_recv(ptr);
# // Non-zero: Provide `size` bytes of extra storage for variable-length data.
# if(size) { ptr = alloca(size); goto loop; }
# else *(T*)ret_ptr
# }
llprehead = self.llbuilder.basic_block
llhead = self.llbuilder.append_basic_block(name="rpc.head")
if llunwindblock:
llheadu = self.llbuilder.append_basic_block(name="rpc.head.unwind")
llalloc = self.llbuilder.append_basic_block(name="rpc.continue")
lltail = self.llbuilder.append_basic_block(name="rpc.tail")
return llret
llretty = self.llty_of_type(fun_type.ret)
llslot = self.llbuilder.alloca(llretty, name="rpc.ret.alloc")
llslotgen = self.llbuilder.bitcast(llslot, llptr, name="rpc.ret.ptr")
self.llbuilder.branch(llhead)
def _build_subkernel_call(self, fun_loc, fun_type, args):
llsid = ll.Constant(lli32, fun_type.sid)
tag = b""
for arg in args:
def arg_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
arg.loc)
diag = diagnostic.Diagnostic("error",
"type {type} is not supported in subkernel calls",
{"type": printer.name(arg.type)},
arg.loc, notes=[note])
self.engine.process(diag)
tag += ir.rpc_tag(arg.type, arg_error_handler)
tag += b":"
# run the kernel first
self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, ll.Constant(lli1, 1)])
# arg sent in the same vein as RPC
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.stack")
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
if args:
# only send args if there's anything to send, 'self' is excluded
llargs = self.llbuilder.alloca(llptr, ll.Constant(lli32, len(args)),
name="subkernel.args")
for index, arg in enumerate(args):
if builtins.is_none(arg.type):
llargslot = self.llbuilder.alloca(llunit,
name="subkernel.arg{}".format(index))
self.llbuilder.position_at_end(llhead)
llphi = self.llbuilder.phi(llslotgen.type, name="rpc.ptr")
llphi.add_incoming(llslotgen, llprehead)
if llunwindblock:
llsize = self.llbuilder.invoke(self.llbuiltin("rpc_recv"), [llphi],
llheadu, llunwindblock,
name="rpc.size.next")
self.llbuilder.position_at_end(llheadu)
else:
llarg = self.map(arg)
llargslot = self.llbuilder.alloca(llarg.type,
name="subkernel.arg{}".format(index))
self.llbuilder.store(llarg, llargslot)
llargslot = self.llbuilder.bitcast(llargslot, llptr)
llsize = self.llbuilder.call(self.llbuiltin("rpc_recv"), [llphi],
name="rpc.size.next")
lldone = self.llbuilder.icmp_unsigned('==', llsize, ll.Constant(llsize.type, 0),
name="rpc.done")
self.llbuilder.cbranch(lldone, lltail, llalloc)
llargptr = self.llbuilder.gep(llargs, [ll.Constant(lli32, index)])
self.llbuilder.store(llargslot, llargptr)
self.llbuilder.position_at_end(llalloc)
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
llalloca.align = 4 # maximum alignment required by OR1K ABI
llphi.add_incoming(llalloca, llalloc)
self.llbuilder.branch(llhead)
llargcount = ll.Constant(lli8, len(args))
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
[llsid, llargcount, lltagptr, llargs])
self.llbuilder.position_at_end(lltail)
llret = self.llbuilder.load(llslot, name="rpc.ret")
if not fun_type.ret.fold(False, lambda r, t: r or builtins.is_allocated(t)):
# We didn't allocate anything except the slot for the value itself.
# Don't waste stack space.
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
return llsid
def _build_subkernel_return(self, insn):
# builds a remote return.
# unlike args, return only sends one thing.
if builtins.is_none(insn.value().type):
# do not waste time and bandwidth on Nones
return
def ret_error_handler(typ):
printer = types.TypePrinter()
note = diagnostic.Diagnostic("note",
"value of type {type}",
{"type": printer.name(typ)},
fun_loc)
diag = diagnostic.Diagnostic("error",
"return type {type} is not supported in subkernel returns",
{"type": printer.name(fun_type.ret)},
fun_loc, notes=[note])
self.engine.process(diag)
tag = ir.rpc_tag(insn.value().type, ret_error_handler)
tag += b":"
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr)
llrets = self.llbuilder.alloca(llptr, ll.Constant(lli32, 1),
name="subkernel.return")
llret = self.map(insn.value())
llretslot = self.llbuilder.alloca(llret.type, name="subkernel.retval")
self.llbuilder.store(llret, llretslot)
llretslot = self.llbuilder.bitcast(llretslot, llptr)
self.llbuilder.store(llretslot, llrets)
llsid = ll.Constant(lli32, 0) # return goes back to master, sid is ignored
lltagcount = ll.Constant(lli8, 1) # only one thing is returned
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
[llsid, lltagcount, lltagptr, llrets])
if llnormalblock:
self.llbuilder.branch(llnormalblock)
return llret
def process_Call(self, insn):
functiontyp = insn.target_function().type
@ -1758,10 +1550,6 @@ class LLVMIRGenerator:
functiontyp,
insn.arguments(),
llnormalblock=None, llunwindblock=None)
elif types.is_subkernel(functiontyp):
return self._build_subkernel_call(insn.target_function().loc,
functiontyp,
insn.arguments())
elif types.is_external_function(functiontyp):
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
else:
@ -1798,11 +1586,6 @@ class LLVMIRGenerator:
functiontyp,
insn.arguments(),
llnormalblock, llunwindblock)
elif types.is_subkernel(functiontyp):
return self._build_subkernel_call(insn.target_function().loc,
functiontyp,
insn.arguments(),
llnormalblock, llunwindblock)
elif types.is_external_function(functiontyp):
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
else:
@ -1881,8 +1664,7 @@ class LLVMIRGenerator:
attrvalue = getattr(value, attr)
is_class_function = (types.is_constructor(typ) and
types.is_function(typ.attributes[attr]) and
not types.is_external_function(typ.attributes[attr]) and
not types.is_subkernel(typ.attributes[attr]))
not types.is_external_function(typ.attributes[attr]))
if is_class_function:
attrvalue = self.embedding_map.specialize_function(typ.instance, attrvalue)
if not (types.is_instance(typ) and attr in typ.constant_attributes):
@ -1967,8 +1749,7 @@ class LLVMIRGenerator:
llelts = [self._quote(v, t, lambda: path() + [str(i)])
for i, (v, t) in enumerate(zip(value, typ.elts))]
return ll.Constant(llty, llelts)
elif types.is_rpc(typ) or types.is_external_function(typ) or \
types.is_builtin_function(typ) or types.is_subkernel(typ):
elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_builtin_function(typ):
# RPC, C and builtin functions have no runtime representation.
return ll.Constant(llty, ll.Undefined)
elif types.is_function(typ):
@ -2023,8 +1804,6 @@ class LLVMIRGenerator:
return llinsn
def process_Return(self, insn):
if insn.remote_return:
self._build_subkernel_return(insn)
if builtins.is_none(insn.value().type):
return self.llbuilder.ret_void()
else:

View File

@ -385,50 +385,6 @@ class TRPC(Type):
def __hash__(self):
return hash(self.service)
class TSubkernel(TFunction):
"""
A kernel to be run on a satellite.
:ivar args: (:class:`collections.OrderedDict` of string to :class:`Type`)
function arguments
:ivar ret: (:class:`Type`)
return type
:ivar sid: (int) subkernel ID number
:ivar destination: (int) satellite destination number
"""
attributes = OrderedDict()
def __init__(self, args, optargs, ret, sid, destination):
assert isinstance(ret, Type)
super().__init__(args, optargs, ret)
self.sid, self.destination = sid, destination
self.delay = TFixedDelay(iodelay.Const(0))
def unify(self, other):
if other is self:
return
if isinstance(other, TSubkernel) and \
self.sid == other.sid and \
self.destination == other.destination:
self.ret.unify(other.ret)
elif isinstance(other, TVar):
other.unify(self)
else:
raise UnificationError(self, other)
def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TSubkernel({})".format(repr(self.ret))
def __eq__(self, other):
return isinstance(other, TSubkernel) and \
self.sid == other.sid
def __hash__(self):
return hash(self.sid)
class TBuiltin(Type):
"""
An instance of builtin type. Every instance of a builtin
@ -688,9 +644,6 @@ def is_function(typ):
def is_rpc(typ):
return isinstance(typ.find(), TRPC)
def is_subkernel(typ):
return isinstance(typ.find(), TSubkernel)
def is_external_function(typ, name=None):
typ = typ.find()
if name is None:
@ -857,10 +810,6 @@ class TypePrinter(object):
return "[rpc{} #{}](...)->{}".format(typ.service,
" async" if typ.is_async else "",
self.name(typ.ret, depth + 1))
elif isinstance(typ, TSubkernel):
return "<subkernel{} dest#{}>->{}".format(typ.sid,
typ.destination,
self.name(typ.ret, depth + 1))
elif isinstance(typ, TBuiltinFunction):
return "<function {}>".format(typ.name)
elif isinstance(typ, (TConstructor, TExceptionConstructor)):

View File

@ -102,19 +102,7 @@ class RegionOf(algorithm.Visitor):
if types.is_external_function(node.func.type, "cache_get"):
# The cache is borrow checked dynamically
return Global()
if (types.is_builtin_function(node.func.type, "array")
or types.is_builtin_function(node.func.type, "make_array")
or types.is_builtin_function(node.func.type, "numpy.transpose")):
# While lifetime tracking across function calls in general is currently
# broken (see below), these special builtins that allocate an array on
# the stack of the caller _always_ allocate regardless of the parameters,
# and we can thus handle them without running into the precision issue
# mentioned in commit ae999db.
return self.visit_allocating(node)
# FIXME: Return statement missing here, but see m-labs/artiq#1497 and
# commit ae999db.
else:
self.visit_sometimes_allocating(node)
# Value lives as long as the object/container, if it's mutable,

View File

@ -233,7 +233,7 @@ class AD53xx:
def write_gain_mu(self, channel, gain=0xffff):
"""Program the gain register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
The DAC output is not updated until LDAC is pulsed (see :meth load:).
This method advances the timeline by the duration of one SPI transfer.
:param gain: 16-bit gain register value (default: 0xffff)
@ -245,7 +245,7 @@ class AD53xx:
def write_offset_mu(self, channel, offset=0x8000):
"""Program the offset register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
The DAC output is not updated until LDAC is pulsed (see :meth load:).
This method advances the timeline by the duration of one SPI transfer.
:param offset: 16-bit offset register value (default: 0x8000)
@ -258,7 +258,7 @@ class AD53xx:
"""Program the DAC offset voltage for a channel.
An offset of +V can be used to trim out a DAC offset error of -V.
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
The DAC output is not updated until LDAC is pulsed (see :meth load:).
This method advances the timeline by the duration of one SPI transfer.
:param voltage: the offset voltage
@ -270,7 +270,7 @@ class AD53xx:
def write_dac_mu(self, channel, value):
"""Program the DAC input register for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
The DAC output is not updated until LDAC is pulsed (see :meth load:).
This method advances the timeline by the duration of one SPI transfer.
"""
self.bus.write(
@ -280,7 +280,7 @@ class AD53xx:
def write_dac(self, channel, voltage):
"""Program the DAC output voltage for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
The DAC output is not updated until LDAC is pulsed (see :meth load:).
This method advances the timeline by the duration of one SPI transfer.
"""
self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs,
@ -313,7 +313,7 @@ class AD53xx:
If no LDAC device was defined, the LDAC pulse is skipped.
See :meth:`load`.
See :meth load:.
:param values: list of DAC values to program
:param channels: list of DAC channels to program. If not specified,
@ -355,7 +355,7 @@ class AD53xx:
""" Two-point calibration of a DAC channel.
Programs the offset and gain register to trim out DAC errors. Does not
take effect until LDAC is pulsed (see :meth:`load`).
take effect until LDAC is pulsed (see :meth load:).
Calibration consists of measuring the DAC output voltage for a channel
with the DAC set to zero-scale (0x0000) and full-scale (0xffff).
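# Hedged usage sketch (not from this diff) of the LDAC-deferred updates the
# docstrings above describe: stage two channels, then apply both with a single
# load(). Assumes a device_db containing "core" and an AD53xx-based DAC named
# "zotino0", and the usual init sequence.
from artiq.experiment import EnvExperiment, kernel, delay, ms

class SetTwoChannels(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("zotino0")

    @kernel
    def run(self):
        self.core.reset()
        self.zotino0.init()
        delay(1*ms)
        self.zotino0.write_dac(0, 1.5)    # staged, output not updated yet
        self.zotino0.write_dac(1, -2.0)   # staged as well
        self.zotino0.load()               # one LDAC pulse applies both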

View File

@ -25,14 +25,12 @@ class AD9912:
f_ref/clk_div*pll_n where f_ref is the reference frequency and clk_div
is the reference clock divider (both set in the parent Urukul CPLD
instance).
:param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
Note that when bypassing the PLL the red front panel LED may remain on.
"""
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
pll_n=10, pll_en=1):
pll_n=10):
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
"pll_n", "pll_en", "ftw_per_hz"}
"pll_n", "ftw_per_hz"}
self.cpld = dmgr.get(cpld_device)
self.core = self.cpld.core
self.bus = self.cpld.bus
@ -41,12 +39,8 @@ class AD9912:
if sw_device:
self.sw = dmgr.get(sw_device)
self.kernel_invariants.add("sw")
self.pll_en = pll_en
self.pll_n = pll_n
if pll_en:
sysclk = self.cpld.refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
else:
sysclk = self.cpld.refclk
assert sysclk <= 1e9
self.ftw_per_hz = 1 / sysclk * (int64(1) << 48)
@ -108,10 +102,8 @@ class AD9912:
raise ValueError("Urukul AD9912 product id mismatch")
delay(50 * us)
# HSTL power down, CMOS power down
pwrcntrl1 = 0x80 | ((~self.pll_en & 1) << 4)
self.write(AD9912_PWRCNTRL1, pwrcntrl1, length=1)
self.write(AD9912_PWRCNTRL1, 0x80, length=1)
self.cpld.io_update.pulse(2 * us)
if self.pll_en:
self.write(AD9912_N_DIV, self.pll_n // 2 - 2, length=1)
self.cpld.io_update.pulse(2 * us)
# I_cp = 375 µA, VCO high range
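# Worked example of the SYSCLK/FTW arithmetic in the constructor above,
# assuming a 125 MHz reference, clk_div index 0 (divide-by-1) and pll_n = 8
# (placeholder values):
refclk = 125e6
clk_div = 0
pll_n = 8
pll_en = 1
sysclk = refclk / [1, 1, 2, 4][clk_div] * pll_n if pll_en else refclk
ftw_per_hz = (1 << 48) / sysclk      # frequency tuning word per Hz
assert sysclk <= 1e9                 # same 1 GHz limit as the driver
print(int(ftw_per_hz * 10e6))        # FTW for a 10 MHz output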

View File

@ -80,13 +80,6 @@ class AD9914:
self.set_x_duration_mu = 7 * self.write_duration_mu
self.exit_x_duration_mu = 3 * self.write_duration_mu
@staticmethod
def get_rtio_channels(bus_channel, channel, **kwargs):
# return only first entry, as there are several devices with the same RTIO channel
if channel == 0:
return [(bus_channel, None)]
return []
@kernel
def write(self, addr, data):
rtio_output((self.bus_channel << 8) | addr, data)

View File

@ -73,10 +73,6 @@ class ADF5356:
self._init_registers()
@staticmethod
def get_rtio_channels(**kwargs):
return []
@kernel
def init(self, blind=False):
"""

View File

@ -1,185 +0,0 @@
from artiq.language.core import kernel, portable
from numpy import int32
# almazny-specific data
ALMAZNY_LEGACY_REG_BASE = 0x0C
ALMAZNY_LEGACY_OE_SHIFT = 12
# higher SPI write divider to match almazny shift register timing
# min SER time before SRCLK rise = 125ns
# -> div=32 gives 125ns for data before clock rise
# works at faster dividers too but could be less reliable
ALMAZNY_LEGACY_SPIT_WR = 32
class AlmaznyLegacy:
"""
Almazny (High frequency mezzanine board for Mirny)
This applies to Almazny hardware v1.1 and earlier.
Use :class:`artiq.coredevice.almazny.AlmaznyChannel` for Almazny v1.2 and later.
:param host_mirny: Mirny device Almazny is connected to
"""
def __init__(self, dmgr, host_mirny):
self.mirny_cpld = dmgr.get(host_mirny)
self.att_mu = [0x3f] * 4
self.channel_sw = [0] * 4
self.output_enable = False
@kernel
def init(self):
self.output_toggle(self.output_enable)
@kernel
def att_to_mu(self, att):
"""
Convert an attenuator setting in dB to machine units.
:param att: attenuator setting in dB [0-31.5]
:return: attenuator setting in machine units
"""
mu = round(att * 2.0)
if mu > 63 or mu < 0:
raise ValueError("Invalid Almazny attenuator settings!")
return mu
@kernel
def mu_to_att(self, att_mu):
"""
Convert a digital attenuator setting to dB.
:param att_mu: attenuator setting in machine units
:return: attenuator setting in dB
"""
return att_mu / 2
@kernel
def set_att(self, channel, att, rf_switch=True):
"""
Sets attenuators on the chosen shift register (channel).
:param channel: index of the register [0-3]
:param att: attenuation setting in dB [0-31.5]
:param rf_switch: rf switch (bool)
"""
self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
@kernel
def set_att_mu(self, channel, att_mu, rf_switch=True):
"""
Sets attenuators on the chosen shift register (channel).
:param channel: index of the register [0-3]
:param att_mu: attenuation setting in machine units [0-63]
:param rf_switch: rf switch (bool)
"""
self.channel_sw[channel] = 1 if rf_switch else 0
self.att_mu[channel] = att_mu
self._update_register(channel)
@kernel
def output_toggle(self, oe):
"""
Toggles output on all shift registers on or off.
:param oe: toggle output enable (bool)
"""
self.output_enable = oe
cfg_reg = self.mirny_cpld.read_reg(1)
en = 1 if self.output_enable else 0
delay(100 * us)
new_reg = (en << ALMAZNY_LEGACY_OE_SHIFT) | (cfg_reg & 0x3FF)
self.mirny_cpld.write_reg(1, new_reg)
delay(100 * us)
@kernel
def _flip_mu_bits(self, mu):
# in this form MSB is actually 0.5dB attenuator
# unnatural for users, so we flip the six bits
return (((mu & 0x01) << 5)
| ((mu & 0x02) << 3)
| ((mu & 0x04) << 1)
| ((mu & 0x08) >> 1)
| ((mu & 0x10) >> 3)
| ((mu & 0x20) >> 5))
@kernel
def _update_register(self, ch):
self.mirny_cpld.write_ext(
ALMAZNY_LEGACY_REG_BASE + ch,
8,
self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
ALMAZNY_LEGACY_SPIT_WR
)
delay(100 * us)
@kernel
def _update_all_registers(self):
for i in range(4):
self._update_register(i)
class AlmaznyChannel:
"""
One Almazny channel
Almazny is a mezzanine for the Quad PLL RF source Mirny that exposes and
controls the frequency-doubled outputs.
This driver requires Almazny hardware revision v1.2 or later
and Mirny CPLD gateware v0.3 or later.
Use :class:`artiq.coredevice.almazny.AlmaznyLegacy` for Almazny hardware v1.1 and earlier.
:param host_mirny: Mirny CPLD device name
:param channel: channel index (0-3)
"""
def __init__(self, dmgr, host_mirny, channel):
self.channel = channel
self.mirny_cpld = dmgr.get(host_mirny)
@portable
def to_mu(self, att, enable, led):
"""
Convert an attenuation in dB, RF switch state and LED state to machine
units.
:param att: attenuator setting in dB (0-31.5)
:param enable: RF switch state (bool)
:param led: LED state (bool)
:return: channel setting in machine units
"""
mu = int32(round(att * 2.))
if mu >= 64 or mu < 0:
raise ValueError("Attenuation out of range")
# unfortunate hardware design: bit reverse
mu = ((mu & 0x15) << 1) | ((mu >> 1) & 0x15)
mu = ((mu & 0x03) << 4) | (mu & 0x0c) | ((mu >> 4) & 0x03)
if enable:
mu |= 1 << 6
if led:
mu |= 1 << 7
return mu
@kernel
def set_mu(self, mu):
"""
Set channel state (machine units).
:param mu: channel state in machine units.
"""
self.mirny_cpld.write_ext(
addr=0xc + self.channel, length=8, data=mu, ext_div=32)
@kernel
def set(self, att, enable, led=False):
"""
Set attenuation, RF switch, and LED state (SI units).
:param att: attenuator setting in dB (0-31.5)
:param enable: RF switch state (bool)
:param led: LED state (bool)
"""
self.set_mu(self.to_mu(att, enable, led))
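# Worked example (plain Python, no hardware access) of the machine-unit
# packing performed by to_mu() above: 10.5 dB, RF switch on, LED off.
att = 10.5
mu = round(att * 2.0)                            # 21, half-dB steps
mu = ((mu & 0x15) << 1) | ((mu >> 1) & 0x15)     # swap adjacent bits
mu = ((mu & 0x03) << 4) | (mu & 0x0c) | ((mu >> 4) & 0x03)  # swap outer pairs
mu |= 1 << 6                                     # RF switch enabled
print(bin(mu))                                   # 0b1101010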

View File

@ -102,15 +102,15 @@ def decode_dump(data):
# messages are big endian
parts = struct.unpack(endian + "IQbbb", data[:15])
(sent_bytes, total_byte_count,
error_occurred, log_channel, dds_onehot_sel) = parts
error_occured, log_channel, dds_onehot_sel) = parts
expected_len = sent_bytes + 15
if expected_len != len(data):
raise ValueError("analyzer dump has incorrect length "
"(got {}, expected {})".format(
len(data), expected_len))
if error_occurred:
logger.warning("error occurred within the analyzer, "
if error_occured:
logger.warning("error occured within the analyzer, "
"data may be corrupted")
if total_byte_count > sent_bytes:
logger.info("analyzer ring buffer has wrapped %d times",

View File

@ -23,8 +23,6 @@ class Request(Enum):
RPCReply = 7
RPCException = 8
SubkernelUpload = 9
class Reply(Enum):
SystemInfo = 2
@ -210,7 +208,6 @@ class CommKernel:
self.unpack_float64 = struct.Struct(self.endian + "d").unpack
self.pack_header = struct.Struct(self.endian + "lB").pack
self.pack_int8 = struct.Struct(self.endian + "B").pack
self.pack_int32 = struct.Struct(self.endian + "l").pack
self.pack_int64 = struct.Struct(self.endian + "q").pack
self.pack_float64 = struct.Struct(self.endian + "d").pack
@ -325,7 +322,7 @@ class CommKernel:
self._write(chunk)
def _write_int8(self, value):
self._write(self.pack_int8(value))
self._write(value)
def _write_int32(self, value):
self._write(self.pack_int32(value))
@ -385,19 +382,6 @@ class CommKernel:
else:
self._read_expect(Reply.LoadCompleted)
def upload_subkernel(self, kernel_library, id, destination):
self._write_header(Request.SubkernelUpload)
self._write_int32(id)
self._write_int8(destination)
self._write_bytes(kernel_library)
self._flush()
self._read_header()
if self._read_type == Reply.LoadFailed:
raise LoadError(self._read_string())
else:
self._read_expect(Reply.LoadCompleted)
def run(self):
self._write_empty(Request.RunKernel)
self._flush()
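# Hedged sketch of the SubkernelUpload message body written by
# upload_subkernel above: a 32-bit subkernel id, an 8-bit destination, then
# the library blob (which _write_bytes sends length-prefixed). Little-endian
# is shown here; the actual endianness is determined at connection time.
import struct

def pack_subkernel_upload(sid, destination, library, endian="<"):
    body = struct.pack(endian + "l", sid)
    body += struct.pack(endian + "B", destination)
    body += struct.pack(endian + "l", len(library)) + library
    return body

print(len(pack_subkernel_upload(1, 2, b"\x7fELF")))   # 4 + 1 + 4 + 4 = 13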

View File

@ -1,6 +1,5 @@
import os, sys
import numpy
from inspect import getfullargspec
from functools import wraps
from pythonparser import diagnostic
@ -54,17 +53,6 @@ def rtio_get_counter() -> TInt64:
raise NotImplementedError("syscall not simulated")
def get_target_cls(target):
if target == "rv32g":
return RV32GTarget
elif target == "rv32ima":
return RV32IMATarget
elif target == "cortexa9":
return CortexA9Target
else:
raise ValueError("Unsupported target")
class Core:
"""Core device driver.
@ -78,70 +66,58 @@ class Core:
:param ref_multiplier: ratio between the RTIO fine timestamp frequency
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
factor).
:param analyzer_proxy: name of the core device analyzer proxy to trigger
(optional).
:param analyze_at_run_end: automatically trigger the core device analyzer
proxy after the Experiment's run stage finishes.
"""
kernel_invariants = {
"core", "ref_period", "coarse_ref_period", "ref_multiplier",
}
def __init__(self, dmgr,
host, ref_period,
analyzer_proxy=None, analyze_at_run_end=False,
ref_multiplier=8,
target="rv32g", satellite_cpu_targets={}):
def __init__(self, dmgr, host, ref_period, ref_multiplier=8, target="rv32g"):
self.ref_period = ref_period
self.ref_multiplier = ref_multiplier
self.satellite_cpu_targets = satellite_cpu_targets
self.target_cls = get_target_cls(target)
if target == "rv32g":
self.target_cls = RV32GTarget
elif target == "rv32ima":
self.target_cls = RV32IMATarget
elif target == "cortexa9":
self.target_cls = CortexA9Target
else:
raise ValueError("Unsupported target")
self.coarse_ref_period = ref_period*ref_multiplier
if host is None:
self.comm = CommKernelDummy()
else:
self.comm = CommKernel(host)
self.analyzer_proxy_name = analyzer_proxy
self.analyze_at_run_end = analyze_at_run_end
self.first_run = True
self.dmgr = dmgr
self.core = self
self.comm.core = self
self.analyzer_proxy = None
def notify_run_end(self):
if self.analyze_at_run_end:
self.trigger_analyzer_proxy()
def close(self):
self.comm.close()
def compile(self, function, args, kwargs, set_result=None,
attribute_writeback=True, print_as_rpc=True,
target=None, destination=0, subkernel_arg_types=[]):
attribute_writeback=True, print_as_rpc=True):
try:
engine = _DiagnosticEngine(all_errors_are_fatal=True)
stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr,
print_as_rpc=print_as_rpc,
destination=destination, subkernel_arg_types=subkernel_arg_types)
print_as_rpc=print_as_rpc)
stitcher.stitch_call(function, args, kwargs, set_result)
stitcher.finalize()
module = Module(stitcher,
ref_period=self.ref_period,
attribute_writeback=attribute_writeback)
target = target if target is not None else self.target_cls()
target = self.target_cls()
library = target.compile_and_link([module])
stripped_library = target.strip(library)
return stitcher.embedding_map, stripped_library, \
lambda addresses: target.symbolize(library, addresses), \
lambda symbols: target.demangle(symbols), \
module.subkernel_arg_types
lambda symbols: target.demangle(symbols)
except diagnostic.Error as error:
raise CompileError(error.diagnostic) from error
@ -159,38 +135,11 @@ class Core:
def set_result(new_result):
nonlocal result
result = new_result
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
embedding_map, kernel_library, symbolizer, demangler = \
self.compile(function, args, kwargs, set_result)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
return result
def compile_subkernel(self, sid, subkernel_fn, embedding_map, args, subkernel_arg_types):
# pass self to subkernels (if applicable)
# assuming the first argument is self
subkernel_args = getfullargspec(subkernel_fn.artiq_embedded.function)
self_arg = []
if len(subkernel_args[0]) > 0:
if subkernel_args[0][0] == 'self':
self_arg = args[:1]
destination = subkernel_fn.artiq_embedded.destination
destination_tgt = self.satellite_cpu_targets[destination]
target = get_target_cls(destination_tgt)(subkernel_id=sid)
object_map, kernel_library, _, _, _ = \
self.compile(subkernel_fn, self_arg, {}, attribute_writeback=False,
print_as_rpc=False, target=target, destination=destination,
subkernel_arg_types=subkernel_arg_types.get(sid, []))
if object_map.has_rpc_or_subkernel():
raise ValueError("Subkernel must not use RPC or subkernels in other destinations")
return destination, kernel_library
def compile_and_upload_subkernels(self, embedding_map, args, subkernel_arg_types):
for sid, subkernel_fn in embedding_map.subkernels().items():
destination, kernel_library = \
self.compile_subkernel(sid, subkernel_fn, embedding_map,
args, subkernel_arg_types)
self.comm.upload_subkernel(kernel_library, sid, destination)
def precompile(self, function, *args, **kwargs):
"""Precompile a kernel and return a callable that executes it on the core device
at a later time.
@ -199,7 +148,7 @@ class Core:
as additional positional and keyword arguments.
The returned callable accepts no arguments.
Precompiled kernels may use RPCs and subkernels.
Precompiled kernels may use RPCs.
Object attributes at the beginning of a precompiled kernel execution have the
values they had at precompilation time. If up-to-date values are required,
@ -224,9 +173,8 @@ class Core:
nonlocal result
result = new_result
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
embedding_map, kernel_library, symbolizer, demangler = \
self.compile(function, args, kwargs, set_result, attribute_writeback=False)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
@wraps(function)
def run_precompiled():
@ -302,23 +250,3 @@ class Core:
min_now = rtio_get_counter() + 125000
if now_mu() < min_now:
at_mu(min_now)
def trigger_analyzer_proxy(self):
"""Causes the core analyzer proxy to retrieve a dump from the device,
and distribute it to all connected clients (typically dashboards).
Returns only after the dump has been retrieved from the device.
Raises IOError if no analyzer proxy has been configured, or if the
analyzer proxy fails. In the latter case, more details would be
available in the proxy log.
"""
if self.analyzer_proxy is None:
if self.analyzer_proxy_name is not None:
self.analyzer_proxy = self.dmgr.get(self.analyzer_proxy_name)
if self.analyzer_proxy is None:
raise IOError("No analyzer proxy configured")
else:
success = self.analyzer_proxy.trigger()
if not success:
raise IOError("Analyzer proxy reported failure")

View File

@ -19,24 +19,16 @@
},
"min_artiq_version": {
"type": "string",
"description": "Minimum required ARTIQ version",
"default": "0"
"description": "Minimum required ARTIQ version"
},
"hw_rev": {
"type": "string",
"description": "Hardware revision"
},
"base": {
"type": "string",
"enum": ["use_drtio_role", "standalone", "master", "satellite"],
"description": "Deprecated, use drtio_role instead",
"default": "use_drtio_role"
},
"drtio_role": {
"type": "string",
"enum": ["standalone", "master", "satellite"],
"description": "Role that this device takes in a DRTIO network; 'standalone' means no DRTIO",
"default": "standalone"
"description": "SoC base; value depends on intended system topology"
},
"ext_ref_frequency": {
"type": "number",
@ -130,7 +122,7 @@
},
"hw_rev": {
"type": "string",
"enum": ["v1.0", "v1.1"]
"enum": ["v1.0"]
}
}
}
@ -142,7 +134,7 @@
"properties": {
"type": {
"type": "string",
"enum": ["dio", "dio_spi", "urukul", "novogorny", "sampler", "suservo", "zotino", "grabber", "mirny", "fastino", "phaser", "hvamp", "shuttler"]
"enum": ["dio", "dio_spi", "urukul", "novogorny", "sampler", "suservo", "zotino", "grabber", "mirny", "fastino", "phaser", "hvamp"]
},
"board": {
"type": "string"
@ -212,8 +204,7 @@
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "dio_spi"
"type": "string"
},
"clk": {
"type": "integer",
@ -249,8 +240,7 @@
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "ttl"
"type": "string"
},
"pin": {
"type": "integer",
@ -267,8 +257,7 @@
}
},
"required": ["pin", "direction"]
},
"default": []
}
}
},
"required": ["ports", "spi"]
@ -308,18 +297,11 @@
"clk_div": {
"type": "integer",
"minimum": 0,
"maximum": 3,
"default": 0
"maximum": 3
},
"pll_n": {
"type": "integer"
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
},
@ -394,11 +376,6 @@
"minItems": 2,
"maxItems": 2
},
"sampler_hw_rev": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+",
"default": "v2.2"
},
"urukul0_ports": {
"type": "array",
"items": {
@ -428,12 +405,6 @@
"type": "integer",
"default": 32
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
}
@ -525,11 +496,6 @@
"almazny": {
"type": "boolean",
"default": false
},
"almazny_hw_rev": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+",
"default": "v1.2"
}
},
"required": ["ports"]
@ -579,11 +545,6 @@
},
"minItems": 1,
"maxItems": 1
},
"mode": {
"type": "string",
"enum": ["base", "miqro"],
"default": "base"
}
},
"required": ["ports"]
@ -610,31 +571,6 @@
},
"required": ["ports"]
}
},{
"title": "Shuttler",
"if": {
"properties": {
"type": {
"const": "shuttler"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 2
},
"drtio_destination": {
"type": "integer"
}
},
"required": ["ports"]
}
}]
}
}
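# Hedged sketch of a system description accepted by the schema fragment above
# (values are placeholders; fields outside this excerpt, such as "target" and
# "variant", follow the usual coredevice JSON conventions).
system_description = {
    "target": "kasli",
    "variant": "example",
    "hw_rev": "v2.0",
    "drtio_role": "standalone",     # replaces the deprecated "base"
    "peripherals": [
        {"type": "urukul", "ports": [0, 1], "clk_div": 0, "pll_en": 1},
    ],
}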

View File

@ -6,7 +6,7 @@ alone could achieve.
"""
from artiq.language.core import syscall, kernel
from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple, TBool
from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple
from artiq.coredevice.exceptions import DMAError
from numpy import int64
@ -17,7 +17,7 @@ def dma_record_start(name: TStr) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
def dma_record_stop(duration: TInt64, enable_ddma: TBool) -> TNone:
def dma_record_stop(duration: TInt64) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
@ -25,11 +25,11 @@ def dma_erase(name: TStr) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32, TBool]):
def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32]):
raise NotImplementedError("syscall not simulated")
@syscall
def dma_playback(timestamp: TInt64, ptr: TInt32, enable_ddma: TBool) -> TNone:
def dma_playback(timestamp: TInt64, ptr: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated")
@ -47,7 +47,6 @@ class DMARecordContextManager:
def __init__(self):
self.name = ""
self.saved_now_mu = int64(0)
self.enable_ddma = False
@kernel
def __enter__(self):
@ -57,7 +56,7 @@ class DMARecordContextManager:
@kernel
def __exit__(self, type, value, traceback):
dma_record_stop(now_mu(), self.enable_ddma) # see above
dma_record_stop(now_mu()) # see above
at_mu(self.saved_now_mu)
@ -75,20 +74,12 @@ class CoreDMA:
self.epoch = 0
@kernel
def record(self, name, enable_ddma=False):
def record(self, name):
"""Returns a context manager that will record a DMA trace called ``name``.
Any previously recorded trace with the same name is overwritten.
The trace will persist across kernel switches.
In DRTIO context, distributed DMA can be toggled with ``enable_ddma``.
Enabling it allows running DMA on satellites, rather than sending all
events from the master.
Keeping it disabled may improve performance in some scenarios,
e.g. when there are many small satellite buffers."""
The trace will persist across kernel switches."""
self.epoch += 1
self.recorder.name = name
self.recorder.enable_ddma = enable_ddma
return self.recorder
@kernel
@ -101,24 +92,24 @@ class CoreDMA:
def playback(self, name):
"""Replays a previously recorded DMA trace. This function blocks until
the entire trace is submitted to the RTIO FIFOs."""
(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
dma_playback(now_mu(), ptr, uses_ddma)
(advance_mu, ptr) = dma_retrieve(name)
dma_playback(now_mu(), ptr)
delay_mu(advance_mu)
@kernel
def get_handle(self, name):
"""Returns a handle to a previously recorded DMA trace. The returned handle
is only valid until the next call to :meth:`record` or :meth:`erase`."""
(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
return (self.epoch, advance_mu, ptr, uses_ddma)
(advance_mu, ptr) = dma_retrieve(name)
return (self.epoch, advance_mu, ptr)
@kernel
def playback_handle(self, handle):
"""Replays a handle obtained with :meth:`get_handle`. Using this function
is much faster than :meth:`playback` for replaying a set of traces repeatedly,
but places the overhead of managing the handles on the programmer."""
(epoch, advance_mu, ptr, uses_ddma) = handle
(epoch, advance_mu, ptr) = handle
if self.epoch != epoch:
raise DMAError("Invalid handle")
dma_playback(now_mu(), ptr, uses_ddma)
dma_playback(now_mu(), ptr)
delay_mu(advance_mu)
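A minimal usage sketch of the distributed-DMA API above, recording a trace with ``enable_ddma=True`` and replaying it through a handle. The device names ("core", "core_dma", "ttl0") are illustrative and assume a matching device_db:

from artiq.experiment import *

class DDMAPulses(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_dma")
        self.setattr_device("ttl0")

    @kernel
    def run(self):
        self.core.reset()
        # With enable_ddma=True, events destined for satellites are stored
        # and replayed directly on the satellites instead of being streamed
        # from the master.
        with self.core_dma.record("pulses", enable_ddma=True):
            for i in range(100):
                self.ttl0.pulse(1*us)
                delay(1*us)
        handle = self.core_dma.get_handle("pulses")
        self.core.break_realtime()
        self.core_dma.playback_handle(handle)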

View File

@ -91,10 +91,6 @@ class EdgeCounter:
self.channel = channel
self.counter_max = (1 << (gateware_width - 1)) - 1
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def gate_rising(self, duration):
"""Count rising edges for the given duration and request the total at

View File

@ -148,13 +148,6 @@ class DMAError(Exception):
artiq_builtin = True
class SubkernelError(Exception):
"""Raised when an operation regarding a subkernel is invalid
or cannot be completed.
"""
artiq_builtin = True
class ClockFailure(Exception):
"""Raised when RTIO PLL has lost lock."""

View File

@ -21,7 +21,7 @@ class Fastino:
DAC updates synchronized to a frame edge.
The `log2_width=0` RTIO layout uses one DAC channel per RTIO address and a
dense RTIO address space. The RTIO words are narrow (32 bit) and
dense RTIO address space. The RTIO words are narrow. (32 bit) and
few-channel updates are efficient. There is the least amount of DAC state
tracking in kernels, at the cost of more DMA and RTIO data.
The setting here and in the RTIO PHY (gateware) must match.
@ -52,10 +52,6 @@ class Fastino:
assert self.core.ref_period == 1*ns
self.t_frame = int64(14*7*4)
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def init(self):
"""Initialize the device.

View File

@ -25,10 +25,6 @@ class Grabber:
# ROI engine outputs for one video frame.
self.sentinel = int32(int64(2**count_width))
@staticmethod
def get_rtio_channels(channel_base, **kwargs):
return [(channel_base, "ROI coordinates"), (channel_base + 1, "ROI mask")]
@kernel
def setup_roi(self, n, x0, y0, x1, y1):
"""

View File

@ -161,7 +161,6 @@ class I2CSwitch:
@kernel
def set(self, channel):
"""Enable one channel.
:param channel: channel number (0-7)
"""
i2c_switch_select(self.busno, self.address >> 1, 1 << channel)

View File

@ -32,7 +32,4 @@ def load(description_path):
global validator
validator.validate(result)
if result["base"] != "use_drtio_role":
result["drtio_role"] = result["base"]
return result

View File

@ -31,6 +31,16 @@ WE = 1 << 24
# supported CPLD code version
PROTO_REV_MATCH = 0x0
# almazny-specific data
ALMAZNY_REG_BASE = 0x0C
ALMAZNY_OE_SHIFT = 12
# higher SPI write divider to match almazny shift register timing
# min SER time before SRCLK rise = 125ns
# -> div=32 gives 125ns for data before clock rise
# works at faster dividers too but could be less reliable
ALMAZNY_SPIT_WR = 32
class Mirny:
"""
@ -167,3 +177,106 @@ class Mirny:
if length < 32:
data <<= 32 - length
self.bus.write(data)
class Almazny:
"""
Almazny (High frequency mezzanine board for Mirny)
:param host_mirny - Mirny device the Almazny is connected to
"""
def __init__(self, dmgr, host_mirny):
self.mirny_cpld = dmgr.get(host_mirny)
self.att_mu = [0x3f] * 4
self.channel_sw = [0] * 4
self.output_enable = False
@kernel
def init(self):
self.output_toggle(self.output_enable)
@kernel
def att_to_mu(self, att):
"""
Convert an attenuator setting in dB to machine units.
:param att: attenuator setting in dB [0-31.5]
:return: attenuator setting in machine units
"""
mu = round(att * 2.0)
if mu > 63 or mu < 0:
raise ValueError("Invalid Almazny attenuator settings!")
return mu
@kernel
def mu_to_att(self, att_mu):
"""
Convert a digital attenuator setting to dB.
:param att_mu: attenuator setting in machine units
:return: attenuator setting in dB
"""
return att_mu / 2
@kernel
def set_att(self, channel, att, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel - index of the register [0-3]
:param att - attenuation setting in dB [0-31.5]
:param rf_switch - rf switch (bool)
"""
self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
@kernel
def set_att_mu(self, channel, att_mu, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel - index of the register [0-3]
:param att_mu - attenuation setting in machine units [0-63]
:param rf_switch - rf switch (bool)
"""
self.channel_sw[channel] = 1 if rf_switch else 0
self.att_mu[channel] = att_mu
self._update_register(channel)
@kernel
def output_toggle(self, oe):
"""
Switches the outputs of all shift registers on or off.
:param oe - toggle output enable (bool)
"""
self.output_enable = oe
cfg_reg = self.mirny_cpld.read_reg(1)
en = 1 if self.output_enable else 0
delay(100 * us)
new_reg = (en << ALMAZNY_OE_SHIFT) | (cfg_reg & 0x3FF)
self.mirny_cpld.write_reg(1, new_reg)
delay(100 * us)
@kernel
def _flip_mu_bits(self, mu):
# in this form MSB is actually 0.5dB attenuator
# unnatural for users, so we flip the six bits
return (((mu & 0x01) << 5)
| ((mu & 0x02) << 3)
| ((mu & 0x04) << 1)
| ((mu & 0x08) >> 1)
| ((mu & 0x10) >> 3)
| ((mu & 0x20) >> 5))
@kernel
def _update_register(self, ch):
self.mirny_cpld.write_ext(
ALMAZNY_REG_BASE + ch,
8,
self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
ALMAZNY_SPIT_WR
)
delay(100 * us)
@kernel
def _update_all_registers(self):
for i in range(4):
self._update_register(i)
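A usage sketch for the Almazny class above; "almazny0" is an assumed device_db entry instantiating this class with its host Mirny CPLD, and the attenuation value is illustrative:

from artiq.experiment import *

class AlmaznyDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("almazny0")

    @kernel
    def run(self):
        self.core.reset()
        self.almazny0.init()
        self.almazny0.output_toggle(True)
        # 10.5 dB attenuation on channel 0 with the RF switch closed
        self.almazny0.set_att(0, 10.5, rf_switch=True)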

View File

@ -229,7 +229,7 @@ class Phaser:
"dac_mmap"}
def __init__(self, dmgr, channel_base, miso_delay=1, tune_fifo_offset=True,
clk_sel=0, sync_dly=0, dac=None, trf0=None, trf1=None, gw_rev=PHASER_GW_BASE,
clk_sel=0, sync_dly=0, dac=None, trf0=None, trf1=None,
core_device="core"):
self.channel_base = channel_base
self.core = dmgr.get(core_device)
@ -243,25 +243,13 @@ class Phaser:
self.clk_sel = clk_sel
self.tune_fifo_offset = tune_fifo_offset
self.sync_dly = sync_dly
self.gw_rev = gw_rev # verified in init()
self.gw_rev = -1 # discovered in init()
self.dac_mmap = DAC34H84(dac).get_mmap()
self.channel = [PhaserChannel(self, ch, trf)
for ch, trf in enumerate([trf0, trf1])]
@staticmethod
def get_rtio_channels(channel_base, gw_rev=PHASER_GW_BASE, **kwargs):
if gw_rev == PHASER_GW_MIQRO:
return [(channel_base, "base"), (channel_base + 1, "ch0"), (channel_base + 2, "ch1")]
elif gw_rev == PHASER_GW_BASE:
return [(channel_base, "base"),
(channel_base + 1, "ch0 frequency"),
(channel_base + 2, "ch0 phase amplitude"),
(channel_base + 3, "ch1 frequency"),
(channel_base + 4, "ch1 phase amplitude")]
raise ValueError("invalid gw_rev `{}`".format(gw_rev))
@kernel
def init(self, debug=False):
"""Initialize the board.
@ -279,11 +267,10 @@ class Phaser:
delay(.1*ms) # slack
is_baseband = hw_rev & PHASER_HW_REV_VARIANT
gw_rev = self.read8(PHASER_ADDR_GW_REV)
self.gw_rev = self.read8(PHASER_ADDR_GW_REV)
if debug:
print("gw_rev:", self.gw_rev)
self.core.break_realtime()
assert gw_rev == self.gw_rev
delay(.1*ms) # slack
# allow a few errors during startup and alignment since boot
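A host-side sketch of how the master-branch helper above expands the RTIO channel map for the two gateware variants; the base channel number 0x10 is arbitrary and the constants are assumed importable from artiq.coredevice.phaser as referenced above:

from artiq.coredevice.phaser import Phaser, PHASER_GW_BASE, PHASER_GW_MIQRO

# Five channels for the base gateware: base, ch0/ch1 frequency and phase-amplitude.
print(Phaser.get_rtio_channels(0x10, gw_rev=PHASER_GW_BASE))
# Three channels for the MIQRO gateware: base, ch0, ch1.
print(Phaser.get_rtio_channels(0x10, gw_rev=PHASER_GW_MIQRO))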

View File

@ -15,26 +15,24 @@ SPI_CS_PGIA = 1 # separate SPI bus, CS used as RCLK
@portable
def adc_mu_to_volt(data, gain=0, corrected_fs=True):
def adc_mu_to_volt(data, gain=0):
"""Convert ADC data in machine units to Volts.
:param data: 16 bit signed ADC word
:param gain: PGIA gain setting (0: 1, ..., 3: 1000)
:param corrected_fs: use corrected ADC FS reference.
Should be True for Sampler hardware revisions after v2.1 and False for v2.1 and earlier.
:return: Voltage in Volts
"""
if gain == 0:
volt_per_lsb = 20.48 / (1 << 16) if corrected_fs else 20. / (1 << 16)
volt_per_lsb = 20./(1 << 16)
elif gain == 1:
volt_per_lsb = 2.048 / (1 << 16) if corrected_fs else 2. / (1 << 16)
volt_per_lsb = 2./(1 << 16)
elif gain == 2:
volt_per_lsb = .2048 / (1 << 16) if corrected_fs else .2 / (1 << 16)
volt_per_lsb = .2/(1 << 16)
elif gain == 3:
volt_per_lsb = 0.02048 / (1 << 16) if corrected_fs else .02 / (1 << 16)
volt_per_lsb = .02/(1 << 16)
else:
raise ValueError("invalid gain")
return data * volt_per_lsb
return data*volt_per_lsb
class Sampler:
@ -50,13 +48,12 @@ class Sampler:
:param gains: Initial value for PGIA gains shift register
(default: 0x0000). Knowledge of this state is not transferred
between experiments.
:param hw_rev: Sampler's hardware revision string (default 'v2.2')
:param core_device: Core device name
"""
kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div", "corrected_fs"}
kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div"}
def __init__(self, dmgr, spi_adc_device, spi_pgia_device, cnv_device,
div=8, gains=0x0000, hw_rev="v2.2", core_device="core"):
div=8, gains=0x0000, core_device="core"):
self.bus_adc = dmgr.get(spi_adc_device)
self.bus_adc.update_xfer_duration_mu(div, 32)
self.bus_pgia = dmgr.get(spi_pgia_device)
@ -65,11 +62,6 @@ class Sampler:
self.cnv = dmgr.get(cnv_device)
self.div = div
self.gains = gains
self.corrected_fs = self.use_corrected_fs(hw_rev)
@staticmethod
def use_corrected_fs(hw_rev):
return hw_rev != "v2.1"
@kernel
def init(self):
@ -152,4 +144,4 @@ class Sampler:
for i in range(n):
channel = i + 8 - len(data)
gain = (self.gains >> (channel*2)) & 0b11
data[i] = adc_mu_to_volt(adc_data[i], gain, self.corrected_fs)
data[i] = adc_mu_to_volt(adc_data[i], gain)
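A quick host-side illustration of the conversion above (``adc_mu_to_volt`` is ``@portable``), comparing the corrected full-scale reference used for Sampler v2.2+ with the uncorrected one for v2.1 and earlier; the ADC code is arbitrary:

from artiq.coredevice.sampler import adc_mu_to_volt

code = 0x1000  # example 16-bit signed ADC word
for gain in range(4):
    v_new = adc_mu_to_volt(code, gain=gain, corrected_fs=True)   # v2.2 and later
    v_old = adc_mu_to_volt(code, gain=gain, corrected_fs=False)  # v2.1 and earlier
    print(gain, v_new, v_old)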

View File

@ -1,623 +0,0 @@
from artiq.language.core import *
from artiq.language.types import *
from artiq.coredevice.rtio import rtio_output, rtio_input_data
from artiq.coredevice import spi2 as spi
from artiq.language.units import us
@portable
def shuttler_volt_to_mu(volt):
"""Return the equivalent DAC code. Valid input range is from -10 to
10 - LSB.
"""
return round((1 << 16) * (volt / 20.0)) & 0xffff
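Worked values for the conversion above (a 16-bit two's-complement code over a 20 V span, so one LSB is roughly 305 µV); the function is ``@portable``, so this runs on the host with the master-branch module:

from artiq.coredevice.shuttler import shuttler_volt_to_mu

print(hex(shuttler_volt_to_mu(0.0)))    # 0x0
print(hex(shuttler_volt_to_mu(5.0)))    # 0x4000
print(hex(shuttler_volt_to_mu(-10.0)))  # 0x8000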
class Config:
"""Shuttler configuration registers interface.
The configuration registers control waveform phase auto-clear, and pre-DAC
gain & offset values for calibration with ADC on the Shuttler AFE card.
To find the calibrated DAC code, the Shuttler Core first multiplies the
output data with pre-DAC gain, then adds the offset.
.. note::
The DAC code is capped at 0x7fff and 0x8000.
:param channel: RTIO channel number of this interface.
:param core_device: Core device name.
"""
kernel_invariants = {
"core", "channel", "target_base", "target_read",
"target_gain", "target_offset", "target_clr"
}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_base = channel << 8
self.target_read = 1 << 6
self.target_gain = 0 * (1 << 4)
self.target_offset = 1 * (1 << 4)
self.target_clr = 1 * (1 << 5)
@kernel
def set_clr(self, clr):
"""Set/Unset waveform phase clear bits.
Each bit corresponds to a Shuttler waveform generator core. Setting a
clear bit forces the Shuttler Core to clear the phase accumulator on
waveform trigger (See :class:`Trigger` for the trigger method).
Otherwise, the phase accumulator increments from its original value.
:param clr: Waveform phase clear bits. The MSB corresponds to Channel
15, LSB corresponds to Channel 0.
"""
rtio_output(self.target_base | self.target_clr, clr)
@kernel
def set_gain(self, channel, gain):
"""Set the 16-bits pre-DAC gain register of a Shuttler Core channel.
The `gain` parameter represents the decimal portion of the gain
factor. The MSB represents 0.5 and the sign bit. Hence, the valid
total gain value (1 +/- 0.gain) ranges from 0.5 to 1.5 - LSB.
:param channel: Shuttler Core channel to be configured.
:param gain: Shuttler Core channel gain.
"""
rtio_output(self.target_base | self.target_gain | channel, gain)
@kernel
def get_gain(self, channel):
"""Return the pre-DAC gain value of a Shuttler Core channel.
:param channel: The Shuttler Core channel.
:return: Pre-DAC gain value. See :meth:`set_gain`.
"""
rtio_output(self.target_base | self.target_gain |
self.target_read | channel, 0)
return rtio_input_data(self.channel)
@kernel
def set_offset(self, channel, offset):
"""Set the 16-bits pre-DAC offset register of a Shuttler Core channel.
.. seealso::
:meth:`shuttler_volt_to_mu`
:param channel: Shuttler Core channel to be configured.
:param offset: Shuttler Core channel offset.
"""
rtio_output(self.target_base | self.target_offset | channel, offset)
@kernel
def get_offset(self, channel):
"""Return the pre-DAC offset value of a Shuttler Core channel.
:param channel: The Shuttler Core channel.
:return: Pre-DAC offset value. See :meth:`set_offset`.
"""
rtio_output(self.target_base | self.target_offset |
self.target_read | channel, 0)
return rtio_input_data(self.channel)
class DCBias:
"""Shuttler Core cubic DC-bias spline.
A Shuttler channel can generate a waveform `w(t)` that is the sum of a
cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
This class controls the cubic spline `a(t)`, in which
.. math::
a(t) = p_0 + p_1t + \\frac{p_2t^2}{2} + \\frac{p_3t^3}{6}
And `a(t)` is in Volt.
:param channel: RTIO channel number of this DC-bias spline interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def set_waveform(self, a0: TInt32, a1: TInt32, a2: TInt64, a3: TInt64):
"""Set the DC-bias spline waveform.
Given `a(t)` as defined in :class:`DCBias`, the coefficients should be
configured by the following formulae.
.. math::
T &= 8*10^{-9}
a_0 &= p_0
a_1 &= p_1T + \\frac{p_2T^2}{2} + \\frac{p_3T^3}{6}
a_2 &= p_2T^2 + p_3T^3
a_3 &= p_3T^3
:math:`a_0`, :math:`a_1`, :math:`a_2` and :math:`a_3` are 16, 32, 48
and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
machine unit conversion.
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
:param a0: The :math:`a_0` coefficient in machine unit.
:param a1: The :math:`a_1` coefficient in machine unit.
:param a2: The :math:`a_2` coefficient in machine unit.
:param a3: The :math:`a_3` coefficient in machine unit.
"""
coef_words = [
a0,
a1,
a1 >> 16,
a2 & 0xFFFF,
(a2 >> 16) & 0xFFFF,
(a2 >> 32) & 0xFFFF,
a3 & 0xFFFF,
(a3 >> 16) & 0xFFFF,
(a3 >> 32) & 0xFFFF,
]
for i in range(len(coef_words)):
rtio_output(self.target_o | i, coef_words[i])
delay_mu(int64(self.core.ref_multiplier))
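A host-side sketch of the coefficient mapping above for a simple linear ramp (p0 = -5 V, p1 = 0.01 V/µs, higher orders zero), using unit helpers equivalent to those in the Shuttler example experiment further down in this diff; T = 8 ns is the DAC sample period stated above:

DAC_Fs_MHZ = 125  # 125 MSPS -> T = 8 ns

def volt_amp_mu(volt):
    # 16-bit DAC code over a 20 V span, as in shuttler_volt_to_mu above
    return round((1 << 16) * (volt / 20.0)) & 0xffff

def volt_damp_mu(volt_per_us):
    # 32-bit first-order increment per DAC sample
    return round(float(2) ** 32 * (volt_per_us / 20) / DAC_Fs_MHZ)

a0 = volt_amp_mu(-5.0)    # p0
a1 = volt_damp_mu(0.01)   # p1*T folded into the per-sample increment
a2 = 0
a3 = 0
# dcbias.set_waveform(a0, a1, a2, a3) followed by trigger.trigger(1 << ch)
# commits the ramp to the selected channel.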
class DDS:
"""Shuttler Core DDS spline.
A Shuttler channel can generate a waveform `w(t)` that is the sum of a
cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
This class controls the cubic spline `b(t)` and quadratic spline `c(t)`,
in which
.. math::
b(t) &= g * (q_0 + q_1t + \\frac{q_2t^2}{2} + \\frac{q_3t^3}{6})
c(t) &= r_0 + r_1t + \\frac{r_2t^2}{2}
And `b(t)` is in Volt, `c(t)` is in number of turns. Note that `b(t)`
contributes to a constant gain of :math:`g=1.64676`.
:param channel: RTIO channel number of this DC-bias spline interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def set_waveform(self, b0: TInt32, b1: TInt32, b2: TInt64, b3: TInt64,
c0: TInt32, c1: TInt32, c2: TInt32):
"""Set the DDS spline waveform.
Given `b(t)` and `c(t)` as defined in :class:`DDS`, the coefficients
should be configured by the following formulae.
.. math::
T &= 8*10^{-9}
b_0 &= q_0
b_1 &= q_1T + \\frac{q_2T^2}{2} + \\frac{q_3T^3}{6}
b_2 &= q_2T^2 + q_3T^3
b_3 &= q_3T^3
c_0 &= r_0
c_1 &= r_1T + \\frac{r_2T^2}{2}
c_2 &= r_2T^2
:math:`b_0`, :math:`b_1`, :math:`b_2` and :math:`b_3` are 16, 32, 48
and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
machine unit conversion. :math:`c_0`, :math:`c_1` and :math:`c_2` are
16, 32 and 32 bits in width respectively.
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
:param b0: The :math:`b_0` coefficient in machine unit.
:param b1: The :math:`b_1` coefficient in machine unit.
:param b2: The :math:`b_2` coefficient in machine unit.
:param b3: The :math:`b_3` coefficient in machine unit.
:param c0: The :math:`c_0` coefficient in machine unit.
:param c1: The :math:`c_1` coefficient in machine unit.
:param c2: The :math:`c_2` coefficient in machine unit.
"""
coef_words = [
b0,
b1,
b1 >> 16,
b2 & 0xFFFF,
(b2 >> 16) & 0xFFFF,
(b2 >> 32) & 0xFFFF,
b3 & 0xFFFF,
(b3 >> 16) & 0xFFFF,
(b3 >> 32) & 0xFFFF,
c0,
c1,
c1 >> 16,
c2,
c2 >> 16,
]
for i in range(len(coef_words)):
rtio_output(self.target_o | i, coef_words[i])
delay_mu(int64(self.core.ref_multiplier))
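A matching sketch for the DDS splines above: a constant 50 kHz tone at 1 V amplitude, with the CORDIC gain and frequency scaling taken from the example experiment further down in this diff:

DAC_Fs_MHZ = 125
CORDIC_GAIN = 1.64676

def dds_amp_mu(volt):
    # amplitude in 16-bit DAC code, pre-divided by the CORDIC gain
    return round((1 << 16) * ((volt / CORDIC_GAIN) / 20.0)) & 0xffff

def freq_mu(freq_mhz):
    # 32-bit phase increment per DAC sample
    return round(float(2) ** 32 / DAC_Fs_MHZ * freq_mhz)

b0 = dds_amp_mu(1.0)  # constant amplitude spline
c1 = freq_mu(0.05)    # 50 kHz = 0.05 MHz phase slope
# dds.set_waveform(b0, 0, 0, 0, 0, c1, 0) followed by trigger.trigger(1 << ch)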
class Trigger:
"""Shuttler Core spline coefficients update trigger.
:param channel: RTIO channel number of the trigger interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def trigger(self, trig_out):
"""Triggers coefficient update of (a) Shuttler Core channel(s).
Each bit corresponds to a Shuttler waveform generator core. Setting
`trig_out` bits commits the pending coefficient update (from
`set_waveform` in :class:`DCBias` and :class:`DDS`) to the Shuttler Core
synchronously.
:param trig_out: Coefficient update trigger bits. The MSB corresponds
to Channel 15, LSB corresponds to Channel 0.
"""
rtio_output(self.target_o, trig_out)
RELAY_SPI_CONFIG = (0*spi.SPI_OFFLINE | 1*spi.SPI_END |
0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
0*spi.SPI_CLK_POLARITY | 0*spi.SPI_CLK_PHASE |
0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)
ADC_SPI_CONFIG = (0*spi.SPI_OFFLINE | 0*spi.SPI_END |
0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
1*spi.SPI_CLK_POLARITY | 1*spi.SPI_CLK_PHASE |
0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)
# SPI clock write and read dividers
# CS should assert at least 9.5 ns after clk pulse
SPIT_RELAY_WR = 4
# 25 ns high/low pulse hold (limiting for write)
SPIT_ADC_WR = 4
SPIT_ADC_RD = 16
# SPI CS line
CS_RELAY = 1 << 0
CS_LED = 1 << 1
CS_ADC = 1 << 0
# Referenced AD4115 registers
_AD4115_REG_STATUS = 0x00
_AD4115_REG_ADCMODE = 0x01
_AD4115_REG_DATA = 0x04
_AD4115_REG_ID = 0x07
_AD4115_REG_CH0 = 0x10
_AD4115_REG_SETUPCON0 = 0x20
class Relay:
"""Shuttler AFE relay switches.
It controls the AFE relay switches and the LEDs. Switch a relay on to
enable the AFE output of the corresponding channel, and off to disable it.
The LEDs indicate the relay status.
.. note::
The relay does not disable ADC measurements. Voltage of any channels
can still be read by the ADC even after switching off the relays.
:param spi_device: SPI bus device name.
:param core_device: Core device name.
"""
kernel_invariant = {"core", "bus"}
def __init__(self, dmgr, spi_device, core_device="core"):
self.core = dmgr.get(core_device)
self.bus = dmgr.get(spi_device)
@kernel
def init(self):
"""Initialize SPI device.
Configure the SPI bus for 16-bit, write-only transfers that update the
relay switches and the LEDs simultaneously.
"""
self.bus.set_config_mu(
RELAY_SPI_CONFIG, 16, SPIT_RELAY_WR, CS_RELAY | CS_LED)
@kernel
def enable(self, en: TInt32):
"""Enable/Disable relay switches of corresponding channels.
Each bit corresponds to the relay switch of a channel. Asserting a bit
turns on the corresponding relay switch; Deasserting the same bit
turns off the switch instead.
:param en: Switch enable bits. The MSB corresponds to Channel 15, LSB
corresponds to Channel 0.
"""
self.bus.write(en << 16)
class ADC:
"""Shuttler AFE ADC (AD4115) driver.
:param spi_device: SPI bus device name.
:param core_device: Core device name.
"""
kernel_invariant = {"core", "bus"}
def __init__(self, dmgr, spi_device, core_device="core"):
self.core = dmgr.get(core_device)
self.bus = dmgr.get(spi_device)
@kernel
def read_id(self) -> TInt32:
"""Read the product ID of the ADC.
The expected return value is 0x38DX, the 4 LSbs are don't cares.
:return: The read-back product ID.
"""
return self.read16(_AD4115_REG_ID)
@kernel
def reset(self):
"""AD4115 reset procedure.
This performs a write operation of 96 serial clock cycles with DIN
held at high. It resets the entire device, including the register
contents.
.. note::
The datasheet only requires 64 cycles, but reasserting `CS_n` right
after the transfer appears to interrupt the start-up sequence.
"""
self.bus.set_config_mu(ADC_SPI_CONFIG, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(0xffffffff)
self.bus.write(0xffffffff)
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(0xffffffff)
@kernel
def read8(self, addr: TInt32) -> TInt32:
"""Read from 8 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
16, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xff
@kernel
def read16(self, addr: TInt32) -> TInt32:
"""Read from 16 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
24, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xffff
@kernel
def read24(self, addr: TInt32) -> TInt32:
"""Read from 24 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
32, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xffffff
@kernel
def write8(self, addr: TInt32, data: TInt32):
"""Write to 8 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 16, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xff) << 16)
@kernel
def write16(self, addr: TInt32, data: TInt32):
"""Write to 16 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 24, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xffff) << 8)
@kernel
def write24(self, addr: TInt32, data: TInt32):
"""Write to 24 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xffffff))
@kernel
def read_ch(self, channel: TInt32) -> TFloat:
"""Sample a Shuttler channel on the AFE.
It performs a single conversion using profile 0 and setup 0, on the
selected channel. The sample is then read back and converted to volts.
:param channel: Shuttler channel to be sampled.
:return: Voltage sample in volt.
"""
# Always configure Profile 0 for single conversion
self.write16(_AD4115_REG_CH0, 0x8000 | ((channel * 2 + 1) << 4))
self.write16(_AD4115_REG_SETUPCON0, 0x1300)
self.single_conversion()
delay(100*us)
adc_code = self.read24(_AD4115_REG_DATA)
return ((adc_code / (1 << 23)) - 1) * 2.5 / 0.1
@kernel
def single_conversion(self):
"""Place the ADC in single conversion mode.
The ADC returns to standby mode after the conversion is complete.
"""
self.write16(_AD4115_REG_ADCMODE, 0x8010)
@kernel
def standby(self):
"""Place the ADC in standby mode and disables power down the clock.
The ADC can be returned to single conversion mode by calling
:meth:`single_conversion`.
"""
# Selecting internal XO (0b00) also disables clock during standby
self.write16(_AD4115_REG_ADCMODE, 0x8020)
@kernel
def power_down(self):
"""Place the ADC in power-down mode.
The ADC must be reset before returning to other modes.
.. note::
The AD4115 datasheet suggests placing the ADC in standby mode
before power-down. This is to prevent accidental entry into the
power-down mode.
.. seealso::
:meth:`standby`
:meth:`power_up`
"""
self.write16(_AD4115_REG_ADCMODE, 0x8030)
@kernel
def power_up(self):
"""Exit the ADC power-down mode.
The ADC should be in power-down mode before calling this method.
.. seealso::
:meth:`power_down`
"""
self.reset()
# Although the datasheet claims 500 us reset wait time, only waiting
# for ~500 us can result in DOUT pin stuck in high
delay(2500*us)
@kernel
def calibrate(self, volts, trigger, config, samples=[-5.0, 0.0, 5.0]):
"""Calibrate the Shuttler waveform generator using the ADC on the AFE.
It finds the average slope and offset from the measured samples, and
compensates by writing the pre-DAC gain and offset registers in the
configuration registers.
.. note::
If the pre-calibration slope rate < 1, the calibration procedure
will introduce a pre-DAC gain compensation. However, this may
saturate the pre-DAC voltage code. (See :class:`Config` notes).
Shuttler cannot cover the entire +/- 10 V range in this case.
.. seealso::
:meth:`Config.set_gain`
:meth:`Config.set_offset`
:param volts: A list of all 16 cubic DC-bias spline.
(See :class:`DCBias`)
:param trigger: The Shuttler spline coefficient update trigger.
:param config: The Shuttler Core configuration registers.
:param samples: A list of sample voltages for calibration. There must
be at least 2 samples to perform slope rate calculation.
"""
assert len(volts) == 16
assert len(samples) > 1
measurements = [0.0] * len(samples)
for ch in range(16):
# Find the average slope rate and offset
for i in range(len(samples)):
self.core.break_realtime()
volts[ch].set_waveform(
shuttler_volt_to_mu(samples[i]), 0, 0, 0)
trigger.trigger(1 << ch)
measurements[i] = self.read_ch(ch)
# Find the average output slope
slope_sum = 0.0
for i in range(len(samples) - 1):
slope_sum += (measurements[i+1] - measurements[i])/(samples[i+1] - samples[i])
slope_avg = slope_sum / (len(samples) - 1)
gain_code = int32(1 / slope_avg * (2 ** 16)) & 0xffff
# Scale the measurements by 1/slope, find average offset
offset_sum = 0.0
for i in range(len(samples)):
offset_sum += (measurements[i] / slope_avg) - samples[i]
offset_avg = offset_sum / len(samples)
offset_code = shuttler_volt_to_mu(-offset_avg)
self.core.break_realtime()
config.set_gain(ch, gain_code)
delay_mu(int64(self.core.ref_multiplier))
config.set_offset(ch, offset_code)
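A kernel-side sketch tying the Relay and ADC classes above together: enable one AFE output and read its voltage back. The device names mirror those used by the Shuttler example experiment below and are assumptions about the device_db:

from artiq.experiment import *

class ShuttlerReadback(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("shuttler0_relay")
        self.setattr_device("shuttler0_adc")

    @kernel
    def run(self):
        self.core.reset()
        self.shuttler0_relay.init()
        self.shuttler0_relay.enable(0b1)    # switch on the OUT0 relay
        self.shuttler0_adc.power_up()
        delay(1*ms)
        v = self.shuttler0_adc.read_ch(0)   # OUT0 voltage in volts
        print(v)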

View File

@ -72,10 +72,6 @@ class SPIMaster:
self.channel = channel
self.update_xfer_duration_mu(div, length)
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@portable
def frequency_to_div(self, f):
"""Convert a SPI clock frequency to the closest SPI clock divider."""
@ -277,8 +273,9 @@ class NRTSPIMaster:
def set_config_mu(self, flags=0, length=8, div=6, cs=1):
"""Set the ``config`` register.
In many cases, the SPI configuration is already set by the firmware
and you do not need to call this method.
Note that the non-realtime SPI cores are usually clocked by the system
clock and not the RTIO clock. In many cases, the SPI configuration is
already set by the firmware and you do not need to call this method.
"""
spi_set_config(self.busno, flags, length, div, cs)

View File

@ -23,12 +23,12 @@ def y_mu_to_full_scale(y):
@portable
def adc_mu_to_volts(x, gain, corrected_fs=True):
def adc_mu_to_volts(x, gain):
"""Convert servo ADC data from machine units to Volt."""
val = (x >> 1) & 0xffff
mask = 1 << 15
val = -(val & mask) + (val & ~mask)
return sampler.adc_mu_to_volt(val, gain, corrected_fs)
return sampler.adc_mu_to_volt(val, gain)
class SUServo:
@ -62,15 +62,14 @@ class SUServo:
:param gains: Initial value for PGIA gains shift register
(default: 0x0000). Knowledge of this state is not transferred
between experiments.
:param sampler_hw_rev: Sampler's revision string
:param core_device: Core device name
"""
kernel_invariants = {"channel", "core", "pgia", "cplds", "ddses",
"ref_period_mu", "corrected_fs"}
"ref_period_mu"}
def __init__(self, dmgr, channel, pgia_device,
cpld_devices, dds_devices,
gains=0x0000, sampler_hw_rev="v2.2", core_device="core"):
gains=0x0000, core_device="core"):
self.core = dmgr.get(core_device)
self.pgia = dmgr.get(pgia_device)
@ -82,13 +81,8 @@ class SUServo:
self.gains = gains
self.ref_period_mu = self.core.seconds_to_mu(
self.core.coarse_ref_period)
self.corrected_fs = sampler.Sampler.use_corrected_fs(sampler_hw_rev)
assert self.ref_period_mu == self.core.ref_multiplier
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def init(self):
"""Initialize the servo, Sampler and both Urukuls.
@ -240,7 +234,7 @@ class SUServo:
"""
val = self.get_adc_mu(channel)
gain = (self.gains >> (channel*2)) & 0b11
return adc_mu_to_volts(val, gain, self.corrected_fs)
return adc_mu_to_volts(val, gain)
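A usage sketch for the servo ADC read-out above, assuming a device_db entry "suservo0" built from this class; the PGIA gains passed at construction must match the hardware state for the volt conversion to be meaningful:

from artiq.experiment import *

class SUServoADCRead(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("suservo0")

    @kernel
    def run(self):
        self.core.reset()
        self.suservo0.init()
        self.suservo0.set_config(enable=1)
        delay(10*us)
        v = self.suservo0.get_adc(0)   # Sampler channel 0 in volts
        print(v)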
class Channel:
@ -261,10 +255,6 @@ class Channel:
self.servo.channel)
self.dds = self.servo.ddses[self.servo_channel // 4]
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def set(self, en_out, en_iir=0, profile=0):
"""Operate channel.

View File

@ -36,10 +36,6 @@ class TTLOut:
self.channel = channel
self.target_o = channel << 8
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def output(self):
pass
@ -132,10 +128,6 @@ class TTLInOut:
self.target_sens = (channel << 8) + 2
self.target_sample = (channel << 8) + 3
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def set_oe(self, oe):
rtio_output(self.target_oe, 1 if oe else 0)
@ -473,10 +465,6 @@ class TTLClockGen:
self.acc_width = numpy.int64(acc_width)
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@portable
def frequency_to_ftw(self, frequency):
"""Returns the frequency tuning word corresponding to the given

View File

@ -5,7 +5,7 @@ import numpy as np
from PyQt5 import QtCore, QtWidgets
from sipyco import pyon
from artiq.tools import scale_from_metadata, short_format, exc_to_warning
from artiq.tools import short_format, exc_to_warning
from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
from artiq.gui.models import DictSyncTreeSepModel
from artiq.gui.scientific_spinbox import ScientificSpinBox
@ -14,18 +14,92 @@ from artiq.gui.scientific_spinbox import ScientificSpinBox
logger = logging.getLogger(__name__)
async def rename(key, new_key, value, metadata, persist, dataset_ctl):
if key != new_key:
async def rename(key, newkey, value, dataset_ctl):
if key != newkey:
await dataset_ctl.delete(key)
await dataset_ctl.set(new_key, value, metadata=metadata, persist=persist)
await dataset_ctl.set(newkey, value)
class CreateEditDialog(QtWidgets.QDialog):
def __init__(self, parent, dataset_ctl, key=None, value=None, metadata=None, persist=False):
class Editor(QtWidgets.QDialog):
def __init__(self, parent, dataset_ctl, key, value):
QtWidgets.QDialog.__init__(self, parent=parent)
self.dataset_ctl = dataset_ctl
self.key = key
self.initial_type = type(value)
self.setWindowTitle("Edit dataset")
grid = QtWidgets.QGridLayout()
self.setLayout(grid)
grid.addWidget(QtWidgets.QLabel("Name:"), 0, 0)
self.name_widget = QtWidgets.QLineEdit()
self.name_widget.setText(key)
grid.addWidget(self.name_widget, 0, 1)
grid.addWidget(QtWidgets.QLabel("Value:"), 1, 0)
grid.addWidget(self.get_edit_widget(value), 1, 1)
buttons = QtWidgets.QDialogButtonBox(
QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel)
grid.setRowStretch(2, 1)
grid.addWidget(buttons, 3, 0, 1, 2)
buttons.accepted.connect(self.accept)
buttons.rejected.connect(self.reject)
def accept(self):
newkey = self.name_widget.text()
value = self.initial_type(self.get_edit_widget_value())
asyncio.ensure_future(rename(self.key, newkey, value, self.dataset_ctl))
QtWidgets.QDialog.accept(self)
def get_edit_widget(self, initial_value):
raise NotImplementedError
def get_edit_widget_value(self):
raise NotImplementedError
class NumberEditor(Editor):
def get_edit_widget(self, initial_value):
self.edit_widget = ScientificSpinBox()
self.edit_widget.setDecimals(13)
self.edit_widget.setPrecision()
self.edit_widget.setRelativeStep()
self.edit_widget.setValue(float(initial_value))
return self.edit_widget
def get_edit_widget_value(self):
return self.edit_widget.value()
class BoolEditor(Editor):
def get_edit_widget(self, initial_value):
self.edit_widget = QtWidgets.QCheckBox()
self.edit_widget.setChecked(bool(initial_value))
return self.edit_widget
def get_edit_widget_value(self):
return self.edit_widget.isChecked()
class StringEditor(Editor):
def get_edit_widget(self, initial_value):
self.edit_widget = QtWidgets.QLineEdit()
self.edit_widget.setText(initial_value)
return self.edit_widget
def get_edit_widget_value(self):
return self.edit_widget.text()
class Creator(QtWidgets.QDialog):
def __init__(self, parent, dataset_ctl):
QtWidgets.QDialog.__init__(self, parent=parent)
self.dataset_ctl = dataset_ctl
self.setWindowTitle("Create dataset" if key is None else "Edit dataset")
self.setWindowTitle("Create dataset")
grid = QtWidgets.QGridLayout()
grid.setRowMinimumHeight(1, 40)
grid.setColumnMinimumWidth(2, 60)
@ -43,21 +117,9 @@ class CreateEditDialog(QtWidgets.QDialog):
grid.addWidget(self.data_type, 1, 2)
self.value_widget.textChanged.connect(self.dtype)
grid.addWidget(QtWidgets.QLabel("Unit:"), 2, 0)
self.unit_widget = QtWidgets.QLineEdit()
grid.addWidget(self.unit_widget, 2, 1)
grid.addWidget(QtWidgets.QLabel("Scale:"), 3, 0)
self.scale_widget = QtWidgets.QLineEdit()
grid.addWidget(self.scale_widget, 3, 1)
grid.addWidget(QtWidgets.QLabel("Precision:"), 4, 0)
self.precision_widget = QtWidgets.QLineEdit()
grid.addWidget(self.precision_widget, 4, 1)
grid.addWidget(QtWidgets.QLabel("Persist:"), 5, 0)
grid.addWidget(QtWidgets.QLabel("Persist:"), 2, 0)
self.box_widget = QtWidgets.QCheckBox()
grid.addWidget(self.box_widget, 5, 1)
grid.addWidget(self.box_widget, 2, 1)
self.ok = QtWidgets.QPushButton('&Ok')
self.ok.setEnabled(False)
@ -67,62 +129,23 @@ class CreateEditDialog(QtWidgets.QDialog):
self.ok, QtWidgets.QDialogButtonBox.AcceptRole)
self.buttons.addButton(
self.cancel, QtWidgets.QDialogButtonBox.RejectRole)
grid.setRowStretch(6, 1)
grid.addWidget(self.buttons, 7, 0, 1, 3, alignment=QtCore.Qt.AlignHCenter)
grid.setRowStretch(3, 1)
grid.addWidget(self.buttons, 4, 0, 1, 3)
self.buttons.accepted.connect(self.accept)
self.buttons.rejected.connect(self.reject)
self.key = key
self.name_widget.setText(key)
value_edit_string = self.value_to_edit_string(value)
if metadata is not None:
scale = scale_from_metadata(metadata)
t = value.dtype if value is np.ndarray else type(value)
if scale != 1 and np.issubdtype(t, np.number):
# degenerates to float type
value_edit_string = self.value_to_edit_string(
np.float64(value / scale))
self.unit_widget.setText(metadata.get('unit', ''))
self.scale_widget.setText(str(metadata.get('scale', '')))
self.precision_widget.setText(str(metadata.get('precision', '')))
self.value_widget.setText(value_edit_string)
self.box_widget.setChecked(persist)
def accept(self):
key = self.name_widget.text()
value = self.value_widget.text()
persist = self.box_widget.isChecked()
unit = self.unit_widget.text()
scale = self.scale_widget.text()
precision = self.precision_widget.text()
metadata = {}
if unit != "":
metadata['unit'] = unit
if scale != "":
metadata['scale'] = float(scale)
if precision != "":
metadata['precision'] = int(precision)
scale = scale_from_metadata(metadata)
value = self.parse_edit_string(value)
t = value.dtype if value is np.ndarray else type(value)
if scale != 1 and np.issubdtype(t, np.number):
# degenerates to float type
value = np.float64(value * scale)
if self.key and self.key != key:
asyncio.ensure_future(exc_to_warning(rename(self.key, key, value, metadata, persist, self.dataset_ctl)))
else:
asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(key, value, metadata=metadata, persist=persist)))
self.key = key
asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(
key, pyon.decode(value), persist)))
QtWidgets.QDialog.accept(self)
def dtype(self):
txt = self.value_widget.text()
try:
result = self.parse_edit_string(txt)
# ensure only pyon compatible types are permissable
pyon.encode(result)
result = pyon.decode(txt)
except:
pixmap = self.style().standardPixmap(
QtWidgets.QStyle.SP_MessageBoxWarning)
@ -132,35 +155,6 @@ class CreateEditDialog(QtWidgets.QDialog):
self.data_type.setText(type(result).__name__)
self.ok.setEnabled(True)
@staticmethod
def parse_edit_string(s):
if s == "":
raise TypeError
_eval_dict = {
"__builtins__": {},
"array": np.array,
"null": np.nan,
"inf": np.inf
}
for t_ in pyon._numpy_scalar:
_eval_dict[t_] = eval("np.{}".format(t_), {"np": np})
return eval(s, _eval_dict, {})
@staticmethod
def value_to_edit_string(v):
t = type(v)
r = ""
if isinstance(v, np.generic):
r += t.__name__
r += "("
r += repr(v)
r += ")"
elif v is None:
return r
else:
r += repr(v)
return r
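A plain-Python illustration (no GUI) of the metadata scaling convention used by the dialog above: the dataset keeps the raw value, while the edit box shows value/scale next to the unit label and multiplies by the scale again on accept. The numbers are arbitrary:

metadata = {"unit": "MHz", "scale": 1e6, "precision": 4}
raw = 250e6                            # value as stored in the dataset
shown = raw / metadata["scale"]        # 250.0, displayed alongside "MHz"
edited = 300.0                         # number typed by the user, in display units
new_raw = edited * metadata["scale"]   # 3e8, written back via dataset_ctl.set()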
class Model(DictSyncTreeSepModel):
def __init__(self, init):
@ -172,13 +166,13 @@ class Model(DictSyncTreeSepModel):
if column == 1:
return "Y" if v[0] else "N"
elif column == 2:
return short_format(v[1], v[2])
return short_format(v[1])
else:
raise ValueError
class DatasetsDock(QtWidgets.QDockWidget):
def __init__(self, dataset_sub, dataset_ctl):
def __init__(self, datasets_sub, dataset_ctl):
QtWidgets.QDockWidget.__init__(self, "Datasets")
self.setObjectName("Datasets")
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
@ -218,7 +212,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table.addAction(delete_action)
self.table_model = Model(dict())
dataset_sub.add_setmodel_callback(self.set_model)
datasets_sub.add_setmodel_callback(self.set_model)
def _search_datasets(self):
if hasattr(self, "table_model_filter"):
@ -232,7 +226,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table.setModel(self.table_model_filter)
def create_clicked(self):
CreateEditDialog(self, self.dataset_ctl).open()
Creator(self, self.dataset_ctl).open()
def edit_clicked(self):
idx = self.table.selectedIndexes()
@ -240,8 +234,19 @@ class DatasetsDock(QtWidgets.QDockWidget):
idx = self.table_model_filter.mapToSource(idx[0])
key = self.table_model.index_to_key(idx)
if key is not None:
persist, value, metadata = self.table_model.backing_store[key]
CreateEditDialog(self, self.dataset_ctl, key, value, metadata, persist).open()
persist, value = self.table_model.backing_store[key]
t = type(value)
if np.issubdtype(t, np.number):
dialog_cls = NumberEditor
elif np.issubdtype(t, np.bool_):
dialog_cls = BoolEditor
elif np.issubdtype(t, np.unicode_):
dialog_cls = StringEditor
else:
logger.error("Cannot edit dataset %s: "
"type %s is not supported", key, t)
return
dialog_cls(self, self.dataset_ctl, key, value).open()
def delete_clicked(self):
idx = self.table.selectedIndexes()

View File

@ -11,9 +11,7 @@ from sipyco import pyon
from artiq.gui.entries import procdesc_to_entry, ScanEntry
from artiq.gui.fuzzy_select import FuzzySelectWidget
from artiq.gui.tools import (LayoutWidget, WheelFilter,
log_level_to_name, get_open_file_name)
from artiq.tools import parse_devarg_override, unparse_devarg_override
from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name
logger = logging.getLogger(__name__)
@ -25,6 +23,15 @@ logger = logging.getLogger(__name__)
# 2. file:<class name>@<file name>
class _WheelFilter(QtCore.QObject):
def eventFilter(self, obj, event):
if (event.type() == QtCore.QEvent.Wheel and
event.modifiers() != QtCore.Qt.NoModifier):
event.ignore()
return True
return False
class _ArgumentEditor(QtWidgets.QTreeWidget):
def __init__(self, manager, dock, expurl):
self.manager = manager
@ -48,7 +55,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self.setStyleSheet("QTreeWidget {background: " +
self.palette().midlight().color().name() + " ;}")
self.viewport().installEventFilter(WheelFilter(self.viewport(), True))
self.viewport().installEventFilter(_WheelFilter(self.viewport()))
self._groups = dict()
self._arg_to_widgets = dict()
@ -152,23 +159,6 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self._groups[name] = group
return group
def update_argument(self, name, argument):
widgets = self._arg_to_widgets[name]
# Qt needs a setItemWidget() to handle layout correctly,
# simply replacing the entry inside the LayoutWidget
# results in a bug.
widgets["entry"].deleteLater()
widgets["entry"] = procdesc_to_entry(argument["desc"])(argument)
widgets["disable_other_scans"].setVisible(
isinstance(widgets["entry"], ScanEntry))
widgets["fix_layout"].deleteLater()
widgets["fix_layout"] = LayoutWidget()
widgets["fix_layout"].addWidget(widgets["entry"])
self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
self.updateGeometries()
def _recompute_argument_clicked(self, name):
asyncio.ensure_future(self._recompute_argument(name))
@ -185,7 +175,22 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
state = procdesc_to_entry(procdesc).default_state(procdesc)
argument["desc"] = procdesc
argument["state"] = state
self.update_argument(name, argument)
# Qt needs a setItemWidget() to handle layout correctly,
# simply replacing the entry inside the LayoutWidget
# results in a bug.
widgets = self._arg_to_widgets[name]
widgets["entry"].deleteLater()
widgets["entry"] = procdesc_to_entry(procdesc)(argument)
widgets["disable_other_scans"].setVisible(
isinstance(widgets["entry"], ScanEntry))
widgets["fix_layout"].deleteLater()
widgets["fix_layout"] = LayoutWidget()
widgets["fix_layout"].addWidget(widgets["entry"])
self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
self.updateGeometries()
def _disable_other_scans(self, current_name):
for name, widgets in self._arg_to_widgets.items():
@ -263,7 +268,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
datetime.setDate(QtCore.QDate.currentDate())
else:
datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch(
int(scheduling["due_date"]*1000)))
scheduling["due_date"]*1000))
datetime_en.setChecked(scheduling["due_date"] is not None)
def update_datetime(dt):
@ -306,7 +311,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
flush = self.flush
flush.setToolTip("Flush the pipeline (of current- and higher-priority "
"experiments) before starting the experiment")
self.layout.addWidget(flush, 2, 2)
self.layout.addWidget(flush, 2, 2, 1, 2)
flush.setChecked(scheduling["flush"])
@ -314,20 +319,6 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
scheduling["flush"] = bool(checked)
flush.stateChanged.connect(update_flush)
devarg_override = QtWidgets.QComboBox()
devarg_override.setEditable(True)
devarg_override.lineEdit().setPlaceholderText("Override device arguments")
devarg_override.lineEdit().setClearButtonEnabled(True)
devarg_override.insertItem(0, "core:analyze_at_run_end=True")
self.layout.addWidget(devarg_override, 2, 3)
devarg_override.setCurrentText(options["devarg_override"])
def update_devarg_override(text):
options["devarg_override"] = text
devarg_override.editTextChanged.connect(update_devarg_override)
self.devarg_override = devarg_override
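For illustration, the default item above ("core:analyze_at_run_end=True") overrides a constructor argument of the named device when the experiment is submitted. The actual parsing is done by artiq.tools.parse_devarg_override; this hypothetical sketch only shows the general shape of the result:

override_text = "core:analyze_at_run_end=True"
device, assignment = override_text.split(":", 1)
key, value = assignment.split("=", 1)
devarg_override = {device: {key: value == "True"}}
# -> {"core": {"analyze_at_run_end": True}}, carried in expid["devarg_override"]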
log_level = QtWidgets.QComboBox()
log_level.addItems(log_levels)
log_level.setCurrentIndex(1)
@ -348,7 +339,6 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
if "repo_rev" in options:
repo_rev = QtWidgets.QLineEdit()
repo_rev.setPlaceholderText("current")
repo_rev.setClearButtonEnabled(True)
repo_rev_label = QtWidgets.QLabel("Revision:")
repo_rev_label.setToolTip("Experiment repository revision "
"(commit ID) to use")
@ -481,9 +471,6 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
return
try:
if "devarg_override" in expid:
self.devarg_override.setCurrentText(
unparse_devarg_override(expid["devarg_override"]))
self.log_level.setCurrentIndex(log_levels.index(
log_level_to_name(expid["log_level"])))
if ("repo_rev" in expid and
@ -657,8 +644,7 @@ class ExperimentManager:
else:
# mutated by _ExperimentDock
options = {
"log_level": logging.WARNING,
"devarg_override": ""
"log_level": logging.WARNING
}
if expurl[:5] == "repo:":
options["repo_rev"] = None
@ -679,20 +665,6 @@ class ExperimentManager:
self.argument_ui_names[expurl] = ui_name
return arguments
def set_argument_value(self, expurl, name, value):
try:
argument = self.submission_arguments[expurl][name]
if argument["desc"]["ty"] == "Scannable":
ty = value["ty"]
argument["state"]["selected"] = ty
argument["state"][ty] = value
else:
argument["state"] = value
if expurl in self.open_experiments.keys():
self.open_experiments[expurl].argeditor.update_argument(name, argument)
except:
logger.warn("Failed to set value for argument \"{}\" in experiment: {}.".format(name, expurl), exc_info=1)
def get_submission_arguments(self, expurl):
if expurl in self.submission_arguments:
return self.submission_arguments[expurl]
@ -753,14 +725,7 @@ class ExperimentManager:
entry_cls = procdesc_to_entry(argument["desc"])
argument_values[name] = entry_cls.state_to_value(argument["state"])
try:
devarg_override = parse_devarg_override(options["devarg_override"])
except:
logger.error("Failed to parse device argument overrides for %s", expurl)
return
expid = {
"devarg_override": devarg_override,
"log_level": options["log_level"],
"file": file,
"class_name": class_name,

View File

@ -7,11 +7,7 @@ device_db = {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
"arguments": {"host": core_addr, "ref_period": 1e-9}
},
"core_log": {
"type": "controller",
@ -26,13 +22,6 @@ device_db = {
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",

View File

@ -5,11 +5,7 @@ device_db = {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
"arguments": {"host": core_addr, "ref_period": 1/(8*150e6)}
},
"core_log": {
"type": "controller",
@ -24,13 +20,6 @@ device_db = {
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",

View File

@ -1,18 +0,0 @@
{
"target": "kasli",
"variant": "shuttlerdemo",
"hw_rev": "v2.0",
"drtio_role": "master",
"peripherals": [
{
"type": "shuttler",
"ports": [0]
},
{
"type": "dio",
"ports": [1],
"bank_direction_low": "input",
"bank_direction_high": "output"
}
]
}

View File

@ -1,330 +0,0 @@
from artiq.experiment import *
from artiq.coredevice.shuttler import shuttler_volt_to_mu
DAC_Fs_MHZ = 125
CORDIC_GAIN = 1.64676
@portable
def shuttler_phase_offset(offset_degree):
return round(offset_degree / 360 * (2 ** 16))
@portable
def shuttler_freq_mu(freq_mhz):
return round(float(2) ** 32 / DAC_Fs_MHZ * freq_mhz)
@portable
def shuttler_chirp_rate_mu(freq_mhz_per_us):
return round(float(2) ** 32 * freq_mhz_per_us / (DAC_Fs_MHZ ** 2))
@portable
def shuttler_freq_sweep(start_f_MHz, end_f_MHz, time_us):
return shuttler_chirp_rate_mu((end_f_MHz - start_f_MHz)/(time_us))
@portable
def shuttler_volt_amp_mu(volt):
return shuttler_volt_to_mu(volt)
@portable
def shuttler_volt_damp_mu(volt_per_us):
return round(float(2) ** 32 * (volt_per_us / 20) / DAC_Fs_MHZ)
@portable
def shuttler_volt_ddamp_mu(volt_per_us_square):
return round(float(2) ** 48 * (volt_per_us_square / 20) * 2 / (DAC_Fs_MHZ ** 2))
@portable
def shuttler_volt_dddamp_mu(volt_per_us_cube):
return round(float(2) ** 48 * (volt_per_us_cube / 20) * 6 / (DAC_Fs_MHZ ** 3))
@portable
def shuttler_dds_amp_mu(volt):
return shuttler_volt_amp_mu(volt / CORDIC_GAIN)
@portable
def shuttler_dds_damp_mu(volt_per_us):
return shuttler_volt_damp_mu(volt_per_us / CORDIC_GAIN)
@portable
def shuttler_dds_ddamp_mu(volt_per_us_square):
return shuttler_volt_ddamp_mu(volt_per_us_square / CORDIC_GAIN)
@portable
def shuttler_dds_dddamp_mu(volt_per_us_cube):
return shuttler_volt_dddamp_mu(volt_per_us_cube / CORDIC_GAIN)
class Shuttler(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("core_dma")
self.setattr_device("scheduler")
self.shuttler0_leds = [ self.get_device("shuttler0_led{}".format(i)) for i in range(2) ]
self.setattr_device("shuttler0_config")
self.setattr_device("shuttler0_trigger")
self.shuttler0_dcbias = [ self.get_device("shuttler0_dcbias{}".format(i)) for i in range(16) ]
self.shuttler0_dds = [ self.get_device("shuttler0_dds{}".format(i)) for i in range(16) ]
self.setattr_device("shuttler0_relay")
self.setattr_device("shuttler0_adc")
@kernel
def record(self):
with self.core_dma.record("example_waveform"):
self.example_waveform()
@kernel
def init(self):
self.led()
self.relay_init()
self.adc_init()
self.shuttler_reset()
@kernel
def run(self):
self.core.reset()
self.core.break_realtime()
self.init()
self.record()
example_waveform_handle = self.core_dma.get_handle("example_waveform")
print("Example Waveforms are on OUT0 and OUT1")
self.core.break_realtime()
while not(self.scheduler.check_termination()):
delay(1*s)
self.core_dma.playback_handle(example_waveform_handle)
@kernel
def shuttler_reset(self):
for i in range(16):
self.shuttler_channel_reset(i)
# To avoid RTIO Underflow
delay(50*us)
@kernel
def shuttler_channel_reset(self, ch):
self.shuttler0_dcbias[ch].set_waveform(
a0=0,
a1=0,
a2=0,
a3=0,
)
self.shuttler0_dds[ch].set_waveform(
b0=0,
b1=0,
b2=0,
b3=0,
c0=0,
c1=0,
c2=0,
)
self.shuttler0_trigger.trigger(1 << ch)
@kernel
def example_waveform(self):
# Equation of Output Waveform
# w(t_us) = a(t_us) + b(t_us) * cos(c(t_us))
# Step 1:
# Enable the Output Relay of OUT0 and OUT1
# Step 2: Cosine Wave Frequency Sweep from 10kHz to 50kHz in 500us
# OUT0: b(t_us) = 1
# c(t_us) = 2 * pi * (0.08 * t_us ^ 2 + 0.01 * t_us)
# OUT1: b(t_us) = 1
# c(t_us) = 2 * pi * (0.05 * t_us)
# Step 3(after 500us): Cosine Wave with 180 Degree Phase Offset
# OUT0: b(t_us) = 1
# c(t_us) = 2 * pi * (0.05 * t_us) + pi
# OUT1: b(t_us) = 1
# c(t_us) = 2 * pi * (0.05 * t_us)
# Step 4(after 500us): Cosine Wave with Amplitude Envelope
# OUT0: b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
# c(t_us) = 2 * pi * (0.05 * t_us)
# OUT1: b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
# c(t_us) = 0
# Step 5(after 500us): Sawtooth Wave Modulated with 50kHz Cosine Wave
# OUT0: a(t_us) = 0.01 * t_us - 5
# b(t_us) = 1
# c(t_us) = 2 * pi * (0.05 * t_us)
# OUT1: a(t_us) = 0.01 * t_us - 5
# Step 6(after 1000us): A Combination of Previous Waveforms
# OUT0: a(t_us) = 0.01 * t_us - 5
# b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
# c(t_us) = 2 * pi * (0.08 * t_us ^ 2 + 0.01 * t_us)
# Step 7(after 500us): Mirrored Waveform in Step 6
# OUT0: a(t_us) = 2.5 + -0.01 * (1000 ^ 2) * t_us
# b(t_us) = 0.0001367187 * t_us ^ 2 - 0.06835937 * t_us
# c(t_us) = 2 * pi * (-0.08 * t_us ^ 2 + 0.05 * t_us) + pi
# Step 8(after 500us):
# Disable Output Relay of OUT0 and OUT1
# Reset OUT0 and OUT1
## Step 1 ##
self.shuttler0_relay.enable(0b11)
## Step 2 ##
start_f_MHz = 0.01
end_f_MHz = 0.05
duration_us = 500
# OUT0 and OUT1 have their frequency and phase aligned at 500us
self.shuttler0_dds[0].set_waveform(
b0=shuttler_dds_amp_mu(1.0),
b1=0,
b2=0,
b3=0,
c0=0,
c1=shuttler_freq_mu(start_f_MHz),
c2=shuttler_freq_sweep(start_f_MHz, end_f_MHz, duration_us),
)
self.shuttler0_dds[1].set_waveform(
b0=shuttler_dds_amp_mu(1.0),
b1=0,
b2=0,
b3=0,
c0=0,
c1=shuttler_freq_mu(end_f_MHz),
c2=0,
)
self.shuttler0_trigger.trigger(0b11)
delay(500*us)
## Step 3 ##
# OUT0 and OUT1 have a 180 degree phase difference
self.shuttler0_dds[0].set_waveform(
b0=shuttler_dds_amp_mu(1.0),
b1=0,
b2=0,
b3=0,
c0=shuttler_phase_offset(180.0),
c1=shuttler_freq_mu(end_f_MHz),
c2=0,
)
# Phase and Output Setting of OUT1 is retained
# if the channel is not triggered or config is not cleared
self.shuttler0_trigger.trigger(0b1)
delay(500*us)
## Step 4 ##
# b(0) = 0, b(250) = 8.545, b(500) = 0
self.shuttler0_dds[0].set_waveform(
b0=0,
b1=shuttler_dds_damp_mu(0.06835937),
b2=shuttler_dds_ddamp_mu(-0.0001367187),
b3=0,
c0=0,
c1=shuttler_freq_mu(end_f_MHz),
c2=0,
)
self.shuttler0_dds[1].set_waveform(
b0=0,
b1=shuttler_dds_damp_mu(0.06835937),
b2=shuttler_dds_ddamp_mu(-0.0001367187),
b3=0,
c0=0,
c1=0,
c2=0,
)
self.shuttler0_trigger.trigger(0b11)
delay(500*us)
## Step 5 ##
self.shuttler0_dcbias[0].set_waveform(
a0=shuttler_volt_amp_mu(-5.0),
a1=int32(shuttler_volt_damp_mu(0.01)),
a2=0,
a3=0,
)
self.shuttler0_dds[0].set_waveform(
b0=shuttler_dds_amp_mu(1.0),
b1=0,
b2=0,
b3=0,
c0=0,
c1=shuttler_freq_mu(end_f_MHz),
c2=0,
)
self.shuttler0_dcbias[1].set_waveform(
a0=shuttler_volt_amp_mu(-5.0),
a1=int32(shuttler_volt_damp_mu(0.01)),
a2=0,
a3=0,
)
self.shuttler0_dds[1].set_waveform(
b0=0,
b1=0,
b2=0,
b3=0,
c0=0,
c1=0,
c2=0,
)
self.shuttler0_trigger.trigger(0b11)
delay(1000*us)
## Step 6 ##
self.shuttler0_dcbias[0].set_waveform(
a0=shuttler_volt_amp_mu(-2.5),
a1=int32(shuttler_volt_damp_mu(0.01)),
a2=0,
a3=0,
)
self.shuttler0_dds[0].set_waveform(
b0=0,
b1=shuttler_dds_damp_mu(0.06835937),
b2=shuttler_dds_ddamp_mu(-0.0001367187),
b3=0,
c0=0,
c1=shuttler_freq_mu(start_f_MHz),
c2=shuttler_freq_sweep(start_f_MHz, end_f_MHz, duration_us),
)
self.shuttler0_trigger.trigger(0b1)
self.shuttler_channel_reset(1)
delay(500*us)
## Step 7 ##
self.shuttler0_dcbias[0].set_waveform(
a0=shuttler_volt_amp_mu(2.5),
a1=int32(shuttler_volt_damp_mu(-0.01)),
a2=0,
a3=0,
)
self.shuttler0_dds[0].set_waveform(
b0=0,
b1=shuttler_dds_damp_mu(-0.06835937),
b2=shuttler_dds_ddamp_mu(0.0001367187),
b3=0,
c0=shuttler_phase_offset(180.0),
c1=shuttler_freq_mu(end_f_MHz),
c2=shuttler_freq_sweep(end_f_MHz, start_f_MHz, duration_us),
)
self.shuttler0_trigger.trigger(0b1)
delay(500*us)
## Step 8 ##
self.shuttler0_relay.enable(0)
self.shuttler_channel_reset(0)
self.shuttler_channel_reset(1)
@kernel
def led(self):
for i in range(2):
for j in range(3):
self.shuttler0_leds[i].pulse(.1*s)
delay(.1*s)
@kernel
def relay_init(self):
self.shuttler0_relay.init()
self.shuttler0_relay.enable(0x0000)
@kernel
def adc_init(self):
delay_mu(int64(self.core.ref_multiplier))
self.shuttler0_adc.power_up()
delay_mu(int64(self.core.ref_multiplier))
assert self.shuttler0_adc.read_id() >> 4 == 0x038d
delay_mu(int64(self.core.ref_multiplier))
# The actual output voltage is limited by the hardware and by the calculated calibration gain and offset.
# For example, if the system has a calibration gain of 1.06, then the max output voltage = 10 / 1.06 = 9.43V.
# Setting a value larger than 9.43V will result in overflow.
self.shuttler0_adc.calibrate(self.shuttler0_dcbias, self.shuttler0_trigger, self.shuttler0_config)
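# A minimal illustrative sketch (not part of the example above) of the voltage
# limit described in the comment before calibrate(): with a calibration gain g,
# the usable output range shrinks from the 10 V full scale to 10/g V. The
# helper name below is hypothetical and only restates that arithmetic.
def calibrated_output_limit(gain):
    # e.g. gain = 1.06 -> 10.0 / 1.06 ≈ 9.43 V
    return 10.0 / gain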

View File

@ -5,11 +5,7 @@ device_db = {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
"arguments": {"host": core_addr, "ref_period": 1e-9}
},
"core_log": {
"type": "controller",
@ -24,13 +20,6 @@ device_db = {
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",

View File

@ -9,11 +9,7 @@ device_db = {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
"arguments": {"host": core_addr, "ref_period": 1e-9}
},
"core_log": {
"type": "controller",
@ -28,13 +24,6 @@ device_db = {
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",

View File

@ -20,7 +20,7 @@ class DDSSetter(EnvExperiment):
"driver": self.get_device(k),
"frequency": self.get_argument(
"{}_frequency".format(k),
NumberValue(100e6, scale=1e6, unit="MHz", precision=6))
NumberValue(100e6, scale=1e6, unit="MHz", ndecimals=6))
}
@kernel

View File

@ -12,8 +12,8 @@ class PhotonHistogram(EnvExperiment):
self.setattr_device("bdd_sw")
self.setattr_device("pmt")
self.setattr_argument("nbins", NumberValue(100, precision=0, step=1))
self.setattr_argument("repeats", NumberValue(100, precision=0, step=1))
self.setattr_argument("nbins", NumberValue(100, ndecimals=0, step=1))
self.setattr_argument("repeats", NumberValue(100, ndecimals=0, step=1))
self.setattr_dataset("cool_f", 230*MHz)
self.setattr_dataset("detect_f", 220*MHz)

View File

@ -79,7 +79,7 @@ class SpeedBenchmark(EnvExperiment):
"CoreSend1MB",
"CorePrimes"]))
self.setattr_argument("nruns", NumberValue(10, min=1, max=1000,
precision=0, step=1))
ndecimals=0, step=1))
self.setattr_device("core")
self.setattr_device("scheduler")

View File

@ -45,13 +45,13 @@ class ArgumentsDemo(EnvExperiment):
PYONValue(self.get_dataset("foo", default=42)))
self.setattr_argument("number", NumberValue(42e-6,
unit="us",
precision=4))
ndecimals=4))
self.setattr_argument("integer", NumberValue(42,
step=1, precision=0))
step=1, ndecimals=0))
self.setattr_argument("string", StringValue("Hello World"))
self.setattr_argument("scan", Scannable(global_max=400,
default=NoScan(325),
precision=6))
ndecimals=6))
self.setattr_argument("boolean", BooleanValue(True), "Group")
self.setattr_argument("enum", EnumerationValue(
["foo", "bar", "quux"], "foo"), "Group")

View File

@ -4,13 +4,13 @@ from artiq.applets.simple import SimpleApplet
class DemoWidget(QtWidgets.QLabel):
def __init__(self, args, ctl):
def __init__(self, args):
QtWidgets.QLabel.__init__(self)
self.dataset_name = args.dataset
def data_changed(self, value, metadata, persist, mods):
def data_changed(self, data, mods):
try:
n = str(value[self.dataset_name])
n = str(data[self.dataset_name][1])
except (KeyError, ValueError, TypeError):
n = "---"
n = "<font size=15>" + n + "</font>"

View File

@ -13,12 +13,6 @@ dependencies = [
name = "alloc_list"
version = "0.0.0"
[[package]]
name = "arrayvec"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96d30a06541fbafbc7f82ed10c06164cfbd2c401138f6addd8404629c4b16711"
[[package]]
name = "bare-metal"
version = "0.2.5"
@ -326,7 +320,6 @@ dependencies = [
"build_misoc",
"byteorder",
"cslice",
"dyld",
"eh",
"failure",
"failure_derive",
@ -338,7 +331,6 @@ dependencies = [
"proto_artiq",
"riscv",
"smoltcp",
"tar-no-std",
"unwind_backtrace",
]
@ -355,15 +347,10 @@ dependencies = [
name = "satman"
version = "0.0.0"
dependencies = [
"alloc_list",
"board_artiq",
"board_misoc",
"build_misoc",
"cslice",
"eh",
"io",
"log",
"proto_artiq",
"riscv",
]
@ -384,9 +371,9 @@ checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
[[package]]
name = "smoltcp"
version = "0.8.2"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee34c1e1bfc7e9206cc0fb8030a90129b4e319ab53856249bb27642cab914fb3"
checksum = "d2308a1657c8db1f5b4993bab4e620bdbe5623bd81f254cf60326767bb243237"
dependencies = [
"bitflags",
"byteorder",
@ -423,16 +410,6 @@ dependencies = [
"syn",
]
[[package]]
name = "tar-no-std"
version = "0.1.8"
source = "git+https://git.m-labs.hk/M-Labs/tar-no-std?rev=2ab6dc5#2ab6dc58e5249c59c4eb03eaf3a119bcdd678d32"
dependencies = [
"arrayvec",
"bitflags",
"log",
]
[[package]]
name = "unicode-xid"
version = "0.0.4"

View File

@ -16,5 +16,5 @@ build_misoc = { path = "../libbuild_misoc" }
byteorder = { version = "1.0", default-features = false }
crc = { version = "1.7", default-features = false }
board_misoc = { path = "../libboard_misoc", features = ["uart_console", "smoltcp"] }
smoltcp = { version = "0.8.2", default-features = false, features = ["medium-ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp"] }
smoltcp = { version = "0.8.0", default-features = false, features = ["medium-ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp"] }
riscv = { version = "0.6.0", features = ["inline-asm"] }

View File

@ -18,10 +18,8 @@ use board_misoc::slave_fpga;
use board_misoc::{clock, ethmac, net_settings};
use board_misoc::uart_console::Console;
use riscv::register::{mcause, mepc, mtval};
#[cfg(has_ethmac)]
use smoltcp::iface::{Routes, SocketStorage};
#[cfg(has_ethmac)]
use smoltcp::wire::{HardwareAddress, IpAddress, Ipv4Address, Ipv6Address};
use smoltcp::iface::SocketStorage;
use smoltcp::wire::{HardwareAddress, IpAddress, Ipv4Address};
fn check_integrity() -> bool {
extern {
@ -413,29 +411,28 @@ fn network_boot() {
println!("Network addresses: {}", net_addresses);
let mut ip_addrs = [
IpCidr::new(IpAddress::Ipv4(Ipv4Address::UNSPECIFIED), 0),
net_addresses.ipv6_ll_addr,
IpCidr::new(IpAddress::Ipv6(Ipv6Address::UNSPECIFIED), 0)
IpCidr::new(net_addresses.ipv6_ll_addr, 0),
IpCidr::new(net_addresses.ipv6_ll_addr, 0)
];
if let net_settings::Ipv4AddrConfig::Static(ipv4) = net_addresses.ipv4_addr {
ip_addrs[0] = IpCidr::Ipv4(ipv4);
ip_addrs[0] = IpCidr::new(IpAddress::Ipv4(ipv4), 0);
}
if let Some(ipv6) = net_addresses.ipv6_addr {
ip_addrs[2] = IpCidr::Ipv6(ipv6);
};
let mut routes = [None; 2];
let mut interface = smoltcp::iface::InterfaceBuilder::new(net_device, &mut sockets[..])
let mut interface = match net_addresses.ipv6_addr {
Some(addr) => {
ip_addrs[2] = IpCidr::new(addr, 0);
smoltcp::iface::InterfaceBuilder::new(net_device, &mut sockets[..])
.hardware_addr(HardwareAddress::Ethernet(net_addresses.hardware_addr))
.ip_addrs(&mut ip_addrs[..])
.neighbor_cache(neighbor_cache)
.routes(Routes::new(&mut routes[..]))
.finalize();
if let Some(default_route) = net_addresses.ipv4_default_route {
interface.routes_mut().add_default_ipv4_route(default_route).unwrap();
}
if let Some(default_route) = net_addresses.ipv6_default_route {
interface.routes_mut().add_default_ipv6_route(default_route).unwrap();
.finalize()
}
None =>
smoltcp::iface::InterfaceBuilder::new(net_device, &mut sockets[..])
.hardware_addr(HardwareAddress::Ethernet(net_addresses.hardware_addr))
.ip_addrs(&mut ip_addrs[..2])
.neighbor_cache(neighbor_cache)
.finalize()
};
let mut rx_storage = [0; 4096];
let mut tx_storage = [0; 128];
@ -500,7 +497,7 @@ pub extern fn main() -> i32 {
println!(r"|_| |_|_|____/ \___/ \____|");
println!("");
println!("MiSoC Bootloader");
println!("Copyright (c) 2017-2023 M-Labs Limited");
println!("Copyright (c) 2017-2022 M-Labs Limited");
println!("");
#[cfg(has_ethmac)]

View File

@ -26,7 +26,7 @@ $(RUSTOUT)/libksupport.a:
ksupport.elf: $(RUSTOUT)/libksupport.a glue.o
$(link) -T $(KSUPPORT_DIRECTORY)/ksupport.ld \
-lunwind-$(CPU)-libc -lprintf-float -lm
-lunwind-$(CPU)-elf -lprintf-float -lm
%.o: $(KSUPPORT_DIRECTORY)/%.c
$(compile)

View File

@ -157,11 +157,6 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(dma_retrieve = ::dma_retrieve),
api!(dma_playback = ::dma_playback),
api!(subkernel_load_run = ::subkernel_load_run),
api!(subkernel_send_message = ::subkernel_send_message),
api!(subkernel_await_message = ::subkernel_await_message),
api!(subkernel_await_finish = ::subkernel_await_finish),
api!(i2c_start = ::nrt_bus::i2c::start),
api!(i2c_restart = ::nrt_bus::i2c::restart),
api!(i2c_stop = ::nrt_bus::i2c::stop),

View File

@ -333,7 +333,7 @@ extern fn stop_fn(_version: c_int,
}
}
static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [
("RuntimeError", 0),
("RTIOUnderflow", 1),
("RTIOOverflow", 2),
@ -345,7 +345,6 @@ static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
("ZeroDivisionError", 8),
("IndexError", 9),
("UnwrapNoneError", 10),
("SubkernelError", 11)
];
pub fn get_exception_id(name: &str) -> u32 {

View File

@ -17,7 +17,7 @@ void send_to_rtio_log(struct slice data);
#define KERNELCPU_EXEC_ADDRESS 0x45000000
#define KERNELCPU_PAYLOAD_ADDRESS 0x45060000
#define KERNELCPU_LAST_ADDRESS 0x4fffffff
#define KSUPPORT_HEADER_SIZE 0x74
#define KSUPPORT_HEADER_SIZE 0x80
FILE *stderr;

View File

@ -35,6 +35,16 @@ SECTIONS
*(.text .text.*)
} :text
/* https://sourceware.org/bugzilla/show_bug.cgi?id=20475 */
.got : {
PROVIDE(_GLOBAL_OFFSET_TABLE_ = .);
*(.got)
} :text
.got.plt : {
*(.got.plt)
} :text
.rodata :
{
*(.rodata .rodata.*)
@ -50,11 +60,6 @@ SECTIONS
KEEP(*(.eh_frame_hdr))
} > ksupport :text :eh_frame
.gcc_except_table :
{
*(.gcc_except_table)
} > ksupport
.data :
{
*(.data .data.*)

View File

@ -21,6 +21,7 @@ use dyld::Library;
use board_artiq::{mailbox, rpc_queue};
use proto_artiq::{kernel_proto, rpc_proto};
use kernel_proto::*;
#[cfg(has_rtio_dma)]
use board_misoc::csr;
use riscv::register::{mcause, mepc, mtval};
@ -138,7 +139,7 @@ extern fn rpc_send_async(service: u32, tag: &CSlice<u8>, data: *const *const ())
rpc_queue::enqueue(|mut slice| {
let length = {
let mut writer = Cursor::new(&mut slice[4..]);
rpc_proto::send_args(&mut writer, service, tag.as_ref(), data, true)?;
rpc_proto::send_args(&mut writer, service, tag.as_ref(), data)?;
writer.position()
};
io::ProtoWrite::write_u32(&mut slice, length as u32)
@ -155,21 +156,6 @@ extern fn rpc_send_async(service: u32, tag: &CSlice<u8>, data: *const *const ())
})
}
/// Receives the result from an RPC call into the given memory buffer.
///
/// To handle aggregate objects with an a priori unknown size and number of
/// sub-allocations (e.g. a list of list of lists, where, at each level, the number of
/// elements is not statically known), this function needs to be called in a loop:
///
/// On the first call, `slot` should be a buffer of suitable size and alignment for
/// the top-level return value (e.g. in the case of a list, the pointer/length pair).
/// A return value of zero indicates that the value has been completely received.
/// As long as the return value is positive, another allocation with the given number of
/// bytes is needed, so the function should be called again with such a buffer (aligned
/// to the maximum required for any of the possible types according to the target ABI).
///
/// If the RPC call resulted in an exception, it is reconstructed and raised.
#[unwind(allowed)]
extern fn rpc_recv(slot: *mut ()) -> usize {
send(&RpcRecvRequest(slot));
@ -268,7 +254,7 @@ extern fn dma_record_start(name: &CSlice<u8>) {
}
#[unwind(allowed)]
extern fn dma_record_stop(duration: i64, enable_ddma: bool) {
extern fn dma_record_stop(duration: i64) {
unsafe {
dma_record_flush();
@ -284,8 +270,7 @@ extern fn dma_record_stop(duration: i64, enable_ddma: bool) {
DMA_RECORDER.active = false;
send(&DmaRecordStop {
duration: duration as u64,
enable_ddma: enable_ddma
duration: duration as u64
});
}
}
@ -342,7 +327,7 @@ extern fn dma_record_output(target: i32, word: i32) {
}
#[unwind(aborts)]
extern fn dma_record_output_wide(target: i32, words: &CSlice<i32>) {
extern fn dma_record_output_wide(target: i32, words: CSlice<i32>) {
assert!(words.len() <= 16); // enforce the hardware limit
unsafe {
@ -371,7 +356,6 @@ extern fn dma_erase(name: &CSlice<u8>) {
struct DmaTrace {
duration: i64,
address: i32,
uses_ddma: bool,
}
#[unwind(allowed)]
@ -379,12 +363,11 @@ extern fn dma_retrieve(name: &CSlice<u8>) -> DmaTrace {
let name = str::from_utf8(name.as_ref()).unwrap();
send(&DmaRetrieveRequest { name: name });
recv!(&DmaRetrieveReply { trace, duration, uses_ddma } => {
recv!(&DmaRetrieveReply { trace, duration } => {
match trace {
Some(bytes) => Ok(DmaTrace {
address: bytes.as_ptr() as i32,
duration: duration as i64,
uses_ddma: uses_ddma,
duration: duration as i64
}),
None => Err(())
}
@ -395,9 +378,9 @@ extern fn dma_retrieve(name: &CSlice<u8>) -> DmaTrace {
})
}
#[cfg(kernel_has_rtio_dma)]
#[cfg(has_rtio_dma)]
#[unwind(allowed)]
extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) {
extern fn dma_playback(timestamp: i64, ptr: i32) {
assert!(ptr % 64 == 0);
unsafe {
@ -406,10 +389,6 @@ extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) {
csr::cri_con::selected_write(1);
csr::rtio_dma::enable_write(1);
#[cfg(has_drtio)]
if _uses_ddma {
send(&DmaStartRemoteRequest { id: ptr as i32, timestamp: timestamp });
}
while csr::rtio_dma::enable_read() != 0 {}
csr::cri_con::selected_write(0);
@ -420,107 +399,22 @@ extern fn dma_playback(timestamp: i64, ptr: i32, _uses_ddma: bool) {
csr::rtio_dma::error_write(1);
if error & 1 != 0 {
raise!("RTIOUnderflow",
"RTIO underflow at channel {rtio_channel_info:0}, {1} mu",
channel as i64, timestamp as i64, 0);
"RTIO underflow at {0} mu, channel {1}",
timestamp as i64, channel as i64, 0);
}
if error & 2 != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, output, at channel {rtio_channel_info:0}, {1} mu",
channel as i64, timestamp as i64, 0);
"RTIO destination unreachable, output, at {0} mu, channel {1}",
timestamp as i64, channel as i64, 0);
}
}
}
#[cfg(has_drtio)]
if _uses_ddma {
send(&DmaAwaitRemoteRequest { id: ptr as i32 });
recv!(&DmaAwaitRemoteReply { timeout, error, channel, timestamp } => {
if timeout {
raise!("DMAError",
"Error running DMA on satellite device, timed out waiting for results");
}
if error & 1 != 0 {
raise!("RTIOUnderflow",
"RTIO underflow at channel {rtio_channel_info:0}, {1} mu",
channel as i64, timestamp as i64, 0);
}
if error & 2 != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, output, at channel {rtio_channel_info:0}, {1} mu",
channel as i64, timestamp as i64, 0);
}
});
}
}
#[cfg(not(kernel_has_rtio_dma))]
#[cfg(not(has_rtio_dma))]
#[unwind(allowed)]
extern fn dma_playback(_timestamp: i64, _ptr: i32, _uses_ddma: bool) {
unimplemented!("not(kernel_has_rtio_dma)")
}
#[unwind(allowed)]
extern fn subkernel_load_run(id: u32, run: bool) {
send(&SubkernelLoadRunRequest { id: id, run: run });
recv!(&SubkernelLoadRunReply { succeeded } => {
if !succeeded {
raise!("SubkernelError",
"Error loading or running the subkernel");
}
});
}
#[unwind(allowed)]
extern fn subkernel_await_finish(id: u32, timeout: u64) {
send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout });
recv!(SubkernelAwaitFinishReply { status } => {
match status {
SubkernelStatus::NoError => (),
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
}
})
}
#[unwind(aborts)]
extern fn subkernel_send_message(id: u32, count: u8, tag: &CSlice<u8>, data: *const *const ()) {
send(&SubkernelMsgSend {
id: id,
count: count,
tag: tag.as_ref(),
data: data
});
}
#[unwind(allowed)]
extern fn subkernel_await_message(id: u32, timeout: u64, tags: &CSlice<u8>, min: u8, max: u8) -> u8 {
send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() });
recv!(SubkernelMsgRecvReply { status, count } => {
match status {
SubkernelStatus::NoError => {
if count < &min || count > &max {
raise!("SubkernelError",
"Received less or more arguments than expected");
}
*count
}
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
}
})
// RpcRecvRequest should be called `count` times after this to receive message data
extern fn dma_playback(_timestamp: i64, _ptr: i32) {
unimplemented!("not(has_rtio_dma)")
}
unsafe fn attribute_writeback(typeinfo: *const ()) {
@ -562,8 +456,6 @@ unsafe fn attribute_writeback(typeinfo: *const ()) {
}
}
static mut STACK_GUARD_BASE: usize = 0x0;
#[no_mangle]
pub unsafe fn main() {
eh_artiq::reset_exception_buffer(KERNELCPU_PAYLOAD_ADDRESS);
@ -595,7 +487,6 @@ pub unsafe fn main() {
ptr::write_bytes(__bss_start as *mut u8, 0, (_end - __bss_start) as usize);
board_misoc::pmp::init_stack_guard(_sstack_guard as usize);
STACK_GUARD_BASE = _sstack_guard as usize;
board_misoc::cache::flush_cpu_dcache();
board_misoc::cache::flush_cpu_icache();
@ -625,20 +516,10 @@ pub unsafe fn main() {
#[no_mangle]
#[unwind(allowed)]
pub unsafe extern fn exception(_regs: *const u32) {
pub extern fn exception(_regs: *const u32) {
let pc = mepc::read();
let cause = mcause::read().cause();
let mtval = mtval::read();
if let mcause::Trap::Exception(mcause::Exception::LoadFault)
| mcause::Trap::Exception(mcause::Exception::StoreFault) = cause
{
if mtval >= STACK_GUARD_BASE
&& mtval < (STACK_GUARD_BASE + board_misoc::pmp::STACK_GUARD_SIZE)
{
panic!("{:?} at PC {:#08x} in stack guard page ({:#08x}); stack overflow in user kernel code?",
cause, u32::try_from(pc).unwrap(), mtval);
}
}
panic!("{:?} at PC {:#08x}, trap value {:#08x}", cause, u32::try_from(pc).unwrap(), mtval);
}

View File

@ -67,13 +67,13 @@ mod imp {
}
if status & RTIO_O_STATUS_UNDERFLOW != 0 {
raise!("RTIOUnderflow",
"RTIO underflow at channel {rtio_channel_info:0}, {1} mu, slack {2} mu",
channel as i64, timestamp, timestamp - get_counter());
"RTIO underflow at {0} mu, channel {1}, slack {2} mu",
timestamp, channel as i64, timestamp - get_counter());
}
if status & RTIO_O_STATUS_DESTINATION_UNREACHABLE != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, output, at channel {rtio_channel_info:0}, {1} mu",
channel as i64, timestamp, 0);
"RTIO destination unreachable, output, at {0} mu, channel {1}",
timestamp, channel as i64, 0);
}
}
@ -89,7 +89,7 @@ mod imp {
}
}
pub extern fn output_wide(target: i32, data: &CSlice<i32>) {
pub extern fn output_wide(target: i32, data: CSlice<i32>) {
unsafe {
csr::rtio::target_write(target as u32);
// writing target clears o_data
@ -115,7 +115,7 @@ mod imp {
if status & RTIO_I_STATUS_OVERFLOW != 0 {
raise!("RTIOOverflow",
"RTIO input overflow on channel {rtio_channel_info:0}",
"RTIO input overflow on channel {0}",
channel as i64, 0, 0);
}
if status & RTIO_I_STATUS_WAIT_EVENT != 0 {
@ -123,7 +123,7 @@ mod imp {
}
if status & RTIO_I_STATUS_DESTINATION_UNREACHABLE != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, input, on channel {rtio_channel_info:0}",
"RTIO destination unreachable, input, on channel {0}",
channel as i64, 0, 0);
}
@ -143,12 +143,12 @@ mod imp {
if status & RTIO_I_STATUS_OVERFLOW != 0 {
raise!("RTIOOverflow",
"RTIO input overflow on channel {rtio_channel_info:0}",
"RTIO input overflow on channel {0}",
channel as i64, 0, 0);
}
if status & RTIO_I_STATUS_DESTINATION_UNREACHABLE != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, input, on channel {rtio_channel_info:0}",
"RTIO destination unreachable, input, on channel {0}",
channel as i64, 0, 0);
}
@ -168,7 +168,7 @@ mod imp {
if status & RTIO_I_STATUS_OVERFLOW != 0 {
raise!("RTIOOverflow",
"RTIO input overflow on channel {rtio_channel_info:0}",
"RTIO input overflow on channel {0}",
channel as i64, 0, 0);
}
if status & RTIO_I_STATUS_WAIT_EVENT != 0 {
@ -176,7 +176,7 @@ mod imp {
}
if status & RTIO_I_STATUS_DESTINATION_UNREACHABLE != 0 {
raise!("RTIODestinationUnreachable",
"RTIO destination unreachable, input, on channel {rtio_channel_info:0}",
"RTIO destination unreachable, input, on channel {0}",
channel as i64, 0, 0);
}
@ -235,7 +235,7 @@ mod imp {
unimplemented!("not(has_rtio)")
}
pub extern fn output_wide(_target: i32, _data: &CSlice<i32>) {
pub extern fn output_wide(_target: i32, _data: CSlice<i32>) {
unimplemented!("not(has_rtio)")
}

View File

@ -24,4 +24,3 @@ proto_artiq = { path = "../libproto_artiq" }
[features]
uart_console = []
alloc = []

View File

@ -1,77 +0,0 @@
use spi;
use board_misoc::{csr, clock};
const DATA_CTRL_REG : u8 = 0x02;
const IRCML_REG : u8 = 0x05;
const QRCML_REG : u8 = 0x08;
const CLKMODE_REG : u8 = 0x14;
const VERSION_REG : u8 = 0x1F;
const RETIMER_CLK_PHASE : u8 = 0b11;
fn hard_reset() {
unsafe {
// Min Pulse Width: 50ns
csr::dac_rst::out_write(1);
clock::spin_us(1);
csr::dac_rst::out_write(0);
}
}
fn spi_setup(dac_sel: u8, half_duplex: bool, end: bool) -> Result<(), &'static str> {
// Clear the cs_polarity and cs config
spi::set_config(0, 0, 8, 64, 0b1111)?;
spi::set_config(0, 1 << 3, 8, 64, (7 - dac_sel) << 1)?;
spi::set_config(0, (half_duplex as u8) << 7 | (end as u8) << 1, 8, 64, 0b0001)?;
Ok(())
}
fn write(dac_sel: u8, reg: u8, val: u8) -> Result<(), &'static str> {
spi_setup(dac_sel, false, false)?;
spi::write(0, (reg as u32) << 24)?;
spi_setup(dac_sel, false, true)?;
spi::write(0, (val as u32) << 24)?;
Ok(())
}
fn read(dac_sel: u8, reg: u8) -> Result<u8, &'static str> {
spi_setup(dac_sel, false, false)?;
spi::write(0, ((reg | 1 << 7) as u32) << 24)?;
spi_setup(dac_sel, true, true)?;
spi::write(0, 0)?;
Ok(spi::read(0)? as u8)
}
pub fn init() -> Result<(), &'static str> {
hard_reset();
for channel in 0..8 {
let reg = read(channel, VERSION_REG)?;
if reg != 0x0A {
debug!("DAC AD9117 Channel {} has incorrect hardware version. VERSION reg: {:02x}", channel, reg);
return Err("DAC AD9117 hardware version is not equal to 0x0A");
}
// Check for the presence of DCLKIO and CLKIN
let reg = read(channel, CLKMODE_REG)?;
if reg >> 4 & 1 != 0 {
debug!("DAC AD9117 Channel {} retiming fails. CLKMODE reg: {:02x}", channel, reg);
return Err("DAC AD9117 retiming failure");
}
// Force RETIMER-CLK to be Phase 1 as DCLKIO and CLKIN are known to be safe at Phase 1
// See Issue #2200
write(channel, CLKMODE_REG, RETIMER_CLK_PHASE << 6 | 1 << 2 | RETIMER_CLK_PHASE)?;
// Set the DACs input data format to be twos complement
// Set IFIRST and IRISING to True
write(channel, DATA_CTRL_REG, 1 << 7 | 1 << 5 | 1 << 4)?;
// Enable internal common mode resistors of both channels
write(channel, IRCML_REG, 1 << 7)?;
write(channel, QRCML_REG, 1 << 7)?;
}
Ok(())
}

View File

@ -1,219 +0,0 @@
use board_misoc::{csr, clock, config};
#[cfg(feature = "alloc")]
use alloc::format;
struct SerdesConfig {
pub delay: [u8; 4],
}
impl SerdesConfig {
pub fn as_bytes(&self) -> &[u8] {
unsafe {
core::slice::from_raw_parts(
(self as *const SerdesConfig) as *const u8,
core::mem::size_of::<SerdesConfig>(),
)
}
}
}
fn select_lane(lane_no: u8) {
unsafe {
csr::eem_transceiver::lane_sel_write(lane_no);
}
}
fn apply_delay(tap: u8) {
unsafe {
csr::eem_transceiver::dly_cnt_in_write(tap);
csr::eem_transceiver::dly_ld_write(1);
clock::spin_us(1);
assert!(tap as u8 == csr::eem_transceiver::dly_cnt_out_read());
}
}
fn apply_config(config: &SerdesConfig) {
for lane_no in 0..4 {
select_lane(lane_no as u8);
apply_delay(config.delay[lane_no]);
}
}
unsafe fn assign_delay() -> SerdesConfig {
// Select an appropriate delay for lane 0
select_lane(0);
let read_align = |dly: u8| -> f32 {
apply_delay(dly);
csr::eem_transceiver::counter_reset_write(1);
csr::eem_transceiver::counter_enable_write(1);
clock::spin_us(2000);
csr::eem_transceiver::counter_enable_write(0);
let (high, low) = (
csr::eem_transceiver::counter_high_count_read(),
csr::eem_transceiver::counter_low_count_read(),
);
if csr::eem_transceiver::counter_overflow_read() == 1 {
panic!("Unexpected phase detector counter overflow");
}
low as f32 / (low + high) as f32
};
let mut best_dly = None;
loop {
let mut prev = None;
for curr_dly in 0..32 {
let curr_low_rate = read_align(curr_dly);
if let Some(prev_low_rate) = prev {
// This is potentially a crossover position
if prev_low_rate <= curr_low_rate && curr_low_rate >= 0.5 {
let prev_dev = 0.5 - prev_low_rate;
let curr_dev = curr_low_rate - 0.5;
let selected_idx = if prev_dev < curr_dev {
curr_dly - 1
} else {
curr_dly
};
// The setup/hold calibration timing (even with
// tolerance) might be invalid in other lanes due to skew.
// 5 taps is very conservative, generally it is 1 or 2
if selected_idx < 5 {
prev = None;
continue;
} else {
best_dly = Some(selected_idx);
break;
}
}
}
// Only a rising slope from <= 0.5 can result in a rising low-rate
// crossover at 50%.
if curr_low_rate <= 0.5 {
prev = Some(curr_low_rate);
}
}
if best_dly.is_none() {
error!("setup/hold timing calibration failed, retry in 1s...");
clock::spin_us(1_000_000);
} else {
break;
}
}
let best_dly = best_dly.unwrap();
apply_delay(best_dly);
let mut delay_list = [best_dly; 4];
// Assign delay for other lanes
for lane_no in 1..=3 {
select_lane(lane_no as u8);
let mut min_deviation = 0.5;
let mut min_idx = 0;
for dly_delta in -3..=3 {
let index = (best_dly as isize + dly_delta) as u8;
let low_rate = read_align(index);
// abs() from f32 is not available in core library
let deviation = if low_rate < 0.5 {
0.5 - low_rate
} else {
low_rate - 0.5
};
if deviation < min_deviation {
min_deviation = deviation;
min_idx = index;
}
}
apply_delay(min_idx);
delay_list[lane_no] = min_idx;
}
debug!("setup/hold timing calibration: {:?}", delay_list);
SerdesConfig {
delay: delay_list,
}
}
unsafe fn align_comma() {
loop {
for slip in 1..=10 {
// The soft transceiver has 2 8b10b decoders, which receives lane
// 0/1 and lane 2/3 respectively. The decoders are time-multiplexed
// to decode exactly 1 lane each sysclk cycle.
//
// The decoder decodes lane 0/2 data on odd sysclk cycles, buffer
// on even cycles, and vice versa for lane 1/3. Data/Clock latency
// could change timing. The extend bit flips the decoding timing,
// so lane 0/2 data are decoded on even cycles, and lane 1/3 data
// are decoded on odd cycles.
//
// This is needed because transmitting/receiving a 8b10b character
// takes 2 sysclk cycles. Adjusting bitslip only via ISERDES
// limits the range to 1 cycle. The wordslip bit extends the range
// to 2 sysclk cycles.
csr::eem_transceiver::wordslip_write((slip > 5) as u8);
// Apply a double bitslip since the ISERDES is 2x oversampled.
// Bitslip is used for comma alignment purposes once setup/hold
// timing is met.
csr::eem_transceiver::bitslip_write(1);
csr::eem_transceiver::bitslip_write(1);
clock::spin_us(1);
csr::eem_transceiver::comma_align_reset_write(1);
clock::spin_us(100);
if csr::eem_transceiver::comma_read() == 1 {
debug!("comma alignment completed after {} bitslips", slip);
return;
}
}
error!("comma alignment failed, retrying in 1s...");
clock::spin_us(1_000_000);
}
}
pub fn init() {
for trx_no in 0..csr::CONFIG_EEM_DRTIO_COUNT {
unsafe {
csr::eem_transceiver::transceiver_sel_write(trx_no as u8);
}
let key = format!("eem_drtio_delay{}", trx_no);
config::read(&key, |r| {
match r {
Ok(record) => {
info!("loading calibrated timing values from flash");
unsafe {
apply_config(&*(record.as_ptr() as *const SerdesConfig));
}
},
Err(_) => {
info!("calibrating...");
let config = unsafe { assign_delay() };
config::write(&key, config.as_bytes()).unwrap();
}
}
});
unsafe {
align_comma();
csr::eem_transceiver::rx_ready_write(1);
}
}
}

View File

@ -12,8 +12,6 @@ extern crate log;
extern crate io;
extern crate board_misoc;
extern crate proto_artiq;
#[cfg(feature = "alloc")]
extern crate alloc;
pub mod spi;
@ -31,9 +29,3 @@ pub mod grabber;
#[cfg(has_drtio)]
pub mod drtioaux;
pub mod drtio_routing;
#[cfg(all(has_drtio_eem, feature = "alloc"))]
pub mod drtio_eem;
#[cfg(soc_platform = "efc")]
pub mod ad9117;

View File

@ -28,7 +28,7 @@ pub struct FrequencySettings {
pub n31: u32,
pub n32: u32,
pub bwsel: u8,
pub crystal_as_ckin2: bool
pub crystal_ref: bool
}
pub enum Input {
@ -83,7 +83,7 @@ fn map_frequency_settings(settings: &FrequencySettings) -> Result<FrequencySetti
n31: settings.n31 - 1,
n32: settings.n32 - 1,
bwsel: settings.bwsel,
crystal_as_ckin2: settings.crystal_as_ckin2
crystal_ref: settings.crystal_ref
};
Ok(r)
}
@ -210,14 +210,13 @@ pub fn bypass(input: Input) -> Result<()> {
pub fn setup(settings: &FrequencySettings, input: Input) -> Result<()> {
let s = map_frequency_settings(settings)?;
let cksel_reg = match input {
Input::Ckin1 => 0b00,
Input::Ckin2 => 0b01,
};
init()?;
if settings.crystal_as_ckin2 {
if settings.crystal_ref {
write(0, read(0)? | 0x40)?; // FREE_RUN=1
}
write(2, (read(2)? & 0x0f) | (s.bwsel << 4))?;

View File

@ -2,11 +2,9 @@
mod imp {
use board_misoc::csr;
const INVALID_BUS: &'static str = "Invalid SPI bus";
pub fn set_config(busno: u8, flags: u8, length: u8, div: u8, cs: u8) -> Result<(), &'static str> {
pub fn set_config(busno: u8, flags: u8, length: u8, div: u8, cs: u8) -> Result<(), ()> {
if busno != 0 {
return Err(INVALID_BUS)
return Err(())
}
unsafe {
while csr::converter_spi::writable_read() == 0 {}
@ -33,9 +31,9 @@ mod imp {
Ok(())
}
pub fn write(busno: u8, data: u32) -> Result<(), &'static str> {
pub fn write(busno: u8, data: u32) -> Result<(), ()> {
if busno != 0 {
return Err(INVALID_BUS)
return Err(())
}
unsafe {
while csr::converter_spi::writable_read() == 0 {}
@ -44,9 +42,9 @@ mod imp {
Ok(())
}
pub fn read(busno: u8) -> Result<u32, &'static str> {
pub fn read(busno: u8) -> Result<u32, ()> {
if busno != 0 {
return Err(INVALID_BUS)
return Err(())
}
Ok(unsafe {
while csr::converter_spi::writable_read() == 0 {}

View File

@ -15,7 +15,7 @@ build_misoc = { path = "../libbuild_misoc" }
[dependencies]
byteorder = { version = "1.0", default-features = false }
log = { version = "0.4", default-features = false, optional = true }
smoltcp = { version = "0.8.2", default-features = false, optional = true }
smoltcp = { version = "0.8.0", default-features = false, optional = true }
riscv = { version = "0.6.0", features = ["inline-asm"] }
[features]

View File

@ -163,7 +163,7 @@ mod imp {
while let Some(result) = iter.next() {
let (record_key, record_value) = result?;
if key.as_bytes() == record_key {
found = !record_value.is_empty();
found = true;
// last write wins
value = record_value
}

View File

@ -1,14 +1,5 @@
use csr;
use i2c;
// Only the bare minimum registers. Bits/IO connections equivalent between IC types.
struct Registers {
// PCA9539 equivalent register names in comments
iodira: u8, // Configuration Port 0
iodirb: u8, // Configuration Port 1
gpioa: u8, // Output Port 0
gpiob: u8, // Output Port 1
}
use csr;
pub struct IoExpander {
busno: u8,
@ -18,17 +9,15 @@ pub struct IoExpander {
iodir: [u8; 2],
out_current: [u8; 2],
out_target: [u8; 2],
registers: Registers,
}
impl IoExpander {
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
pub fn new(index: u8) -> Result<Self, &'static str> {
pub fn new(index: u8) -> Self {
const VIRTUAL_LED_MAPPING0: [(u8, u8, u8); 2] = [(0, 0, 6), (1, 1, 6)];
const VIRTUAL_LED_MAPPING1: [(u8, u8, u8); 2] = [(2, 0, 6), (3, 1, 6)];
// Both expanders on SHARED I2C bus
let mut io_expander = match index {
match index {
0 => IoExpander {
busno: 0,
port: 11,
@ -37,12 +26,6 @@ impl IoExpander {
iodir: [0xff; 2],
out_current: [0; 2],
out_target: [0; 2],
registers: Registers {
iodira: 0x00,
iodirb: 0x01,
gpioa: 0x12,
gpiob: 0x13,
},
},
1 => IoExpander {
busno: 0,
@ -52,58 +35,9 @@ impl IoExpander {
iodir: [0xff; 2],
out_current: [0; 2],
out_target: [0; 2],
registers: Registers {
iodira: 0x00,
iodirb: 0x01,
gpioa: 0x12,
gpiob: 0x13,
},
},
_ => return Err("incorrect I/O expander index"),
};
if !io_expander.check_ack()? {
#[cfg(feature = "log")]
log::info!(
"MCP23017 io expander {} not found. Checking for PCA9539.",
index
);
io_expander.address += 0xa8; // translate to PCA9539 addresses (see schematic)
io_expander.registers = Registers {
iodira: 0x06,
iodirb: 0x07,
gpioa: 0x02,
gpiob: 0x03,
};
if !io_expander.check_ack()? {
return Err("Neither MCP23017 nor PCA9539 io expander found.");
};
_ => panic!("incorrect I/O expander index"),
}
Ok(io_expander)
}
#[cfg(soc_platform = "efc")]
pub fn new() -> Result<Self, &'static str> {
const VIRTUAL_LED_MAPPING: [(u8, u8, u8); 2] = [(0, 0, 5), (1, 0, 6)];
let io_expander = IoExpander {
busno: 0,
port: 1,
address: 0x40,
virtual_led_mapping: &VIRTUAL_LED_MAPPING,
iodir: [0xff; 2],
out_current: [0; 2],
out_target: [0; 2],
registers: Registers {
iodira: 0x00,
iodirb: 0x01,
gpioa: 0x12,
gpiob: 0x13,
},
};
if !io_expander.check_ack()? {
return Err("MCP23017 io expander not found.");
};
Ok(io_expander)
}
#[cfg(soc_platform = "kasli")]
@ -114,13 +48,6 @@ impl IoExpander {
Ok(())
}
#[cfg(soc_platform = "efc")]
fn select(&self) -> Result<(), &'static str> {
let mask: u16 = 1 << self.port;
i2c::switch_select(self.busno, 0x70, mask as u8)?;
Ok(())
}
fn write(&self, addr: u8, value: u8) -> Result<(), &'static str> {
i2c::start(self.busno)?;
i2c::write(self.busno, self.address)?;
@ -130,18 +57,9 @@ impl IoExpander {
Ok(())
}
fn check_ack(&self) -> Result<bool, &'static str> {
// Check for ack from io expander
self.select()?;
i2c::start(self.busno)?;
let ack = i2c::write(self.busno, self.address)?;
i2c::stop(self.busno)?;
Ok(ack)
}
fn update_iodir(&self) -> Result<(), &'static str> {
self.write(self.registers.iodira, self.iodir[0])?;
self.write(self.registers.iodirb, self.iodir[1])?;
self.write(0x00, self.iodir[0])?;
self.write(0x01, self.iodir[1])?;
Ok(())
}
@ -154,9 +72,9 @@ impl IoExpander {
self.update_iodir()?;
self.out_current[0] = 0x00;
self.write(self.registers.gpioa, 0x00)?;
self.write(0x12, 0x00)?;
self.out_current[1] = 0x00;
self.write(self.registers.gpiob, 0x00)?;
self.write(0x13, 0x00)?;
Ok(())
}
@ -176,18 +94,20 @@ impl IoExpander {
pub fn service(&mut self) -> Result<(), &'static str> {
for (led, port, bit) in self.virtual_led_mapping.iter() {
let level = unsafe { (csr::virtual_leds::status_read() >> led) & 1 };
let level = unsafe {
(csr::virtual_leds::status_read() >> led) & 1
};
self.set(*port, *bit, level != 0);
}
if self.out_target != self.out_current {
self.select()?;
if self.out_target[0] != self.out_current[0] {
self.write(self.registers.gpioa, self.out_target[0])?;
self.write(0x12, self.out_target[0])?;
self.out_current[0] = self.out_target[0];
}
if self.out_target[1] != self.out_current[1] {
self.write(self.registers.gpiob, self.out_target[1])?;
self.write(0x13, self.out_target[1])?;
self.out_current[1] = self.out_target[1];
}
}

View File

@ -40,7 +40,7 @@ pub mod ethmac;
pub mod i2c;
#[cfg(soc_platform = "kasli")]
pub mod i2c_eeprom;
#[cfg(any(all(soc_platform = "kasli", hw_rev = "v2.0"), soc_platform = "efc"))]
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
pub mod io_expander;
#[cfg(all(has_ethmac, feature = "smoltcp"))]
pub mod net_settings;

View File

@ -2,7 +2,7 @@ use core::fmt;
use core::fmt::{Display, Formatter};
use core::str::FromStr;
use smoltcp::wire::{EthernetAddress, IpAddress, IpCidr, Ipv4Address, Ipv4Cidr, Ipv6Address, Ipv6Cidr};
use smoltcp::wire::{EthernetAddress, IpAddress, Ipv4Address};
use config;
#[cfg(soc_platform = "kasli")]
@ -10,22 +10,18 @@ use i2c_eeprom;
pub enum Ipv4AddrConfig {
UseDhcp,
Static(Ipv4Cidr),
Static(Ipv4Address),
}
impl FromStr for Ipv4AddrConfig {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s == "use_dhcp" {
Ok(Ipv4AddrConfig::UseDhcp)
} else if let Ok(cidr) = Ipv4Cidr::from_str(s) {
Ok(Ipv4AddrConfig::Static(cidr))
} else if let Ok(addr) = Ipv4Address::from_str(s) {
Ok(Ipv4AddrConfig::Static(Ipv4Cidr::new(addr, 0)))
Ok(if s == "use_dhcp" {
Ipv4AddrConfig::UseDhcp
} else {
Err(())
}
Ipv4AddrConfig::Static(Ipv4Address::from_str(s)?)
})
}
}
@ -42,10 +38,8 @@ impl Display for Ipv4AddrConfig {
pub struct NetAddresses {
pub hardware_addr: EthernetAddress,
pub ipv4_addr: Ipv4AddrConfig,
pub ipv6_ll_addr: IpCidr,
pub ipv6_addr: Option<Ipv6Cidr>,
pub ipv4_default_route: Option<Ipv4Address>,
pub ipv6_default_route: Option<Ipv6Address>,
pub ipv6_ll_addr: IpAddress,
pub ipv6_addr: Option<IpAddress>
}
impl fmt::Display for NetAddresses {
@ -78,39 +72,28 @@ pub fn get_adresses() -> NetAddresses {
}
}
let ipv4_addr = match config::read_str("ip", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => addr,
_ => Ipv4AddrConfig::UseDhcp,
};
let ipv4_addr;
match config::read_str("ip", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => ipv4_addr = addr,
_ => ipv4_addr = Ipv4AddrConfig::UseDhcp,
}
let ipv4_default_route = match config::read_str("ipv4_default_route", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => Some(addr),
_ => None,
};
let ipv6_ll_addr = IpCidr::new(IpAddress::v6(
let ipv6_ll_addr = IpAddress::v6(
0xfe80, 0x0000, 0x0000, 0x0000,
(((hardware_addr.0[0] ^ 0x02) as u16) << 8) | (hardware_addr.0[1] as u16),
((hardware_addr.0[2] as u16) << 8) | 0x00ff,
0xfe00 | (hardware_addr.0[3] as u16),
((hardware_addr.0[4] as u16) << 8) | (hardware_addr.0[5] as u16)), 10);
((hardware_addr.0[4] as u16) << 8) | (hardware_addr.0[5] as u16));
let ipv6_addr = match config::read_str("ip6", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => Some(addr),
_ => None
};
let ipv6_default_route = match config::read_str("ipv6_default_route", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => Some(addr),
_ => None,
};
NetAddresses {
hardware_addr,
ipv4_addr,
ipv6_ll_addr,
ipv6_addr,
ipv4_default_route,
ipv6_default_route,
hardware_addr: hardware_addr,
ipv4_addr: ipv4_addr,
ipv6_ll_addr: ipv6_ll_addr,
ipv6_addr: ipv6_addr
}
}

View File

@ -9,8 +9,6 @@ const PMP_W : usize = 0b00000010;
const PMP_R : usize = 0b00000001;
const PMP_OFF : usize = 0b00000000;
pub const STACK_GUARD_SIZE: usize = 0x1000;
#[inline(always)]
pub unsafe fn init_stack_guard(guard_base: usize) {
pmpaddr2::write((guard_base >> 2) | ((0x1000 - 1) >> 3));

View File

@ -129,6 +129,8 @@ _abs_start:
restores caller saved registers and then returns.
*/
.global _start_trap
/* Make it .weak so PAC/HAL can provide their own if needed. */
.weak _start_trap
_start_trap:
addi sp, sp, -16*REGBYTES

View File

@ -8,6 +8,9 @@ mod ddr {
DFII_COMMAND_WRDATA, DFII_COMMAND_RDDATA};
use sdram_phy::{DFII_NPHASES, DFII_PIX_DATA_SIZE, DFII_PIX_WRDATA_ADDR, DFII_PIX_RDDATA_ADDR};
#[cfg(kusddrphy)]
const DDRPHY_MAX_DELAY: u16 = 512;
#[cfg(not(kusddrphy))]
const DDRPHY_MAX_DELAY: u16 = 32;
const DQS_SIGNAL_COUNT: usize = DFII_PIX_DATA_SIZE / 2;
@ -32,12 +35,17 @@ mod ddr {
#[cfg(ddrphy_wlevel)]
unsafe fn write_level_scan(logger: &mut Option<&mut dyn fmt::Write>) {
#[cfg(kusddrphy)]
log!(logger, "DQS initial delay: {} taps\n", ddrphy::wdly_dqs_taps_read());
log!(logger, "Write leveling scan:\n");
enable_write_leveling(true);
spin_cycles(100);
#[cfg(not(kusddrphy))]
let ddrphy_max_delay : u16 = DDRPHY_MAX_DELAY;
#[cfg(kusddrphy)]
let ddrphy_max_delay : u16 = DDRPHY_MAX_DELAY - ddrphy::wdly_dqs_taps_read();
for n in 0..DQS_SIGNAL_COUNT {
let dq_addr = dfii::PI0_RDDATA_ADDR
@ -49,6 +57,10 @@ mod ddr {
ddrphy::wdly_dq_rst_write(1);
ddrphy::wdly_dqs_rst_write(1);
#[cfg(kusddrphy)]
for _ in 0..ddrphy::wdly_dqs_taps_read() {
ddrphy::wdly_dqs_inc_write(1);
}
let mut dq;
for _ in 0..ddrphy_max_delay {
@ -76,12 +88,17 @@ mod ddr {
unsafe fn write_level(logger: &mut Option<&mut dyn fmt::Write>,
delay: &mut [u16; DQS_SIGNAL_COUNT],
high_skew: &mut [bool; DQS_SIGNAL_COUNT]) -> bool {
#[cfg(kusddrphy)]
log!(logger, "DQS initial delay: {} taps\n", ddrphy::wdly_dqs_taps_read());
log!(logger, "Write leveling: ");
enable_write_leveling(true);
spin_cycles(100);
#[cfg(not(kusddrphy))]
let ddrphy_max_delay : u16 = DDRPHY_MAX_DELAY;
#[cfg(kusddrphy)]
let ddrphy_max_delay : u16 = DDRPHY_MAX_DELAY - ddrphy::wdly_dqs_taps_read();
let mut failed = false;
for n in 0..DQS_SIGNAL_COUNT {
@ -95,6 +112,10 @@ mod ddr {
ddrphy::wdly_dq_rst_write(1);
ddrphy::wdly_dqs_rst_write(1);
#[cfg(kusddrphy)]
for _ in 0..ddrphy::wdly_dqs_taps_read() {
ddrphy::wdly_dqs_inc_write(1);
}
ddrphy::wlevel_strobe_write(1);
spin_cycles(10);
@ -125,6 +146,11 @@ mod ddr {
dq = ptr::read_volatile(dq_addr);
}
// Get a bit further into the 0 zone
#[cfg(kusddrphy)]
for _ in 0..32 {
incr_delay();
}
}
while dq == 0 {
@ -165,6 +191,9 @@ mod ddr {
if delay[n] > threshold {
ddrphy::dly_sel_write(1 << n);
#[cfg(kusddrphy)]
ddrphy::rdly_dq_bitslip_write(1);
#[cfg(not(kusddrphy))]
for _ in 0..3 {
ddrphy::rdly_dq_bitslip_write(1);
}
@ -217,7 +246,7 @@ mod ddr {
ddrphy::dly_sel_write(1 << (DQS_SIGNAL_COUNT - n - 1));
ddrphy::rdly_dq_rst_write(1);
#[cfg(any(soc_platform = "kasli", soc_platform = "efc"))]
#[cfg(soc_platform = "kasli")]
{
for _ in 0..3 {
ddrphy::rdly_dq_bitslip_write(1);
@ -308,7 +337,7 @@ mod ddr {
let mut max_seen_valid = 0;
ddrphy::rdly_dq_rst_write(1);
#[cfg(any(soc_platform = "kasli", soc_platform = "efc"))]
#[cfg(soc_platform = "kasli")]
{
for _ in 0..3 {
ddrphy::rdly_dq_bitslip_write(1);
@ -371,7 +400,7 @@ mod ddr {
// Set delay to the middle
ddrphy::rdly_dq_rst_write(1);
#[cfg(any(soc_platform = "kasli", soc_platform = "efc"))]
#[cfg(soc_platform = "kasli")]
{
for _ in 0..3 {
ddrphy::rdly_dq_bitslip_write(1);
@ -422,9 +451,13 @@ pub unsafe fn init(mut _logger: Option<&mut dyn fmt::Write>) -> bool {
#[cfg(has_ddrphy)]
{
#[cfg(kusddrphy)]
csr::ddrphy::en_vtc_write(0);
if !ddr::level(&mut _logger) {
return false
}
#[cfg(kusddrphy)]
csr::ddrphy::en_vtc_write(1);
}
csr::dfii::control_write(sdram_phy::DFII_CONTROL_SEL);

View File

@ -5,7 +5,7 @@ use elf::*;
pub mod elf;
pub fn read_unaligned<T: Copy>(data: &[u8], offset: usize) -> Result<T, ()> {
fn read_unaligned<T: Copy>(data: &[u8], offset: usize) -> Result<T, ()> {
if data.len() < offset + mem::size_of::<T>() {
Err(())
} else {
@ -75,25 +75,6 @@ impl<'a> fmt::Display for Error<'a> {
}
}
pub fn is_elf_for_current_arch(ehdr: &Elf32_Ehdr, e_type: u16) -> bool {
const IDENT: [u8; EI_NIDENT] = [
ELFMAG0, ELFMAG1, ELFMAG2, ELFMAG3,
ELFCLASS32, ELFDATA2LSB, EV_CURRENT, ELFOSABI_NONE,
/* ABI version */ 0, /* padding */ 0, 0, 0, 0, 0, 0, 0
];
if ehdr.e_ident != IDENT { return false; }
if ehdr.e_type != e_type { return false; }
#[cfg(target_arch = "riscv32")]
const ARCH: u16 = EM_RISCV;
#[cfg(not(target_arch = "riscv32"))]
const ARCH: u16 = EM_NONE;
if ehdr.e_machine != ARCH { return false; }
true
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Arch {
RiscV,
@ -287,16 +268,25 @@ impl<'a> Library<'a> {
let ehdr = read_unaligned::<Elf32_Ehdr>(data, 0)
.map_err(|()| "cannot read ELF header")?;
if !is_elf_for_current_arch(&ehdr, ET_DYN) {
return Err("not a shared library for current architecture")?
}
const IDENT: [u8; EI_NIDENT] = [
ELFMAG0, ELFMAG1, ELFMAG2, ELFMAG3,
ELFCLASS32, ELFDATA2LSB, EV_CURRENT, ELFOSABI_NONE,
/* ABI version */ 0, /* padding */ 0, 0, 0, 0, 0, 0, 0
];
#[cfg(target_arch = "riscv32")]
const ARCH: u16 = EM_RISCV;
#[cfg(not(target_arch = "riscv32"))]
const ARCH: u16 = EM_NONE;
#[cfg(all(target_feature = "f", target_feature = "d"))]
const FLAGS: u32 = EF_RISCV_FLOAT_ABI_DOUBLE;
#[cfg(not(all(target_feature = "f", target_feature = "d")))]
const FLAGS: u32 = EF_RISCV_FLOAT_ABI_SOFT;
if ehdr.e_flags != FLAGS {
return Err("unexpected flags for shared library (wrong floating point ABI?)")?
if ehdr.e_ident != IDENT || ehdr.e_type != ET_DYN || ehdr.e_machine != ARCH || ehdr.e_flags != FLAGS {
return Err("not a shared library for current architecture")?
}
let mut dyn_off = None;

View File

@ -14,52 +14,6 @@ impl<T> From<IoError<T>> for Error<T> {
}
}
// maximum size of arbitrary payloads
// used by satellite -> master analyzer, subkernel exceptions
pub const SAT_PAYLOAD_MAX_SIZE: usize = /*max size*/512 - /*CRC*/4 - /*packet ID*/1 - /*last*/1 - /*length*/2;
// used by DDMA, subkernel program data (need to provide extra ID and destination)
pub const MASTER_PAYLOAD_MAX_SIZE: usize = SAT_PAYLOAD_MAX_SIZE - /*destination*/1 - /*ID*/4;
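// Worked values (illustrative note, following the arithmetic above):
// SAT_PAYLOAD_MAX_SIZE = 512 - 4 - 1 - 1 - 2 = 504 bytes,
// MASTER_PAYLOAD_MAX_SIZE = 504 - 1 - 4 = 499 bytes.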
#[derive(PartialEq, Clone, Copy, Debug)]
#[repr(u8)]
pub enum PayloadStatus {
Middle = 0,
First = 1,
Last = 2,
FirstAndLast = 3,
}
impl From<u8> for PayloadStatus {
fn from(value: u8) -> PayloadStatus {
match value {
0 => PayloadStatus::Middle,
1 => PayloadStatus::First,
2 => PayloadStatus::Last,
3 => PayloadStatus::FirstAndLast,
_ => unreachable!(),
}
}
}
impl PayloadStatus {
pub fn is_first(self) -> bool {
self == PayloadStatus::First || self == PayloadStatus::FirstAndLast
}
pub fn is_last(self) -> bool {
self == PayloadStatus::Last || self == PayloadStatus::FirstAndLast
}
pub fn from_status(first: bool, last: bool) -> PayloadStatus {
match (first, last) {
(true, true) => PayloadStatus::FirstAndLast,
(true, false) => PayloadStatus::First,
(false, true) => PayloadStatus::Last,
(false, false) => PayloadStatus::Middle
}
}
}
#[derive(PartialEq, Debug)]
pub enum Packet {
EchoRequest,
@ -101,28 +55,8 @@ pub enum Packet {
SpiReadReply { succeeded: bool, data: u32 },
SpiBasicReply { succeeded: bool },
AnalyzerHeaderRequest { destination: u8 },
AnalyzerHeader { sent_bytes: u32, total_byte_count: u64, overflow_occurred: bool },
AnalyzerDataRequest { destination: u8 },
AnalyzerData { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE]},
DmaAddTraceRequest { destination: u8, id: u32, status: PayloadStatus, length: u16, trace: [u8; MASTER_PAYLOAD_MAX_SIZE] },
DmaAddTraceReply { succeeded: bool },
DmaRemoveTraceRequest { destination: u8, id: u32 },
DmaRemoveTraceReply { succeeded: bool },
DmaPlaybackRequest { destination: u8, id: u32, timestamp: u64 },
DmaPlaybackReply { succeeded: bool },
DmaPlaybackStatus { destination: u8, id: u32, error: u8, channel: u32, timestamp: u64 },
SubkernelAddDataRequest { destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelAddDataReply { succeeded: bool },
SubkernelLoadRunRequest { destination: u8, id: u32, run: bool },
SubkernelLoadRunReply { succeeded: bool },
SubkernelFinished { id: u32, with_exception: bool },
SubkernelExceptionRequest { destination: u8 },
SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
SubkernelMessage { destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelMessageAck { destination: u8 },
JdacBasicRequest { destination: u8, dacno: u8, reqno: u8, param: u8 },
JdacBasicReply { succeeded: bool, retval: u8 },
}
impl Packet {
@ -254,131 +188,15 @@ impl Packet {
succeeded: reader.read_bool()?
},
0xa0 => Packet::AnalyzerHeaderRequest {
destination: reader.read_u8()?
},
0xa1 => Packet::AnalyzerHeader {
sent_bytes: reader.read_u32()?,
total_byte_count: reader.read_u64()?,
overflow_occurred: reader.read_bool()?,
},
0xa2 => Packet::AnalyzerDataRequest {
destination: reader.read_u8()?
},
0xa3 => {
let last = reader.read_bool()?;
let length = reader.read_u16()?;
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::AnalyzerData {
last: last,
length: length,
data: data
}
},
0xb0 => {
let destination = reader.read_u8()?;
let id = reader.read_u32()?;
let status = reader.read_u8()?;
let length = reader.read_u16()?;
let mut trace: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut trace[0..length as usize])?;
Packet::DmaAddTraceRequest {
destination: destination,
id: id,
status: PayloadStatus::from(status),
length: length as u16,
trace: trace,
}
},
0xb1 => Packet::DmaAddTraceReply {
succeeded: reader.read_bool()?
},
0xb2 => Packet::DmaRemoveTraceRequest {
0xa0 => Packet::JdacBasicRequest {
destination: reader.read_u8()?,
id: reader.read_u32()?
dacno: reader.read_u8()?,
reqno: reader.read_u8()?,
param: reader.read_u8()?,
},
0xb3 => Packet::DmaRemoveTraceReply {
succeeded: reader.read_bool()?
},
0xb4 => Packet::DmaPlaybackRequest {
destination: reader.read_u8()?,
id: reader.read_u32()?,
timestamp: reader.read_u64()?
},
0xb5 => Packet::DmaPlaybackReply {
succeeded: reader.read_bool()?
},
0xb6 => Packet::DmaPlaybackStatus {
destination: reader.read_u8()?,
id: reader.read_u32()?,
error: reader.read_u8()?,
channel: reader.read_u32()?,
timestamp: reader.read_u64()?
},
0xc0 => {
let destination = reader.read_u8()?;
let id = reader.read_u32()?;
let status = reader.read_u8()?;
let length = reader.read_u16()?;
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelAddDataRequest {
destination: destination,
id: id,
status: PayloadStatus::from(status),
length: length as u16,
data: data,
}
},
0xc1 => Packet::SubkernelAddDataReply {
succeeded: reader.read_bool()?
},
0xc4 => Packet::SubkernelLoadRunRequest {
destination: reader.read_u8()?,
id: reader.read_u32()?,
run: reader.read_bool()?
},
0xc5 => Packet::SubkernelLoadRunReply {
succeeded: reader.read_bool()?
},
0xc8 => Packet::SubkernelFinished {
id: reader.read_u32()?,
with_exception: reader.read_bool()?,
},
0xc9 => Packet::SubkernelExceptionRequest {
destination: reader.read_u8()?
},
0xca => {
let last = reader.read_bool()?;
let length = reader.read_u16()?;
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelException {
last: last,
length: length,
data: data
}
},
0xcb => {
let destination = reader.read_u8()?;
let id = reader.read_u32()?;
let status = reader.read_u8()?;
let length = reader.read_u16()?;
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelMessage {
destination: destination,
id: id,
status: PayloadStatus::from(status),
length: length as u16,
data: data,
}
},
0xcc => Packet::SubkernelMessageAck {
destination: reader.read_u8()?
0xa1 => Packet::JdacBasicReply {
succeeded: reader.read_bool()?,
retval: reader.read_u8()?
},
ty => return Err(Error::UnknownPacket(ty))
@ -540,117 +358,17 @@ impl Packet {
writer.write_bool(succeeded)?;
},
Packet::AnalyzerHeaderRequest { destination } => {
Packet::JdacBasicRequest { destination, dacno, reqno, param } => {
writer.write_u8(0xa0)?;
writer.write_u8(destination)?;
},
Packet::AnalyzerHeader { sent_bytes, total_byte_count, overflow_occurred } => {
writer.write_u8(dacno)?;
writer.write_u8(reqno)?;
writer.write_u8(param)?;
}
Packet::JdacBasicReply { succeeded, retval } => {
writer.write_u8(0xa1)?;
writer.write_u32(sent_bytes)?;
writer.write_u64(total_byte_count)?;
writer.write_bool(overflow_occurred)?;
},
Packet::AnalyzerDataRequest { destination } => {
writer.write_u8(0xa2)?;
writer.write_u8(destination)?;
},
Packet::AnalyzerData { last, length, data } => {
writer.write_u8(0xa3)?;
writer.write_bool(last)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::DmaAddTraceRequest { destination, id, status, trace, length } => {
writer.write_u8(0xb0)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(status as u8)?;
// trace may be broken down to fit within drtio aux memory limit
// will be reconstructed by satellite
writer.write_u16(length)?;
writer.write_all(&trace[0..length as usize])?;
},
Packet::DmaAddTraceReply { succeeded } => {
writer.write_u8(0xb1)?;
writer.write_bool(succeeded)?;
},
Packet::DmaRemoveTraceRequest { destination, id } => {
writer.write_u8(0xb2)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
},
Packet::DmaRemoveTraceReply { succeeded } => {
writer.write_u8(0xb3)?;
writer.write_bool(succeeded)?;
},
Packet::DmaPlaybackRequest { destination, id, timestamp } => {
writer.write_u8(0xb4)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u64(timestamp)?;
},
Packet::DmaPlaybackReply { succeeded } => {
writer.write_u8(0xb5)?;
writer.write_bool(succeeded)?;
},
Packet::DmaPlaybackStatus { destination, id, error, channel, timestamp } => {
writer.write_u8(0xb6)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(error)?;
writer.write_u32(channel)?;
writer.write_u64(timestamp)?;
},
Packet::SubkernelAddDataRequest { destination, id, status, data, length } => {
writer.write_u8(0xc0)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(status as u8)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelAddDataReply { succeeded } => {
writer.write_u8(0xc1)?;
writer.write_bool(succeeded)?;
},
Packet::SubkernelLoadRunRequest { destination, id, run } => {
writer.write_u8(0xc4)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_bool(run)?;
},
Packet::SubkernelLoadRunReply { succeeded } => {
writer.write_u8(0xc5)?;
writer.write_bool(succeeded)?;
},
Packet::SubkernelFinished { id, with_exception } => {
writer.write_u8(0xc8)?;
writer.write_u32(id)?;
writer.write_bool(with_exception)?;
},
Packet::SubkernelExceptionRequest { destination } => {
writer.write_u8(0xc9)?;
writer.write_u8(destination)?;
},
Packet::SubkernelException { last, length, data } => {
writer.write_u8(0xca)?;
writer.write_bool(last)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelMessage { destination, id, status, data, length } => {
writer.write_u8(0xcb)?;
writer.write_u8(destination)?;
writer.write_u32(id)?;
writer.write_u8(status as u8)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
},
Packet::SubkernelMessageAck { destination } => {
writer.write_u8(0xcc)?;
writer.write_u8(destination)?;
writer.write_u8(retval)?;
},
}
Ok(())

View File

@ -5,19 +5,7 @@ use dyld;
pub const KERNELCPU_EXEC_ADDRESS: usize = 0x45000000;
pub const KERNELCPU_PAYLOAD_ADDRESS: usize = 0x45060000;
pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff;
// Must match the offset of the first (starting at KERNELCPU_EXEC_ADDRESS)
// section in ksupport.elf.
pub const KSUPPORT_HEADER_SIZE: usize = 0x74;
#[derive(Debug)]
pub enum SubkernelStatus {
NoError,
Timeout,
IncorrectState,
CommLost,
OtherError
}
pub const KSUPPORT_HEADER_SIZE: usize = 0x80;
#[derive(Debug)]
pub enum Message<'a> {
@ -32,8 +20,7 @@ pub enum Message<'a> {
DmaRecordStart(&'a str),
DmaRecordAppend(&'a [u8]),
DmaRecordStop {
duration: u64,
enable_ddma: bool
duration: u64
},
DmaEraseRequest {
@ -45,25 +32,9 @@ pub enum Message<'a> {
},
DmaRetrieveReply {
trace: Option<&'a [u8]>,
duration: u64,
uses_ddma: bool,
duration: u64
},
DmaStartRemoteRequest {
id: i32,
timestamp: i64,
},
DmaAwaitRemoteRequest {
id: i32
},
DmaAwaitRemoteReply {
timeout: bool,
error: u8,
channel: u32,
timestamp: u64
},
RunFinished,
RunException {
exceptions: &'a [Option<eh::eh_artiq::Exception<'a>>],
@ -103,14 +74,6 @@ pub enum Message<'a> {
SpiReadReply { succeeded: bool, data: u32 },
SpiBasicReply { succeeded: bool },
SubkernelLoadRunRequest { id: u32, run: bool },
SubkernelLoadRunReply { succeeded: bool },
SubkernelAwaitFinishRequest { id: u32, timeout: u64 },
SubkernelAwaitFinishReply { status: SubkernelStatus },
SubkernelMsgSend { id: u32, count: u8, tag: &'a [u8], data: *const *const () },
SubkernelMsgRecvRequest { id: u32, timeout: u64, tags: &'a [u8] },
SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 },
Log(fmt::Arguments<'a>),
LogSlice(&'a str)
}

View File

@ -6,81 +6,22 @@ use io::{ProtoRead, Read, Write, ProtoWrite, Error};
use self::tag::{Tag, TagIterator, split_tag};
#[inline]
fn round_up(val: usize, power_of_two: usize) -> usize {
assert!(power_of_two.is_power_of_two());
let max_rem = power_of_two - 1;
(val + max_rem) & (!max_rem)
fn alignment_offset(alignment: isize, ptr: isize) -> isize {
(-ptr).rem_euclid(alignment)
}
#[inline]
unsafe fn round_up_mut<T>(ptr: *mut T, power_of_two: usize) -> *mut T {
round_up(ptr as usize, power_of_two) as *mut T
}
#[inline]
unsafe fn round_up_const<T>(ptr: *const T, power_of_two: usize) -> *const T {
round_up(ptr as usize, power_of_two) as *const T
}
#[inline]
unsafe fn align_ptr<T>(ptr: *const ()) -> *const T {
round_up_const(ptr, core::mem::align_of::<T>()) as *const T
let alignment = core::mem::align_of::<T>() as isize;
let fix = alignment_offset(alignment as isize, ptr as isize);
((ptr as isize) + fix) as *const T
}
#[inline]
unsafe fn align_ptr_mut<T>(ptr: *mut ()) -> *mut T {
round_up_mut(ptr, core::mem::align_of::<T>()) as *mut T
let alignment = core::mem::align_of::<T>() as isize;
let fix = alignment_offset(alignment as isize, ptr as isize);
((ptr as isize) + fix) as *mut T
}
/// Reads (deserializes) `length` array or list elements of type `tag` from `reader`,
/// writing them into the buffer given by `storage`.
///
/// `alloc` is used for nested allocations (if elements themselves contain
/// lists/arrays), see [recv_value].
unsafe fn recv_elements<R, E>(
reader: &mut R,
tag: Tag,
length: usize,
storage: *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>,
) -> Result<(), E>
where
R: Read + ?Sized,
E: From<Error<R::ReadError>>,
{
// List of simple types are special-cased in the protocol for performance.
match tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(storage as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(storage as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
let mut data = storage;
for _ in 0..length {
recv_value(reader, tag, &mut data, alloc)?
}
}
}
Ok(())
}
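As the comment above notes, lists of simple element types are special-cased for performance: the whole payload is pulled in with a single read_exact rather than one recv_value call per element. A standalone sketch of that fast path for an i32 list, using std::io instead of the firmware Read trait (illustration only; as the surrounding comments state, the element bytes are assumed to already be in native byte order):

use std::io::{self, Read};

// Sketch of the fast path: read `length` i32 elements with one read_exact by
// viewing the destination Vec as a byte slice. Names are illustrative, not
// taken from the firmware.
fn read_i32_elements<R: Read>(reader: &mut R, length: usize) -> io::Result<Vec<i32>> {
    let mut elements = vec![0i32; length];
    let bytes = unsafe {
        std::slice::from_raw_parts_mut(elements.as_mut_ptr() as *mut u8, length * 4)
    };
    reader.read_exact(bytes)?;
    Ok(elements)
}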
/// Reads (deserializes) a value of type `tag` from `reader`, writing the results to
/// the kernel-side buffer `data` (the passed pointer to which is incremented to point
/// past the just-received data). For nested allocations (lists/arrays), `alloc` is
/// invoked any number of times with the size of the required allocation as a parameter
/// (which is assumed to be correctly aligned for all payload types).
unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>)
-> Result<(), E>
@@ -112,73 +53,105 @@ unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
Tag::String | Tag::Bytes | Tag::ByteArray => {
consume_value!(CMutSlice<u8>, |ptr| {
let length = reader.read_u32()? as usize;
if length > 0 {
*ptr = CMutSlice::new(alloc(length)? as *mut u8, length);
reader.read_exact((*ptr).as_mut())?;
} else {
*ptr = CMutSlice::new(core::ptr::NonNull::<u8>::dangling().as_ptr(), 0);
}
Ok(())
})
}
Tag::Tuple(it, arity) => {
let alignment = tag.alignment();
*data = round_up_mut(*data, alignment);
*data = data.offset(alignment_offset(tag.alignment() as isize, *data as isize));
let mut it = it.clone();
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
recv_value(reader, tag, data, alloc)?
}
// Take into account any tail padding (if element(s) with largest alignment
// are not at the end).
*data = round_up_mut(*data, alignment);
Ok(())
}
Tag::List(it) => {
#[repr(C)]
struct List { elements: *mut (), length: usize }
consume_value!(*mut List, |ptr_to_list| {
struct List { elements: *mut (), length: u32 }
consume_value!(*mut List, |ptr| {
let tag = it.clone().next().expect("truncated tag");
let padding = if let Tag::Int64 | Tag::Float64 = tag { 4 } else { 0 };
let length = reader.read_u32()? as usize;
let data = alloc(tag.size() * length + padding + 8)? as *mut u8;
*ptr = data as *mut List;
let ptr = data as *mut List;
let mut data = data.offset(8 + alignment_offset(tag.alignment() as isize, data as isize)) as *mut ();
// To avoid multiple kernel CPU roundtrips, use a single allocation for
// both the pointer/length List (slice) and the backing storage for the
// elements. We can assume that alloc() is aligned suitably, so just
// need to take into account any extra padding required.
// (Note: On RISC-V, there will never actually be any types with
// alignment larger than 8 bytes, so storage_offset == 0 always.)
let list_size = 4 + 4;
let storage_offset = round_up(list_size, tag.alignment());
let storage_size = tag.size() * length;
let allocation = alloc(storage_offset + storage_size)? as *mut u8;
*ptr_to_list = allocation as *mut List;
let storage = allocation.offset(storage_offset as isize) as *mut ();
(**ptr_to_list).length = length;
(**ptr_to_list).elements = storage;
recv_elements(reader, tag, length, storage, alloc)
(*ptr).length = length as u32;
(*ptr).elements = data;
match tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
for _ in 0..length {
recv_value(reader, tag, &mut data, alloc)?
}
}
}
Ok(())
})
}
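The comment in the List branch above describes the allocation layout: one alloc() call provides both the {elements pointer, length} header and the element storage, with the storage starting at the next multiple of the element alignment after the header. A worked layout computation, illustration only, assuming f64 elements on a 32-bit target (4-byte pointer, 4-byte length):

// Where the element storage lands inside the single allocation described
// above, for a list of five f64 values.
fn round_up(val: usize, power_of_two: usize) -> usize {
    let max_rem = power_of_two - 1;
    (val + max_rem) & !max_rem
}

fn main() {
    let elt_size = 8;   // size of f64
    let elt_align = 8;  // alignment of f64
    let length = 5;

    let list_size = 4 + 4;                               // elements pointer + length
    let storage_offset = round_up(list_size, elt_align); // 8: storage starts right after the header
    let total = storage_offset + elt_size * length;      // one alloc() covers both

    println!("header occupies bytes 0..{list_size}, elements occupy {storage_offset}..{total}");
}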
Tag::Array(it, num_dims) => {
consume_value!(*mut (), |buffer| {
// Deserialize length along each dimension and compute total number of
// elements.
let mut total_len: usize = 1;
let mut total_len: u32 = 1;
for _ in 0..num_dims {
let len = reader.read_u32()? as usize;
let len = reader.read_u32()?;
total_len *= len;
consume_value!(usize, |ptr| *ptr = len )
consume_value!(u32, |ptr| *ptr = len )
}
let length = total_len as usize;
// Allocate backing storage for elements; deserialize them.
let elt_tag = it.clone().next().expect("truncated tag");
*buffer = alloc(elt_tag.size() * total_len)?;
recv_elements(reader, elt_tag, total_len, *buffer, alloc)
let padding = if let Tag::Int64 | Tag::Float64 = tag { 4 } else { 0 };
let mut data = alloc(elt_tag.size() * length + padding)?;
data = data.offset(alignment_offset(tag.alignment() as isize, data as isize));
*buffer = data;
match elt_tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
for _ in 0..length {
recv_value(reader, elt_tag, &mut data, alloc)?
}
}
}
Ok(())
})
}
Tag::Range(it) => {
*data = round_up_mut(*data, tag.alignment());
*data = data.offset(alignment_offset(tag.alignment() as isize, *data as isize));
let tag = it.clone().next().expect("truncated tag");
recv_value(reader, tag, data, alloc)?;
recv_value(reader, tag, data, alloc)?;
@@ -190,9 +163,9 @@ unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
}
}
pub fn recv_return<'a, R, E>(reader: &mut R, tag_bytes: &'a [u8], data: *mut (),
pub fn recv_return<R, E>(reader: &mut R, tag_bytes: &[u8], data: *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>)
-> Result<&'a [u8], E>
-> Result<(), E>
where R: Read + ?Sized,
E: From<Error<R::ReadError>>
{
@@ -204,42 +177,10 @@ pub fn recv_return<'a, R, E>(reader: &mut R, tag_bytes: &'a [u8], data: *mut (),
let mut data = data;
unsafe { recv_value(reader, tag, &mut data, alloc)? };
Ok(it.data)
}
unsafe fn send_elements<W>(writer: &mut W, elt_tag: Tag, length: usize, data: *const (), write_tags: bool)
-> Result<(), Error<W::WriteError>>
where W: Write + ?Sized
{
if write_tags {
writer.write_u8(elt_tag.as_u8())?;
}
match elt_tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
let mut data = data;
for _ in 0..length {
send_value(writer, elt_tag, &mut data, write_tags)?;
}
}
}
Ok(())
}
unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_tags: bool)
unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
-> Result<(), Error<W::WriteError>>
where W: Write + ?Sized
{
@@ -250,9 +191,8 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
$map
})
}
if write_tags {
writer.write_u8(tag.as_u8())?;
}
match tag {
Tag::None => Ok(()),
Tag::Bool =>
@@ -272,16 +212,11 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
writer.write_bytes((*ptr).as_ref())),
Tag::Tuple(it, arity) => {
let mut it = it.clone();
if write_tags {
writer.write_u8(arity)?;
}
let mut max_alignment = 0;
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
max_alignment = core::cmp::max(max_alignment, tag.alignment());
send_value(writer, tag, data, write_tags)?
send_value(writer, tag, data)?
}
*data = round_up_const(*data, max_alignment);
Ok(())
}
Tag::List(it) => {
@@ -291,13 +226,34 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
let length = (**ptr).length as usize;
writer.write_u32((**ptr).length)?;
let tag = it.clone().next().expect("truncated tag");
send_elements(writer, tag, length, (**ptr).elements, write_tags)
let mut data = (**ptr).elements;
writer.write_u8(tag.as_u8())?;
match tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
for _ in 0..length {
send_value(writer, tag, &mut data)?;
}
}
}
Ok(())
})
}
Tag::Array(it, num_dims) => {
if write_tags {
writer.write_u8(num_dims)?;
}
consume_value!(*const(), |buffer| {
let elt_tag = it.clone().next().expect("truncated tag");
@@ -309,14 +265,37 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
})
}
let length = total_len as usize;
send_elements(writer, elt_tag, length, *buffer, write_tags)
let mut data = *buffer;
writer.write_u8(elt_tag.as_u8())?;
match elt_tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
for _ in 0..length {
send_value(writer, elt_tag, &mut data)?;
}
}
}
Ok(())
})
}
Tag::Range(it) => {
let tag = it.clone().next().expect("truncated tag");
send_value(writer, tag, data, write_tags)?;
send_value(writer, tag, data, write_tags)?;
send_value(writer, tag, data, write_tags)?;
send_value(writer, tag, data)?;
send_value(writer, tag, data)?;
send_value(writer, tag, data)?;
Ok(())
}
Tag::Keyword(it) => {
@@ -326,7 +305,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
writer.write_string(str::from_utf8((*ptr).name.as_ref()).unwrap())?;
let tag = it.clone().next().expect("truncated tag");
let mut data = ptr.offset(1) as *const ();
send_value(writer, tag, &mut data, write_tags)
send_value(writer, tag, &mut data)
})
// Tag::Keyword never appears in composite types, so we don't have
// to accurately advance data.
@@ -340,7 +319,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const (), write_ta
}
}
pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const (), write_tags: bool)
pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const *const ())
-> Result<(), Error<W::WriteError>>
where W: Write + ?Sized
{
@@ -357,7 +336,7 @@ pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const
for index in 0.. {
if let Some(arg_tag) = args_it.next() {
let mut data = unsafe { *data.offset(index) };
unsafe { send_value(writer, arg_tag, &mut data, write_tags)? };
unsafe { send_value(writer, arg_tag, &mut data)? };
} else {
break
}
@@ -370,7 +349,7 @@ pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const
mod tag {
use core::fmt;
use super::round_up;
use super::alignment_offset;
pub fn split_tag(tag_bytes: &[u8]) -> (&[u8], &[u8]) {
let tag_separator =
@@ -437,18 +416,16 @@ mod tag {
let it = it.clone();
it.take(3).map(|t| t.alignment()).max().unwrap()
}
// the ptr/length(s) pair is basically CSlice
Tag::Bytes | Tag::String | Tag::ByteArray | Tag::List(_) | Tag::Array(_, _) =>
// CSlice basically
Tag::Bytes | Tag::String | Tag::ByteArray | Tag::List(_) =>
core::mem::align_of::<CSlice<()>>(),
Tag::Keyword(_) => unreachable!("Tag::Keyword should not appear in composite types"),
Tag::Object => core::mem::align_of::<u32>(),
// array buffer is allocated, so no need for alignment first
Tag::Array(_, _) => 1,
// will not be sent from the host
_ => unreachable!("unexpected tag from host")
}
}
/// Returns the "alignment size" of a value with the type described by the tag
/// (in bytes), i.e. the stride between successive elements in a list/array of
/// the given type, or the offset from a struct element of this type to the
/// next field.
pub fn size(self) -> usize {
match self {
Tag::None => 0,
@@ -461,18 +438,12 @@ mod tag {
Tag::ByteArray => 8,
Tag::Tuple(it, arity) => {
let mut size = 0;
let mut max_alignment = 0;
let mut it = it.clone();
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
let alignment = tag.alignment();
max_alignment = core::cmp::max(max_alignment, alignment);
size = round_up(size, alignment);
size += tag.size();
size += alignment_offset(tag.alignment() as isize, size as isize) as usize;
}
// Take into account any tail padding (if element(s) with largest
// alignment are not at the end).
size = round_up(size, max_alignment);
size
}
Tag::List(_) => 8,
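The tuple branch of size() now mirrors C struct layout: each field is preceded by whatever padding its own alignment requires, and the total is rounded up to the largest member alignment so that tuples stride correctly inside lists and arrays. A worked example (illustration only) for a tuple laid out as (i64, i32), where the tail padding is the difference between 12 and 16 bytes:

// Reproduce the size computation above for an (i64, i32) tuple.
fn round_up(val: usize, power_of_two: usize) -> usize {
    let max_rem = power_of_two - 1;
    (val + max_rem) & !max_rem
}

fn main() {
    let fields = [(8usize, 8usize), (4, 4)]; // (size, alignment) of i64, i32
    let mut size = 0;
    let mut max_alignment = 0;
    for (field_size, field_align) in fields {
        max_alignment = max_alignment.max(field_align);
        size = round_up(size, field_align); // leading padding for this field
        size += field_size;
    }
    size = round_up(size, max_alignment);   // tail padding to the largest member
    assert_eq!(size, 16);                   // 8 + 4 data bytes + 4 bytes of tail padding
    println!("tuple (i64, i32) occupies {size} bytes");
}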
@@ -489,7 +460,7 @@
#[derive(Debug, Clone, Copy)]
pub struct TagIterator<'a> {
pub data: &'a [u8]
data: &'a [u8]
}
impl<'a> TagIterator<'a> {


@@ -1,14 +1,10 @@
use core::{str, str::Utf8Error, slice};
use alloc::{vec::Vec, format, collections::BTreeMap, string::String};
use core::str::Utf8Error;
use alloc::vec::Vec;
use eh::eh_artiq::{Exception, StackPointerBacktrace};
use cslice::CSlice;
use io::{Read, ProtoRead, Write, ProtoWrite, Error as IoError, ReadStringError};
pub type DeviceMap = BTreeMap<u32, String>;
static mut RTIO_DEVICE_MAP: Option<DeviceMap> = None;
#[derive(Fail, Debug)]
pub enum Error<T> {
#[fail(display = "incorrect magic")]
@@ -84,8 +80,6 @@ pub enum Request {
column: u32,
function: u32,
},
UploadSubkernel { id: u32, destination: u8, kernel: Vec<u8> },
}
#[derive(Debug)]
@@ -139,11 +133,6 @@ impl Request {
column: reader.read_u32()?,
function: reader.read_u32()?
},
9 => Request::UploadSubkernel {
id: reader.read_u32()?,
destination: reader.read_u8()?,
kernel: reader.read_bytes()?
},
ty => return Err(Error::UnknownPacket(ty))
})
@@ -201,14 +190,7 @@ impl<'a> Reply<'a> {
for exception in exceptions.iter() {
let exception = exception.as_ref().unwrap();
writer.write_u32(exception.id as u32)?;
if exception.message.len() == usize::MAX {
// exception with host string
write_exception_string(writer, &exception.message)?;
} else {
let msg = str::from_utf8(unsafe { slice::from_raw_parts(exception.message.as_ptr(), exception.message.len()) }).unwrap()
.replace("{rtio_channel_info:0}", &format!("0x{:04x}:{}", exception.param[0], resolve_channel_name(exception.param[0] as u32)));
write_exception_string(writer, unsafe { &CSlice::new(msg.as_ptr(), msg.len()) })?;
}
writer.write_u64(exception.param[0] as u64)?;
writer.write_u64(exception.param[1] as u64)?;
writer.write_u64(exception.param[2] as u64)?;
@@ -244,20 +226,3 @@ impl<'a> Reply<'a> {
Ok(())
}
}
pub fn set_device_map(device_map: DeviceMap) {
unsafe { RTIO_DEVICE_MAP = Some(device_map); }
}
fn _resolve_channel_name(channel: u32, device_map: &Option<DeviceMap>) -> String {
if let Some(dev_map) = device_map {
match dev_map.get(&channel) {
Some(val) => val.clone(),
None => String::from("unknown")
}
} else { String::from("unknown") }
}
pub fn resolve_channel_name(channel: u32) -> String {
_resolve_channel_name(channel, unsafe{&RTIO_DEVICE_MAP})
}


@ -19,14 +19,13 @@ byteorder = { version = "1.0", default-features = false }
cslice = { version = "0.3" }
log = { version = "0.4", default-features = false }
managed = { version = "^0.7.1", default-features = false, features = ["alloc", "map"] }
dyld = { path = "../libdyld" }
eh = { path = "../libeh" }
unwind_backtrace = { path = "../libunwind_backtrace" }
io = { path = "../libio", features = ["byteorder"] }
alloc_list = { path = "../liballoc_list" }
board_misoc = { path = "../libboard_misoc", features = ["uart_console", "smoltcp"] }
logger_artiq = { path = "../liblogger_artiq" }
board_artiq = { path = "../libboard_artiq", features = ["alloc"] }
board_artiq = { path = "../libboard_artiq" }
proto_artiq = { path = "../libproto_artiq", features = ["log", "alloc"] }
riscv = { version = "0.6.0", features = ["inline-asm"] }
@ -40,7 +39,3 @@ git = "https://git.m-labs.hk/M-Labs/libfringe.git"
rev = "3ecbe5"
default-features = false
features = ["alloc"]
[dependencies.tar-no-std]
git = "https://git.m-labs.hk/M-Labs/tar-no-std"
rev = "2ab6dc5"


@@ -21,8 +21,8 @@ $(RUSTOUT)/libruntime.a:
--target $(RUNTIME_DIRECTORY)/../$(CARGO_TRIPLE).json
runtime.elf: $(RUSTOUT)/libruntime.a ksupport_data.o
$(link) -T $(RUNTIME_DIRECTORY)/../firmware.ld \
-lunwind-vexriscv-bare -m elf32lriscv
$(link) -T $(RUNTIME_DIRECTORY)/runtime.ld \
-lunwind-bare -m elf32lriscv
ksupport_data.o: ../ksupport/ksupport.elf
$(LD) -r -m elf32lriscv -b binary -o $@ $<

Some files were not shown because too many files have changed in this diff.