mirror of https://github.com/m-labs/artiq.git

commit b452789f03
Merge branch 'master' into nac3
@@ -8,29 +8,40 @@ ARTIQ-7
 
 Highlights:
 
-* Support for Kasli-SoC, a new EEM carrier based on a Zynq SoC, enabling much faster kernel execution.
+* New hardware support:
+   - Kasli-SoC, a new EEM carrier based on a Zynq SoC, enabling much faster kernel execution.
+   - HVAMP_8CH 8 channel HV amplifier for Fastino / Zotinos
+   - Almazny mezzanine board for Mirny
 * Softcore targets now use the RISC-V architecture (VexRiscv) instead of OR1K (mor1kx).
-* WRPLL
-* Compiler:
-   - Supports kernel decorator with paths.
-   - Faster compilation for large arrays/lists.
+* Faster compilation for large arrays/lists.
 * Phaser:
    - Improved documentation
    - Expose the DAC coarse mixer and ``sif_sync``
    - Exposes upconverter calibration and enabling/disabling of upconverter LO & RF outputs.
    - Add helpers to align Phaser updates to the RTIO timeline (``get_next_frame_mu()``)
 * ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912
-* New hardware support:
-   - HVAMP_8CH 8 channel HV amplifier for Fastino / Zotino
+* On Kasli, the number of FIFO lanes in the scalable events dispatcher (SED) can now be configured in
+  the JSON hardware description file.
 * ``artiq_ddb_template`` generates edge-counter keys that start with the key of the corresponding
   TTL device (e.g. ``"ttl_0_counter"`` for the edge counter on TTL device``"ttl_0"``)
 * ``artiq_master`` now has an ``--experiment-subdir`` option to scan only a subdirectory of the
   repository when building the list of experiments.
 * The configuration entry ``rtio_clock`` supports multiple clocking settings, deprecating the usage
   of compile-time options.
+* DRTIO: added support for 100MHz clock.
+* Previously detected RTIO async errors are reported to the host after each kernel terminates and a
+  warning is logged. The warning is additional to the one already printed in the core device log upon
+  detection of the error.
+* HDF5 options can now be passed when creating datasets with ``set_dataset``. This allows
+  in particular to use transparent compression filters as follows:
+  ``set_dataset(name, value, hdf5_options={"compression": "gzip"})``.
+* Removed worker DB warning for writing a dataset that is also in the archive
 
 Breaking changes:
 
+* Due to the new RISC-V CPU, the device database entry for the core device needs to be updated.
+  The ``target`` parameter needs to be set to ``rv32ima`` for Kasli 1.x and to ``rv32g`` for all
+  other boards. Freshly generated device database templates already contain this update.
 * Updated Phaser-Upconverter default frequency 2.875 GHz. The new default uses the target PFD
   frequency of the hardware design.
 * ``Phaser.init()`` now disables all Kasli-oscillators. This avoids full power RF output being
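The HDF5-options release note above can be sketched in plain Python. This is a hypothetical illustration of how an ``hdf5_options`` keyword can ride along with a dataset until file-writing time; the storage field names are assumptions, only the ``set_dataset(..., hdf5_options=...)`` call shape comes from the release note.

```python
# Hypothetical plumbing for hdf5_options (field names are assumptions,
# not the actual ARTIQ internals).
def set_dataset(datasets, key, value, hdf5_options=None):
    # Store the options next to the value so the HDF5 writer can later
    # splat them into create_dataset(**hdf5_options).
    datasets[key] = {"persist": False, "value": value,
                     "hdf5_options": hdf5_options or {}}

datasets = {}
set_dataset(datasets, "counts", [1, 2, 3],
            hdf5_options={"compression": "gzip"})
assert datasets["counts"]["value"] == [1, 2, 3]
assert datasets["counts"]["hdf5_options"]["compression"] == "gzip"
```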
@@ -38,6 +49,11 @@ Breaking changes:
 * Phaser: fixed coarse mixer frequency configuration
 * Mirny: Added extra delays in ``ADF5356.sync()``. This avoids the need of an extra delay before
   calling `ADF5356.init()`.
+* DRTIO: Changed message alignment from 32-bits to 64-bits.
+* The deprecated ``set_dataset(..., save=...)`` is no longer supported.
+* The internal dataset representation was changed to support tracking HDF5 options like e.g.
+  a compression method. This requires changes to code reading the dataset persistence file
+  (``dataset_db.pyon``) and to custom applets.
 
 
 ARTIQ-6
@@ -70,8 +86,11 @@ Highlights:
    - Improved performance for kernel RPC involving list and array.
 * Coredevice SI to mu conversions now always return valid codes, or raise a ``ValueError``.
 * Zotino now exposes ``voltage_to_mu()``
-* ``ad9910``: The maximum amplitude scale factor is now ``0x3fff`` (was ``0x3ffe``
-  before).
+* ``ad9910``:
+   - The maximum amplitude scale factor is now ``0x3fff`` (was ``0x3ffe`` before).
+   - The default single-tone profile is now 7 (was 0).
+   - Added option to ``set_mu()`` that affects the ASF, FTW and POW registers
+     instead of the single-tone profile register.
 * Mirny now supports HW revision independent, human readable ``clk_sel`` parameters:
   "XO", "SMA", and "MMCX". Passing an integer is backwards compatible.
 * Dashboard:
@@ -104,9 +123,13 @@ Breaking changes:
 * ``quamash`` has been replaced with ``qasync``.
 * Protocols are updated to use device endian.
 * Analyzer dump format includes a byte for device endianness.
+* To support variable numbers of Urukul cards in the future, the
+  ``artiq.coredevice.suservo.SUServo`` constructor now accepts two device name lists,
+  ``cpld_devices`` and ``dds_devices``, rather than four individual arguments.
 * Experiment classes with underscore-prefixed names are now ignored when ``artiq_client``
   determines which experiment to submit (consistent with ``artiq_run``).
 
 
 ARTIQ-5
 -------
 
@@ -13,7 +13,7 @@ class NumberWidget(QtWidgets.QLCDNumber):
 
     def data_changed(self, data, mods):
        try:
-            n = float(data[self.dataset_name][1])
+            n = float(data[self.dataset_name]["value"])
        except (KeyError, ValueError, TypeError):
            n = "---"
        self.display(n)
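The applet hunks in this commit all make the same mechanical change: a dataset used to be a ``(persist, value)`` tuple read with ``[1]``, and is now a dict read with ``["value"]``. A minimal sketch of the two representations (the exact field set of the new dict beyond ``"value"`` is an assumption):

```python
# Old vs. new internal dataset representation driving these applet edits.
def make_dataset_old(persist=False, value=None):
    return (persist, value)           # value read as d[1]

def make_dataset_new(persist=False, value=None):
    # A dict leaves room for extra metadata (e.g. HDF5 options) later.
    return {"persist": persist, "value": value}   # value read as d["value"]

old = make_dataset_old(value=42)
new = make_dataset_new(value=42)
assert old[1] == new["value"] == 42
```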
@@ -13,7 +13,7 @@ class Image(pyqtgraph.ImageView):
 
     def data_changed(self, data, mods):
        try:
-            img = data[self.args.img][1]
+            img = data[self.args.img]["value"]
        except KeyError:
            return
        self.setImage(img)
@@ -17,11 +17,11 @@ class HistogramPlot(pyqtgraph.PlotWidget):
 
     def data_changed(self, data, mods, title):
        try:
-            y = data[self.args.y][1]
+            y = data[self.args.y]["value"]
            if self.args.x is None:
                x = None
            else:
-                x = data[self.args.x][1]
+                x = data[self.args.x]["value"]
        except KeyError:
            return
        if x is None:
@@ -6,6 +6,7 @@ from PyQt5.QtCore import QTimer
 import pyqtgraph
 
 from artiq.applets.simple import TitleApplet
+from artiq.master.databases import make_dataset as empty_dataset
 
 
 class XYPlot(pyqtgraph.PlotWidget):
@@ -21,14 +22,14 @@ class XYPlot(pyqtgraph.PlotWidget):
 
     def data_changed(self, data, mods, title):
        try:
-            y = data[self.args.y][1]
+            y = data[self.args.y]["value"]
        except KeyError:
            return
-        x = data.get(self.args.x, (False, None))[1]
+        x = data.get(self.args.x, empty_dataset())["value"]
        if x is None:
            x = np.arange(len(y))
-        error = data.get(self.args.error, (False, None))[1]
-        fit = data.get(self.args.fit, (False, None))[1]
+        error = data.get(self.args.error, empty_dataset())["value"]
+        fit = data.get(self.args.fit, empty_dataset())["value"]
 
        if not len(y) or len(y) != len(x):
            self.mismatch['X values'] = True
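The ``.get()`` default in the hunk above changes for the same reason as the indexing: a missing optional dataset must still support ``["value"]`` lookup, so the old ``(False, None)`` tuple default becomes a freshly built empty dataset dict. A sketch (the real ``make_dataset`` field set is an assumption):

```python
# Why empty_dataset() is used as the .get() default in the applets:
# an absent dataset must index exactly like a present one.
def empty_dataset():
    return {"persist": False, "value": None}

data = {"y": {"persist": False, "value": [1, 2, 3]}}
y = data["y"]["value"]
x = data.get("x", empty_dataset())["value"]   # absent dataset -> None
assert y == [1, 2, 3]
assert x is None
```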
@@ -126,9 +126,9 @@ class XYHistPlot(QtWidgets.QSplitter):
 
     def data_changed(self, data, mods):
        try:
-            xs = data[self.args.xs][1]
-            histogram_bins = data[self.args.histogram_bins][1]
-            histograms_counts = data[self.args.histograms_counts][1]
+            xs = data[self.args.xs]["value"]
+            histogram_bins = data[self.args.histogram_bins]["value"]
+            histograms_counts = data[self.args.histograms_counts]["value"]
        except KeyError:
            return
        if len(xs) != histograms_counts.shape[0]:
@@ -10,6 +10,7 @@ from sipyco.sync_struct import Subscriber, process_mod
 from sipyco import pyon
 from sipyco.pipe_ipc import AsyncioChildComm
 
+from artiq.master.databases import make_dataset as empty_dataset
 
 logger = logging.getLogger(__name__)
 
@@ -251,7 +252,7 @@ class TitleApplet(SimpleApplet):
 
     def emit_data_changed(self, data, mod_buffer):
        if self.args.title is not None:
-            title_values = {k.replace(".", "/"): data.get(k, (False, None))[1]
+            title_values = {k.replace(".", "/"): data.get(k, empty_dataset())["value"]
                            for k in self.dataset_title}
            try:
                title = self.args.title.format(**title_values)
@@ -104,8 +104,8 @@ class DatasetsDock(QtWidgets.QDockWidget):
        idx = self.table_model_filter.mapToSource(idx[0])
        key = self.table_model.index_to_key(idx)
        if key is not None:
-            persist, value = self.table_model.backing_store[key]
-            asyncio.ensure_future(self._upload_dataset(key, value))
+            dataset = self.table_model.backing_store[key]
+            asyncio.ensure_future(self._upload_dataset(key, dataset["value"]))
 
    def save_state(self):
        return bytes(self.table.header().saveState())
@@ -7,6 +7,7 @@ from artiq.language.types import TBool, TInt32, TInt64, TFloat, TList, TTuple
 
 from artiq.coredevice import spi2 as spi
 from artiq.coredevice import urukul
+from artiq.coredevice.urukul import DEFAULT_PROFILE
 
 # Work around ARTIQ-Python import machinery
 urukul_sta_pll_lock = urukul.urukul_sta_pll_lock
@@ -60,6 +61,9 @@ RAM_MODE_BIDIR_RAMP = 2
 RAM_MODE_CONT_BIDIR_RAMP = 3
 RAM_MODE_CONT_RAMPUP = 4
 
+# Default profile for RAM mode
+_DEFAULT_PROFILE_RAM = 0
+
 
 class SyncDataUser:
    def __init__(self, core, sync_delay_seed, io_update_delay):
@@ -236,7 +240,7 @@ class AD9910:
        """
        self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, 24,
                               urukul.SPIT_DDS_WR, self.chip_select)
-        self.bus.write((addr << 24) | (data << 8))
+        self.bus.write((addr << 24) | ((data & 0xffff) << 8))
 
    @kernel
    def write32(self, addr: TInt32, data: TInt32):
@@ -374,18 +378,25 @@ class AD9910:
            data[(n - preload) + i] = self.bus.read()
 
    @kernel
-    def set_cfr1(self, power_down: TInt32 = 0b0000,
+    def set_cfr1(self,
+                 power_down: TInt32 = 0b0000,
                 phase_autoclear: TInt32 = 0,
-                 drg_load_lrr: TInt32 = 0, drg_autoclear: TInt32 = 0,
-                 internal_profile: TInt32 = 0, ram_destination: TInt32 = 0,
-                 ram_enable: TInt32 = 0, manual_osk_external: TInt32 = 0,
-                 osk_enable: TInt32 = 0, select_auto_osk: TInt32 = 0):
+                 drg_load_lrr: TInt32 = 0,
+                 drg_autoclear: TInt32 = 0,
+                 phase_clear: TInt32 = 0,
+                 internal_profile: TInt32 = 0,
+                 ram_destination: TInt32 = 0,
+                 ram_enable: TInt32 = 0,
+                 manual_osk_external: TInt32 = 0,
+                 osk_enable: TInt32 = 0,
+                 select_auto_osk: TInt32 = 0):
        """Set CFR1. See the AD9910 datasheet for parameter meanings.
 
        This method does not pulse IO_UPDATE.
 
        :param power_down: Power down bits.
        :param phase_autoclear: Autoclear phase accumulator.
+        :param phase_clear: Asynchronous, static reset of the phase accumulator.
        :param drg_load_lrr: Load digital ramp generator LRR.
        :param drg_autoclear: Autoclear digital ramp generator.
        :param internal_profile: Internal profile control.
@@ -405,11 +416,41 @@ class AD9910:
                     (drg_load_lrr << 15) |
                     (drg_autoclear << 14) |
                     (phase_autoclear << 13) |
+                     (phase_clear << 11) |
                     (osk_enable << 9) |
                     (select_auto_osk << 8) |
                     (power_down << 4) |
                     2)  # SDIO input only, MSB first
 
+    @kernel
+    def set_cfr2(self,
+                 asf_profile_enable: TInt32 = 1,
+                 drg_enable: TInt32 = 0,
+                 effective_ftw: TInt32 = 1,
+                 sync_validation_disable: TInt32 = 0,
+                 matched_latency_enable: TInt32 = 0):
+        """Set CFR2. See the AD9910 datasheet for parameter meanings.
+
+        This method does not pulse IO_UPDATE.
+
+        :param asf_profile_enable: Enable amplitude scale from single tone profiles.
+        :param drg_enable: Digital ramp enable.
+        :param effective_ftw: Read effective FTW.
+        :param sync_validation_disable: Disable the SYNC_SMP_ERR pin indicating
+            (active high) detection of a synchronization pulse sampling error.
+        :param matched_latency_enable: Simultaneous application of amplitude,
+            phase, and frequency changes to the DDS arrive at the output
+
+            * matched_latency_enable = 0: in the order listed
+            * matched_latency_enable = 1: simultaneously.
+        """
+        self.write32(_AD9910_REG_CFR2,
+                     (asf_profile_enable << 24) |
+                     (drg_enable << 19) |
+                     (effective_ftw << 16) |
+                     (matched_latency_enable << 7) |
+                     (sync_validation_disable << 5))
+
    @kernel
    def init(self, blind: TBool = False):
        """Initialize and configure the DDS.
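The new ``set_cfr2()`` replaces hard-coded CFR2 register writes later in this diff. As a sanity check, its bit packing can be reproduced in plain Python and compared against the magic constants it supersedes (``0x01010020``, ``0x01010000``, ``0x01090000``):

```python
# Same CFR2 bit packing as the set_cfr2() method added in this hunk.
def cfr2_word(asf_profile_enable=1, drg_enable=0, effective_ftw=1,
              sync_validation_disable=0, matched_latency_enable=0):
    return ((asf_profile_enable << 24) |
            (drg_enable << 19) |
            (effective_ftw << 16) |
            (matched_latency_enable << 7) |
            (sync_validation_disable << 5))

# The named-parameter calls reproduce the magic numbers they replace:
assert cfr2_word(sync_validation_disable=1) == 0x01010020  # init(), clear_smp_err()
assert cfr2_word() == 0x01010000                           # enable SMP_ERR
assert cfr2_word(drg_enable=1) == 0x01090000               # DRG enable
```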
@@ -442,7 +483,7 @@ class AD9910:
        # enable amplitude scale from profiles
        # read effective FTW
        # sync timing validation disable (enabled later)
-        self.write32(_AD9910_REG_CFR2, 0x01010020)
+        self.set_cfr2(sync_validation_disable=1)
        self.cpld.io_update.pulse(1 * us)
        cfr3 = (0x0807c000 | (self.pll_vco << 24) |
                (self.pll_cp << 19) | (self.pll_en << 8) |
@@ -465,7 +506,7 @@ class AD9910:
            if i >= 100 - 1:
                raise ValueError("PLL lock timeout")
        delay(10 * us)  # slack
-        if self.sync_data.sync_delay_seed >= 0:
+        if self.sync_data.sync_delay_seed >= 0 and not blind:
            self.tune_sync_delay(self.sync_data.sync_delay_seed)
        delay(1 * ms)
 
@@ -479,10 +520,12 @@ class AD9910:
        self.cpld.io_update.pulse(1 * us)
 
    @kernel
-    def set_mu(self, ftw: TInt32, pow_: TInt32 = 0, asf: TInt32 = 0x3fff,
+    def set_mu(self, ftw: TInt32 = 0, pow_: TInt32 = 0, asf: TInt32 = 0x3fff,
               phase_mode: TInt32 = _PHASE_MODE_DEFAULT,
-               ref_time_mu: TInt64 = int64(-1), profile: TInt32 = 0):
-        """Set profile 0 data in machine units.
+               ref_time_mu: TInt64 = int64(-1),
+               profile: TInt32 = DEFAULT_PROFILE,
+               ram_destination: TInt32 = -1) -> TInt32:
+        """Set DDS data in machine units.
 
        This uses machine units (FTW, POW, ASF). The frequency tuning word
        width is 32, the phase offset word width is 16, and the amplitude
@@ -501,7 +544,13 @@ class AD9910:
            by :meth:`set_phase_mode` for this call.
        :param ref_time_mu: Fiducial time used to compute absolute or tracking
            phase updates. In machine units as obtained by `now_mu()`.
-        :param profile: Profile number to set (0-7, default: 0).
+        :param profile: Single tone profile number to set (0-7, default: 7).
+            Ineffective if `ram_destination` is specified.
+        :param ram_destination: RAM destination (:const:`RAM_DEST_FTW`,
+            :const:`RAM_DEST_POW`, :const:`RAM_DEST_ASF`,
+            :const:`RAM_DEST_POWASF`). If specified, write free DDS parameters
+            to the ASF/FTW/POW registers instead of to the single tone profile
+            register (default behaviour, see `profile`).
        :return: Resulting phase offset word after application of phase
            tracking offset. When using :const:`PHASE_MODE_CONTINUOUS` in
            subsequent calls, use this value as the "current" phase.
@@ -524,8 +573,17 @@ class AD9910:
        # is equivalent to an output pipeline latency.
        dt = int32(now_mu()) - int32(ref_time_mu)
        pow_ += dt * ftw * self.sysclk_per_mu >> 16
-        self.write64(_AD9910_REG_PROFILE0 + profile,
-                     (asf << 16) | (pow_ & 0xffff), ftw)
+        if ram_destination == -1:
+            self.write64(_AD9910_REG_PROFILE0 + profile,
+                         (asf << 16) | (pow_ & 0xffff), ftw)
+        else:
+            if not ram_destination == RAM_DEST_FTW:
+                self.set_ftw(ftw)
+            if not ram_destination == RAM_DEST_POWASF:
+                if not ram_destination == RAM_DEST_ASF:
+                    self.set_asf(asf)
+                if not ram_destination == RAM_DEST_POW:
+                    self.set_pow(pow_)
        delay_mu(int64(self.sync_data.io_update_delay))
        self.cpld.io_update.pulse_mu(8)  # assumes 8 mu > t_SYN_CCLK
        at_mu(now_mu() & ~7)  # clear fine TSC again
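The nested ``ram_destination`` checks above decide which of the free DDS parameters still get written directly: whichever parameter(s) the RAM drives are skipped. The dispatch can be modeled in plain Python, assuming the usual ``RAM_DEST_*`` encoding (FTW=0, POW=1, ASF=2, POWASF=3):

```python
# Model of the ram_destination branch in the new set_mu():
# the RAM-driven parameter(s) are skipped, the rest are written directly.
RAM_DEST_FTW, RAM_DEST_POW, RAM_DEST_ASF, RAM_DEST_POWASF = 0, 1, 2, 3

def registers_written(ram_destination):
    written = []
    if not ram_destination == RAM_DEST_FTW:
        written.append("ftw")
    if not ram_destination == RAM_DEST_POWASF:
        if not ram_destination == RAM_DEST_ASF:
            written.append("asf")
        if not ram_destination == RAM_DEST_POW:
            written.append("pow")
    return written

assert registers_written(RAM_DEST_FTW) == ["asf", "pow"]
assert registers_written(RAM_DEST_POW) == ["ftw", "asf"]
assert registers_written(RAM_DEST_ASF) == ["ftw", "pow"]
assert registers_written(RAM_DEST_POWASF) == ["ftw"]
```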
@@ -535,13 +593,14 @@ class AD9910:
        return pow_
 
    @kernel
-    def get_mu(self, profile: TInt32 = 0) -> TTuple([TInt32, TInt32, TInt32]):
+    def get_mu(self, profile: TInt32 = DEFAULT_PROFILE
+               ) -> TTuple([TInt32, TInt32, TInt32]):
        """Get the frequency tuning word, phase offset word,
        and amplitude scale factor.
 
        .. seealso:: :meth:`get`
 
-        :param profile: Profile number to get (0-7, default: 0)
+        :param profile: Profile number to get (0-7, default: 7)
        :return: A tuple ``(ftw, pow, asf)``
        """
@@ -555,8 +614,9 @@ class AD9910:
 
    @kernel
    def set_profile_ram(self, start: TInt32, end: TInt32, step: TInt32 = 1,
-                        profile: TInt32 = 0, nodwell_high: TInt32 = 0,
-                        zero_crossing: TInt32 = 0, mode: TInt32 = 1):
+                        profile: TInt32 = _DEFAULT_PROFILE_RAM,
+                        nodwell_high: TInt32 = 0, zero_crossing: TInt32 = 0,
+                        mode: TInt32 = 1):
        """Set the RAM profile settings.
 
        :param start: Profile start address in RAM.
@@ -784,10 +844,11 @@ class AD9910:
        return self.pow_to_turns(self.get_pow())
 
    @kernel
-    def set(self, frequency: TFloat, phase: TFloat = 0.0,
+    def set(self, frequency: TFloat = 0.0, phase: TFloat = 0.0,
            amplitude: TFloat = 1.0, phase_mode: TInt32 = _PHASE_MODE_DEFAULT,
-            ref_time_mu: TInt64 = int64(-1), profile: TInt32 = 0):
-        """Set profile 0 data in SI units.
+            ref_time_mu: TInt64 = int64(-1), profile: TInt32 = DEFAULT_PROFILE,
+            ram_destination: TInt32 = -1) -> TFloat:
+        """Set DDS data in SI units.
 
        .. seealso:: :meth:`set_mu`
 
@@ -796,21 +857,23 @@ class AD9910:
        :param amplitude: Amplitude in units of full scale
        :param phase_mode: Phase mode constant
        :param ref_time_mu: Fiducial time stamp in machine units
-        :param profile: Profile to affect
+        :param profile: Single tone profile to affect.
+        :param ram_destination: RAM destination.
        :return: Resulting phase offset in turns
        """
        return self.pow_to_turns(self.set_mu(
            self.frequency_to_ftw(frequency), self.turns_to_pow(phase),
            self.amplitude_to_asf(amplitude), phase_mode, ref_time_mu,
-            profile))
+            profile, ram_destination))
 
    @kernel
-    def get(self, profile: TInt32 = 0) -> TTuple([TFloat, TFloat, TFloat]):
+    def get(self, profile: TInt32 = DEFAULT_PROFILE
+            ) -> TTuple([TFloat, TFloat, TFloat]):
        """Get the frequency, phase, and amplitude.
 
        .. seealso:: :meth:`get_mu`
 
-        :param profile: Profile number to get (0-7, default: 0)
+        :param profile: Profile number to get (0-7, default: 7)
        :return: A tuple ``(frequency, phase, amplitude)``
        """
 
@@ -875,20 +938,26 @@ class AD9910:
        self.cpld.cfg_sw(self.chip_select - 4, state)
 
    @kernel
-    def set_sync(self, in_delay: TInt32, window: TInt32):
+    def set_sync(self,
+                 in_delay: TInt32,
+                 window: TInt32,
+                 en_sync_gen: TInt32 = 0):
        """Set the relevant parameters in the multi device synchronization
        register. See the AD9910 datasheet for details. The SYNC clock
        generator preset value is set to zero, and the SYNC_OUT generator is
-        disabled.
+        disabled by default.
 
        :param in_delay: SYNC_IN delay tap (0-31) in steps of ~75ps
        :param window: Symmetric SYNC_IN validation window (0-15) in
            steps of ~75ps for both hold and setup margin.
+        :param en_sync_gen: Whether to enable the DDS-internal sync generator
+            (SYNC_OUT, cf. sync_sel == 1). Should be left off for the normal
+            use case, where the SYNC clock is supplied by the core device.
        """
        self.write32(_AD9910_REG_SYNC,
                     (window << 28) |  # SYNC S/H validation delay
                     (1 << 27) |  # SYNC receiver enable
-                     (0 << 26) |  # SYNC generator disable
+                     (en_sync_gen << 26) |  # SYNC generator enable
                     (0 << 25) |  # SYNC generator SYS rising edge
                     (0 << 18) |  # SYNC preset
                     (0 << 11) |  # SYNC output delay
@@ -904,9 +973,10 @@ class AD9910:
 
        Also modifies CFR2.
        """
-        self.write32(_AD9910_REG_CFR2, 0x01010020)  # clear SMP_ERR
+        self.set_cfr2(sync_validation_disable=1)  # clear SMP_ERR
        self.cpld.io_update.pulse(1 * us)
-        self.write32(_AD9910_REG_CFR2, 0x01010000)  # enable SMP_ERR
+        delay(10 * us)  # slack
+        self.set_cfr2(sync_validation_disable=0)  # enable SMP_ERR
        self.cpld.io_update.pulse(1 * us)
 
    @kernel
@@ -984,7 +1054,7 @@ class AD9910:
        # set up DRG
        self.set_cfr1(drg_load_lrr=1, drg_autoclear=1)
        # DRG -> FTW, DRG enable
-        self.write32(_AD9910_REG_CFR2, 0x01090000)
+        self.set_cfr2(drg_enable=1)
        # no limits
        self.write64(_AD9910_REG_RAMP_LIMIT, -1, 0)
        # DRCTL=0, dt=1 t_SYNC_CLK
@@ -1005,7 +1075,7 @@ class AD9910:
        ftw = self.read32(_AD9910_REG_FTW)  # read out effective FTW
        delay(100 * us)  # slack
        # disable DRG
-        self.write32(_AD9910_REG_CFR2, 0x01010000)
+        self.set_cfr2(drg_enable=0)
        self.cpld.io_update.pulse_mu(8)
        return ftw & 1
 
@@ -164,7 +164,7 @@ class AD9912:
        return self.cpld.get_channel_att(self.chip_select - 4)
 
    @kernel
-    def set_mu(self, ftw: int64, pow_: int32):
+    def set_mu(self, ftw: int64, pow_: int32 = 0):
        """Set profile 0 data in machine units.
 
        After the SPI transfer, the shared IO update pin is pulsed to
@@ -437,12 +437,12 @@ class CommKernel:
            self._write_bool(value)
        elif tag == "i":
            check(isinstance(value, (int, numpy.int32)) and
-                  (-2**31 < value < 2**31-1),
+                  (-2**31 <= value < 2**31),
                  lambda: "32-bit int")
            self._write_int32(value)
        elif tag == "I":
            check(isinstance(value, (int, numpy.int32, numpy.int64)) and
-                  (-2**63 < value < 2**63-1),
+                  (-2**63 <= value < 2**63),
                  lambda: "64-bit int")
            self._write_int64(value)
        elif tag == "f":
@ -451,8 +451,8 @@ class CommKernel:
|
||||||
self._write_float64(value)
|
self._write_float64(value)
|
||||||
elif tag == "F":
|
elif tag == "F":
|
||||||
check(isinstance(value, Fraction) and
|
check(isinstance(value, Fraction) and
|
||||||
(-2**63 < value.numerator < 2**63-1) and
|
(-2**63 <= value.numerator < 2**63) and
|
||||||
(-2**63 < value.denominator < 2**63-1),
|
(-2**63 <= value.denominator < 2**63),
|
||||||
lambda: "64-bit Fraction")
|
lambda: "64-bit Fraction")
|
||||||
self._write_int64(value.numerator)
|
self._write_int64(value.numerator)
|
||||||
self._write_int64(value.denominator)
|
self._write_int64(value.denominator)
|
||||||
|
@ -476,11 +476,19 @@ class CommKernel:
|
||||||
if tag_element == "b":
|
if tag_element == "b":
|
||||||
self._write(bytes(value))
|
self._write(bytes(value))
|
||||||
elif tag_element == "i":
|
elif tag_element == "i":
|
||||||
self._write(struct.pack(self.endian + "%sl" %
|
try:
|
||||||
len(value), *value))
|
self._write(struct.pack(self.endian + "%sl" % len(value), *value))
|
||||||
|
except struct.error:
|
||||||
|
raise RPCReturnValueError(
|
||||||
|
"type mismatch: cannot serialize {value} as {type}".format(
|
||||||
|
value=repr(value), type="32-bit integer list"))
|
||||||
elif tag_element == "I":
|
elif tag_element == "I":
|
||||||
self._write(struct.pack(self.endian + "%sq" %
|
try:
|
||||||
len(value), *value))
|
self._write(struct.pack(self.endian + "%sq" % len(value), *value))
|
||||||
|
except struct.error:
|
||||||
|
raise RPCReturnValueError(
|
||||||
|
"type mismatch: cannot serialize {value} as {type}".format(
|
||||||
|
value=repr(value), type="64-bit integer list"))
|
||||||
elif tag_element == "f":
|
elif tag_element == "f":
|
||||||
self._write(struct.pack(self.endian + "%sd" %
|
self._write(struct.pack(self.endian + "%sd" %
|
||||||
len(value), *value))
|
len(value), *value))
|
||||||
|
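The serialization hunks above tighten the integer range checks to the exact two's-complement windows and convert `struct.error` into `RPCReturnValueError`. A minimal standalone sketch of the same idea (hypothetical helper name, with `ValueError` standing in for ARTIQ's `RPCReturnValueError`):

```python
import struct

def pack_int32_list(values, endian="<"):
    # Pack Python ints as 32-bit integers; struct.error signals the same
    # out-of-range condition the serializer now reports as a type mismatch.
    try:
        return struct.pack(endian + "%dl" % len(values), *values)
    except struct.error:
        raise ValueError(
            "type mismatch: cannot serialize %r as 32-bit integer list" % (values,))
```

Values in `[-2**31, 2**31)` pack cleanly; anything outside raises instead of silently truncating.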
@@ -555,14 +563,6 @@ class CommKernel:

        try:
            result = service(*args, **kwargs)
-            logger.debug("rpc service: %d %r %r = %r",
-                         service_id, args, kwargs, result)
-
-            self._write_header(Request.RPCReply)
-            self._write_bytes(return_tags)
-            self._send_rpc_value(bytearray(return_tags),
-                                 result, result, service)
-            self._flush()
        except RPCReturnValueError as exn:
            raise
        except Exception as exn:

@@ -609,6 +609,14 @@ class CommKernel:
            self._write_int32(-1)  # column not known
            self._write_string(function)
            self._flush()
+        else:
+            logger.debug("rpc service: %d %r %r = %r",
+                         service_id, args, kwargs, result)
+            self._write_header(Request.RPCReply)
+            self._write_bytes(return_tags)
+            self._send_rpc_value(bytearray(return_tags),
+                                 result, result, service)
+            self._flush()

    def _serve_exception(self, embedding_map, symbolizer, demangler):
        name = self._read_string()

@@ -621,6 +629,7 @@ class CommKernel:
        function = self._read_string()

        backtrace = [self._read_int32() for _ in range(self._read_int32())]
+        self._process_async_error()

        traceback = list(reversed(symbolizer(backtrace))) + \
            [(filename, line, column, *demangler([function]), None)]

@@ -635,6 +644,16 @@ class CommKernel:
        python_exn.artiq_core_exception = core_exn
        raise python_exn

+    def _process_async_error(self):
+        errors = self._read_int8()
+        if errors > 0:
+            map_name = lambda y, z: [f"{y}(s)"] if z else []
+            errors = map_name("collision", errors & 2 ** 0) + \
+                     map_name("busy error", errors & 2 ** 1) + \
+                     map_name("sequence error", errors & 2 ** 2)
+            logger.warning(f"{(', '.join(errors[:-1]) + ' and ') if len(errors) > 1 else ''}{errors[-1]} "
+                           f"reported during kernel execution")
+
    def serve(self, embedding_map, symbolizer, demangler):
        while True:
            self._read_header()

@@ -646,4 +665,5 @@ class CommKernel:
                raise exceptions.ClockFailure
            else:
                self._read_expect(Reply.KernelFinished)
+                self._process_async_error()
                return
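The added `_process_async_error` turns the async-error bitfield reported by the core device (bit 0 collision, bit 1 busy error, bit 2 sequence error) into a warning. A self-contained sketch of the same decoding (hypothetical function name, logging replaced by a returned string):

```python
def decode_async_errors(errors: int) -> str:
    # Mirror the bit assignments used by _process_async_error above:
    # bit 0 = collision, bit 1 = busy error, bit 2 = sequence error.
    map_name = lambda name, bit: [f"{name}(s)"] if bit else []
    names = (map_name("collision", errors & 1)
             + map_name("busy error", errors & 2)
             + map_name("sequence error", errors & 4))
    if not names:
        return ""
    head = (", ".join(names[:-1]) + " and ") if len(names) > 1 else ""
    return f"{head}{names[-1]} reported during kernel execution"
```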
@@ -64,6 +64,13 @@
            "type": "boolean",
            "default": false
        },
+        "sed_lanes": {
+            "type": "number",
+            "minimum": 1,
+            "maximum": 32,
+            "default": 8,
+            "description": "Number of FIFOs in the SED, must be a power of 2"
+        },
        "peripherals": {
            "type": "array",
            "items": {

@@ -402,6 +409,10 @@
                        }
                    ],
                    "default": 0
+                },
+                "almazny": {
+                    "type": "boolean",
+                    "default": false
                }
            },
            "required": ["ports"]
@@ -32,6 +32,16 @@ WE = 1 << 24
 # supported CPLD code version
 PROTO_REV_MATCH = 0x0

+# almazny-specific data
+ALMAZNY_REG_BASE = 0x0C
+ALMAZNY_OE_SHIFT = 12
+
+# higher SPI write divider to match almazny shift register timing
+# min SER time before SRCLK rise = 125ns
+# -> div=32 gives 125ns for data before clock rise
+# works at faster dividers too but could be less reliable
+ALMAZNY_SPIT_WR = 32
+

 @nac3
 class Mirny:

@@ -140,11 +150,114 @@ class Mirny:
        self.bus.write(((channel | 8) << 25) | (att << 16))

    @kernel
-    def write_ext(self, addr: int32, length: int32, data: int32):
+    def write_ext(self, addr: int32, length: int32, data: int32, ext_div: int32 = SPIT_WR):
        """Perform SPI write to a prefixed address"""
        self.bus.set_config_mu(SPI_CONFIG, 8, SPIT_WR, SPI_CS)
        self.bus.write(addr << 25)
-        self.bus.set_config_mu(SPI_CONFIG | SPI_END, length, SPIT_WR, SPI_CS)
+        self.bus.set_config_mu(SPI_CONFIG | SPI_END, length, ext_div, SPI_CS)
        if length < 32:
            data <<= 32 - length
        self.bus.write(data)
+
+
+class Almazny:
+    """
+    Almazny (high-frequency mezzanine board for Mirny)
+
+    :param host_mirny: Mirny device Almazny is connected to
+    """
+
+    def __init__(self, dmgr, host_mirny):
+        self.mirny_cpld = dmgr.get(host_mirny)
+        self.att_mu = [0x3f] * 4
+        self.channel_sw = [0] * 4
+        self.output_enable = False
+
+    @kernel
+    def init(self):
+        self.output_toggle(self.output_enable)
+
+    @kernel
+    def att_to_mu(self, att):
+        """
+        Convert an attenuator setting in dB to machine units.
+
+        :param att: attenuator setting in dB [0-31.5]
+        :return: attenuator setting in machine units
+        """
+        mu = round(att * 2.0)
+        if mu > 63 or mu < 0:
+            raise ValueError("Invalid Almazny attenuator settings!")
+        return mu
+
+    @kernel
+    def mu_to_att(self, att_mu):
+        """
+        Convert a digital attenuator setting to dB.
+
+        :param att_mu: attenuator setting in machine units
+        :return: attenuator setting in dB
+        """
+        return att_mu / 2
+
+    @kernel
+    def set_att(self, channel, att, rf_switch=True):
+        """
+        Sets attenuators on chosen shift register (channel).
+
+        :param channel: index of the register [0-3]
+        :param att: attenuation setting in dB [0-31.5]
+        :param rf_switch: rf switch (bool)
+        """
+        self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
+
+    @kernel
+    def set_att_mu(self, channel, att_mu, rf_switch=True):
+        """
+        Sets attenuators on chosen shift register (channel).
+
+        :param channel: index of the register [0-3]
+        :param att_mu: attenuation setting in machine units [0-63]
+        :param rf_switch: rf switch (bool)
+        """
+        self.channel_sw[channel] = 1 if rf_switch else 0
+        self.att_mu[channel] = att_mu
+        self._update_register(channel)
+
+    @kernel
+    def output_toggle(self, oe):
+        """
+        Toggles output on all shift registers on or off.
+
+        :param oe: toggle output enable (bool)
+        """
+        self.output_enable = oe
+        cfg_reg = self.mirny_cpld.read_reg(1)
+        en = 1 if self.output_enable else 0
+        delay(100 * us)
+        new_reg = (en << ALMAZNY_OE_SHIFT) | (cfg_reg & 0x3FF)
+        self.mirny_cpld.write_reg(1, new_reg)
+        delay(100 * us)
+
+    @kernel
+    def _flip_mu_bits(self, mu):
+        # in this form the MSB is actually the 0.5 dB attenuator, which is
+        # unnatural for users, so we flip the six bits
+        return (((mu & 0x01) << 5)
+                | ((mu & 0x02) << 3)
+                | ((mu & 0x04) << 1)
+                | ((mu & 0x08) >> 1)
+                | ((mu & 0x10) >> 3)
+                | ((mu & 0x20) >> 5))
+
+    @kernel
+    def _update_register(self, ch):
+        self.mirny_cpld.write_ext(
+            ALMAZNY_REG_BASE + ch,
+            8,
+            self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
+            ALMAZNY_SPIT_WR
+        )
+        delay(100 * us)
+
+    @kernel
+    def _update_all_registers(self):
+        for i in range(4):
+            self._update_register(i)
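`Almazny._flip_mu_bits` reverses the six attenuator bits so that the user-facing LSB (0.5 dB step) lands where the shift register expects it. A plain-Python sketch of the same reversal (hypothetical standalone function name):

```python
def flip_att_bits(mu: int) -> int:
    # Reverse the order of the six attenuator bits (bit 0 <-> bit 5,
    # bit 1 <-> bit 4, bit 2 <-> bit 3), as in Almazny._flip_mu_bits.
    return (((mu & 0x01) << 5)
            | ((mu & 0x02) << 3)
            | ((mu & 0x04) << 1)
            | ((mu & 0x08) >> 1)
            | ((mu & 0x10) >> 3)
            | ((mu & 0x20) >> 5))
```

The mapping is an involution: applying it twice returns the original value.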
@@ -57,32 +57,26 @@ class SUServo:

    :param channel: RTIO channel number
    :param pgia_device: Name of the Sampler PGIA gain setting SPI bus
-    :param cpld0_device: Name of the first Urukul CPLD SPI bus
-    :param cpld1_device: Name of the second Urukul CPLD SPI bus
-    :param dds0_device: Name of the AD9910 device for the DDS on the first
-        Urukul
-    :param dds1_device: Name of the AD9910 device for the DDS on the second
-        Urukul
+    :param cpld_devices: Names of the Urukul CPLD SPI buses
+    :param dds_devices: Names of the AD9910 devices
    :param gains: Initial value for PGIA gains shift register
        (default: 0x0000). Knowledge of this state is not transferred
        between experiments.
    :param core_device: Core device name
    """
-    kernel_invariants = {"channel", "core", "pgia", "cpld0", "cpld1",
-                         "dds0", "dds1", "ref_period_mu"}
+    kernel_invariants = {"channel", "core", "pgia", "cplds", "ddses",
+                         "ref_period_mu"}

    def __init__(self, dmgr, channel, pgia_device,
-                 cpld0_device, cpld1_device,
-                 dds0_device, dds1_device,
+                 cpld_devices, dds_devices,
                 gains=0x0000, core_device="core"):

        self.core = dmgr.get(core_device)
        self.pgia = dmgr.get(pgia_device)
        self.pgia.update_xfer_duration_mu(div=4, length=16)
-        self.dds0 = dmgr.get(dds0_device)
-        self.dds1 = dmgr.get(dds1_device)
-        self.cpld0 = dmgr.get(cpld0_device)
-        self.cpld1 = dmgr.get(cpld1_device)
+        assert len(dds_devices) == len(cpld_devices)
+        self.ddses = [dmgr.get(dds) for dds in dds_devices]
+        self.cplds = [dmgr.get(cpld) for cpld in cpld_devices]
        self.channel = channel
        self.gains = gains
        self.ref_period_mu = self.core.seconds_to_mu(

@@ -109,17 +103,15 @@ class SUServo:
            sampler.SPI_CONFIG | spi.SPI_END,
            16, 4, sampler.SPI_CS_PGIA)

-        self.cpld0.init(blind=True)
-        cfg0 = self.cpld0.cfg_reg
-        self.cpld0.cfg_write(cfg0 | (0xf << urukul.CFG_MASK_NU))
-        self.dds0.init(blind=True)
-        self.cpld0.cfg_write(cfg0)
-
-        self.cpld1.init(blind=True)
-        cfg1 = self.cpld1.cfg_reg
-        self.cpld1.cfg_write(cfg1 | (0xf << urukul.CFG_MASK_NU))
-        self.dds1.init(blind=True)
-        self.cpld1.cfg_write(cfg1)
+        for i in range(len(self.cplds)):
+            cpld = self.cplds[i]
+            dds = self.ddses[i]
+
+            cpld.init(blind=True)
+            prev_cpld_cfg = cpld.cfg_reg
+            cpld.cfg_write(prev_cpld_cfg | (0xf << urukul.CFG_MASK_NU))
+            dds.init(blind=True)
+            cpld.cfg_write(prev_cpld_cfg)

    @kernel
    def write(self, addr, value):

@@ -257,9 +249,11 @@ class Channel:
        self.servo = dmgr.get(servo_device)
        self.core = self.servo.core
        self.channel = channel
-        # FIXME: this assumes the mem channel is right after the control
-        # channels
-        self.servo_channel = self.channel + 8 - self.servo.channel
+        # This assumes the mem channel is right after the control channels.
+        # Make sure this is always the case in eem.py
+        self.servo_channel = (self.channel + 4 * len(self.servo.cplds) -
+                              self.servo.channel)
+        self.dds = self.servo.ddses[self.servo_channel // 4]

    @kernel
    def set(self, en_out, en_iir=0, profile=0):

@@ -311,12 +305,8 @@ class Channel:
        see :meth:`dds_offset_to_mu`
        :param phase: DDS phase in turns
        """
-        if self.servo_channel < 4:
-            dds = self.servo.dds0
-        else:
-            dds = self.servo.dds1
-        ftw = dds.frequency_to_ftw(frequency)
-        pow_ = dds.turns_to_pow(phase)
+        ftw = self.dds.frequency_to_ftw(frequency)
+        pow_ = self.dds.turns_to_pow(phase)
        offs = self.dds_offset_to_mu(offset)
        self.set_dds_mu(profile, ftw, offs, pow_)
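The new `servo_channel` arithmetic generalizes the old hard-coded `+ 8` (two Urukuls with four DDS channels each) to any number of CPLDs. A plain-Python sketch with hypothetical names:

```python
def servo_channel(rtio_channel: int, servo_base_channel: int, n_cplds: int) -> int:
    # The servo mem channel sits right after the 4 * n_cplds control
    # channels, mirroring the Channel.__init__ computation above.
    return rtio_channel + 4 * n_cplds - servo_base_channel
```

With two CPLDs this reproduces the old `channel + 8 - servo.channel` behaviour; with one CPLD the offset shrinks to 4.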
@@ -54,6 +54,9 @@ CS_DDS_CH1 = 5
 CS_DDS_CH2 = 6
 CS_DDS_CH3 = 7

+# Default profile
+DEFAULT_PROFILE = 7
+

 @portable
 def urukul_cfg(rf_sw: int32, led: int32, profile: int32, io_update: int32, mask_nu: int32,

@@ -192,7 +195,7 @@ class CPLD:
            assert sync_div is None
            sync_div = 0

-        self.cfg_reg = urukul_cfg(rf_sw=rf_sw, led=0, profile=0,
+        self.cfg_reg = urukul_cfg(rf_sw=rf_sw, led=0, profile=DEFAULT_PROFILE,
                                  io_update=0, mask_nu=0, clk_sel=clk_sel,
                                  sync_sel=sync_sel,
                                  rst=0, io_rst=0, clk_div=clk_div)
@@ -146,16 +146,16 @@ class Creator(QtWidgets.QDialog):


 class Model(DictSyncTreeSepModel):
    def __init__(self, init):
-        DictSyncTreeSepModel.__init__(self, ".",
-                                      ["Dataset", "Persistent", "Value"],
-                                      init)
+        DictSyncTreeSepModel.__init__(
+            self, ".", ["Dataset", "Persistent", "Value"], init
+        )

    def convert(self, k, v, column):
        if column == 1:
-            return "Y" if v[0] else "N"
+            return "Y" if v["persist"] else "N"
        elif column == 2:
-            return short_format(v[1])
+            return short_format(v["value"])
        else:
            raise ValueError

@@ -223,8 +223,8 @@ class DatasetsDock(QtWidgets.QDockWidget):
        idx = self.table_model_filter.mapToSource(idx[0])
        key = self.table_model.index_to_key(idx)
        if key is not None:
-            persist, value = self.table_model.backing_store[key]
-            t = type(value)
+            dataset = self.table_model.backing_store[key]
+            t = type(dataset["value"])
            if np.issubdtype(t, np.number):
                dialog_cls = NumberEditor
            elif np.issubdtype(t, np.bool_):

@@ -235,7 +235,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
                logger.error("Cannot edit dataset %s: "
                             "type %s is not supported", key, t)
                return
-            dialog_cls(self, self.dataset_ctl, key, value).open()
+            dialog_cls(self, self.dataset_ctl, key, dataset["value"]).open()

    def delete_clicked(self):
        idx = self.table.selectedIndexes()
@@ -191,10 +191,8 @@ device_db = {
        "arguments": {
            "channel": 24,
            "pgia_device": "spi_sampler0_pgia",
-            "cpld0_device": "urukul0_cpld",
-            "cpld1_device": "urukul1_cpld",
-            "dds0_device": "urukul0_dds",
-            "dds1_device": "urukul1_dds"
+            "cpld_devices": ["urukul0_cpld", "urukul1_cpld"],
+            "dds_devices": ["urukul0_dds", "urukul1_dds"],
        }
    },
@@ -88,7 +88,9 @@ pub extern fn personality(version: c_int,
    let exception_info = &mut *(uw_exception as *mut ExceptionInfo);
    let exception = &exception_info.exception.unwrap();

-    let eh_action = match dwarf::find_eh_action(lsda, &eh_context) {
+    let name_ptr = exception.name.as_ptr();
+    let len = exception.name.len();
+    let eh_action = match dwarf::find_eh_action(lsda, &eh_context, name_ptr, len) {
        Ok(action) => action,
        Err(_) => return uw::_URC_FATAL_PHASE1_ERROR,
    };
@@ -137,11 +137,10 @@ pub fn send(linkno: u8, packet: &Packet) -> Result<(), Error<!>> {

        packet.write_to(&mut writer)?;

-        let padding = 4 - (writer.position() % 4);
-        if padding != 4 {
-            for _ in 0..padding {
-                writer.write_u8(0)?;
-            }
+        // Pad till offset 4, insert checksum there
+        let padding = (12 - (writer.position() % 8)) % 8;
+        for _ in 0..padding {
+            writer.write_u8(0)?;
        }

        let checksum = crc::crc32::checksum_ieee(&writer.get_ref()[0..writer.position()]);
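The new padding rule pads the aux packet so that the write position lands at offset 4 modulo 8, where the CRC32 checksum is then inserted. A quick sketch of the arithmetic (hypothetical function name):

```python
def aux_padding(position: int) -> int:
    # Number of zero bytes to append so that the resulting position is
    # congruent to 4 mod 8, mirroring the drtioaux send() hunk above.
    return (12 - (position % 8)) % 8
```

For any starting position, `position + aux_padding(position)` ends at offset 4 modulo 8, and positions already at that offset need no padding.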
@@ -12,6 +12,7 @@
 #![allow(unused)]

 use core::mem;
+use cslice::CSlice;

 pub const DW_EH_PE_omit: u8 = 0xFF;
 pub const DW_EH_PE_absptr: u8 = 0x00;

@@ -63,6 +64,10 @@ impl DwarfReader {
        result
    }

+    pub unsafe fn offset(&mut self, offset: isize) {
+        self.ptr = self.ptr.offset(offset);
+    }
+
    // ULEB128 and SLEB128 encodings are defined in Section 7.6 - "Variable
    // Length Data".
    pub unsafe fn read_uleb128(&mut self) -> u64 {

@@ -104,11 +109,20 @@ unsafe fn read_encoded_pointer(
    reader: &mut DwarfReader,
    context: &EHContext<'_>,
    encoding: u8,
+) -> Result<usize, ()> {
+    read_encoded_pointer_with_base(reader, encoding, get_base(encoding, context)?)
+}
+
+unsafe fn read_encoded_pointer_with_base(
+    reader: &mut DwarfReader,
+    encoding: u8,
+    base: usize,
 ) -> Result<usize, ()> {
    if encoding == DW_EH_PE_omit {
        return Err(());
    }

+    let original_ptr = reader.ptr;
    // DW_EH_PE_aligned implies it's an absolute pointer value
    if encoding == DW_EH_PE_aligned {
        reader.ptr = round_up(reader.ptr as usize, mem::size_of::<usize>())? as *const u8;

@@ -128,19 +142,10 @@ unsafe fn read_encoded_pointer(
        _ => return Err(()),
    };

-    result += match encoding & 0x70 {
-        DW_EH_PE_absptr => 0,
-        // relative to address of the encoded value, despite the name
-        DW_EH_PE_pcrel => reader.ptr as usize,
-        DW_EH_PE_funcrel => {
-            if context.func_start == 0 {
-                return Err(());
-            }
-            context.func_start
-        }
-        DW_EH_PE_textrel => (*context.get_text_start)(),
-        DW_EH_PE_datarel => (*context.get_data_start)(),
-        _ => return Err(()),
+    result += if (encoding & 0x70) == DW_EH_PE_pcrel {
+        original_ptr as usize
+    } else {
+        base
    };

    if encoding & DW_EH_PE_indirect != 0 {

@@ -150,6 +155,7 @@ unsafe fn read_encoded_pointer(
    Ok(result)
 }

+#[derive(Debug)]
 pub enum EHAction {
    None,
    Cleanup(usize),
@@ -159,7 +165,46 @@ pub enum EHAction {

 pub const USING_SJLJ_EXCEPTIONS: bool = cfg!(all(target_os = "ios", target_arch = "arm"));

-pub unsafe fn find_eh_action(lsda: *const u8, context: &EHContext<'_>) -> Result<EHAction, ()> {
+fn size_of_encoded_value(encoding: u8) -> usize {
+    if encoding == DW_EH_PE_omit {
+        0
+    } else {
+        let encoding = encoding & 0x07;
+        match encoding {
+            DW_EH_PE_absptr => core::mem::size_of::<*const ()>(),
+            DW_EH_PE_udata2 => 2,
+            DW_EH_PE_udata4 => 4,
+            DW_EH_PE_udata8 => 8,
+            _ => unreachable!(),
+        }
+    }
+}
+
+unsafe fn get_ttype_entry(
+    offset: usize,
+    encoding: u8,
+    ttype_base: usize,
+    ttype: *const u8,
+) -> Result<*const u8, ()> {
+    let i = (offset * size_of_encoded_value(encoding)) as isize;
+    read_encoded_pointer_with_base(
+        &mut DwarfReader::new(ttype.offset(-i)),
+        // the DW_EH_PE_pcrel is a hack.
+        // It seems that the default encoding is absolute, but we have to take relocation into
+        // account. Unsure if we can fix this in the compiler setting or if this would be affected
+        // by updating the compiler
+        encoding,
+        ttype_base,
+    )
+    .map(|v| v as *const u8)
+}
+
+pub unsafe fn find_eh_action(
+    lsda: *const u8,
+    context: &EHContext<'_>,
+    name: *const u8,
+    len: usize,
+) -> Result<EHAction, ()> {
    if lsda.is_null() {
        return Ok(EHAction::None);
    }

@@ -176,10 +221,14 @@ pub unsafe fn find_eh_action(
    };

    let ttype_encoding = reader.read::<u8>();
-    if ttype_encoding != DW_EH_PE_omit {
-        // Rust doesn't analyze exception types, so we don't care about the type table
-        reader.read_uleb128();
-    }
+    // we do care about the type table
+    let ttype_offset = if ttype_encoding != DW_EH_PE_omit {
+        reader.read_uleb128()
+    } else {
+        0
+    };
+    let ttype_base = get_base(ttype_encoding, context).unwrap_or(0);
+    let ttype_table = reader.ptr.offset(ttype_offset as isize);

    let call_site_encoding = reader.read::<u8>();
    let call_site_table_length = reader.read_uleb128();

@@ -198,11 +247,62 @@ pub unsafe fn find_eh_action(
            break;
        }
        if ip < func_start + cs_start + cs_len {
+            // https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/libsupc%2B%2B/eh_personality.cc#L528
+            let lpad = lpad_base + cs_lpad;
            if cs_lpad == 0 {
+                // no cleanups/handler
                return Ok(EHAction::None);
+            } else if cs_action == 0 {
+                return Ok(EHAction::Cleanup(lpad));
            } else {
-                let lpad = lpad_base + cs_lpad;
-                return Ok(interpret_cs_action(cs_action, lpad));
+                let mut saw_cleanup = false;
+                let mut action_record = action_table.offset(cs_action as isize - 1);
+                loop {
+                    let mut reader = DwarfReader::new(action_record);
+                    let ar_filter = reader.read_sleb128();
+                    action_record = reader.ptr;
+                    let ar_disp = reader.read_sleb128();
+                    if ar_filter == 0 {
+                        saw_cleanup = true;
+                    } else if ar_filter > 0 {
+                        let catch_type = get_ttype_entry(
+                            ar_filter as usize,
+                            ttype_encoding,
+                            ttype_base,
+                            ttype_table,
+                        )?;
+                        if (catch_type as *const CSlice<u8>).is_null() {
+                            return Ok(EHAction::Catch(lpad));
+                        }
+                        // this seems to be target dependent
+                        let clause_ptr = *(catch_type as *const CSlice<u8>);
+                        let clause_name_ptr = (clause_ptr).as_ptr();
+                        let clause_name_len = (clause_ptr).len();
+                        if clause_name_len == len {
+                            if (clause_name_ptr == core::ptr::null() ||
+                                clause_name_ptr == name ||
+                                // somehow their name pointers might differ, but the content is the
+                                // same
+                                core::slice::from_raw_parts(clause_name_ptr, clause_name_len) ==
+                                core::slice::from_raw_parts(name, len))
+                            {
+                                return Ok(EHAction::Catch(lpad));
+                            }
+                        }
+                    } else if ar_filter < 0 {
+                        // FIXME: how to handle this?
+                        break;
+                    }
+                    if ar_disp == 0 {
+                        break;
+                    }
+                    action_record = action_record.offset((ar_disp as usize) as isize);
+                }
+                if saw_cleanup {
+                    return Ok(EHAction::Cleanup(lpad));
+                } else {
+                    return Ok(EHAction::None);
+                }
            }
        }
    }
@@ -210,7 +310,7 @@ pub unsafe fn find_eh_action(
    // So rather than returning EHAction::Terminate, we do this.
    Ok(EHAction::None)
 } else {
-    // SjLj version:
+    // SjLj version: (not yet modified)
    // The "IP" is an index into the call-site table, with two exceptions:
    // -1 means 'no-action', and 0 means 'terminate'.
    match ip as isize {

@@ -246,5 +346,20 @@ fn interpret_cs_action(cs_action: u64, lpad: usize) -> EHAction {

 #[inline]
 fn round_up(unrounded: usize, align: usize) -> Result<usize, ()> {
-    if align.is_power_of_two() { Ok((unrounded + align - 1) & !(align - 1)) } else { Err(()) }
+    if align.is_power_of_two() {
+        Ok((unrounded + align - 1) & !(align - 1))
+    } else {
+        Err(())
+    }
 }
+
+fn get_base(encoding: u8, context: &EHContext<'_>) -> Result<usize, ()> {
+    match encoding & 0x70 {
+        DW_EH_PE_absptr | DW_EH_PE_pcrel | DW_EH_PE_aligned => Ok(0),
+        DW_EH_PE_textrel => Ok((*context.get_text_start)()),
+        DW_EH_PE_datarel => Ok((*context.get_data_start)()),
+        DW_EH_PE_funcrel if context.func_start != 0 => Ok(context.func_start),
+        _ => return Err(()),
+    }
+}
@@ -87,5 +87,5 @@ unsafe fn find_eh_action(context: *mut uw::_Unwind_Context) -> Result<EHAction, ()> {
        get_text_start: &|| uw::_Unwind_GetTextRelBase(context),
        get_data_start: &|| uw::_Unwind_GetDataRelBase(context),
    };
-    dwarf::find_eh_action(lsda, &eh_context)
+    dwarf::find_eh_action(lsda, &eh_context, core::ptr::null(), 0)
 }
@ -90,7 +90,9 @@ pub enum Reply<'a> {
|
||||||
LoadCompleted,
|
LoadCompleted,
|
||||||
LoadFailed(&'a str),
|
LoadFailed(&'a str),
|
||||||
|
|
||||||
KernelFinished,
|
KernelFinished {
|
||||||
|
async_errors: u8
|
||||||
|
},
|
||||||
KernelStartupFailed,
|
KernelStartupFailed,
|
||||||
KernelException {
|
KernelException {
|
||||||
name: &'a str,
|
name: &'a str,
|
||||||
|
@ -100,7 +102,8 @@ pub enum Reply<'a> {
|
||||||
line: u32,
|
line: u32,
|
||||||
column: u32,
|
column: u32,
|
||||||
function: &'a str,
|
function: &'a str,
|
||||||
backtrace: &'a [usize]
|
backtrace: &'a [usize],
|
||||||
|
async_errors: u8
|
||||||
},
|
},
|
||||||
|
|
||||||
RpcRequest { async: bool },
|
RpcRequest { async: bool },
|
||||||
@@ -160,14 +163,16 @@ impl<'a> Reply<'a>
                 writer.write_string(reason)?;
             },
 
-            Reply::KernelFinished => {
+            Reply::KernelFinished { async_errors } => {
                 writer.write_u8(7)?;
+                writer.write_u8(async_errors)?;
             },
             Reply::KernelStartupFailed => {
                 writer.write_u8(8)?;
             },
             Reply::KernelException {
-                name, message, param, file, line, column, function, backtrace
+                name, message, param, file, line, column, function, backtrace,
+                async_errors
             } => {
                 writer.write_u8(9)?;
                 writer.write_string(name)?;
@@ -183,6 +188,7 @@ impl<'a> Reply<'a>
                 for &addr in backtrace {
                     writer.write_u32(addr as u32)?
                 }
+                writer.write_u8(async_errors)?;
             },
 
             Reply::RpcRequest { async } => {
@@ -326,6 +326,14 @@ pub mod drtio
     pub fn reset(_io: &Io, _aux_mutex: &Mutex) {}
 }
 
+static mut SEEN_ASYNC_ERRORS: u8 = 0;
+
+pub unsafe fn get_async_errors() -> u8 {
+    let errors = SEEN_ASYNC_ERRORS;
+    SEEN_ASYNC_ERRORS = 0;
+    errors
+}
+
 fn async_error_thread(io: Io) {
     loop {
         unsafe {
@@ -343,6 +351,7 @@ fn async_error_thread(io: Io)
                 error!("RTIO sequence error involving channel {}",
                        csr::rtio_core::sequence_error_channel_read());
             }
+            SEEN_ASYNC_ERRORS = errors;
             csr::rtio_core::async_error_write(errors);
         }
     }
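The `get_async_errors()` addition above is a read-and-clear accumulator: the background thread ORs error bits into `SEEN_ASYNC_ERRORS`, and each reader atomically takes and resets them so every error is reported exactly once. The firmware relies on its runtime's scheduling for safety; a hypothetical Python sketch of the same pattern, using an explicit lock instead:

```python
import threading

class AsyncErrorFlags:
    """Read-and-clear error flag accumulator (illustrative sketch,
    not the firmware code, which uses a static u8 instead)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._flags = 0

    def record(self, bits: int) -> None:
        # Called from the error-watching thread: OR new error bits in.
        with self._lock:
            self._flags |= bits

    def take(self) -> int:
        # Called when a kernel finishes: return accumulated bits and
        # reset them, so each error is delivered to the host only once.
        with self._lock:
            flags, self._flags = self._flags, 0
            return flags
```

After `take()` returns the pending bits, a second call returns 0 until new errors are recorded.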
@@ -9,6 +9,7 @@ use urc::Urc;
 use sched::{ThreadHandle, Io, Mutex, TcpListener, TcpStream, Error as SchedError};
 use rtio_clocking;
 use rtio_dma::Manager as DmaManager;
+use rtio_mgt::get_async_errors;
 use cache::Cache;
 use kern_hwreq;
 use board_artiq::drtio_routing;
@@ -431,7 +432,9 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
             match stream {
                 None => return Ok(true),
                 Some(ref mut stream) =>
-                    host_write(stream, host::Reply::KernelFinished).map_err(|e| e.into())
+                    host_write(stream, host::Reply::KernelFinished {
+                        async_errors: unsafe { get_async_errors() }
+                    }).map_err(|e| e.into())
             }
         }
         &kern::RunException {
@@ -458,7 +461,8 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
                         line: line,
                         column: column,
                         function: function,
-                        backtrace: backtrace
+                        backtrace: backtrace,
+                        async_errors: unsafe { get_async_errors() }
                     }).map_err(|e| e.into())
                 }
             }
@@ -447,6 +447,19 @@ const SI5324_SETTINGS: si5324::FrequencySettings
     crystal_ref: true
 };
 
+#[cfg(all(has_si5324, rtio_frequency = "100.0"))]
+const SI5324_SETTINGS: si5324::FrequencySettings
+    = si5324::FrequencySettings {
+    n1_hs  : 5,
+    nc1_ls : 10,
+    n2_hs  : 10,
+    n2_ls  : 250,
+    n31    : 50,
+    n32    : 50,
+    bwsel  : 4,
+    crystal_ref: true
+};
+
 #[no_mangle]
 pub extern fn main() -> i32 {
     extern {
@@ -294,6 +294,18 @@ class PeripheralManager:
             name=mirny_name,
             refclk=peripheral["refclk"],
             clk_sel=clk_sel)
+        almazny = peripheral.get("almazny", False)
+        if almazny:
+            self.gen("""
+                device_db["{name}_almazny"] = {{
+                    "type": "local",
+                    "module": "artiq.coredevice.mirny",
+                    "class": "Almazny",
+                    "arguments": {{
+                        "host_mirny": "{name}_cpld",
+                    }},
+                }}""",
+                name=mirny_name)
 
         return next(channel)
 
@@ -364,8 +376,7 @@ class PeripheralManager:
     def process_suservo(self, rtio_offset, peripheral):
         suservo_name = self.get_name("suservo")
         sampler_name = self.get_name("sampler")
-        urukul0_name = self.get_name("urukul")
-        urukul1_name = self.get_name("urukul")
+        urukul_names = [self.get_name("urukul") for _ in range(2)]
         channel = count(0)
         for i in range(8):
             self.gen("""
@@ -386,16 +397,14 @@ class PeripheralManager:
             "arguments": {{
                 "channel": 0x{suservo_channel:06x},
                 "pgia_device": "spi_{sampler_name}_pgia",
-                "cpld0_device": "{urukul0_name}_cpld",
-                "cpld1_device": "{urukul1_name}_cpld",
-                "dds0_device": "{urukul0_name}_dds",
-                "dds1_device": "{urukul1_name}_dds"
+                "cpld_devices": {cpld_names_list},
+                "dds_devices": {dds_names_list}
             }}
         }}""",
             suservo_name=suservo_name,
             sampler_name=sampler_name,
-            urukul0_name=urukul0_name,
-            urukul1_name=urukul1_name,
+            cpld_names_list=[urukul_name + "_cpld" for urukul_name in urukul_names],
+            dds_names_list=[urukul_name + "_dds" for urukul_name in urukul_names],
             suservo_channel=rtio_offset+next(channel))
         self.gen("""
         device_db["spi_{sampler_name}_pgia"] = {{
@@ -407,7 +416,7 @@ class PeripheralManager:
             sampler_name=sampler_name,
             sampler_channel=rtio_offset+next(channel))
         pll_vco = peripheral.get("pll_vco")
-        for urukul_name in (urukul0_name, urukul1_name):
+        for urukul_name in urukul_names:
             self.gen("""
         device_db["spi_{urukul_name}"] = {{
             "type": "local",
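With the template change above, a generated SUServo device database entry now lists its Urukul CPLDs and DDSes as `cpld_devices`/`dds_devices` lists rather than fixed `cpld0_device`/`dds0_device`-style keys, and consumers iterate over them. A sketch of the resulting shape (device names hypothetical, channel number illustrative):

```python
# Hypothetical generated entry for a SUServo spanning two Urukuls.
device_db = {
    "suservo0": {
        "type": "local",
        "module": "artiq.coredevice.suservo",
        "class": "SUServo",
        "arguments": {
            "channel": 0x000008,
            "pgia_device": "spi_sampler0_pgia",
            "cpld_devices": ["urukul0_cpld", "urukul1_cpld"],
            "dds_devices": ["urukul0_dds", "urukul1_dds"],
        },
    },
}

# Consumers (e.g. the hardware tester below) can now simply iterate:
args = device_db["suservo0"]["arguments"]
covered_cplds = list(args["cpld_devices"])
covered_ddses = list(args["dds_devices"])
```

This is why the corresponding `artiq_sinara_tester` hunk replaces the paired `cpld0`/`cpld1` deletions with two plain `for` loops.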
@@ -362,7 +362,10 @@ def main():
             variants.remove("rtm")
         except ValueError:
             pass
-    if len(variants) == 0:
+    if all(action in ["rtm_gateware", "storage", "rtm_load", "erase", "start"]
+           for action in args.action) and args.action:
+        pass
+    elif len(variants) == 0:
         raise FileNotFoundError("no variants found, did you install a board binary package?")
     elif len(variants) == 1:
         variant = variants[0]
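The new guard above lets RTM-only action sets proceed without resolving a variant. Note the trailing `and args.action`: `all()` over an empty iterable is vacuously `True`, so without it an empty action list would also skip variant resolution. A small standalone model of the condition (helper name hypothetical):

```python
RTM_ONLY_ACTIONS = ["rtm_gateware", "storage", "rtm_load", "erase", "start"]

def needs_variant(actions):
    # all() over an empty list is True, so the explicit "and actions"
    # check keeps an empty action list from being treated as RTM-only.
    rtm_only = all(a in RTM_ONLY_ACTIONS for a in actions) and actions
    return not rtm_only
```

So `needs_variant(["erase", "start"])` is false (no variant needed), while both `needs_variant(["gateware"])` and `needs_variant([])` are true.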
@@ -3,7 +3,6 @@
 import asyncio
 import argparse
 import atexit
-import os
 import logging
 
 from sipyco.pc_rpc import Server as RPCServer
@@ -75,11 +74,7 @@ class MasterConfig:
 def main():
     args = get_argparser().parse_args()
     log_forwarder = init_log(args)
-    if os.name == "nt":
-        loop = asyncio.ProactorEventLoop()
-        asyncio.set_event_loop(loop)
-    else:
-        loop = asyncio.get_event_loop()
+    loop = asyncio.get_event_loop()
     atexit.register(loop.close)
     bind = common_args.bind_address_from_args(args)
 
@@ -59,6 +59,7 @@ class SinaraTester(EnvExperiment):
         self.mirnies = dict()
         self.suservos = dict()
         self.suschannels = dict()
+        self.almaznys = dict()
 
         ddb = self.get_device_db()
         for name, desc in ddb.items():
@@ -96,6 +97,8 @@ class SinaraTester(EnvExperiment):
                 self.suservos[name] = self.get_device(name)
             elif (module, cls) == ("artiq.coredevice.suservo", "Channel"):
                 self.suschannels[name] = self.get_device(name)
+            elif (module, cls) == ("artiq.coredevice.mirny", "Almazny"):
+                self.almaznys[name] = self.get_device(name)
 
         # Remove Urukul, Sampler, Zotino and Mirny control signals
         # from TTL outs (tested separately) and remove Urukuls covered by
@@ -115,11 +118,10 @@ class SinaraTester(EnvExperiment):
                 del self.ttl_outs[io_update_device]
             # check for suservos and delete respective urukuls
             elif (module, cls) == ("artiq.coredevice.suservo", "SUServo"):
-                del self.urukuls[desc["arguments"]["dds0_device"]]
-                del self.urukul_cplds[desc["arguments"]["cpld0_device"]]
-                if "dds1_device" in desc["arguments"]:
-                    del self.urukuls[desc["arguments"]["dds1_device"]]
-                    del self.urukul_cplds[desc["arguments"]["cpld1_device"]]
+                for cpld in desc["arguments"]["cpld_devices"]:
+                    del self.urukul_cplds[cpld]
+                for dds in desc["arguments"]["dds_devices"]:
+                    del self.urukuls[dds]
             elif (module, cls) == ("artiq.coredevice.sampler", "Sampler"):
                 cnv_device = desc["arguments"]["cnv_device"]
                 del self.ttl_outs[cnv_device]
@@ -229,6 +231,17 @@ class SinaraTester(EnvExperiment):
         self.core.break_realtime()
         cpld.init()
 
+    @kernel
+    def test_urukul_att(self, cpld):
+        self.core.break_realtime()
+        for i in range(32):
+            test_word = 1 << i
+            cpld.set_all_att_mu(test_word)
+            readback_word = cpld.get_att_mu()
+            if readback_word != test_word:
+                print(readback_word, test_word)
+                raise ValueError
+
     @kernel
     def calibrate_urukul(self, channel):
         self.core.break_realtime()
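The `test_urukul_att` kernel added above is a walking-ones test: it writes each single-bit pattern across the 32-bit attenuator shift register and verifies the readback, which catches stuck, swapped, or shorted lines on the digital control path. A host-side sketch of the same pattern against a fake register (helper names hypothetical):

```python
def walking_ones(write, read, width=32):
    # Write each single-set-bit word and check it reads back intact.
    # A stuck-at-0 bit, stuck-at-1 bit, or swapped pair of lines will
    # each corrupt at least one of the patterns.
    for i in range(width):
        test_word = 1 << i
        write(test_word)
        readback = read()
        if readback != test_word:
            raise ValueError(
                "wrote {:#x}, read {:#x}".format(test_word, readback))

# Usage against a trivial in-memory "register":
reg = [0]
walking_ones(lambda w: reg.__setitem__(0, w), lambda: reg[0])
```

A faulty register (e.g. one that always reads back zero) makes the sweep raise on the first pattern.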
@@ -268,11 +281,12 @@ class SinaraTester(EnvExperiment):
     def test_urukuls(self):
         print("*** Testing Urukul DDSes.")
 
-        print("Initializing CPLDs...")
         for name, cpld in sorted(self.urukul_cplds.items(), key=lambda x: x[0]):
-            print(name + "...")
+            print(name + ": initializing CPLD...")
             self.init_urukul(cpld)
-        print("...done")
+            print(name + ": testing attenuator digital control...")
+            self.test_urukul_att(cpld)
+            print(name + ": done")
 
         print("Calibrating inter-device synchronization...")
         for channel_name, channel_dev in self.urukuls:
@@ -340,6 +354,68 @@ class SinaraTester(EnvExperiment):
             for channel in channels:
                 channel.pulse(100*ms)
                 delay(100*ms)
+    @kernel
+    def init_almazny(self, almazny):
+        self.core.break_realtime()
+        almazny.init()
+        almazny.output_toggle(True)
+
+    @kernel
+    def almazny_set_attenuators_mu(self, almazny, ch, atts):
+        self.core.break_realtime()
+        almazny.set_att_mu(ch, atts)
+
+    @kernel
+    def almazny_set_attenuators(self, almazny, ch, atts):
+        self.core.break_realtime()
+        almazny.set_att(ch, atts)
+
+    @kernel
+    def almazny_toggle_output(self, almazny, rf_on):
+        self.core.break_realtime()
+        almazny.output_toggle(rf_on)
+
+    def test_almaznys(self):
+        print("*** Testing Almaznys.")
+        for name, almazny in sorted(self.almaznys.items(), key=lambda x: x[0]):
+            print(name + "...")
+            print("Initializing Mirny CPLDs...")
+            for name, cpld in sorted(self.mirny_cplds.items(), key=lambda x: x[0]):
+                print(name + "...")
+                self.init_mirny(cpld)
+            print("...done")
+
+            print("Testing attenuators. Frequencies:")
+            for card_n, channels in enumerate(chunker(self.mirnies, 4)):
+                for channel_n, (channel_name, channel_dev) in enumerate(channels):
+                    frequency = 2000 + card_n * 250 + channel_n * 50
+                    print("{}\t{}MHz".format(channel_name, frequency*2))
+                    self.setup_mirny(channel_dev, frequency)
+                    print("{} info: {}".format(channel_name, channel_dev.info()))
+            self.init_almazny(almazny)
+            print("RF ON, all attenuators ON. Press ENTER when done.")
+            for i in range(4):
+                self.almazny_set_attenuators_mu(almazny, i, 63)
+            input()
+            print("RF ON, half power attenuators ON. Press ENTER when done.")
+            for i in range(4):
+                self.almazny_set_attenuators(almazny, i, 15.5)
+            input()
+            print("RF ON, all attenuators OFF. Press ENTER when done.")
+            for i in range(4):
+                self.almazny_set_attenuators(almazny, i, 0)
+            input()
+            print("SR outputs are OFF. Press ENTER when done.")
+            self.almazny_toggle_output(almazny, False)
+            input()
+            print("RF ON, all attenuators are ON. Press ENTER when done.")
+            for i in range(4):
+                self.almazny_set_attenuators(almazny, i, 31.5)
+            self.almazny_toggle_output(almazny, True)
+            input()
+            print("RF OFF. Press ENTER when done.")
+            self.almazny_toggle_output(almazny, False)
+            input()
 
     def test_mirnies(self):
         print("*** Testing Mirny PLLs.")
@@ -354,7 +430,7 @@ class SinaraTester(EnvExperiment):
         print("Frequencies:")
         for card_n, channels in enumerate(chunker(self.mirnies, 4)):
             for channel_n, (channel_name, channel_dev) in enumerate(channels):
-                frequency = 1000*(card_n + 1) + channel_n * 100 + 8  # Extra 8 Hz for easier observation
+                frequency = 1000*(card_n + 1) + channel_n * 100
                 print("{}\t{}MHz".format(channel_name, frequency))
                 self.setup_mirny(channel_dev, frequency)
                 print("{} info: {}".format(channel_name, channel_dev.info()))
@@ -585,8 +661,8 @@ class SinaraTester(EnvExperiment):
             delay(10*us)
             # DDS attenuator 10dB
             for i in range(4):
-                channel.cpld0.set_att(i, 10.)
-                channel.cpld1.set_att(i, 10.)
+                for cpld in channel.cplds:
+                    cpld.set_att(i, 10.)
             delay(1*us)
             # Servo is done and disabled
             assert channel.get_status() & 0xff == 2
@@ -4,7 +4,7 @@ from migen.genlib.cdc import MultiReg, PulseSynchronizer
 from misoc.interconnect.csr import *
 
 
-# This code assumes 125/62.5MHz reference clock and 125MHz or 150MHz RTIO
+# This code assumes 125/62.5MHz reference clock and 100MHz, 125MHz or 150MHz RTIO
 # frequency.
 
 class SiPhaser7Series(Module, AutoCSR):
@@ -15,9 +15,9 @@ class SiPhaser7Series(Module, AutoCSR):
         self.phase_shift_done = CSRStatus(reset=1)
         self.error = CSR()
 
-        assert rtio_clk_freq in (125e6, 150e6)
+        assert rtio_clk_freq in (100e6, 125e6, 150e6)
 
-        # 125MHz/62.5MHz reference clock to 125MHz/150MHz. VCO @ 750MHz.
+        # 125MHz/62.5MHz reference clock to 100MHz/125MHz/150MHz. VCO @ 750MHz.
         # Used to provide a startup clock to the transceiver through the Si,
         # we do not use the crystal reference so that the PFD (f3) frequency
         # can be high.
@@ -43,8 +43,8 @@ class SiPhaser7Series(Module, AutoCSR):
         else:
             mmcm_freerun_output = mmcm_freerun_output_raw
 
-        # 125MHz/150MHz to 125MHz/150MHz with controllable phase shift,
-        # VCO @ 1000MHz/1200MHz.
+        # 100MHz/125MHz/150MHz to 100MHz/125MHz/150MHz with controllable phase shift,
+        # VCO @ 800MHz/1000MHz/1200MHz.
         # Inserted between CDR and output to Si, used to correct
         # non-determinstic skew of Si5324.
         mmcm_ps_fb = Signal()
@@ -0,0 +1,10 @@
+"""Gateware implementation of the Sampler-Urukul (AD9910) DDS amplitude servo.
+
+General conventions:
+
+ - ``t_...`` signals and constants refer to time spans measured in the gateware
+   module's default clock (typically a 125 MHz RTIO clock).
+ - ``start`` signals cause modules to proceed with the next servo iteration iff
+   they are currently idle (i.e. their value is irrelevant while the module is
+   busy, so they are not necessarily one-clock-period strobes).
+"""
@@ -2,6 +2,8 @@ import logging
 
 from migen import *
 
+from artiq.coredevice.urukul import DEFAULT_PROFILE
+
 from . import spi
 
 
@@ -26,7 +28,8 @@ class DDS(spi.SPISimple):
 
         self.profile = [Signal(32 + 16 + 16, reset_less=True)
                         for i in range(params.channels)]
-        cmd = Signal(8, reset=0x0e)  # write to single tone profile 0
+        # write to single tone default profile
+        cmd = Signal(8, reset=0x0e + DEFAULT_PROFILE)
         assert params.width == len(cmd) + len(self.profile[0])
 
         self.sync += [
@@ -1,9 +1,7 @@
 from collections import namedtuple
 import logging
 
 from migen import *
-
-
 logger = logging.getLogger(__name__)
 
 
@@ -222,31 +220,30 @@ class IIR(Module):
         assert w.word <= w.coeff  # same memory
         assert w.state + w.coeff + 3 <= w.accu
 
-        # m_coeff of active profiles should only be accessed during
+        # m_coeff of active profiles should only be accessed externally during
         # ~processing
         self.specials.m_coeff = Memory(
                 width=2*w.coeff,  # Cat(pow/ftw/offset, cfg/a/b)
                 depth=4 << w.profile + w.channel)
-        # m_state[x] should only be read during ~(shifting |
-        # loading)
-        # m_state[y] of active profiles should only be read during
+        # m_state[x] should only be read externally during ~(shifting | loading)
+        # m_state[y] of active profiles should only be read externally during
         # ~processing
         self.specials.m_state = Memory(
                 width=w.state,  # y1,x0,x1
                 depth=(1 << w.profile + w.channel) + (2 << w.channel))
         # ctrl should only be updated synchronously
         self.ctrl = [Record([
                 ("profile", w.profile),
                 ("en_out", 1),
                 ("en_iir", 1),
                 ("clip", 1),
                 ("stb", 1)])
                 for i in range(1 << w.channel)]
         # only update during ~loading
         self.adc = [Signal((w.adc, True), reset_less=True)
                 for i in range(1 << w.channel)]
         # Cat(ftw0, ftw1, pow, asf)
-        # only read during ~processing
+        # only read externally during ~processing
         self.dds = [Signal(4*w.word, reset_less=True)
                 for i in range(1 << w.channel)]
         # perform one IIR iteration, start with loading,
@@ -270,100 +267,116 @@ class IIR(Module):
         en_iirs = Array([ch.en_iir for ch in self.ctrl])
         clips = Array([ch.clip for ch in self.ctrl])
 
-        # state counter
-        state = Signal(w.channel + 2)
-        # pipeline group activity flags (SR)
-        stage = Signal(3)
+        # Main state machine sequencing the steps of each servo iteration. The
+        # module IDLEs until self.start is asserted, and then runs through LOAD,
+        # PROCESS and SHIFT in order (see description of corresponding flags
+        # above). The steps share the same memory ports, and are executed
+        # strictly sequentially.
+        #
+        # LOAD/SHIFT just read/write one address per cycle; the duration needed
+        # to iterate over all channels is determined by counting cycles.
+        #
+        # The PROCESSing step is split across a three-stage pipeline, where each
+        # stage has up to four clock cycles latency. We feed the first stage
+        # using the (MSBs of) t_current_step, and, after all channels have been
+        # covered, proceed once the pipeline has completely drained.
         self.submodules.fsm = fsm = FSM("IDLE")
-        state_clr = Signal()
-        stage_en = Signal()
+        t_current_step = Signal(w.channel + 2)
+        t_current_step_clr = Signal()
+
+        # pipeline group activity flags (SR)
+        #  0: load from memory
+        #  1: compute
+        #  2: write to output registers (DDS profiles, clip flags)
+        stages_active = Signal(3)
         fsm.act("IDLE",
                 self.done.eq(1),
-                state_clr.eq(1),
+                t_current_step_clr.eq(1),
                 If(self.start,
                     NextState("LOAD")
                 )
         )
         fsm.act("LOAD",
                 self.loading.eq(1),
-                If(state == (1 << w.channel) - 1,
-                    state_clr.eq(1),
-                    stage_en.eq(1),
+                If(t_current_step == (1 << w.channel) - 1,
+                    t_current_step_clr.eq(1),
+                    NextValue(stages_active[0], 1),
                     NextState("PROCESS")
                 )
         )
         fsm.act("PROCESS",
                 self.processing.eq(1),
                 # this is technically wasting three cycles
-                # (one for setting stage, and phase=2,3 with stage[2])
-                If(stage == 0,
-                    state_clr.eq(1),
-                    NextState("SHIFT")
+                # (one for setting stages_active, and phase=2,3 with stages_active[2])
+                If(stages_active == 0,
+                    t_current_step_clr.eq(1),
+                    NextState("SHIFT"),
                 )
         )
         fsm.act("SHIFT",
                 self.shifting.eq(1),
-                If(state == (2 << w.channel) - 1,
+                If(t_current_step == (2 << w.channel) - 1,
                     NextState("IDLE")
                 )
         )
 
         self.sync += [
-                state.eq(state + 1),
-                If(state_clr,
-                    state.eq(0),
-                ),
-                If(stage_en,
-                    stage[0].eq(1)
+                If(t_current_step_clr,
+                    t_current_step.eq(0)
+                ).Else(
+                    t_current_step.eq(t_current_step + 1)
                 )
         ]
 
-        # pipeline group channel pointer
+        # global pipeline phase (lower two bits of t_current_step)
+        pipeline_phase = Signal(2, reset_less=True)
+        # pipeline group channel pointer (SR)
         # for each pipeline stage, this is the channel currently being
         # processed
         channel = [Signal(w.channel, reset_less=True) for i in range(3)]
+        self.comb += Cat(pipeline_phase, channel[0]).eq(t_current_step)
+        self.sync += [
+                If(pipeline_phase == 3,
+                    Cat(channel[1:]).eq(Cat(channel[:-1])),
+                    stages_active[1:].eq(stages_active[:-1]),
+                    If(channel[0] == (1 << w.channel) - 1,
+                        stages_active[0].eq(0)
+                    )
+                )
+        ]
+
         # pipeline group profile pointer (SR)
         # for each pipeline stage, this is the profile currently being
         # processed
         profile = [Signal(w.profile, reset_less=True) for i in range(2)]
-        # pipeline phase (lower two bits of state)
-        phase = Signal(2, reset_less=True)
-
-        self.comb += Cat(phase, channel[0]).eq(state)
         self.sync += [
-                Case(phase, {
-                    0: [
-                        profile[0].eq(profiles[channel[0]]),
-                        profile[1].eq(profile[0])
-                    ],
-                    3: [
-                        Cat(channel[1:]).eq(Cat(channel[:-1])),
-                        stage[1:].eq(stage[:-1]),
-                        If(channel[0] == (1 << w.channel) - 1,
-                            stage[0].eq(0)
-                        )
-                    ]
-                })
+                If(pipeline_phase == 0,
+                    profile[0].eq(profiles[channel[0]]),
+                    profile[1].eq(profile[0]),
+                )
         ]
 
         m_coeff = self.m_coeff.get_port()
         m_state = self.m_state.get_port(write_capable=True)  # mode=READ_FIRST
         self.specials += m_state, m_coeff
 
+        #
+        # Hook up main IIR filter.
+        #
+
         dsp = DSP(w)
         self.submodules += dsp
 
         offset_clr = Signal()
 
         self.comb += [
-                m_coeff.adr.eq(Cat(phase, profile[0],
-                    Mux(phase==0, channel[1], channel[0]))),
+                m_coeff.adr.eq(Cat(pipeline_phase, profile[0],
+                    Mux(pipeline_phase == 0, channel[1], channel[0]))),
                 dsp.offset[-w.coeff - 1:].eq(Mux(offset_clr, 0,
                     Cat(m_coeff.dat_r[:w.coeff], m_coeff.dat_r[w.coeff - 1])
                 )),
                 dsp.coeff.eq(m_coeff.dat_r[w.coeff:]),
                 dsp.state.eq(m_state.dat_r),
-                Case(phase, {
+                Case(pipeline_phase, {
                     0: dsp.accu_clr.eq(1),
                     2: [
                         offset_clr.eq(1),
@@ -373,6 +386,11 @@
                 })
         ]
 
+        #
+        # Arbitrate state memory access between steps.
+        #
+
         # selected adc and profile delay (combinatorial from dat_r)
         # both share the same coeff word (sel in the lower 8 bits)
         sel_profile = Signal(w.channel)
@@ -389,13 +407,13 @@
                 sel_profile.eq(m_coeff.dat_r[w.coeff:]),
                 dly_profile.eq(m_coeff.dat_r[w.coeff + 8:]),
                 If(self.shifting,
-                    m_state.adr.eq(state | (1 << w.profile + w.channel)),
+                    m_state.adr.eq(t_current_step | (1 << w.profile + w.channel)),
                     m_state.dat_w.eq(m_state.dat_r),
-                    m_state.we.eq(state[0])
+                    m_state.we.eq(t_current_step[0])
                 ),
                 If(self.loading,
-                    m_state.adr.eq((state << 1) | (1 << w.profile + w.channel)),
-                    m_state.dat_w[-w.adc - 1:-1].eq(Array(self.adc)[state]),
+                    m_state.adr.eq((t_current_step << 1) | (1 << w.profile + w.channel)),
+                    m_state.dat_w[-w.adc - 1:-1].eq(Array(self.adc)[t_current_step]),
                     m_state.dat_w[-1].eq(m_state.dat_w[-2]),
                     m_state.we.eq(1)
                 ),
@@ -405,16 +423,20 @@
                     Cat(profile[1], channel[2]),
                     # read old y
                     Cat(profile[0], channel[0]),
-                    # x0 (recent)
+                    # read x0 (recent)
                     0 | (sel_profile << 1) | (1 << w.profile + w.channel),
-                    # x1 (old)
+                    # read x1 (old)
                     1 | (sel << 1) | (1 << w.profile + w.channel),
-                ])[phase]),
+                ])[pipeline_phase]),
                 m_state.dat_w.eq(dsp.output),
-                m_state.we.eq((phase == 0) & stage[2] & en[1]),
+                m_state.we.eq((pipeline_phase == 0) & stages_active[2] & en[1]),
                 )
         ]
 
+        #
+        # Compute auxiliary signals (delayed servo enable, clip indicators, etc.).
+        #
+
         # internal channel delay counters
         dlys = Array([Signal(w.dly)
                 for i in range(1 << w.channel)])
@@ -434,51 +456,65 @@
@ -434,51 +456,65 @@ class IIR(Module):
|
||||||
en_out = Signal(reset_less=True)
|
en_out = Signal(reset_less=True)
|
||||||
# latched channel en_iir
|
# latched channel en_iir
|
||||||
en_iir = Signal(reset_less=True)
|
en_iir = Signal(reset_less=True)
|
||||||
|
|
||||||
|
self.sync += [
|
||||||
|
Case(pipeline_phase, {
|
||||||
|
0: [
|
||||||
|
dly.eq(dlys[channel[0]]),
|
||||||
|
en_out.eq(en_outs[channel[0]]),
|
||||||
|
en_iir.eq(en_iirs[channel[0]]),
|
||||||
|
If(stages_active[2] & en[1] & dsp.clip,
|
||||||
|
clips[channel[2]].eq(1)
|
||||||
|
)
|
||||||
|
],
|
||||||
|
2: [
|
||||||
|
en[0].eq(0),
|
||||||
|
en[1].eq(en[0]),
|
||||||
|
sel.eq(sel_profile),
|
||||||
|
If(stages_active[0] & en_out,
|
||||||
|
If(dly != dly_profile,
|
||||||
|
dlys[channel[0]].eq(dly + 1)
|
||||||
|
).Elif(en_iir,
|
||||||
|
en[0].eq(1)
|
||||||
|
)
|
||||||
|
)
|
||||||
|
],
|
||||||
|
}),
|
||||||
|
]
|
||||||
|
|
||||||
|
#
|
||||||
|
# Update DDS profile with FTW/POW/ASF
|
||||||
|
# Stage 0 loads the POW, stage 1 the FTW, and stage 2 writes
|
||||||
|
# the ASF computed by the IIR filter.
|
||||||
|
#
|
||||||
|
|
||||||
# muxing
|
# muxing
|
||||||
ddss = Array(self.dds)
|
ddss = Array(self.dds)
|
||||||
|
|
||||||
self.sync += [
|
self.sync += [
|
||||||
Case(phase, {
|
Case(pipeline_phase, {
|
||||||
0: [
|
0: [
|
||||||
dly.eq(dlys[channel[0]]),
|
If(stages_active[1],
|
||||||
en_out.eq(en_outs[channel[0]]),
|
ddss[channel[1]][:w.word].eq(m_coeff.dat_r), # ftw0
|
||||||
en_iir.eq(en_iirs[channel[0]]),
|
),
|
||||||
If(stage[1],
|
],
|
||||||
ddss[channel[1]][:w.word].eq(m_coeff.dat_r)
|
1: [
|
||||||
),
|
If(stages_active[1],
|
||||||
If(stage[2] & en[1] & dsp.clip,
|
ddss[channel[1]][w.word:2 * w.word].eq(m_coeff.dat_r), # ftw1
|
||||||
clips[channel[2]].eq(1)
|
),
|
||||||
)
|
If(stages_active[2],
|
||||||
],
|
ddss[channel[2]][3*w.word:].eq( # asf
|
||||||
1: [
|
m_state.dat_r[w.state - w.asf - 1:w.state - 1])
|
||||||
If(stage[1],
|
)
|
||||||
ddss[channel[1]][w.word:2*w.word].eq(
|
],
|
||||||
m_coeff.dat_r),
|
2: [
|
||||||
),
|
If(stages_active[0],
|
||||||
If(stage[2],
|
ddss[channel[0]][2*w.word:3*w.word].eq(m_coeff.dat_r), # pow
|
||||||
ddss[channel[2]][3*w.word:].eq(
|
),
|
||||||
m_state.dat_r[w.state - w.asf - 1:w.state - 1])
|
],
|
||||||
)
|
3: [
|
||||||
],
|
],
|
||||||
2: [
|
}),
|
||||||
en[0].eq(0),
|
|
||||||
en[1].eq(en[0]),
|
|
||||||
sel.eq(sel_profile),
|
|
||||||
If(stage[0],
|
|
||||||
ddss[channel[0]][2*w.word:3*w.word].eq(
|
|
||||||
m_coeff.dat_r),
|
|
||||||
If(en_out,
|
|
||||||
If(dly != dly_profile,
|
|
||||||
dlys[channel[0]].eq(dly + 1)
|
|
||||||
).Elif(en_iir,
|
|
||||||
en[0].eq(1)
|
|
||||||
)
|
|
||||||
)
|
|
||||||
)
|
|
||||||
],
|
|
||||||
3: [
|
|
||||||
],
|
|
||||||
}),
|
|
||||||
]
|
]
|
||||||
|
|
||||||
def _coeff(self, channel, profile, coeff):
|
def _coeff(self, channel, profile, coeff):
|
||||||
|
|
|
@@ -5,32 +5,76 @@ from .iir import IIR, IIRWidths
 from .dds_ser import DDS, DDSParams


+def predict_timing(adc_p, iir_p, dds_p):
+    """
+    The following is a sketch of the timing for 1 Sampler (8 ADCs) and N Urukuls
+    Shown here, the cycle duration is limited by the IIR loading+processing time.
+
+    ADC|CONVH|CONV|READ|RTT|IDLE|CONVH|CONV|READ|RTT|IDLE|CONVH|CONV|READ|RTT|...
+       |4    |57  |16  |8  | .. |4    |57  |16  |8  | .. |4    |57  |16  |8  |...
+    ---+-------------------+------------------------+------------------------+---
+    IIR|                   |LOAD|PROC         |SHIFT|LOAD|PROC         |SHIFT|...
+       |                   |8   |16*N+9       |16   |8   |16*N+9       |16   |...
+    ---+--------------------------------------+------------------------+---------
+    DDS|                                      |CMD|PROF|WAIT|IO_UP|IDLE|CMD|PR...
+       |                                      |16 |128 |1   |1    | .. |16 | ...
+
+    IIR loading starts once the ADC presents its data, the DDSes are updated
+    once the IIR processing is over. These are the only blocking processes.
+    IIR shifting happens in parallel to writing to the DDSes and ADC conversions
+    take place while the IIR filter is processing or the DDSes are being
+    written to, depending on the cycle duration (given by whichever module
+    takes the longest).
+    """
+    t_adc = (adc_p.t_cnvh + adc_p.t_conv + adc_p.t_rtt +
+             adc_p.channels*adc_p.width//adc_p.lanes) + 1
+    # load adc_p.channels values, process dds_p.channels
+    # (4 processing phases and 2 additional stages à 4 phases
+    # to complete the processing of the last channel)
+    t_iir = adc_p.channels + 4*dds_p.channels + 8 + 1
+    t_dds = (dds_p.width*2 + 1)*dds_p.clk + 1
+    t_cycle = max(t_adc, t_iir, t_dds)
+    return t_adc, t_iir, t_dds, t_cycle
+
+
 class Servo(Module):
     def __init__(self, adc_pads, dds_pads, adc_p, iir_p, dds_p):
+        t_adc, t_iir, t_dds, t_cycle = predict_timing(adc_p, iir_p, dds_p)
+        assert t_iir + 2*adc_p.channels < t_cycle, "need shifting time"
+
         self.submodules.adc = ADC(adc_pads, adc_p)
         self.submodules.iir = IIR(iir_p)
         self.submodules.dds = DDS(dds_pads, dds_p)

         # adc channels are reversed on Sampler
-        for i, j, k, l in zip(reversed(self.adc.data), self.iir.adc,
-                self.iir.dds, self.dds.profile):
-            self.comb += j.eq(i), l.eq(k)
-
-        t_adc = (adc_p.t_cnvh + adc_p.t_conv + adc_p.t_rtt +
-                adc_p.channels*adc_p.width//adc_p.lanes) + 1
-        t_iir = ((1 + 4 + 1) << iir_p.channel) + 1
-        t_dds = (dds_p.width*2 + 1)*dds_p.clk + 1
-
-        t_cycle = max(t_adc, t_iir, t_dds)
-        assert t_iir + (2 << iir_p.channel) < t_cycle, "need shifting time"
+        for iir, adc in zip(self.iir.adc, reversed(self.adc.data)):
+            self.comb += iir.eq(adc)
+        for dds, iir in zip(self.dds.profile, self.iir.dds):
+            self.comb += dds.eq(iir)

+        # If high, a new cycle is started if the current cycle (if any) is
+        # finished. Consequently, if low, servo iterations cease after the
+        # current cycle is finished. Don't care while the first step (ADC)
+        # is active.
         self.start = Signal()

+        # Counter for delay between end of ADC cycle and start of next one,
+        # depending on the duration of the other steps.
         t_restart = t_cycle - t_adc + 1
         assert t_restart > 1
         cnt = Signal(max=t_restart)
         cnt_done = Signal()
         active = Signal(3)

+        # Indicates whether different steps (0: ADC, 1: IIR, 2: DDS) are
+        # currently active (exposed for simulation only), with each bit being
+        # reset once the successor step is launched. Depending on the
+        # timing details of the different steps, any number can be concurrently
+        # active (e.g. ADC read from iteration n, IIR computation from iteration
+        # n - 1, and DDS write from iteration n - 2).
+
+        # Asserted once per cycle when the DDS write has been completed.
         self.done = Signal()

         self.sync += [
             If(self.dds.done,
                 active[2].eq(0)
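The cycle-time arithmetic in `predict_timing` above can be exercised on its own: the cycle is simply the longest of the three blocking steps. The sketch below replicates that arithmetic with plain namedtuples; the parameter values are made-up placeholders, not the real Sampler/Urukul configurations, and `predict_timing_sketch` is a hypothetical stand-in, not the ARTIQ function itself.

```python
from collections import namedtuple

# Hypothetical stand-ins for the real ADCParams/DDSParams objects.
ADCParams = namedtuple("ADCParams", "t_cnvh t_conv t_rtt channels width lanes")
DDSParams = namedtuple("DDSParams", "width channels clk")

def predict_timing_sketch(adc_p, dds_p):
    # ADC step: conversion-high, conversion, round trip, plus serial readout
    t_adc = (adc_p.t_cnvh + adc_p.t_conv + adc_p.t_rtt +
             adc_p.channels * adc_p.width // adc_p.lanes) + 1
    # IIR step: load adc_p.channels values, then process dds_p.channels
    # channels (4 phases each, plus 2 trailing stages of 4 phases)
    t_iir = adc_p.channels + 4 * dds_p.channels + 8 + 1
    # DDS step: serial write of the profile word
    t_dds = (dds_p.width * 2 + 1) * dds_p.clk + 1
    # the cycle is limited by whichever step takes longest
    return t_adc, t_iir, t_dds, max(t_adc, t_iir, t_dds)

adc = ADCParams(t_cnvh=4, t_conv=57, t_rtt=4, channels=8, width=16, lanes=4)
dds = DDSParams(width=8 + 32 + 16 + 16, channels=4, clk=1)
t_adc, t_iir, t_dds, t_cycle = predict_timing_sketch(adc, dds)
print(t_adc, t_iir, t_dds, t_cycle)  # here the DDS write dominates the cycle
```

With these placeholder numbers the DDS serial write is the bottleneck, illustrating why the restart counter (`t_restart = t_cycle - t_adc + 1`) must pad the ADC step out to the full cycle.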
@@ -135,12 +135,12 @@ class StandaloneBase(MiniSoC, AMPSoC):
         self.config["HAS_SI5324"] = None
         self.config["SI5324_SOFT_RESET"] = None

-    def add_rtio(self, rtio_channels):
+    def add_rtio(self, rtio_channels, sed_lanes=8):
         self.submodules.rtio_crg = _RTIOCRG(self.platform)
         self.csr_devices.append("rtio_crg")
         fix_serdes_timing_path(self.platform)
         self.submodules.rtio_tsc = rtio.TSC("async", glbl_fine_ts_width=3)
-        self.submodules.rtio_core = rtio.Core(self.rtio_tsc, rtio_channels)
+        self.submodules.rtio_core = rtio.Core(self.rtio_tsc, rtio_channels, lane_count=sed_lanes)
         self.csr_devices.append("rtio_core")
         self.submodules.rtio = rtio.KernelInitiator(self.rtio_tsc)
         self.submodules.rtio_dma = ClockDomainsRenamer("sys_kernel")(
@@ -228,9 +228,9 @@ class SUServo(StandaloneBase):
             ttl_serdes_7series.Output_8X, ttl_serdes_7series.Output_8X)

         # EEM3/2: Sampler, EEM5/4: Urukul, EEM7/6: Urukul
-        eem.SUServo.add_std(
-            self, eems_sampler=(3, 2),
-            eems_urukul0=(5, 4), eems_urukul1=(7, 6))
+        eem.SUServo.add_std(self,
+                            eems_sampler=(3, 2),
+                            eems_urukul=[[5, 4], [7, 6]])

         for i in (1, 2):
             sfp_ctl = self.platform.request("sfp_ctl", i)
@@ -375,13 +375,13 @@ class MasterBase(MiniSoC, AMPSoC):
         self.csr_devices.append("rtio_crg")
         fix_serdes_timing_path(platform)

-    def add_rtio(self, rtio_channels):
+    def add_rtio(self, rtio_channels, sed_lanes=8):
         # Only add MonInj core if there is anything to monitor
         if any([len(c.probes) for c in rtio_channels]):
             self.submodules.rtio_moninj = rtio.MonInj(rtio_channels)
             self.csr_devices.append("rtio_moninj")

-        self.submodules.rtio_core = rtio.Core(self.rtio_tsc, rtio_channels)
+        self.submodules.rtio_core = rtio.Core(self.rtio_tsc, rtio_channels, lane_count=sed_lanes)
         self.csr_devices.append("rtio_core")

         self.submodules.rtio = rtio.KernelInitiator(self.rtio_tsc)
@@ -608,13 +608,13 @@ class SatelliteBase(BaseSoC):
         self.csr_devices.append("rtio_crg")
         fix_serdes_timing_path(platform)

-    def add_rtio(self, rtio_channels):
+    def add_rtio(self, rtio_channels, sed_lanes=8):
         # Only add MonInj core if there is anything to monitor
         if any([len(c.probes) for c in rtio_channels]):
             self.submodules.rtio_moninj = rtio.MonInj(rtio_channels)
             self.csr_devices.append("rtio_moninj")

-        self.submodules.local_io = SyncRTIO(self.rtio_tsc, rtio_channels)
+        self.submodules.local_io = SyncRTIO(self.rtio_tsc, rtio_channels, lane_count=sed_lanes)
         self.comb += self.drtiosat.async_errors.eq(self.local_io.async_errors)
         self.submodules.cri_con = rtio.CRIInterconnectShared(
             [self.drtiosat.cri],
@@ -57,7 +57,8 @@ class GenericStandalone(StandaloneBase):
         self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
         self.rtio_channels.append(rtio.LogChannel())

-        self.add_rtio(self.rtio_channels)
+        self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
+
         if has_grabber:
             self.config["HAS_GRABBER"] = None
             self.add_csr_group("grabber", self.grabber_csr_group)
@@ -94,7 +95,7 @@ class GenericMaster(MasterBase):
         self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
         self.rtio_channels.append(rtio.LogChannel())

-        self.add_rtio(self.rtio_channels)
+        self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
         if has_grabber:
             self.config["HAS_GRABBER"] = None
             self.add_csr_group("grabber", self.grabber_csr_group)
@@ -127,7 +128,7 @@ class GenericSatellite(SatelliteBase):
         self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
         self.rtio_channels.append(rtio.LogChannel())

-        self.add_rtio(self.rtio_channels)
+        self.add_rtio(self.rtio_channels, sed_lanes=description["sed_lanes"])
         if has_grabber:
             self.config["HAS_GRABBER"] = None
             self.add_csr_group("grabber", self.grabber_csr_group)
@@ -129,7 +129,7 @@ class _StandaloneBase(MiniSoC, AMPSoC):
     }
     mem_map.update(MiniSoC.mem_map)

-    def __init__(self, gateware_identifier_str=None, **kwargs):
+    def __init__(self, gateware_identifier_str=None, drtio_100mhz=False, **kwargs):
         MiniSoC.__init__(self,
                          cpu_type="vexriscv",
                          cpu_bus_width=64,
@@ -207,7 +207,7 @@ class _MasterBase(MiniSoC, AMPSoC):
     }
     mem_map.update(MiniSoC.mem_map)

-    def __init__(self, gateware_identifier_str=None, **kwargs):
+    def __init__(self, gateware_identifier_str=None, drtio_100mhz=False, **kwargs):
         MiniSoC.__init__(self,
                          cpu_type="vexriscv",
                          cpu_bus_width=64,
@@ -236,11 +236,14 @@ class _MasterBase(MiniSoC, AMPSoC):
             platform.request("sfp"), platform.request("user_sma_mgt")
         ]

-        # 1000BASE_BX10 Ethernet compatible, 125MHz RTIO clock
+        rtio_clk_freq = 100e6 if drtio_100mhz else 125e6
+
+        # 1000BASE_BX10 Ethernet compatible, 100/125MHz RTIO clock
         self.submodules.drtio_transceiver = gtx_7series.GTX(
             clock_pads=platform.request("si5324_clkout"),
             pads=data_pads,
-            sys_clk_freq=self.clk_freq)
+            sys_clk_freq=self.clk_freq,
+            rtio_clk_freq=rtio_clk_freq)
         self.csr_devices.append("drtio_transceiver")

         self.submodules.rtio_tsc = rtio.TSC("async", glbl_fine_ts_width=3)
@@ -341,7 +344,7 @@ class _SatelliteBase(BaseSoC):
     }
     mem_map.update(BaseSoC.mem_map)

-    def __init__(self, gateware_identifier_str=None, sma_as_sat=False, **kwargs):
+    def __init__(self, gateware_identifier_str=None, sma_as_sat=False, drtio_100mhz=False, **kwargs):
         BaseSoC.__init__(self,
                          cpu_type="vexriscv",
                          cpu_bus_width=64,
@@ -369,11 +372,14 @@ class _SatelliteBase(BaseSoC):
         if sma_as_sat:
             data_pads = data_pads[::-1]

-        # 1000BASE_BX10 Ethernet compatible, 125MHz RTIO clock
+        rtio_clk_freq = 100e6 if drtio_100mhz else 125e6
+
+        # 1000BASE_BX10 Ethernet compatible, 100/125MHz RTIO clock
         self.submodules.drtio_transceiver = gtx_7series.GTX(
             clock_pads=platform.request("si5324_clkout"),
             pads=data_pads,
-            sys_clk_freq=self.clk_freq)
+            sys_clk_freq=self.clk_freq,
+            rtio_clk_freq=rtio_clk_freq)
         self.csr_devices.append("drtio_transceiver")

         self.submodules.rtio_tsc = rtio.TSC("sync", glbl_fine_ts_width=3)
@@ -673,6 +679,8 @@ def main():
                        "(default: %(default)s)")
     parser.add_argument("--gateware-identifier-str", default=None,
                         help="Override ROM identifier")
+    parser.add_argument("--drtio100mhz", action="store_true", default=False,
+                        help="DRTIO systems only - use 100MHz RTIO clock")
     args = parser.parse_args()

     variant = args.variant.lower()
@@ -681,7 +689,7 @@ def main():
     except KeyError:
         raise SystemExit("Invalid variant (-V/--variant)")

-    soc = cls(gateware_identifier_str=args.gateware_identifier_str, **soc_kc705_argdict(args))
+    soc = cls(gateware_identifier_str=args.gateware_identifier_str, drtio_100mhz=args.drtio100mhz, **soc_kc705_argdict(args))
     build_artiq_soc(soc, builder_argdict(args))
@@ -43,7 +43,7 @@ class TB(Module):
             )
         ]
         cnv_old = Signal(reset_less=True)
-        self.sync.async += [
+        self.sync.async_ += [
             cnv_old.eq(self.cnv),
             If(Cat(cnv_old, self.cnv) == 0b10,
                 sr.eq(Cat(reversed(self.data[2*i:2*i + 2]))),
@@ -62,7 +62,7 @@ class TB(Module):
     def _dly(self, sig, n=0):
         n += self.params.t_rtt*4//2  # t_{sys,adc,ret}/t_async half rtt
         dly = Signal(n, reset_less=True)
-        self.sync.async += dly.eq(Cat(sig, dly))
+        self.sync.async_ += dly.eq(Cat(sig, dly))
         return dly[-1]
@@ -85,8 +85,8 @@ def main():
         assert not (yield dut.done)
         while not (yield dut.done):
             yield
-        x = (yield from [(yield d) for d in dut.data])
-        for i, ch in enumerate(x):
+        for i, d in enumerate(dut.data):
+            ch = yield d
             assert ch == i, (hex(ch), hex(i))

     run_simulation(tb, [run(tb)],
@@ -95,7 +95,7 @@ def main():
             "sys": (8, 0),
             "adc": (8, 0),
             "ret": (8, 0),
-            "async": (2, 0),
+            "async_": (2, 0),
         },
     )
@@ -44,8 +44,10 @@ class TB(Module):
                 yield
             dat = []
             for dds in self.ddss:
-                v = yield from [(yield getattr(dds, k))
-                    for k in "cmd ftw pow asf".split()]
+                v = []
+                for k in "cmd ftw pow asf".split():
+                    f = yield getattr(dds, k)
+                    v.append(f)
                 dat.append(v)
             data.append((i, dat))
         else:
@@ -79,7 +81,7 @@ def main():
     data = []
     run_simulation(tb, [tb.log(data), run(tb)], vcd_name="dds.vcd")

-    assert data[-1][1] == [[0xe, 0x40 | i, 0x30 | i, 0x20 | i] for i in
+    assert data[-1][1] == [[0x15, 0x40 | i, 0x30 | i, 0x20 | i] for i in
             range(4)]
@@ -91,7 +91,7 @@ def main():
             "sys": (8, 0),
             "adc": (8, 0),
             "ret": (8, 0),
-            "async": (2, 0),
+            "async_": (2, 0),
         })
@@ -1,4 +1,3 @@
-import warnings
 from collections import OrderedDict
 from inspect import isclass
@@ -331,7 +330,8 @@ class HasEnvironment:

     @rpc(flags={"async"})
     def set_dataset(self, key, value,
-                    broadcast=False, persist=False, archive=True, save=None):
+                    broadcast=False, persist=False, archive=True,
+                    hdf5_options=None):
         """Sets the contents and handling modes of a dataset.

         Datasets must be scalars (``bool``, ``int``, ``float`` or NumPy scalar)
@@ -343,13 +343,16 @@ class HasEnvironment:
             broadcast.
         :param archive: the data is saved into the local storage of the current
             run (archived as a HDF5 file).
-        :param save: deprecated.
+        :param hdf5_options: dict of keyword arguments to pass to
+            :meth:`h5py.Group.create_dataset`. For example, pass ``{"compression": "gzip"}``
+            to enable transparent zlib compression of this dataset in the HDF5 archive.
+            See the `h5py documentation <https://docs.h5py.org/en/stable/high/group.html#h5py.Group.create_dataset>`_
+            for a list of valid options.
         """
-        if save is not None:
-            warnings.warn("set_dataset save parameter is deprecated, "
-                          "use archive instead", FutureWarning)
-            archive = save
-        self.__dataset_mgr.set(key, value, broadcast, persist, archive)
+        self.__dataset_mgr.set(
+            key, value, broadcast, persist, archive, hdf5_options
+        )

     @rpc(flags={"async"})
     def mutate_dataset(self, key, index, value):
@@ -389,11 +392,11 @@ class HasEnvironment:

         By default, datasets obtained by this method are archived into the output
         HDF5 file of the experiment. If an archived dataset is requested more
-        than one time (and therefore its value has potentially changed) or is
-        modified, a warning is emitted.
+        than one time or is modified, only the value at the time of the first call
+        is archived. This may impact reproducibility of experiments.

         :param archive: Set to ``False`` to prevent archival together with the run's results.
-            Default is ``True``
+            Default is ``True``.
         """
         try:
             return self.__dataset_mgr.get(key, archive)
@@ -35,6 +35,15 @@ class DeviceDB:
         return desc


+def make_dataset(*, persist=False, value=None, hdf5_options=None):
+    "PYON-serializable representation of a dataset in the DatasetDB"
+    return {
+        "persist": persist,
+        "value": value,
+        "hdf5_options": hdf5_options or {},
+    }
+
+
 class DatasetDB(TaskObject):
     def __init__(self, persist_file, autosave_period=30):
         self.persist_file = persist_file
@@ -44,10 +53,23 @@ class DatasetDB(TaskObject):
             file_data = pyon.load_file(self.persist_file)
         except FileNotFoundError:
             file_data = dict()
-        self.data = Notifier({k: (True, v) for k, v in file_data.items()})
+        self.data = Notifier(
+            {
+                k: make_dataset(
+                    persist=True,
+                    value=v["value"],
+                    hdf5_options=v["hdf5_options"]
+                )
+                for k, v in file_data.items()
+            }
+        )

     def save(self):
-        data = {k: v[1] for k, v in self.data.raw_view.items() if v[0]}
+        data = {
+            k: d
+            for k, d in self.data.raw_view.items()
+            if d["persist"]
+        }
         pyon.store_file(self.persist_file, data)

     async def _do(self):
@@ -59,20 +81,23 @@ class DatasetDB(TaskObject):
         self.save()

     def get(self, key):
-        return self.data.raw_view[key][1]
+        return self.data.raw_view[key]

     def update(self, mod):
         process_mod(self.data, mod)

     # convenience functions (update() can be used instead)
-    def set(self, key, value, persist=None):
+    def set(self, key, value, persist=None, hdf5_options=None):
         if persist is None:
             if key in self.data.raw_view:
-                persist = self.data.raw_view[key][0]
+                persist = self.data.raw_view[key]["persist"]
             else:
                 persist = False
-        self.data[key] = (persist, value)
+        self.data[key] = make_dataset(
+            persist=persist,
+            value=value,
+            hdf5_options=hdf5_options,
+        )

     def delete(self, key):
         del self.data[key]
-#
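The change above moves datasets from `(persist, value)` tuples to self-describing dicts. A standalone sketch of the new representation and of the `save()` persistence filter (plain Python; the sample keys and values are invented for illustration, and no sipyco/pyon machinery is involved):

```python
def make_dataset(*, persist=False, value=None, hdf5_options=None):
    "PYON-serializable representation of a dataset in the DatasetDB"
    return {
        "persist": persist,
        "value": value,
        "hdf5_options": hdf5_options or {},
    }

# In-memory stand-in for DatasetDB.data.raw_view with made-up entries.
raw_view = {
    "scan.x": make_dataset(persist=True, value=[1, 2, 3]),
    "tmp": make_dataset(value=42),  # not persisted
    "trace": make_dataset(persist=True, value=[0.5],
                          hdf5_options={"compression": "gzip"}),
}

# Same filtering as the rewritten DatasetDB.save(): keep only entries
# whose "persist" flag is set.
persisted = {k: d for k, d in raw_view.items() if d["persist"]}
print(sorted(persisted))
```

Keeping `hdf5_options` inside the dict is what lets the archiver later call `create_dataset(k, data=v["value"], **v["hdf5_options"])` without a separate side table of per-dataset options.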
@@ -8,9 +8,12 @@ from operator import setitem
 import importlib
 import logging

+import numpy as np
+
 from sipyco.sync_struct import Notifier
 from sipyco.pc_rpc import AutoTarget, Client, BestEffortClient

+from artiq.master.databases import make_dataset
+
 logger = logging.getLogger(__name__)

@@ -115,22 +118,26 @@ class DatasetManager:
         self.ddb = ddb
         self._broadcaster.publish = ddb.update

-    def set(self, key, value, broadcast=False, persist=False, archive=True):
-        if key in self.archive:
-            logger.warning("Modifying dataset '%s' which is in archive, "
-                           "archive will remain untouched",
-                           key, stack_info=True)
+    def set(self, key, value, broadcast=False, persist=False, archive=True,
+            hdf5_options=None):

         if persist:
             broadcast = True

         if broadcast:
-            self._broadcaster[key] = persist, value
+            self._broadcaster[key] = make_dataset(
+                persist=persist,
+                value=value,
+                hdf5_options=hdf5_options,
+            )
         elif key in self._broadcaster.raw_view:
             del self._broadcaster[key]

         if archive:
-            self.local[key] = value
+            self.local[key] = make_dataset(
+                persist=persist,
+                value=value,
+                hdf5_options=hdf5_options,
+            )
         elif key in self.local:
             del self.local[key]

@@ -138,11 +145,11 @@ class DatasetManager:
         target = self.local.get(key, None)
         if key in self._broadcaster.raw_view:
             if target is not None:
-                assert target is self._broadcaster.raw_view[key][1]
-            return self._broadcaster[key][1]
+                assert target["value"] is self._broadcaster.raw_view[key]["value"]
+            return self._broadcaster[key]["value"]
         if target is None:
             raise KeyError("Cannot mutate nonexistent dataset '{}'".format(key))
-        return target
+        return target["value"]

     def mutate(self, key, index, value):
         target = self._get_mutation_target(key)
@@ -158,15 +165,15 @@ class DatasetManager:

     def get(self, key, archive=False):
         if key in self.local:
-            return self.local[key]
+            return self.local[key]["value"]

-        data = self.ddb.get(key)
+        dataset = self.ddb.get(key)
         if archive:
             if key in self.archive:
                 logger.warning("Dataset '%s' is already in archive, "
                                "overwriting", key, stack_info=True)
-            self.archive[key] = data
-        return data
+            self.archive[key] = dataset
+        return dataset["value"]

     def write_hdf5(self, f):
         datasets_group = f.create_group("datasets")
@@ -182,7 +189,7 @@ def _write(group, k, v):
     # Add context to exception message when the user writes a dataset that is
     # not representable in HDF5.
     try:
-        group[k] = v
+        group.create_dataset(k, data=v["value"], **v["hdf5_options"])
     except TypeError as e:
         raise TypeError("Error writing dataset '{}' of type '{}': {}".format(
|
raise TypeError("Error writing dataset '{}' of type '{}': {}".format(
|
||||||
k, type(v), e))
|
k, type(v["value"]), e))
|
||||||
|
|
|
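The tuple-to-dict change above can be sketched as follows. The stand-in below mirrors the ``make_dataset`` helper imported from ``artiq.master.databases``; its body is not shown in this diff, so the defaults here are assumptions, but the keyword arguments and the ``"persist"``/``"value"``/``"hdf5_options"`` keys match the call sites visible above.

```python
# Hypothetical stand-in for artiq.master.databases.make_dataset (not shown in
# this diff): datasets become dicts carrying the value plus HDF5 serialization
# options, instead of the old bare (persist, value) tuples.
def make_dataset(value=None, persist=False, hdf5_options=None):
    return {
        "persist": persist,
        "value": value,
        "hdf5_options": hdf5_options or {},  # assumed default: empty options
    }

d = make_dataset(value=[1, 2, 3], hdf5_options={"compression": "gzip"})
print(d["value"])           # the payload formerly stored at index 1 of the tuple
print(d["hdf5_options"])    # forwarded later to h5py's create_dataset()
```

This is why the accessors in the hunk change from ``[1]`` to ``["value"]``.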
@@ -0,0 +1,30 @@
+# RUN: env ARTIQ_DUMP_LLVM=%t %python -m artiq.compiler.testbench.embedding +compile %s
+# RUN: OutputCheck %s --file-to-check=%t.ll
+
+from artiq.language.core import *
+from artiq.language.types import *
+
+# Make sure `byval` and `sret` are specified both at the call site and the
+# declaration. This isn't caught by the LLVM IR validator, but mismatches
+# lead to miscompilations (at least in LLVM 11).
+
+
+@kernel
+def entrypoint():
+    # CHECK: call void @accept_str\({ i8\*, i32 }\* nonnull byval
+    accept_str("foo")
+
+    # CHECK: call void @return_str\({ i8\*, i32 }\* nonnull sret
+    return_str()
+
+
+# CHECK: declare void @accept_str\({ i8\*, i32 }\* byval\)
+@syscall
+def accept_str(name: TStr) -> TNone:
+    pass
+
+
+# CHECK: declare void @return_str\({ i8\*, i32 }\* sret\)
+@syscall
+def return_str() -> TStr:
+    pass
@@ -3,6 +3,9 @@
 import copy
 import unittest
 
+import h5py
+import numpy as np
+
 from sipyco.sync_struct import process_mod
 
 from artiq.experiment import EnvExperiment
@@ -14,7 +17,7 @@ class MockDatasetDB:
         self.data = dict()
 
     def get(self, key):
-        return self.data[key][1]
+        return self.data[key]["value"]
 
     def update(self, mod):
         # Copy mod before applying to avoid sharing references to objects
@@ -82,9 +85,9 @@ class ExperimentDatasetCase(unittest.TestCase):
     def test_append_broadcast(self):
         self.exp.set(KEY, [], broadcast=True)
         self.exp.append(KEY, 0)
-        self.assertEqual(self.dataset_db.data[KEY][1], [0])
+        self.assertEqual(self.dataset_db.data[KEY]["value"], [0])
         self.exp.append(KEY, 1)
-        self.assertEqual(self.dataset_db.data[KEY][1], [0, 1])
+        self.assertEqual(self.dataset_db.data[KEY]["value"], [0, 1])
 
     def test_append_array(self):
         for broadcast in (True, False):
@@ -103,3 +106,44 @@ class ExperimentDatasetCase(unittest.TestCase):
             with self.assertRaises(KeyError):
                 self.exp.append(KEY, 0)
 
+    def test_write_hdf5_options(self):
+        data = np.random.randint(0, 1024, 1024)
+        self.exp.set(
+            KEY,
+            data,
+            hdf5_options=dict(
+                compression="gzip",
+                compression_opts=6,
+                shuffle=True,
+                fletcher32=True
+            ),
+        )
+
+        with h5py.File("test.h5", "a", "core", backing_store=False) as f:
+            self.dataset_mgr.write_hdf5(f)
+
+            self.assertTrue(np.array_equal(f["datasets"][KEY][()], data))
+            self.assertEqual(f["datasets"][KEY].compression, "gzip")
+            self.assertEqual(f["datasets"][KEY].compression_opts, 6)
+            self.assertTrue(f["datasets"][KEY].shuffle)
+            self.assertTrue(f["datasets"][KEY].fletcher32)
+
+    def test_write_hdf5_no_options(self):
+        data = np.random.randint(0, 1024, 1024)
+        self.exp.set(KEY, data)
+
+        with h5py.File("test.h5", "a", "core", backing_store=False) as f:
+            self.dataset_mgr.write_hdf5(f)
+            self.assertTrue(np.array_equal(f["datasets"][KEY][()], data))
+            self.assertIsNone(f["datasets"][KEY].compression)
+
+    def test_write_hdf5_invalid_type(self):
+        class CustomType:
+            def __init__(self, x):
+                self.x = x
+
+        self.exp.set(KEY, CustomType(42))
+
+        with h5py.File("test.h5", "w", "core", backing_store=False) as f:
+            with self.assertRaisesRegex(TypeError, "CustomType"):
+                self.dataset_mgr.write_hdf5(f)
@@ -2,11 +2,11 @@ import unittest
 import logging
 import asyncio
 import sys
-import os
 from time import time, sleep
 
 from artiq.experiment import *
 from artiq.master.scheduler import Scheduler
+from artiq.master.databases import make_dataset
 
 
 class EmptyExperiment(EnvExperiment):
@@ -86,10 +86,7 @@ class _RIDCounter:
 
 class SchedulerCase(unittest.TestCase):
     def setUp(self):
-        if os.name == "nt":
-            self.loop = asyncio.ProactorEventLoop()
-        else:
-            self.loop = asyncio.new_event_loop()
+        self.loop = asyncio.new_event_loop()
         asyncio.set_event_loop(self.loop)
 
     def test_steps(self):
@@ -291,8 +288,13 @@ class SchedulerCase(unittest.TestCase):
             nonlocal termination_ok
             self.assertEqual(
                 mod,
-                {"action": "setitem", "key": "termination_ok",
-                 "value": (False, True), "path": []})
+                {
+                    "action": "setitem",
+                    "key": "termination_ok",
+                    "value": make_dataset(value=True),
+                    "path": []
+                }
+            )
             termination_ok = True
         handlers = {
            "update_dataset": check_termination
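The ``"setitem"`` mod checked in the test above is applied to the subscriber's dataset dict by ``sipyco.sync_struct.process_mod``. A heavily simplified sketch of that mechanism (an assumption, not sipyco's actual implementation, which also handles other actions and typed path entries):

```python
# Simplified sketch of how a sync_struct "setitem" mod mutates the target
# structure; mirrors the mod shape asserted in check_termination above.
def apply_setitem(target, mod):
    assert mod["action"] == "setitem"
    obj = target
    for step in mod["path"]:   # empty path: operate on the root dict
        obj = obj[step]
    obj[mod["key"]] = mod["value"]

datasets = {}
apply_setitem(datasets, {
    "action": "setitem",
    "key": "termination_ok",
    # shape produced by make_dataset(value=True) in this commit
    "value": {"persist": False, "value": True, "hdf5_options": {}},
    "path": [],
})
print(datasets["termination_ok"]["value"])  # → True
```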
@@ -1,7 +1,8 @@
 from contextlib import contextmanager
-import unittest
+import importlib
 from pathlib import Path
 import tempfile
+import unittest
 
 from artiq import tools
 
@@ -10,13 +11,13 @@ from artiq import tools
 # Very simplified version of CPython's
 # Lib/test/test_importlib/util.py:create_modules
 @contextmanager
-def create_modules(*names):
+def create_modules(*names, extension=".py"):
     mapping = {}
     with tempfile.TemporaryDirectory() as temp_dir:
         mapping[".root"] = Path(temp_dir)
 
         for name in names:
-            file_path = Path(temp_dir) / f"{name}.py"
+            file_path = Path(temp_dir) / f"{name}{extension}"
             with file_path.open("w") as fp:
                 print(f"_MODULE_NAME = {name!r}", file=fp)
             mapping[name] = file_path
@@ -45,6 +46,13 @@ class TestFileImport(unittest.TestCase):
 
         self.assertEqual(mod2._M1_NAME, mod1._MODULE_NAME)
 
+    def test_can_import_not_in_source_suffixes(self):
+        for extension in ["", ".something"]:
+            self.assertNotIn(extension, importlib.machinery.SOURCE_SUFFIXES)
+            with create_modules(MODNAME, extension=extension) as mods:
+                mod = tools.file_import(str(mods[MODNAME]))
+                self.assertEqual(Path(mod.__file__).name, f"{MODNAME}{extension}")
+
 
 class TestGetExperiment(unittest.TestCase):
     def test_fail_no_experiments(self):
@@ -2,7 +2,6 @@ import unittest
 import logging
 import asyncio
 import sys
-import os
 from time import sleep
 
 from artiq.experiment import *
@@ -77,10 +76,7 @@ def _run_experiment(class_name):
 
 class WorkerCase(unittest.TestCase):
     def setUp(self):
-        if os.name == "nt":
-            self.loop = asyncio.ProactorEventLoop()
-        else:
-            self.loop = asyncio.new_event_loop()
+        self.loop = asyncio.new_event_loop()
         asyncio.set_event_loop(self.loop)
 
     def test_simple_run(self):
@@ -1,5 +1,6 @@
 import asyncio
 import importlib.util
+import importlib.machinery
 import inspect
 import logging
 import os
@@ -78,7 +79,10 @@ def file_import(filename, prefix="file_import_"):
     sys.path.insert(0, path)
 
     try:
-        spec = importlib.util.spec_from_file_location(modname, filename)
+        spec = importlib.util.spec_from_loader(
+            modname,
+            importlib.machinery.SourceFileLoader(modname, str(filename)),
+        )
         module = importlib.util.module_from_spec(spec)
         # Add to sys.modules, otherwise inspect thinks it is a built-in module and often breaks.
         # This must take place before module execution because NAC3 decorators call inspect.
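The switch from ``spec_from_file_location`` to an explicit ``SourceFileLoader`` is what lets ``file_import`` load files whose extension is not in ``importlib.machinery.SOURCE_SUFFIXES`` (the case exercised by the new ``test_can_import_not_in_source_suffixes`` test). A self-contained sketch of the technique, with the file name and contents invented for illustration:

```python
# Sketch: spec_from_file_location returns None for an unrecognized suffix,
# while an explicit SourceFileLoader imports any file as Python source.
import importlib.machinery
import importlib.util
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "experiment.custom"   # ".custom" is not a source suffix
    path.write_text("ANSWER = 42\n")

    spec = importlib.util.spec_from_loader(
        "experiment",
        importlib.machinery.SourceFileLoader("experiment", str(path)),
    )
    module = importlib.util.module_from_spec(spec)
    sys.modules["experiment"] = module     # register before execution
    spec.loader.exec_module(module)

print(module.ANSWER)  # → 42
```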
@@ -5,24 +5,19 @@ Developing ARTIQ
 This section is only for software or FPGA developers who want to modify ARTIQ. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we normally provide board binaries for you.
 
 The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
-ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``.nix`` files from the ``nix-scripts`` repository and run the commands manually) - but Nix makes the process a lot easier.
+ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier.
 
-* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``vivado.nix`` file from the ``nix-scripts`` repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
+* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
 * During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
-* Install the `Nix package manager <http://nixos.org/nix/>`_ and Git (e.g. ``$ nix-shell -p git``).
-* Set up the M-Labs binary substituter (:ref:`same procedure as the user section <installing-nix-users>`) to allow binaries to be downloaded. Otherwise, tools such as LLVM and the Rust compiler will be compiled on your machine, which uses a lot of CPU time, memory, and disk space.
-* Clone the repositories https://github.com/m-labs/artiq and https://git.m-labs.hk/m-labs/nix-scripts.
-* If you did not install Vivado in its default location ``/opt``, edit ``vivado.nix`` accordingly.
-* Run ``$ nix-shell -I artiqSrc=path_to_artiq_sources shell-dev.nix`` to obtain an environment containing all the required development tools (e.g. Migen, MiSoC, Clang, Rust, OpenOCD...) in addition to the ARTIQ user environment. ``artiqSrc`` should point to the root of the cloned ``artiq`` repository, and ``shell-dev.nix`` can be found in the ``artiq-fast`` folder of the ``nix-scripts`` repository.
+* Install the `Nix package manager <http://nixos.org/nix/>`_, version 2.4 or later. Prefer a single-user installation for simplicity.
+* If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly.
+* Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``).
+* Enter the development shell by running ``nix develop git+https://github.com/m-labs/artiq.git``, or alternatively by cloning the ARTIQ Git repository and running ``nix develop`` at the root (where ``flake.nix`` is).
 * You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli``. If you are using a JSON system description file, use ``$ python -m artiq.gateware.targets.kasli_generic file.json``.
-* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli -V <your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the shell started by ``shell-dev.nix``.
+* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli -V <your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the flake's development environment.
-* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed by ``shell-dev.nix``). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
+* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
 * The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).
 
-.. note::
-    If you do not plan to modify ``nix-scripts``, with the ARTIQ channel configured you can simply enter the development Nix shell with ``nix-shell "<artiq-full/fast/shell-dev.nix>"``. No repositories need to be cloned. This is especially useful if you simply want to build firmware using an unmodified version of ARTIQ.
-
 .. warning::
-    Nix will make a read-only copy of the ARTIQ source to use in the shell environment. Therefore, any modifications that you make to the source after the shell is started will not be taken into account. A solution applicable to ARTIQ (and several other Python packages such as Migen and MiSoC) is to prepend the ARTIQ source directory to the ``PYTHONPATH`` environment variable after entering the shell. If you want this to be done by default, edit ``profile`` in ``shell-dev.nix``.
+    Nix will make a read-only copy of the ARTIQ source to use in the shell environment. Therefore, any modifications that you make to the source after the shell is started will not be taken into account. A solution applicable to ARTIQ (and several other Python packages such as Migen and MiSoC) is to prepend the ARTIQ source directory to the ``PYTHONPATH`` environment variable after entering the shell. If you want this to be done by default, edit the ``devShell`` section of ``flake.nix``.
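The flake-enablement step from the revised documentation can be sketched as a shell session. To stay side-effect free, this writes the configuration line to a scratch directory rather than your real ``~/.config/nix/nix.conf``; the line itself is exactly the one quoted in the docs.

```shell
# Sketch of enabling Nix flakes (written to a scratch copy, not the real
# ~/.config/nix/nix.conf).
conf_dir=$(mktemp -d)
echo "experimental-features = nix-command flakes" >> "$conf_dir/nix.conf"
cat "$conf_dir/nix.conf"

# With flakes enabled, the development shell is then entered with either:
#   nix develop git+https://github.com/m-labs/artiq.git
# or, from a clone of the repository root (where flake.nix is):
#   nix develop
```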

flake.nix (12 changed lines)

@@ -11,6 +11,12 @@
   outputs = { self, mozilla-overlay, src-sipyco, src-nac3, src-migen, src-misoc }:
     let
       pkgs = import src-nac3.inputs.nixpkgs { system = "x86_64-linux"; overlays = [ (import mozilla-overlay) ]; };
 
+      artiqVersionMajor = 8;
+      artiqVersionMinor = self.sourceInfo.revCount or 0;
+      artiqVersionId = self.sourceInfo.shortRev or "unknown";
+      artiqVersion = (builtins.toString artiqVersionMajor) + "." + (builtins.toString artiqVersionMinor) + "-" + artiqVersionId + "-beta";
+
       rustManifest = pkgs.fetchurl {
         url = "https://static.rust-lang.org/dist/2021-01-29/channel-rust-nightly.toml";
         sha256 = "sha256-EZKgw89AH4vxaJpUHmIMzMW/80wAFQlfcxRoBD9nz0c=";
@@ -53,12 +59,12 @@
 
       qasync = pkgs.python3Packages.buildPythonPackage rec {
         pname = "qasync";
-        version = "0.10.0";
+        version = "0.19.0";
         src = pkgs.fetchFromGitHub {
           owner = "CabbageDevelopment";
           repo = "qasync";
           rev = "v${version}";
-          sha256 = "1zga8s6dr7gk6awmxkh4pf25gbg8n6dv1j4b0by7y0fhi949qakq";
+          sha256 = "sha256-xGAUAyOq+ELwzMGbLLmXijxLG8pv4a6tPvfAVOt1YwU=";
         };
         propagatedBuildInputs = [ pkgs.python3Packages.pyqt5 ];
         checkInputs = [ pkgs.python3Packages.pytest ];
@@ -69,7 +75,7 @@
 
       artiq = pkgs.python3Packages.buildPythonPackage rec {
         pname = "artiq";
-        version = "8.0-dev";
+        version = artiqVersion;
         src = self;
 
         preBuild = "export VERSIONEER_OVERRIDE=${version}";
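The ``artiqVersion`` string assembled in the flake above can be transliterated to Python to show its shape; the ``revCount`` and ``shortRev`` values below are illustrative (the short rev is taken from this commit, the rev count is made up).

```python
# Python transliteration of the artiqVersion expression added to flake.nix.
major = 8
rev_count = 1234          # self.sourceInfo.revCount, falls back to 0
short_rev = "b452789"     # self.sourceInfo.shortRev, falls back to "unknown"
version = f"{major}.{rev_count}-{short_rev}-beta"
print(version)  # → 8.1234-b452789-beta
```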