doc: Trailing spaces

This commit is contained in:
architeuthidae 2024-07-19 18:07:46 +08:00 committed by Sébastien Bourdeauducq
parent 5aee0df9f0
commit 7e6a94c6b0
18 changed files with 391 additions and 392 deletions


Networking and configuration
============================
Setting up core device networking
---------------------------------
For Kasli, insert an SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have an SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion.
You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network. Kasli-SoC already directly features RJ45 10/100/1000 Ethernet.
IP address and ping
^^^^^^^^^^^^^^^^^^^
If you purchased a Kasli or Kasli-SoC device from M-Labs, it will arrive with an IP address already set, normally the address requested in the web shop at time of purchase. If you did not specify an address at purchase, the default IP M-Labs uses is ``192.168.1.75``. If you did not obtain your hardware from M-Labs, or if you have just reflashed your core device, see :ref:`networking-tips` below.
Once you know the IP, check that you can ping your device: ::
    $ ping <IP_address>
If ping fails, check that the Ethernet LED is ON; on Kasli, it is the LED next to the SFP0 connector. As a next step, try connecting to the serial port to read the UART log. See :ref:`connecting-UART`.
Core management tool
^^^^^^^^^^^^^^^^^^^^
The tool used to configure the core device is the command-line utility ``artiq_coremgmt``. In order for it to connect to your core device, it must be supplied with the correct IP address, which can be done directly through the ``-D`` option, for example in: ::
    $ artiq_coremgmt -D <IP_address> log
.. note::
    This command reads and displays the core log. If you have recently rebooted or reflashed your core device, you should see the startup logs in your terminal.
Normally, however, the core device IP is supplied through the *device database* for your system, which comes in the form of a Python script called ``device_db.py`` (see also :ref:`device-db`). If you purchased a system from M-Labs, the ``device_db.py`` for your system will have been provided for you, either on the USB stick, inside ``~/artiq`` on your NUC, or sent by email.
Make sure the field ``core_addr`` at the top of the file is set to your core device's correct IP address, and always execute ``artiq_coremgmt`` from the same directory the device database is placed in.
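For reference, the core device entry at the top of a ``device_db.py`` resembles the following sketch (the exact arguments, in particular ``ref_period``, depend on your system): ::

    core_addr = "192.168.1.75"  # your core device's IP address

    device_db = {
        "core": {
            "type": "local",
            "module": "artiq.coredevice.core",
            "class": "Core",
            "arguments": {"host": core_addr, "ref_period": 1e-9}
        },
        # ... further devices ...
    }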
Once you can reach your core device, the IP can be changed at any time by running: ::

    $ artiq_coremgmt [-D old_IP] config write -s ip [new_IP]
and then rebooting the device: ::
    $ artiq_coremgmt [-D old_IP] reboot
Make sure to correspondingly edit your ``device_db.py`` after rebooting.
.. _networking-tips:
Tips and troubleshooting
^^^^^^^^^^^^^^^^^^^^^^^^
For Kasli-SoC:
    If the ``ip`` config is not set, Kasli-SoC firmware defaults to using the IP address ``192.168.1.56``.
For ZC706:
    If the ``ip`` config is not set, ZC706 firmware defaults to using the IP address ``192.168.1.52``.
For Kasli or KC705:
    If the ``ip`` config field is not set or set to ``use_dhcp``, the device will attempt to obtain an IP address and default gateway using DHCP. The chosen IP address will be in log output, which can be accessed via the :ref:`UART log <connecting-UART>`.
If a static IP address is preferred, it can be flashed directly (OpenOCD must be installed and configured, as in :doc:`flashing`), along with, as necessary, default gateway, IPv6, and/or MAC address: ::
    $ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]
    $ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, default gateway and IPv4 prefix length may also be omitted. On any board, once a device can be reached by ``artiq_coremgmt``, these values can be set and edited at any time, following the procedure for IP above.
Regarding IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (potentially unrelated) IPv6 address defined by using the ``ip6`` configuration key.
.. _core-device-config:
Configuring the core device
---------------------------
.. note::
    The following steps are optional, and you only need to execute them if they are necessary for your specific system. To learn more about how ARTIQ works and how to use it first, you might skip to the next page, :doc:`rtio`. For all configuration options, the core device generally must be restarted for changes to take effect.
Flash idle and/or startup kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *idle kernel* is the kernel (that is, a piece of code running on the core device; see :doc:`the next page <rtio>` for further explanation) which the core device runs in between experiments and whenever not connected to the host. It is saved directly to the core device's flash storage in compiled form. Potential uses include cleanup of the environment between experiments, state maintenance for certain hardware, or anything else that should run continuously whenever the system is not otherwise occupied.
To flash an idle kernel, first write an idle experiment. Note that since the idle kernel runs regardless of whether the core device is connected to the host, remote procedure calls or RPCs (functions called by a kernel to run on the host) are forbidden and the ``run()`` method must be a kernel marked with ``@kernel``. Once written, you can compile and flash your idle experiment: ::
    $ artiq_compile idle.py
    $ artiq_coremgmt config write -f idle_kernel idle.elf
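For illustration, the ``idle.py`` compiled above might contain something like the following sketch (the ``led`` device name is an assumption; substitute hardware from your own device database): ::

    from artiq.experiment import *

    class IdleKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led")

        @kernel
        def run(self):
            self.core.reset()
            while True:                  # runs until the host reconnects
                self.led.pulse(200*ms)
                delay(800*ms)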
The *startup kernel* is a kernel executed once and only once immediately whenever the core device powers on. Uses include initializing DDSes and setting TTL directions. For DRTIO systems, the startup kernel should wait until the desired destinations, including local RTIO, are up, using ``self.core.get_rtio_destination_status`` (:meth:`~artiq.coredevice.core.Core.get_rtio_destination_status`).
To flash a startup kernel, proceed as with the idle kernel, but using the ``startup_kernel`` key in the ``artiq_coremgmt`` command.
.. note::
    Subkernels (see :doc:`using_drtio_subkernels`) are allowed in idle (and startup) experiments without any additional ceremony. ``artiq_compile`` will produce a ``.tar`` rather than a ``.elf``; simply substitute ``idle.tar`` for ``idle.elf`` in the ``artiq_coremgmt config write`` command.
Select the RTIO clock source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The core device may use any of an external clock signal, its internal clock with external frequency reference, or its internal clock with internal crystal reference. Clock source and timing are set at power-up. To find out what clock signal you are using, check the startup logs with ``artiq_coremgmt log``.
The default is to use an internal 125MHz clock. To select a source, use a command of the form: ::
    $ artiq_coremgmt config write -s rtio_clock int_125  # internal 125MHz clock (default)
    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125  # external 10MHz reference used to synthesize internal 125MHz
See :ref:`core-device-clocking` for availability of specific options.
Set up resolving RTIO channels to their names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This feature allows you to print the channels' respective names alongside their numbers in RTIO error messages.
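To enable it, generate a map file with the ``artiq_rtiomap`` utility and write it to the ``device_map`` config key, along the lines of the following sketch (the file name is arbitrary): ::

    $ artiq_rtiomap dev_map.bin
    $ artiq_coremgmt config write -f device_map dev_map.bin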
More information on the ``artiq_rtiomap`` utility can be found on the :ref:`Utilities <rtiomap-tool>` page.
Enable event spreading
^^^^^^^^^^^^^^^^^^^^^^
This feature changes the logic used for queueing RTIO output events in the core device for a more efficient use of FPGA resources, at the cost of introducing nondeterminism and potential unpredictability in certain timing errors (specifically gateware :ref:`sequence errors<sequence-errors>`). It can be enabled with the config key ``sed_spread_enable``. See :ref:`sed-event-spreading`.
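Enabling it presumably follows the same pattern as the other config keys above (a sketch): ::

    $ artiq_coremgmt config write -s sed_spread_enable 1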


Core device
===========
The core device is an FPGA-based hardware component that contains a softcore or hardcore CPU tightly coupled with the so-called RTIO core, which runs in gateware and provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler and communicates with peripherals (TTL, DDS, etc.) through the RTIO core, as described in :ref:`artiq-real-time-i-o-concepts`. This architecture provides high timing resolution, low latency, low jitter, high-level programming capabilities, and good integration with the rest of the Python experiment code.
While it is possible to use the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, many experiments require it.
Flash storage
-------------
The core device contains some flash storage space which is used to store configuration data. It is one sector (typically 64 kB) large and organized as a list of key-value records, accessible either through ``artiq_mkfs`` and ``artiq_flash`` or, preferably in most cases, the ``config`` option of the ``artiq_coremgmt`` core management tool (see below). Information can be stored to keys of any name, but the specific keys currently used and referenced by ARTIQ are summarized below:
``idle_kernel``
    Stores (compiled ``.tar`` or ``.elf`` binary of) idle kernel. See :ref:`miscellaneous_config_core_device`.
``startup_kernel``
    Stores (compiled ``.tar`` or ``.elf`` binary of) startup kernel. See :ref:`miscellaneous_config_core_device`.
``ip``
    Sets IP address of core device. For this and other networking options see also :ref:`core-device-networking`.
``mac``
    Sets MAC address of core device. Unnecessary on Kasli or Kasli-SoC, which can obtain it from EEPROM.
``ipv4_default_route``
    Sets IPv4 default route.
``ip6``
    Sets IPv6 address of core device. Can be set irrespective of and used simultaneously with the IPv4 address above.
``ipv6_default_route``
    Sets IPv6 default route.
``log_level``
    Sets core device log level. Possible levels are ``TRACE``, ``DEBUG``, ``INFO``, ``WARN``, ``ERROR``, and ``OFF``. Note that enabling higher log levels will produce some core device slowdown.
``uart_log_level``
    Sets UART log level, with the same options. Printing a large number of messages to UART log will produce a significant slowdown.
``rtio_clock``
    Sets the RTIO clock; see :ref:`core-device-clocking`.
``routing_table``
    Sets the routing table in DRTIO systems.
``device_map``
    If set, allows the core log to connect RTIO channels to device names and use device names as well as channel numbers in log output. A correctly formatted table can be automatically generated with ``artiq_rtiomap``, see :ref:`Utilities<rtiomap-tool>`.
``net_trace``
    If set to ``1``, will activate net trace (print all packets sent and received to UART and core log). This will considerably slow down all network response from the core. Not applicable for ARTIQ-Zynq (Kasli-SoC, ZC706).
``panic_reset``
    If set to ``1``, core device will restart automatically. Not applicable for ARTIQ-Zynq.
``no_flash_boot``
    If set to ``1``, will disable flash boot. Network boot is attempted if possible. Not applicable for ARTIQ-Zynq.
``boot``
    Allows full firmware/gateware (``boot.bin``) to be written with ``artiq_coremgmt``, on ARTIQ-Zynq systems only.
Common configuration commands
-----------------------------
You can also write entire files in a record using the ``-f`` option. This is useful for instance to write compiled kernels into flash storage, the result of which can be verified with, e.g.: ::
    $ artiq_coremgmt config read idle_kernel | head -c9
    b'\x7fELF
The same option is used to write ``boot.bin`` in ARTIQ-Zynq. Note that the ``boot`` key is write-only.
See also the full reference of ``artiq_coremgmt`` in :ref:`Utilities <core-device-management-tool>`.
Board details
-------------
FPGA board ports
^^^^^^^^^^^^^^^^

All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging.
Kasli and Kasli SoC
^^^^^^^^^^^^^^^^^^^
`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is architecturally separate, among other things being capable of performing much heavier software computations quickly locally to the board, but provides generally similar features to Kasli. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original Kasli v1.0 remains supported by ARTIQ.
Kasli can be connected to the network using a 1000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC).
The DRTIO line rate depends upon the RTIO clock frequency running, e.g., at 125MHz the line rate is 2.5Gbps, at 150MHz 3.0Gbps, etc. See below for information on RTIO clocks.
KC705
^^^^^
Clocking
--------
The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If the latter is chosen, an external reference must be provided (via the front panel SMA input on Kasli boards). Valid configuration options include:
* ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock,
* ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option),
* ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock,
* ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock.
The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the JSON description file. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.
If not using WRPLL, the PLL can also be bypassed entirely with the options:
* ``ext0_bypass`` (input clock used directly)
* ``ext0_bypass_125`` (explicit alias)


The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in ``artiq.experiment`` and can be imported with ``from artiq.experiment import *``.
:mod:`artiq.language.units` module
----------------------------------
.. displays nothing, but makes references work
.. automodule:: artiq.language.units
    :members:
This module contains floating point constants that correspond to common physical units (ns, MHz, ...). They are provided for convenience (e.g. write ``MHz`` instead of ``1000000.0``) and for code clarity.
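For instance (a usage sketch): ::

    from artiq.language.units import us, MHz

    t_pulse = 10*us    # 1e-05, i.e. seconds as the base unit
    f_rf = 80*MHz      # 80000000.0, i.e. Hz as the base unit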


Developing a Network Device Support Package (NDSP)
==================================================
Besides the specialized real-time hardware that most of ARTIQ is concerned with controlling and managing, ARTIQ also easily handles more conventional 'slow' devices. This is done through *controllers*, based on `SiPyCo <https://github.com/m-labs/sipyco>`_ (manual hosted `here <https://m-labs.hk/artiq/sipyco-manual/>`_), which expose remote procedure call (RPC) interfaces to the network. This allows experiments to issue RPCs to the controllers as necessary, without needing to do direct I/O to the devices. Some advantages of this architecture include:
* Controllers/drivers can be run on different machines, alleviating cabling issues and OS compatibility problems.
* Reduces the impact of driver crashes.
* Reduces the impact of driver memory leaks.
Certain devices (such as the PDQ2) may still perform real-time operations by having certain controls physically connected to the core device (for example, the trigger and frame selection signals on the PDQ2). For handling such cases, parts of the support infrastructure may be kernels executed on the core device.
.. seealso::
    Some NDSPs for particular devices have already been written and made available to the public. A (non-exhaustive) list can be found on the page :doc:`list_of_ndsps`.
Components of a NDSP
--------------------
Full support for a specific device, called a network device support package or NDSP, requires several parts:
1. The `driver`, which contains the Python API functions to be called over the network and performs the I/O to the device. The top-level module of the driver should be called ``artiq.devices.XXX.driver``.
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and should be called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it.
The driver and controller
-------------------------
As an example, we will develop a controller for a "device" that is very easy to work with: the console from which the controller is run. The operation that the driver will implement (and offer as an RPC) is writing a message to that console.
To use RPCs, the functions that a driver provides must be the methods of a single object. We will thus define a class that provides our message-printing method: ::
    class Hello:
        def message(self, msg):
            print("message: " + msg)
For a more complex driver, we would place this class definition into a separate Python module called ``driver``. In this example, for simplicity, we can include it in the controller module.
For the controller itself, we will turn this method into a server using ``sipyco.pc_rpc``. Import the function we will use: ::
    from sipyco.pc_rpc import simple_server_loop
and add a ``main`` function that is run when the program is executed: ::
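    # A sketch of the main function; the full version would also parse
    # command-line arguments. The target name and port match those used below.
    def main():
        simple_server_loop({"hello": Hello()}, "::1", 3249)

    if __name__ == "__main__":
        main()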
In a different console, verify that you can connect to the TCP port: ::

    $ telnet ::1 3249
.. tip::
    To exit telnet, use the escape character combination (Ctrl + ]) to access the ``telnet>`` prompt, and then enter ``quit`` or ``close`` to close the connection.
Also verify that a target (i.e. available service for RPC) named "hello" exists: ::
    $ sipyco_rpctool ::1 3249 list-targets
The client
----------
Clients are small command-line utilities that expose certain functionalities of the drivers. The ``sipyco_rpctool`` utility contains a generic client that can be used in most cases, and developing a custom client is not required. You have already used it above in the form of ``list-targets``. Try these commands: ::
    $ sipyco_rpctool ::1 3249 list-methods
    $ sipyco_rpctool ::1 3249 call message test
Run it as before, making sure the controller is running first. You should see the message printed in the controller's console: ::
    $ ./aqctl_hello.py
    message: Hello World!
We see that the client has made a request to the server, which has, through the driver, performed the requisite I/O with the "device" (its console), resulting in the operation we wanted. Success!
.. warning::
    Note that RPC servers operate on copies of objects provided by the client, and modifications to mutable types are not written back. For example, if the client passes a list as a parameter of an RPC method, and that method ``append()s`` an element to the list, the element is not appended to the client's list.
To access this driver in an experiment, we can retrieve the ``Client`` instance with the ``get_device`` and ``set_device`` methods of :class:`artiq.language.environment.HasEnvironment`, and then use it like any other device (provided the controller is running and accessible at the time).
.. _ndsp-integration:
Integration with ARTIQ experiments
----------------------------------
Generally we will want to add the device to our :ref:`device database <device-db>` so that we can add it to an experiment with ``self.setattr_device`` and so the controller can be started and stopped automatically by a controller manager (the ``artiq_ctlmgr`` utility from ``artiq-comtools``). To do so, add an entry to your device database in this format: ::
    device_db.update({
        "hello": {
            "type": "controller",
            "host": "::1",
            "port": 3249,
            "command": "python /abs/path/to/aqctl_hello.py -p {port}"
        },
    })
Now it can be added using ``self.setattr_device("hello")`` in the ``build()`` phase of the experiment, and its methods accessed via: ::
    self.hello.message("Hello world!")
.. note::
    In order to be correctly started and stopped by a controller manager, your controller must additionally implement a ``ping()`` method, which should simply return true, e.g. ::

        def ping(self):
            return True
Remote execution support
------------------------
Additional guidelines
---------------------
Command line and options
^^^^^^^^^^^^^^^^^^^^^^^^
* Controllers should be able to operate in "simulation" mode, specified with ``--simulation``, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file.
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter name.
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it.
Style
^^^^^
* Do not use ``__del__`` to implement the cleanup code of your driver. Instead, define a ``close`` method, and call it using a ``try...finally...`` block in the controller.
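A sketch of that pattern in a controller's ``main`` (the driver name is hypothetical): ::

    def main():
        driver = MyDriver()  # hypothetical driver providing a close() method
        try:
            simple_server_loop({"mydriver": driver}, "::1", 3249)
        finally:
            driver.close()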


DRTIO system
============
DRTIO is the time and data transfer system that allows ARTIQ RTIO channels to be distributed among several satellite devices, synchronized and controlled by a central core device. The main source of DRTIO traffic is the remote control of RTIO output and input channels. The protocol is optimized to maximize throughput and minimize latency, and handles flow control and error conditions (underflows, overflows, etc.).
A real-time packet is defined by a series of D characters containing the packet's payload.
K characters, which are transmitted whenever there is no real-time data to transmit and to delimit real-time packets, are chosen using a 3-bit K selection word. If this K character is the first character in the set of N characters processed by the transceiver in the logic clock cycle, the mapping between the K selection word and the 8b/10b K space contains commas. If the K character is any of the subsequent characters processed by the transceiver, a different mapping is used that does not contain any commas. This scheme allows the receiver to align its logic clock with that of the transmitter, simply by shifting its logic clock so that commas are received into the first character position.
.. note::
    Due to the shoddy design of transceiver hardware, this simple process of clock and comma alignment is difficult to perform in practice. The paper "High-speed, fixed-latency serial links with Xilinx FPGAs" (by Xue LIU, Qing-xu DENG, Bo-ning HOU and Ze-ke WANG) discusses techniques that can be used. The ARTIQ implementation simply keeps resetting the receiver until the comma is aligned, since relatively long lock times are acceptable.
The series of K selection words is then used to form auxiliary packets and the idle pattern. When there is no auxiliary packet to transfer or to delimit auxiliary packets, the K selection word ``100`` is used. To transfer data from an auxiliary packet, the K selection word ``0ab`` is used, with ``ab`` containing two bits of data from the packet. An auxiliary packet is delimited by at least one ``100`` K selection word.


Environment
===========
ARTIQ experiments exist in an environment, which consists of devices, arguments, and datasets. Access to the environment is handled through the :class:`~artiq.language.environment.HasEnvironment` manager provided by the :class:`~artiq.language.environment.EnvExperiment` class that experiments should derive from.
.. _device-db:
The device database
-------------------
Information about available devices is provided to ARTIQ through a file called the device database, typically called ``device_db.py``, which many of the ARTIQ front-end tools require access to in order to run. The device database specifies:
* what devices are available to an ARTIQ installation
* what drivers to use
* what controllers to use
* how and where to contact each device, i.e.

  - at which RTIO channel(s) each local device can be reached
  - at which network address each controller can be reached

as well as, if present, how and where to contact the core device itself (e.g., its IP address, often by a field named ``core_addr``).
This is stored in a Python dictionary whose keys are the device names, which the file must define as a global variable with the name ``device_db``. Examples for various system configurations can be found inside the subfolders of ``artiq/examples``. A typical device database entry looks like this: ::
"led": {
"type": "local",
@ -29,28 +29,28 @@ This is stored in a Python dictionary whose keys are the device names, which the
"arguments": {"channel": 19}
},
Note that the key (the name of the device) is ``led`` and the value is itself a Python dictionary. Names will later be used to gain access to a device through methods such as ``self.setattr_device("led")``. While in this case ``led`` can be replaced with another name, provided it is used consistently, some names (in particular ``core``) are used internally by ARTIQ and will cause problems if changed. It is often more convenient to use aliases for renaming purposes, see below.
.. note::
    The device database is generated and stored in the memory of the master when the master is first started. Changes to the ``device_db.py`` file will not immediately affect a running master. In order to update the device database, right-click in the Explorer window and select 'Scan device database', or run the command ``artiq_client scan-devices``.
.. warning::
    It is important to understand that the device database does not *set* your system configuration, only *describe* it. If you change the devices available to your system, it is usually necessary to edit the device database, but editing the database will not change what devices are available to your system.
Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, realtime, e.g. your Sinara hardware) must be factually attached to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.

While controllers can be added to and removed from your device database relatively easily, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will normally be provided with new binaries and necessary assistance.)

Adding or removing new real-time hardware is a difference in *system configuration,* which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON usually called the *system description* file. The device database generally provides that information to ARTIQ which can change from instance to instance ARTIQ is run, e.g., device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines that information which is *not* permitted to change, e.g., what device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.
If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add remote devices, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the ``artiq_ddb_template`` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly.
Local devices
^^^^^^^^^^^^^
Local device entries are dictionaries which contain a ``type`` field set to ``local``. They correspond to device drivers that are created locally on the master as opposed to using the controller mechanism; this is normally the real-time hardware of the system, including the core, which is itself considered a local device. The ``led`` example above is a local device entry.
The fields ``module`` and ``class`` determine the location of the Python class of the driver. The ``arguments`` field is another (possibly empty) dictionary that contains arguments to pass to the device driver constructor. ``arguments`` is often used to specify the RTIO channel number of a peripheral, which must match the channel number in gateware.
On Kasli and Kasli-SoC, the allocation of RTIO channels to EEM ports is done automatically when the gateware is compiled, and while conceptually simple (channels are assigned one after the other, from zero upwards, for each device entry in the system description file) it is not entirely straightforward (different devices require different numbers of RTIO channels). Again, the easiest way to handle this when writing a new device database is automatically, using ``artiq_ddb_template``.
Controllers
^^^^^^^^^^^

Controller entries are dictionaries which contain a ``type`` field set to ``controller``.
The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matches the ``port`` field) and an appropriate bind address for the controller's listening socket.
An optional ``best_effort`` boolean field determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. ``BestEffortClient`` is very similar to ``Client``, but suppresses network errors and automatically retries connections in the background. If no ``best_effort`` field is present, ``Client`` is used by default.
Aliases
^^^^^^^
If an entry is a string, that string is used as a key for another lookup in the device database.
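For example (a sketch; the alias name is hypothetical): ::

    device_db.update({
        "blue_switch": "ttl0",
    })

Experiments can then request ``blue_switch`` and transparently receive the ``ttl0`` device.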
Arguments
---------
Arguments are values that parameterize the behavior of an experiment. ARTIQ supports both interactive arguments, requested and supplied at some point while an experiment is running, and submission-time arguments, requested in the build phase and set before the experiment is executed. For more on arguments in practice, see the tutorial section :ref:`mgmt-arguments`. For supported argument types and specific reference, see the relevant sections of :doc:`the core language reference <core_language_reference>`, as well as the example experiment ``examples/no_hardware/interactive.py``.
Datasets
--------
Datasets are values, kept in a key-value store, that are read and written by experiments. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`management system tutorial <getting-started-datasets>`.
A dataset may be broadcast (``broadcast=True``), that is, distributed to all clients connected to the master. This is useful e.g. for the ARTIQ dashboard to plot results while an experiment is in progress and give rapid feedback to the user. Broadcasted datasets live in a global key-value store owned by the master. Care should be taken that experiments use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for instance, a periodic calibration experiment might update a dataset read by payload experiments.
Broadcasted datasets are replaced when a new dataset with the same key (name) is produced. By default, they are erased when the master halts. Broadcasted datasets may be made persistent (``persistent=True``, which also implies ``broadcast=True``), in which case the master stores them in an LMDB database typically called ``dataset_db.mdb``, where they are saved across master restarts.
By default, datasets are archived in the HDF5 output for that run, although this can be opted against (``archive=False``).
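For instance, a sketch of the corresponding ``set_dataset`` calls inside an experiment (key names hypothetical): ::

    # broadcast and persist a calibration value for use by other experiments
    self.set_dataset("calib.f_resonance", 80e6, broadcast=True, persistent=True)

    # record raw data in the HDF5 archive only
    self.set_dataset("raw_counts", [10, 12, 11], archive=True)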
Datasets and units
^^^^^^^^^^^^^^^^^^
Datasets accept metadata for numerical formatting with the ``unit``, ``scale`` and ``precision`` parameters of ``set_dataset``.
.. note::
    In experiment code, values are assumed to be in the SI base unit. Setting a dataset with a value of ``1000`` and the unit ``kV`` represents the quantity ``1 kV``. It is recommended to use the globals defined by :mod:`artiq.language.units` and write ``1*kV`` instead of ``1000`` for the value.

    In dashboards and clients these globals are not available. However, setting a dataset with a value of ``1`` and the unit ``kV`` simply represents the quantity ``1 kV``.
``precision`` refers to the maximum number of decimal places to display. This parameter does not affect the underlying value, and is only used for display purposes.
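As a sketch of how these parameters combine (the key name and value here are arbitrary, and ``kV`` is the unit global mentioned above): ::

    from artiq.language.units import kV

    # Stored as 1500.0 (SI base units, i.e. volts); dashboards can
    # display it as "1.5 kV", with at most two decimal places.
    self.set_dataset("hv_setpoint", 1.5*kV,
                     unit="kV", scale=kV, precision=2,
                     broadcast=True)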
@ -64,7 +64,7 @@ create and use variable lengths arrays in kernels?
Don't. Preallocate everything. Or chunk it and e.g. read 100 events per
function call, push them upstream and retry until the gate time closes.
execute multiple slow controller RPCs in parallel without losing time?
----------------------------------------------------------------------
Use ``threading.Thread``: portable, fast, simple for one-shot calls.
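As a minimal host-side sketch of that advice (the controller proxies ``self.ctl_a`` and ``self.ctl_b`` are hypothetical placeholders): ::

    import threading

    def run(self):
        # Start both slow RPCs so that their latencies overlap...
        t_a = threading.Thread(target=self.ctl_a.slow_call)
        t_b = threading.Thread(target=self.ctl_b.slow_call)
        t_a.start()
        t_b.start()
        # ...then wait for both to complete.
        t_a.join()
        t_b.join()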
@ -1,15 +1,15 @@
(Re)flashing your core device
=============================
.. note::
If you have purchased a pre-assembled system from M-Labs or QUARTIQ, the gateware and firmware of your devices will already be flashed to the newest version of ARTIQ. Flashing your device is only necessary if you obtained your hardware in a different way, or if you want to change your system configuration or upgrade your ARTIQ version after the original purchase. Otherwise, skip straight to :doc:`configuring`.
.. _obtaining-binaries:
Obtaining board binaries
------------------------
If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware for your system that corresponds to your currently installed version of ARTIQ using the ARTIQ firmware service (AFWS). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. The client ``afws_client`` is included in all ARTIQ installations.
Run the command::
@ -25,13 +25,13 @@ Installing and configuring OpenOCD
----------------------------------
.. warning::
These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility ``artiq_flash`` to reflash. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.
ARTIQ supplies the utility ``artiq_flash``, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the latter is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.
.. note::
With Conda, install ``openocd`` as follows: ::
$ conda install -c m-labs openocd
@ -71,26 +71,26 @@ On Windows
4. Select WinUSB from the spinner list.
5. Click "Install Driver" or "Replace Driver".
You may need to repeat these steps every time you plug the FPGA board into a port it has not previously been plugged into, even on the same system.
.. _writing-flash:
Writing the flash
-----------------
First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you can simply connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses ``artiq_coremgmt`` to flash over network, an Ethernet connection and an IP address, supplied either with the ``-D`` option or in your :ref:`device database <device-db>`, are sufficient.
For Kasli-SoC or ZC706:
::
$ artiq_coremgmt [-D 192.168.1.75] config write -f boot [afws_directory]/boot.bin
$ artiq_coremgmt reboot
If the device is not reachable due to corrupted firmware or networking problems, extract the SD card and copy ``boot.bin`` onto it manually.
For Kasli:
::
$ artiq_flash -d [afws_directory]
For KC705:
@ -100,30 +100,30 @@ For KC705:
The SW13 switches need to be set to 00001.
Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for ``artiq_flash`` in the :ref:`Utilities <flashing-loading-tool>` reference.
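For example, with a placeholder hostname: ::

    $ artiq_flash -H kasli-lab.local -d [afws_directory]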
.. _connecting-uart:
Connecting to the UART log
--------------------------
A UART is a peripheral device for asynchronous serial communication; in the case of core device boards, it allows the reading of the UART log, which is used for debugging, especially when problems with booting or networking disallow checking core logs with ``artiq_coremgmt log``. If you had no issues flashing your board you can proceed directly to :doc:`configuring`.
Otherwise, ensure your core device is connected to your PC with a data micro-USB cable, as above, and wait at least fifteen seconds after startup to try to connect. To help find the correct port to connect to, you can list your system's serial devices by running: ::
$ python -m serial.tools.list_ports -v
This will give you the list of ``/dev/ttyUSBx`` or ``COMx`` device names (on Linux and Windows respectively). Most commonly, the correct option is the third, i.e. index number 2, but it can vary.
On Linux:
Run the commands: ::
stty 115200 < /dev/ttyUSBx
cat /dev/ttyUSBx
When you restart or reflash the core device you should see the startup logs in the terminal. If you encounter issues, try other ``ttyUSBx`` names, and make certain that your user is part of the ``dialout`` group (run ``groups`` in a terminal to check).
On Windows:
Use a program such as PuTTY to connect to the COM port. Connect to every available COM port at first, restart the core device, see which port produces meaningful output, and close the others. It may be necessary to install the `FTDI drivers <https://ftdichip.com/drivers/>`_ first.
Note that the correct parameters for the serial port are 115200bps 8-N-1 for every core device.
@ -23,9 +23,9 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
The central part of our code is our ``LED`` class, which derives from :class:`~artiq.language.environment.EnvExperiment`. Almost all experiments should derive from this class, which provides access to the environment as well as including the necessary experiment framework from the base-level :class:`~artiq.language.environment.Experiment`. It will call our :meth:`~artiq.language.environment.HasEnvironment.build` at the right time and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` we use to gain access to our devices ``core`` and ``led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method is a kernel and must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host).
Before you can run the example experiment, you will need to supply ARTIQ with the device database for your system, just as you did when configuring the core device. Make sure ``device_db.py`` is in the same directory as ``led.py``. Check once again that the field ``core_addr``, placed at the top of the file, matches the current IP address of your core device.
If you don't have a ``device_db.py`` for your system, consult :ref:`device-db` to find out how to construct one. You can also find example device databases in the ``examples`` folder of ARTIQ, sorted into corresponding subfolders by core device, which you can edit to match your system.
.. note::
To access the examples, find where the ARTIQ package is installed on your machine with: ::
@ -41,7 +41,7 @@ The process should terminate quietly and the LED of the device should turn on. C
Host/core device interaction (RPC)
----------------------------------
A method or function running on the core device (which we call a "kernel") may communicate with the host by calling non-kernel functions that may accept parameters and may return a value. The "remote procedure call" (RPC) mechanisms automatically handle the communication between the host and the device, conveying between them what function to call, what parameters to call it with, and the resulting value, once returned.
Modify ``led.py`` as follows: ::
@ -71,11 +71,11 @@ You can then turn the LED off and on by entering 0 or 1 at the prompt that appea
$ artiq_run led.py
Enter desired LED state: 0
What happens is that the ARTIQ compiler notices that the ``input_led_state`` function does not have a ``@kernel`` decorator (:func:`~artiq.language.core.kernel`) and thus must be executed on the host. When the function is called on the core device, it sends a request to the host, which executes it. The core device waits until the host returns, and then continues the kernel; in this case, the host displays the prompt, collects user input, and the core device sets the LED state accordingly.
The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. See also :ref:`compiler-types`.
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :ref:`artiq-real-time-i-o-concepts` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
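Putting these pieces together, a sketch of the modified ``led.py`` (a reconstruction from the behavior described above, not a verbatim listing): ::

    from artiq.experiment import *


    def input_led_state() -> TBool:
        return input("Enter desired LED state: ") == "1"


    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led")

        @kernel
        def run(self):
            self.core.reset()
            s = input_led_state()       # RPC: executed on the host
            self.core.break_realtime()  # move now_mu ahead of the wall clock
            if s:
                self.led.on()
            else:
                self.led.off()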
Real-time Input/Output (RTIO)
-----------------------------
@ -145,7 +145,7 @@ Try reducing the period of the generated waveform until the CPU cannot keep up w
Parallel and sequential blocks
------------------------------
It is often necessary for several pulses to overlap one another. This can be expressed through the use of the ``with parallel`` construct, in which the events generated by individual statements are scheduled to execute at the same time, rather than sequentially. The duration of the ``parallel`` block is the duration of its longest statement.
Try the following code and observe the generated pulses on a 2-channel oscilloscope or logic analyzer: ::
@ -167,12 +167,12 @@ Try the following code and observe the generated pulses on a 2-channel oscillosc
delay(4*us)
ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches.
It simply remembers its position on the timeline (``now_mu``) when entering the ``parallel`` block and resets to that position after each individual statement.
At the end of the block, the cursor is advanced to the furthest position it reached during the block.
In other words, the statements in a ``parallel`` block are actually executed sequentially.
Only the RTIO events generated by the statements are *scheduled* in parallel.
Remember that while ``now_mu`` resets at the beginning of each statement in a ``parallel`` block, the wall clock advances regardless. If a particular statement takes a long time to execute (which is different from -- and unrelated to! -- the events *scheduled* by the statement taking a long time), the wall clock may advance past the reset value, putting any subsequent statements inside the block into a situation of negative slack (i.e., resulting in :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` ). Sometimes underflows may be avoided simply by reordering statements within the parallel block. This especially applies to input methods, which generally necessarily block CPU progress until the wall clock has caught up to or overtaken the cursor.
Within a parallel block, some statements can be scheduled sequentially again using a ``with sequential`` block. Observe the pulses generated by this code: ::
@ -190,15 +190,15 @@ Within a parallel block, some statements can be scheduled sequentially again usi
for i in range(1000000):
    with parallel:
        self.ttl0.pulse(2*us)  # 1
        if True:               # 2
            self.ttl1.pulse(2*us)  # 3
            self.ttl2.pulse(2*us)  # 4
    delay(4*us)
This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of statement #2; it will not repeat the reset at the deeper indentation level for #3 or #4.
In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 us. Internally, statements #3 and #4, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To execute #3 and #4 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
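As a sketch of that fix, a second, nested ``parallel`` block makes all three pulses overlap: ::

    for i in range(1000000):
        with parallel:
            self.ttl0.pulse(2*us)
            if True:
                with parallel:
                    self.ttl1.pulse(2*us)
                    self.ttl2.pulse(2*us)
        delay(4*us)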
Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to cause sequencing issues in the gateware; see also :ref:`sequence-errors`.
@ -225,14 +225,14 @@ The core device records the real-time I/O waveforms into a circular buffer. It i
rtio_log("ttl0", "i", i)
delay(...)
When using ``artiq_run``, the recorded data can be extracted using ``artiq_coreanalyzer`` (see :ref:`core-device-rtio-analyzer-tool`). To export it to VCD, which can be viewed using third-party tools such as GtkWave, use the command ``artiq_coreanalyzer -w rtio.vcd``. Recorded data can also be viewed directly with the ARTIQ dashboard, which will be presented later in :doc:`getting_started_mgmt`.
.. _getting-started-dma:
Direct Memory Access (DMA)
--------------------------
DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up'. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.
Try this: ::
@ -267,7 +267,7 @@ Try this: ::
# each playback advances the timeline by 50*(100+100) ns
self.core_dma.playback_handle(pulses_handle)
.. note::
Only output events are redirected to the DMA core. Input methods inside a ``with dma`` block will be called as they would be outside of the block, in the current real-time context, and input events will be buffered normally, not to DMA.
For more documentation on the methods used, see the :mod:`artiq.coredevice.dma` reference.
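For orientation, a minimal record-and-playback sketch using those methods (the pulse pattern is illustrative; ``ttl0`` and the timings follow the fragments above): ::

    @kernel
    def run(self):
        self.core.reset()
        with self.core_dma.record("pulses"):
            # Output events in this block are recorded rather than executed.
            for i in range(50):
                self.ttl0.pulse(100*ns)
                delay(100*ns)
        pulses_handle = self.core_dma.get_handle("pulses")
        self.core.break_realtime()
        # Each playback advances the timeline by 50*(100+100) ns.
        self.core_dma.playback_handle(pulses_handle)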
@ -1,27 +1,27 @@
Getting started with the management system
==========================================
In practice, rather than managing experiments by executing ``artiq_run`` over and over, most use cases are better served by using the ARTIQ *management system.* This is the high-level part of ARTIQ, which can be used to schedule experiments, distribute and store the results, and manage devices and parameters. It possesses a detailed GUI and can be used on several machines concurrently, allowing them to coordinate with each other and with the specialized hardware over the network. As a result, multiple users on different machines can schedule experiments or retrieve results on the same ARTIQ system, potentially simultaneously.
The management system consists of at least two parts:
a. the **ARTIQ master,** which runs on a single machine, facilitates communication with the core device and peripherals, and is responsible for most of the actual duties of the system,
b. one or more **ARTIQ clients,** which may be local or remote and which communicate only with the master. Both a GUI (the **dashboard**) and a straightforward command line client are provided, with many of the same capabilities.
as well as, optionally,
c. one or more **controller managers**, which help coordinate the operation of certain (generally, non-realtime) classes of device.
In this tutorial, we will explore the basic operation of the management system. Because the various components of the management system run wholly on the host machine, and not on the core device (in other words, they do not inherently involve any kernel functions), it is not necessary to have a core device or any special hardware set up to use it. The examples in this tutorial can all be carried out using only your computer.
Starting your first experiment with the master
----------------------------------------------
In the previous tutorial, we used the ``artiq_run`` utility to execute our experiments, which is a simple standalone tool that bypasses the management system. We will now see how to run an experiment using the master and the dashboard.
First, create a folder ``~/artiq-master`` and copy into it the ``device_db.py`` for your system (your device database, exactly as in :ref:`connecting-to-the-core-device`.) The master uses the device database in the same way as ``artiq_run`` when communicating with the core device. Since no devices are actually used in these examples, you can also use the ``device_db.py`` found in ``examples/no_hardware``.
Secondly, create a subfolder ``~/artiq-master/repository`` to contain experiments. By default, the master scans for a folder of this name to determine what experiments are available. If you'd prefer to use a different name, this can be changed by running ``artiq_master -r [folder name]`` instead of ``artiq_master`` below.
Create a very simple experiment in ``~/artiq-master/repository`` and save it as ``mgmt_tutorial.py``: ::
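A minimal sketch of such an experiment, consistent with the "Hello World" output used later in this tutorial, might be: ::

    from artiq.experiment import *


    class MgmtTutorial(EnvExperiment):
        """Management tutorial"""
        def build(self):
            pass

        def run(self):
            print("Hello World")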
@ -37,7 +37,7 @@ Create a very simple experiment in ``~/artiq-master/repository`` and save it as
Start the master with: ::
$ cd ~/artiq-master
$ artiq_master
@ -48,22 +48,22 @@ Now, start the dashboard with the following commands in another terminal: ::
$ cd ~
$ artiq_dashboard
.. note::
In order to connect to a master over the network, start it with the command ::
$ artiq_master --bind [hostname or IP]
and then use the option ``--server`` or ``-s`` for clients, as in: ::
$ artiq_dashboard -s [hostname or IP of the master]
$ artiq_client -s [hostname or IP of the master]
Both IPv4 and IPv6 are supported.
The dashboard should display the list of experiments from the repository folder in a dock called "Explorer". There should be only the experiment we created. Select it and click "Submit", then look at the "Log" dock for the output from this simple experiment.
.. seealso::
You may note that experiments may be submitted with a due date, a priority level, a pipeline identifier, and other specific settings. Some of these are self-explanatory. Many are scheduling-related. For more information on experiment scheduling, especially when submitting longer experiments or submitting across multiple users, see :ref:`experiment-scheduling`.
.. _mgmt-arguments:
@ -91,32 +91,32 @@ The dashboard should now display a spin box that allows you to set the value of
Interactive arguments
---------------------
It is also possible to use interactive arguments, which may be requested and supplied while the experiment is running. This time modify the experiment as follows: ::
def build(self):
    pass

def run(self):
    repeat = True
    while repeat:
        print("Hello World")
        with self.interactive(title="Repeat?") as interactive:
            interactive.setattr_argument("repeat", BooleanValue(True))
        repeat = interactive.repeat
Trigger a repository rescan and click the button labeled "Recompute all arguments". Now submit the experiment. It should print once, then wait; in the same dock as "Explorer", find and navigate to the tab "Interactive Args". You can now choose and supply a value for the argument mid-experiment. Every time an argument is requested, the experiment pauses until the input is supplied. If you choose to "Cancel" instead, an :exc:`~artiq.language.environment.CancelledArgsError` will be raised (which the experiment can choose to catch, rather than halting.)
While regular arguments are all requested simultaneously before submitting, interactive arguments can be requested at any point. In order to request multiple interactive arguments at once, place them within the same ``with`` block; see also the example ``interactive.py`` in the ``examples/no_hardware`` folder.
.. _master-setting-up-git:
Setting up Git integration
--------------------------
So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with the tracking of modifications to experiments and with the traceability of a result to a particular version of an experiment.
.. note::
The workflow we will describe in this tutorial corresponds to a situation where the ARTIQ master machine is also used as a Git server where multiple users may push and pull code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head.
We will use the current ``repository`` folder as working directory for making local modifications to the experiments, move it away from the master data directory, and create a new ``repository`` folder that holds the Git data used by the master. Stop the master with Ctrl-C and enter the following commands: ::
@ -130,7 +130,7 @@ We will use the current ``repository`` folder as working directory for making lo
Now, push data into the bare repository. Initialize a regular (non-bare) Git repository into our working directory: ::
$ cd ~/artiq-work
$ git init
Then commit our experiment: ::
@ -147,7 +147,7 @@ Start the master again with the ``-g`` flag, telling it to treat the contents of
$ cd ~/artiq-master
$ artiq_master -g
.. note::
You need at least one commit in the repository before you can start the master.
There should be no errors displayed, and if you start the GUI again, you will find the experiment there.
@ -161,14 +161,14 @@ Then set the execution permission on it: ::
$ chmod 755 ~/artiq-master/repository/hooks/post-receive
.. note::
Remote machines may also push and pull into the master's bare repository using e.g. Git over SSH.
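For reference, the hook itself can be as minimal as the following sketch, assuming ``artiq_client`` is available on the ``PATH`` of the user Git runs as: ::

    #!/bin/sh
    artiq_client scan-repository --async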
Let's now make a modification to the experiment. In the source present in the working directory, add an exclamation mark at the end of "Hello World". Before committing it, check that the experiment can still be executed correctly by running it directly from the filesystem using: ::
$ artiq_client submit ~/artiq-work/mgmt_tutorial.py
.. note::
You may also use the "Open file outside repository" feature of the GUI, by right-clicking on the explorer.
Verify the log in the GUI. If you are happy with the result, commit the new version and push it into the master's repository: ::
@ -177,19 +177,19 @@ Verify the log in the GUI. If you are happy with the result, commit the new vers
$ git commit -a -m "More enthusiasm"
$ git push
.. note::
Notice that commands other than ``git push`` are no longer necessary.
The master should now run the new version from its repository.
As an exercise, add another experiment to the repository, commit and push the result, and verify that it appears in the GUI.
.. _getting-started-datasets:
Datasets
--------
ARTIQ uses the concept of *datasets* to manage the data exchanged with experiments, both supplied *to* experiments (generally, from other experiments) and saved *from* experiments (i.e. results or records).
Modify the experiment as follows, once again using a single non-interactive argument: ::
@ -202,61 +202,61 @@ Modify the experiment as follows, once again using a single non-interactive argu
self.mutate_dataset("parabola", i, i*i)
time.sleep(0.5)
.. tip::
You need to import the ``time`` module, and the ``numpy`` module as ``np``.
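Pieced together from the calls shown above, the modified ``run`` might look like this sketch (``self.count`` standing in for the non-interactive argument): ::

    def run(self):
        self.set_dataset("parabola", np.full(self.count, np.nan),
                         broadcast=True)
        for i in range(self.count):
            self.mutate_dataset("parabola", i, i*i)
            time.sleep(0.5)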
Commit, push and submit the experiment as before. Go to the "Datasets" dock of the GUI and observe that a new dataset has been created. Once the experiment has finished executing, navigate to ``~/artiq-master/`` in a terminal or file manager and see that a new directory has been created called ``results``. Your dataset should be stored as an HDF5 dump file in ``results`` under ``<date>/<hour>``.
.. note::
By default, datasets are primarily attributes of the experiments that run them, and are not shared with the master or the dashboard. The ``broadcast=True`` argument specifies that a dataset should be shared in real-time with the master, which is responsible for dispatching it to the clients. A more detailed description of dataset methods and their arguments can be found under :mod:`artiq.language.environment.HasEnvironment`.
Open the file for your first dataset with HDFView, h5dump, or any similar third-party tool, and observe the data we just generated as well as the Git commit ID of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645`` which represents a particular state of the Git repository). A list of Git commit IDs can be found by running the ``git log`` command in ``~/artiq-master/``.
Applets
-------
Often, rather than the HDF dump, we would like to see our result datasets in readable graphical form, preferably immediately. In the ARTIQ dashboard, this is achieved by programs called "applets". Applets are independent programs that add simple GUI features and are run as separate processes (to achieve goals of modularity and resilience against poorly written applets). ARTIQ supplies several applets for basic plotting in the ``artiq.applets`` module, and users may write their own using the provided interfaces.
.. seealso::
For developing your own applets, see the references provided on the :ref:`management system page<applet-references>` of this manual.
For our ``parabola`` dataset, we will create an XY plot using the provided ``artiq.applets.plot_xy``. Applets are configured with simple command line options; we can find the list of available options using the ``-h`` flag. Try running: ::
$ python3 -m artiq.applets.plot_xy -h
In our case, we only need to supply our dataset as the y-values to be plotted. Navigate to the "Applet" dock in the dashboard. Right-click in the empty list and select "New applet from template" and "XY". This will generate a version of the applet command that shows all applicable options; edit the command so that it retrieves the ``parabola`` dataset and erase the unused options. The line should now be: ::
${artiq_applet}plot_xy parabola
Run the experiment again, and observe how the points are added one by one to the plot.
RTIO analyzer and the dashboard
-------------------------------
The :ref:`rtio-analyzer-example` is fully integrated into the dashboard. Navigate to the "Waveform" tab in the dashboard. After running the example experiment in that section, or any other experiment producing an analyzer trace, the waveform results will be directly displayed in this tab. It is also possible to save a trace, reopen it, or export it to VCD directly from the GUI.
Non-RTIO devices and the controller manager
-------------------------------------------
As described in :ref:`artiq-real-time-i-o-concepts`, there are two classes of equipment a laboratory typically finds itself needing to operate. So far, we have largely discussed ARTIQ in terms of one only: the kind of specialized hardware that requires the very high-resolution timing control ARTIQ provides. The other class comprises the broad range of regular, "slow" laboratory devices, which do *not* require nanosecond precision and can generally be operated perfectly well from a regular PC over a non-realtime channel such as USB.
To handle these "slow" devices, ARTIQ uses *controllers*, intermediate pieces of software which are responsible for the direct I/O to these devices and offer RPC interfaces to the network. Controllers can be started and run standalone, but are generally handled through the *controller manager*, available through the ``artiq-comtools`` package (normally automatically installed together with ARTIQ.) The controller manager in turn communicates with the ARTIQ master, and through it with clients or the GUI.
To start the controller manager (the master must already be running), the only command necessary is: ::
$ artiq_ctlmgr
Controllers may be run on a different machine from the master, or even on multiple different machines, alleviating cabling issues and OS compatibility problems. In this case, communication with the master happens over the network. If multiple machines are running controllers, they must each run their own controller manager (for which only ``artiq-comtools`` and its few dependencies are necessary, not the full ARTIQ installation.) Use the ``-s`` and ``--bind`` flags of ``artiq_ctlmgr`` to set IP addresses or hostnames to connect and bind to.
Note, however, that the controller for the particular device you are trying to connect to must first exist and be part of a complete Network Device Support Package, or NDSP. :doc:`Some NDSPs are already available <list_of_ndsps>`. If your device is not on this list, the system is designed to make it quite possible to write your own. For this, see the :doc:`developing_a_ndsp` page.
Once a device is correctly listed in ``device_db.py``, it can be added to an experiment using ``self.setattr_device([device_name])`` and the methods its API offers called straightforwardly as ``self.[device_name].[method_name]``. As long as the requisite controllers are running and available, the experiment can then be executed with ``artiq_run`` or through the management system.
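As a sketch, with a hypothetical controller named ``hello`` that exposes a ``message()`` method: ::

    class UseControllerExample(EnvExperiment):
        def build(self):
            self.setattr_device("hello")

        def run(self):
            # A plain RPC to the controller; no kernel is involved.
            self.hello.message("Hello World!")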
The ARTIQ session
-----------------
If (as is often the case) you intend to mostly operate your ARTIQ system and its devices from a single machine, i.e., the networked aspects of the management system are largely unnecessary and you will be running master, dashboard, and controller manager on one computer, they can all be started simultaneously with the single command: ::
$ artiq_session
Arguments to the individual components (including ``-s`` and ``--bind``) can still be specified using the ``-m``, ``-d`` and ``-c`` options respectively.
@ -2,23 +2,23 @@ ARTIQ manual
============
.. toctree::
:caption: Overview
:maxdepth: 2
introduction
releases
.. toctree::
:caption: Setting up ARTIQ
:maxdepth: 2
installing
flashing
configuring
developing
.. toctree::
:caption: Tutorials
:maxdepth: 2
rtio
@ -27,17 +27,17 @@ ARTIQ manual
using_drtio_subkernels
.. toctree::
:caption: ARTIQ components
:maxdepth: 2
environment
compiler
management_system
drtio
core_device
.. toctree::
:caption: Network devices
:maxdepth: 2
developing_a_ndsp
@ -45,16 +45,16 @@ ARTIQ manual
default_network_ports
.. toctree::
:caption: References
:maxdepth: 2
main_frontend_reference
core_language_reference
core_drivers_reference
utilities
.. toctree::
:caption: Addenda
:maxdepth: 2
faq
@ -43,7 +43,6 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
(pkgs.python3.withPackages(ps: [
    # List desired Python packages here.
    artiq.artiq
    #ps.misoc # to access UART logs
    #ps.paramiko # needed if and only if flashing boards remotely (artiq_flash -H)
    #artiq.flake8-artiq
    #artiq.dax
@ -1,7 +1,7 @@
Main front-end tool reference
=============================
These are the top-level commands used to run and manage ARTIQ experiments. Not all of the ARTIQ front-end is described here (many additional useful commands are presented in this manual in :doc:`utilities`) but these together comprise the main points of access for using ARTIQ as a system.
:mod:`artiq.frontend.artiq_run`
-------------------------------
@ -9,7 +9,7 @@ These are the top-level commands used to run and manage ARTIQ experiments. Not a
.. argparse::
:ref: artiq.frontend.artiq_run.get_argparser
:prog: artiq_run
:nodefault:
.. _frontend-artiq-master:
@ -20,7 +20,7 @@ These are the top-level commands used to run and manage ARTIQ experiments. Not a
:ref: artiq.frontend.artiq_master.get_argparser
:prog: artiq_master
:nodefault:
.. _frontend-artiq-client:
:mod:`artiq.frontend.artiq_client`
@ -29,7 +29,7 @@ These are the top-level commands used to run and manage ARTIQ experiments. Not a
.. argparse::
:ref: artiq.frontend.artiq_client.get_argparser
:prog: artiq_client
:nodefault:
.. _frontend-artiq-dashboard:
@ -1,8 +1,8 @@
Management system
=================
.. note::
The ARTIQ management system as described here is optional. Experiments can be run one-by-one using :mod:`~artiq.frontend.artiq_run`, and controllers can be run without a controller manager. For their very first steps with ARTIQ or in simple or particular cases, users do not need to deploy the management system. For an introduction to the system and how to use it, see :doc:`getting_started_mgmt`.
Components
----------
@ -12,7 +12,7 @@ Master
The :ref:`ARTIQ master <frontend-artiq-master>` is responsible for managing the parameter and device databases, the experiment repository, scheduling and running experiments, archiving results, and distributing real-time results. It is a headless component, and one or several clients (command-line or GUI) use the network to interact with it.
It should not be confused with the 'master' device in a DRTIO system, which is only a designation for the particular core device acting as central node in a distributed configuration of ARTIQ. The two concepts are otherwise unrelated.
Command-line client
^^^^^^^^^^^^^^^^^^^
@ -22,14 +22,14 @@ The :ref:`command-line client <frontend-artiq-client>` connects to the master an
Dashboard
^^^^^^^^^
The :ref:`dashboard <frontend-artiq-dashboard>` connects to the master and is the main method of interacting with it. The main features of the dashboard are scheduling of experiments, setting of their arguments, examining the schedule, displaying real-time results, and debugging TTL and DDS channels in real time.
The dashboard remembers and restores GUI state (window/dock positions, last values entered by the user, etc.) in between instances. This information is stored in a file called ``artiq_dashboard_{server}_{port}.pyon`` in the configuration directory (e.g. generally ``~/.config/artiq`` for Unix, same as data directory for Windows), distinguished in subfolders by ARTIQ version.
Controller manager
^^^^^^^^^^^^^^^^^^
The controller manager is provided in the ``artiq-comtools`` package (which is also made available separately from mainline ARTIQ, to allow independent use with minimal dependencies) and started with the ``artiq_ctlmgr`` command. It is responsible for running and stopping controllers on a machine. One controller manager must be run by each network node that runs controllers.
A controller manager connects to the master and accesses the device database through it to determine what controllers need to be run. The local network address of the connection is used to filter for only those controllers allocated to the current node. Hostname resolution is supported. Changes to the device database are tracked and controllers will be stopped and started accordingly.
Git integration
---------------
The master may use a Git repository to store experiment source code. Using Git has many advantages. For example, each result file (HDF5) contains the commit ID corresponding to the exact source code it was produced by, which helps reproducibility.
Although the master also supports non-bare repositories, it is recommended to use a bare repository (e.g. ``git init --bare``) to easily support push transactions from clients.
You will want Git to notify the master every time the repository is pushed to (e.g. updated), so that the master knows to rescan the repository for new or changed experiments. This is easiest done with the ``post-receive`` hook, as described in :ref:`master-setting-up-git`.
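As a minimal sketch (full instructions are in :ref:`master-setting-up-git`; this assumes the master runs on the same machine as the repository, so no server address needs to be passed), the hook can consist of a single rescan command: ::

    #!/bin/sh
    # ask the running master to rescan the repository, without blocking the push
    artiq_client scan-repository --async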
.. note::
    If you plan to run the ARTIQ system entirely on a single machine, you may also consider using a non-bare repository and the ``post-commit`` hook to trigger repository scans every time you commit changes (locally). In this case, note that the ARTIQ master never uses the repository's working directory, but only what is committed. More precisely, when scanning the repository, it fetches the last (atomically) completed commit at that time of repository scan and checks it out in a temporary folder. This commit ID is used by default when subsequently submitting experiments. There is one temporary folder by commit ID currently referenced in the system, so concurrently running experiments from different repository revisions is fully supported by the master.
By default, the dashboard runs experiments from the repository, whereas the command-line client (``artiq_client submit``) runs experiments from the raw filesystem (which is useful for iterating rapidly without creating many disorganized commits). In order to run from the raw filesystem when using the dashboard, right-click in the Explorer window and select the option "Open file outside repository"; in order to run from the repository when using the command-line client, simply pass the ``-R`` flag.
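For example, to submit an experiment from the repository with the command-line client (the experiment filename here is hypothetical): ::

    $ artiq_client submit -R my_experiment.py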
.. _experiment-scheduling:
Experiment scheduling
---------------------
Basics
^^^^^^
To make more efficient use of hardware resources, experiments are generally split into three phases and pipelined, such that potentially compute-intensive pre-computation or analysis phases may be executed in parallel with the bodies of other experiments, which access hardware.
.. seealso::
    These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`.
There are three stages of a standard experiment in which users may write code:
1. The **preparation** stage, which pre-fetches and pre-computes any data that is necessary to run the experiment. Users may implement this stage by overloading the :meth:`~artiq.language.environment.Experiment.prepare` method. It is not permitted to access hardware in this stage, as doing so may conflict with other experiments using the same devices.
2. The **run** stage, which corresponds to the body of the experiment and generally accesses hardware. Users must implement this stage and overload the :meth:`~artiq.language.environment.Experiment.run` method.
Priorities and timed runs
^^^^^^^^^^^^^^^^^^^^^^^^^
When determining which experiment should begin executing next (i.e. enter the preparation stage), the scheduler looks at the following factors, in decreasing order of precedence:
1. Experiments may be scheduled with a due date. This is considered the *earliest possible* time of their execution (rather than a deadline, or latest possible -- ARTIQ makes no guarantees about experiments being started or completed before any specified time). If a due date is set and it has not yet been reached, the experiment is not eligible for preparation.
2. The integer priority value specified by the user.
3. The due date itself. The earliest (reached) due date will be scheduled first.
4. The run identifier (RID), an integer that is incremented at each experiment submission. This ensures that, all else being equal, experiments are scheduled in the same order as they are submitted.
Multiple pipelines
^^^^^^^^^^^^^^^^^^
Experiments must be placed into a pipeline at submission time, set by the "Pipeline" field. The master supports multiple simultaneous pipelines, which will operate in parallel. Pipelines are identified by their names, and are automatically created (when an experiment is scheduled with a pipeline name that does not yet exist) and destroyed (when they run empty). By default, all experiments are submitted into the same pipeline, ``main``.
When using multiple pipelines it is the responsibility of the user to ensure that experiments scheduled in parallel will never conflict with those of another pipeline over resources (e.g. attempt to use the same devices simultaneously).
Pauses
^^^^^^
CCBs are used by experiments to configure applets in the dashboard, for example for plotting results.
.. autoclass:: artiq.dashboard.applets_ccb.AppletsCCBDock
:members:
.. _applet-references:
Applet request interfaces
-------------------------
Applet request interfaces allow applets to perform actions on the master database and set arguments in the dashboard. Applets may inherit from ``artiq.applets.simple.SimpleApplet`` and call the methods defined below through the ``req`` attribute.
Embedded applets should use ``AppletRequestIPC`` while standalone applets use ``AppletRequestRPC``. ``SimpleApplet`` automatically chooses the correct interface on initialization.
.. autoclass:: artiq.applets.simple._AppletRequestInterface
:members:
Applet entry area
-----------------
Argument widgets can be used in applets through the :class:`~artiq.gui.applets.EntryArea` class. Below is a simple example code snippet: ::
    entry_area = EntryArea()

    # Create a new widget
    entry_area.setattr_argument("bl", BooleanValue(True))

    # Get the value of the widget (output: True)
    print(entry_area.bl)

    # Set the value
    entry_area.set_value("bl", False)

    # False
    print(entry_area.bl)
The :class:`~artiq.gui.applets.EntryArea` object can then be added to a layout and integrated with the applet GUI. Multiple :class:`~artiq.gui.applets.EntryArea` objects can be used in a single applet.
.. class:: artiq.gui.applets.EntryArea

    .. method:: setattr_argument(name, proc, group=None, tooltip=None)

        Sets an argument as attribute. The names of the argument and of the
        attribute are the same.

    .. method:: set_value(name, value)

        :param name: Argument name
        :param value: Object representing the new value of the argument. For :class:`~artiq.language.scan.Scannable` arguments, this parameter
            should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject` will be set as the selected type when this function is called.
    .. method:: set_values(values)
ARTIQ Releases
##############
ARTIQ follows a rolling release model, with beta, stable, and legacy channels. Different releases are saved as different branches on the M-Labs `ARTIQ repository <https://github.com/m-labs/artiq>`_. The ``master`` branch represents the beta version, where at any time the next stable release of ARTIQ is currently in development. This branch is unstable and does not yet guarantee reliability or consistency, but may also already offer new features and improvements; see the `beta release notes <https://github.com/m-labs/artiq/blob/master/RELEASE_NOTES.rst>`_ for the most up-to-date information. The ``release-[number]`` branches represent stable releases, of which the most recent is considered the current stable version, and the second-most recent the current legacy version.
A FIFO for an input channel contains timestamps and event data for events that have been recorded by the real-time gateware and are waiting to be read out by the CPU on the other end.
Timeline and terminology
------------------------
The set of all input and output events on all channels constitutes the *timeline*.
A high-resolution wall clock (``rtio_counter_mu``) counts clock cycles and manages the precise timing of the events. Output events are executed when their timestamp matches the current clock value. Input events are recorded when they reach the gateware and stamped with the current clock value accordingly.
The kernel runtime environment maintains a timeline cursor (called ``now_mu``) used as the timestamp when output events are submitted to the FIFOs. Both ``now_mu`` and ``rtio_counter_mu`` are counted in integer *machine units,* or mu, rather than SI units. The machine unit represents the maximum resolution of RTIO timing in an ARTIQ system. The duration of a machine unit is the *reference period* of the system, and may be changed by the user, but normally corresponds to a duration of one nanosecond.
The timeline cursor ``now_mu`` can be moved forward or backward on the timeline using :func:`artiq.language.core.delay` and :func:`artiq.language.core.delay_mu` (for delays given in SI units or machine units respectively). The absolute value of ``now_mu`` on the timeline can be retrieved using :func:`artiq.language.core.now_mu` and it can be set using :func:`artiq.language.core.at_mu`. The difference between the cursor and the wall clock is referred to as *slack.* A system is considered in a situation of *positive slack* when the cursor is ahead of the wall clock, i.e., in the future; respectively, it is in *negative slack* if the cursor is behind the wall clock, i.e. in the past.
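For illustration, a minimal kernel fragment (a sketch) exercising each of these functions: ::

    self.core.reset()       # clear FIFOs and re-establish positive slack
    t0 = now_mu()           # absolute cursor position, in machine units
    delay(1*us)             # move the cursor 1 µs into the future
    at_mu(t0 + 2000)        # set the cursor absolutely: 2 µs after t0, at 1 ns per mu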
RTIO timestamps, the timeline cursor, and the ``rtio_counter_mu`` wall clock are all counted relative to the core device startup/boot time. The wall clock keeps running across experiments.
Absolute timestamps can be large numbers.
They are represented internally as 64-bit integers.
With a typical one-nanosecond machine unit, this covers a range of hundreds of years.
Conversions between such a large integer number and a floating point representation can cause loss of precision through cancellation.
When computing the difference of absolute timestamps, use ``self.core.mu_to_seconds(t2-t1)``, not ``self.core.mu_to_seconds(t2)-self.core.mu_to_seconds(t1)`` (see :meth:`~artiq.coredevice.core.Core.mu_to_seconds`).
When accumulating time, do it in machine units and not in SI units, so that rounding errors do not accumulate.
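A sketch of the recommended pattern (``self.do_sequence()`` is a hypothetical kernel method, and ``np`` is NumPy): ::

    total_mu = np.int64(0)
    for _ in range(100):
        t1 = now_mu()
        self.do_sequence()           # hypothetical portion of the experiment
        total_mu += now_mu() - t1    # accumulate in machine units
    # convert only once, at the end, to limit precision loss
    total = self.core.mu_to_seconds(total_mu)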
.. note::
    Absolute timestamps are also referred to as *RTIO fine timestamps,* because they run on a significantly finer resolution than the timestamps provided by the so-called *coarse RTIO clock,* the actual clocking signal provided to or generated by the core device.
    The frequency of the coarse RTIO clock is set by the core device :ref:`clocking settings <core-device-clocking>` but is most commonly 125MHz, which corresponds to eight one-nanosecond machine units per coarse RTIO cycle.
The *coarse timestamp* of an event is its timestamp according to the lower resolution of the coarse clock.
It is in practice a truncated version of the fine timestamp.
In general, ARTIQ offers *precision* on the fine level, but *operates* at the coarse level; this is rarely relevant to the user, but understanding it may clarify the behavior of some RTIO issues (e.g. sequence errors).
.. Related: https://github.com/m-labs/artiq/issues/1237
The following basic example shows how to place output events on the timeline.
It emits a precisely timed 2 µs pulse::
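    ttl.on()
    delay(2*us)
    ttl.off()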
The device ``ttl`` represents a single digital output channel (:class:`artiq.coredevice.ttl.TTLOut`).
The :meth:`artiq.coredevice.ttl.TTLOut.on` method places an rising edge on the timeline at the current cursor position (``now_mu``).
Then the cursor is moved forward 2 µs and a falling edge is placed at the new cursor position.
Later, when the wall clock reaches the respective timestamps, the RTIO gateware executes the two events.
The following diagram shows what is going on at the different levels of the software and gateware stack (assuming one machine unit of time is 1 ns):
This sequence is exactly equivalent to::
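    ttl.pulse(2*us)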
This method :meth:`artiq.coredevice.ttl.TTLOut.pulse` advances the timeline cursor (using :func:`~artiq.language.core.delay` internally) by exactly the amount given. Other methods such as :meth:`~artiq.coredevice.ttl.TTLOut.on`, :meth:`~artiq.coredevice.ttl.TTLOut.off`, :meth:`~artiq.coredevice.ad9914.AD9914.set` do not modify the timeline cursor. The latter are called *zero-duration* methods.
Output errors and exceptions
----------------------------
Underflows
^^^^^^^^^^
An RTIO output event must always be programmed with a timestamp in the future.
In other words, the timeline cursor ``now_mu`` must be in advance of the current wall clock ``rtio_counter_mu``: the past cannot be altered.
The following example tries to place a rising edge event on the timeline.
If the current cursor is in the past, an :class:`artiq.coredevice.exceptions.RTIOUnderflow` exception is thrown.
The experiment attempts to handle the exception by moving the cursor forward and repeating the programming of the rising edge::
    try:
        ttl.on()
    except RTIOUnderflow:
        # move the cursor forward, past the wall clock, and retry
        delay(16.6667*ms)
        ttl.on()
Once the timeline cursor has overtaken the wall clock, the exception does not reoccur and the event can be scheduled successfully.
This can also be thought of as adding positive slack to the system.
To track down :class:`~artiq.coredevice.exceptions.RTIOUnderflow` exceptions in an experiment, there are a few approaches:
* Exception backtraces show where underflow has occurred while executing the code.
* The :ref:`integrated logic analyzer <core-device-rtio-analyzer-tool>` shows the timeline context that led to the exception. The analyzer is always active and supports plotting of RTIO slack. This may be useful to spot where and how an experiment has 'run out' of positive slack.
.. _sequence-errors:
Sequence errors
^^^^^^^^^^^^^^^
A sequence error occurs when a sequence of coarse timestamps cannot be transferred to the gateware. Internally, the gateware stores output events in an array of FIFO buffers (the 'lanes'). Within each particular lane, the coarse timestamps of events must be strictly increasing.
If an event with a coarse timestamp equal to or less than the previous timestamp is submitted, *or* if the current lane is nearly full, the scalable event dispatcher (SED) selects the next lane, wrapping around once the final lane is reached. If this lane also contains an event with a timestamp equal to or beyond the one being submitted, the placement fails and a sequence error occurs.
.. note::
    For performance reasons, unlike :class:`~artiq.coredevice.exceptions.RTIOUnderflow`, most gateware errors do not halt execution of the kernel, because the kernel cannot wait for potential error reports before continuing. As a result, sequence errors are not raised as exceptions and cannot be caught. Instead, the offending event -- in this case, the event that could not be queued -- is discarded, the experiment continues, and the error is reported in the core log. To check the core log, use the command ``artiq_coremgmt log``.
By default, the ARTIQ SED has eight lanes, which normally suffices to avoid sequence errors, but problems may still occur if many (>8) events are issued to the gateware with interleaving timestamps. Due to the strict timing limitations imposed on RTIO gateware, it is not possible for the SED to rearrange events in a lane once submitted, nor to anticipate future events when making lane choices. This makes sequence errors fairly 'unintelligent', but also generally fairly easy to eliminate by manually rearranging the generation of events (*not* rearranging the timing of the events themselves, which is rarely necessary.)
It is also possible to increase the number of SED lanes in the gateware, which will reduce the frequency of sequencing issues, but will also correspondingly put more stress on FPGA resources and timing.
Other notes:
* Strictly increasing (coarse) timestamps never cause sequence errors.
* Strictly increasing *fine* timestamps within the same coarse cycle may still cause sequence errors.
* The number of lanes is a hard limit on the number of RTIO output events that may be emitted within one coarse cycle.
* Zero-duration methods (such as :meth:`artiq.coredevice.ttl.TTLOut.on()`) do not advance the timeline and so will always consume additional lanes if they are scheduled simultaneously. Adding a delay of at least one coarse RTIO cycle will prevent this (e.g. ``delay_mu(np.int64(self.core.ref_multiplier))``; see the sketch after this list).
* Whether a particular sequence of timestamps causes a sequence error or not is fully deterministic (starting from a known RTIO state, e.g. after a reset). Adding a constant offset to the sequence will not affect the result.
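As a sketch of the zero-duration case (``ttl0`` and ``ttl1`` are assumed ``TTLOut`` devices, and ``np`` is NumPy): ::

    # same coarse timestamp: each zero-duration event occupies a separate lane
    ttl0.on()
    ttl1.on()

    # one coarse RTIO cycle of delay allows the next events to reuse the first lane
    delay_mu(np.int64(self.core.ref_multiplier))
    ttl0.off()
    ttl1.off()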
.. note::
    To change the number of SED lanes, it is necessary to recompile the gateware and reflash your core device. Use the ``sed_lanes`` field in your system description file to set the value, then follow the instructions in :doc:`developing`. Alternatively, if you have an active firmware subscription with M-Labs, contact helpdesk@ for edited binaries.
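For reference, a sketch of how the field might appear in a system description file (all other fields elided; the value is illustrative): ::

    {
        "sed_lanes": 16
    }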
.. _collisions-busy-errors:
Collisions
^^^^^^^^^^
A collision occurs when events are submitted to a given RTIO output channel at a resolution the channel is not equipped to handle. Some channels implement 'replacement behavior', meaning that RTIO events submitted to the same timestamp will override each other (for example, if a ``ttl.off()`` and ``ttl.on()`` are scheduled to the same timestamp, the latter automatically overrides the former and only ``ttl.on()`` will be submitted to the channel). On the other hand, if replacement behavior is absent or disabled, or if the two events have the same coarse timestamp with differing fine timestamps, a collision error will be reported.
Like sequence errors, collisions originate in gateware and do not stop the execution of the kernel. The offending event is discarded and the problem is reported asynchronously via the core log.
Busy errors
^^^^^^^^^^^
A busy error occurs when at least one output event could not be executed because the output channel was already busy executing an event. This differs from a collision error in that a collision is triggered when a sequence of events overwhelms *communication* with a channel, and a busy error is triggered when *execution* is overwhelmed. Busy errors are only possible in the context of single events with execution times longer than a coarse RTIO clock cycle; the exact parameters will depend on the nature of the output channel (e.g. specific peripheral device).
Offending event(s) are discarded and the problem is reported asynchronously via the core log.
Input channels and events
-------------------------
The following example counts the rising edges received during a precisely timed 500 ns gate window. If more than 20 rising edges are received, it outputs a pulse::
    # 'input' is assumed to be a TTLInOut device; 'output' is the TTLOut from the original example
    if input.count(input.gate_rising(500*ns)) > 20:
        delay(2*us)
        output.pulse(500*ns)
Note that many input methods will necessarily involve the wall clock catching up to the timeline cursor or advancing before it.
This is to be expected: managing output events means working to plan the future, but managing input events means working to react to the past.
For input channels, it is the past that is under discussion.
In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.gate_rising` waits for the duration of the 500ns interval (or *gate window*) and records an event for each rising edge. At the end of the interval it exits, leaving the timeline cursor at the end of the interval (``now_mu = rtio_counter_mu``). :meth:`~artiq.coredevice.ttl.TTLInOut.count` unloads these events from the input buffers and counts the number of events recorded, during which the wall clock necessarily advances (``rtio_counter_mu > now_mu``). Accordingly, before we place any further output events, a :func:`~artiq.language.core.delay` is necessary to re-establish positive slack.
Similar situations arise with methods such as :meth:`TTLInOut.sample_get <artiq.coredevice.ttl.TTLInOut.sample_get>` and :meth:`TTLInOut.watch_done <artiq.coredevice.ttl.TTLInOut.watch_done>`.
Overflow exceptions
^^^^^^^^^^^^^^^^^^^
The RTIO input channels buffer input events received while an input gate is open, or when using the sampling API (:meth:`TTLInOut.sample_input <artiq.coredevice.ttl.TTLInOut.sample_input>`) at certain points in time.
The events are kept in a FIFO until the CPU reads them out via e.g. :meth:`~artiq.coredevice.ttl.TTLInOut.count`, :meth:`~artiq.coredevice.ttl.TTLInOut.timestamp_mu` or :meth:`~artiq.coredevice.ttl.TTLInOut.sample_get`.
The size of these FIFOs is finite and specified in gateware; in practice, it is limited by the resources available to the FPGA, and therefore differs depending on the specific core device being used.
If a FIFO is full and another event comes in, this causes an overflow condition.
The condition is converted into an :class:`~artiq.coredevice.exceptions.RTIOOverflow` exception that is raised on a subsequent invocation of one of the readout methods.
Overflow exceptions are generally best dealt with simply by reading out from the input buffers more frequently. In odd or particular cases, users may consider modifying the length of individual buffers in gateware.
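A sketch of this pattern (``input`` is an assumed ``TTLInOut`` device): ::

    try:
        n = input.count(input.gate_rising(10*ms))
    except RTIOOverflow:
        # the FIFO filled before readout: read out more frequently,
        # or shorten the gate window
        n = -1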
.. note::
    It is not possible to provoke an :class:`~artiq.coredevice.exceptions.RTIOOverflow` on an RTIO output channel. While output buffers are also of finite size, and can be filled up, the CPU will simply stall the submission of further events until it is once again possible to buffer them. Among other things, this means that padding the timeline cursor with large amounts of positive slack is not always a valid strategy to avoid :class:`~artiq.coredevice.exceptions.RTIOOverflow` exceptions when generating fast event sequences. In practice only a fixed number of events can be generated in advance, and the rest of the processing will be carried out when the wall clock is much closer to ``now_mu``.

    For larger numbers of events which run up against this restriction, the correct method is to use :ref:`getting-started-dma`. In edge cases, enabling event spreading (see below) may fix the problem.
.. _sed-event-spreading:
Event spreading
---------------
By default, the SED only ever switches lanes for timestamp sequence reasons, as described above in :ref:`sequence-errors`. If only output events of strictly increasing coarse timestamps are queued, the SED fills up a single lane and stalls when it is full, regardless of the state of other lanes. This is preserved to avoid nondeterminism in sequence errors and corresponding unpredictable failures (since the timing of 'fullness' depends on the timing of when events are *queued*, which can vary slightly based on CPU execution jitter).
For better utilization of resources and to maximize buffering capacity, *event spreading* may be enabled, which allows the SED to switch lanes immediately when they reach a certain high watermark of 'fullness', increasing the number of events that can be queued before stalls ensue. To enable event spreading, use the ``sed_spread_enable`` config key and set it to ``1``: ::
    $ artiq_coremgmt config write -s sed_spread_enable 1
This will change where and when sequence errors occur in your kernels, and might cause them to vary from execution to execution of the same experiment. It will generally reduce or eliminate :class:`~artiq.coredevice.exceptions.RTIOUnderflow` exceptions caused by queueing stalls and significantly increase the threshold on sequence length before :ref:`DMA <getting-started-dma>` becomes necessary.
Note that event spreading can be particularly helpful in DRTIO satellites, as it is the space remaining in the *fullest* FIFO that is used as a metric for when the satellite can receive more data from the master. The setting is not system-wide and can and must be set independently for each core device in a system. In other words, to enable or disable event spreading in satellites, flash the satellite core configuration directly; this will have no effect on any other satellites or the master.
Seamless handover
-----------------
Synchronization
---------------
The seamless handover of the timeline (cursor and events) across kernels and experiments implies that a kernel can exit long before the events it has submitted have been executed. Generally, this is preferable: it frees up resources to the next kernel and allows work to be carried on from kernel to kernel without interruptions.
However, as a result, no guarantees are made about the state of the system when a new kernel enters. Slack may be positive, negative, or zero; input channels may be filled to overflowing, or empty; output channels may contain events currently being executed, contain events scheduled for the far future, or contain no events at all. Unexpected negative slack can cause RTIOUnderflows. Unexpected large positive slack may cause a system to appear to 'lock', as all its events are scheduled for a distant future and the CPU must wait for the output buffers to empty to continue.
As a result, when beginning a new experiment in an uncertain context, we often want to clear the RTIO FIFOs and initialize the timeline cursor to a reasonable point in the near future. The method :meth:`artiq.coredevice.core.Core.reset` (``self.core.reset()``) is provided for this purpose. The example idle kernel implements this mechanism.
On the other hand, if a kernel exits while some of its events are still waiting to be executed, there is no guarantee made that the events in question ever *will* be executed (as opposed to being flushed out by a subsequent core reset). If a kernel should wait until all its events have been executed, use the method :meth:`~artiq.coredevice.core.Core.wait_until_mu` with a timestamp after (or at) the last event: ::
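    self.core.wait_until_mu(now_mu())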
.. _drtio-and-subkernels:
Using DRTIO and subkernels
==========================
In larger or more spread-out systems, a single core device might not be suited to managing all the RTIO operations or channels necessary. For these situations ARTIQ supplies Distributed Real-Time IO, or DRTIO. This allows systems to be configured with some or all of their RTIO channels distributed to one or several *satellite* core devices, which are linked to the *master* core device. These remote channels are then accessible in kernels on the master device exactly like local channels.
While the components of a system, as well as the distribution of peripherals among satellites, are necessarily fixed in the system configuration, the specific topology of core and satellite links is flexible and can be changed whenever necessary. It is supplied to the core device by means of a routing table (see below). Kasli and Kasli-SoC devices use the SFP ports for DRTIO connections, where links should be high-speed duplex serial lines operating at 1Gbps or more.
Certain peripheral cards with onboard FPGAs of their own (e.g. Shuttler) can be configured as satellites in a DRTIO setting, allowing them to run their own subkernels and make use of DDMA. In these cases, the EEM connection to the core device is used for DRTIO communication (DRTIO-over-EEM).
.. note::
    As with other configuration changes (e.g. adding new hardware), if you are in possession of a non-distributed ARTIQ system and you'd like to expand it into a DRTIO setup, it's easily possible to do so, but you need to be sure that both master and satellite are (re)flashed with this in mind. As usual, if you obtained your hardware from M-Labs, you will normally be supplied with all the binaries you need, through ``afws_client`` or otherwise.
.. note::
    Do not confuse the DRTIO *master device* (used to mean the central controlling core device of a distributed system) with the *ARTIQ master* (the central piece of software of ARTIQ's management system, which interacts with ``artiq_client`` and the dashboard.) ``artiq_run`` can be used to run experiments on DRTIO systems just as easily as non-distributed ones, and the ARTIQ master interacts with the central core device regardless of whether it's configured as a DRTIO master or standalone.
Using DRTIO
-----------
Configuring the routing table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, DRTIO assumes a routing table for a star topology (i.e. all satellites directly connected to the master), with destination 0 being the master device's local RTIO core and destinations 1 and above corresponding to devices on the master's respective downstream ports. To use any other topology, it is necessary to supply a corresponding routing table in the form of a binary file, written to flash storage under the key ``routing_table``. The binary file is easily generated in the correct format using ``artiq_route``. This example is for a chain of 3 devices: ::
    # create an empty routing table
    $ artiq_route rt.bin init
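    # set each destination's route (assumed syntax: "set <destination> <hop> ... 0")
    $ artiq_route rt.bin set 0 0
    $ artiq_route rt.bin set 1 1 0
    $ artiq_route rt.bin set 2 1 1 0

    # display the table for verification
    $ artiq_route rt.bin show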
The binary file is then written into the core device configuration: ::

    $ artiq_coremgmt config write -f routing_table rt.bin
The local RTIO core of the master device is considered a destination like any other; it must be explicitly listed in the routing table to be accessible to kernels.
All routes must end with the local RTIO core of the destination device. Incorrect routing tables will cause ``RTIODestinationUnreachable`` exceptions.
As with other configuration changes, the core device should be restarted (``artiq_coremgmt reboot``, power cycle, etc.) for changes to take effect.
Using the core language with DRTIO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Remote channels are accessed just as local channels are (e.g., most commonly, by calling ``self.setattr_device()`` and then referencing the device by name.)
To enable distributed DMA for the master, simply provide an ``enable_ddma=True`` argument when recording a DMA sequence with :meth:`~artiq.coredevice.dma.CoreDMA.record`: ::
    # a sketch: the handle name "pulses" is illustrative
    with self.core_dma.record("pulses", enable_ddma=True):
        self.ttl0.pulse(100*ns)
        delay(100*ns)
In standalone systems, as well as in subkernels (see below), this argument is ignored; in standalone systems it is meaningless and in subkernels it must always be enabled for structural reasons.
Enabling DDMA on a purely local sequence on a DRTIO system introduces an overhead during trace recording which comes from additional processing done on the record, so careful use is advised. Due to the extra time that communicating with relevant satellites takes, an additional delay before playback may be necessary to prevent a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` when playing back a DDMA-enabled sequence.
Subkernels
----------
Rather than only offloading the RTIO channels to satellites and limiting all processing to the master core device, it is fully possible to run kernels directly on satellite devices. These are referred to as *subkernels*. Using subkernels to process and control remote RTIO channels can free up resources on the core device.
Subkernels behave for the most part like regular kernels; they accept arguments, can return values, and are marked by the decorator ``@subkernel(destination=i)``, where ``i`` is the satellite's destination number as used in the routing table. To call a subkernel, call it like any other function. There are however a few caveats:
- subkernels do not support RPCs,
- subkernels do not support (recursive) DRTIO (but they can call other subkernels and send messages to each other, see below),
- they support DMA, for which DDMA is considered always enabled,
- their return values must be fully annotated with an ARTIQ type,
- their arguments should be annotated, and only basic ARTIQ types are supported,
- they can raise exceptions, but the exceptions cannot be caught by the master (they can only be caught locally or propagated directly to the host),
- while ``self`` is allowed as an argument, it is retrieved at compile time and exists as a purely local object afterwards. Any changes made by other kernels will not be visible, and changes made locally will not be applied anywhere else.
Subkernels in practice
^^^^^^^^^^^^^^^^^^^^^^
Subkernels begin execution as soon as possible when called. By default, they are not awaited, but awaiting is necessary to receive results or exceptions. The await function ``subkernel_await(function, [timeout])`` takes as argument the subkernel to be awaited and, optionally, a timeout value in milliseconds. If the timeout is reached without response from the subkernel, a :exc:`~artiq.coredevice.exceptions.SubkernelError` is raised. If no timeout value is supplied the function waits indefinitely for the return. Negative timeout values are ignored.
For example, a subkernel performing integer addition: ::
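    # a sketch consistent with the surrounding description; the destination
    # number assumes a satellite configured as destination 1
    from artiq.experiment import *

    @subkernel(destination=1)
    def subkernel_add(a: TInt32, b: TInt32) -> TInt32:
        return a + b

    class SubkernelExperiment(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            subkernel_add(2, 2)                      # starts executing on satellite 1
            result = subkernel_await(subkernel_add)  # blocks until the return value arrives
            assert result == 4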
Subkernels are compiled after the main kernel and immediately sent to the designated satellite. When they are called, the master simply instructs the subkernel to load and run the corresponding kernel. When ``self`` is used in subkernels, it is embedded into the compiled and uploaded data; this is the reason why changes made do not propagate between kernels.
If a subkernel is called on a satellite where a kernel is already running, the newer kernel overrides silently, and the previous kernel will not be completed.
.. warning::
    Be careful with use of ``self.core.reset()`` around subkernels. Since ``self`` in subkernels is purely local, calling ``self.core.reset()`` in a subkernel will only affect that specific satellite and its own FIFOs. On the other hand, calling ``self.core.reset()`` in the master kernel will clear FIFOs in all satellites, regardless of whether a subkernel is running, but will not stop the subkernel. As a result, any event currently in a FIFO queue will be cleared, but the subkernels may continue to queue events. This is likely to result in odd behavior; it's best to avoid using ``self.core.reset()`` during the lifetime of any subkernels.
If a subkernel is complex and its binary relatively large, the delay between the call and actually running the subkernel may be substantial. If it's necessary to minimize this delay, ``subkernel_preload(function)`` should be used before the call.
While a subkernel is running, the satellite is disconnected from the RTIO interface of the master. As a result, regardless of what devices the subkernel itself uses, none of the RTIO devices on that satellite will be available to the master, nor will messages be passed on to any further satellites downstream. This applies both to regular RTIO operations and DDMA. While a subkernel is running, a satellite may use its own local DMA, but an attempt by any other device to run DDMA through the satellite will fail. Control is returned to the master when no subkernel is running -- to be sure that a device will be accessible, await before performing any RTIO operations on the affected satellite.
.. note::
    Subkernels do not exit automatically if a master kernel exits, and are seamlessly carried over between experiments. Much like RTIO events left in FIFO queues, this means subkernels left running after the end of an experiment cannot be guaranteed to complete (as they may be overridden by newer subkernels in the next experiment). Following experiments must also be aware of the risk of attempting to reach RTIO devices currently 'blocked' by an active subkernel left over from a previous experiment. This can be avoided simply by having each experiment await all of its subkernels at some point before exiting. Alternatively, if necessary, a system can be sanitized by calling trivial kernels in each satellite -- any leftover subkernels will be overridden and automatically cancelled.
Calling other kernels
^^^^^^^^^^^^^^^^^^^^^
Subkernels can call other kernels and subkernels. For a more complex example: ::
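
    from artiq.experiment import *

    class SubkernelExperiment(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")  # assumed to be located on satellite 2

        # Destination indices and device names here are illustrative.
        @subkernel(destination=1)
        def add_and_pulse(self, i: TInt32, j: TInt32) -> TInt32:
            res = i + j
            self.pulse_ttl(res)   # a subkernel calling another subkernel
            return res

        @subkernel(destination=2)
        def pulse_ttl(self, duration: TInt32) -> TNone:
            self.ttl0.pulse(duration*ms)

        @kernel
        def run(self):
            subkernel_preload(self.add_and_pulse)
            self.core.reset()
            delay(10*ms)
            self.add_and_pulse(2, 2)
            result = subkernel_await(self.add_and_pulse)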
            assert result == 4
            self.pulse_ttl(20)

In this case, without the preload, the delay after the core reset would need to be longer. Depending on the connection, the call itself may still take some time. Notice that the method ``pulse_ttl()`` can be called both within a subkernel and on its own.
.. note::
    Subkernels can call subkernels on any other satellite, not only their own. Care should be taken, however, that different kernels do not call subkernels on the same satellite, or do so only very cautiously. If, e.g., a newer call overrides a subkernel that another caller is awaiting, unpredictable timeouts or locks may result, as the original subkernel will never return. There is currently no mechanism to check whether a particular satellite is 'busy'; it is up to the programmer to handle this correctly.
Message passing
^^^^^^^^^^^^^^^
Apart from arguments and returns, subkernels can also pass messages between each other or the master kernel, using the built-in functions ``subkernel_send()`` and ``subkernel_recv()``.
The ``subkernel_send(destination, name, value)`` function requires three arguments: a destination, a name for the message (to be used for identification in the corresponding ``subkernel_recv()``), and the passed value.
The ``subkernel_recv(name, type, [timeout])`` function requires two arguments: message name (matching exactly the name provided in ``subkernel_send``) and expected type. Optionally, it also accepts a third argument, a timeout for the operation in milliseconds. As with ``subkernel_await``, the default behavior is to wait as long as necessary, and a negative argument is ignored.
A message can only be received while a subkernel is running, and is placed into a buffer to be retrieved when required. As a result, ``send`` executes independently of any receive and never deadlocks. However, a ``receive`` may time out or block (wait forever) if no message with the correct name and destination is ever sent.
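
For instance, a subkernel can wait for data sent by the master kernel. In the following sketch, the class name, destination index, message name, and values are all illustrative: ::

    from artiq.experiment import *

    class MessageDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @subkernel(destination=1)
        def simple_message(self) -> TInt32:
            data = subkernel_recv("message", TInt32)  # waits for a message named "message"
            return data + 20

        @kernel
        def run(self):
            self.core.reset()
            self.simple_message()               # start the subkernel first, so it can receive
            subkernel_send(1, "message", 150)   # send 150 to destination 1
            result = subkernel_await(self.simple_message)
            assert result == 170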

Utilities
=========
ARTIQ Firmware Service (AFWS) client
------------------------------------
This tool serves as a client for building tailored firmware and gateware on M-Labs' servers and downloading the binaries in ready-to-flash format. It is necessary to have a valid AFWS subscription to use it. Subscription also includes general helpdesk support and can be purchased or extended by contacting ``sales@``. One year of support is included with any Kasli carriers or crates containing them purchased from M-Labs. An additional one-time use is generally provided with the purchase of additional cards, to facilitate the corresponding system configuration change.
.. argparse::
    :ref: artiq.frontend.afws_client.get_argparser
    :prog: afws_client
    :nodescription:
    :nodefault:

    passwd
        .. warning::
            After receiving your credentials from M-Labs, it is recommended to change your password as soon as possible. It is your responsibility to set and remember a secure password. If necessary, passwords can be reset by contacting helpdesk@.
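
For example, a password change might look like the following sketch, which assumes the account username is passed as the first positional argument (the username shown is a placeholder): ::

    $ afws_client <username> passwd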
Static compiler
---------------
This tool compiles an experiment into an ELF file. It is primarily used to prepare binaries for the default experiment loaded in non-networked standalone mode.

.. argparse::
    :ref: artiq.frontend.artiq_compile.get_argparser
    :prog: artiq_compile
    :nodescription:
    :nodefault:
Flash storage image generator
-----------------------------
This tool compiles key/value pairs (e.g. configuration information) into a binary image suitable for flashing into the storage space of the core device. It can be used in combination with ``artiq_flash`` to configure the core device, but this is normally necessary at most to set the ``ip`` field; once the core device is reachable by network it is preferable to use ``artiq_coremgmt config``.
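
For example, a sketch of preparing an image that sets the ``ip`` field and flashing it (the address and board target are illustrative): ::

    $ artiq_mkfs flash_storage.img -s ip 192.168.1.75
    $ artiq_flash -t kasli -f flash_storage.img storage start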
.. argparse::
    :ref: artiq.frontend.artiq_mkfs.get_argparser
    :prog: artiq_mkfs
    :nodescription:
    :nodefault:
.. _flashing-loading-tool:
Flashing/Loading tool
---------------------
.. argparse::
    :ref: artiq.frontend.artiq_flash.get_argparser
    :prog: artiq_flash
    :nodefault:
.. _core-device-management-tool:
Core device management tool
---------------------------

To use this tool, it is necessary to specify the IP address your core device can currently be reached at.
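
For instance, a sketch of reading the core device log over the network (the address is illustrative): ::

    $ artiq_coremgmt -D 192.168.1.75 log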
.. argparse::
    :ref: artiq.frontend.artiq_coremgmt.get_argparser
    :prog: artiq_coremgmt
    :nodescription:
    :nodefault:
Core device logging controller
------------------------------
.. argparse::
    :ref: artiq.frontend.aqctl_corelog.get_argparser
    :prog: aqctl_corelog
    :nodefault:
Device database template generator
----------------------------------
.. argparse::
    :ref: artiq.frontend.artiq_ddb_template.get_argparser
    :prog: artiq_ddb_template
    :nodefault:
ARTIQ RTIO monitor
------------------
.. argparse::
    :ref: artiq.frontend.artiq_rtiomon.get_argparser
    :prog: artiq_rtiomon
    :nodefault:
Moninj proxy
------------
.. argparse::
    :ref: artiq.frontend.aqctl_moninj_proxy.get_argparser
    :prog: aqctl_moninj_proxy
    :nodefault:
.. _rtiomap-tool:
RTIO channel name map tool
--------------------------
.. argparse::
    :ref: artiq.frontend.artiq_rtiomap.get_argparser
    :prog: artiq_rtiomap
    :nodefault:
.. _core-device-rtio-analyzer-tool:
Core device RTIO analyzer tool
------------------------------
This tool converts core device RTIO logs to VCD waveform files that are readable by third-party tools such as GtkWave. See :ref:`rtio-analyzer-example` for an example, or ``artiq.test.coredevice.test_analyzer`` for a relevant unit test. When using the ARTIQ dashboard, recorded data can be viewed or exported directly in the integrated waveform analyzer (the "Waveform" dock).
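
For example, a sketch of retrieving the recorded data and writing it to a VCD file (the file name is illustrative): ::

    $ artiq_coreanalyzer -w rtio.vcd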
.. argparse::
    :ref: artiq.frontend.artiq_coreanalyzer.get_argparser
    :prog: artiq_coreanalyzer
    :nodescription:
    :nodefault:
.. _routing-table-tool:
Core device RTIO analyzer proxy
-------------------------------
This tool distributes the core analyzer dump to several clients such as the dashboard.
.. argparse::
    :ref: artiq.frontend.aqctl_coreanalyzer_proxy.get_argparser
    :prog: aqctl_coreanalyzer_proxy
    :nodescription:
    :nodefault:
DRTIO routing table manipulation tool
-------------------------------------
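
For example, the following sketch creates a table that routes destination 0 to the local core and destination 1 through the first downstream satellite, then writes it to the core device configuration (the file name and topology are illustrative): ::

    $ artiq_route routing.bin init
    $ artiq_route routing.bin set 0 0
    $ artiq_route routing.bin set 1 1 0
    $ artiq_coremgmt config write -f routing_table routing.bin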
.. argparse::
    :ref: artiq.frontend.artiq_route.get_argparser
    :prog: artiq_route
    :nodefault: