
doc: Minor fixes

architeuthidae 2024-08-23 12:10:54 +08:00 committed by Sébastien Bourdeauducq
parent a7e1d2b1c1
commit 0311a37e59
13 changed files with 67 additions and 63 deletions


@@ -75,7 +75,7 @@ Common system description changes
To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include:

- ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`)
- ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources, see :ref:`sequence-errors`)
- various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later).
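For instance, the fields above might appear in a system description along these lines (an illustrative sketch only; the values and the peripheral entry are hypothetical, not a drop-in configuration): ::

    {
        "target": "kasli",
        "hw_rev": "v2.0",
        "core_addr": "192.168.1.70",
        "sed_lanes": 8,
        "enable_wrpll": false,
        "peripherals": [
            {"type": "urukul", "ports": [0, 1]}
        ]
    }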
Nix development environment

@@ -84,21 +84,23 @@ Nix development environment
* Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity.
* Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options.
* Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Check out the most recent release (``git checkout release-[number]``) for a stable version.
* If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (note that the edits must be made in the main ARTIQ flake, even if you are working with Zynq; see also the tip below).
* Run ``nix develop`` at the root of the repository, where ``flake.nix`` is.
* Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`.

.. note::
   You can also target legacy versions of ARTIQ; use Git to check out older release branches. Note, however, that older releases of ARTIQ required different processes for developing and building, which you are broadly more likely to figure out by (also) consulting the corresponding older versions of the manual.

Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- :mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by running ``nix develop`` again in the same location.
.. tip::
   If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ.

   If you intend to modify the source housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: ::

      $ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH
   Note that this only applies for incremental builds. If you want to use ``nix build``, or make changes to the dependencies, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``.
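   For example, assuming the relevant input is named ``artiq`` (a hypothetical sketch; check the actual input name in your ``flake.nix``), the replacement would look like: ::

      inputs.artiq.url = "path:/absolute/path/to/your/artiq";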
Building only standard binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


@@ -5,7 +5,7 @@ The ARTIQ compiler transforms the Python code of the kernels into machine code e
ARTIQ kernel code accepts *nearly,* but not quite, a strict subset of Python 3. The necessities of real-time operation impose a harsher set of limitations; as a result, many Python features are necessarily omitted, and there are some specific discrepancies (see also :ref:`compiler-pitfalls`).

In general, ARTIQ Python supports only statically typed variables; it implements no heap allocation or garbage collection systems, essentially disallowing any heap-based data structures (although lists and arrays remain available in a stack-based form); and it cannot use runtime dispatch, meaning that, for example, all elements of an array must be of the same type. Nonetheless, technical details aside, a basic knowledge of Python is entirely sufficient to write ARTIQ experiments.
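As a brief illustration (a hypothetical sketch, not one of the manual's own examples), the following shows the kind of code these restrictions permit and forbid: ::

    @kernel
    def run(self):
        x = 0              # x is inferred as an integer
        x += 1             # fine: the type of x never changes
        # x = 1.5          # rejected: x cannot be rebound to a float
        data = [1, 2, 3]   # lists are allowed, in stack-based form
        # bad = [1, "a"]   # rejected: all elements must share one type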
.. note::
   The ARTIQ compiler is now in its second iteration. The third generation, known as NAC3, is `currently in development <https://git.m-labs.hk/M-Labs/nac3>`_, and available for pre-alpha experimental use. NAC3 represents a major overhaul of ARTIQ compilation, and will feature much faster compilation speeds, a greatly improved type system, and more predictable and transparent operation. It is compatible with ARTIQ firmware starting at ARTIQ-7. Instructions for installation and basic usage differences can also be found `on the M-Labs Forum <https://forum.m-labs.hk/d/392-nac3-new-artiq-compiler-3-prealpha-release>`_. While NAC3 is a work in progress and many important features remain unimplemented, installation and feedback are welcomed.
@@ -20,7 +20,7 @@ Functions and decorators
The ARTIQ compiler recognizes several specialized decorators, which determine the way the decorated function will be compiled and handled.

``@kernel`` (see :meth:`~artiq.language.core.kernel`) designates kernel functions, which will be compiled for and executed on the core device; the basic setup and background for kernels is detailed on the :doc:`getting_started_core` page. ``@subkernel`` (:meth:`~artiq.language.core.subkernel`) designates subkernel functions, which are largely similar to kernels except that they are executed on satellite devices in a DRTIO setting, with some associated limitations; they are described in more detail on the :doc:`using_drtio_subkernels` page.

``@rpc`` (:meth:`~artiq.language.core.rpc`) designates functions to be executed on the host machine, which are compiled and run in regular Python, outside of the core device's real-time limitations. Notably, functions without decorators are assumed to be host-bound by default, and treated identically to an explicitly marked ``@rpc``. As a result, the explicit decorator is only really necessary when specifying additional flags (for example, ``flags={"async"}``, see below).
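A minimal sketch putting the decorators together (the class and method names are hypothetical; the decorators and the ``flags={"async"}`` syntax are as documented here): ::

    from artiq.experiment import *

    class DecoratorDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        def plain_host(self, x):        # no decorator: implicitly an RPC
            print("host received:", x)

        @rpc(flags={"async"})
        def fire_and_forget(self, x):   # asynchronous RPC: the kernel does not wait
            print("async host received:", x)

        @kernel
        def run(self):                  # compiled for and executed on the core device
            self.core.reset()
            self.plain_host(1)
            self.fire_and_forget(2)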
@@ -135,7 +135,7 @@ ARTIQ makes various useful built-in and mathematical functions from Python, NumP
  - ``print()`` (with caveats; see below)
  - all basic type conversions (``int()``, ``float()`` etc.)

+ `NumPy mathematic utilities <https://numpy.org/doc/stable/reference/routines.math.html>`_:

  - ``sqrt()``, ``cbrt()``
  - ``fabs()``, ``fmax()``, ``fmin()``
  - ``floor()``, ``ceil()``, ``trunc()``, ``rint()``

+ `NumPy exponents and logarithms <https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms>`_:
@@ -154,12 +154,12 @@ ARTIQ makes various useful built-in and mathematical functions from Python, NumP
  - ``gamma()``, ``gammaln()``
  - ``j0()``, ``j1()``, ``y0()``, ``y1()``

Basic NumPy array handling (``np.array()``, ``numpy.transpose()``, ``numpy.full()``, ``@``, element-wise operation, etc.) is also available. NumPy functions are implicitly broadcast when applied to arrays.
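For instance (an illustrative sketch; assumes ``import numpy as np`` on the host side): ::

    @kernel
    def array_demo(self):
        a = np.array([1.0, 4.0, 9.0])
        b = np.sqrt(a)    # implicitly broadcast element-wise over the array
        c = a + b         # element-wise arithmetic
        d = a @ b         # vector dot product via the @ operator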
Print and logging functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^

ARTIQ offers two native built-in logging functions: ``rtio_log()``, which prints to the :ref:`RTIO log <rtio-analyzer>`, as retrieved by :mod:`~artiq.frontend.artiq_coreanalyzer`, and ``core_log()``, which prints directly to the core log, regardless of context or network connection status. Both exist for debugging purposes, especially in contexts where a ``print()`` RPC is not suitable, such as in idle/startup kernels or when debugging delicate RTIO slack issues which may be significantly affected by the overhead of ``print()``.
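For instance (a hypothetical sketch; the channel name ``"loop"`` and the ``ttl0`` device are placeholders): ::

    @kernel
    def run(self):
        self.core.reset()
        for i in range(8):
            rtio_log("loop", "iteration", i)   # recorded in the RTIO log
            self.ttl0.pulse(2*us)
            delay(2*us)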
``print()`` itself is in practice an RPC to the regular host Python ``print()``, i.e. with output either in the terminal of :mod:`~artiq.frontend.artiq_run` or in the client logs when using :mod:`~artiq.frontend.artiq_dashboard` or :mod:`~artiq.frontend.artiq_compile`. This means on one hand that it should not be used in idle, startup, or subkernels, and on the other hand that it suffers from some of the timing limitations of any other RPC, especially if the RPC queue is full. Accordingly, it is important to be aware that the timing of ``print()`` outputs can't reliably be used to debug timing in kernels, and especially not the timing of other RPCs.


@@ -19,7 +19,7 @@ Full support for a specific device, called a network device support package or N
1. The `driver`, which contains the Python API functions to be called over the network and performs the I/O to the device. The top-level module of the driver should be called ``artiq.devices.XXX.driver``.
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and should be called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it. (A minimal sketch of a driver and controller follows this list.)
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data, which would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters, must be transferred over the network API.
4. An optional `mediator`, which is code executed on the client that supplements the network API. A mediator may contain kernels that control real-time signals such as TTL lines connected to the device. Simple devices use the network API directly and do not have a mediator. Mediator modules are called ``artiq.devices.XXX.mediator`` and their public classes are exported at the ``artiq.devices.XXX`` level (via ``__init__.py``) for direct import and use by the experiments.
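A bare-bones sketch of the first two components, for a hypothetical ``hello`` device (argument parsing and error handling elided; ``simple_server_loop`` is the standard ``sipyco`` entry point): ::

    # artiq/devices/hello/driver.py
    class Hello:
        def message(self, msg):
            print("message:", msg)

    # artiq/frontend/aqctl_hello.py
    from sipyco.pc_rpc import simple_server_loop
    from artiq.devices.hello.driver import Hello

    def main():
        # serve the driver as RPC target "hello" on TCP port 3249
        simple_server_loop({"hello": Hello()}, "::1", 3249)

    if __name__ == "__main__":
        main()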
The driver and controller
@@ -213,7 +213,7 @@ Command line and options
^^^^^^^^^^^^^^^^^^^^^^^^

* Controllers should be able to operate in "simulation" mode, specified with ``--simulation``, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file (see the sketch after this list).
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter names.
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it.
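A sketch of how these conventions translate into ``argparse`` (for the hypothetical ``hello`` controller; only the two options discussed above are shown): ::

    import argparse

    def get_argparser():
        parser = argparse.ArgumentParser(description="Hello device controller")
        parser.add_argument("-d", "--device", default=None,
                            help="device identification, e.g. serial number or /dev entry")
        parser.add_argument("--simulation", action="store_true",
                            help="run in simulation mode, without hardware")
        return parser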
Style


@@ -37,13 +37,13 @@ Note that the key (the name of the device) is ``led`` and the value is itself a
.. warning::
   It is important to understand that the device database does not *set* your system configuration, only *describe* it. If you change the devices available to your system, it is usually necessary to edit the device database, but editing the database will not change what devices are available to your system.

   Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, real-time, e.g. your Sinara hardware) must be connected to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.

   While controllers can be added to and removed from your device database on an *ad hoc* basis, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will be provided with new binaries and necessary assistance.) See :doc:`building_developing`.

Adding or removing new real-time hardware is a difference in *system configuration,* which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON usually called the :ref:`system description file <system-description>`. The device database generally provides ARTIQ with the information that can change from one instance in which ARTIQ is run to another, e.g., device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines the information which is *not* permitted to change, e.g., what device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.

If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add controllers, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the :mod:`~artiq.frontend.artiq_ddb_template` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly. See also the corresponding entry in :ref:`Utilities <ddb-template-tool>`.
Local devices
^^^^^^^^^^^^^
@@ -61,7 +61,7 @@ Controllers
Controller entries are dictionaries which contain a ``type`` field set to ``controller``. When an experiment requests such a device, an RPC client (see ``sipyco.pc_rpc``) is created and connected to the appropriate controller. Controller entries are also used by controller managers to determine what controllers to run. For an example, see :ref:`the NDSP development page <ndsp-integration>`.

The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matching the ``port`` field) and an appropriate bind address for the controller's listening socket.

An optional ``best_effort`` boolean field determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. ``BestEffortClient`` is very similar to ``Client``, but suppresses network errors and automatically retries connections in the background. If no ``best_effort`` field is present, ``Client`` is used by default.
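Putting those fields together, a minimal controller entry might look like this (an illustrative sketch reusing the hypothetical ``hello`` controller; host, port, and target are placeholders): ::

    device_db["hello"] = {
        "type": "controller",
        "host": "::1",
        "port": 3249,
        "target": "hello",
        # launched by the controller manager; {port} and {bind} are substituted
        "command": "aqctl_hello -p {port} --bind {bind}",
        "best_effort": True
    }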
@@ -80,7 +80,7 @@ Arguments are values that parameterize the behavior of an experiment. ARTIQ supp
Datasets
--------

Datasets are values, kept in a key-value store, that are read and written by experiments. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`data interfaces tutorial <mgmt-datasets>`.

A dataset may be broadcast (``broadcast=True``), that is, distributed to all clients connected to the master. This is useful e.g. for the ARTIQ dashboard to plot results while an experiment is in progress and give rapid feedback to the user. Broadcasted datasets live in a global key-value store owned by the master. Care should be taken that experiments use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for instance, a periodic calibration experiment might update a dataset read by payload experiments.
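For example (a hypothetical sketch using the standard dataset methods; assumes ``import numpy as np``): ::

    def run(self):
        # broadcast so the dashboard and other experiments see live updates
        self.set_dataset("parabola", np.full(10, np.nan), broadcast=True)
        for i in range(10):
            self.mutate_dataset("parabola", i, i * i)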


@@ -80,14 +80,14 @@ Inconsistent DRTIO connections, especially with odd or absent errors in the core
add or remove EEM peripherals or DRTIO satellites?
--------------------------------------------------

Adding new real-time hardware to an ARTIQ system almost always means reflashing the core device; if you are adding new satellite core devices, they will have to be flashed as well. If you have obtained your upgrades from M-Labs or QUARTIQ, updated binaries and reflashing support will normally be offered to you directly. In any other case, track down your JSON system description file(s), bring them up to date with the updated state of your system, and see :doc:`building_developing`.

Once you have an updated set of binaries, reflash the core device, following the instructions in :doc:`flashing`. Be sure to update your device database before starting experimentation; run :mod:`~artiq.frontend.artiq_ddb_template` on your system description(s) to update the local devices, and copy over any aliases or entries for NDSP controllers you may have been using. Note that the device database is a Python file, and the generated file of local devices can also simply be imported into the final version, allowing for dynamic modifications, especially in complex systems that may have multiple device databases in use.
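For instance, if the generated local devices were written to a separate file (hypothetically ``device_db_gen.py``; the file name and alias are placeholders), the final ``device_db.py`` might import and extend it: ::

    from device_db_gen import device_db   # local devices from artiq_ddb_template

    device_db["led"] = "led0"   # aliases and controller entries are then layered on top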
see command-line help?
----------------------

Like most, if not all, terminal utilities, ARTIQ commands, tools and applets print their help messages directly into the terminal and exit when run with the flag ``--help`` or ``-h``: ::

    $ artiq_run -h
@@ -102,14 +102,14 @@ The official examples are stored in the ``examples`` folder of the ARTIQ package
    python3 -c "import artiq; print(artiq.__path__[0])"

Copy the ``examples`` folder from that path into your home or user directory, and start experimenting! (Note that some examples have dependencies not included with a standard ARTIQ install, like matplotlib and numba. To run those examples properly, make sure those modules are accessible.)

If you have progressed past this level and would like to see more in-depth code or real-life examples of how other groups have handled running experiments with ARTIQ, see the "Community code" directory on the M-Labs `resources page <https://m-labs.hk/experiment-control/resources/>`_.
fix ``failed to connect to moninj`` in the dashboard?
-----------------------------------------------------

This and other similar messages almost always indicate that your device database lists controllers (for example, ``aqctl_moninj_proxy``) that either haven't been started or aren't reachable at the given host and port. See :ref:`mgmt-ctlmgr`, or simply run: ::

    $ artiq_ctlmgr


@@ -25,7 +25,7 @@ Installing and configuring OpenOCD
----------------------------------

.. warning::
   These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility :mod:`~artiq.frontend.artiq_flash`. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.

ARTIQ supplies the utility :mod:`~artiq.frontend.artiq_flash`, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the latter is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.
@@ -83,7 +83,7 @@ First ensure the board is connected to your computer. In the case of Kasli, the
For Kasli-SoC or ZC706:

::

    $ artiq_coremgmt [-D IP_address] config write -f boot <afws_directory>/boot.bin
    $ artiq_coremgmt reboot

If the device is not reachable due to corrupted firmware or networking problems, extract the SD card and copy ``boot.bin`` onto it manually.
@@ -91,12 +91,12 @@ For Kasli-SoC or ZC706:
For Kasli:

::

    $ artiq_flash -d <afws_directory>

For KC705:

::

    $ artiq_flash -t kc705 -d <afws_directory>

The SW13 switches need to be set to 00001.


@@ -9,12 +9,12 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
            self.core.reset()
            self.led0.on()

The central part of our code is our ``LED`` class, which derives from :class:`~artiq.language.environment.EnvExperiment`. Almost all experiments should derive from this class, which provides access to the environment as well as including the necessary experiment framework from the base-level :class:`~artiq.language.environment.Experiment`. It will call our :meth:`~artiq.language.environment.HasEnvironment.build` at the right time and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` we use to gain access to our devices ``core`` and ``led0``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method is a kernel and must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host).
@@ -46,7 +46,7 @@ Modify ``led.py`` as follows: ::
    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
@@ -54,9 +54,9 @@ Modify ``led.py`` as follows: ::
            s = input_led_state()
            self.core.break_realtime()
            if s:
                self.led0.on()
            else:
                self.led0.off()

You can then turn the LED off and on by entering 0 or 1 at the prompt that appears: ::
@@ -70,7 +70,7 @@ What happens is that the ARTIQ compiler notices that the ``input_led_state`` fun
The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. See also :ref:`compiler-types`.
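For context, the tutorial's ``input_led_state()`` is a host function along these lines (a sketch, since the listing itself falls outside this excerpt): ::

    def input_led_state() -> TBool:
        return input("Enter desired LED state: ") == "1"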
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :doc:`rtio` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
Real-time Input/Output (RTIO)
-----------------------------
@@ -97,7 +97,7 @@ Create a new file ``rtio.py`` containing the following: ::
In its :meth:`~artiq.language.environment.HasEnvironment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database. In ARTIQ, TTL is used roughly synonymously with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.

When :meth:`~artiq.language.environment.Experiment.run` is called, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to. Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode. There are no input-only TTL channels.

The experiment then drives one million 2 µs long pulses separated by 2 µs each. Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run rtio.py``. Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%. This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable. Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.
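The loop in question is essentially the following (a sketch, as the full ``rtio.py`` listing falls outside this excerpt): ::

    for i in range(1000000):
        self.ttl0.pulse(2*us)   # 2 µs high
        delay(2*us)             # 2 µs low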
@@ -169,19 +169,22 @@ Within a parallel block, some statements can be scheduled sequentially again usi
        delay(4*us)

.. warning::
   ``with parallel`` specifically 'parallelizes' the *top-level* statements inside a block. Consider as an example:

   .. code-block::
      :linenos:

      for i in range(1000000):
          with parallel:
              self.ttl0.pulse(2*us)
              if True:
                  self.ttl1.pulse(2*us)
                  self.ttl2.pulse(2*us)
          delay(4*us)

   This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of line #4; it will not repeat the reset at the deeper indentation level for #5 or #6.

   In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 µs. Internally, lines #5 and #6, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To schedule #5 and #6 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
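   That nested variant would look like this (a sketch derived directly from the statement above): ::

      for i in range(1000000):
          with parallel:
              self.ttl0.pulse(2*us)
              if True:
                  with parallel:
                      self.ttl1.pulse(2*us)
                      self.ttl2.pulse(2*us)
          delay(4*us)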
   Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to cause sequencing issues in the gateware; see also :ref:`sequence-errors`.
@@ -223,7 +226,7 @@ The ``<file_name>.vcd`` file should be immediately created and written. Check th
Tutorials on GTKWave options (or other third-party tools) and how best to view VCD files can be found online. By default, the data in a trace like ``rtio_slack`` will probably be presented in a raw form. To see a stepped wave as in the ARTIQ dashboard, look for options to interpret the data as a real number, then as an analog signal.

Pay attention to the timescale of the waveform dock in your chosen viewer; if you have set your signals to display but nothing is visible, it is likely zoomed in or out much too far.

The easiest way to view recorded analyzer data, however, is directly in the ARTIQ dashboard, a feature which will be presented later in :ref:`interactivity-waveform`.
@@ -232,7 +235,7 @@ The easiest way to view recorded analyzer data, however, is directly in the ARTI
Direct Memory Access (DMA)
--------------------------

DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up' to ``now_mu``. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.

Try this: ::
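    # The listing that follows "Try this" is truncated in this diff; this is a
    # sketch of the recording/playback pattern it introduces, assuming a
    # ``core_dma`` device and a ``ttl0`` output from the standard setup.
    @kernel
    def record(self):
        with self.core_dma.record("pulses"):
            # all RTIO operations now go to the "pulses" DMA buffer
            for i in range(50):
                self.ttl0.pulse(100*ns)
                delay(100*ns)

    @kernel
    def run(self):
        self.core.reset()
        self.record()
        # fetch the handle once, then replay the sequence cheaply
        pulses_handle = self.core_dma.get_handle("pulses")
        self.core.break_realtime()
        for i in range(5):
            self.core_dma.playback_handle(pulses_handle)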


@@ -54,7 +54,7 @@ Return to the terminal where the master is running. You should see an output sim
    INFO:worker(0,mgmt_tutorial.py):print:Hello World

In other words, a worker created by the master has executed the experiment and carried out the print instruction. Congratulations!

.. tip::
@@ -74,7 +74,7 @@ You may also notice that the master has created some other organizational files
Running the dashboard and controller manager
--------------------------------------------

Submitting experiments with :mod:`~artiq.frontend.artiq_client` has some interesting qualities: for instance, experiments can be requested simultaneously by different clients and be relied upon to execute cleanly in sequence, which is useful in a distributed context. On the other hand, on a local level, it doesn't necessarily carry many practical advantages over using :mod:`~artiq.frontend.artiq_run`. The real convenience of the management system lies in its GUI, the dashboard. We will now try submitting an experiment using the dashboard.

First, start the controller manager: ::
@@ -121,7 +121,7 @@ You can ask it to do this through the command-line client: ::
or you can right-click in the Explorer and select 'Scan repository HEAD'. Now you should be able to select and submit the new experiment.

If you switch the 'Log' dock to its 'Schedule' tab while the experiment is still running, you will see the experiment appear, displaying its RID, status, priority, and other information. Click 'Submit' again while the first experiment is in progress, and a second iteration of the experiment will appear in the Schedule, queued up to execute next in line.

.. note::
   You may have noted that experiments can be submitted with a due date, a priority level, a pipeline identifier, and other specific settings. Some of these are self-explanatory. Many are scheduling-related. For more information on experiment scheduling, see :ref:`experiment-scheduling`.

@@ -180,7 +180,7 @@ Setting up Git integration

So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with tracking modifications to experiments and with traceability to a particular version of an experiment.

.. note::
    The workflow we will describe in this tutorial corresponds to a situation where the computer running the ARTIQ master is also used as a Git server to which multiple users may contribute code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head. See the :ref:`Management system <mgmt-git-integration>` page for notes on alternate workflows.

We will use our current ``repository`` folder as the working directory for making local modifications to the experiments, move it away from the master's data directory, and replace it with a new ``repository`` folder, which will hold only the Git data used by the master. Stop the master with Ctrl+C and enter the following commands: ::
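
    # paths assume the tutorial's ~/artiq-master data directory
    $ cd ~/artiq-master
    $ mv repository ~/artiq-work
    $ mkdir repository
    $ cd repository
    $ git init --bare
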


@@ -82,9 +82,6 @@ Installing multiple packages and making them visible to the ARTIQ commands requi

      };
    }

.. note::
    You might consider adding matplotlib and numba in particular, as these are required by certain ARTIQ example experiments.

You can now spawn a shell containing these packages by running ``$ nix shell`` in the directory containing the ``flake.nix``. This should make both the ARTIQ commands and all the additional packages available to you. You can exit the shell with Control+D or with the command ``exit``. A first execution of ``$ nix shell`` may take some time, but for any future repetitions Nix will use cached packages and startup should be much faster.

You might be interested in creating multiple directories containing different ``flake.nix`` files which represent different sets of packages for different purposes. If you are familiar with Conda, using Nix in this way is similar to having multiple Conda environments.


@@ -10,7 +10,7 @@ Components

Master
^^^^^^

The :ref:`ARTIQ master <frontend-artiq-master>` is responsible for managing the dataset and device databases, the experiment repository, scheduling and running experiments, archiving results, and distributing real-time results. It is a headless component, and one or several clients (command-line or GUI) use the network to interact with it.

The master expects to be given a directory on startup, the experiment repository, containing the experiments, which are automatically tracked and communicated to clients. By default, it simply looks for a directory called ``repository``. The ``-r`` flag can be used to specify an alternate location.
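
For example, to point the master at a repository elsewhere (path hypothetical): ::

    $ artiq_master -r /path/to/my/repository
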
@@ -44,6 +44,8 @@ The controller manager is provided in the ``artiq-comtools`` package (which is a

A controller manager connects to the master and accesses the device database through it to determine what controllers need to be run. The local network address of the connection is used to filter for only those controllers allocated to the current node. Hostname resolution is supported. Changes to the device database are tracked and controllers will be stopped and started accordingly.

.. _mgmt-git-integration:

Git integration
---------------

@@ -67,7 +69,7 @@ Basics

To make more efficient use of hardware resources, experiments are generally split into three phases and pipelined, such that potentially compute-intensive pre-computation or analysis phases may be executed in parallel with the bodies of other experiments, which access hardware.

.. seealso::
    These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`, which additionally provides access to the methods of :class:`artiq.language.environment.HasEnvironment`.

There are three stages of a standard experiment users may write code in:
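
1. The preparation stage, implemented in ``prepare``
2. The running stage, implemented in ``run``
3. The analysis stage, implemented in ``analyze``

A minimal sketch of this structure (the dataset logic is hypothetical): ::

    from artiq.experiment import *

    class ThreeStageSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        def prepare(self):
            # pre-computation; may overlap with other experiments' run stages
            self.points = [0.1*i for i in range(10)]

        def run(self):
            # the only stage with access to hardware
            pass

        def analyze(self):
            # post-processing once run() has completed
            pass
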


@@ -24,7 +24,7 @@ RTIO timestamps, the timeline cursor, and the ``rtio_counter_mu`` wall clock are

Absolute timestamps can be large numbers. They are represented internally as 64-bit integers. With a typical one-nanosecond machine unit, this covers a range of hundreds of years. Conversions between such a large integer number and a floating point representation can cause loss of precision through cancellation. When computing the difference of absolute timestamps, use ``self.core.mu_to_seconds(t2-t1)``, not ``self.core.mu_to_seconds(t2)-self.core.mu_to_seconds(t1)`` (see :meth:`~artiq.coredevice.core.Core.mu_to_seconds`). When accumulating time, do it in machine units and not in SI units, so that rounding errors do not accumulate.
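
For instance, a short sketch of timing a section of kernel code (names as in a typical experiment): ::

    # inside a kernel method; self.core is the core device driver
    t1 = self.core.get_rtio_counter_mu()
    # ... section to be timed ...
    t2 = self.core.get_rtio_counter_mu()
    # subtract in machine units first, then convert once
    dt = self.core.mu_to_seconds(t2 - t1)
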

.. note::
    Absolute timestamps are also referred to as *RTIO fine timestamps,* because they run on a significantly finer resolution than the timestamps of the so-called *coarse RTIO clock,* the actual clocking signal provided to or generated by the core device. The frequency of the coarse RTIO clock is set by the core device :ref:`clocking settings <core-device-clocking>` but is most commonly 125MHz, which corresponds to eight one-nanosecond machine units per coarse RTIO cycle.

The *coarse timestamp* of an event is its timestamp according to the lower resolution of the coarse clock. It is in practice a truncated version of the fine timestamp. In general, ARTIQ offers *precision* on the fine level, but *operates* at the coarse level; this is rarely relevant to the user, but understanding it may clarify the behavior of some RTIO issues (e.g. sequence errors).

@@ -103,7 +103,7 @@ Once the timeline cursor has overtaken the wall clock, the exception does not re

To track down :class:`~artiq.coredevice.exceptions.RTIOUnderflow` exceptions in an experiment there are a few approaches:

* Exception backtraces show where underflow has occurred while executing the code.
* The :ref:`integrated logic analyzer <rtio-analyzer>` shows the timeline context that led to the exception. The analyzer is always active and supports plotting of RTIO slack. This makes it possible to visually find where and how an experiment has 'run out' of positive slack.
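
As a minimal sketch of handling the exception (assuming a TTL output named ``ttl0``): ::

    from artiq.experiment import *
    from artiq.coredevice.exceptions import RTIOUnderflow

    class UnderflowRecovery(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")

        @kernel
        def run(self):
            self.core.reset()
            try:
                delay(-1*s)  # deliberately put the cursor behind the wall clock
                self.ttl0.pulse(1*us)
            except RTIOUnderflow:
                # recover by moving the cursor back ahead of the wall clock
                self.core.break_realtime()
                self.ttl0.pulse(1*us)
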
.. _sequence-errors: .. _sequence-errors:

@@ -119,7 +119,7 @@ If an event with a timestamp coarsely equal to or lesser than the previous times

By default, the ARTIQ SED has eight lanes, which normally suffices to avoid sequence errors, but problems may still occur if many (>8) events are issued to the gateware with interleaving timestamps. Due to the strict timing limitations imposed on RTIO gateware, it is not possible for the SED to rearrange events in a lane once submitted, nor to anticipate future events when making lane choices. This makes sequence errors fairly 'unintelligent', but also generally fairly easy to eliminate by manually rearranging the generation of events (*not* rearranging the timing of the events themselves, which is rarely necessary.)

It is also possible to increase the number of SED lanes in the gateware, which will reduce the frequency of sequencing issues, but will correspondingly put more stress on FPGA resources and timing.

Other notes:

@@ -143,7 +143,7 @@ Like sequence errors, collisions originate in gateware and do not stop the execu

Busy errors
^^^^^^^^^^^

A busy error occurs when at least one output event could not be executed because the output channel was already busy executing an event. This differs from a collision error in that a collision is triggered when a sequence of events overwhelms *communication* with a channel, and a busy error is triggered when *execution* is overwhelmed. Busy errors are only possible in the context of single events with execution times longer than a coarse RTIO clock cycle; the exact parameters will depend on the nature of the output channel (e.g. the specific peripheral device).

Offending event(s) are discarded and the problem is reported asynchronously via the core log.


@@ -39,15 +39,15 @@ As long as ``archive=False`` is not explicitly set, datasets are among the infor

You can open the result file for this experiment with HDFView, h5dump, or any similar third-party tool. Observe that it contains the dataset we just generated, as well as other useful information such as RID, run time, start time, and the Git commit ID of the repository at the time of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645``).

.. tip::
    If you are not familiar with Git, try running ``git log`` in either of your connected Git repositories to see a history of commits in the repository which includes their respective hashes. As long as this history remains intact, you can use a hash of this kind to uniquely identify, and even retrieve, the state of the files in the repository at the time this experiment was run. In other words, when running experiments from a Git repository, it's always possible to retrieve the code that led to a particular set of results.
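
As a sketch, such a file can also be inspected programmatically with ``h5py`` (the file name here is hypothetical, and the layout shown assumes datasets are stored under the ``datasets`` group): ::

    import h5py

    # results are filed by date under the master's results directory
    with h5py.File("results/2024-08-23/12/000000042-MyExperiment.h5", "r") as f:
        print(f["rid"][()])                  # run ID
        print(f["datasets"]["parabola"][:])  # the dataset generated above
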

Applets
^^^^^^^

Most of the time, rather than the HDF dump, we would like to see our result datasets in a readable graphical form, preferably without opening any third-party applications. In the ARTIQ dashboard, this is achieved by programs called "applets". Applets provide simple, modular GUI features, and are run independently from the dashboard as separate processes for modularity and resilience. ARTIQ supplies several applets for basic plotting in the :mod:`artiq.applets` module, and provides interfaces so users can write their own.

.. seealso::
    When developing your own applets, see also the references provided on the :ref:`Management system reference<applet-references>` page of this manual.

For our ``parabola`` dataset, we will create an XY plot using the provided :mod:`artiq.applets.plot_xy`. Applets are configured with simple command line options. To figure out what configurations are accepted, use the ``-h`` flag, as in: ::
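
    $ python3 -m artiq.applets.plot_xy -h
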


@@ -3,14 +3,14 @@ Using DRTIO and subkernels

In larger or more spread-out systems, a single core device might not be suited to managing all the RTIO operations or channels necessary. For these situations ARTIQ supplies Distributed Real-Time IO, or DRTIO. This allows systems to be configured with some or all of their RTIO channels distributed to one or several *satellite* core devices, which are linked to the *master* core device. These remote channels are then accessible in kernels on the master device exactly like local channels.

While the components of a system, as well as the distribution of peripherals among satellites, are necessarily fixed in the system configuration, the specific topology of master and satellite links is flexible and can be changed whenever necessary. It is supplied to the core device by means of a routing table (see below). Kasli and Kasli-SoC devices use SFP ports for DRTIO connections. Links should be high-speed duplex serial lines operating at 1Gbps or more.
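
As a preview, a typical flow for creating and applying a routing table looks something like this (hop assignments illustrative): ::

    $ artiq_route rtable.bin init
    $ artiq_route rtable.bin set 0 0
    $ artiq_route rtable.bin set 1 1 0
    $ artiq_coremgmt config write -f routing_table rtable.bin
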
Certain peripheral cards with onboard FPGAs of their own (e.g. Shuttler) can be configured as satellites in a DRTIO setting, allowing them to run their own subkernels and make use of DDMA. In these cases, the EEM connection to the core device is used for DRTIO communication (DRTIO-over-EEM).

.. note::
    As with other configuration changes (e.g. adding new hardware), if you are in possession of a non-distributed ARTIQ system and you'd like to expand it into a DRTIO setup, it's easily possible to do so, but you need to be sure that both master and satellite are (re)flashed with this in mind. As usual, if you obtained your hardware from M-Labs, you will normally be supplied with all the binaries you need, through :mod:`~artiq.frontend.afws_client` or otherwise.

.. warning::
    Do not confuse the DRTIO *master device* (used to mean the central controlling core device of a distributed system) with the *ARTIQ master* (the central piece of software of ARTIQ's management system, which interacts with :mod:`~artiq.frontend.artiq_client` and the dashboard.) :mod:`~artiq.frontend.artiq_run` can be used to run experiments on DRTIO systems just as easily as non-distributed ones, and the ARTIQ master interacts with the central core device regardless of whether it's configured as a DRTIO master or standalone.

Using DRTIO
-----------

@@ -87,7 +87,7 @@ Enabling DDMA on a purely local sequence on a DRTIO system introduces an overhea

Subkernels
----------

Rather than only offloading the RTIO channels to satellites and limiting all processing to the master core device, it is also possible to run kernels directly on satellite devices. These are referred to as *subkernels*. Using subkernels to process and control remote RTIO channels can free up resources on the core device.

Subkernels behave for the most part like regular kernels; they accept arguments, can return values, and are marked by the decorator ``@subkernel(destination=i)``, where ``i`` is the satellite's destination number as used in the routing table. To call a subkernel, call it like any other function. There are however a few caveats:

@@ -174,7 +174,7 @@ Subkernels can call other kernels and subkernels. For a more complex example: ::

In this case, without the preload, the delay after the core reset would need to be longer. Depending on the connection, the call may still take some time in itself. Notice that the method ``pulse_ttl()`` can be called both within a subkernel and on its own.

.. note::
    Subkernels can call subkernels on any other satellite, not only their own. Care should however be taken that different kernels do not call subkernels on the same satellite, or only very cautiously. If, e.g., a newer call overrides a subkernel that another caller is awaiting, unpredictable timeouts or locks may result, as the original subkernel will never return. There is no mechanism to check whether a particular satellite is 'busy'; it is up to the programmer to handle this correctly.

Message passing
^^^^^^^^^^^^^^^