
doc: Building + developing rewrite (#2496)

Co-authored-by: architeuthidae <am@m-labs.hk>
architeuthidae 2024-07-27 22:14:25 +08:00 committed by GitHub
parent 191494e430
commit 6698a6f80c
7 changed files with 258 additions and 30 deletions


@ -0,0 +1,250 @@
Building and developing ARTIQ
=============================
.. warning::
This section is only for software or FPGA developers who want to modify ARTIQ. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we usually provide board binaries for you; you can use the AFWS client to get updated versions if necessary, as described in :ref:`obtaining-binaries`. It is not necessary to build them yourself.
The easiest way to obtain an ARTIQ development environment is via the `Nix package manager <https://nixos.org/nix/>`_ on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
ARTIQ itself does not depend on Nix, and it is also possible to obtain everything from source (look into the ``flake.nix`` file to see what resources are used, and run the commands manually, adapting to your system) - but Nix makes the process a lot easier.
Installing Vivado
-----------------
It is necessary to independently install AMD's `Vivado <https://www.xilinx.com/support/download.html>`_, which requires a login for download and can't be automatically obtained by package managers. The "appropriate" Vivado version to use for building gateware and firmware can vary. Some versions contain bugs that lead to hidden or visible failures; others work fine. Refer to the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs.
.. tip::
Text-search ``flake.nix`` for a mention of ``/opt/Xilinx/Vivado``. Given e.g. the line ::
profile = "set -e; source /opt/Xilinx/Vivado/2022.2/settings64.sh"
the intended Vivado version is 2022.2.
Download and run the official installer. If using NixOS, note that this will require a FHS chroot environment; the ARTIQ flake provides such an environment, which you can enter with the command ``vivado-env`` from the development environment (i.e. after ``nix develop``). Other tips:
- Be aware that Vivado is notoriously not a lightweight piece of software and you will likely need **at least 70 GB** of free space to install it.
- If you do not want to write to ``/opt``, you can install into a folder of your home directory.
- During the Vivado installation, uncheck ``Install cable drivers`` (they are not required, as we use better open source alternatives).
- If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
- Vivado installation issues are not uncommon. Searching for similar problems on `the M-Labs forum <https://forum.m-labs.hk/>`_ or `Vivado's own support forums <https://support.xilinx.com/s/topic/0TO2E000000YKXwWAO/installation-and-licensing>`_ might be helpful when looking for solutions.
.. _system-description:
System description file
-----------------------
ARTIQ gateware and firmware binaries are dependent on the system configuration. In other words, a specific set of ARTIQ binaries is bound to the exact arrangement of real-time hardware it was generated for: the core device itself, its role in a DRTIO context (master, satellite, or standalone), the (real-time) peripherals in use, the physical EEM ports they will be connected to, and various other basic specifications. This information is normally provided to the software in the form of a JSON file called the system description or system configuration file.
.. warning::
System configuration files are only used with Kasli and Kasli-SoC boards. KC705 and ZC706 ARTIQ configurations, due to their relative rarity and specialization, are handled on a case-by-case basis and selected through a variant name such as ``nist_clock``, with no system description file necessary. See below in :ref:`building` for where to find the list of supported variants. Writing new KC705 or ZC706 variants is not a trivial task, and not particularly recommended, unless you are an FPGA developer and know what you're doing.
If you already have your system configuration file on hand, you can edit it to reflect any changes in configuration. If you purchased your original system from M-Labs, or recently purchased new hardware to add to it, you can obtain your up-to-date system configuration file through AFWS at any time using the command ``$ afws_client get_json`` (see :ref:`AFWS client<afws-client>`). If you are starting from scratch, a close reading of ``coredevice_generic.schema.json`` in ``artiq/coredevice`` will be helpful.
System descriptions do not need to be very complex. At its most basic, a system description looks something like: ::
{
"target": "kasli",
"variant": "example",
"hw_rev": "v2.0",
"base": "master",
"peripherals": [
{
"type": "grabber",
"ports": [0]
}
]
}
Only these five fields are required, and the ``peripherals`` list can in principle be empty. A limited number of more extensive examples can currently be found in `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq/src/branch/master>`_, as well as in the main repository under ``artiq/examples/kasli_shuttler``. Once your system description file is complete, you can use ``artiq_ddb_template`` (see also :ref:`Utilities <ddb-template-tool>`) to test it and to generate a template for the corresponding :ref:`device database <device-db>`.
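For instance, if the description above were saved as ``example.json`` (a hypothetical file name), a device database template could be generated with: ::
    $ artiq_ddb_template example.json > device_db.py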
DRTIO descriptions
^^^^^^^^^^^^^^^^^^
Note that in DRTIO systems it is necessary to create one description file *per core device*. Satellites and their connected peripherals must be described separately. Satellites also need to be reflashed separately, albeit only if their own system descriptions have changed. (The layout of satellites relative to the master is configurable on the fly and will be established much later, in the routing table; see :ref:`drtio-routing`. It is not necessary to rebuild or reflash when only the DRTIO routing table changes.)
In contrast, only one device database should be generated even for a DRTIO system. Use a command of the form: ::
$ artiq_ddb_template -s 1 <satellite1>.json -s 2 <satellite2>.json <master>.json
The numbers designate each satellite's destination number, which must correspond to the destination numbers used when generating the routing table later.
Common system description changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include:
- ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`)
- ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources; see :ref:`sequence-errors`)
- various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later).
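As an illustrative sketch only (the values shown are arbitrary, not recommendations), extending the minimal example above with some of these fields might give: ::
    {
        "target": "kasli",
        "variant": "example",
        "hw_rev": "v2.0",
        "base": "master",
        "core_addr": "192.168.1.75",
        "enable_wrpll": true,
        "sed_lanes": 16,
        "peripherals": []
    }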
Nix development environment
---------------------------
* Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity.
* Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options.
* Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Check out the most recent release (``git checkout release-[number]``) for a stable version.
* If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (once again text-search ``/opt/Xilinx/Vivado``).
* Run ``nix develop`` at the root of the repository, where ``flake.nix`` is.
* Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`.
.. note::
You can also target legacy versions of ARTIQ; use Git to check out older release branches. Note, however, that older releases of ARTIQ required different processes for developing and building; when working with them, consult the corresponding older versions of the manual as well.
Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- ``artiq_run``, ``artiq_master``, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by running ``nix develop`` again in the same location.
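As a quick sanity check (an optional step, not part of the official workflow), you can confirm that the environment provides the ARTIQ package and print its version: ::
    $ python -c 'import artiq; print(artiq.__version__)'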
.. tip::
If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ. If you intend to modify the gateware housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: ::
$ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH
Note that this only applies for incremental builds. If you want to use ``nix build``, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``.
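As a minimal sketch of that change, assuming the ARTIQ-Zynq ``flake.nix`` declares its ARTIQ input under the name ``artiq``, the relevant line would become something like: ::
    inputs.artiq.url = "path:/absolute/path/to/your/artiq";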
Building only standard binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are working with original ARTIQ, and you only want to build a set of standard binaries (i.e. without changing the source code), you can also enter the development shell without cloning the repository, using ``nix develop`` as follows: ::
$ nix develop git+https://github.com/m-labs/artiq.git\?ref=release-[number]#boards
Leave off ``\?ref=release-[number]`` to prefer the current beta version instead of a numbered release.
.. note::
Adding ``#boards`` makes use of the ARTIQ flake's provided ``artiq-boards-shell``, a lighter environment optimized for building firmware and flashing boards, which can also be accessed by running ``nix develop .#boards`` if you have already cloned the repository. Developers should be aware that in this shell the current copy of the ARTIQ sources is not added to your ``PYTHONPATH``. Run ``nix flake show`` and read ``flake.nix`` carefully to understand the different available shells.
The parallel command does exist for ARTIQ-Zynq: ::
$ nix develop git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]
but if you are building ARTIQ-Zynq without intention to change the source, it is not actually necessary to enter the development environment at all; Nix is capable of accessing the official flake remotely for the build itself, eliminating the requirement for any particular environment.
This is equally possible for original ARTIQ, but not as useful, as the development environment (specifically the ``#boards`` shell) is still the easiest way to access the necessary tools for flashing the board. On the other hand, with Zynq, it is normally recommended to boot from SD card, which requires no further special tools. As long as you have a functioning Nix installation with flakes enabled, you can progress directly to the building instructions below.
.. _building:
Building ARTIQ
--------------
For general troubleshooting and debugging, especially with a 'fresh' board, see also :ref:`connecting-uart`.
Kasli or KC705 (ARTIQ original)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For Kasli, if you have your system description file on hand, you can at this point build both firmware and gateware with a command of the form: ::
$ python -m artiq.gateware.targets.kasli <description>.json
With KC705, use: ::
$ python -m artiq.gateware.targets.kc705 -V <variant>
This will create a directory ``artiq_kasli`` or ``artiq_kc705`` containing the binaries in a subdirectory named after your description file or variant. Flash the board as described in :ref:`writing-flash`, adding the option ``--srcbuild``, e.g., assuming your board is already connected by JTAG USB: ::
$ artiq_flash --srcbuild [-t kc705] -d artiq_<board>/<variant>
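For instance, for a Kasli board built from the ``example`` description shown earlier: ::
    $ artiq_flash --srcbuild -d artiq_kasli/example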
.. note::
To see supported KC705 variants, run: ::
$ python -m artiq.gateware.targets.kc705 --help
Look for the option ``-V VARIANT, --variant VARIANT``.
Kasli-SoC or ZC706 (ARTIQ on Zynq)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The building process for Zynq devices is a little more complex. The easiest method is to leverage ``nix build`` and the ``makeArtiqZynqPackage`` utility provided by the official flake. The ensuing command is rather long, because it uses a multi-clause expression in the Nix language to describe the desired result; it can be executed piece-by-piece using the `Nix REPL <https://nix.dev/manual/nix/2.18/command-ref/new-cli/nix3-repl.html>`_, but ``nix build`` provides a lot of useful conveniences.
For Kasli-SoC, run: ::
$ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="kasli_soc"; variant="<variant>"; json=<path/to/description.json>;}).kasli_soc-<variant>-sd'
Replace ``<variant>`` with ``master``, ``satellite``, or ``standalone``, depending on your targeted DRTIO role. Remove ``?ref=release-[number]`` to use the current beta version rather than a numbered release. If you have cloned the repository and prefer to use your local copy of the flake, replace the corresponding clause with ``builtins.getFlake "/absolute/path/to/your/artiq-zynq"``.
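Filling in the placeholders, a hedged example of a standalone build against the current beta, using a hypothetical local description file, might read: ::
    $ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq"; in (fl.makeArtiqZynqPackage {target="kasli_soc"; variant="standalone"; json=/absolute/path/to/example_standalone.json;}).kasli_soc-standalone-sd'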
For ZC706, you can use a command of the same form: ::
$ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="zc706"; variant="<variant>";}).zc706-<variant>-sd'
or you can use the more direct version: ::
$ nix build --print-build-logs git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]#zc706-<variant>-sd
(which is possible for ZC706 because there is no need to be able to specify a system description file in the arguments.)
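For instance, assuming the ``nist_clock`` variant mentioned above is available in your release (see the note below for how to verify), the current beta can be built directly with: ::
    $ nix build --print-build-logs git+https://git.m-labs.hk/m-labs/artiq-zynq#zc706-nist_clock-sd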
.. note::
To see supported ZC706 variants, you can run the following at the root of the repository: ::
$ src/gateware/zc706.py --help
Look for the option ``-V VARIANT, --variant VARIANT``. If you have not cloned the repository or are not in the development environment, try: ::
$ nix flake show git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number] | grep "package 'zc706.*sd"
to see the list of suitable build targets directly.
Any of these commands should produce a directory ``result`` which contains a file ``boot.bin``. As described in :ref:`writing-flash`, if your core device is currently accessible over the network, it can be flashed with ``artiq_coremgmt``. If it is not connected to the network:
1. Power off the board, extract the SD card and load ``boot.bin`` onto it manually.
2. Insert the SD card back into the board.
3. Ensure that the DIP switches (labeled BOOT MODE) are set correctly, to SD.
4. Power the board back on.
Optionally, the SD card may also be loaded at the same time with an additional file ``config.txt``, which can contain preset configuration values in the format ``key=value``, one per line. The keys are those used with ``artiq_coremgmt``. This allows e.g. presetting an IP address and any other configuration information.
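For instance, a ``config.txt`` that presets only the IP address (the address shown is arbitrary) would contain the single line: ::
    ip=192.168.1.75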
After a successful boot, the "FPGA DONE" light should be illuminated and the board should respond to ping when plugged into Ethernet.
.. _zynq-jtag-boot :
Booting over JTAG/Ethernet
""""""""""""""""""""""""""
It is also possible to boot Zynq devices over USB and Ethernet. Flip the DIP switches to JTAG. The scripts ``remote_run.sh`` and ``local_run.sh`` in the ARTIQ-Zynq repository, intended for use with a remote JTAG server or a local connection to the core device respectively, are used at M-Labs to accomplish this. Both make use of the netboot tool ``artiq_netboot`` (see its source `here <https://git.m-labs.hk/M-Labs/artiq-netboot>`__), which is included in the ARTIQ-Zynq development environment. Adapt the relevant script to your system or read it closely to understand the options and the commands being run; note for example that ``remote_run.sh`` as written only supports ZC706.
You will need to generate the gateware, firmware and bootloader first, either through ``nix build`` or incrementally as below. After an incremental build, add the option ``-i`` when running either of the scripts. If using ``nix build``, note that target names of the form ``<board>-<variant>-jtag`` (run ``nix flake show`` to see all targets) will output the three necessary files without combining them into ``boot.bin``.
.. warning::
A known Xilinx hardware bug on Zynq prevents repeatedly loading the SZL bootloader over JTAG (i.e. repeated calls of the ``*_run.sh`` scripts) without a POR reset. On Kasli-SoC, you can physically set a jumper on the ``PS_POR_B`` pins of your board and use the M-Labs `POR reset script <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/kasli_soc_por.py>`_.
Zynq incremental build
^^^^^^^^^^^^^^^^^^^^^^
The ``boot.bin`` file used in a Zynq SD card boot is in practice the combination of several files, normally ``top.bit`` (the gateware), ``runtime`` or ``satman`` (the firmware) and ``szl.elf`` (an open-source bootloader for Zynq `written by M-Labs <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/szl>`_, used in ARTIQ in place of Xilinx's FSBL). In some circumstances, especially if you are developing ARTIQ, you may prefer to construct these components separately. Be sure that you have cloned the repository and entered the development environment as described above.
To compile the gateware and firmware, enter the ``src`` directory and run two commands as follows:
For Kasli-SoC:
::
$ gateware/kasli_soc.py -g ../build/gateware <description.json>
$ make TARGET=kasli_soc GWARGS="path/to/description.json" <fw-type>
For ZC706:
::
$ gateware/zc706.py -g ../build/gateware -V <variant>
$ make TARGET=zc706 GWARGS="-V <variant>" <fw-type>
where ``fw-type`` is ``runtime`` for standalone or DRTIO master builds and ``satman`` for DRTIO satellites. Both the gateware and the firmware will be generated in the ``../build`` destination directory. At this stage you can :ref:`boot from JTAG <zynq-jtag-boot>`; either of the ``*_run.sh`` scripts will expect the gateware and firmware files at their default locations, and the ``szl.elf`` bootloader is retrieved automatically.
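As an illustration (hypothetical file name and location), for a standalone Kasli-SoC description stored at the repository root as ``example_standalone.json``, the two commands run from ``src`` might be: ::
    $ gateware/kasli_soc.py -g ../build/gateware ../example_standalone.json
    $ make TARGET=kasli_soc GWARGS="../example_standalone.json" runtime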
.. warning::
Note that in between runs of ``make`` it is necessary to manually clear ``build``, even for different targets, or ``make`` will do nothing.
If you prefer to boot from SD card, you will need to construct your own ``boot.bin``. Build ``szl.elf`` from source by running a command of the form: ::
$ nix build git+https://git.m-labs.hk/m-labs/zynq-rs#<board>-szl
For easiest access run this command in the ``build`` directory. The ``szl.elf`` file will be in the subdirectory ``result``. To combine all three files into the boot image, create a file called ``boot.bif`` in ``build`` with the following contents: ::
the_ROM_image:
{
[bootloader]result/szl.elf
gateware/top.bit
firmware/armv7-none-eabihf/release/<fw-type>
}
Save this file. Now use ``mkbootimage`` to create ``boot.bin``. ::
$ mkbootimage boot.bif boot.bin
Boot from SD card as above.


@ -85,10 +85,10 @@ FPGA board ports
All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging.
Kasli and Kasli SoC
Kasli and Kasli-SoC
^^^^^^^^^^^^^^^^^^^
`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is architecturally separate, among other things being capable of performing much heavier software computations quickly locally to the board, but provides generally similar features to Kasli. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original Kasli v1.0 remains supported by ARTIQ.
`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is architecturally separate, among other things being capable of performing much heavier software computations at high speeds on the board itself, but provides generally similar features to Kasli. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use; the original Kasli v1.0 remains supported by ARTIQ.
Kasli can be connected to the network using a 1000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC).
@ -217,7 +217,7 @@ The core device generates the RTIO clock using a PLL locked either to an interna
The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the JSON description file. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the :ref:`JSON description file <system-description>`. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.


@ -1,22 +0,0 @@
.. _developing-artiq:
Developing ARTIQ
^^^^^^^^^^^^^^^^
.. warning::
This section is only for software or FPGA developers who want to modify ARTIQ. If you want to modify the firmware for Kasli-SoC or other ARM boards, please refer to the `ARTIQ on Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`_. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we normally provide board binaries for you.
The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier.
* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment, which can be entered with the command `vivado-env`). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
* During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
* Install the `Nix package manager <http://nixos.org/nix/>`_, version 2.4 or later. Prefer a single-user installation for simplicity.
* If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly.
* Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``).
* Clone the ARTIQ Git repository and run ``nix develop`` at the root (where ``flake.nix`` is).
* Make the current source code of ARTIQ available to the Python interpreter by running ``export PYTHONPATH=`pwd`:$PYTHONPATH``.
* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli <description>.json``, using a JSON system description file.
* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/<your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <installing-configuring-openocd>`. OpenOCD is already part of the flake's development environment.
* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).


@ -35,7 +35,7 @@ It is stored in a binary format that can be generated and manipulated with the :
Internal details
----------------
Bits 16-24 of the RTIO channel number (assigned to a respective device in the initial system description JSON, and specified again for use of the ARTIQ front-end in the device database) define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
Bits 16-24 of the RTIO channel number (assigned to a respective device in the initial :ref:`system description JSON <system-description>`, and specified again for use of the ARTIQ front-end in the device database) define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
Real-time and auxiliary packets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


@ -15,7 +15,7 @@ ARTIQ manual
installing
flashing
configuring
developing
building_developing
.. toctree::
:caption: Tutorials


@ -160,7 +160,7 @@ Other notes:
* Whether a particular sequence of timestamps causes a sequence error or not is fully deterministic (starting from a known RTIO state, e.g. after a reset). Adding a constant offset to the sequence will not affect the result.
.. note::
To change the number of SED lanes, it is necessary to recompile the gateware and reflash your core device. Use the ``sed_lanes`` field in your system description file to set the value, then follow the instructions in :doc:`developing`. Alternatively, if you have an active firmware subscription with M-Labs, contact helpdesk@ for edited binaries.
To change the number of SED lanes, it is necessary to recompile the gateware and reflash your core device. Use the ``sed_lanes`` field in your system description file to set the value, then follow the instructions in :doc:`building_developing`. Alternatively, if you have an active firmware subscription with M-Labs, contact helpdesk@ for edited binaries.
.. _collisions-busy-errors:


@ -48,9 +48,9 @@ By default, DRTIO assumes a routing table for a star topology (i.e. all satellit
$ artiq_coremgmt config write -f routing_table rt.bin
The local RTIO core of the master device is considered a destination like any other; it must be explicitly listed in the routing table to be accessible to kernels.
Destination numbers must correspond to the ones used in the :ref:`device database <device-db>`, listed in the ``satellite_cpu_targets`` field. If unsure which destination number corresponds to which physical satellite device, check the channel numbers of the peripherals associated with that device; in DRTIO systems bits 16-24 of the RTIO channel number correspond to the destination number of the core device they are bound to. See also the :doc:`drtio` page.
All routes must end with the local RTIO core of the destination device. Incorrect routing tables will cause ``RTIODestinationUnreachable`` exceptions.
All routes must end with the local RTIO core of the destination device. Incorrect routing tables will cause ``RTIODestinationUnreachable`` exceptions. The local RTIO core of the master device is considered a destination like any other; it must be explicitly listed in the routing table to be accessible to kernels.
As with other configuration changes, the core device should be restarted (``artiq_coremgmt reboot``, power cycle, etc.) for changes to take effect.