forked from M-Labs/artiq

Compare commits


120 Commits

Author SHA1 Message Date
Deepskyhunter 15ed584dc3 Bugfix: Add missing item inside state to solve KeyError
A KeyError was raised when trying to load default_state()
due to the missing key "seed" in the "RangeScan" and "CenterScan"
state. Add {"seed": None} to resolve the bug.
2022-06-14 11:43:47 +08:00
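For illustration, the fix pattern looks like the minimal sketch below; every field name other than "seed" is a placeholder, not the actual ARTIQ scan-widget code.

```
# Hypothetical default_state(): the KeyError came from loading a stored
# state that lacked "seed"; shipping a None default makes the lookup safe.
def default_state():
    return {
        "RangeScan":  {"start": 0.0, "stop": 100.0, "npoints": 10,
                       "randomize": False, "seed": None},   # "seed" added
        "CenterScan": {"center": 0.0, "span": 100.0, "step": 10.0,
                       "randomize": False, "seed": None},   # "seed" added
    }
```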
Leon Riesebos dfc6ecbf3d browser: support datasets that use h5 group notation
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2022-04-07 18:18:40 +08:00
Leon Riesebos acfac324ee browser: cleanup datasets panel for empty h5 files
this fix makes sure the datasets panel is cleared if an h5 file is empty or the datasets and archive groups are empty
2022-04-07 18:18:33 +08:00
Sebastien Bourdeauducq 0d79743a70 dashboard: fix typo (#1858) 2022-02-26 08:57:45 +08:00
Steve Fan 7a8576f69f dashboard: prioritize min as part of default value resolution (#1839) 2022-01-27 17:49:43 +08:00
Peter Drmota ee0eb238d5 gateware.test.suservo: Fix tests for python >=3.7
Closes #1748
2022-01-11 17:19:13 +08:00
Steve Fan 7253fdcc9e llvm_ir: move stacksave before lltag alloca in build_rpc
Signed-off-by: Steve Fan <sf@m-labs.hk>
2021-12-19 12:09:42 +08:00
David Nadlinger 826767023d language: Export TArray
(cherry picked from commit 504b8f0148)
2021-12-08 10:12:50 +08:00
Sebastien Bourdeauducq c0deb801db compiler: stop using sys.version_info for parser 2021-08-12 12:53:47 +08:00
Sebastien Bourdeauducq b6facf05d6 setup.py: remove outdated dependency_links 2021-08-12 12:53:44 +08:00
Sebastien Bourdeauducq d824c1002f compiler: turn __repr__ into __str__ when sphinx is used. Closes #741 2021-08-05 11:52:58 +08:00
Sebastien Bourdeauducq 0d67074f8e manual: Kasli now supports 10/100 Ethernet 2021-07-27 10:43:49 +08:00
Sebastien Bourdeauducq e00da98ac1 doc: nixpkgs 21.05 2021-07-27 09:32:50 +08:00
Sebastien Bourdeauducq 4ee25fc2d1 artiq_flash: cleanup openocd handling, do not follow symlinks
Not following symlinks allows files to be added to OpenOCD via nixpkgs buildEnv.
2021-07-27 09:32:45 +08:00
Sebastien Bourdeauducq 0a29b75f8e moninj: fix read of incomplete data (#1729) 2021-07-22 17:59:21 +08:00
StarChen b29d17747f documentation: correct artiq_coremgmt examples 2021-07-19 12:10:46 +08:00
Sebastien Bourdeauducq 3614600240 doc: nixpkgs 20.09 2021-06-02 08:25:11 +08:00
Sebastien Bourdeauducq beb49d4dab artiq_flash: improve openocd not found error message 2021-05-13 14:46:00 +08:00
Leon Riesebos 0a093a69b3 added DefaultMissing to __all__
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:44:28 +08:00
Leon Riesebos a5650290bd ad99xx added additional kernel invariants
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:19:38 +08:00
Leon Riesebos b5333d7105 ad99xx make kernel invariants instance variable
prevents mutations on class variable that applies to all instances at once
closes #1654

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:19:38 +08:00
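To illustrate why the class-level set was a problem, here is a generic Python sketch (not the ARTIQ driver itself):

```
class Driver:
    # Class attribute: one set object shared by every instance.
    kernel_invariants = {"core", "bus"}

    def __init__(self, extra):
        self.kernel_invariants.add(extra)   # mutates the shared class set!

class FixedDriver:
    def __init__(self, extra):
        # Fresh per-instance set, as in the commit above.
        self.kernel_invariants = {"core", "bus", extra}

a = Driver("pll_n")
b = Driver("ftw_per_hz")
print(Driver.kernel_invariants)  # now contains both "pll_n" and "ftw_per_hz"

c = FixedDriver("pll_n")
d = FixedDriver("ftw_per_hz")
print(c.kernel_invariants == d.kernel_invariants)  # False: independent sets
```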
Sebastien Bourdeauducq 3d3a40a914 manual: fix OpenOCD conda instructions 2021-03-27 12:15:49 +08:00
Sebastien Bourdeauducq 10c9842fc8 doc: update URLs for legacy channel 2021-02-17 16:12:03 +08:00
Sebastien Bourdeauducq 60110b259e update copyright year 2021-02-17 15:51:37 +08:00
Drew 76370714cb master: fix DeprecationWarning on logger.warn
Resolves error message shown.

The following error message is shown when worker_impl.py:199 is run: 

```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn(message)
```
2021-02-10 15:32:44 +08:00
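The fix itself is the usual one-word rename; a generic example (not the ARTIQ source):

```
import logging

logging.warn("experiment paused")     # deprecated alias, emits DeprecationWarning
logging.warning("experiment paused")  # preferred spelling
```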
Ilia Sergachev 45f0687cdf doc: fix missing artiq_flash argument 2021-01-29 00:46:22 +08:00
Sebastien Bourdeauducq e922f65e21 Revert "doc: nixpkgs 20.09"
This reverts commit 5c87ccea34.
2021-01-28 12:21:00 +08:00
Sebastien Bourdeauducq cd3e956725 doc: fix typo. Closes #1586 2021-01-27 13:13:51 +08:00
Sebastien Bourdeauducq 5c87ccea34 doc: nixpkgs 20.09 2021-01-27 13:12:04 +08:00
Sebastien Bourdeauducq ceaf658477 doc: update development instructions. Closes #1585 2021-01-18 15:12:44 +08:00
Sebastien Bourdeauducq 8b3189b88c doc: remove qutip from install example (removed from nixpkgs) 2021-01-15 17:19:49 +08:00
Harry Ho 9ed8f66310 manual: fix artiq.dashboard becoming alias of unittest.mock 2021-01-12 12:06:53 +08:00
Harry Ho e79fedd67a sayma: add comments about CPLL line rate on KU GTH 2020-12-19 17:05:50 +08:00
Harry Ho 69cfa8e2e5 jesd204_tools: use new syntax from jesd204b core
* requires jesd204b changes as in https://github.com/HarryMakes/jesd204b/tree/gth
2020-12-19 17:05:45 +08:00
Aadit Rahul Kamat 1b68906a4f artiq_browser: update h5py api call
Signed-off-by: Aadit Rahul Kamat <aadit.k12@gmail.com>
2020-12-17 14:23:30 +08:00
Sebastien Bourdeauducq 33a15a6513 satman: backport JESD204 STPL fix from 73271600 2020-12-14 18:11:35 +08:00
Leon Riesebos a694d130b7 allow dashboard to close if no connection can be made to moninj
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-12-04 22:59:55 +08:00
Leon Riesebos 71719f3861 Added RuntimeError to prelude to make the name available in kernels
closes #1477

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-12-04 22:58:04 +08:00
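With the name in the compiler prelude, kernel code like the following sketch compiles; this is an illustrative experiment, not taken from the ARTIQ test suite.

```
from artiq.experiment import *

class RaiseInKernel(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def run(self):
        self.core.reset()
        # RuntimeError now resolves inside kernels instead of being
        # reported as an undefined name by the compiler.
        raise RuntimeError("raised from a kernel")
```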
Harry Ho 90fbc58adb doc: fix missing instructions for bypassing Si5324 on Kasli 2020-11-26 12:07:50 +08:00
Sebastien Bourdeauducq 15bb0fa9a1 Revert "fixes with statement with multiple items"
This reverts commit e23ed15464.
2020-11-22 11:55:09 +08:00
Sebastien Bourdeauducq 9ae5e3c569 Revert "compiler: fix incorrect with behavior"
This reverts commit 86a6fe4606.
2020-11-22 11:55:06 +08:00
Robert Jördens 68582b8dfa
Merge pull request #1538 from lriesebos/release-5
fixed value scaling issue for the center scan gui widget
2020-11-10 18:38:21 +01:00
Leon Riesebos 27c69b5ec2 fixed value scaling issue for the center scan gui widget
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-10-19 22:05:44 -04:00
Sebastien Bourdeauducq e8ef0ce470 RELEASE_NOTES: fix typo 2020-10-15 16:58:25 +08:00
Sebastien Bourdeauducq 27a7480175 RELEASE_NOTES: fix formatting 2020-10-15 16:43:02 +08:00
Sebastien Bourdeauducq dc56b2c2e3 manual: clarify and expand nix-shell file 2020-10-15 14:32:10 +08:00
pca006132 083b3f5c7b test: added lit test for new error messages 2020-10-08 19:39:38 +08:00
pca006132 7406a1f983 compiler: error message for custom operations
Emit error messages for custom comparison and inclusion tests
instead of crashing the compiler.
2020-10-08 19:39:38 +08:00
Sebastien Bourdeauducq 500c0e587f manual: sphinx mock module whack-a-mole 2020-10-07 19:26:00 +08:00
Sebastien Bourdeauducq 3812731716 dashboard: cleanup import 2020-10-07 19:25:55 +08:00
pca006132 86a6fe4606 compiler: fix incorrect with behavior 2020-10-07 19:03:18 +08:00
pca006132 7f79a54255 Fixes none to bool coercion
Fixes #1413 and #1414.
2020-10-07 15:35:16 +08:00
pca006132 e23ed15464 fixes with statement with multiple items
Closes #1478
2020-10-07 15:35:16 +08:00
Sebastien Bourdeauducq 929b04da59 test: relax device_to_host_rate 2020-07-30 17:47:06 +08:00
David Nadlinger 5efeb2fea0 compiler: Consistently use llunit through llvm_ir_generator [nfc] 2020-07-25 09:40:45 +08:00
David Nadlinger 5dc3b3b28c compiler: Handle None-returning function calls used as values
GitHub: Fixes #1493.
2020-07-25 09:39:54 +08:00
Robert Jördens f4d331d05d firmware/i2c: rewrite I2C implementation
* Never drive SDL or SDA high. They are specified to be open
  collector/drain and pulled up by resistive pullups. Driving
  high fails miserably in a multi-master topology (e.g. with
  a USB I2C interface). It would only ever be implemented to
  speed up the bus actively but that's tricky and completely
  unnecessary here.
* Make the handover states between the I2C protocol phases (start, stop,
  restart, write, read) well defined. Add comments stressing those
  pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
2020-07-15 14:29:58 +02:00
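As a sketch of the open-drain discipline described above, a single bit write might look like the fragment below. It reuses the scl_oe/sda_oe/half_period helpers visible in the firmware diff further down and is simplified, not the exact firmware code.

```
// Open-drain: a line is released (pulled high by the external resistor) by
// disabling its output driver, and pulled low by enabling it. The output
// register stays at 0, so the FPGA never drives the bus high.
fn write_bit(busno: u8, bit: bool) {
    sda_oe(busno, !bit);   // bit == 1 -> release SDA, bit == 0 -> pull it low
    half_period();
    scl_oe(busno, false);  // release SCL; the target samples SDA while SCL is high
    half_period();
    scl_oe(busno, true);   // pull SCL low again, ready for the next bit
}
```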
Sebastien Bourdeauducq 40dacb72a9 sayma_rtm: fix Si5324 reset 2020-07-11 09:53:20 +08:00
Sebastien Bourdeauducq e04ce633b5 manual: fix conda url 2020-07-08 23:30:23 +08:00
Sebastien Bourdeauducq 74ca9dc99d manual: clarify board package installation 2020-06-26 10:52:09 +08:00
Sebastien Bourdeauducq 59172a4202 manual: use conda env creation command 2020-06-26 10:51:09 +08:00
Sebastien Bourdeauducq db2d900298 update Conda installation method 2020-06-20 19:31:26 +08:00
David Nadlinger c667fe136f master/scheduler: Fix priority/due date precedence order when waiting to prepare
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.

This probably managed to go undetected for that long as any unrelated
changes to the pipeline (e.g. new submissions, or experiments pausing)
would also cause _get_run() to be re-evaluated.
2020-06-20 10:46:32 +08:00
Sebastien Bourdeauducq d6aeb03889 doc: nixpkgs 20.03 2020-05-30 14:22:53 +08:00
Sebastien Bourdeauducq 5168b83158 drtio: make sure receive buffer is drained after ping reply 2020-04-06 23:33:33 +08:00
Sebastien Bourdeauducq 8839101085 manual: Kasli can get MAC address from EEPROM 2020-03-14 12:20:07 +08:00
Robert Jördens d3297df745 rtio: fix wide output after RTIO refactoring
fixes 3d0c3cc1cf
2020-03-06 11:38:44 +08:00
Sebastien Bourdeauducq ced5b938db manual: mention integrated Kasli JTAG 2020-03-02 18:42:17 +08:00
Sebastien Bourdeauducq 35ebe59c32 ad9912: fix ftw width docstring 2020-02-27 02:12:13 +08:00
Sebastien Bourdeauducq 8d61cd8344 kasli_generic: expose peripheral_processors dictionary. Closes #1403 2020-02-08 16:13:16 +08:00
Sebastien Bourdeauducq 5211534619 sayma: drive filtered_clk_sel on master variant 2020-02-07 22:11:20 +08:00
Sebastien Bourdeauducq 5733d70041 metlino: drive clock muxes 2020-02-05 00:07:01 +08:00
Sebastien Bourdeauducq a2d2ec9238 si5324: program I2C mux on Metlino 2020-02-03 19:01:45 +08:00
Sebastien Bourdeauducq b32e27abdb artiq_flash: use correct proxy bitstream for Metlino 2020-02-03 19:01:41 +08:00
Sebastien Bourdeauducq 8ca1fea161 typo 2020-01-20 20:15:45 +08:00
Sebastien Bourdeauducq 5bcedc17b4 kasli_sawgmaster: add basemod programming example 2020-01-20 20:10:24 +08:00
Sebastien Bourdeauducq ced4d74f2e basemod_att: fix imports 2020-01-20 20:07:35 +08:00
Sebastien Bourdeauducq f02a8e8ed6 sayma: do not pollute the log with DAC status on success 2020-01-20 20:07:21 +08:00
Sebastien Bourdeauducq f4a5c4503d sayma: initialize DAC before testing jesd::ready 2020-01-20 19:36:15 +08:00
Sebastien Bourdeauducq 705737f6e3 sayma: improve DAC status report 2020-01-20 18:19:19 +08:00
Sebastien Bourdeauducq c4e4d67cdf sayma: print DAC status on JESD not ready error 2020-01-20 18:06:50 +08:00
Sebastien Bourdeauducq eb0ce933c5 sayma: add JESD204 PHY done diagnostics 2020-01-20 12:05:56 +08:00
Sebastien Bourdeauducq 2ad7d2967a sayma: remove SYSREF DDMTD core (#1420) 2020-01-19 23:31:54 +08:00
Sebastien Bourdeauducq 622dad9bd9 sayma: fix hmc542 to/from mu 2020-01-16 09:33:16 +08:00
Sebastien Bourdeauducq 37bff3dab4 sayma: RF switch control is active-low on Basemod, invert 2020-01-16 09:33:16 +08:00
Sebastien Bourdeauducq 72fc1b9a4d update copyright year 2020-01-13 19:34:17 +08:00
Sebastien Bourdeauducq 3202b83f17 sayma: do not attempt DAC synchronization
This will be developed in release-6.
Also keep trying in case of DAC init failure since errors are different between DACs.
2020-01-13 18:39:56 +08:00
Sebastien Bourdeauducq 2b39ffed9a firmware: remove bitrotten Sayma code 2020-01-13 17:28:27 +08:00
Sebastien Bourdeauducq 1d678b7dac firmware: expose fmod to kernels. Closes #1417 2020-01-10 14:33:22 +08:00
Robert Jördens 0ef3515d22 artiq_client: add back quiet-verbose args for submission
close #1416
regression introduced in 3fd6962
2019-12-31 13:01:53 +01:00
Sebastien Bourdeauducq ae4f4b335b runtime: relax/fix TCP keepalive settings (#1125) 2019-12-23 19:58:39 +08:00
David Nadlinger 048de0f67b doc/manual/developing: Clarify Nix PYTHONPATH usage
PYTHONPATH should still contain all the other directories
(obvious once you've made that mistake once, of course).
2019-12-23 19:58:39 +08:00
David Nadlinger 4431f9b0e4 coredevice: Don't use `is` to compare with integer literal
This works on CPython, but is not guaranteed to do so, and
produces a warning since 3.8 (see https://bugs.python.org/issue34850).
2019-12-22 19:07:16 +08:00
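A short reminder of the pitfall (plain Python, unrelated to the ARTIQ sources):

```
x = 1000
print(x is 1000)   # identity test: implementation-dependent, CPython may
                   # print False, and Python 3.8+ emits a SyntaxWarning
print(x == 1000)   # value comparison: True, which is what the code means
```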
Sebastien Bourdeauducq 6639d9443d basemod_att: add dB functions, document 2019-12-21 14:57:08 +08:00
Sebastien Bourdeauducq 5ff164b385 basemod: add coredevice driver 2019-12-21 14:57:08 +08:00
Sebastien Bourdeauducq 6c4790b979 kasli_sawgmaster: fix drtio_is_up 2019-12-21 14:57:08 +08:00
Sebastien Bourdeauducq 902db1d95a shiftreg: fix get method 2019-12-21 14:57:08 +08:00
Sebastien Bourdeauducq 0e847c07da kasli_sawgmaster: update device_db for BaseMod 2019-12-20 19:59:39 +08:00
Sebastien Bourdeauducq 59e8b77fca coredevice/shiftreg: add get method 2019-12-20 19:59:39 +08:00
Sebastien Bourdeauducq c8b8f7a4be sayma_rtm: connect attenuator shift registers in series 2019-12-20 19:59:39 +08:00
Sebastien Bourdeauducq 26e8b9d02a firmware: remove legacy hmc542 code 2019-12-20 15:25:12 +08:00
Sebastien Bourdeauducq a3e2a46510 sayma_rtm: add basemod attenuators on RTIO 2019-12-20 15:23:15 +08:00
Sebastien Bourdeauducq d5f92a20c6 sayma_rtm: basemod RF switches 2019-12-18 10:33:46 +08:00
Sebastien Bourdeauducq 029c69197f manual: clarify XY applet setup example 2019-12-15 10:42:34 +08:00
Sebastien Bourdeauducq cb0515e677 test: run test_help for browser and dashboard 2019-12-12 10:34:44 +08:00
Sebastien Bourdeauducq 007a98bac4 artiq_browser: fix command line argument handling. Closes #1404 2019-12-11 16:19:09 +08:00
Sebastien Bourdeauducq d159d0e901 sayma_rtm: drive clk_src_ext_sel 2019-12-09 19:50:00 +08:00
Paweł Kulik b6f7698a69 Enabled internal pullup for CML SYSREF outputs; otherwise there is no signal on them.
Signed-off-by: Paweł Kulik <pawel.kulik@creotech.pl>
2019-12-07 09:34:54 +08:00
Sebastien Bourdeauducq da95bb54ae artiq_flash: sayma fixes 2019-11-28 17:40:59 +08:00
Sebastien Bourdeauducq 281133ef6e libboard_misoc: fix !has_i2c 2019-11-27 21:21:29 +08:00
Robert Jördens 349474670f si5324: 10 MHz ext_ref_frequency
* close #1254
* tested on innsbruck2 kasli variant
* sponsored by Uni Innsbruck/AQT

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-11-27 14:14:21 +08:00
Sebastien Bourdeauducq a5483ae371 DEVELOPER_NOTES: update board lockfile info 2019-11-18 15:27:06 +08:00
Sebastien Bourdeauducq aeebc8bc1a doc: add sipyco to mock modules 2019-11-18 08:37:55 +08:00
Sébastien Bourdeauducq 3a0354b198 RELEASE_NOTES: fix rst formatting further 2019-11-15 16:00:09 +08:00
Sébastien Bourdeauducq abb46de4f6 RELEASE_NOTES: fix rst formatting 2019-11-15 16:00:09 +08:00
Sebastien Bourdeauducq edb0cb5aa5 update release notes 2019-11-15 13:43:29 +08:00
Sebastien Bourdeauducq 70c363e99e doc: mention toptica-lasersdk-artiq conda package 2019-11-14 23:52:57 +08:00
Sebastien Bourdeauducq d0edd35a8b doc: link to release-5 installation script 2019-11-14 23:52:43 +08:00
Sebastien Bourdeauducq ef1871bcea gateware: remove wrpll
Will be part of ARTIQ-6.
2019-11-14 17:30:10 +08:00
Sebastien Bourdeauducq b9c86ae9ec install-with-conda: use stable channel 2019-11-14 17:03:26 +08:00
74 changed files with 1014 additions and 1964 deletions

View File

@@ -1,39 +1,17 @@
 Sharing development boards
 ==========================
 
-To avoid conflicts for development boards on the server, while using a board you must hold the corresponding lock file present in ``/var/lib/artiq/boards``. Holding the lock file grants you exclusive access to the board.
+To avoid conflicts for development boards on the server, while using a board you must hold the corresponding lock file present in the ``/tmp`` folder of the machine to which the board is connected. Holding the lock file grants you exclusive access to the board.
 
-To lock the KC705 for 30 minutes or until Ctrl-C is pressed:
+For example, to lock the KC705 until ENTER is pressed:
 ::
 
-  flock --verbose /var/lib/artiq/boards/kc705-1 sleep 1800
-
-Check that the command acquires the lock, i.e. prints something such as:
-::
-
-  flock: getting lock took 0.000003 seconds
-  flock: executing sleep
-
-To lock the KC705 for the duration of the execution of a shell:
-::
-
-  flock /var/lib/artiq/boards/kc705-1 bash
-
-You may also use this script:
-::
-
-  #!/bin/bash
-  exec flock /var/lib/artiq/boards/$1 bash --rcfile <(cat ~/.bashrc; echo PS1=\"[$1\ lock]\ \$PS1\")
+  ssh rpi-1.m-labs.hk "flock /tmp/board_lock-kc705-1 -c 'echo locked; read; echo unlocked'"
 
 If the board is already locked by another user, the ``flock`` commands above will wait for the lock to be released.
 
-To determine which user is locking a board, use:
+To determine which user is locking a board, use a command such as:
 ::
 
-  fuser -v /var/lib/artiq/boards/kc705-1
-
-Selecting a development board with artiq_flash
-==============================================
-
-The board lock file also contains the openocd commands for selecting the corresponding developer board:
-::
-
-  artiq_flash -I "$(cat /var/lib/artiq/boards/sayma-1)"
+  ssh rpi-1.m-labs.hk "fuser -v /tmp/board_lock-kc705-1"
 
 Deleting git branches

View File

@@ -29,7 +29,7 @@ Website: https://m-labs.hk/artiq
 License
 =======
 
-Copyright (C) 2014-2019 M-Labs Limited.
+Copyright (C) 2014-2021 M-Labs Limited.
 
 ARTIQ is free software: you can redistribute it and/or modify
 it under the terms of the GNU Lesser General Public License as published by

View File

@@ -6,25 +6,60 @@ Release notes
 ARTIQ-5
 -------
 
-* The :class:`~artiq.coredevice.ad9910.AD9910` and
-  :class:`~artiq.coredevice.ad9914.AD9914` phase reference timestamp parameters
-  have been renamed to ``ref_time_mu`` for consistency, as they are in machine
-  units.
-* :func:`~artiq.tools.verbosity_args` has been renamed to
-  :func:`~artiq.tools.add_common_args`, and now adds a ``--version`` flag.
+Highlights:
+
+* Performance improvements:
+  - Faster RTIO event submission (1.5x improvement in pulse rate test)
+    See: https://github.com/m-labs/artiq/issues/636
+  - Faster compilation times (3 seconds saved on kernel compilation time on a typical
+    medium-size experiment)
+    See: https://github.com/m-labs/artiq/commit/611bcc4db4ed604a32d9678623617cd50e968cbf
+* Improved packaging and build system:
+  - new continuous integration/delivery infrastructure based on Nix and Hydra,
+    providing reproducibility, speed and independence.
+  - rolling release process (https://github.com/m-labs/artiq/issues/1326).
+  - firmware, gateware and device database templates are automatically built for all
+    supported Kasli variants.
+  - new JSON description format for generic Kasli systems.
+  - Nix packages are now supported.
+  - many Conda problems worked around.
+  - controllers are now out-of-tree.
+  - split packages that enable lightweight applications that communicate with ARTIQ,
+    e.g. controllers running on non-x86 single-board computers.
+* Improved Urukul support:
+  - AD9910 RAM mode.
+  - Configurable refclk divider and PLL bypass.
+  - More reliable phase synchronization at high sample rates.
+  - Synchronization calibration data can be read from EEPROM.
 * A gateware-level input edge counter has been added, which offers higher
   throughput and increased flexibility over the usual TTL input PHYs where
-  edge timestamps are not required. See :mod:`artiq.coredevice.edge_counter` for
-  the core device driver and :mod:`artiq.gateware.rtio.phy.edge_counter`/
-  :meth:`artiq.gateware.eem.DIO.add_std` for the gateware components.
+  edge timestamps are not required. See ``artiq.coredevice.edge_counter`` for
+  the core device driver and ``artiq.gateware.rtio.phy.edge_counter``/
+  ``artiq.gateware.eem.DIO.add_std`` for the gateware components.
+* With DRTIO, Siphaser uses a better calibration mechanism.
+  See: https://github.com/m-labs/artiq/commit/cc58318500ecfa537abf24127f2c22e8fe66e0f8
+* Schedule updates can be sent to influxdb (artiq_influxdb_schedule).
+* Experiments can now programatically set their default pipeline, priority, and flush flag.
 * List datasets can now be efficiently appended to from experiments using
-  :meth:`artiq.language.environment.HasEnvironment.append_to_dataset`.
+  ``artiq.language.environment.HasEnvironment.append_to_dataset``.
+* The core device now supports IPv6.
+* To make development easier, the bootloader can receive firmware and secondary FPGA
+  gateware from the network.
+* Python 3.7 compatibility (Nix and source builds only, no Conda).
+* Various other bugs from 4.0 fixed.
+* Preliminary Sayma v2 and Metlino hardware support.
+
+Breaking changes:
+
+* The ``artiq.coredevice.ad9910.AD9910`` and
+  ``artiq.coredevice.ad9914.AD9914`` phase reference timestamp parameters
+  have been renamed to ``ref_time_mu`` for consistency, as they are in machine
+  units.
 * The controller manager now ignores device database entries without the
-  ``"command"`` key set to facilitate sharing of devices between multiple
+  ``command`` key set to facilitate sharing of devices between multiple
   masters.
 * The meaning of the ``-d/--dir`` and ``--srcbuild`` options of ``artiq_flash``
   has changed.
-* Experiments can now programatically set their default pipeline, priority, and flush flag.
 * Controllers for third-party devices are now out-of-tree.
 * ``aqctl_corelog`` now filters log messages below the ``WARNING`` level by default.
   This behavior can be changed using the ``-v`` and ``-q`` options like the other

View File

@ -42,7 +42,7 @@ class ThumbnailIconProvider(QtWidgets.QFileIconProvider):
except KeyError: except KeyError:
return return
try: try:
img = QtGui.QImage.fromData(t.value) img = QtGui.QImage.fromData(t[()])
except: except:
logger.warning("unable to read thumbnail from %s", logger.warning("unable to read thumbnail from %s",
info.filePath(), exc_info=True) info.filePath(), exc_info=True)
@ -102,13 +102,13 @@ class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
h5 = open_h5(info) h5 = open_h5(info)
if h5 is not None: if h5 is not None:
try: try:
expid = pyon.decode(h5["expid"].value) expid = pyon.decode(h5["expid"][()])
start_time = datetime.fromtimestamp(h5["start_time"].value) start_time = datetime.fromtimestamp(h5["start_time"][()])
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n" v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
"class_name: {}\nrid: {}\nstart_time: {}").format( "class_name: {}\nrid: {}\nstart_time: {}").format(
h5["artiq_version"].value, expid["repo_rev"], h5["artiq_version"][()], expid["repo_rev"],
expid["file"], expid["class_name"], expid["file"], expid["class_name"],
h5["rid"].value, start_time) h5["rid"][()], start_time)
return v return v
except: except:
logger.warning("unable to read metadata from %s", logger.warning("unable to read metadata from %s",
@ -174,31 +174,41 @@ class FilesDock(QtWidgets.QDockWidget):
logger.debug("loading datasets from %s", info.filePath()) logger.debug("loading datasets from %s", info.filePath())
with f: with f:
try: try:
expid = pyon.decode(f["expid"].value) expid = pyon.decode(f["expid"][()])
start_time = datetime.fromtimestamp(f["start_time"].value) start_time = datetime.fromtimestamp(f["start_time"][()])
v = { v = {
"artiq_version": f["artiq_version"].value, "artiq_version": f["artiq_version"][()],
"repo_rev": expid["repo_rev"], "repo_rev": expid["repo_rev"],
"file": expid["file"], "file": expid["file"],
"class_name": expid["class_name"], "class_name": expid["class_name"],
"rid": f["rid"].value, "rid": f["rid"][()],
"start_time": start_time, "start_time": start_time,
} }
self.metadata_changed.emit(v) self.metadata_changed.emit(v)
except: except:
logger.warning("unable to read metadata from %s", logger.warning("unable to read metadata from %s",
info.filePath(), exc_info=True) info.filePath(), exc_info=True)
rd = dict()
rd = {}
if "archive" in f: if "archive" in f:
rd = {k: (True, v.value) for k, v in f["archive"].items()} def visitor(k, v):
if isinstance(v, h5py.Dataset):
rd[k] = (True, v[()])
f["archive"].visititems(visitor)
if "datasets" in f: if "datasets" in f:
for k, v in f["datasets"].items(): def visitor(k, v):
if isinstance(v, h5py.Dataset):
if k in rd: if k in rd:
logger.warning("dataset '%s' is both in archive and " logger.warning("dataset '%s' is both in archive "
"outputs", k) "and outputs", k)
rd[k] = (True, v.value) rd[k] = (True, v[()])
if rd:
f["datasets"].visititems(visitor)
self.datasets.init(rd) self.datasets.init(rd)
self.dataset_changed.emit(info.filePath()) self.dataset_changed.emit(info.filePath())
def list_activated(self, idx): def list_activated(self, idx):
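The h5py idioms in the diff above are summarized by the sketch below (generic example, not ARTIQ code): scalar datasets are read with ``ds[()]`` because the old ``.value`` attribute was removed in h5py 3.0, and ``visititems()`` walks nested groups so datasets stored under ``a/b/c``-style names are found.

```
import h5py

with h5py.File("results.h5", "r") as f:
    rid = f["rid"][()]                       # read a scalar dataset
    datasets = {}
    if "datasets" in f:
        def visitor(name, obj):
            if isinstance(obj, h5py.Dataset):
                datasets[name] = obj[()]     # works for scalars and arrays
        f["datasets"].visititems(visitor)
```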

View File

@ -5,7 +5,7 @@ the references to the host objects and translates the functions
annotated as ``@kernel`` when they are referenced. annotated as ``@kernel`` when they are referenced.
""" """
import sys, os, re, linecache, inspect, textwrap, types as pytypes, numpy import os, re, linecache, inspect, textwrap, types as pytypes, numpy
from collections import OrderedDict, defaultdict from collections import OrderedDict, defaultdict
from pythonparser import ast, algorithm, source, diagnostic, parse_buffer from pythonparser import ast, algorithm, source, diagnostic, parse_buffer
@ -894,13 +894,11 @@ class Stitcher:
# Parse. # Parse.
source_buffer = source.Buffer(source_code, filename, first_line) source_buffer = source.Buffer(source_code, filename, first_line)
lexer = source_lexer.Lexer(source_buffer, version=sys.version_info[0:2], lexer = source_lexer.Lexer(source_buffer, version=(3, 6), diagnostic_engine=self.engine)
diagnostic_engine=self.engine)
lexer.indent = [(initial_indent, lexer.indent = [(initial_indent,
source.Range(source_buffer, 0, len(initial_whitespace)), source.Range(source_buffer, 0, len(initial_whitespace)),
initial_whitespace)] initial_whitespace)]
parser = source_parser.Parser(lexer, version=sys.version_info[0:2], parser = source_parser.Parser(lexer, version=(3, 6), diagnostic_engine=self.engine)
diagnostic_engine=self.engine)
function_node = parser.file_input().body[0] function_node = parser.file_input().body[0]
# Mangle the name, since we put everything into a single module. # Mangle the name, since we put everything into a single module.

View File

@ -25,6 +25,7 @@ def globals():
"IndexError": builtins.fn_IndexError(), "IndexError": builtins.fn_IndexError(),
"ValueError": builtins.fn_ValueError(), "ValueError": builtins.fn_ValueError(),
"ZeroDivisionError": builtins.fn_ZeroDivisionError(), "ZeroDivisionError": builtins.fn_ZeroDivisionError(),
"RuntimeError": builtins.fn_RuntimeError(),
# Built-in Python functions # Built-in Python functions
"len": builtins.fn_len(), "len": builtins.fn_len(),

View File

@ -408,6 +408,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
length = self.iterable_len(insn) length = self.iterable_len(insn)
return self.append(ir.Compare(ast.NotEq(loc=None), length, ir.Constant(0, length.type)), return self.append(ir.Compare(ast.NotEq(loc=None), length, ir.Constant(0, length.type)),
block=block) block=block)
elif builtins.is_none(insn.type):
return ir.Constant(False, builtins.TBool())
else: else:
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
"this expression has type {type}", "this expression has type {type}",
@ -1480,7 +1482,13 @@ class ARTIQIRGenerator(algorithm.Visitor):
return result return result
else: else:
assert False loc = lhs.loc
loc.end = rhs.loc.end
diag = diagnostic.Diagnostic("error",
"Custom object comparison is not supported",
{},
loc)
self.engine.process(diag)
def polymorphic_compare_pair_inclusion(self, needle, haystack): def polymorphic_compare_pair_inclusion(self, needle, haystack):
if builtins.is_range(haystack.type): if builtins.is_range(haystack.type):
@ -1524,7 +1532,13 @@ class ARTIQIRGenerator(algorithm.Visitor):
result = phi result = phi
else: else:
assert False loc = needle.loc
loc.end = haystack.loc.end
diag = diagnostic.Diagnostic("error",
"Custom object inclusion test is not supported",
{},
loc)
self.engine.process(diag)
return result return result
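Kernel code of the following shape used to reach the removed ``assert False`` and crash the compiler; with the change above it gets the "Custom object comparison is not supported" diagnostic instead. Illustrative sketch only:

```
from artiq.experiment import *

class Payload:
    pass

class CompareObjects(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.a = Payload()
        self.b = Payload()

    @kernel
    def run(self):
        if self.a == self.b:   # comparison of two host objects in a kernel
            pass
```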

View File

@ -209,7 +209,7 @@ class LLVMIRGenerator:
if for_return: if for_return:
return llvoid return llvoid
else: else:
return ll.LiteralStructType([]) return llunit
elif types._is_pointer(typ): elif types._is_pointer(typ):
return llptr return llptr
elif types.is_function(typ): elif types.is_function(typ):
@ -239,7 +239,7 @@ class LLVMIRGenerator:
if for_return: if for_return:
return llvoid return llvoid
else: else:
return ll.LiteralStructType([]) return llunit
elif builtins.is_bool(typ): elif builtins.is_bool(typ):
return lli1 return lli1
elif builtins.is_int(typ): elif builtins.is_int(typ):
@ -1346,18 +1346,18 @@ class LLVMIRGenerator:
self.engine.process(diag) self.engine.process(diag)
tag += self._rpc_tag(fun_type.ret, ret_error_handler) tag += self._rpc_tag(fun_type.ret, ret_error_handler)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="rpc.stack")
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr())) lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
lltagptr = self.llbuilder.alloca(lltag.type) lltagptr = self.llbuilder.alloca(lltag.type)
self.llbuilder.store(lltag, lltagptr) self.llbuilder.store(lltag, lltagptr)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="rpc.stack")
llargs = self.llbuilder.alloca(llptr, ll.Constant(lli32, len(args)), llargs = self.llbuilder.alloca(llptr, ll.Constant(lli32, len(args)),
name="rpc.args") name="rpc.args")
for index, arg in enumerate(args): for index, arg in enumerate(args):
if builtins.is_none(arg.type): if builtins.is_none(arg.type):
llargslot = self.llbuilder.alloca(ll.LiteralStructType([]), llargslot = self.llbuilder.alloca(llunit,
name="rpc.arg{}".format(index)) name="rpc.arg{}".format(index))
else: else:
llarg = self.map(arg) llarg = self.map(arg)
@ -1462,6 +1462,11 @@ class LLVMIRGenerator:
else: else:
llcall = llresult = self.llbuilder.call(llfun, llargs, name=insn.name) llcall = llresult = self.llbuilder.call(llfun, llargs, name=insn.name)
if isinstance(llresult.type, ll.VoidType):
# We have NoneType-returning functions return void, but None is
# {} elsewhere.
llresult = ll.Constant(llunit, [])
# Never add TBAA nowrite metadata to a functon with sret! # Never add TBAA nowrite metadata to a functon with sret!
# This leads to miscompilations. # This leads to miscompilations.
if types.is_c_function(functiontyp) and 'nowrite' in functiontyp.flags: if types.is_c_function(functiontyp) and 'nowrite' in functiontyp.flags:

View File

@ -3,6 +3,7 @@ The :mod:`types` module contains the classes describing the types
in :mod:`asttyped`. in :mod:`asttyped`.
""" """
import builtins
import string import string
from collections import OrderedDict from collections import OrderedDict
from . import iodelay from . import iodelay
@ -95,6 +96,8 @@ class TVar(Type):
return self.find().fold(accum, fn) return self.find().fold(accum, fn)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
if self.parent is self: if self.parent is self:
return "<artiq.compiler.types.TVar %d>" % id(self) return "<artiq.compiler.types.TVar %d>" % id(self)
else: else:
@ -139,6 +142,8 @@ class TMono(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params)) return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params))
def __getitem__(self, param): def __getitem__(self, param):
@ -185,6 +190,8 @@ class TTuple(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TTuple(%s)" % repr(self.elts) return "artiq.compiler.types.TTuple(%s)" % repr(self.elts)
def __eq__(self, other): def __eq__(self, other):
@ -259,6 +266,8 @@ class TFunction(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TFunction({}, {}, {})".format( return "artiq.compiler.types.TFunction({}, {}, {})".format(
repr(self.args), repr(self.optargs), repr(self.ret)) repr(self.args), repr(self.optargs), repr(self.ret))
@ -338,6 +347,8 @@ class TRPC(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TRPC({})".format(repr(self.ret)) return "artiq.compiler.types.TRPC({})".format(repr(self.ret))
def __eq__(self, other): def __eq__(self, other):
@ -373,6 +384,8 @@ class TBuiltin(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name)) return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name))
def __eq__(self, other): def __eq__(self, other):
@ -428,6 +441,8 @@ class TInstance(TMono):
self.constant_attributes = set() self.constant_attributes = set()
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TInstance({}, {})".format( return "artiq.compiler.types.TInstance({}, {})".format(
repr(self.name), repr(self.attributes)) repr(self.name), repr(self.attributes))
@ -443,6 +458,8 @@ class TModule(TMono):
self.constant_attributes = set() self.constant_attributes = set()
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TModule({}, {})".format( return "artiq.compiler.types.TModule({}, {})".format(
repr(self.name), repr(self.attributes)) repr(self.name), repr(self.attributes))
@ -480,6 +497,8 @@ class TValue(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TValue(%s)" % repr(self.value) return "artiq.compiler.types.TValue(%s)" % repr(self.value)
def __eq__(self, other): def __eq__(self, other):
@ -538,6 +557,8 @@ class TDelay(Type):
return not (self == other) return not (self == other)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
if self.duration is None: if self.duration is None:
return "<{}.TIndeterminateDelay>".format(__name__) return "<{}.TIndeterminateDelay>".format(__name__)
elif self.cause is None: elif self.cause is None:

View File

@ -133,12 +133,14 @@ class AD9910:
from a I2C EEPROM; in which case, `sync_delay_seed` must be set to the from a I2C EEPROM; in which case, `sync_delay_seed` must be set to the
same string value. same string value.
""" """
kernel_invariants = {"chip_select", "cpld", "core", "bus",
"ftw_per_hz", "sysclk_per_mu"}
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None, def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
pll_n=40, pll_cp=7, pll_vco=5, sync_delay_seed=-1, pll_n=40, pll_cp=7, pll_vco=5, sync_delay_seed=-1,
io_update_delay=0, pll_en=1): io_update_delay=0, pll_en=1):
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
"pll_en", "pll_n", "pll_vco", "pll_cp",
"ftw_per_hz", "sysclk_per_mu", "sysclk",
"sync_data"}
self.cpld = dmgr.get(cpld_device) self.cpld = dmgr.get(cpld_device)
self.core = self.cpld.core self.core = self.cpld.core
self.bus = self.cpld.bus self.bus = self.cpld.bus

View File

@ -25,10 +25,11 @@ class AD9912:
is the reference clock divider (both set in the parent Urukul CPLD is the reference clock divider (both set in the parent Urukul CPLD
instance). instance).
""" """
kernel_invariants = {"chip_select", "cpld", "core", "bus", "ftw_per_hz"}
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None, def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
pll_n=10): pll_n=10):
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
"pll_n", "ftw_per_hz"}
self.cpld = dmgr.get(cpld_device) self.cpld = dmgr.get(cpld_device)
self.core = self.cpld.core self.core = self.cpld.core
self.bus = self.cpld.bus self.bus = self.cpld.bus
@ -136,7 +137,7 @@ class AD9912:
After the SPI transfer, the shared IO update pin is pulsed to After the SPI transfer, the shared IO update pin is pulsed to
activate the data. activate the data.
:param ftw: Frequency tuning word: 32 bit unsigned. :param ftw: Frequency tuning word: 48 bit unsigned.
:param pow: Phase tuning word: 16 bit unsigned. :param pow: Phase tuning word: 16 bit unsigned.
""" """
# streaming transfer of FTW and POW # streaming transfer of FTW and POW
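For context on the corrected docstring, the AD9912 frequency tuning word is 48 bits wide, so the conversion has the following form. This is a hedged sketch; in the driver ``ftw_per_hz`` is derived from the system clock.

```
# Illustrative only: frequency -> 48-bit FTW for a 1 GHz system clock.
def frequency_to_ftw(frequency, f_sysclk=1e9):
    ftw_per_hz = (1 << 48) / f_sysclk
    return round(frequency * ftw_per_hz)

print(hex(frequency_to_ftw(100e6)))  # 0x19999999999a for 100 MHz
```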

View File

@ -0,0 +1,79 @@
from artiq.language.core import kernel, portable, delay
from artiq.language.units import us, ms
from artiq.coredevice.shiftreg import ShiftReg
@portable
def to_mu(att):
return round(att*2.0) ^ 0x3f
@portable
def from_mu(att_mu):
return 0.5*(att_mu ^ 0x3f)
class BaseModAtt:
def __init__(self, dmgr, rst_n, clk, le, mosi, miso):
self.rst_n = dmgr.get(rst_n)
self.shift_reg = ShiftReg(dmgr,
clk=clk, ser=mosi, latch=le, ser_in=miso, n=8*4)
@kernel
def reset(self):
# HMC's incompetence in digital design and interfaces means that
# the HMC542 needs a level low on RST_N and then a rising edge
# on Latch Enable. Their "latch" isn't a latch but a DFF.
# Of course, it also powers up with a random attenuation, and
# that cannot be fixed with simple pull-ups/pull-downs.
self.rst_n.off()
self.shift_reg.latch.off()
delay(1*us)
self.shift_reg.latch.on()
delay(1*us)
self.shift_reg.latch.off()
self.rst_n.on()
delay(1*us)
@kernel
def set_mu(self, att0, att1, att2, att3):
"""
Sets the four attenuators on BaseMod.
The values are in half decibels, between 0 (no attenuation)
and 63 (31.5dB attenuation).
"""
word = (
(att0 << 2) |
(att1 << 10) |
(att2 << 18) |
(att3 << 26)
)
self.shift_reg.set(word)
@kernel
def get_mu(self):
"""
Retrieves the current settings of the four attenuators on BaseMod.
"""
word = self.shift_reg.get()
att0 = (word >> 2) & 0x3f
att1 = (word >> 10) & 0x3f
att2 = (word >> 18) & 0x3f
att3 = (word >> 26) & 0x3f
return att0, att1, att2, att3
@kernel
def set(self, att0, att1, att2, att3):
"""
Sets the four attenuators on BaseMod.
The values are in decibels.
"""
self.set_mu(to_mu(att0), to_mu(att1), to_mu(att2), to_mu(att3))
@kernel
def get(self):
"""
Retrieves the current settings of the four attenuators on BaseMod.
The values are in decibels.
"""
att0, att1, att2, att3 = self.get_mu()
return from_mu(att0), from_mu(att1), from_mu(att2), from_mu(att3)

View File

@ -408,7 +408,7 @@ class CommKernel:
args, kwargs = self._receive_rpc_args(embedding_map) args, kwargs = self._receive_rpc_args(embedding_map)
return_tags = self._read_bytes() return_tags = self._read_bytes()
if service_id is 0: if service_id == 0:
service = lambda obj, attr, value: setattr(obj, attr, value) service = lambda obj, attr, value: setattr(obj, attr, value)
else: else:
service = embedding_map.retrieve_object(service_id) service = embedding_map.retrieve_object(service_id)

View File

@ -74,11 +74,11 @@ class CommMonInj:
if not ty: if not ty:
return return
if ty == b"\x00": if ty == b"\x00":
payload = await self._reader.read(9) payload = await self._reader.readexactly(9)
channel, probe, value = struct.unpack(">lbl", payload) channel, probe, value = struct.unpack(">lbl", payload)
self.monitor_cb(channel, probe, value) self.monitor_cb(channel, probe, value)
elif ty == b"\x01": elif ty == b"\x01":
payload = await self._reader.read(6) payload = await self._reader.readexactly(6)
channel, override, value = struct.unpack(">lbb", payload) channel, override, value = struct.unpack(">lbb", payload)
self.injection_status_cb(channel, override, value) self.injection_status_cb(channel, override, value)
else: else:
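The difference matters because ``StreamReader.read(n)`` may return fewer than ``n`` bytes when a TCP segment arrives split, which then breaks the fixed-size ``struct.unpack``; ``readexactly(n)`` buffers until all ``n`` bytes are available, or raises ``asyncio.IncompleteReadError`` at EOF. A generic sketch of the pattern:

```
import asyncio, struct

async def recv_monitor(reader):
    # Nine bytes: channel (int32), probe (int8), value (int32), big-endian.
    payload = await reader.readexactly(9)
    channel, probe, value = struct.unpack(">lbl", payload)
    return channel, probe, value
```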

View File

@ -6,13 +6,15 @@ class ShiftReg:
"""Driver for shift registers/latch combos connected to TTLs""" """Driver for shift registers/latch combos connected to TTLs"""
kernel_invariants = {"dt", "n"} kernel_invariants = {"dt", "n"}
def __init__(self, dmgr, clk, ser, latch, n=32, dt=10*us): def __init__(self, dmgr, clk, ser, latch, n=32, dt=10*us, ser_in=None):
self.core = dmgr.get("core") self.core = dmgr.get("core")
self.clk = dmgr.get(clk) self.clk = dmgr.get(clk)
self.ser = dmgr.get(ser) self.ser = dmgr.get(ser)
self.latch = dmgr.get(latch) self.latch = dmgr.get(latch)
self.n = n self.n = n
self.dt = dt self.dt = dt
if ser_in is not None:
self.ser_in = dmgr.get(ser_in)
@kernel @kernel
def set(self, data): def set(self, data):
@ -34,3 +36,19 @@ class ShiftReg:
delay(self.dt) delay(self.dt)
self.latch.off() self.latch.off()
delay(self.dt) delay(self.dt)
@kernel
def get(self):
delay(-2*(self.n + 1)*self.dt)
data = 0
for i in range(self.n):
data <<= 1
self.ser_in.sample_input()
if self.ser_in.sample_get():
data |= 1
delay(self.dt)
self.clk.on()
delay(self.dt)
self.clk.off()
delay(self.dt)
return data

View File

@ -226,7 +226,7 @@ def setup_from_ddb(ddb):
dds_sysclk = v["arguments"]["sysclk"] dds_sysclk = v["arguments"]["sysclk"]
widget = _WidgetDesc(k, comment, _DDSWidget, (bus_channel, channel, k)) widget = _WidgetDesc(k, comment, _DDSWidget, (bus_channel, channel, k))
description.add(widget) description.add(widget)
elif ( (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53XX") elif ( (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53xx")
or (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino")): or (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino")):
spi_device = v["arguments"]["spi_device"] spi_device = v["arguments"]["spi_device"]
spi_device = ddb[spi_device] spi_device = ddb[spi_device]
@ -399,6 +399,9 @@ class _DeviceManager:
self.disconnect_cb) self.disconnect_cb)
try: try:
await new_core_connection.connect(self.core_addr, 1383) await new_core_connection.connect(self.core_addr, 1383)
except asyncio.CancelledError:
logger.info("cancelled connection to core device moninj")
break
except: except:
logger.error("failed to connect to core device moninj", exc_info=True) logger.error("failed to connect to core device moninj", exc_info=True)
await asyncio.sleep(10.) await asyncio.sleep(10.)

View File

@ -97,18 +97,89 @@ for i in range(4):
} }
} }
"""
artiq_route routing.bin init
artiq_route routing.bin set 0 0
artiq_route routing.bin set 1 1 0
artiq_route routing.bin set 2 1 1 0
artiq_route routing.bin set 3 2 0
artiq_route routing.bin set 4 2 1 0
artiq_coremgmt -D kasli config write -f routing_table routing.bin
"""
for sayma in range(2):
amc_base = 0x010000 + sayma*0x020000
rtm_base = 0x020000 + sayma*0x020000
for i in range(4):
device_db["led" + str(4*sayma+i)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": amc_base + i}
}
for i in range(2):
device_db["ttl_mcx" + str(2*sayma+i)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": amc_base + 4 + i}
}
for i in range(8): for i in range(8):
device_db["sawg" + str(i)] = { device_db["sawg" + str(8*sayma+i)] = {
"type": "local", "type": "local",
"module": "artiq.coredevice.sawg", "module": "artiq.coredevice.sawg",
"class": "SAWG", "class": "SAWG",
"arguments": {"channel_base": i*10+0x010006, "parallelism": 4} "arguments": {"channel_base": amc_base + 6 + i*10, "parallelism": 4}
}
for basemod in range(2):
for i in range(4):
device_db["sawg_sw" + str(8*sayma+4*basemod+i)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": rtm_base + basemod*9 + i}
}
att_idx = 2*sayma + basemod
device_db["basemod_att_rst_n"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": rtm_base + basemod*9 + 4}
}
device_db["basemod_att_clk"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": rtm_base + basemod*9 + 5}
}
device_db["basemod_att_le"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": rtm_base + basemod*9 + 6}
}
device_db["basemod_att_mosi"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": rtm_base + basemod*9 + 7}
}
device_db["basemod_att_miso"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": rtm_base + basemod*9 + 8}
}
device_db["basemod_att"+str(att_idx)] = {
"type": "local",
"module": "artiq.coredevice.basemod_att",
"class": "BaseModAtt",
"arguments": {
"rst_n": "basemod_att_rst_n"+str(att_idx),
"clk": "basemod_att_clk"+str(att_idx),
"le": "basemod_att_le"+str(att_idx),
"mosi": "basemod_att_mosi"+str(att_idx),
"miso": "basemod_att_miso"+str(att_idx),
}
} }
for i in range(8):
device_db["sawg" + str(8+i)] = {
"type": "local",
"module": "artiq.coredevice.sawg",
"class": "SAWG",
"arguments": {"channel_base": i*10+0x020006, "parallelism": 4}
}

View File

@ -0,0 +1,25 @@
from artiq.experiment import *
class BaseMod(EnvExperiment):
def build(self):
self.setattr_device("core")
self.basemods = [self.get_device("basemod_att0"), self.get_device("basemod_att1")]
self.rfsws = [self.get_device("sawg_sw"+str(i)) for i in range(8)]
@kernel
def run(self):
self.core.reset()
for basemod in self.basemods:
self.core.break_realtime()
delay(10*ms)
basemod.reset()
delay(10*ms)
basemod.set(0.0, 0.0, 0.0, 0.0)
delay(10*ms)
print(basemod.get_mu())
self.core.break_realtime()
for rfsw in self.rfsws:
rfsw.on()
delay(1*ms)

View File

@ -8,7 +8,7 @@ class Sines2Sayma(EnvExperiment):
@kernel @kernel
def drtio_is_up(self): def drtio_is_up(self):
for i in range(3): for i in range(5):
if not self.core.get_rtio_destination_status(i): if not self.core.get_rtio_destination_status(i):
return False return False
return True return True

View File

@ -23,7 +23,7 @@ class SinesUrukulSayma(EnvExperiment):
@kernel @kernel
def drtio_is_up(self): def drtio_is_up(self):
for i in range(2): for i in range(3):
if not self.core.get_rtio_destination_status(i): if not self.core.get_rtio_destination_status(i):
return False return False
return True return True

View File

@ -491,7 +491,7 @@ pub extern fn main() -> i32 {
println!(r"|_| |_|_|____/ \___/ \____|"); println!(r"|_| |_|_|____/ \___/ \____|");
println!(""); println!("");
println!("MiSoC Bootloader"); println!("MiSoC Bootloader");
println!("Copyright (c) 2017-2019 M-Labs Limited"); println!("Copyright (c) 2017-2021 M-Labs Limited");
println!(""); println!("");
#[cfg(has_ethmac)] #[cfg(has_ethmac)]

View File

@ -70,6 +70,7 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(sqrt), api!(sqrt),
api!(round), api!(round),
api!(floor), api!(floor),
api!(fmod),
/* exceptions */ /* exceptions */
api!(_Unwind_Resume = ::unwind::_Unwind_Resume), api!(_Unwind_Resume = ::unwind::_Unwind_Resume),

View File

@ -91,7 +91,7 @@ mod imp {
unsafe { unsafe {
csr::rtio::target_write(target as u32); csr::rtio::target_write(target as u32);
// writing target clears o_data // writing target clears o_data
for i in 0..data.len() { for i in (0..data.len()).rev() {
rtio_o_data_write(i, data[i] as _) rtio_o_data_write(i, data[i] as _)
} }
let status = csr::rtio::o_status_read(); let status = csr::rtio::o_status_read();

View File

@ -355,6 +355,8 @@ pub fn setup(dacno: u8, linerate: u64) -> Result<(), &'static str> {
pub fn status(dacno: u8) { pub fn status(dacno: u8) {
spi_setup(dacno); spi_setup(dacno);
info!("Printing status of AD9154-{}", dacno);
info!("PRODID: 0x{:04x}", (read(ad9154_reg::PRODIDH) as u16) << 8 | (read(ad9154_reg::PRODIDL) as u16));
info!("SERDES_PLL_LOCK: {}", info!("SERDES_PLL_LOCK: {}",
(read(ad9154_reg::PLL_STATUS) & ad9154_reg::SERDES_PLL_LOCK_RB)); (read(ad9154_reg::PLL_STATUS) & ad9154_reg::SERDES_PLL_LOCK_RB));
info!(""); info!("");

View File

@ -1,60 +0,0 @@
use board_misoc::{csr, clock};
const PIN_LE: u32 = 1 << 0;
const PIN_SIN: u32 = 1 << 1;
const PIN_CLK: u32 = 1 << 2;
const PIN_RST_N: u32 = 1 << 3;
const PIN_RST: u32 = PIN_RST_N;
const CARDS: usize = 4;
const CHANNELS: usize = 2;
fn set_pins(card_index: usize, chan_index: usize, pins: u32) {
let pins = pins ^ PIN_RST_N;
let shift = (card_index * 2 + chan_index)*4;
unsafe {
let state = csr::allaki_atts::out_read();
let state = state & !(0xf << shift);
let state = state | (pins << shift);
csr::allaki_atts::out_write(state);
}
clock::spin_us(100);
}
/// Attenuation is in units of 0.5 dB, from 0 dB (0) to 31.5 dB (63).
pub fn program(card_index: usize, chan_index: usize, atten: u8) {
assert!(card_index < 4 && chan_index < 2);
info!("card {} channel {} set to {}{} dB",
card_index, chan_index,
atten / 2, if atten % 2 != 0 { ".5" } else { "" });
// 0b111111 = 0dB
// 0b111110 = 0.5dB
// 0b111101 = 1dB
// 0b111100 = 1.5dB
// ...
// 0b011111 = 16dB
// ...
// 0b000000 = 31.5dB
let atten = !atten << 2;
let set_pins = |pins| set_pins(card_index, chan_index, pins);
set_pins(PIN_RST);
set_pins(0);
for n in (0..8).rev() {
let sin = if atten & 1 << n != 0 { PIN_SIN } else { 0 };
set_pins(sin);
set_pins(sin | PIN_CLK);
}
set_pins(PIN_LE);
}
/// See `program`.
pub fn program_all(atten: u8) {
for card in 0..CARDS {
for chan in 0..CHANNELS {
program(card, chan, atten)
}
}
}

View File

@ -150,11 +150,11 @@ pub mod hmc7043 {
// enabled, divider, output config, is sysref // enabled, divider, output config, is sysref
const OUTPUT_CONFIG: [(bool, u16, u8, bool); 14] = [ const OUTPUT_CONFIG: [(bool, u16, u8, bool); 14] = [
(true, DAC_CLK_DIV, 0x08, false), // 0: DAC1_CLK (true, DAC_CLK_DIV, 0x08, false), // 0: DAC1_CLK
(true, SYSREF_DIV, 0x00, true), // 1: DAC1_SYSREF (true, SYSREF_DIV, 0x01, true), // 1: DAC1_SYSREF
(true, DAC_CLK_DIV, 0x08, false), // 2: DAC0_CLK (true, DAC_CLK_DIV, 0x08, false), // 2: DAC0_CLK
(true, SYSREF_DIV, 0x00, true), // 3: DAC0_SYSREF (true, SYSREF_DIV, 0x01, true), // 3: DAC0_SYSREF
(true, SYSREF_DIV, 0x10, true), // 4: AMC_FPGA_SYSREF0 (true, SYSREF_DIV, 0x10, true), // 4: AMC_FPGA_SYSREF0
(true, FPGA_CLK_DIV, 0x10, true), // 5: AMC_FPGA_SYSREF1 (false, FPGA_CLK_DIV, 0x10, true), // 5: AMC_FPGA_SYSREF1
(false, 0, 0x10, false), // 6: unused (false, 0, 0x10, false), // 6: unused
(true, SYSREF_DIV, 0x10, true), // 7: RTM_FPGA_SYSREF0 (true, SYSREF_DIV, 0x10, true), // 7: RTM_FPGA_SYSREF0
(true, FPGA_CLK_DIV, 0x08, false), // 8: GTP_CLK0_IN (true, FPGA_CLK_DIV, 0x08, false), // 8: GTP_CLK0_IN

View File

@ -33,8 +33,6 @@ pub mod hmc830_7043;
mod ad9154_reg; mod ad9154_reg;
#[cfg(has_ad9154)] #[cfg(has_ad9154)]
pub mod ad9154; pub mod ad9154;
#[cfg(has_allaki_atts)]
pub mod hmc542;
#[cfg(has_grabber)] #[cfg(has_grabber)]
pub mod grabber; pub mod grabber;

View File

@ -186,6 +186,8 @@ fn init() -> Result<()> {
i2c::pca9548_select(BUSNO, 0x70, 1 << 4)?; i2c::pca9548_select(BUSNO, 0x70, 1 << 4)?;
#[cfg(soc_platform = "sayma_rtm")] #[cfg(soc_platform = "sayma_rtm")]
i2c::pca9548_select(BUSNO, 0x77, 1 << 5)?; i2c::pca9548_select(BUSNO, 0x77, 1 << 5)?;
#[cfg(soc_platform = "metlino")]
i2c::pca9548_select(BUSNO, 0x70, 1 << 4)?;
#[cfg(soc_platform = "kc705")] #[cfg(soc_platform = "kc705")]
i2c::pca9548_select(BUSNO, 0x74, 1 << 7)?; i2c::pca9548_select(BUSNO, 0x74, 1 << 7)?;

View File

@ -14,6 +14,12 @@ mod imp {
} }
} }
fn scl_i(busno: u8) -> bool {
unsafe {
csr::i2c::in_read() & scl_bit(busno) != 0
}
}
fn sda_oe(busno: u8, oe: bool) { fn sda_oe(busno: u8, oe: bool) {
unsafe { unsafe {
let reg = csr::i2c::oe_read(); let reg = csr::i2c::oe_read();
@ -49,13 +55,10 @@ mod imp {
pub fn init() -> Result<(), &'static str> { pub fn init() -> Result<(), &'static str> {
for busno in 0..csr::CONFIG_I2C_BUS_COUNT { for busno in 0..csr::CONFIG_I2C_BUS_COUNT {
let busno = busno as u8; let busno = busno as u8;
// Set SCL as output, and high level scl_oe(busno, false);
scl_o(busno, true);
scl_oe(busno, true);
// Prepare a zero level on SDA so that sda_oe pulls it down
sda_o(busno, false);
// Release SDA
sda_oe(busno, false); sda_oe(busno, false);
scl_o(busno, false);
sda_o(busno, false);
// Check the I2C bus is ready // Check the I2C bus is ready
half_period(); half_period();
@ -63,9 +66,9 @@ mod imp {
if !sda_i(busno) { if !sda_i(busno) {
// Try toggling SCL a few times // Try toggling SCL a few times
for _bit in 0..8 { for _bit in 0..8 {
scl_o(busno, false); scl_oe(busno, true);
half_period(); half_period();
scl_o(busno, true); scl_oe(busno, false);
half_period(); half_period();
} }
} }
@ -73,6 +76,10 @@ mod imp {
if !sda_i(busno) { if !sda_i(busno) {
return Err("SDA is stuck low and doesn't get unstuck"); return Err("SDA is stuck low and doesn't get unstuck");
} }
if !scl_i(busno) {
return Err("SCL is stuck low and doesn't get unstuck");
}
// postcondition: SCL and SDA high
} }
Ok(()) Ok(())
} }
@ -81,11 +88,17 @@ mod imp {
if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT { if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT {
return Err(INVALID_BUS) return Err(INVALID_BUS)
} }
// Set SCL high then SDA low // precondition: SCL and SDA high
scl_o(busno, true); if !scl_i(busno) {
half_period(); return Err("SCL is stuck low and doesn't get unstuck");
}
if !sda_i(busno) {
return Err("SDA arbitration lost");
}
sda_oe(busno, true); sda_oe(busno, true);
half_period(); half_period();
scl_oe(busno, true);
// postcondition: SCL and SDA low
Ok(()) Ok(())
} }
@ -93,13 +106,13 @@ mod imp {
if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT { if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT {
return Err(INVALID_BUS) return Err(INVALID_BUS)
} }
// Set SCL low then SDA high */ // precondition SCL and SDA low
scl_o(busno, false);
half_period();
sda_oe(busno, false); sda_oe(busno, false);
half_period(); half_period();
// Do a regular start scl_oe(busno, false);
half_period();
start(busno)?; start(busno)?;
// postcondition: SCL and SDA low
Ok(()) Ok(())
} }
@ -107,15 +120,16 @@ mod imp {
if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT { if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT {
return Err(INVALID_BUS) return Err(INVALID_BUS)
} }
// First, make sure SCL is low, so that the target releases the SDA line // precondition: SCL and SDA low
scl_o(busno, false);
half_period(); half_period();
// Set SCL high then SDA high scl_oe(busno, false);
sda_oe(busno, true);
scl_o(busno, true);
half_period(); half_period();
sda_oe(busno, false); sda_oe(busno, false);
half_period(); half_period();
if !sda_i(busno) {
return Err("SDA arbitration lost");
}
// postcondition: SCL and SDA high
Ok(()) Ok(())
} }
@@ -123,57 +137,53 @@ mod imp {
         if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT {
             return Err(INVALID_BUS)
         }
+        // precondition: SCL and SDA low
         // MSB first
         for bit in (0..8).rev() {
-            // Set SCL low and set our bit on SDA
-            scl_o(busno, false);
             sda_oe(busno, data & (1 << bit) == 0);
             half_period();
-            // Set SCL high ; data is shifted on the rising edge of SCL
-            scl_o(busno, true);
+            scl_oe(busno, false);
             half_period();
+            scl_oe(busno, true);
         }
-        // Check ack
-        // Set SCL low, then release SDA so that the I2C target can respond
-        scl_o(busno, false);
-        half_period();
         sda_oe(busno, false);
-        // Set SCL high and check for ack
-        scl_o(busno, true);
         half_period();
-        // returns true if acked (I2C target pulled SDA low)
-        Ok(!sda_i(busno))
+        scl_oe(busno, false);
+        half_period();
+        // Read ack/nack
+        let ack = !sda_i(busno);
+        scl_oe(busno, true);
+        sda_oe(busno, true);
+        // postcondition: SCL and SDA low
+        Ok(ack)
     }

     pub fn read(busno: u8, ack: bool) -> Result<u8, &'static str> {
         if busno as u32 >= csr::CONFIG_I2C_BUS_COUNT {
             return Err(INVALID_BUS)
         }
-        // Set SCL low first, otherwise setting SDA as input may cause a transition
-        // on SDA with SCL high which will be interpreted as START/STOP condition.
-        scl_o(busno, false);
-        half_period(); // make sure SCL has settled low
+        // precondition: SCL and SDA low
         sda_oe(busno, false);
         let mut data: u8 = 0;
         // MSB first
         for bit in (0..8).rev() {
-            scl_o(busno, false);
             half_period();
-            // Set SCL high and shift data
-            scl_o(busno, true);
+            scl_oe(busno, false);
             half_period();
             if sda_i(busno) { data |= 1 << bit }
+            scl_oe(busno, true);
         }
-        // Send ack
-        // Set SCL low and pull SDA low when acking
-        scl_o(busno, false);
-        if ack { sda_oe(busno, true) }
+        // Send ack/nack
+        sda_oe(busno, ack);
         half_period();
-        // then set SCL high
-        scl_o(busno, true);
+        scl_oe(busno, false);
         half_period();
+        scl_oe(busno, true);
+        sda_oe(busno, true);
+        // postcondition: SCL and SDA low
         Ok(data)
     }
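Since every primitive now leaves the bus in its documented "SCL and SDA low" state, a typical register read is just the usual start/write/restart/read/stop composition. A hedged usage sketch built only on the functions in this file (the device address, register and error strings are placeholders, not taken from the patch):

    // Sketch only: read one register from a 7-bit device address `addr`.
    fn i2c_read_reg(busno: u8, addr: u8, reg: u8) -> Result<u8, &'static str> {
        start(busno)?;
        if !write(busno, addr << 1)? {          // address + write bit
            stop(busno)?;
            return Err("I2C device failed to ack address");
        }
        if !write(busno, reg)? {                // register pointer
            stop(busno)?;
            return Err("I2C device failed to ack register");
        }
        restart(busno)?;
        if !write(busno, (addr << 1) | 1)? {    // address + read bit
            stop(busno)?;
            return Err("I2C device failed to ack read address");
        }
        let value = read(busno, false)?;        // NACK the single byte
        stop(busno)?;
        Ok(value)
    }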
@@ -194,13 +204,13 @@ mod imp {
 #[cfg(not(has_i2c))]
 mod imp {
     const NO_I2C: &'static str = "No I2C support on this platform";
-    pub fn init() { Err(NO_I2C) }
+    pub fn init() -> Result<(), &'static str> { Err(NO_I2C) }
     pub fn start(_busno: u8) -> Result<(), &'static str> { Err(NO_I2C) }
     pub fn restart(_busno: u8) -> Result<(), &'static str> { Err(NO_I2C) }
     pub fn stop(_busno: u8) -> Result<(), &'static str> { Err(NO_I2C) }
     pub fn write(_busno: u8, _data: u8) -> Result<bool, &'static str> { Err(NO_I2C) }
     pub fn read(_busno: u8, _ack: bool) -> Result<u8, &'static str> { Err(NO_I2C) }
-    pub fn pca9548_select(busno: u8, address: u8, channels: u8) -> Result<(), &'static str> { Err(NO_I2C) }
+    pub fn pca9548_select(_busno: u8, _address: u8, _channels: u8) -> Result<(), &'static str> { Err(NO_I2C) }
 }

 pub use self::imp::*;
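The stub module keeps pca9548_select in the API. On platforms that do have I2C, selecting channels on a PCA9548 bus switch amounts to an addressed single-byte write of the channel bitmask. A sketch of that transaction using the primitives above (the real implementation in this file may differ in detail; the 7-bit addressing shown here is an assumption):

    // Sketch only: enable the PCA9548 channels whose bits are set in `channels`.
    fn pca9548_select_sketch(busno: u8, address: u8, channels: u8) -> Result<(), &'static str> {
        start(busno)?;
        if !write(busno, address << 1)? {
            stop(busno)?;
            return Err("PCA9548 failed to ack address");
        }
        if !write(busno, channels)? {
            stop(busno)?;
            return Err("PCA9548 failed to ack channel mask");
        }
        stop(busno)?;
        Ok(())
    }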


@@ -88,25 +88,6 @@ fn setup_log_levels() {
     }
 }

-fn sayma_hw_init() {
-    #[cfg(has_hmc830_7043)]
-    /* must be the first SPI init because of HMC830 SPI mode selection */
-    board_artiq::hmc830_7043::init().expect("cannot initialize HMC830/7043");
-    #[cfg(has_ad9154)]
-    {
-        board_artiq::ad9154::jesd_reset(false);
-        board_artiq::ad9154::init();
-        if let Err(e) = board_artiq::jesd204sync::sysref_auto_rtio_align() {
-            error!("failed to align SYSREF at FPGA: {}", e);
-        }
-        if let Err(e) = board_artiq::jesd204sync::sysref_auto_dac_align() {
-            error!("failed to align SYSREF at DAC: {}", e);
-        }
-    }
-    #[cfg(has_allaki_atts)]
-    board_artiq::hmc542::program_all(8/*=4dB*/);
-}
-
 fn startup() {
     irq::set_mask(0);
     irq::set_ie(true);
@@ -118,7 +99,6 @@ fn startup() {
     setup_log_levels();
     #[cfg(has_i2c)]
     board_misoc::i2c::init().expect("I2C initialization failed");
-    sayma_hw_init();
     rtio_clocking::init();
     let mut net_device = unsafe { ethmac::EthernetDevice::new() };


@@ -67,6 +67,19 @@ pub mod crg {
 #[cfg(si5324_as_synthesizer)]
 fn setup_si5324_as_synthesizer() {
+    // 125 MHz output from 10 MHz CLKIN2, 504 Hz BW
+    #[cfg(all(rtio_frequency = "125.0", si5324_ext_ref, ext_ref_frequency = "10.0"))]
+    const SI5324_SETTINGS: si5324::FrequencySettings
+        = si5324::FrequencySettings {
+            n1_hs  : 10,
+            nc1_ls : 4,
+            n2_hs  : 10,
+            n2_ls  : 300,
+            n31    : 75,
+            n32    : 6,
+            bwsel  : 4,
+            crystal_ref: false
+        };
     // 125MHz output, from 100MHz CLKIN2 reference, 586 Hz loop bandwidth
     #[cfg(all(rtio_frequency = "125.0", si5324_ext_ref, ext_ref_frequency = "100.0"))]
     const SI5324_SETTINGS: si5324::FrequencySettings
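As a quick sanity check of the new 10 MHz entry, assuming the standard Si5324 divider relations (fout = fin · n2_hs·n2_ls / (n3 · n1_hs · nc_ls), with n3 = n32 because the reference enters on CLKIN2), the dividers give 125 MHz with the DSPLL oscillator at 5 GHz, inside its nominal ~4.85–5.67 GHz range. A throwaway sketch of that arithmetic, not part of the patch:

    // Sketch only: check the 10 MHz SI5324_SETTINGS above.
    fn si5324_10mhz_check() {
        let fin: u64 = 10_000_000;
        let fosc = fin * (10 * 300) / 6;   // fin * n2_hs*n2_ls / n32 = 5 GHz
        let fout = fosc / (10 * 4);        // / (n1_hs * nc1_ls)
        assert_eq!(fout, 125_000_000);
    }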


@@ -68,7 +68,17 @@ pub mod drtio {
         }
         let reply = aux_transact(io, aux_mutex, linkno, &drtioaux::Packet::EchoRequest);
         match reply {
-            Ok(drtioaux::Packet::EchoReply) => return count,
+            Ok(drtioaux::Packet::EchoReply) => {
+                // make sure receive buffer is drained
+                let max_time = clock::get_ms() + 200;
+                loop {
+                    if clock::get_ms() > max_time {
+                        return count;
+                    }
+                    let _ = drtioaux::recv(linkno);
+                    io.relinquish().unwrap();
+                }
+            }
             _ => {}
         }
         io.relinquish().unwrap();


@@ -629,7 +629,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
     loop {
         if listener.can_accept() {
             let mut stream = listener.accept().expect("session: cannot accept");
-            stream.set_timeout(Some(1000));
+            stream.set_timeout(Some(2250));
             stream.set_keep_alive(Some(500));
             match host::read_magic(&mut stream) {


@@ -2,7 +2,3 @@ pub const INIT: u8 = 0x00;
 pub const PRINT_STATUS: u8 = 0x01;
 pub const PRBS: u8 = 0x02;
 pub const STPL: u8 = 0x03;
-
-pub const SYSREF_DELAY_DAC: u8 = 0x10;
-pub const SYSREF_SLIP: u8 = 0x11;
-pub const SYNC: u8 = 0x12;


@@ -11,7 +11,12 @@ pub mod jesd {
         unsafe {
             (csr::JDCG[dacno as usize].jesd_control_enable_write)(if en {1} else {0})
         }
-        clock::spin_us(5000);
+    }
+
+    pub fn phy_done(dacno: u8) -> bool {
+        unsafe {
+            (csr::JDCG[dacno as usize].jesd_control_phy_done_read)() != 0
+        }
     }

     pub fn ready(dacno: u8) -> bool {
@@ -78,531 +83,61 @@ pub mod jdac {
         }
     }

-    pub fn init() -> Result<(), &'static str> {
-        for dacno in 0..csr::JDCG.len() {
-            let dacno = dacno as u8;
-            info!("DAC-{} initializing...", dacno);
+    fn init_one(dacno: u8) -> Result<(), &'static str> {
         jesd::enable(dacno, true);
-        clock::spin_us(10);
-        if !jesd::ready(dacno) {
-            error!("JESD core reported not ready");
-            return Err("JESD core reported not ready");
+        clock::spin_us(10_000);
+        if !jesd::phy_done(dacno) {
+            error!("JESD core PHY not done");
+            return Err("JESD core PHY not done");
         }

         basic_request(dacno, jdac_requests::INIT, 0)?;

-        // JESD ready depends on JSYNC being valid, so DAC init needs to happen first
-        if !jesd::ready(dacno) {
-            error!("JESD core reported not ready, sending DAC status print request");
-            basic_request(dacno, jdac_requests::PRINT_STATUS, 0)?;
-            return Err("JESD core reported not ready");
-        }
-
         jesd::prbs(dacno, true);
         basic_request(dacno, jdac_requests::PRBS, 0)?;
         jesd::prbs(dacno, false);

-        jesd::stpl(dacno, true);
-        basic_request(dacno, jdac_requests::STPL, 0)?;
-        jesd::stpl(dacno, false);
-
         basic_request(dacno, jdac_requests::INIT, 0)?;
         clock::spin_us(5000);

-        basic_request(dacno, jdac_requests::PRINT_STATUS, 0)?;
-
         if !jesd::jsync(dacno) {
             error!("JESD core reported bad SYNC");
             return Err("JESD core reported bad SYNC");
         }
-
-            info!(" ...done");
-        }
-        Ok(())
-    }
-}
pub mod jesd204sync {
use board_misoc::{csr, clock, config};
use super::jdac;
use super::super::jdac_requests;
const HMC7043_ANALOG_DELAY_RANGE: u8 = 24;
const FPGA_CLK_DIV: u16 = 16; // Keep in sync with hmc830_7043.rs
const SYSREF_DIV: u16 = 256; // Keep in sync with hmc830_7043.rs
fn hmc7043_sysref_delay_dac(dacno: u8, phase_offset: u8) -> Result<(), &'static str> {
match jdac::basic_request(dacno, jdac_requests::SYSREF_DELAY_DAC, phase_offset) {
Ok(_) => Ok(()),
Err(e) => Err(e)
}
}
fn hmc7043_sysref_slip() -> Result<(), &'static str> {
match jdac::basic_request(0, jdac_requests::SYSREF_SLIP, 0) {
Ok(_) => Ok(()),
Err(e) => Err(e)
}
}
fn ad9154_sync(dacno: u8) -> Result<bool, &'static str> {
match jdac::basic_request(dacno, jdac_requests::SYNC, 0) {
Ok(0) => Ok(false),
Ok(_) => Ok(true),
Err(e) => Err(e)
}
}
fn average_2phases(a: i32, b: i32, modulo: i32) -> i32 {
let diff = ((a - b + modulo/2 + modulo) % modulo) - modulo/2;
return (modulo + b + diff/2) % modulo;
}
fn average_phases(phases: &[i32], modulo: i32) -> i32 {
if phases.len() == 1 {
panic!("input array length must be a power of 2");
} else if phases.len() == 2 {
average_2phases(phases[0], phases[1], modulo)
} else {
let cut = phases.len()/2;
average_2phases(
average_phases(&phases[..cut], modulo),
average_phases(&phases[cut..], modulo),
modulo)
}
}
const RAW_DDMTD_N_SHIFT: i32 = 6;
const RAW_DDMTD_N: i32 = 1 << RAW_DDMTD_N_SHIFT;
const DDMTD_DITHER_BITS: i32 = 1;
const DDMTD_N_SHIFT: i32 = RAW_DDMTD_N_SHIFT + DDMTD_DITHER_BITS;
const DDMTD_N: i32 = 1 << DDMTD_N_SHIFT;
fn init_ddmtd() -> Result<(), &'static str> {
unsafe {
csr::sysref_ddmtd::reset_write(1);
clock::spin_us(1);
csr::sysref_ddmtd::reset_write(0);
clock::spin_us(100);
if csr::sysref_ddmtd::locked_read() != 0 {
Ok(())
} else {
Err("DDMTD helper PLL failed to lock")
}
}
}
fn measure_ddmdt_phase_raw() -> i32 {
unsafe { csr::sysref_ddmtd::dt_read() as i32 }
}
fn measure_ddmdt_phase() -> i32 {
const AVG_PRECISION_SHIFT: i32 = 6;
const AVG_PRECISION: i32 = 1 << AVG_PRECISION_SHIFT;
const AVG_MOD: i32 = 1 << (RAW_DDMTD_N_SHIFT + AVG_PRECISION_SHIFT + DDMTD_DITHER_BITS);
let mut measurements = [0; AVG_PRECISION as usize];
for i in 0..AVG_PRECISION {
measurements[i as usize] = measure_ddmdt_phase_raw() << (AVG_PRECISION_SHIFT + DDMTD_DITHER_BITS);
clock::spin_us(10);
}
average_phases(&measurements, AVG_MOD) >> AVG_PRECISION_SHIFT
}
fn test_ddmtd_stability(raw: bool, tolerance: i32) -> Result<(), &'static str> {
info!("testing DDMTD stability (raw={}, tolerance={})...", raw, tolerance);
let modulo = if raw { RAW_DDMTD_N } else { DDMTD_N };
let measurement = if raw { measure_ddmdt_phase_raw } else { measure_ddmdt_phase };
let ntests = if raw { 15000 } else { 150 };
let mut max_pkpk = 0;
for _ in 0..32 {
// If we are near the edges, wraparound can throw off the simple min/max computation.
// In this case, add an offset to get near the center.
let quadrant = measure_ddmdt_phase();
let center_offset =
if quadrant < DDMTD_N/4 || quadrant > 3*DDMTD_N/4 {
modulo/2
} else {
0
};
let mut min = modulo;
let mut max = 0;
for _ in 0..ntests {
let m = (measurement() + center_offset) % modulo;
if m < min {
min = m;
}
if m > max {
max = m;
}
}
let pkpk = max - min;
if pkpk > max_pkpk {
max_pkpk = pkpk;
}
if pkpk > tolerance {
error!(" ...excessive peak-peak jitter: {} (min={} max={} center_offset={})", pkpk,
min, max, center_offset);
return Err("excessive DDMTD peak-peak jitter");
}
hmc7043_sysref_slip();
}
info!(" ...passed, peak-peak jitter: {}", max_pkpk);
         Ok(())
     }

-    fn test_slip_ddmtd() -> Result<(), &'static str> {
+    pub fn init() {
// expected_step = (RTIO clock frequency)*(DDMTD N)/(HMC7043 CLKIN frequency)
let expected_step = 8;
let tolerance = 1;
info!("testing HMC7043 SYSREF slip against DDMTD...");
let mut old_phase = measure_ddmdt_phase();
for _ in 0..1024 {
hmc7043_sysref_slip();
let phase = measure_ddmdt_phase();
let step = (DDMTD_N + old_phase - phase) % DDMTD_N;
if (step - expected_step).abs() > tolerance {
error!(" ...got unexpected step: {} ({} -> {})", step, old_phase, phase);
return Err("HMC7043 SYSREF slip produced unexpected DDMTD step");
}
old_phase = phase;
}
info!(" ...passed");
Ok(())
}
fn sysref_sh_error() -> bool {
unsafe {
csr::sysref_sampler::sh_error_reset_write(1);
clock::spin_us(1);
csr::sysref_sampler::sh_error_reset_write(0);
clock::spin_us(10);
csr::sysref_sampler::sh_error_read() != 0
}
}
const SYSREF_SH_PRECISION_SHIFT: i32 = 5;
const SYSREF_SH_PRECISION: i32 = 1 << SYSREF_SH_PRECISION_SHIFT;
const SYSREF_SH_MOD: i32 = 1 << (DDMTD_N_SHIFT + SYSREF_SH_PRECISION_SHIFT);
#[derive(Default)]
struct SysrefShLimits {
rising_phases: [i32; SYSREF_SH_PRECISION as usize],
falling_phases: [i32; SYSREF_SH_PRECISION as usize],
}
fn measure_sysref_sh_limits() -> Result<SysrefShLimits, &'static str> {
let mut ret = SysrefShLimits::default();
let mut nslips = 0;
let mut rising_n = 0;
let mut falling_n = 0;
let mut previous = sysref_sh_error();
while rising_n < SYSREF_SH_PRECISION || falling_n < SYSREF_SH_PRECISION {
hmc7043_sysref_slip();
nslips += 1;
if nslips > 1024 {
return Err("too many slips and not enough SYSREF S/H error transitions");
}
let current = sysref_sh_error();
let phase = measure_ddmdt_phase();
if current && !previous && rising_n < SYSREF_SH_PRECISION {
ret.rising_phases[rising_n as usize] = phase << SYSREF_SH_PRECISION_SHIFT;
rising_n += 1;
}
if !current && previous && falling_n < SYSREF_SH_PRECISION {
ret.falling_phases[falling_n as usize] = phase << SYSREF_SH_PRECISION_SHIFT;
falling_n += 1;
}
previous = current;
}
Ok(ret)
}
fn max_phase_deviation(average: i32, phases: &[i32]) -> i32 {
let mut ret = 0;
for phase in phases.iter() {
let deviation = (phase - average + DDMTD_N) % DDMTD_N;
if deviation > ret {
ret = deviation;
}
}
return ret;
}
fn reach_sysref_ddmtd_target(target: i32, tolerance: i32) -> Result<i32, &'static str> {
for _ in 0..1024 {
let delta = (measure_ddmdt_phase() - target + DDMTD_N) % DDMTD_N;
if delta <= tolerance {
return Ok(delta)
}
hmc7043_sysref_slip();
}
Err("failed to reach SYSREF DDMTD phase target")
}
fn calibrate_sysref_target(rising_average: i32, falling_average: i32) -> Result<i32, &'static str> {
info!("calibrating SYSREF DDMTD target phase...");
let coarse_target =
if rising_average < falling_average {
(rising_average + falling_average)/2
} else {
((falling_average - (DDMTD_N - rising_average))/2 + DDMTD_N) % DDMTD_N
};
info!(" SYSREF calibration coarse target: {}", coarse_target);
reach_sysref_ddmtd_target(coarse_target, 8)?;
let target = measure_ddmdt_phase();
info!(" ...done, target={}", target);
Ok(target)
}
fn sysref_get_tsc_phase_raw() -> Result<u8, &'static str> {
if sysref_sh_error() {
return Err("SYSREF failed S/H timing");
}
let ret = unsafe { csr::sysref_sampler::sysref_phase_read() };
Ok(ret)
}
// Note: the code below assumes RTIO/SYSREF frequency ratio is a power of 2
fn sysref_get_tsc_phase() -> Result<i32, &'static str> {
let mask = (SYSREF_DIV/FPGA_CLK_DIV - 1) as u8;
Ok((sysref_get_tsc_phase_raw()? & mask) as i32)
}
pub fn test_sysref_frequency() -> Result<(), &'static str> {
info!("testing SYSREF frequency against raw TSC phase bit toggles...");
let mut all_toggles = 0;
let initial_phase = sysref_get_tsc_phase_raw()?;
for _ in 0..20000 {
clock::spin_us(1);
all_toggles |= sysref_get_tsc_phase_raw()? ^ initial_phase;
}
let ratio = (SYSREF_DIV/FPGA_CLK_DIV) as u8;
let expected_toggles = 0xff ^ (ratio - 1);
if all_toggles == expected_toggles {
info!(" ...done (0x{:02x})", all_toggles);
Ok(())
} else {
error!(" ...unexpected toggles: got 0x{:02x}, expected 0x{:02x}",
all_toggles, expected_toggles);
Err("unexpected toggles")
}
}
fn sysref_slip_rtio_cycle() {
for _ in 0..FPGA_CLK_DIV {
hmc7043_sysref_slip();
}
}
pub fn test_slip_tsc() -> Result<(), &'static str> {
info!("testing HMC7043 SYSREF slip against TSC phase...");
let initial_phase = sysref_get_tsc_phase()?;
let modulo = (SYSREF_DIV/FPGA_CLK_DIV) as i32;
for i in 0..128 {
sysref_slip_rtio_cycle();
let expected_phase = (initial_phase + i + 1) % modulo;
let phase = sysref_get_tsc_phase()?;
if phase != expected_phase {
error!(" ...unexpected TSC phase: got {}, expected {} ", phase, expected_phase);
return Err("HMC7043 SYSREF slip produced unexpected TSC phase");
}
}
info!(" ...done");
Ok(())
}
pub fn sysref_rtio_align() -> Result<(), &'static str> {
info!("aligning SYSREF with RTIO TSC...");
let mut nslips = 0;
loop {
sysref_slip_rtio_cycle();
if sysref_get_tsc_phase()? == 0 {
info!(" ...done");
return Ok(())
}
nslips += 1;
if nslips > SYSREF_DIV/FPGA_CLK_DIV {
return Err("failed to find SYSREF transition aligned with RTIO TSC");
}
}
}
pub fn sysref_auto_rtio_align() -> Result<(), &'static str> {
init_ddmtd()?;
test_ddmtd_stability(true, 4)?;
test_ddmtd_stability(false, 1)?;
test_slip_ddmtd()?;
info!("determining SYSREF S/H limits...");
let sysref_sh_limits = measure_sysref_sh_limits()?;
let rising_average = average_phases(&sysref_sh_limits.rising_phases, SYSREF_SH_MOD);
let falling_average = average_phases(&sysref_sh_limits.falling_phases, SYSREF_SH_MOD);
let rising_max_deviation = max_phase_deviation(rising_average, &sysref_sh_limits.rising_phases);
let falling_max_deviation = max_phase_deviation(falling_average, &sysref_sh_limits.falling_phases);
let rising_average = rising_average >> SYSREF_SH_PRECISION_SHIFT;
let falling_average = falling_average >> SYSREF_SH_PRECISION_SHIFT;
let rising_max_deviation = rising_max_deviation >> SYSREF_SH_PRECISION_SHIFT;
let falling_max_deviation = falling_max_deviation >> SYSREF_SH_PRECISION_SHIFT;
info!(" SYSREF S/H average limits (DDMTD phases): {} {}", rising_average, falling_average);
info!(" SYSREF S/H maximum limit deviation: {} {}", rising_max_deviation, falling_max_deviation);
if rising_max_deviation > 8 || falling_max_deviation > 8 {
return Err("excessive SYSREF S/H limit deviation");
}
info!(" ...done");
let entry = config::read_str("sysref_ddmtd_phase_fpga", |r| r.map(|s| s.parse()));
let target_phase = match entry {
Ok(Ok(phase)) => {
info!("using FPGA SYSREF DDMTD phase target from config: {}", phase);
phase
}
_ => {
let phase = calibrate_sysref_target(rising_average, falling_average)?;
if let Err(e) = config::write_int("sysref_ddmtd_phase_fpga", phase as u32) {
error!("failed to update FPGA SYSREF DDMTD phase target in config: {}", e);
}
phase
}
};
info!("aligning SYSREF with RTIO clock...");
let delta = reach_sysref_ddmtd_target(target_phase, 3)?;
if sysref_sh_error() {
return Err("SYSREF does not meet S/H timing at DDMTD phase target");
}
info!(" ...done, delta={}", delta);
test_sysref_frequency()?;
test_slip_tsc()?;
sysref_rtio_align()?;
Ok(())
}
fn sysref_cal_dac(dacno: u8) -> Result<u8, &'static str> {
info!("calibrating SYSREF delay at DAC-{}...", dacno);
// Allocate for more than expected as jitter may create spurious entries.
let mut limits_buf = [0; 8];
let mut n_limits = 0;
limits_buf[n_limits] = -1;
n_limits += 1;
// avoid spurious rotation at delay=0
hmc7043_sysref_delay_dac(dacno, 0);
ad9154_sync(dacno)?;
for scan_delay in 0..HMC7043_ANALOG_DELAY_RANGE {
hmc7043_sysref_delay_dac(dacno, scan_delay);
if ad9154_sync(dacno)? {
limits_buf[n_limits] = scan_delay as i16;
n_limits += 1;
if n_limits >= limits_buf.len() - 1 {
break;
}
}
}
limits_buf[n_limits] = HMC7043_ANALOG_DELAY_RANGE as i16;
n_limits += 1;
info!(" using limits: {:?}", &limits_buf[..n_limits]);
let mut delay = 0;
let mut best_margin = 0;
for i in 0..(n_limits-1) {
let margin = limits_buf[i+1] - limits_buf[i];
if margin > best_margin {
best_margin = margin;
delay = ((limits_buf[i+1] + limits_buf[i])/2) as u8;
}
}
info!(" ...done, delay={}", delay);
Ok(delay)
}
fn sysref_dac_align(dacno: u8, delay: u8) -> Result<(), &'static str> {
let tolerance = 5;
info!("verifying SYSREF margins at DAC-{}...", dacno);
// avoid spurious rotation at delay=0
hmc7043_sysref_delay_dac(dacno, 0);
ad9154_sync(dacno)?;
let mut rotation_seen = false;
for scan_delay in 0..HMC7043_ANALOG_DELAY_RANGE {
hmc7043_sysref_delay_dac(dacno, scan_delay);
if ad9154_sync(dacno)? {
rotation_seen = true;
let distance = (scan_delay as i16 - delay as i16).abs();
if distance < tolerance {
error!(" rotation at delay={} is {} delay steps from target (FAIL)", scan_delay, distance);
return Err("insufficient SYSREF margin at DAC");
} else {
info!(" rotation at delay={} is {} delay steps from target (PASS)", scan_delay, distance);
}
}
}
if !rotation_seen {
return Err("no rotation seen when scanning DAC SYSREF delay");
}
info!(" ...done");
// We tested that the value is correct - now use it
hmc7043_sysref_delay_dac(dacno, delay);
ad9154_sync(dacno)?;
Ok(())
}
pub fn sysref_auto_dac_align() -> Result<(), &'static str> {
// We assume that DAC SYSREF traces are length-matched so only one delay
// value is needed, and we use DAC-0 as calibration reference.
let entry = config::read_str("sysref_7043_delay_dac", |r| r.map(|s| s.parse()));
let delay = match entry {
Ok(Ok(delay)) => {
info!("using DAC SYSREF delay from config: {}", delay);
delay
},
_ => {
let delay = sysref_cal_dac(0)?;
if let Err(e) = config::write_int("sysref_7043_delay_dac", delay as u32) {
error!("failed to update DAC SYSREF delay in config: {}", e);
}
delay
}
};
         for dacno in 0..csr::JDCG.len() {
-            sysref_dac_align(dacno as u8, delay)?;
+            let dacno = dacno as u8;
+            info!("DAC-{} initializing...", dacno);
+            match init_one(dacno) {
+                Ok(()) => info!(" ...done"),
+                Err(e) => error!(" ...failed: {}", e)
+            }
+        }
+    }
+
+    pub fn stpl() -> Result<(), &'static str> {
+        for dacno in 0..csr::JDCG.len() {
+            let dacno = dacno as u8;
+            info!("Running STPL test on DAC-{}...", dacno);
+            jesd::stpl(dacno, true);
+            basic_request(dacno, jdac_requests::STPL, 0)?;
+            jesd::stpl(dacno, false);
+            info!(" ...done STPL test");
         }
         Ok(())
     }
-
-    pub fn sysref_auto_align() {
-        if let Err(e) = sysref_auto_rtio_align() {
-            error!("failed to align SYSREF at FPGA: {}", e);
-        }
-        if let Err(e) = sysref_auto_dac_align() {
-            error!("failed to align SYSREF at DAC: {}", e);
-        }
-    }
 }
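One behavioural note on the new bring-up path: init_one() waits a fixed 10 ms after enabling the JESD core and then checks phy_done() once. A poll-with-timeout variant would look roughly like the sketch below; this is not what the patch does, and it assumes the clock::get_ms()/spin_us() helpers used elsewhere in this firmware (the wait_phy_done name is made up):

    // Sketch only: poll jesd::phy_done() instead of a fixed delay.
    fn wait_phy_done(dacno: u8, timeout_ms: u64) -> Result<(), &'static str> {
        let deadline = clock::get_ms() + timeout_ms;
        while !jesd::phy_done(dacno) {
            if clock::get_ms() > deadline {
                return Err("JESD core PHY not done");
            }
            clock::spin_us(100);
        }
        Ok(())
    }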


@@ -306,18 +306,6 @@ fn process_aux_packet(_repeaters: &mut [repeater::Repeater],
                 jdac_requests::PRINT_STATUS => { board_artiq::ad9154::status(_dacno); (true, 0) },
                 jdac_requests::PRBS => (board_artiq::ad9154::prbs(_dacno).is_ok(), 0),
                 jdac_requests::STPL => (board_artiq::ad9154::stpl(_dacno, 4, 2).is_ok(), 0),
-                jdac_requests::SYSREF_DELAY_DAC => { board_artiq::hmc830_7043::hmc7043::sysref_delay_dac(_dacno, _param); (true, 0) },
-                jdac_requests::SYSREF_SLIP => { board_artiq::hmc830_7043::hmc7043::sysref_slip(); (true, 0) },
-                jdac_requests::SYNC => {
-                    match board_artiq::ad9154::sync(_dacno) {
-                        Ok(false) => (true, 0),
-                        Ok(true) => (true, 1),
-                        Err(e) => {
-                            error!("DAC sync failed: {}", e);
-                            (false, 0)
-                        }
-                    }
-                }
                 _ => (false, 0)
             }
         };
@@ -465,8 +453,6 @@ pub extern fn main() -> i32 {
             board_artiq::ad9154::reset_and_detect(dacno as u8).expect("AD9154 DAC not detected");
         }
     }
-    #[cfg(has_allaki_atts)]
-    board_artiq::hmc542::program_all(8/*=4dB*/);

     #[cfg(has_drtio_routing)]
     let mut repeaters = [repeater::Repeater::default(); csr::DRTIOREP.len()];
@@ -509,9 +495,6 @@ pub extern fn main() -> i32 {
              * Si5324 is locked to the recovered clock.
              */
             jdcg::jesd::reset(false);
-            if repeaters[0].is_up() {
-                let _ = jdcg::jdac::init();
-            }
         }

         drtioaux::reset(0);
@@ -519,7 +502,7 @@ pub extern fn main() -> i32 {
         drtiosat_reset_phy(false);

         #[cfg(has_jdcg)]
-        let mut rep0_was_up = repeaters[0].is_up();
+        let mut rep0_was_up = false;

         while drtiosat_link_rx_up() {
             drtiosat_process_errors();
             process_aux_packets(&mut repeaters, &mut routing_table, &mut rank);
@@ -529,12 +512,6 @@ pub extern fn main() -> i32 {
             hardware_tick(&mut hardware_tick_ts);
             if drtiosat_tsc_loaded() {
                 info!("TSC loaded from uplink");
-                #[cfg(has_jdcg)]
-                {
-                    if rep0_was_up {
-                        jdcg::jesd204sync::sysref_auto_align();
-                    }
-                }
                 for rep in repeaters.iter() {
                     if let Err(e) = rep.sync_tsc() {
                         error!("failed to sync TSC ({})", e);
@@ -548,8 +525,10 @@ pub extern fn main() -> i32 {
             {
                 let rep0_is_up = repeaters[0].is_up();
                 if rep0_is_up && !rep0_was_up {
-                    let _ = jdcg::jdac::init();
-                    jdcg::jesd204sync::sysref_auto_align();
+                    jdcg::jdac::init();
+                    if let Err(e) = jdcg::jdac::stpl() {
+                        error!("STPL test failed: {}", e);
+                    }
                 }
                 rep0_was_up = rep0_is_up;
             }


@@ -10,10 +10,11 @@ from PyQt5 import QtCore, QtGui, QtWidgets
 from quamash import QEventLoop

 from sipyco.asyncio_tools import atexit_register_coroutine
+from sipyco import common_args

 from artiq import __version__ as artiq_version
 from artiq import __artiq_dir__ as artiq_dir
-from artiq.tools import add_common_args, get_user_config_dir
+from artiq.tools import get_user_config_dir
 from artiq.gui import state, applets, models, log
 from artiq.browser import datasets, files, experiments
@@ -41,7 +42,7 @@ def get_argparser():
                         help="TCP port to use to connect to the master")
     parser.add_argument("select", metavar="SELECT", nargs="?",
                         help="directory to browse or file to load")
-    add_common_args(parser)
+    common_args.verbosity_args(parser)
     return parser


@@ -20,7 +20,7 @@ from prettytable import PrettyTable
 from sipyco.pc_rpc import Client
 from sipyco.sync_struct import Subscriber
 from sipyco.broadcast import Receiver
-from sipyco import pyon
+from sipyco import common_args, pyon

 from artiq.tools import short_format, parse_arguments
 from artiq import __version__ as artiq_version
@@ -122,6 +122,7 @@ def get_argparser():
         "ls", help="list a directory on the master")
     parser_ls.add_argument("directory", default="", nargs="?")

+    common_args.verbosity_args(parser)
     return parser


@@ -14,7 +14,6 @@ from sipyco.broadcast import Receiver
 from sipyco import common_args
 from sipyco.asyncio_tools import atexit_register_coroutine

-from artiq import __version__ as artiq_version
 from artiq import __artiq_dir__ as artiq_dir, __version__ as artiq_version
 from artiq.tools import get_user_config_dir
 from artiq.gui.models import ModelSubscriber


@@ -75,23 +75,22 @@ Prerequisites:
                         help="actions to perform, default: %(default)s")
     return parser


+def openocd_root():
+    openocd = shutil.which("openocd")
+    if not openocd:
+        raise FileNotFoundError("OpenOCD is required but was not found in PATH. Is it installed?")
+    return os.path.dirname(os.path.dirname(openocd))
+
+
 def scripts_path():
     p = ["share", "openocd", "scripts"]
     if os.name == "nt":
         p.insert(0, "Library")
-    p = os.path.abspath(os.path.join(
-        os.path.dirname(os.path.realpath(shutil.which("openocd"))),
-        "..", *p))
-    return p
+    return os.path.abspath(os.path.join(openocd_root(), *p))


 def proxy_path():
-    p = ["share", "bscan-spi-bitstreams"]
-    p = os.path.abspath(os.path.join(
-        os.path.dirname(os.path.realpath(shutil.which("openocd"))),
-        "..", *p))
-    return p
+    return os.path.abspath(os.path.join(openocd_root(), "share", "bscan-spi-bitstreams"))


 def find_proxy_bitfile(filename):
@@ -293,7 +292,7 @@ class ProgrammerMetlino(Programmer):
         add_commands(self._script, "echo \"AMC FPGA XADC:\"", "xadc_report xcu.tap")

     def load_proxy(self):
-        self.load(find_proxy_bitfile("bscan_spi_xcku040-sayma.bit"), pld=0)
+        self.load(find_proxy_bitfile("bscan_spi_xcku040.bit"), pld=0)

     def start(self):
         add_commands(self._script, "xcu_program xcu.tap")
@@ -372,7 +371,7 @@ def main():
         variant_dir = args.target + "-" + variant
     if args.target == "sayma":
         if args.srcbuild:
-            rtm_variant_dir = variant
+            rtm_variant_dir = "rtm"
         else:
             rtm_variant_dir = "sayma-rtm"
@@ -414,7 +413,7 @@ def main():
             gateware_bin = convert_gateware(
                 artifact_path(variant_dir, "gateware", "top.bit"))
             programmer.write_binary(*config["gateware"], gateware_bin)
-            if args.target == "sayma" and variant != "master":
+            if args.target == "sayma" and variant != "simplesatellite" and variant != "master":
                 rtm_gateware_bin = convert_gateware(
                     artifact_path(rtm_variant_dir, "gateware", "top.bit"), header=True)
                 programmer.write_binary(*config["rtm_gateware"],


@@ -8,14 +8,19 @@ from misoc.interconnect.csr import *
 from jesd204b.common import (JESD204BTransportSettings,
                              JESD204BPhysicalSettings,
                              JESD204BSettings)
-from jesd204b.phy.gth import GTHChannelPLL as JESD204BGTHChannelPLL
+from jesd204b.phy.gth import (GTHChannelPLL as JESD204BGTHChannelPLL,
+                              GTHQuadPLL as JESD204BGTHQuadPLL,
+                              GTHTransmitter as JESD204BGTHTransmitter,
+                              GTHInit as JESD204BGTHInit,
+                              GTHTransmitterInterconnect as JESD204BGTHTransmitterInterconnect)
 from jesd204b.phy import JESD204BPhyTX
 from jesd204b.core import JESD204BCoreTX
 from jesd204b.core import JESD204BCoreTXControl


 class UltrascaleCRG(Module, AutoCSR):
-    linerate = int(6e9)
+    linerate = int(6e9)    # linerate = 20*data_rate*4/8 = data_rate*10
+                           # data_rate = dac_rate/interp_factor
     refclk_freq = int(150e6)
     fabric_freq = int(125e6)
@@ -45,27 +50,96 @@ PhyPads = namedtuple("PhyPads", "txp txn")

 class UltrascaleTX(Module, AutoCSR):
-    def __init__(self, platform, sys_crg, jesd_crg, dac):
+    def __init__(self, platform, sys_crg, jesd_crg, dac, pll_type="cpll", tx_half=False):
+        # Note: In general, the choice between channel and quad PLLs can be made based on the
+        # "nominal operating ranges", which are (see UG576, Ch.2):
+        #   CPLL:  2.0 - 6.25 GHz
+        #   QPLL0: 9.8 - 16.375 GHz
+        #   QPLL1: 8.0 - 13.0 GHz
+        # However, the exact frequency and/or linerate range should be checked according to
+        # the model and speed grade from their corresponding datasheets.
+        pll_cls = {
+            "cpll": JESD204BGTHChannelPLL,
+            "qpll": JESD204BGTHQuadPLL
+        }[pll_type]
         ps = JESD204BPhysicalSettings(l=8, m=4, n=16, np=16)
         ts = JESD204BTransportSettings(f=2, s=2, k=16, cs=0)
         settings = JESD204BSettings(ps, ts, did=0x5a, bid=0x5)

         jesd_pads = platform.request("dac_jesd", dac)
+        plls = []
         phys = []
         for i in range(len(jesd_pads.txp)):
-            cpll = JESD204BGTHChannelPLL(
+            pll = pll_cls(
                     jesd_crg.refclk, jesd_crg.refclk_freq, jesd_crg.linerate)
-            self.submodules += cpll
+            self.submodules += pll
+            plls.append(pll)
+        # QPLL quads
+        if pll_type == "qpll":
+            gthe3_common_cfgs = []
+            for i in range(0, len(plls), 4):
+                # GTHE3_COMMON common signals
+                qpll_clk = Signal()
+                qpll_refclk = Signal()
+                qpll_reset = Signal()
+                qpll_lock = Signal()
+                # GTHE3_COMMON
+                self.specials += pll_cls.get_gthe3_common(
+                    jesd_crg.refclk, jesd_crg.refclk_freq, jesd_crg.linerate,
+                    qpll_clk, qpll_refclk, qpll_reset, qpll_lock)
+                gthe3_common_cfgs.append({
+                    "clk": qpll_clk,
+                    "refclk": qpll_refclk,
+                    "reset": qpll_reset,
+                    "lock": qpll_lock
+                })
+        # Per-channel PLL phys
+        for i, pll in enumerate(plls):
+            # PhyTX
             phy = JESD204BPhyTX(
-                cpll, PhyPads(jesd_pads.txp[i], jesd_pads.txn[i]),
-                jesd_crg.fabric_freq, transceiver="gth")
+                pll, jesd_crg.refclk, PhyPads(jesd_pads.txp[i], jesd_pads.txn[i]),
+                jesd_crg.fabric_freq, transceiver="gth", tx_half=tx_half)
+            phys.append(phy)
+            if tx_half:
+                platform.add_period_constraint(phy.transmitter.cd_tx_half.clk,
+                                               80*1e9/jesd_crg.linerate)
+                platform.add_false_path_constraints(
+                    sys_crg.cd_sys.clk,
+                    jesd_crg.cd_jesd.clk,
+                    phy.transmitter.cd_tx_half.clk)
+            else:
                 platform.add_period_constraint(phy.transmitter.cd_tx.clk,
                                                40*1e9/jesd_crg.linerate)
                 platform.add_false_path_constraints(
                     sys_crg.cd_sys.clk,
                     jesd_crg.cd_jesd.clk,
                     phy.transmitter.cd_tx.clk)
-            phys.append(phy)
+        # CHANNEL & init interconnects
+        for i, (pll, phy) in enumerate(zip(plls, phys)):
+            # CPLLs: 1 init per channel
+            if pll_type == "cpll":
+                phy_channel_cfg = {}
+                # Connect reset/lock to init
+                pll_reset = pll.reset
+                pll_lock = pll.lock
+                self.submodules += JESD204BGTHTransmitterInterconnect(
+                    pll_reset, pll_lock, phy.transmitter, phy.transmitter.init)
+            # QPLL: 4 inits and 4 channels per quad
+            elif pll_type == "qpll":
+                # Connect clk/refclk to CHANNEL
+                phy_cfg = gthe3_common_cfgs[int(i//4)]
+                phy_channel_cfg = {
+                    "qpll_clk": phy_cfg["clk"],
+                    "qpll_refclk": phy_cfg["refclk"]
+                }
+                # Connect reset/lock to init
+                pll_reset = phy_cfg["reset"]
+                pll_lock = phy_cfg["lock"]
+                if i % 4 == 0:
+                    self.submodules += JESD204BGTHTransmitterInterconnect(
+                        pll_reset, pll_lock, phy.transmitter,
+                        [phys[j].transmitter.init for j in range(i, min(len(phys), i+4))])
+            # GTHE3_CHANNEL
+            self.specials += JESD204BGTHTransmitter.get_gthe3_channel(
+                pll, phy.transmitter, **phy_channel_cfg)

         self.submodules.core = JESD204BCoreTX(
             phys, settings, converter_data_width=64)

@@ -4,7 +4,7 @@ from artiq.gateware.rtio.phy import ttl_serdes_generic

 class _OSERDESE2_8X(Module):
-    def __init__(self, pad, pad_n=None):
+    def __init__(self, pad, pad_n=None, invert=False):
         self.o = Signal(8)
         self.t_in = Signal()
         self.t_out = Signal()
@@ -16,12 +16,13 @@ class _OSERDESE2_8X(Module):
         self.specials += Instance("OSERDESE2",
             p_DATA_RATE_OQ="DDR", p_DATA_RATE_TQ="BUF",
             p_DATA_WIDTH=8, p_TRISTATE_WIDTH=1,
+            p_INIT_OQ=0b11111111 if invert else 0b00000000,
             o_OQ=pad_o, o_TQ=self.t_out,
             i_RST=ResetSignal("rio_phy"),
             i_CLK=ClockSignal("rtiox4"),
             i_CLKDIV=ClockSignal("rio_phy"),
-            i_D1=o[0], i_D2=o[1], i_D3=o[2], i_D4=o[3],
-            i_D5=o[4], i_D6=o[5], i_D7=o[6], i_D8=o[7],
+            i_D1=o[0] ^ invert, i_D2=o[1] ^ invert, i_D3=o[2] ^ invert, i_D4=o[3] ^ invert,
+            i_D5=o[4] ^ invert, i_D6=o[5] ^ invert, i_D7=o[6] ^ invert, i_D8=o[7] ^ invert,
             i_TCE=1, i_OCE=1,
             i_T1=self.t_in)
         if pad_n is None:
@@ -106,8 +107,8 @@ class _IOSERDESE2_8X(Module):

 class Output_8X(ttl_serdes_generic.Output):
-    def __init__(self, pad, pad_n=None):
-        serdes = _OSERDESE2_8X(pad, pad_n)
+    def __init__(self, pad, pad_n=None, invert=False):
+        serdes = _OSERDESE2_8X(pad, pad_n, invert=invert)
         self.submodules += serdes
         ttl_serdes_generic.Output.__init__(self, serdes)


@@ -95,7 +95,6 @@ def peripheral_grabber(module, peripheral):
     eem.Grabber.add_std(module, port, port_aux, port_aux2)


-def add_peripherals(module, peripherals):
 peripheral_processors = {
     "dio": peripheral_dio,
     "urukul": peripheral_urukul,
@@ -105,6 +104,9 @@ def add_peripherals(module, peripherals):
     "zotino": peripheral_zotino,
     "grabber": peripheral_grabber,
 }

+
+def add_peripherals(module, peripherals):
     for peripheral in peripherals:
         peripheral_processors[peripheral["type"]](module, peripheral)


@@ -53,6 +53,8 @@ class Master(MiniSoC, AMPSoC):
         platform = self.platform
         rtio_clk_freq = 150e6

+        self.comb += platform.request("input_clk_sel").eq(1)
+        self.comb += platform.request("filtered_clk_sel").eq(1)
         self.submodules.si5324_rst_n = gpio.GPIOOut(platform.request("si5324").rst_n)
         self.csr_devices.append("si5324_rst_n")
         i2c = self.platform.request("i2c")


@@ -169,6 +169,8 @@ class SatelliteBase(MiniSoC):
 # JESD204 DAC Channel Group
 class JDCG(Module, AutoCSR):
     def __init__(self, platform, sys_crg, jesd_crg, dac):
+        # Kintex Ultrascale GTH, speed grade -1C:
+        # CPLL linerate (D=1): 4.0 - 8.5 Gb/s
         self.submodules.jesd = jesd204_tools.UltrascaleTX(
             platform, sys_crg, jesd_crg, dac)
@@ -288,12 +290,6 @@ class Satellite(SatelliteBase):
         self.jdcg_0.jesd.core.register_jref(self.sysref_sampler.jref)
         self.jdcg_1.jesd.core.register_jref(self.sysref_sampler.jref)

-        # DDMTD
-        # https://github.com/sinara-hw/Sayma_RTM/issues/68
-        sysref_pads = platform.request("amc_fpga_sysref", 1)
-        self.submodules.sysref_ddmtd = jesd204_tools.DDMTD(sysref_pads, self.rtio_clk_freq)
-        self.csr_devices.append("sysref_ddmtd")
-

 class SimpleSatellite(SatelliteBase):
     def __init__(self, **kwargs):
@@ -365,6 +361,7 @@ class Master(MiniSoC, AMPSoC):
         self.config["SI5324_AS_SYNTHESIZER"] = None
         self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6)

+        self.comb += platform.request("filtered_clk_sel").eq(1)
         self.comb += platform.request("sfp_tx_disable", 0).eq(0)
         self.submodules.drtio_transceiver = gth_ultrascale.GTH(
             clock_pads=platform.request("cdr_clk_clean", 0),


@@ -16,7 +16,7 @@ from misoc.targets.sayma_rtm import BaseSoC, soc_sayma_rtm_args, soc_sayma_rtm_a
 from misoc.integration.builder import Builder, builder_args, builder_argdict

 from artiq.gateware import rtio
-from artiq.gateware.rtio.phy import ttl_serdes_7series
+from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_7series
 from artiq.gateware.drtio.transceiver import gtp_7series
 from artiq.gateware.drtio.siphaser import SiPhaser7Series
 from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer
@@ -141,12 +141,13 @@ class _SatelliteBase(BaseSoC):
             platform.add_false_path_constraints(
                 self.crg.cd_sys.clk, self.siphaser.mmcm_freerun_output)
             self.csr_devices.append("siphaser")
+            self.submodules.si5324_rst_n = gpio.GPIOOut(platform.request("si5324").rst_n)
+            self.csr_devices.append("si5324_rst_n")
             i2c = self.platform.request("i2c")
             self.submodules.i2c = gpio.GPIOTristate([i2c.scl, i2c.sda])
             self.csr_devices.append("i2c")
             self.config["I2C_BUS_COUNT"] = 1
             self.config["HAS_SI5324"] = None
-            self.config["SI5324_SOFT_RESET"] = None

             rtio_clk_period = 1e9/rtio_clk_freq
             gtp = self.drtio_transceiver.gtps[0]
@@ -176,15 +177,37 @@ class Satellite(_SatelliteBase):
         platform = self.platform

         rtio_channels = []
-        phy = ttl_serdes_7series.Output_8X(platform.request("allaki0_rfsw0"))
-        self.submodules += phy
-        rtio_channels.append(rtio.Channel.from_phy(phy))
-        phy = ttl_serdes_7series.Output_8X(platform.request("allaki0_rfsw1"))
-        self.submodules += phy
-        rtio_channels.append(rtio.Channel.from_phy(phy))
+        for bm in range(2):
+            print("BaseMod{} RF switches starting at RTIO channel 0x{:06x}"
+                  .format(bm, len(rtio_channels)))
+            for i in range(4):
+                phy = ttl_serdes_7series.Output_8X(platform.request("basemod{}_rfsw".format(bm), i),
+                                                   invert=True)
+                self.submodules += phy
+                rtio_channels.append(rtio.Channel.from_phy(phy))
+
+            print("BaseMod{} attenuator starting at RTIO channel 0x{:06x}"
+                  .format(bm, len(rtio_channels)))
+            basemod_att = platform.request("basemod{}_att".format(bm))
+            for name in "rst_n clk le".split():
+                signal = getattr(basemod_att, name)
+                for i in range(len(signal)):
+                    phy = ttl_simple.Output(signal[i])
+                    self.submodules += phy
+                    rtio_channels.append(rtio.Channel.from_phy(phy))
+            phy = ttl_simple.Output(basemod_att.mosi[0])
+            self.submodules += phy
+            rtio_channels.append(rtio.Channel.from_phy(phy))
+            for i in range(3):
+                self.comb += basemod_att.mosi[i+1].eq(basemod_att.miso[i])
+            phy = ttl_simple.InOut(basemod_att.miso[3])
+            self.submodules += phy
+            rtio_channels.append(rtio.Channel.from_phy(phy))

         self.add_rtio(rtio_channels)

+        self.comb += platform.request("clk_src_ext_sel").eq(0)
+
         # HMC clock chip and DAC control
         self.comb += [
             platform.request("ad9154_rst_n", 0).eq(1),


@@ -43,7 +43,7 @@ class TB(Module):
                 )
             ]
             cnv_old = Signal(reset_less=True)
-            self.sync.async += [
+            self.sync.async_ += [
                 cnv_old.eq(self.cnv),
                 If(Cat(cnv_old, self.cnv) == 0b10,
                     sr.eq(Cat(reversed(self.data[2*i:2*i + 2]))),
@@ -62,7 +62,7 @@ class TB(Module):
     def _dly(self, sig, n=0):
         n += self.params.t_rtt*4//2  # t_{sys,adc,ret}/t_async half rtt
         dly = Signal(n, reset_less=True)
-        self.sync.async += dly.eq(Cat(sig, dly))
+        self.sync.async_ += dly.eq(Cat(sig, dly))
         return dly[-1]
@@ -85,8 +85,8 @@ def main():
         assert not (yield dut.done)
         while not (yield dut.done):
             yield
-        x = (yield from [(yield d) for d in dut.data])
-        for i, ch in enumerate(x):
+        for i, d in enumerate(dut.data):
+            ch = yield d
             assert ch == i, (hex(ch), hex(i))

     run_simulation(tb, [run(tb)],
@@ -95,7 +95,7 @@ def main():
             "sys": (8, 0),
             "adc": (8, 0),
             "ret": (8, 0),
-            "async": (2, 0),
+            "async_": (2, 0),
         },
     )


@@ -44,8 +44,10 @@ class TB(Module):
                 yield
                 dat = []
                 for dds in self.ddss:
-                    v = yield from [(yield getattr(dds, k))
-                                    for k in "cmd ftw pow asf".split()]
+                    v = []
+                    for k in "cmd ftw pow asf".split():
+                        f = yield getattr(dds, k)
+                        v.append(f)
                     dat.append(v)
                 data.append((i, dat))
             else:


@@ -91,7 +91,7 @@ def main():
             "sys": (8, 0),
             "adc": (8, 0),
             "ret": (8, 0),
-            "async": (2, 0),
+            "async_": (2, 0),
         })


@@ -1,62 +0,0 @@
helper_xn1 = 0
helper_xn2 = 0
helper_yn0 = 0
helper_yn1 = 0
helper_yn2 = 0
previous_helper_tag = 0
main_xn1 = 0
main_xn2 = 0
main_yn0 = 0
main_yn1 = 0
main_yn2 = 0
def helper(helper_tag):
global helper_xn1, helper_xn2, helper_yn0, \
helper_yn1, helper_yn2, previous_helper_tag
helper_xn0 = helper_tag - previous_helper_tag - 32768
helper_yr = 4294967296
helper_yn2 = helper_yn1
helper_yn1 = helper_yn0
helper_yn0 = (
((284885689*((217319150*helper_xn0 >> 44) +
(-17591968725107*helper_xn1 >> 44))) >> 44) +
(-35184372088832*helper_yn1 >> 44) -
(17592186044416*helper_yn2 >> 44))
helper_xn2 = helper_xn1
helper_xn1 = helper_xn0
previous_helper_tag = helper_tag
helper_yn0 = min(helper_yn0, helper_yr)
helper_yn0 = max(helper_yn0, 0 - helper_yr)
return helper_yn0
def main(main_xn0):
global main_xn1, main_xn2, main_yn0, main_yn1, main_yn2
main_yr = 4294967296
main_yn2 = main_yn1
main_yn1 = main_yn0
main_yn0 = (
((133450380908*(((35184372088832*main_xn0) >> 44) +
((17592186044417*main_xn1) >> 44))) >> 44) +
((29455872930889*main_yn1) >> 44) -
((12673794781453*main_yn2) >> 44))
main_xn2 = main_xn1
main_xn1 = main_xn0
main_yn0 = min(main_yn0, main_yr)
main_yn0 = max(main_yn0, 0 - main_yr)
return main_yn0


@@ -1,265 +0,0 @@
from migen import *
from migen.genlib.fsm import *
from migen.genlib.cdc import MultiReg, PulseSynchronizer, BlindTransfer
from misoc.interconnect.csr import *
class I2CClockGen(Module):
def __init__(self, width):
self.load = Signal(width)
self.clk2x = Signal()
cnt = Signal.like(self.load)
self.comb += [
self.clk2x.eq(cnt == 0),
]
self.sync += [
If(self.clk2x,
cnt.eq(self.load),
).Else(
cnt.eq(cnt - 1),
)
]
class I2CMasterMachine(Module):
def __init__(self, clock_width):
self.scl = Signal(reset=1)
self.sda_o = Signal(reset=1)
self.sda_i = Signal()
self.submodules.cg = CEInserter()(I2CClockGen(clock_width))
self.idle = Signal()
self.start = Signal()
self.stop = Signal()
self.write = Signal()
self.read = Signal()
self.ack = Signal()
self.data = Signal(8)
###
busy = Signal()
bits = Signal(4)
fsm = CEInserter()(FSM("IDLE"))
self.submodules += fsm
fsm.act("IDLE",
If(self.start,
NextState("START0"),
).Elif(self.stop & self.start,
NextState("RESTART0"),
).Elif(self.stop,
NextState("STOP0"),
).Elif(self.write,
NextValue(bits, 8),
NextState("WRITE0"),
).Elif(self.read,
NextValue(bits, 8),
NextState("READ0"),
)
)
fsm.act("START0",
NextValue(self.scl, 1),
NextState("START1"))
fsm.act("START1",
NextValue(self.sda_o, 0),
NextState("IDLE"))
fsm.act("RESTART0",
NextValue(self.scl, 0),
NextState("RESTART1"))
fsm.act("RESTART1",
NextValue(self.sda_o, 1),
NextState("START0"))
fsm.act("STOP0",
NextValue(self.scl, 0),
NextState("STOP1"))
fsm.act("STOP1",
NextValue(self.scl, 1),
NextValue(self.sda_o, 0),
NextState("STOP2"))
fsm.act("STOP2",
NextValue(self.sda_o, 1),
NextState("IDLE"))
fsm.act("WRITE0",
NextValue(self.scl, 0),
If(bits == 0,
NextValue(self.sda_o, 1),
NextState("READACK0"),
).Else(
NextValue(self.sda_o, self.data[7]),
NextState("WRITE1"),
)
)
fsm.act("WRITE1",
NextValue(self.scl, 1),
NextValue(self.data[1:], self.data[:-1]),
NextValue(bits, bits - 1),
NextState("WRITE0"),
)
fsm.act("READACK0",
NextValue(self.scl, 1),
NextState("READACK1"),
)
fsm.act("READACK1",
NextValue(self.ack, ~self.sda_i),
NextState("IDLE")
)
fsm.act("READ0",
NextValue(self.scl, 0),
NextState("READ1"),
)
fsm.act("READ1",
NextValue(self.data[0], self.sda_i),
NextValue(self.scl, 0),
If(bits == 0,
NextValue(self.sda_o, ~self.ack),
NextState("WRITEACK0"),
).Else(
NextValue(self.sda_o, 1),
NextState("READ2"),
)
)
fsm.act("READ2",
NextValue(self.scl, 1),
NextValue(self.data[:-1], self.data[1:]),
NextValue(bits, bits - 1),
NextState("READ1"),
)
fsm.act("WRITEACK0",
NextValue(self.scl, 1),
NextState("IDLE"),
)
run = Signal()
self.comb += [
run.eq(self.start | self.stop | self.write | self.read),
self.idle.eq(~run & fsm.ongoing("IDLE")),
self.cg.ce.eq(~self.idle),
fsm.ce.eq(run | self.cg.clk2x),
]
class ADPLLProgrammer(Module):
def __init__(self):
self.i2c_divider = Signal(16)
self.i2c_address = Signal(7)
self.adpll = Signal(24)
self.stb = Signal()
self.busy = Signal()
self.nack = Signal()
self.scl = Signal()
self.sda_i = Signal()
self.sda_o = Signal()
self.scl.attr.add("no_retiming")
self.sda_o.attr.add("no_retiming")
# # #
master = I2CMasterMachine(16)
self.submodules += master
self.comb += [
master.cg.load.eq(self.i2c_divider.storage),
self.scl.eq(master.scl),
master.sda_i.eq(self.sda_i),
self.sda_o.eq(master.sda_o)
]
class Si549(Module, AutoCSR):
def __init__(self, pads):
self.gpio_enable = CSRStorage(reset=1)
self.gpio_in = CSRStatus(2)
self.gpio_out = CSRStorage(2)
self.gpio_oe = CSRStorage(2)
self.i2c_divider = CSRStorage(16)
self.i2c_address = CSRStorage(7)
self.errors = CSR(2)
# in helper clock domain
self.adpll = Signal(24)
self.adpll_stb = Signal()
# # #
programmer = ClockDomainsRenamer("helper")(ADPLLProgrammer())
self.submodules += programmer
self.i2c_divider.storage.attr.add("no_retiming")
self.i2c_address.storage.attr.add("no_retiming")
self.specials += [
MultiReg(self.i2c_divider.storage, programmer.i2c_divider, "helper"),
MultiReg(self.i2c_address.storage, programmer.i2c_address, "helper")
]
self.comb += [
programmer.adpll.eq(self.adpll),
programmer.adpll_stb.eq(self.adpll_stb)
]
self.gpio_enable.storage.attr.add("no_retiming")
self.gpio_out.storage.attr.add("no_retiming")
self.gpio_oe.storage.attr.add("no_retiming")
# SCL GPIO and mux
ts_scl = TSTriple(1)
self.specials += ts_scl.get_tristate(pads.scl)
status = Signal()
self.comb += self.gpio_in.status[0].eq(status)
self.specials += MultiReg(ts_scl.i, status)
self.comb += [
If(self.gpio_enable.storage,
ts_scl.o.eq(self.gpio_out.storage[0]),
ts_scl.oe.eq(self.gpio_oe.storage[0])
).Else(
ts_scl.o.eq(programmer.scl),
ts_scl.oe.eq(1)
)
]
# SDA GPIO and mux
ts_sda = TSTriple(1)
self.specials += ts_sda.get_tristate(pads.sda)
status = Signal()
self.comb += self.gpio_in.status[1].eq(status)
self.specials += MultiReg(ts_sda.i, status)
self.comb += [
If(self.gpio_enable.storage,
ts_sda.o.eq(self.gpio_out.storage[1]),
ts_sda.oe.eq(self.gpio_oe.storage[1])
).Else(
ts_sda.o.eq(0),
ts_sda.oe.eq(~programmer.sda_o)
)
]
self.specials += MultiReg(ts_sda.i, programmer.sda_i, "helper")
# Error reporting
collision_cdc = BlindTransfer("helper", "sys")
self.submodules += collision_cdc
self.comb += collision_cdc.i.eq(programmer.stb & programmer.busy)
nack_cdc = PulseSynchronizer("helper", "sys")
self.submodules += nack_cdc
self.comb += nack_cdc.i.eq(programmer.nack)
for n, trig in enumerate([collision_cdc.o, nack_cdc.o]):
self.sync += [
If(self.errors.re & self.errors.r[n], self.errors.w[n].eq(0)),
If(trig, self.errors.w[n].eq(1))
]


@@ -1,636 +0,0 @@
import inspect
import ast
from copy import copy
import operator
from functools import reduce
from collections import OrderedDict
from migen import *
from migen.genlib.fsm import *
class Isn:
def __init__(self, immediate=None, inputs=None, outputs=None):
if inputs is None:
inputs = []
if outputs is None:
outputs = []
self.immediate = immediate
self.inputs = inputs
self.outputs = outputs
def __repr__(self):
r = "<"
r += self.__class__.__name__
if self.immediate is not None:
r += " (" + str(self.immediate) + ")"
for inp in self.inputs:
r += " r" + str(inp)
if self.outputs:
r += " ->"
for outp in self.outputs:
r += " r" + str(outp)
r += ">"
return r
class NopIsn(Isn):
opcode = 0
class AddIsn(Isn):
opcode = 1
class SubIsn(Isn):
opcode = 2
class MulShiftIsn(Isn):
opcode = 3
# opcode = 4: MulShift with alternate shift
class MinIsn(Isn):
opcode = 5
class MaxIsn(Isn):
opcode = 6
class CopyIsn(Isn):
opcode = 7
class InputIsn(Isn):
opcode = 8
class OutputIsn(Isn):
opcode = 9
class EndIsn(Isn):
opcode = 10
class ASTCompiler:
def __init__(self):
self.program = []
self.data = []
self.next_ssa_reg = -1
self.constants = dict()
self.names = dict()
self.globals = OrderedDict()
def get_ssa_reg(self):
r = self.next_ssa_reg
self.next_ssa_reg -= 1
return r
def add_global(self, name):
if name not in self.globals:
r = len(self.data)
self.data.append(0)
self.names[name] = r
self.globals[name] = r
def input(self, name):
target = self.get_ssa_reg()
self.program.append(InputIsn(outputs=[target]))
self.names[name] = target
def emit(self, node):
if isinstance(node, ast.BinOp):
if isinstance(node.op, ast.RShift):
if not isinstance(node.left, ast.BinOp) or not isinstance(node.left.op, ast.Mult):
raise NotImplementedError
if not isinstance(node.right, ast.Num):
raise NotImplementedError
left = self.emit(node.left.left)
right = self.emit(node.left.right)
cons = lambda **kwargs: MulShiftIsn(immediate=node.right.n, **kwargs)
else:
left = self.emit(node.left)
right = self.emit(node.right)
if isinstance(node.op, ast.Add):
cons = AddIsn
elif isinstance(node.op, ast.Sub):
cons = SubIsn
elif isinstance(node.op, ast.Mult):
cons = lambda **kwargs: MulShiftIsn(immediate=0, **kwargs)
else:
raise NotImplementedError
output = self.get_ssa_reg()
self.program.append(cons(inputs=[left, right], outputs=[output]))
return output
elif isinstance(node, ast.Call):
if not isinstance(node.func, ast.Name):
raise NotImplementedError
funcname = node.func.id
if node.keywords:
raise NotImplementedError
inputs = [self.emit(x) for x in node.args]
if funcname == "min":
cons = MinIsn
elif funcname == "max":
cons = MaxIsn
else:
raise NotImplementedError
output = self.get_ssa_reg()
self.program.append(cons(inputs=inputs, outputs=[output]))
return output
elif isinstance(node, (ast.Num, ast.UnaryOp)):
if isinstance(node, ast.UnaryOp):
if not isinstance(node.operand, ast.Num):
raise NotImplementedError
if isinstance(node.op, ast.UAdd):
transform = lambda x: x
elif isinstance(node.op, ast.USub):
transform = operator.neg
elif isinstance(node.op, ast.Invert):
transform = operator.invert
else:
raise NotImplementedError
node = node.operand
else:
transform = lambda x: x
n = transform(node.n)
if n in self.constants:
return self.constants[n]
else:
r = len(self.data)
self.data.append(n)
self.constants[n] = r
return r
elif isinstance(node, ast.Name):
return self.names[node.id]
elif isinstance(node, ast.Assign):
output = self.emit(node.value)
for target in node.targets:
assert isinstance(target, ast.Name)
self.names[target.id] = output
elif isinstance(node, ast.Return):
value = self.emit(node.value)
self.program.append(OutputIsn(inputs=[value]))
elif isinstance(node, ast.Global):
pass
else:
raise NotImplementedError
class Processor:
    def __init__(self, data_width=32, multiplier_stages=2):
        self.data_width = data_width
        self.multiplier_stages = multiplier_stages
        self.multiplier_shifts = []
        self.program_rom_size = None
        self.data_ram_size = None
        self.opcode_bits = 4
        self.reg_bits = None

    def get_instruction_latency(self, isn):
        return {
            AddIsn: 2,
            SubIsn: 2,
            MulShiftIsn: 1 + self.multiplier_stages,
            MinIsn: 2,
            MaxIsn: 2,
            CopyIsn: 1,
            InputIsn: 1
        }[isn.__class__]

    def encode_instruction(self, isn, exit):
        opcode = isn.opcode
        if isn.immediate is not None and not isinstance(isn, MulShiftIsn):
            r0 = isn.immediate
            if len(isn.inputs) >= 1:
                r1 = isn.inputs[0]
            else:
                r1 = 0
        else:
            if len(isn.inputs) >= 1:
                r0 = isn.inputs[0]
            else:
                r0 = 0
            if len(isn.inputs) >= 2:
                r1 = isn.inputs[1]
            else:
                r1 = 0
        r = 0
        for value, bits in ((exit, self.reg_bits), (r1, self.reg_bits), (r0, self.reg_bits), (opcode, self.opcode_bits)):
            r <<= bits
            r |= value
        return r

    def instruction_bits(self):
        return 3*self.reg_bits + self.opcode_bits

    def implement(self, program, data):
        return ProcessorImpl(self, program, data)
class Scheduler:
    def __init__(self, processor, reserved_data, program):
        self.processor = processor
        self.reserved_data = reserved_data
        self.used_registers = set(range(self.reserved_data))
        self.exits = dict()
        self.program = program
        self.remaining = copy(program)
        self.output = []

    def allocate_register(self):
        r = min(set(range(max(self.used_registers) + 2)) - self.used_registers)
        self.used_registers.add(r)
        return r

    def free_register(self, r):
        assert r >= self.reserved_data
        self.used_registers.discard(r)

    def find_inputs(self, cycle, isn):
        mapped_inputs = []
        for inp in isn.inputs:
            if inp >= 0:
                mapped_inputs.append(inp)
            else:
                found = False
                for i in range(cycle):
                    if i in self.exits:
                        r, rm = self.exits[i]
                        if r == inp:
                            mapped_inputs.append(rm)
                            found = True
                            break
                if not found:
                    return None
        return mapped_inputs

    def schedule_one(self, isn):
        cycle = len(self.output)
        mapped_inputs = self.find_inputs(cycle, isn)
        if mapped_inputs is None:
            return False
        if isn.outputs:
            # check that exit slot is free
            latency = self.processor.get_instruction_latency(isn)
            exit = cycle + latency
            if exit in self.exits:
                return False
            # avoid RAW hazard with global writeback
            for output in isn.outputs:
                if output >= 0:
                    for risn in self.remaining:
                        for inp in risn.inputs:
                            if inp == output:
                                return False
        # Instruction can be scheduled
        self.remaining.remove(isn)
        for inp, minp in zip(isn.inputs, mapped_inputs):
            can_free = inp < 0 and all(inp != rinp for risn in self.remaining for rinp in risn.inputs)
            if can_free:
                self.free_register(minp)
        if isn.outputs:
            assert len(isn.outputs) == 1
            if isn.outputs[0] < 0:
                output = self.allocate_register()
            else:
                output = isn.outputs[0]
            self.exits[exit] = (isn.outputs[0], output)
        self.output.append(isn.__class__(immediate=isn.immediate, inputs=mapped_inputs))
        return True

    def schedule(self):
        while self.remaining:
            success = False
            for isn in self.remaining:
                if self.schedule_one(isn):
                    success = True
                    break
            if not success:
                self.output.append(NopIsn())
        self.output += [NopIsn()]*(max(self.exits.keys()) - len(self.output) + 1)
        return self.output
class CompiledProgram:
    def __init__(self, processor, program, exits, data, glbs):
        self.processor = processor
        self.program = program
        self.exits = exits
        self.data = data
        self.globals = glbs

    def pretty_print(self):
        for cycle, isn in enumerate(self.program):
            l = "{:4d} {:15}".format(cycle, str(isn))
            if cycle in self.exits:
                l += " -> r{}".format(self.exits[cycle])
            print(l)

    def dimension_processor(self):
        self.processor.program_rom_size = len(self.program)
        self.processor.data_ram_size = len(self.data)
        self.processor.reg_bits = (self.processor.data_ram_size - 1).bit_length()
        for isn in self.program:
            if isinstance(isn, MulShiftIsn) and isn.immediate not in self.processor.multiplier_shifts:
                self.processor.multiplier_shifts.append(isn.immediate)

    def encode(self):
        r = []
        for i, isn in enumerate(self.program):
            exit = self.exits.get(i, 0)
            r.append(self.processor.encode_instruction(isn, exit))
        return r

def compile(processor, function):
    node = ast.parse(inspect.getsource(function))
    assert isinstance(node, ast.Module)
    assert len(node.body) == 1
    node = node.body[0]
    assert isinstance(node, ast.FunctionDef)
    assert len(node.args.args) == 1
    arg = node.args.args[0].arg
    body = node.body

    astcompiler = ASTCompiler()
    for node in body:
        if isinstance(node, ast.Global):
            for name in node.names:
                astcompiler.add_global(name)
    arg_r = astcompiler.input(arg)
    for node in body:
        astcompiler.emit(node)
        if isinstance(node, ast.Return):
            break
    for glbl, location in astcompiler.globals.items():
        new_location = astcompiler.names[glbl]
        if new_location != location:
            astcompiler.program.append(CopyIsn(inputs=[new_location], outputs=[location]))

    scheduler = Scheduler(processor, len(astcompiler.data), astcompiler.program)
    scheduler.schedule()

    program = copy(scheduler.output)
    program.append(EndIsn())

    max_reg = max(max(max(isn.inputs + [0]) for isn in program), max(v[1] for k, v in scheduler.exits.items()))

    return CompiledProgram(
        processor=processor,
        program=program,
        exits={k: v[1] for k, v in scheduler.exits.items()},
        data=astcompiler.data + [0]*(max_reg - len(astcompiler.data) + 1),
        glbs=astcompiler.globals)
class BaseUnit(Module):
    def __init__(self, data_width):
        self.stb_i = Signal()
        self.i0 = Signal((data_width, True))
        self.i1 = Signal((data_width, True))
        self.stb_o = Signal()
        self.o = Signal((data_width, True))

class NopUnit(BaseUnit):
    pass

class OpUnit(BaseUnit):
    def __init__(self, op, data_width, stages):
        BaseUnit.__init__(self, data_width)
        o = op(self.i0, self.i1)
        stb_o = self.stb_i
        for i in range(stages):
            n_o = Signal(data_width)
            n_stb_o = Signal()
            self.sync += [
                n_o.eq(o),
                n_stb_o.eq(stb_o)
            ]
            o = n_o
            stb_o = n_stb_o
        self.comb += [
            self.o.eq(o),
            self.stb_o.eq(stb_o)
        ]

class SelectUnit(BaseUnit):
    def __init__(self, op, data_width):
        BaseUnit.__init__(self, data_width)
        self.sync += [
            self.stb_o.eq(self.stb_i),
            If(op(self.i0, self.i1),
                self.o.eq(self.i0)
            ).Else(
                self.o.eq(self.i1)
            )
        ]

class CopyUnit(BaseUnit):
    def __init__(self, data_width):
        BaseUnit.__init__(self, data_width)
        self.comb += [
            self.stb_o.eq(self.stb_i),
            self.o.eq(self.i0)
        ]

class InputUnit(BaseUnit):
    def __init__(self, data_width, input_stb, input):
        BaseUnit.__init__(self, data_width)
        self.buffer = Signal(data_width)
        self.comb += [
            self.stb_o.eq(self.stb_i),
            self.o.eq(self.buffer)
        ]

class OutputUnit(BaseUnit):
    def __init__(self, data_width, output_stb, output):
        BaseUnit.__init__(self, data_width)
        self.sync += [
            output_stb.eq(self.stb_i),
            output.eq(self.i0)
        ]
class ProcessorImpl(Module):
    def __init__(self, pd, program, data):
        self.input_stb = Signal()
        self.input = Signal((pd.data_width, True))

        self.output_stb = Signal()
        self.output = Signal((pd.data_width, True))

        self.busy = Signal()

        # # #

        program_mem = Memory(pd.instruction_bits(), pd.program_rom_size, init=program)
        data_mem0 = Memory(pd.data_width, pd.data_ram_size, init=data)
        data_mem1 = Memory(pd.data_width, pd.data_ram_size, init=data)
        self.specials += program_mem, data_mem0, data_mem1

        pc = Signal(pd.instruction_bits())
        pc_next = Signal.like(pc)
        pc_en = Signal()
        self.sync += pc.eq(pc_next)
        self.comb += [
            If(pc_en,
                pc_next.eq(pc + 1)
            ).Else(
                pc_next.eq(0)
            )
        ]
        program_mem_port = program_mem.get_port()
        self.specials += program_mem_port
        self.comb += program_mem_port.adr.eq(pc_next)

        s = 0
        opcode = Signal(pd.opcode_bits)
        self.comb += opcode.eq(program_mem_port.dat_r[s:s+pd.opcode_bits])
        s += pd.opcode_bits
        r0 = Signal(pd.reg_bits)
        self.comb += r0.eq(program_mem_port.dat_r[s:s+pd.reg_bits])
        s += pd.reg_bits
        r1 = Signal(pd.reg_bits)
        self.comb += r1.eq(program_mem_port.dat_r[s:s+pd.reg_bits])
        s += pd.reg_bits
        exit = Signal(pd.reg_bits)
        self.comb += exit.eq(program_mem_port.dat_r[s:s+pd.reg_bits])

        data_read_port0 = data_mem0.get_port()
        data_read_port1 = data_mem1.get_port()
        self.specials += data_read_port0, data_read_port1
        self.comb += [
            data_read_port0.adr.eq(r0),
            data_read_port1.adr.eq(r1)
        ]

        data_write_port = data_mem0.get_port(write_capable=True)
        data_write_port_dup = data_mem1.get_port(write_capable=True)
        self.specials += data_write_port, data_write_port_dup
        self.comb += [
            data_write_port_dup.we.eq(data_write_port.we),
            data_write_port_dup.adr.eq(data_write_port.adr),
            data_write_port_dup.dat_w.eq(data_write_port.dat_w),
            data_write_port.adr.eq(exit)
        ]

        nop = NopUnit(pd.data_width)
        adder = OpUnit(operator.add, pd.data_width, 1)
        subtractor = OpUnit(operator.sub, pd.data_width, 1)
        if pd.multiplier_shifts:
            if len(pd.multiplier_shifts) != 1:
                raise NotImplementedError
            multiplier = OpUnit(lambda a, b: a * b >> pd.multiplier_shifts[0],
                                pd.data_width, pd.multiplier_stages)
        else:
            multiplier = NopUnit(pd.data_width)
        minu = SelectUnit(operator.lt, pd.data_width)
        maxu = SelectUnit(operator.gt, pd.data_width)
        copier = CopyUnit(pd.data_width)
        inu = InputUnit(pd.data_width, self.input_stb, self.input)
        outu = OutputUnit(pd.data_width, self.output_stb, self.output)
        units = [nop, adder, subtractor, multiplier, minu, maxu, copier, inu, outu]
        self.submodules += units

        for unit in units:
            self.sync += unit.stb_i.eq(0)
            self.comb += [
                unit.i0.eq(data_read_port0.dat_r),
                unit.i1.eq(data_read_port1.dat_r),
                If(unit.stb_o,
                    data_write_port.we.eq(1),
                    data_write_port.dat_w.eq(unit.o)
                )
            ]

        decode_table = [
            (NopIsn.opcode, nop),
            (AddIsn.opcode, adder),
            (SubIsn.opcode, subtractor),
            (MulShiftIsn.opcode, multiplier),
            (MulShiftIsn.opcode + 1, multiplier),
            (MinIsn.opcode, minu),
            (MaxIsn.opcode, maxu),
            (CopyIsn.opcode, copier),
            (InputIsn.opcode, inu),
            (OutputIsn.opcode, outu)
        ]
        for allocated_opcode, unit in decode_table:
            self.sync += If(pc_en & (opcode == allocated_opcode), unit.stb_i.eq(1))

        fsm = FSM()
        self.submodules += fsm
        fsm.act("IDLE",
            pc_en.eq(0),
            NextValue(inu.buffer, self.input),
            If(self.input_stb, NextState("PROCESSING"))
        )
        fsm.act("PROCESSING",
            self.busy.eq(1),
            pc_en.eq(1),
            If(opcode == EndIsn.opcode,
                pc_en.eq(0),
                NextState("IDLE")
            )
        )
a = 0
b = 0
c = 0

def foo(x):
    global a, b, c
    c = b
    b = a
    a = x
    return 4748*a + 259*b - 155*c

def simple_test(x):
    global a
    a = a + (x*4 >> 1)
    return a
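# foo() keeps the last three inputs in the globals a, b and c and returns the
# 3-tap filter 4748*x[n] + 259*x[n-1] - 155*x[n-2]; simple_test() accumulates
# x*4 >> 1 (i.e. 2*x) into the global a.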
if __name__ == "__main__":
    proc = Processor()
    cp = compile(proc, simple_test)
    cp.pretty_print()
    cp.dimension_processor()
    print(cp.encode())
    proc_impl = proc.implement(cp.encode(), cp.data)

    def send_values(values):
        for value in values:
            yield proc_impl.input.eq(value)
            yield proc_impl.input_stb.eq(1)
            yield
            yield proc_impl.input.eq(0)
            yield proc_impl.input_stb.eq(0)
            yield
            while (yield proc_impl.busy):
                yield

    @passive
    def receive_values(callback):
        while True:
            while not (yield proc_impl.output_stb):
                yield
            callback((yield proc_impl.output))
            yield

    run_simulation(proc_impl, [send_values([42, 40, 10, 10]), receive_values(print)])
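
A minimal pure-Python sketch of the values the simulation above is expected to print, assuming the generated gateware faithfully evaluates simple_test on each input sample:

# Software reference model for the run_simulation() call above.
ref_a = 0
outputs = []
for x in [42, 40, 10, 10]:
    ref_a = ref_a + (x*4 >> 1)
    outputs.append(ref_a)
print(outputs)  # [84, 164, 184, 204]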

View File

@@ -100,6 +100,17 @@ class NumberEntryInt(QtWidgets.QSpinBox):
         if "default" in procdesc:
             return procdesc["default"]
         else:
+            have_max = "max" in procdesc and procdesc["max"] is not None
+            have_min = "min" in procdesc and procdesc["min"] is not None
+            if have_max and have_min:
+                if procdesc["min"] <= 0 < procdesc["max"]:
+                    return 0
+            elif have_min and not have_max:
+                if procdesc["min"] >= 0:
+                    return procdesc["min"]
+            elif not have_min and have_max:
+                if procdesc["max"] < 0:
+                    return procdesc["max"]
             return 0
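
A standalone sketch of the fallback logic added above (the function name is illustrative, not from the patch): when no explicit default is given, the entry now picks a value consistent with the declared bounds.

def resolve_default(procdesc):
    # Mirrors the default-resolution rules introduced in the hunk above.
    have_max = "max" in procdesc and procdesc["max"] is not None
    have_min = "min" in procdesc and procdesc["min"] is not None
    if have_max and have_min:
        if procdesc["min"] <= 0 < procdesc["max"]:
            return 0
    elif have_min and not have_max:
        if procdesc["min"] >= 0:
            return procdesc["min"]
    elif not have_min and have_max:
        if procdesc["max"] < 0:
            return procdesc["max"]
    return 0

print(resolve_default({"min": 10, "max": None}))   # 10
print(resolve_default({"min": None, "max": -5}))   # -5
print(resolve_default({"min": -1, "max": 100}))    # 0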
@@ -301,7 +312,7 @@ class _CenterScan(LayoutWidget):
         apply_properties(center)
         center.setPrecision()
         center.setRelativeStep()
-        center.setValue(state["center"])
+        center.setValue(state["center"]/scale)
         self.addWidget(center, 0, 1)
         self.addWidget(QtWidgets.QLabel("Center:"), 0, 0)
@@ -311,7 +322,7 @@ class _CenterScan(LayoutWidget):
         span.setPrecision()
         span.setRelativeStep()
         span.setMinimum(0)
-        span.setValue(state["span"])
+        span.setValue(state["span"]/scale)
         self.addWidget(span, 1, 1)
         self.addWidget(QtWidgets.QLabel("Span:"), 1, 0)
@@ -321,7 +332,7 @@ class _CenterScan(LayoutWidget):
         step.setPrecision()
         step.setRelativeStep()
         step.setMinimum(0)
-        step.setValue(state["step"])
+        step.setValue(state["step"]/scale)
         self.addWidget(step, 2, 1)
         self.addWidget(QtWidgets.QLabel("Step:"), 2, 0)
@@ -416,9 +427,10 @@ class ScanEntry(LayoutWidget):
             "selected": "NoScan",
             "NoScan": {"value": 0.0, "repetitions": 1},
             "RangeScan": {"start": 0.0, "stop": 100.0*scale, "npoints": 10,
-                          "randomize": False},
+                          "randomize": False, "seed": None},
             "CenterScan": {"center": 0.*scale, "span": 100.*scale,
-                           "step": 10.*scale, "randomize": False},
+                           "step": 10.*scale, "randomize": False,
+                           "seed": None},
             "ExplicitScan": {"sequence": []}
         }
         if "default" in procdesc:

View File

@@ -8,7 +8,7 @@ from artiq.language import units
 from artiq.language.core import rpc

-__all__ = ["NoDefault",
+__all__ = ["NoDefault", "DefaultMissing",
            "PYONValue", "BooleanValue", "EnumerationValue",
            "NumberValue", "StringValue",
            "HasEnvironment", "Experiment", "EnvExperiment"]

View File

@@ -8,7 +8,7 @@ from artiq.compiler import types, builtins
 __all__ = ["TNone", "TTuple",
            "TBool", "TInt32", "TInt64", "TFloat",
            "TStr", "TBytes", "TByteArray",
-           "TList", "TRange32", "TRange64",
+           "TList", "TArray", "TRange32", "TRange64",
            "TVar"]

 TNone = builtins.TNone()
@@ -21,6 +21,7 @@ TBytes = builtins.TBytes()
 TByteArray = builtins.TByteArray()
 TTuple = types.TTuple
 TList = builtins.TList
+TArray = builtins.TArray
 TRange32 = builtins.TRange(builtins.TInt(types.TValue(32)))
 TRange64 = builtins.TRange(builtins.TInt(types.TValue(64)))
 TVar = types.TVar

View File

@@ -87,17 +87,12 @@ class Run:
         self._notifier[self.rid]["status"] = self._status.name
         self._state_changed.notify()

-    # The run with the largest priority_key is to be scheduled first
-    def priority_key(self, now=None):
-        if self.due_date is None:
-            due_date_k = 0
-        else:
-            due_date_k = -self.due_date
-        if now is not None and self.due_date is not None:
-            runnable = int(now > self.due_date)
-        else:
-            runnable = 1
-        return (runnable, self.priority, due_date_k, -self.rid)
+    def priority_key(self):
+        """Return a comparable value that defines a run priority order.
+
+        Applies only to runs the due date of which has already elapsed.
+        """
+        return (self.priority, -(self.due_date or 0), -self.rid)

     async def close(self):
         # called through pool
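
How the new priority_key() tuples order runs, as a small illustrative sketch (the helper below is a standalone example, not part of the patch): higher priority wins; for equal priorities the earlier due date wins; for equal due dates the lower RID wins.

# Illustrative only: ordering behaviour of (priority, -(due_date or 0), -rid).
def priority_key(priority, due_date, rid):
    return (priority, -(due_date or 0), -rid)

assert priority_key(2, None, 5) > priority_key(1, None, 0)    # priority dominates
assert priority_key(1, 100.0, 5) > priority_key(1, 200.0, 0)  # earlier due date wins
assert priority_key(1, 100.0, 3) > priority_key(1, 100.0, 7)  # lower RID wins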
@@ -162,36 +157,37 @@ class PrepareStage(TaskObject):
         self.delete_cb = delete_cb

     def _get_run(self):
-        """If a run should get prepared now, return it.
-        Otherwise, return a float representing the time before the next timed
-        run becomes due, or None if there is no such run."""
+        """If a run should get prepared now, return it. Otherwise, return a
+        float giving the time until the next check, or None if no time-based
+        check is required.
+
+        The latter can be the case if there are no due-date runs, or none
+        of them are going to become next-in-line before further pool state
+        changes (which will also cause a re-evaluation).
+        """
+        pending_runs = list(
+            filter(lambda r: r.status == RunStatus.pending,
+                   self.pool.runs.values()))
+
         now = time()
-        pending_runs = filter(lambda r: r.status == RunStatus.pending,
-                              self.pool.runs.values())
-        try:
-            candidate = max(pending_runs, key=lambda r: r.priority_key(now))
-        except ValueError:
-            # pending_runs is an empty sequence
-            return None
+        def is_runnable(r):
+            return (r.due_date or 0) < now

-        prepared_runs = filter(lambda r: r.status == RunStatus.prepare_done,
-                               self.pool.runs.values())
-        try:
-            top_prepared_run = max(prepared_runs,
-                                   key=lambda r: r.priority_key())
-        except ValueError:
-            # there are no existing prepared runs - go ahead with <candidate>
-            pass
-        else:
-            # prepare <candidate> (as well) only if it has higher priority than
-            # the highest priority prepared run
-            if top_prepared_run.priority_key() >= candidate.priority_key():
-                return None
+        prepared_max = max((r.priority_key() for r in self.pool.runs.values()
+                            if r.status == RunStatus.prepare_done),
+                           default=None)
+        def takes_precedence(r):
+            return prepared_max is None or r.priority_key() > prepared_max

-        if candidate.due_date is None or candidate.due_date < now:
-            return candidate
-        else:
-            return candidate.due_date - now
+        candidate = max(filter(is_runnable, pending_runs),
+                        key=lambda r: r.priority_key(),
+                        default=None)
+        if candidate is not None and takes_precedence(candidate):
+            return candidate
+
+        return min((r.due_date - now for r in pending_runs
+                    if (not is_runnable(r) and takes_precedence(r))),
+                   default=None)

     async def _do(self):
         while True:

View File

@@ -196,7 +196,7 @@ def setup_diagnostics(experiment_file, repository_path):
             message = message.replace(repository_path, "<repository>")

         if diagnostic.level == "warning":
-            logging.warn(message)
+            logging.warning(message)
         else:
             logging.error(message)

View File

@@ -59,7 +59,7 @@ class TransferTest(ExperimentCase):
         exp = self.create(_Transfer)
         device_to_host_rate = exp.device_to_host()
         print(device_to_host_rate, "B/s")
-        self.assertGreater(device_to_host_rate, 2.3e6)
+        self.assertGreater(device_to_host_rate, 2.2e6)

     def test_device_to_host_array(self):
         exp = self.create(_Transfer)

View File

@@ -0,0 +1,13 @@
+# RUN: %python -m artiq.compiler.testbench.signature +diag %s >%t
+# RUN: OutputCheck %s --file-to-check=%t
+
+class Foo:
+    def __init__(self):
+        pass
+
+a = Foo()
+b = Foo()
+
+# CHECK-L: ${LINE:+1}: error: Custom object comparison is not supported
+a > b

View File

@@ -0,0 +1,13 @@
+# RUN: %python -m artiq.compiler.testbench.signature +diag %s >%t
+# RUN: OutputCheck %s --file-to-check=%t
+
+class Foo:
+    def __init__(self):
+        pass
+
+a = Foo()
+b = Foo()
+
+# CHECK-L: ${LINE:+1}: error: Custom object inclusion test is not supported
+a in b

View File

@@ -0,0 +1,11 @@
+# RUN: %python -m artiq.compiler.testbench.llvmgen %s
+
+def make_none():
+    return None
+
+def take_arg(arg):
+    pass
+
+def run():
+    retval = make_none()
+    take_arg(retval)

View File

@@ -7,7 +7,6 @@ import unittest
 class TestFrontends(unittest.TestCase):
     def test_help(self):
         """Test --help as a simple smoke test against catastrophic breakage."""
-        # Skip tests for GUI programs on headless CI environments.
         commands = {
             "aqctl": [
                 "corelog"
@@ -15,7 +14,7 @@ class TestFrontends(unittest.TestCase):
             "artiq": [
                 "client", "compile", "coreanalyzer", "coremgmt",
                 "netboot", "flash", "master", "mkfs", "route",
-                "rtiomon", "run", "session"
+                "rtiomon", "run", "session", "browser", "dashboard"
             ]
         }

View File

@@ -28,7 +28,18 @@ class BackgroundExperiment(EnvExperiment):
                 sleep(0.2)
         except TerminationRequested:
             self.set_dataset("termination_ok", True,
-                             broadcast=True, save=False)
+                             broadcast=True, archive=False)


+class CheckPauseBackgroundExperiment(EnvExperiment):
+    def build(self):
+        self.setattr_device("scheduler")
+
+    def run(self):
+        while True:
+            while not self.scheduler.check_pause():
+                sleep(0.2)
+            self.scheduler.pause()
+
+
 def _get_expid(name):
@@ -117,6 +128,161 @@ class SchedulerCase(unittest.TestCase):
         scheduler.notifier.publish = None
         loop.run_until_complete(scheduler.stop())

+    def test_pending_priority(self):
+        """Check due dates take precedence over priorities when waiting to
+        prepare."""
+        loop = self.loop
+        handlers = {}
+        scheduler = Scheduler(_RIDCounter(0), handlers, None)
+        handlers["scheduler_check_pause"] = scheduler.check_pause
+
+        expid_empty = _get_expid("EmptyExperiment")
+
+        expid_bg = _get_expid("CheckPauseBackgroundExperiment")
+        # Suppress the SystemExit backtrace when worker process is killed.
+        expid_bg["log_level"] = logging.CRITICAL
+
+        high_priority = 3
+        middle_priority = 2
+        low_priority = 1
+        late = time() + 100000
+        early = time() + 1
+
+        expect = [
+            {
+                "path": [],
+                "action": "setitem",
+                "value": {
+                    "repo_msg": None,
+                    "priority": low_priority,
+                    "pipeline": "main",
+                    "due_date": None,
+                    "status": "pending",
+                    "expid": expid_bg,
+                    "flush": False
+                },
+                "key": 0
+            },
+            {
+                "path": [],
+                "action": "setitem",
+                "value": {
+                    "repo_msg": None,
+                    "priority": high_priority,
+                    "pipeline": "main",
+                    "due_date": late,
+                    "status": "pending",
+                    "expid": expid_empty,
+                    "flush": False
+                },
+                "key": 1
+            },
+            {
+                "path": [],
+                "action": "setitem",
+                "value": {
+                    "repo_msg": None,
+                    "priority": middle_priority,
+                    "pipeline": "main",
+                    "due_date": early,
+                    "status": "pending",
+                    "expid": expid_empty,
+                    "flush": False
+                },
+                "key": 2
+            },
+            {
+                "path": [0],
+                "action": "setitem",
+                "value": "preparing",
+                "key": "status"
+            },
+            {
+                "path": [0],
+                "action": "setitem",
+                "value": "prepare_done",
+                "key": "status"
+            },
+            {
+                "path": [0],
+                "action": "setitem",
+                "value": "running",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "preparing",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "prepare_done",
+                "key": "status"
+            },
+            {
+                "path": [0],
+                "action": "setitem",
+                "value": "paused",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "running",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "run_done",
+                "key": "status"
+            },
+            {
+                "path": [0],
+                "action": "setitem",
+                "value": "running",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "analyzing",
+                "key": "status"
+            },
+            {
+                "path": [2],
+                "action": "setitem",
+                "value": "deleting",
+                "key": "status"
+            },
+            {
+                "path": [],
+                "action": "delitem",
+                "key": 2
+            },
+        ]
+        done = asyncio.Event()
+        expect_idx = 0
+        def notify(mod):
+            nonlocal expect_idx
+            self.assertEqual(mod, expect[expect_idx])
+            expect_idx += 1
+            if expect_idx >= len(expect):
+                done.set()
+        scheduler.notifier.publish = notify
+
+        scheduler.start()
+        scheduler.submit("main", expid_bg, low_priority)
+        scheduler.submit("main", expid_empty, high_priority, late)
+        scheduler.submit("main", expid_empty, middle_priority, early)
+
+        loop.run_until_complete(done.wait())
+        scheduler.notifier.publish = None
+        loop.run_until_complete(scheduler.stop())
+
     def test_pause(self):
         loop = self.loop

View File

@@ -20,22 +20,27 @@ from unittest.mock import Mock
 import sphinx_rtd_theme

-# Hack-patch Sphinx so that ARTIQ-Python types are correctly printed
+# Ensure that ARTIQ-Python types are correctly printed
 # See: https://github.com/m-labs/artiq/issues/741
-from sphinx.ext import autodoc
-from sphinx.util import inspect
-autodoc.repr = inspect.repr = str
+import builtins
+builtins.__in_sphinx__ = True

 # we cannot use autodoc_mock_imports (does not help with argparse)
 mock_modules = ["artiq.gui.waitingspinnerwidget",
                 "artiq.gui.flowlayout",
+                "artiq.gui.state",
+                "artiq.gui.log",
+                "artiq.gui.models",
                 "artiq.compiler.module",
                 "artiq.compiler.embedding",
                 "quamash", "pyqtgraph", "matplotlib",
                 "numpy", "dateutil", "dateutil.parser", "prettytable", "PyQt5",
                 "h5py", "serial", "scipy", "scipy.interpolate",
-                "llvmlite_artiq", "Levenshtein", "pythonparser"]
+                "llvmlite_artiq", "Levenshtein", "pythonparser",
+                "sipyco", "sipyco.pc_rpc", "sipyco.sync_struct",
+                "sipyco.asyncio_tools", "sipyco.logging_tools",
+                "sipyco.broadcast", "sipyco.packed_exceptions"]

 for module in mock_modules:
     sys.modules[module] = Mock()
@@ -89,7 +94,7 @@ master_doc = 'index'
 # General information about the project.
 project = 'ARTIQ'
-copyright = '2014-2019, M-Labs Limited'
+copyright = '2014-2021, M-Labs Limited'

 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the

View File

@@ -112,6 +112,13 @@ RF generation drivers
     :members:

+
+:mod:`artiq.coredevice.basemod_att` module
+++++++++++++++++++++++++++++++++++++++++++
+
+.. automodule:: artiq.coredevice.basemod_att
+    :members:
+
 DAC/ADC drivers
 ---------------

View File

@@ -7,19 +7,17 @@ Developing ARTIQ
 The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
 ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``.nix`` files from the ``nix-scripts`` repository and run the commands manually) - but Nix makes the process a lot easier.

-* Install the `Nix package manager <http://nixos.org/nix/>`_ and Git (e.g. ``$ nix-shell -p git``).
-* Set up the M-Labs Hydra channels (:ref:`same procedure as the user section <installing-nix-users>`) to allow binaries to be downloaded. Otherwise, tools such as LLVM and the Rust compiler will be compiled on your machine, which uses a lot of CPU time, memory, and disk space. Simply setting up the channels is sufficient, Nix will automatically detect when a binary can be downloaded instead of being compiled locally.
-* Clone the repositories https://github.com/m-labs/artiq and https://git.m-labs.hk/m-labs/nix-scripts.
-* Run ``$ nix-shell -I artiqSrc=path_to_artiq_sources shell-dev.nix`` to obtain an environment containing all the required development tools (e.g. Migen, MiSoC, Clang, Rust, OpenOCD...) in addition to the ARTIQ user environment. ``artiqSrc`` should point to the root of the cloned ``artiq`` repository, and ``shell-dev.nix`` can be found in the ``artiq-fast`` folder of the ``nix-scripts`` repository.
-* This enters a FHS chroot environment that simplifies the installation and patching of Xilinx Vivado.
-* Download Vivado from Xilinx and install it (by running the official installer in the FHS chroot environment). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to the `M-Labs Hydra logs <https://nixbld.m-labs.hk/>`_ to determine which version is currently used when building the binary packages.
+* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to the `M-Labs Hydra logs <https://nixbld.m-labs.hk/>`_ to determine which version is currently used when building the binary packages.
 * During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
-* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli``.
-* If you did not install Vivado in ``/opt``, add a command line option such as ``--gateware-toolchain-path ~/Xilinx/Vivado``.
-* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild artiq_kasli -V <your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the shell started by ``shell-dev.nix``.
+* Install the `Nix package manager <http://nixos.org/nix/>`_ and Git (e.g. ``$ nix-shell -p git``).
+* Set up the M-Labs binary substituter (:ref:`same procedure as the user section <installing-nix-users>`) to allow binaries to be downloaded. Otherwise, tools such as LLVM and the Rust compiler will be compiled on your machine, which uses a lot of CPU time, memory, and disk space.
+* Clone the repositories https://github.com/m-labs/artiq and https://git.m-labs.hk/m-labs/nix-scripts.
+* If you did not install Vivado in its default location ``/opt``, edit the Nix files accordingly.
+* Run ``$ nix-shell -I artiqSrc=path_to_artiq_sources shell-dev.nix`` to obtain an environment containing all the required development tools (e.g. Migen, MiSoC, Clang, Rust, OpenOCD...) in addition to the ARTIQ user environment. ``artiqSrc`` should point to the root of the cloned ``artiq`` repository, and ``shell-dev.nix`` can be found in the ``artiq-fast`` folder of the ``nix-scripts`` repository.
+* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli``. If you are using a JSON system description file, use ``$ python -m artiq.gateware.targets.kasli_generic file.json``.
+* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli -V <your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the shell started by ``shell-dev.nix``.
 * Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed by ``shell-dev.nix``). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
-* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (``$ sudo adduser $USER dialout`` assuming standard setup).
+* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).
+* If you are modifying a dependency of ARTIQ, in addition to updating the relevant part of ``nix-scripts``, rebuild and upload the corresponding Conda packages manually, and update their version numbers in ``conda-artiq.nix``. For Conda, only the main ARTIQ package and the board packages are handled automatically on Hydra.

 .. warning::
-    Nix will make a read-only copy of the ARTIQ source to use in the shell environment. Therefore, any modifications that you make to the source after the shell is started will not be taken into account. A solution applicable to ARTIQ (and several other Python packages such as Migen and MiSoC) is to set the ``PYTHONPATH`` environment variable in the shell to the root of the ARTIQ source. If you want this to be done by default, edit ``profile`` in ``artiq-dev.nix``.
+    Nix will make a read-only copy of the ARTIQ source to use in the shell environment. Therefore, any modifications that you make to the source after the shell is started will not be taken into account. A solution applicable to ARTIQ (and several other Python packages such as Migen and MiSoC) is to prepend the ARTIQ source directory to the ``PYTHONPATH`` environment variable after entering the shell. If you want this to be done by default, edit ``profile`` in ``shell-dev.nix``.

View File

@@ -157,7 +157,7 @@ Plotting in the ARTIQ dashboard is achieved by programs called "applets". Applet
 Applets are configured through their command line to select parameters such as the names of the datasets to plot. The list of command-line options can be retrieved using the ``-h`` option; for example you can run ``python3 -m artiq.applets.plot_xy -h`` in a terminal.

-In our case, create a new applet from the XY template by right-clicking on the applet list, and edit the applet command line so that it retrieves the ``parabola`` dataset. Run the experiment again, and observe how the points are added one by one to the plot.
+In our case, create a new applet from the XY template by right-clicking on the applet list, and edit the applet command line so that it retrieves the ``parabola`` dataset (the command line should now be ``${artiq_applet}plot_xy parabola``). Run the experiment again, and observe how the points are added one by one to the plot.

 After the experiment has finished executing, the results are written to a HDF5 file that resides in ``~/artiq-master/results/<date>/<hour>``. Open that file with HDFView or h5dump, and observe the data we just generated as well as the Git commit ID of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645`` that represents the data at a particular time in the Git repository). The list of Git commit IDs can be found using the ``git log`` command in ``~/artiq-work``.

View File

@@ -21,12 +21,12 @@ First, install the Nix package manager. Some distributions provide a package for
 Once Nix is installed, add the M-Labs package channel with: ::

-    $ nix-channel --add https://nixbld.m-labs.hk/channel/custom/artiq/full/artiq-full
+    $ nix-channel --add https://nixbld.m-labs.hk/channel/custom/artiq/full-legacy/artiq-full

-Those channels track `nixpkgs 19.09 <https://github.com/NixOS/nixpkgs/tree/release-19.09>`_. You can check the latest status through the `Hydra interface <https://nixbld.m-labs.hk>`_. As the Nix package manager default installation uses the development version of nixpkgs, we need to tell it to switch to the release: ::
+Those channels track `nixpkgs 21.05 <https://github.com/NixOS/nixpkgs/tree/release-21.05>`_. You can check the latest status through the `Hydra interface <https://nixbld.m-labs.hk>`_. As the Nix package manager default installation uses the development version of nixpkgs, we need to tell it to switch to the release: ::

     $ nix-channel --remove nixpkgs
-    $ nix-channel --add https://nixos.org/channels/nixos-19.09 nixpkgs
+    $ nix-channel --add https://nixos.org/channels/nixos-21.05 nixpkgs

 Finally, make all the channel changes effective: ::
@@ -48,8 +48,8 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
 ::

     let
-      # Contains the NixOS package collection. ARTIQ depends on some of them, and
-      # you may also want certain packages from there.
+      # pkgs contains the NixOS package collection. ARTIQ depends on some of them, and
+      # you may want some additional packages from there.
       pkgs = import <nixpkgs> {};
       artiq-full = import <artiq-full> { inherit pkgs; };
     in
@@ -57,25 +57,38 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
       buildInputs = [
         (pkgs.python3.withPackages(ps: [
           # List desired Python packages here.
+          # You probably want these two.
          artiq-full.artiq
          artiq-full.artiq-comtools
-          # The board packages are also "Python" packages. You only need a board
-          # package if you intend to reflash that board (those packages contain
-          # only board firmware).
-          artiq-full.artiq-board-kc705-nist_clock
-          artiq-full.artiq-board-kasli-wipm
-          # from the NixOS package collection:
-          ps.paramiko # needed for flashing boards remotely (artiq_flash -H)
-          ps.pandas
-          ps.numpy
-          ps.scipy
-          ps.numba
-          (ps.matplotlib.override { enableQt = true; })
-          ps.bokeh
+          # You need a board support package if and only if you intend to flash
+          # a board (those packages contain only board firmware).
+          # The lines below are only examples, you need to select appropriate
+          # packages for your boards.
+          #artiq-full.artiq-board-kc705-nist_clock
+          #artiq-full.artiq-board-kasli-wipm
+          #ps.paramiko # needed if and only if flashing boards remotely (artiq_flash -H)
+          # The NixOS package collection contains many other packages that you may find
+          # interesting for your research. Here are some examples:
+          #ps.pandas
+          #ps.numpy
+          #ps.scipy
+          #ps.numba
+          #(ps.matplotlib.override { enableQt = true; })
+          #ps.bokeh
+          #ps.cirq
+          #ps.qiskit
         ]))
         # List desired non-Python packages here
-        artiq-full.openocd # needed for flashing boards, also provides proxy bitstreams
-        pkgs.spyder
+        #artiq-full.openocd # needed if and only if flashing boards
+        # Other potentially interesting packages from the NixOS package collection:
+        #pkgs.gtkwave
+        #pkgs.spyder
+        #pkgs.R
+        #pkgs.julia
       ];
     }
@@ -101,17 +114,14 @@ After installing either Anaconda or Miniconda, open a new terminal (also known a
 Executing just ``conda`` should print the help of the ``conda`` command. If your shell does not find the ``conda`` command, make sure that the Conda binaries are in your ``$PATH``. If ``$ echo $PATH`` does not show the Conda directories, add them: execute ``$ export PATH=$HOME/miniconda3/bin:$PATH`` if you installed Conda into ``~/miniconda3``.

-Download the `ARTIQ installer script <https://raw.githubusercontent.com/m-labs/artiq/master/install-with-conda.py>`_ and edit its beginning to define the Conda environment name (you can leave the default environment name if you are just getting started) and select the desired ARTIQ packages. Non-ARTIQ packages should be installed manually later.
+Set up the Conda channel and install ARTIQ into a new Conda environment: ::
+
+    $ conda config --prepend channels https://conda.m-labs.hk/artiq-legacy
+    $ conda config --append channels conda-forge
+    $ conda create -n artiq artiq

 .. note::
-    If you do not need to flash boards, the ``artiq`` package is sufficient. The packages named ``artiq-board-*`` contain only firmware for the FPGA board and are never necessary for using an ARTIQ system without reflashing it.
-
-    Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with this script. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages, and list them at the beginning of the script.
-
-Make sure the base Conda environment is activated and then run the installer script: ::
-
-    $ conda activate base
-    $ python install-with-conda.py
+    If you do not need to flash boards, the ``artiq`` package is sufficient. The packages named ``artiq-board-*`` contain only firmware for the FPGA board, and you should not install them unless you are reflashing an FPGA board. Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with Conda. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages.

 After the installation, activate the newly created environment by name. ::
@@ -137,7 +147,7 @@ Upgrading ARTIQ (with Conda)
 When upgrading ARTIQ or when testing different versions it is recommended that new Conda environments are created instead of upgrading the packages in existing environments.
 Keep previous environments around until you are certain that they are not needed anymore and a new environment is known to work correctly.

-To install the latest version, just select a different environment name and run the installer script again.
+To install the latest version, just select a different environment name and run the installation command again.

 Switching between Conda environments using commands such as ``$ conda deactivate artiq-6`` and ``$ conda activate artiq-5`` is the recommended way to roll back to previous versions of ARTIQ.
@@ -168,9 +178,9 @@ OpenOCD can be used to write the binary images into the core device FPGA board's
 With Nix, add ``artiq-full.openocd`` to the shell packages. Be careful not to add ``pkgs.openocd`` instead - this would install OpenOCD from the NixOS package collection, which does not support ARTIQ boards.

-With Conda, the ``artiq`` package installs ``openocd`` automatically but it can also be installed explicitly on both Linux and Windows::
+With Conda, install ``openocd`` as follows::

-    $ conda install openocd
+    $ conda install -c m-labs openocd

 .. _configuring-openocd:
@@ -215,6 +225,8 @@ Then, you can write the flash:
     $ artiq_flash -V [your system variant]

+  The JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you simply need to connect your computer to the micro-USB connector on the Kasli front panel.
+
 * For the KC705 board::

     $ artiq_flash -t kc705 -V [nist_clock/nist_qc2]
@@ -224,21 +236,23 @@
 Setting up the core device IP networking
 ----------------------------------------

-For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to a gigabit Ethernet port in your network. It is necessary that the port be gigabit - 10/100 ports cannot be used. If you need to interface Kasli with 10/100 network equipment, connect them through a gigabit switch.
+For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have a SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion.

 You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network.

-If you purchased a device from M-Labs, it already comes with a valid MAC address and an IP address, usually ``192.168.1.75``. Once you can reach this IP, it can be changed with: ::
+If you purchased a Kasli device from M-Labs, it usually comes with the IP address ``192.168.1.75``. Once you can reach this IP, it can be changed with: ::

     $ artiq_coremgmt -D 192.168.1.75 config write -s ip [new IP]

 and then reboot the device (with ``artiq_flash start`` or a power cycle).

-In other cases, install OpenOCD as before, and flash the IP and MAC addresses directly: ::
+In other cases, install OpenOCD as before, and flash the IP (and, if necessary, MAC) addresses directly: ::

     $ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx
     $ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start

+For Kasli devices, flashing a MAC address is not necessary as they can obtain it from their EEPROM.
+
 Check that you can ping the device. If ping fails, check that the Ethernet link LED is ON - on Kasli, it is the LED next to the SFP0 connector. As a next step, look at the messages emitted on the UART during boot. Use a program such as flterm or PuTTY to connect to the device's serial port at 115200bps 8-N-1 and reboot the device. On Kasli, the serial port is on FTDI channel 2 with v1.1 hardware (with channel 0 being JTAG) and on FTDI channel 1 with v1.0 hardware.

 If you want to use IPv6, the device also has a link-local address that corresponds to its EUI-64, and an additional arbitrary IPv6 address can be defined by using the ``ip6`` configuration key. All IPv4 and IPv6 addresses can be used at the same time.
@@ -270,9 +284,9 @@ For DRTIO systems, the startup kernel should wait until the desired destinations
 If you are using DRTIO and the default routing table (for a star topology) is not suitable to your needs, prepare and load a different routing table. See :ref:`Using DRTIO <using-drtio>`.

-* Select the RTIO clock source (KC705 only)
+* Select the RTIO clock source (KC705 and Kasli)

-  The KC705 may use either an external clock signal or its internal clock. The clock is selected at power-up. Use one of these commands: ::
+  The KC705 may use either an external clock signal or its internal clock. The clock is selected at power-up. For Kasli, setting the RTIO clock source to "external" would bypass the Si5324 synthesiser, requiring that an input clock be present. To select the source, use one of these commands: ::

     $ artiq_coremgmt config write -s rtio_clock i # internal clock (default)
     $ artiq_coremgmt config write -s rtio_clock e # external clock

View File

@@ -27,4 +27,4 @@ Website: https://m-labs.hk/artiq
 `Cite ARTIQ <http://dx.doi.org/10.5281/zenodo.51303>`_ as ``Bourdeauducq, Sébastien et al. (2016). ARTIQ 1.0. Zenodo. 10.5281/zenodo.51303``.

-Copyright (C) 2014-2019 M-Labs Limited. Licensed under GNU LGPL version 3+.
+Copyright (C) 2014-2021 M-Labs Limited. Licensed under GNU LGPL version 3+.

View File

@@ -22,7 +22,7 @@ The following network device support packages are available for ARTIQ. This list
 +---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+
 | Anel HUT2 power distribution    | ``hut2``                          | ``hut2``                         | `HTML <https://nixbld.m-labs.hk/job/artiq/full/hut2-manual-html/latest/download/1>`_                 |
 +---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+
-| TOPTICA lasers                  | ``toptica-lasersdk-artiq``        | See anaconda.org                 | Not available                                                                                        |
+| TOPTICA lasers                  | ``toptica-lasersdk-artiq``        | ``toptica-lasersdk-artiq``       | Not available                                                                                        |
 +---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+
 | HighFinesse wavemeters          | ``highfinesse-net``               | ``highfinesse-net``              | `HTML <https://nixbld.m-labs.hk/job/artiq/full/highfinesse-net-manual-html/latest/download/1>`_      |
 +---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+

View File

@@ -81,11 +81,11 @@ You can write several records at once::
 To remove the previously written key ``my_key``::

-    $ artiq_coremgmt config delete my_key
+    $ artiq_coremgmt config remove my_key

 You can remove several keys at once::

-    $ artiq_coremgmt config delete key1 key2
+    $ artiq_coremgmt config remove key1 key2

 To erase the entire flash storage area::

View File

@@ -1,46 +0,0 @@
-# This script installs ARTIQ using the conda packages built by the new Nix/Hydra system.
-# It needs to be run in the root (base) conda environment with "python install-with-conda.py"
-# It supports Linux and Windows, but Linux users should consider using the higher-quality
-# Nix package manager instead of Conda.
-
-# EDIT THIS:
-# The name of the conda environment to create
-CONDA_ENV_NAME = "artiq"
-# The conda packages to download and install.
-CONDA_PACKAGES = [
-    "artiq",
-    "artiq-comtools",
-    # Only install board packages if you plan to reflash the board.
-    # The two lines below are just examples and probably not what you want.
-    # Select the packages that correspond to your board, or remove them
-    # if you do not intend to reflash the board.
-    "artiq-board-kc705-nist_clock",
-    "artiq-board-kasli-wipm"
-]
-# Set to False if you have already set up conda channels
-ADD_CHANNELS = True
-
-# PROXY: If you are behind a web proxy, configure it in your .condarc (as per
-# the conda manual).
-
-# You should not need to modify the rest of the script below.
-
-import os
-
-def run(command):
-    r = os.system(command)
-    if r != 0:
-        raise SystemExit("command '{}' returned non-zero exit status: {}".format(command, r))
-
-if ADD_CHANNELS:
-    run("conda config --prepend channels m-labs")
-    run("conda config --prepend channels https://conda.m-labs.hk/artiq-beta")
-    run("conda config --append channels conda-forge")
-
-# Creating the environment first with python 3.5 hits fewer bugs in conda's broken dependency solver.
-run("conda create -y -n {CONDA_ENV_NAME} python=3.5".format(CONDA_ENV_NAME=CONDA_ENV_NAME))
-
-for package in CONDA_PACKAGES:
-    # Do not activate the environment yet - otherwise "conda install" may not find the SSL module anymore on Windows.
-    # Installing into the environment from the outside works around this conda bug.
-    run("conda install -y -n {CONDA_ENV_NAME} -c https://conda.m-labs.hk/artiq-beta {package}"
-        .format(CONDA_ENV_NAME=CONDA_ENV_NAME, package=package))

View File

@@ -64,10 +64,6 @@ Topic :: System :: Hardware
 """.splitlines(),
     install_requires=requirements,
    extras_require={},
-    dependency_links=[
-        "git+https://github.com/m-labs/pyqtgraph.git@develop#egg=pyqtgraph",
-        "git+https://github.com/m-labs/llvmlite.git@artiq#egg=llvmlite_artiq"
-    ],
     packages=find_packages(),
     namespace_packages=[],
     include_package_data=True,