forked from M-Labs/artiq

Compare commits


123 Commits

Author SHA1 Message Date
architeuthidae 0ac9e77dc3 doc: Dashboard 'Load HDF5' button + nitpicks 2024-08-28 11:22:17 +08:00
architeuthidae 4235309ab7 doc: Additional keyboard shortcuts 2024-08-23 16:46:19 +08:00
architeuthidae 0311a37e59 doc: Minor fixes 2024-08-23 16:46:15 +08:00
abdul124 a7e1d2b1c1 coredevice: add UnwrapNoneError class 2024-08-23 16:26:12 +08:00
abdul124 808d9c8004 compiler: restore exception ids used 2024-08-23 16:26:12 +08:00
abdul124 5463d66dcd firmware/ksupport: restore exception id order 2024-08-23 16:26:12 +08:00
architeuthidae 24f28d0b4e doc: Add ZC706 to core device page 2024-08-22 13:52:57 +08:00
architeuthidae 0808f7bc72 doc: Management expansion, suggested edits 2024-08-20 16:29:54 +08:00
architeuthidae f9ee6d66e8 doc: Add Waveform/RTIO analyzer 2024-08-20 16:29:54 +08:00
architeuthidae c4bcdf5f4c doc: Add reference descriptions 2024-08-20 16:29:54 +08:00
architeuthidae 400fb0f317 doc: GUI applet groups 2024-08-20 16:29:54 +08:00
architeuthidae 0c92fe5aeb doc: using_data_interfaces associated changes 2024-08-20 16:29:54 +08:00
architeuthidae 04e55246e9 doc: Add 'Data and user interfaces' page 2024-08-20 16:29:54 +08:00
abdul124 3079a1915f firmware/ksupport: improve comments and syscall name 2024-08-20 15:23:44 +08:00
abdul124 5454e6f1a9 coredevice/test: add unittests for exceptions 2024-08-20 15:23:44 +08:00
abdul124 68cbb09a64 firmware/ksupport: add exception unittests 2024-08-20 15:23:44 +08:00
abdul124 9d6defcea1 coredevice/comm_kernel: map exceptions to correct names 2024-08-20 15:23:44 +08:00
abdul124 4b9a910c88 sync exception names and ids 2024-08-20 15:23:44 +08:00
architeuthidae ccc4598049 doc: More FAQs 2024-08-19 13:04:19 +08:00
architeuthidae c6216e4b4e doc: Refactor management system reference 2024-08-13 14:53:48 +08:00
architeuthidae d1f6e2f1c1 doc: Minor link fixes 2024-08-12 16:57:57 +08:00
Sebastien Bourdeauducq 85c03addbd flake: update dependencies 2024-08-10 00:04:19 +08:00
architeuthidae 08c1770b31 CONTRIBUTING: Doc section 2024-08-01 19:30:49 +08:00
architeuthidae fc52cc8e4b doc: Refactor FAQ slightly 2024-08-01 19:30:49 +08:00
architeuthidae dd940184b7 doc: Add helpdesk instructions to FAQ 2024-08-01 19:30:49 +08:00
architeuthidae ca6f5b5258 doc: Add extra resources to FAQ 2024-08-01 19:30:49 +08:00
architeuthidae 781dccef11 doc: Add note on nix profile deletion, garbage collect 2024-08-01 19:06:26 +08:00
architeuthidae ebab2c4d34 doc: reST formatting 2024-08-01 19:05:55 +08:00
architeuthidae 42b48598e5 flake/sphinx: fix errors in manual nix build 2024-08-01 19:00:50 +08:00
architeuthidae 4e800af410 doc: Subkernel exception fix 2024-07-31 17:17:50 +08:00
mwojcik 7f36c9e9c1 satman: pass exceptions from one subkernel to another 2024-07-31 17:17:50 +08:00
mwojcik 5f1e33198e subkernels: separate error messages 2024-07-31 17:17:50 +08:00
mwojcik ccc6ae524a compiler: add builtinInvoke for subkernel raising functions 2024-07-31 17:17:50 +08:00
mwojcik 505ddff15d subkernel: pass exceptions to kernel 2024-07-31 17:17:50 +08:00
architeuthidae 4e39a7db44 doc: Rewrite FAQ 2024-07-31 17:16:09 +08:00
architeuthidae 82f6e14b78 git(hub): update gitignore, pull request template 2024-07-31 13:42:59 +08:00
architeuthidae d934092ecb doc: Document core_log() in manual 2024-07-31 13:42:59 +08:00
Florian Agbuya 13250427e0 afws_client: fix unicode error in json handling
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-07-29 18:32:58 +08:00
architeuthidae 61986e4f2b doc: Add references to command line tools 2024-07-29 13:41:29 +08:00
architeuthidae a324d77e37 doc: Add artiq_session and artiq_ctlmgr to front-end tools 2024-07-29 13:41:29 +08:00
architeuthidae a0fde00258 doc: Add automodule to command line references 2024-07-29 13:41:29 +08:00
architeuthidae be8fb8cdbe doc: Rename main frontend tools page 2024-07-29 13:41:29 +08:00
architeuthidae 96bc4174cd doc: Building + developing rewrite (#2496)
Co-authored-by: architeuthidae <am@m-labs.hk>
2024-07-27 22:18:26 +08:00
architeuthidae cd12600c1c ad9910: Add bitsizes to docstrings (#2505)
Co-authored-by: architeuthidae <am@m-labs.hk>
2024-07-22 18:37:57 +08:00
architeuthidae d7e78aded3 doc: Add links for ddb_template and afws_client 2024-07-22 18:36:49 +08:00
architeuthidae 59bb478ca1 artiq_dashboard: Replace --port-notify 2024-07-22 16:58:13 +08:00
architeuthidae a40d45b4d8 doc: link fixes 2024-07-19 18:35:36 +08:00
architeuthidae 0cdb35f426 doc: Trailing spaces 2024-07-19 18:18:07 +08:00
architeuthidae d8ba204970 doc: Split installing page (#2495)
Co-authored-by: architeuthidae <am@m-labs.hk>
2024-07-19 17:53:26 +08:00
Sébastien Bourdeauducq c9e571d7d7 afws_client: cleanup 2024-07-19 11:22:47 +08:00
architeuthidae ab1aeb136f doc: Edit of manual reference pages (#2466)
Co-authored-by: architeuthis <am@m-labs.hk>
2024-07-18 17:54:31 +08:00
Florian Agbuya 9a5ca15b80 afws_client: extract get_argparser() for modular arg parsing 2024-07-18 13:00:15 +08:00
Sébastien Bourdeauducq da5de4b537 doc: rename release_notes -> releases 2024-07-17 12:15:19 +08:00
architeuthidae 1b44f7dddf doc: Reword heading 2024-07-17 12:13:40 +08:00
architeuthidae d74e9a2bfc doc: Refactor manual index, make sections 2024-07-17 12:13:40 +08:00
architeuthidae f635392edf doc: Add explanation of event spreading 2024-07-17 12:06:16 +08:00
architeuthidae c10e8935c1 doc: Reference link fixes 2024-07-17 11:55:15 +08:00
architeuthidae f20912c330 doc: Sphinx configuration nitpicky 2024-07-17 11:55:15 +08:00
architeuthidae 3a3ac1eb99 doc: Formatting and link fixes in docstrings 2024-07-17 11:42:00 +08:00
architeuthidae 94f7a23fe2 doc: Assorted typos and dead links 2024-07-17 11:42:00 +08:00
Florian Agbuya 554d7bda26 flake: use upstream nixpkgs openocd
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-07-15 19:31:55 +08:00
architeuthis 266a2ee9ea doc: Rewrite and overhaul of 'Compiler' page 2024-07-15 19:28:50 +08:00
mwojcik fb6c1364a6 ad9912: use pll doubler for refclk <11mhz 2024-07-15 19:28:46 +08:00
Simon Renblad 86a8d2d87a moninj: add underscore to dac channel labels 2024-07-15 11:08:00 +08:00
architeuthidae 099a0869d9 doc: Put back telnet exit hint 2024-07-15 10:55:48 +08:00
Egor Savkin e6b88c9b9d Update llvmlite to 0.43 and llvm to 15, as in MSYS2
Signed-off-by: Egor Savkin <es@m-labs.hk>
2024-07-12 13:08:06 +02:00
architeuthidae aa30640658 doc: Environment, suggested changes 2024-07-12 10:53:51 +02:00
architeuthidae 5b9db674d2 doc: Mention artiq_ddb_template in Environment manual page 2024-07-12 10:53:51 +02:00
architeuthis 0265151f1a doc: Improve description of environment in tutorials 2024-07-12 10:53:51 +02:00
architeuthis a38fabe465 doc: Update + expansion of Environment manual page 2024-07-12 10:53:51 +02:00
architeuthidae 4c8da27196 doc: NDSP page edit, fixes 2024-07-12 09:10:57 +02:00
architeuthis 23444d0c03 doc: Mention ping() in NDSP tutorial 2024-07-12 09:10:57 +02:00
architeuthis 114084b12e doc: Edit of 'Developing an NDSP' page 2024-07-12 09:10:57 +02:00
Harry Ying a6b76c03f2 test_scheduler: remove the exact ordering assertion
Currently the exact ordering is compared in the `pending_priorities`
test, which is not necessary and is non-deterministic. This patch only
asserts that the middle-priority experiment is run before the high-priority
experiment scheduled in the future.

Signed-off-by: Kanyang Ying <lexugeyky@outlook.com>
2024-07-11 09:38:22 +02:00
Simon Renblad e8da7581bc browser: fix ctrl scroll type error 2024-07-10 10:42:04 +02:00
mwojcik 4bc23192f1 tests: fix sed lane distributor test, force enable spread 2024-07-10 14:46:12 +08:00
mwojcik bf108a653d repeater: handle async packets on forwarding 2024-07-09 17:05:55 +02:00
mwojcik 60b1edaf6b allow toggling SED spread with flash config key 2024-07-09 16:58:55 +02:00
Sebastien Bourdeauducq 05d02f3f00 flake: update dependencies 2024-07-09 16:58:46 +02:00
architeuthidae d244ac1baa doc: Change references to rtio_clock/now to _mu 2024-07-08 23:11:19 +02:00
architeuthis c879f9737b doc: Add warning on sequence error nondeterminism, for now 2024-07-08 23:11:19 +02:00
architeuthis 2b7d1742fb doc: RTIO manual page edit, suggested changes 2024-07-08 23:11:19 +02:00
architeuthis 07197433a9 docs: RTIO manual page edit 2024-07-08 23:11:19 +02:00
architeuthidae 6866bf298e doc: Fix sphinx 'duplicated reference' warnings 2024-07-08 22:42:47 +02:00
abdul124 d46e949f9f firmware/ksupport: add rint api 2024-07-05 08:53:02 +02:00
mwojcik 37db6cb6ae moninj: restore urukul TTL control 2024-07-04 13:44:22 +02:00
mwojcik 7bbba2f67f repeater: clear buffer after ping 2024-07-04 13:38:40 +02:00
architeuthidae ac86265c16 math_fns: Delete duplicated definition 2024-07-04 13:31:52 +02:00
Charles Baynham 240b238cac worker_impl - try/catch for exceptions in notify_run_end to avoid data loss 2024-07-02 13:55:53 +02:00
Simon Renblad 6c28159541 moninj: set parent to main window on widget delete 2024-07-02 08:23:16 +02:00
Simon Renblad de1da7efb3 moninj: fix visual bug 2024-07-02 08:23:16 +02:00
architeuthis f20b6051d3 doc: Fix heading levels in Releases page 2024-06-27 15:53:17 +08:00
architeuthis f0384ca885 doc: Add description of release model to manual 2024-06-27 14:43:53 +08:00
Sébastien Bourdeauducq 8e51d6520c doc: fix nix URLs 2024-06-27 13:13:26 +08:00
Egor Savkin 3c4c2dcf65 Update dead links, partially remove unavailable resources
Signed-off-by: Egor Savkin <es@m-labs.hk>
2024-06-27 13:08:48 +08:00
architeuthis c449c6f0de docs: Add details of shallow 'with parallel' to manual 2024-06-25 17:48:10 +08:00
Egor Savkin a1b5aed9e5 doc: add reference to artiq-zynq for developers
Signed-off-by: Egor Savkin <es@m-labs.hk>
2024-06-25 17:36:48 +08:00
architeuthis 5a39de106d doc: Management system overhaul, further corrections 2024-06-25 17:34:12 +08:00
architeuthis 3204c601ff doc: Clearer+more accurate overview of mgmt system 2024-06-25 17:34:12 +08:00
architeuthis 5e30d81fbb doc: Management system update, corrections 2024-06-25 17:34:12 +08:00
architeuthis 316e7c21e7 doc: Split running/management tool reference into separate page 2024-06-25 17:34:12 +08:00
architeuthis a2f6d2ed55 doc: 'Management system' manual page copy+clarity edit 2024-06-25 17:34:12 +08:00
architeuthis eb469e28cc doc: Elaboration on interactive args & controllers 2024-06-25 17:34:12 +08:00
architeuthis 4ec97ae1f6 doc: 'Getting started with management system' manual page overhaul 2024-06-25 17:34:12 +08:00
Sébastien Bourdeauducq 6935c8cfe6 doc: minor correction 2024-06-20 17:39:12 +08:00
architeuthis a19afaea96 doc: Add DRTIO-over-EEM to manual + phrasing fixes 2024-06-20 17:39:12 +08:00
architeuthis 5e67001ba5 doc: Add "Using Drtio" and overhaul DRTIO page 2024-06-20 17:39:12 +08:00
Egor Savkin 36fd45b05d Fix missing jquery in docs - fixes broken search
Signed-off-by: Egor Savkin <es@m-labs.hk>
2024-06-20 12:52:08 +08:00
Sebastien Bourdeauducq a68846d960 flake: update dependencies 2024-06-19 12:45:05 +08:00
morgan 56059250ce kasli: raise error when enabling WRPLL with v1.x 2024-06-19 12:44:12 +08:00
morgan 49a5c1b13c kasli: fix v1.0 & v1.1 compilation error 2024-06-19 12:44:12 +08:00
Florian Agbuya 69a48464c3 plot_xy: fix missing x values handling
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-06-13 12:19:57 +08:00
architeuthis a53e0506cd doc: fix formatting in Installing page 2024-06-13 12:19:50 +08:00
architeuthis 5328f56246 doc: Refactor manual table of contents 2024-06-12 15:48:49 +08:00
architeuthidae f94ec844d7 doc: 'Getting started with core device' manual page edit (#2431) 2024-06-12 15:45:51 +08:00
architeuthidae 8b531e5060 doc: 'Core device' manual page overhaul (#2430) 2024-06-12 14:52:11 +08:00
Florian Agbuya 5eacbeec5d test_embedding: add int boundary test from 25168422a
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-06-07 14:11:49 +08:00
Florian Agbuya 535654938f compiler: fix int boundary checks
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-06-07 14:11:49 +08:00
Sébastien Bourdeauducq 9e28ee63a6 ARTIQ-8 release 2024-06-06 13:51:40 +08:00
Sébastien Bourdeauducq 53abbd569c doc: update repos URLs 2024-06-06 13:50:05 +08:00
architeuthis f84b67829c Deleted reference to board packages 2024-06-06 13:47:14 +08:00
architeuthis ed294a99a9 Remove outdated references to examples/master, fix labels 2024-06-06 13:47:14 +08:00
architeuthis 06a3557cda docs: 'Installing ARTIQ' manual page overhaul 2024-06-06 13:47:14 +08:00
107 changed files with 3922 additions and 2438 deletions

View File

@ -51,7 +51,7 @@ Closes #XXX
### Documentation Changes
- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`cd doc/manual/; make html`) to ensure no errors.
- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`nix build .#artiq-manual-html; nix build .#artiq-manual-pdf`) to ensure no errors.
### Git Logistics

.gitignore
View File

@ -11,6 +11,7 @@ __pycache__/
.ipynb_checkpoints
/doc/manual/_build
/build
/result
/dist
/*.egg-info
/.coverage
@ -23,7 +24,8 @@ __pycache__/
/artiq/test/results
/artiq/examples/*/results
/artiq/examples/*/last_rid.pyon
/artiq/examples/*/dataset_db.pyon
/artiq/examples/*/dataset_db.mdb
/artiq/examples/*/dataset_db.mdb-lock
# when testing ad-hoc experiments at the root:
/repository/

View File

@ -8,27 +8,27 @@ Reporting Issues/Bugs
Thanks for `reporting issues to ARTIQ
<https://github.com/m-labs/artiq/issues/new>`_! You can also discuss issues and
ask questions on IRC (the #m-labs channel on OFTC), the `Mattermost chat
<https://chat.m-labs.hk>`_, or on the `forum <https://forum.m-labs.hk>`_.
<https://chat.m-labs.hk>`_, or in the `forum <https://forum.m-labs.hk>`_.
The best bug reports are those which contain sufficient information. With
accurate and comprehensive context, an issue can be resolved quickly and
efficiently. Please consider adding the following data to your issue
report if possible:
* A clear and unique summary that fits into one line. Also check that
this issue has not yet been reported. If it has, add additional information there.
* Precise steps to reproduce (list of actions that leads to the issue)
* A clear and unique summary that fits into one line. Check that this
issue has not yet been reported; if it has, add additional information there.
* Precise steps to reproduce (a list of actions that leads to the issue)
* Expected behavior (what should happen)
* Actual behavior (what happens instead)
* Logging message, trace backs, screen shots where relevant
* Logging message, tracebacks, screenshots, where applicable
* Components involved (omit irrelevant parts):
* Operating System
* ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
* Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
* Operating system used
* ARTIQ version (run any command in the form of ``artiq_client --version``)
* Gateware and firmware loaded to the core device (in the output of
``artiq_coremgmt [-D ....] log``)
* Hardware involved
For in-depth information on bug reporting, see:
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
@ -38,10 +38,10 @@ https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines
Contributing Code
=================
ARTIQ welcomes contributions. Write bite-sized patches that can stand alone,
clean them up, write proper commit messages, add docstrings and unittests. Then
ARTIQ welcomes contributions. Write bite-size patches that can stand alone,
clean them up, write proper commit messages, add docstrings and unit tests;
``git rebase`` them onto the current master or merge the current master. Verify
that the testsuite passes. Then submit a pull request. Expect your contribution
that the test suite passes. Then submit a pull request. Expect your contribution
to be held up to coding standards (e.g. use ``flake8`` to check yourself).
Checklist for Code Contributions
@ -51,7 +51,7 @@ Checklist for Code Contributions
- Use correct spelling and grammar. Use your code editor to help you with
syntax, spelling, and style
- Style: PEP-8 (``flake8``)
- Add, check docstrings and comments
- Add or update docstrings and comments
- Split your contribution into logically separate changes (``git rebase
--interactive``). Merge (squash, fixup) commits that just fix previous commits
or amend them. Remove unintended changes. Clean up your commits.
@ -63,12 +63,37 @@ Checklist for Code Contributions
- Review each of your commits for the above items (``git show``)
- Update ``RELEASE_NOTES.md`` if there are noteworthy changes, especially if
there are changes to existing APIs
- Check, test, and update the documentation in `doc/`
- Check, test, and update the unittests
- Check, test, and update the documentation in ``doc/``
- Check, test, and update the unit tests
- Close and/or update issues
Contributing Documentation
==========================
ARTIQ welcomes documentation contributions. The ARTIQ manual is hosted online in HTML
form `here <https://m-labs.hk/artiq/manual/>`__ and in PDF form
`here <https://m-labs.hk/artiq/manual.pdf>`__. It is generated from source files
in ``doc/manual``, written in a variant of the
`reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_
markup language processed by `Sphinx <https://www.sphinx-doc.org/en/master/>`_, with
some of the additional reference material processed from inline documentation
in the ARTIQ source itself.
Write bite-size patches that can stand alone, clean them up, write proper commit
messages. Check that your edits render properly and compile without errors: ::
$ nix build .#artiq-manual-pdf
$ nix build .#artiq-manual-html
Elaborations, improvements, clarifications and corrections to any of the material
are happily accepted, but special attention is drawn to the manual
`FAQ <https://m-labs.hk/artiq/manual/faq.html>`_, where tips and solutions
are especially easy to add. See also the FAQ's own
`section on the subject <https://m-labs.hk/artiq/manual/faq.html#build-documentation>`_.
Copyright and Sign-Off
----------------------
======================
Authors retain copyright of their contributions to ARTIQ, but whenever possible
should use the GNU LGPL version 3 license for them to be merged.
@ -108,7 +133,7 @@ can certify the below:
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
then you just add a line saying
then add a line saying
Signed-off-by: Random J Developer <random@developer.example.org>

View File

@ -3,8 +3,8 @@
Release notes
=============
ARTIQ-8 (Unreleased)
--------------------
ARTIQ-8
-------
Highlights:

View File

@ -24,11 +24,11 @@ class XYPlot(pyqtgraph.PlotWidget):
y = value[self.args.y]
except KeyError:
return
x = value.get(self.args.x, (False, None))
x = value.get(self.args.x)
if x is None:
x = np.arange(len(y))
error = value.get(self.args.error, (False, None))
fit = value.get(self.args.fit, (False, None))
error = value.get(self.args.error)
fit = value.get(self.args.fit)
if not len(y) or len(y) != len(x):
self.mismatch['X values'] = True

View File

@ -24,21 +24,21 @@ class _AppletRequestInterface:
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
"""
Set a dataset.
See documentation of ``artiq.language.environment.set_dataset``.
See documentation of :meth:`~artiq.language.environment.HasEnvironment.set_dataset`.
"""
raise NotImplementedError
def mutate_dataset(self, key, index, value):
"""
Mutate a dataset.
See documentation of ``artiq.language.environment.mutate_dataset``.
See documentation of :meth:`~artiq.language.environment.HasEnvironment.mutate_dataset`.
"""
raise NotImplementedError
def append_to_dataset(self, key, value):
"""
Append to a dataset.
See documentation of ``artiq.language.environment.append_to_dataset``.
See documentation of :meth:`~artiq.language.environment.HasEnvironment.append_to_dataset`.
"""
raise NotImplementedError
@ -49,8 +49,9 @@ class _AppletRequestInterface:
:param expurl: Experiment URL identifying the experiment in the dashboard. Example: 'repo:ArgumentsDemo'.
:param key: Name of the argument in the experiment.
:param value: Object representing the new temporary value of the argument. For ``Scannable`` arguments, this parameter
should be a ``ScanObject``. The type of the ``ScanObject`` will be set as the selected type when this function is called.
:param value: Object representing the new temporary value of the argument. For :class:`~artiq.language.scan.Scannable` arguments,
this parameter should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject`
will be set as the selected type when this function is called.
"""
raise NotImplementedError

View File

@ -83,7 +83,7 @@ class ZoomIconView(QtWidgets.QListView):
w = self.iconSize().width()*self.zoom_step**(
ev.angleDelta().y()/120.)
if a <= w <= b:
self.setIconSize(QtCore.QSize(w, w*self.aspect))
self.setIconSize(QtCore.QSize(int(w), int(w*self.aspect)))
else:
QtWidgets.QListView.wheelEvent(self, ev)

View File

@ -88,18 +88,23 @@ class EmbeddingMap:
self.subkernel_message_map[msg_type.name] = msg_id
self.object_reverse_map[obj_id] = msg_id
self.preallocate_runtime_exception_names(["RuntimeError",
"RTIOUnderflow",
"RTIOOverflow",
"RTIODestinationUnreachable",
"DMAError",
"I2CError",
"CacheError",
"SPIError",
"0:ZeroDivisionError",
"0:IndexError",
"UnwrapNoneError",
"SubkernelError"])
# Keep this list of exceptions in sync with `EXCEPTION_ID_LOOKUP` in `artiq::firmware::ksupport::eh_artiq`
# The exceptions declared here must be defined in `artiq.coredevice.exceptions`
# Verify synchronization by running the test cases in `artiq.test.coredevice.test_exceptions`
self.preallocate_runtime_exception_names([
"0:RuntimeError",
"RTIOUnderflow",
"RTIOOverflow",
"RTIODestinationUnreachable",
"DMAError",
"I2CError",
"CacheError",
"SPIError",
"0:ZeroDivisionError",
"0:IndexError",
"UnwrapNoneError",
"SubkernelError",
])
def preallocate_runtime_exception_names(self, names):
for i, name in enumerate(names):
@ -747,9 +752,9 @@ class StitchingInferencer(Inferencer):
if elt.__class__ == float:
state |= IS_FLOAT
elif elt.__class__ == int:
if -2**31 < elt < 2**31-1:
if -2**31 <= elt <= 2**31-1:
state |= IS_INT32
elif -2**63 < elt < 2**63-1:
elif -2**63 <= elt <= 2**63-1:
state |= IS_INT64
else:
state = -1
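
The widened comparison matters exactly at the type boundaries; a plain-Python illustration (not part of the diff):

    elt = 2**31 - 1                              # largest int32 value
    old_is_int32 = -2**31 < elt < 2**31 - 1      # False: boundary value fell through to int64
    new_is_int32 = -2**31 <= elt <= 2**31 - 1    # True: now classified as int32
    print(old_is_int32, new_is_int32)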

View File

@ -1047,6 +1047,42 @@ class Builtin(Instruction):
def opcode(self):
return "builtin({})".format(self.op)
class BuiltinInvoke(Terminator):
"""
A builtin operation which can raise exceptions.
:ivar op: (string) operation name
"""
"""
:param op: (string) operation name
:param normal: (:class:`BasicBlock`) normal target
:param exn: (:class:`BasicBlock`) exceptional target
"""
def __init__(self, op, operands, typ, normal, exn, name=None):
assert isinstance(op, str)
for operand in operands: assert isinstance(operand, Value)
assert isinstance(normal, BasicBlock)
assert isinstance(exn, BasicBlock)
if name is None:
name = "BLTINV.{}".format(op)
super().__init__(operands + [normal, exn], typ, name)
self.op = op
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.op = self.op
return self_copy
def normal_target(self):
return self.operands[-2]
def exception_target(self):
return self.operands[-1]
def opcode(self):
return "builtinInvokable({})".format(self.op)
class Closure(Instruction):
"""
A closure creation operation.

View File

@ -61,22 +61,6 @@ unary_fp_runtime_calls = [
("cbrt", "cbrt"),
]
#: float -> float numpy.* math functions lowered to runtime calls.
unary_fp_runtime_calls = [
("tan", "tan"),
("arcsin", "asin"),
("arccos", "acos"),
("arctan", "atan"),
("sinh", "sinh"),
("cosh", "cosh"),
("tanh", "tanh"),
("arcsinh", "asinh"),
("arccosh", "acosh"),
("arctanh", "atanh"),
("expm1", "expm1"),
("cbrt", "cbrt"),
]
scipy_special_unary_runtime_calls = [
("erf", "erf"),
("erfc", "erfc"),

View File

@ -2544,10 +2544,22 @@ class ARTIQIRGenerator(algorithm.Visitor):
fn = types.get_method_function(fn)
sid = ir.Constant(fn.sid, builtins.TInt32())
if not builtins.is_none(fn.ret):
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
if self.unwind_target is None:
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
else:
after_invoke = self.add_block("invoke")
ret = self.append(ir.BuiltinInvoke("subkernel_retrieve_return", [sid, timeout],
fn.ret, after_invoke, self.unwind_target))
self.current_block = after_invoke
else:
ret = ir.Constant(None, builtins.TNone())
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
if self.unwind_target is None:
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
else:
after_invoke = self.add_block("invoke")
self.append(ir.BuiltinInvoke("subkernel_await_finish", [sid, timeout],
builtins.TNone(), after_invoke, self.unwind_target))
self.current_block = after_invoke
return ret
elif types.is_builtin(typ, "subkernel_preload"):
if len(node.args) == 1 and len(node.keywords) == 0:
@ -2594,7 +2606,14 @@ class ARTIQIRGenerator(algorithm.Visitor):
{"name": name, "recv": vartype, "send": msg.value_type},
node.loc)
self.engine.process(diag)
return self.append(ir.Builtin("subkernel_recv", [msg_id, timeout], vartype))
if self.unwind_target is None:
ret = self.append(ir.Builtin("subkernel_recv", [msg_id, timeout], vartype))
else:
after_invoke = self.add_block("invoke")
ret = self.append(ir.BuiltinInvoke("subkernel_recv", [msg_id, timeout],
vartype, after_invoke, self.unwind_target))
self.current_block = after_invoke
return ret
elif types.is_exn_constructor(typ):
return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args])
elif types.is_constructor(typ):
@ -2946,7 +2965,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
format_string += ")"
elif builtins.is_exception(value.type):
# message may not be an actual string...
# so we cannot really print it
# so we cannot really print itInvoke
name = self.append(ir.GetAttr(value, "#__name__"))
param1 = self.append(ir.GetAttr(value, "#__param0__"))
param2 = self.append(ir.GetAttr(value, "#__param1__"))

View File

@ -14,9 +14,9 @@ class IntMonomorphizer(algorithm.Visitor):
def visit_NumT(self, node):
if builtins.is_int(node.type):
if types.is_var(node.type["width"]):
if -2**31 < node.n < 2**31-1:
if -2**31 <= node.n <= 2**31-1:
width = 32
elif -2**63 < node.n < 2**63-1:
elif -2**63 <= node.n <= 2**63-1:
width = 64
else:
diag = diagnostic.Diagnostic("error",

View File

@ -1437,6 +1437,44 @@ class LLVMIRGenerator:
else:
assert False
def process_BuiltinInvoke(self, insn):
llnormalblock = self.map(insn.normal_target())
llunwindblock = self.map(insn.exception_target())
if insn.op == "subkernel_retrieve_return":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags([insn.type])
llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
llheadu, llunwindblock,
name="subkernel.await.message")
self.llbuilder.position_at_end(llheadu)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
elif insn.op == "subkernel_await_finish":
llsid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
return self.llbuilder.invoke(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
llnormalblock, llunwindblock,
name="subkernel.await.finish")
elif insn.op == "subkernel_recv":
llmsgid = self.map(insn.operands[0])
lltimeout = self.map(insn.operands[1])
lltagptr = self._build_subkernel_tags([insn.type])
llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
[llmsgid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
llheadu, llunwindblock,
name="subkernel.await.message")
self.llbuilder.position_at_end(llheadu)
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
name="subkernel.arg.stack")
return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
else:
assert False
def process_SubkernelAwaitArgs(self, insn):
llmin = self.map(insn.operands[0])
llmax = self.map(insn.operands[1])

View File

@ -1,8 +1,8 @@
""""RTIO driver for the Analog Devices AD53[67][0123] family of multi-channel
"""RTIO driver for the Analog Devices AD53[67][0123] family of multi-channel
Digital to Analog Converters.
Output event replacement is not supported and issuing commands at the same
time is an error.
time results in a collision error.
"""
# Designed from the data sheets and somewhat after the linux kernel
@ -131,10 +131,10 @@ class AD53xx:
optimized for speed; datasheet says t22: 25ns min SCLK edge to SDO
valid, and suggests the SPI speed for reads should be <=20 MHz)
:param vref: DAC reference voltage (default: 5.)
:param offset_dacs: Initial register value for the two offset DACs, device
dependent and must be set correctly for correct voltage to mu
conversions. Knowledge of his state is not transferred between
experiments. (default: 8192)
:param offset_dacs: Initial register value for the two offset DACs
(default: 8192). Device dependent and must be set correctly for
correct voltage-to-mu conversions. Knowledge of this state is
not transferred between experiments.
:param core_device: Core device name (default: "core")
"""
kernel_invariants = {"bus", "ldac", "clr", "chip_select", "div_write",
@ -202,7 +202,7 @@ class AD53xx:
:param op: Operation to perform, one of :const:`AD53XX_READ_X1A`,
:const:`AD53XX_READ_X1B`, :const:`AD53XX_READ_OFFSET`,
:const:`AD53XX_READ_GAIN` etc. (default: :const:`AD53XX_READ_X1A`).
:return: The 16 bit register value
:return: The 16-bit register value
"""
self.bus.write(ad53xx_cmd_read_ch(channel, op) << 8)
self.bus.set_config_mu(SPI_AD53XX_CONFIG | spi.SPI_INPUT, 24,
@ -309,7 +309,7 @@ class AD53xx:
This method does not advance the timeline; write events are scheduled
in the past. The DACs will synchronously start changing their output
levels `now`.
levels ``now``.
If no LDAC device was defined, the LDAC pulse is skipped.
@ -364,8 +364,8 @@ class AD53xx:
high) can be calibrated in this fashion.
:param channel: The number of the calibrated channel
:params vzs: Measured voltage with the DAC set to zero-scale (0x0000)
:params vfs: Measured voltage with the DAC set to full-scale (0xffff)
:param vzs: Measured voltage with the DAC set to zero-scale (0x0000)
:param vfs: Measured voltage with the DAC set to full-scale (0xffff)
"""
offset_err = voltage_to_mu(vzs, self.offset_dacs, self.vref)
gain_err = voltage_to_mu(vfs, self.offset_dacs, self.vref) - (

View File

@ -114,27 +114,27 @@ class AD9910:
(as configured through CFG_MASK_NU), 4-7 for individual channels.
:param cpld_device: Name of the Urukul CPLD this device is on.
:param sw_device: Name of the RF switch device. The RF switch is a
TTLOut channel available as the :attr:`sw` attribute of this instance.
TTLOut channel available as the ``sw`` attribute of this instance.
:param pll_n: DDS PLL multiplier. The DDS sample clock is
f_ref/clk_div*pll_n where f_ref is the reference frequency and
clk_div is the reference clock divider (both set in the parent
``f_ref / clk_div * pll_n`` where ``f_ref`` is the reference frequency and
``clk_div`` is the reference clock divider (both set in the parent
Urukul CPLD instance).
:param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
Note that when bypassing the PLL the red front panel LED may remain on.
:param pll_cp: DDS PLL charge pump setting.
:param pll_vco: DDS PLL VCO range selection.
:param sync_delay_seed: SYNC_IN delay tuning starting value.
To stabilize the SYNC_IN delay tuning, run :meth:`tune_sync_delay` once
:param sync_delay_seed: ``SYNC_IN`` delay tuning starting value.
To stabilize the ``SYNC_IN`` delay tuning, run :meth:`tune_sync_delay` once
and set this to the delay tap number returned (default: -1 to signal no
synchronization and no tuning during :meth:`init`).
Can be a string of the form "eeprom_device:byte_offset" to read the
value from a I2C EEPROM; in which case, `io_update_delay` must be set
Can be a string of the form ``eeprom_device:byte_offset`` to read the
value from a I2C EEPROM, in which case ``io_update_delay`` must be set
to the same string value.
:param io_update_delay: IO_UPDATE pulse alignment delay.
To align IO_UPDATE to SYNC_CLK, run :meth:`tune_io_update_delay` and
:param io_update_delay: ``IO_UPDATE`` pulse alignment delay.
To align ``IO_UPDATE`` to ``SYNC_CLK``, run :meth:`tune_io_update_delay` and
set this to the delay tap number returned.
Can be a string of the form "eeprom_device:byte_offset" to read the
value from a I2C EEPROM; in which case, `sync_delay_seed` must be set
Can be a string of the form ``eeprom_device:byte_offset`` to read the
value from a I2C EEPROM, in which case ``sync_delay_seed`` must be set
to the same string value.
"""
@ -188,9 +188,7 @@ class AD9910:
@kernel
def set_phase_mode(self, phase_mode: TInt32):
r"""Set the default phase mode.
for future calls to :meth:`set` and
r"""Set the default phase mode for future calls to :meth:`set` and
:meth:`set_mu`. Supported phase modes are:
* :const:`PHASE_MODE_CONTINUOUS`: the phase accumulator is unchanged
@ -233,7 +231,7 @@ class AD9910:
@kernel
def write16(self, addr: TInt32, data: TInt32):
"""Write to 16 bit register.
"""Write to 16-bit register.
:param addr: Register address
:param data: Data to be written
@ -244,7 +242,7 @@ class AD9910:
@kernel
def write32(self, addr: TInt32, data: TInt32):
"""Write to 32 bit register.
"""Write to 32-bit register.
:param addr: Register address
:param data: Data to be written
@ -258,7 +256,7 @@ class AD9910:
@kernel
def read16(self, addr: TInt32) -> TInt32:
"""Read from 16 bit register.
"""Read from 16-bit register.
:param addr: Register address
"""
@ -273,7 +271,7 @@ class AD9910:
@kernel
def read32(self, addr: TInt32) -> TInt32:
"""Read from 32 bit register.
"""Read from 32-bit register.
:param addr: Register address
"""
@ -288,10 +286,10 @@ class AD9910:
@kernel
def read64(self, addr: TInt32) -> TInt64:
"""Read from 64 bit register.
"""Read from 64-bit register.
:param addr: Register address
:return: 64 bit integer register value
:return: 64-bit integer register value
"""
self.bus.set_config_mu(
urukul.SPI_CONFIG, 8,
@ -311,10 +309,10 @@ class AD9910:
@kernel
def write64(self, addr: TInt32, data_high: TInt32, data_low: TInt32):
"""Write to 64 bit register.
"""Write to 64-bit register.
:param addr: Register address
:param data_high: High (MSB) 32 bits of the data
:param data_high: High (MSB) 32 data bits
:param data_low: Low (LSB) 32 data bits
"""
self.bus.set_config_mu(urukul.SPI_CONFIG, 8,
@ -332,8 +330,9 @@ class AD9910:
"""Write data to RAM.
The profile to write to and the step, start, and end address
need to be configured before and separately using
:meth:`set_profile_ram` and the parent CPLD `set_profile`.
need to be configured in advance and separately using
:meth:`set_profile_ram` and the parent CPLD
:meth:`~artiq.coredevice.urukul.CPLD.set_profile`.
:param data: Data to be written to RAM.
"""
@ -354,7 +353,8 @@ class AD9910:
The profile to read from and the step, start, and end address
need to be configured before and separately using
:meth:`set_profile_ram` and the parent CPLD `set_profile`.
:meth:`set_profile_ram` and the parent CPLD
:meth:`~artiq.coredevice.urukul.CPLD.set_profile`.
:param data: List to be filled with data read from RAM.
"""
@ -390,9 +390,9 @@ class AD9910:
manual_osk_external: TInt32 = 0,
osk_enable: TInt32 = 0,
select_auto_osk: TInt32 = 0):
"""Set CFR1. See the AD9910 datasheet for parameter meanings.
"""Set CFR1. See the AD9910 datasheet for parameter meanings and sizes.
This method does not pulse IO_UPDATE.
This method does not pulse ``IO_UPDATE.``
:param power_down: Power down bits.
:param phase_autoclear: Autoclear phase accumulator.
@ -429,9 +429,9 @@ class AD9910:
effective_ftw: TInt32 = 1,
sync_validation_disable: TInt32 = 0,
matched_latency_enable: TInt32 = 0):
"""Set CFR2. See the AD9910 datasheet for parameter meanings.
"""Set CFR2. See the AD9910 datasheet for parameter meanings and sizes.
This method does not pulse IO_UPDATE.
This method does not pulse ``IO_UPDATE``.
:param asf_profile_enable: Enable amplitude scale from single tone profiles.
:param drg_enable: Digital ramp enable.
@ -456,14 +456,14 @@ class AD9910:
"""Initialize and configure the DDS.
Sets up SPI mode, confirms chip presence, powers down unused blocks,
configures the PLL, waits for PLL lock. Uses the
IO_UPDATE signal multiple times.
configures the PLL, waits for PLL lock. Uses the ``IO_UPDATE``
signal multiple times.
:param blind: Do not read back DDS identity and do not wait for lock.
"""
self.sync_data.init()
if self.sync_data.sync_delay_seed >= 0 and not self.cpld.sync_div:
raise ValueError("parent cpld does not drive SYNC")
raise ValueError("parent CPLD does not drive SYNC")
if self.sync_data.sync_delay_seed >= 0:
if self.sysclk_per_mu != self.sysclk * self.core.ref_period:
raise ValueError("incorrect clock ratio for synchronization")
@ -514,7 +514,7 @@ class AD9910:
def power_down(self, bits: TInt32 = 0b1111):
"""Power down DDS.
:param bits: Power down bits, see datasheet
:param bits: Power-down bits, see datasheet
"""
self.set_cfr1(power_down=bits)
self.cpld.io_update.pulse(1 * us)
@ -534,23 +534,23 @@ class AD9910:
After the SPI transfer, the shared IO update pin is pulsed to
activate the data.
.. seealso: :meth:`set_phase_mode` for a definition of the different
.. seealso: :meth:`AD9910.set_phase_mode` for a definition of the different
phase modes.
:param ftw: Frequency tuning word: 32 bit.
:param pow_: Phase tuning word: 16 bit unsigned.
:param asf: Amplitude scale factor: 14 bit unsigned.
:param ftw: Frequency tuning word: 32-bit.
:param pow_: Phase tuning word: 16-bit unsigned.
:param asf: Amplitude scale factor: 14-bit unsigned.
:param phase_mode: If specified, overrides the default phase mode set
by :meth:`set_phase_mode` for this call.
:param ref_time_mu: Fiducial time used to compute absolute or tracking
phase updates. In machine units as obtained by `now_mu()`.
phase updates. In machine units as obtained by :meth:`~artiq.language.core.now_mu()`.
:param profile: Single tone profile number to set (0-7, default: 7).
Ineffective if `ram_destination` is specified.
Ineffective if ``ram_destination`` is specified.
:param ram_destination: RAM destination (:const:`RAM_DEST_FTW`,
:const:`RAM_DEST_POW`, :const:`RAM_DEST_ASF`,
:const:`RAM_DEST_POWASF`). If specified, write free DDS parameters
to the ASF/FTW/POW registers instead of to the single tone profile
register (default behaviour, see `profile`).
register (default behaviour, see ``profile``).
:return: Resulting phase offset word after application of phase
tracking offset. When using :const:`PHASE_MODE_CONTINUOUS` in
subsequent calls, use this value as the "current" phase.
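
For orientation, the word sizes quoted above translate to machine units roughly as follows (illustrative arithmetic, assuming a 1 GHz DDS sample clock; not taken from the driver):

    sysclk = 1e9
    ftw = round(100e6 / sysclk * (1 << 32))   # 32-bit frequency tuning word for 100 MHz
    pow_ = round(0.25 * (1 << 16))            # 16-bit phase offset word for 0.25 turns
    asf = round(0.5 * 0x3fff)                 # 14-bit amplitude scale factor for half scale
    print(hex(ftw), hex(pow_), hex(asf))      # 0x1999999a 0x4000 0x2000
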
@ -598,10 +598,10 @@ class AD9910:
"""Get the frequency tuning word, phase offset word,
and amplitude scale factor.
.. seealso:: :meth:`get`
See also :meth:`AD9910.get`.
:param profile: Profile number to get (0-7, default: 7)
:return: A tuple ``(ftw, pow, asf)``
:return: A tuple (FTW, POW, ASF)
"""
# Read data
@ -617,12 +617,12 @@ class AD9910:
profile: TInt32 = _DEFAULT_PROFILE_RAM,
nodwell_high: TInt32 = 0, zero_crossing: TInt32 = 0,
mode: TInt32 = 1):
"""Set the RAM profile settings.
"""Set the RAM profile settings. See also AD9910 datasheet.
:param start: Profile start address in RAM.
:param end: Profile end address in RAM (last address).
:param step: Profile time step in units of t_DDS, typically 4 ns
(default: 1).
:param start: Profile start address in RAM (10-bit).
:param end: Profile end address in RAM, inclusive (10-bit).
:param step: Profile time step, counted in DDS sample clock
cycles, typically 4 ns (16-bit, default: 1)
:param profile: Profile index (0 to 7) (default: 0).
:param nodwell_high: No-dwell high bit (default: 0,
see AD9910 documentation).
@ -850,7 +850,7 @@ class AD9910:
ram_destination: TInt32 = -1) -> TFloat:
"""Set DDS data in SI units.
.. seealso:: :meth:`set_mu`
See also :meth:`AD9910.set_mu`.
:param frequency: Frequency in Hz
:param phase: Phase tuning word in turns
@ -871,10 +871,10 @@ class AD9910:
) -> TTuple([TFloat, TFloat, TFloat]):
"""Get the frequency, phase, and amplitude.
.. seealso:: :meth:`get_mu`
See also :meth:`AD9910.get_mu`.
:param profile: Profile number to get (0-7, default: 7)
:return: A tuple ``(frequency, phase, amplitude)``
:return: A tuple (frequency, phase, amplitude)
"""
# Get values
@ -887,11 +887,10 @@ class AD9910:
def set_att_mu(self, att: TInt32):
"""Set digital step attenuator in machine units.
This method will write the attenuator settings of all four channels.
This method will write the attenuator settings of all four channels. See also
:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.set_att_mu>`.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`
:param att: Attenuation setting, 8 bit digital.
:param att: Attenuation setting, 8-bit digital.
"""
self.cpld.set_att_mu(self.chip_select - 4, att)
@ -899,9 +898,8 @@ class AD9910:
def set_att(self, att: TFloat):
"""Set digital step attenuator in SI units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`
This method will write the attenuator settings of all four channels. See also
:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.set_att>`.
:param att: Attenuation in dB.
"""
@ -909,19 +907,17 @@ class AD9910:
@kernel
def get_att_mu(self) -> TInt32:
"""Get digital step attenuator value in machine units.
"""Get digital step attenuator value in machine units. See also
:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.get_channel_att_mu>`.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`
:return: Attenuation setting, 8 bit digital.
:return: Attenuation setting, 8-bit digital.
"""
return self.cpld.get_channel_att_mu(self.chip_select - 4)
@kernel
def get_att(self) -> TFloat:
"""Get digital step attenuator value in SI units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`
"""Get digital step attenuator value in SI units. See also
:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.get_channel_att>`.
:return: Attenuation in dB.
"""
@ -943,16 +939,16 @@ class AD9910:
window: TInt32,
en_sync_gen: TInt32 = 0):
"""Set the relevant parameters in the multi device synchronization
register. See the AD9910 datasheet for details. The SYNC clock
generator preset value is set to zero, and the SYNC_OUT generator is
register. See the AD9910 datasheet for details. The ``SYNC`` clock
generator preset value is set to zero, and the ``SYNC_OUT`` generator is
disabled by default.
:param in_delay: SYNC_IN delay tap (0-31) in steps of ~75ps
:param window: Symmetric SYNC_IN validation window (0-15) in
:param in_delay: ``SYNC_IN`` delay tap (0-31) in steps of ~75ps
:param window: Symmetric ``SYNC_IN`` validation window (0-15) in
steps of ~75ps for both hold and setup margin.
:param en_sync_gen: Whether to enable the DDS-internal sync generator
(SYNC_OUT, cf. sync_sel == 1). Should be left off for the normal
use case, where the SYNC clock is supplied by the core device.
(``SYNC_OUT``, cf. ``sync_sel == 1``). Should be left off for the normal
use case, where the ``SYNC`` clock is supplied by the core device.
"""
self.write32(_AD9910_REG_SYNC,
(window << 28) | # SYNC S/H validation delay
@ -965,9 +961,9 @@ class AD9910:
@kernel
def clear_smp_err(self):
"""Clear the SMP_ERR flag and enables SMP_ERR validity monitoring.
"""Clear the ``SMP_ERR`` flag and enables ``SMP_ERR`` validity monitoring.
Violations of the SYNC_IN sample and hold margins will result in
Violations of the ``SYNC_IN`` sample and hold margins will result in
SMP_ERR being asserted. This then also activates the red LED on
the respective Urukul channel.
@ -982,9 +978,9 @@ class AD9910:
@kernel
def tune_sync_delay(self,
search_seed: TInt32 = 15) -> TTuple([TInt32, TInt32]):
"""Find a stable SYNC_IN delay.
"""Find a stable ``SYNC_IN`` delay.
This method first locates a valid SYNC_IN delay at zero validation
This method first locates a valid ``SYNC_IN`` delay at zero validation
window size (setup/hold margin) by scanning around `search_seed`. It
then looks for similar valid delays at successively larger validation
window sizes until none can be found. It then decreases the validation
@ -993,7 +989,7 @@ class AD9910:
This method and :meth:`tune_io_update_delay` can be run in any order.
:param search_seed: Start value for valid SYNC_IN delay search.
:param search_seed: Start value for valid ``SYNC_IN`` delay search.
Defaults to 15 (half range).
:return: Tuple of optimal delay and window size.
"""
@ -1040,16 +1036,16 @@ class AD9910:
def measure_io_update_alignment(self, delay_start: TInt64,
delay_stop: TInt64) -> TInt32:
"""Use the digital ramp generator to locate the alignment between
IO_UPDATE and SYNC_CLK.
``IO_UPDATE`` and ``SYNC_CLK``.
The ramp generator is set up to a linear frequency ramp
(dFTW/t_SYNC_CLK=1) and started at a coarse RTIO time stamp plus
`delay_start` and stopped at a coarse RTIO time stamp plus
`delay_stop`.
``(dFTW/t_SYNC_CLK=1)`` and started at a coarse RTIO time stamp plus
``delay_start`` and stopped at a coarse RTIO time stamp plus
``delay_stop``.
:param delay_start: Start IO_UPDATE delay in machine units.
:param delay_stop: Stop IO_UPDATE delay in machine units.
:return: Odd/even SYNC_CLK cycle indicator.
:param delay_start: Start ``IO_UPDATE`` delay in machine units.
:param delay_stop: Stop ``IO_UPDATE`` delay in machine units.
:return: Odd/even ``SYNC_CLK`` cycle indicator.
"""
# set up DRG
self.set_cfr1(drg_load_lrr=1, drg_autoclear=1)
@ -1081,19 +1077,19 @@ class AD9910:
@kernel
def tune_io_update_delay(self) -> TInt32:
"""Find a stable IO_UPDATE delay alignment.
"""Find a stable ``IO_UPDATE`` delay alignment.
Scan through increasing IO_UPDATE delays until a delay is found that
lets IO_UPDATE be registered in the next SYNC_CLK cycle. Return a
IO_UPDATE delay that is as far away from that SYNC_CLK edge
Scan through increasing ``IO_UPDATE`` delays until a delay is found that
lets ``IO_UPDATE`` be registered in the next ``SYNC_CLK`` cycle. Return a
``IO_UPDATE`` delay that is as far away from that ``SYNC_CLK`` edge
as possible.
This method assumes that the IO_UPDATE TTLOut device has one machine
This method assumes that the ``IO_UPDATE`` TTLOut device has one machine
unit resolution (SERDES).
This method and :meth:`tune_sync_delay` can be run in any order.
:return: Stable IO_UPDATE delay to be passed to the constructor
:return: Stable ``IO_UPDATE`` delay to be passed to the constructor
:class:`AD9910` via the device database.
"""
period = self.sysclk_per_mu * 4 # SYNC_CLK period

View File

@ -11,7 +11,7 @@ from artiq.coredevice import urukul
class AD9912:
"""
AD9912 DDS channel on Urukul
AD9912 DDS channel on Urukul.
This class supports a single DDS channel and exposes the DDS,
the digital step attenuator, and the RF switch.
@ -22,9 +22,9 @@ class AD9912:
:param sw_device: Name of the RF switch device. The RF switch is a
TTLOut channel available as the :attr:`sw` attribute of this instance.
:param pll_n: DDS PLL multiplier. The DDS sample clock is
f_ref/clk_div*pll_n where f_ref is the reference frequency and clk_div
is the reference clock divider (both set in the parent Urukul CPLD
instance).
``f_ref / clk_div * pll_n`` where ``f_ref`` is the reference frequency and
``clk_div`` is the reference clock divider (both set in the parent
Urukul CPLD instance).
:param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
Note that when bypassing the PLL the red front panel LED may remain on.
"""
@ -44,7 +44,11 @@ class AD9912:
self.pll_en = pll_en
self.pll_n = pll_n
if pll_en:
sysclk = self.cpld.refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
refclk = self.cpld.refclk
if refclk < 11e6:
# use SYSCLK PLL Doubler
refclk = refclk * 2
sysclk = refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
else:
sysclk = self.cpld.refclk
assert sysclk <= 1e9
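
Restated as plain Python, the resulting SYSCLK arithmetic looks roughly like this (illustrative only; the example values are not from the diff):

    def ad9912_sysclk(refclk, clk_div, pll_n, pll_en=1):
        if not pll_en:
            return refclk
        if refclk < 11e6:
            refclk = refclk * 2                # SYSCLK PLL doubler engaged below 11 MHz
        return refclk / [1, 1, 2, 4][clk_div] * pll_n

    print(ad9912_sysclk(10e6, 0, 50))          # 1e9: a 10 MHz reference still reaches 1 GHz
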
@ -97,7 +101,7 @@ class AD9912:
Sets up SPI mode, confirms chip presence, powers down unused blocks,
and configures the PLL. Does not wait for PLL lock. Uses the
IO_UPDATE signal multiple times.
``IO_UPDATE`` signal multiple times.
"""
# SPI mode
self.write(AD9912_SER_CONF, 0x99, length=1)
@ -115,7 +119,11 @@ class AD9912:
self.write(AD9912_N_DIV, self.pll_n // 2 - 2, length=1)
self.cpld.io_update.pulse(2 * us)
# I_cp = 375 µA, VCO high range
self.write(AD9912_PLLCFG, 0b00000101, length=1)
if self.cpld.refclk < 11e6:
# enable SYSCLK PLL Doubler
self.write(AD9912_PLLCFG, 0b00001101, length=1)
else:
self.write(AD9912_PLLCFG, 0b00000101, length=1)
self.cpld.io_update.pulse(2 * us)
delay(1 * ms)
@ -125,9 +133,9 @@ class AD9912:
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`
See also :meth:`~artiq.coredevice.urukul.CPLD.set_att_mu`.
:param att: Attenuation setting, 8 bit digital.
:param att: Attenuation setting, 8-bit digital.
"""
self.cpld.set_att_mu(self.chip_select - 4, att)
@ -137,7 +145,7 @@ class AD9912:
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`
See also :meth:`~artiq.coredevice.urukul.CPLD.set_att`.
:param att: Attenuation in dB. Higher values mean more attenuation.
"""
@ -147,9 +155,9 @@ class AD9912:
def get_att_mu(self) -> TInt32:
"""Get digital step attenuator value in machine units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`
See also :meth:`~artiq.coredevice.urukul.CPLD.get_channel_att_mu`.
:return: Attenuation setting, 8 bit digital.
:return: Attenuation setting, 8-bit digital.
"""
return self.cpld.get_channel_att_mu(self.chip_select - 4)
@ -157,7 +165,7 @@ class AD9912:
def get_att(self) -> TFloat:
"""Get digital step attenuator value in SI units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`
See also :meth:`~artiq.coredevice.urukul.CPLD.get_channel_att`.
:return: Attenuation in dB.
"""
@ -170,8 +178,8 @@ class AD9912:
After the SPI transfer, the shared IO update pin is pulsed to
activate the data.
:param ftw: Frequency tuning word: 48 bit unsigned.
:param pow_: Phase tuning word: 16 bit unsigned.
:param ftw: Frequency tuning word: 48-bit unsigned.
:param pow_: Phase tuning word: 16-bit unsigned.
"""
# streaming transfer of FTW and POW
self.bus.set_config_mu(urukul.SPI_CONFIG, 16,
@ -189,9 +197,9 @@ class AD9912:
def get_mu(self) -> TTuple([TInt64, TInt32]):
"""Get the frequency tuning word and phase offset word.
.. seealso:: :meth:`get`
See also :meth:`AD9912.get`.
:return: A tuple ``(ftw, pow)``.
:return: A tuple (FTW, POW).
"""
# Read data
@ -239,7 +247,7 @@ class AD9912:
def set(self, frequency: TFloat, phase: TFloat = 0.0):
"""Set profile 0 data in SI units.
.. seealso:: :meth:`set_mu`
See also :meth:`AD9912.set_mu`.
:param frequency: Frequency in Hz
:param phase: Phase tuning word in turns
@ -251,9 +259,9 @@ class AD9912:
def get(self) -> TTuple([TFloat, TFloat]):
"""Get the frequency and phase.
.. seealso:: :meth:`get_mu`
See also :meth:`AD9912.get_mu`.
:return: A tuple ``(frequency, phase)``.
:return: A tuple (frequency, phase).
"""
# Get values

View File

@ -49,7 +49,7 @@ class AD9914:
The time cursor is not modified by any function in this class.
Output event replacement is not supported and issuing commands at the same
time is an error.
time results in collision errors.
:param sysclk: DDS system frequency. The DDS system clock must be a
phase-locked multiple of the RTIO clock.
@ -134,7 +134,7 @@ class AD9914:
timing margin.
:param sync_delay: integer from 0 to 0x3f that sets the value of
SYNC_OUT (bits 3-5) and SYNC_IN (bits 0-2) delay ADJ bits.
``SYNC_OUT`` (bits 3-5) and ``SYNC_IN`` (bits 0-2) delay ADJ bits.
"""
delay_mu(-self.init_sync_duration_mu)
self.write(AD9914_GPIO, (1 << self.channel) << 1)

View File

@ -112,7 +112,7 @@ class ADF5356:
This method will write the attenuator settings of the channel.
.. seealso:: :meth:`artiq.coredevice.mirny.Mirny.set_att`
See also :meth:`Mirny.set_att<artiq.coredevice.mirny.Mirny.set_att>`.
:param att: Attenuation in dB.
"""
@ -122,7 +122,7 @@ class ADF5356:
def set_att_mu(self, att):
"""Set digital step attenuator in machine units.
:param att: Attenuation setting, 8 bit digital.
:param att: Attenuation setting, 8-bit digital.
"""
self.cpld.set_att_mu(self.channel, att)
@ -531,14 +531,14 @@ class ADF5356:
@portable
def _compute_pfd_frequency(self, r, d, t) -> TInt64:
"""
Calculate the PFD frequency from the given reference path parameters
Calculate the PFD frequency from the given reference path parameters.
"""
return int64(self.sysclk * ((1 + d) / (r * (1 + t))))
@portable
def _compute_reference_counter(self) -> TInt32:
"""
Determine the reference counter R that maximizes the PFD frequency
Determine the reference counter R that maximizes the PFD frequency.
"""
d = ADF5356_REG4_R_DOUBLER_GET(self.regs[4])
t = ADF5356_REG4_R_DIVIDER_GET(self.regs[4])
@ -565,14 +565,15 @@ def calculate_pll(f_vco: TInt64, f_pfd: TInt64):
"""
Calculate fractional-N PLL parameters such that
``f_vco`` = ``f_pfd`` * (``n`` + (``frac1`` + ``frac2``/``mod2``) / ``mod1``)
``f_vco = f_pfd * (n + (frac1 + frac2/mod2) / mod1)``
where
``mod1 = 2**24`` and ``mod2 <= 2**28``
``mod1 = 2**24`` and ``mod2 <= 2**28``
:param f_vco: target VCO frequency
:param f_pfd: PFD frequency
:return: ``(n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb))``
:return: (``n``, ``frac1``, ``(frac2_msb, frac2_lsb)``, ``(mod2_msb, mod2_lsb)``)
"""
f_pfd = int64(f_pfd)
f_vco = int64(f_vco)
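
A quick numeric check of the quoted relation (example values only, not taken from the driver):

    f_pfd = 125_000_000
    n, frac1, frac2, mod2 = 30, 2**23, 0, 2
    mod1 = 2**24
    f_vco = f_pfd * (n + (frac1 + frac2 / mod2) / mod1)
    print(f_vco)                               # 3.8125e9, i.e. 30.5 * f_pfd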

View File

@ -17,12 +17,12 @@ ALMAZNY_LEGACY_SPIT_WR = 32
class AlmaznyLegacy:
"""
Almazny (High frequency mezzanine board for Mirny)
Almazny (High-frequency mezzanine board for Mirny)
This applies to Almazny hardware v1.1 and earlier.
Use :class:`artiq.coredevice.almazny.AlmaznyChannel` for Almazny v1.2 and later.
Use :class:`~artiq.coredevice.almazny.AlmaznyChannel` for Almazny v1.2 and later.
:param host_mirny: Mirny device Almazny is connected to
:param host_mirny: :class:`~artiq.coredevice.mirny.Mirny` device Almazny is connected to
"""
def __init__(self, dmgr, host_mirny):
@ -121,12 +121,13 @@ class AlmaznyLegacy:
class AlmaznyChannel:
"""
One Almazny channel
Driver for one Almazny channel.
Almazny is a mezzanine for the Quad PLL RF source Mirny that exposes and
controls the frequency-doubled outputs.
This driver requires Almazny hardware revision v1.2 or later
and Mirny CPLD gateware v0.3 or later.
Use :class:`artiq.coredevice.almazny.AlmaznyLegacy` for Almazny hardware v1.1 and earlier.
Use :class:`~artiq.coredevice.almazny.AlmaznyLegacy` for Almazny hardware v1.1 and earlier.
:param host_mirny: Mirny CPLD device name
:param channel: channel index (0-3)

View File

@ -21,9 +21,9 @@ class CoreCache:
"""Extract a value from the core device cache.
After a value is extracted, it cannot be replaced with another value using
:meth:`put` until all kernel functions finish executing; attempting
to replace it will result in a :class:`artiq.coredevice.exceptions.CacheError`.
to replace it will result in a :class:`~artiq.coredevice.exceptions.CacheError`.
If the cache does not contain any value associated with ``key``, an empty list
If the cache does not contain any value associated with `key`, an empty list
is returned.
The value is not copied, so mutating it will change what's stored in the cache.

View File

@ -3,6 +3,7 @@ import logging
import traceback
import numpy
import socket
import builtins
from enum import Enum
from fractions import Fraction
from collections import namedtuple
@ -465,12 +466,12 @@ class CommKernel:
self._write_bool(value)
elif tag == "i":
check(isinstance(value, (int, numpy.int32)) and
(-2**31 <= value < 2**31),
(-2**31 <= value <= 2**31-1),
lambda: "32-bit int")
self._write_int32(value)
elif tag == "I":
check(isinstance(value, (int, numpy.int32, numpy.int64)) and
(-2**63 <= value < 2**63),
(-2**63 <= value <= 2**63-1),
lambda: "64-bit int")
self._write_int64(value)
elif tag == "f":
@ -479,8 +480,8 @@ class CommKernel:
self._write_float64(value)
elif tag == "F":
check(isinstance(value, Fraction) and
(-2**63 <= value.numerator < 2**63) and
(-2**63 <= value.denominator < 2**63),
(-2**63 <= value.numerator <= 2**63-1) and
(-2**63 <= value.denominator <= 2**63-1),
lambda: "64-bit Fraction")
self._write_int64(value.numerator)
self._write_int64(value.denominator)
@ -616,9 +617,10 @@ class CommKernel:
self._write_int32(embedding_map.store_str(function))
else:
exn_type = type(exn)
if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
hasattr(exn, "artiq_builtin"):
name = "0:{}".format(exn_type.__name__)
if exn_type in builtins.__dict__.values():
name = "0:{}".format(exn_type.__qualname__)
elif hasattr(exn, "artiq_builtin"):
name = "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
else:
exn_id = embedding_map.store_object(exn_type)
name = "{}:{}.{}".format(exn_id,

View File

@ -53,6 +53,9 @@ def rtio_get_destination_status(linkno: TInt32) -> TBool:
def rtio_get_counter() -> TInt64:
raise NotImplementedError("syscall not simulated")
@syscall
def test_exception_id_sync(id: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated")
def get_target_cls(target):
if target == "rv32g":
@ -73,8 +76,8 @@ class Core:
On platforms that use clock multiplication and SERDES-based PHYs,
this is the period after multiplication. For example, with a RTIO core
clocked at 125MHz and a SERDES multiplication factor of 8, the
reference period is 1ns.
The time machine unit is equal to this period.
reference period is ``1 ns``.
The machine time unit (``mu``) is equal to this period.
:param ref_multiplier: ratio between the RTIO fine timestamp frequency
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
factor).
@ -116,6 +119,8 @@ class Core:
self.trigger_analyzer_proxy()
def close(self):
"""Disconnect core device and close sockets.
"""
self.comm.close()
def compile(self, function, args, kwargs, set_result=None,
@ -241,8 +246,8 @@ class Core:
Similarly, modified values are not written back, and explicit RPC should be used
to modify host objects.
Carefully review the source code of driver calls used in precompiled kernels, as
they may rely on host object attributes being transfered between kernel calls.
Examples include code used to control DDS phase, and Urukul RF switch control
they may rely on host object attributes being transferred between kernel calls.
Examples include code used to control DDS phase and Urukul RF switch control
via the CPLD register.
The return value of the callable is the return value of the kernel, if any.
@ -273,7 +278,7 @@ class Core:
@portable
def seconds_to_mu(self, seconds):
"""Convert seconds to the corresponding number of machine units
(RTIO cycles).
(fine RTIO cycles).
:param seconds: time (in seconds) to convert.
"""
@ -281,7 +286,7 @@ class Core:
@portable
def mu_to_seconds(self, mu):
"""Convert machine units (RTIO cycles) to seconds.
"""Convert machine units (fine RTIO cycles) to seconds.
:param mu: cycle count to convert.
"""
@ -296,7 +301,7 @@ class Core:
for the actual value of the hardware register at the instant when
execution resumes in the caller.
For a more detailed description of these concepts, see :doc:`/rtio`.
For a more detailed description of these concepts, see :doc:`rtio`.
"""
return rtio_get_counter()
@ -315,7 +320,7 @@ class Core:
def get_rtio_destination_status(self, destination):
"""Returns whether the specified RTIO destination is up.
This is particularly useful in startup kernels to delay
startup until certain DRTIO destinations are up."""
startup until certain DRTIO destinations are available."""
return rtio_get_destination_status(destination)
@kernel
@ -343,7 +348,7 @@ class Core:
Returns only after the dump has been retrieved from the device.
Raises IOError if no analyzer proxy has been configured, or if the
Raises :exc:`IOError` if no analyzer proxy has been configured, or if the
analyzer proxy fails. In the latter case, more details would be
available in the proxy log.
"""

View File

@ -76,11 +76,11 @@ class CoreDMA:
@kernel
def record(self, name, enable_ddma=False):
"""Returns a context manager that will record a DMA trace called ``name``.
"""Returns a context manager that will record a DMA trace called `name`.
Any previously recorded trace with the same name is overwritten.
The trace will persist across kernel switches.
In DRTIO context, distributed DMA can be toggled with ``enable_ddma``.
In DRTIO context, distributed DMA can be toggled with `enable_ddma`.
Enabling it allows running DMA on satellites, rather than sending all
events from the master.
@ -116,7 +116,7 @@ class CoreDMA:
def playback_handle(self, handle):
"""Replays a handle obtained with :meth:`get_handle`. Using this function
is much faster than :meth:`playback` for replaying a set of traces repeatedly,
but incurs the overhead of managing the handles onto the programmer."""
but offloads the overhead of managing the handles onto the programmer."""
(epoch, advance_mu, ptr, uses_ddma) = handle
if self.epoch != epoch:
raise DMAError("Invalid handle")
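A usage sketch tying :meth:`record`, :meth:`get_handle` and :meth:`playback_handle` together; the device names (``core_dma``, ``ttl0``) are assumptions about the device database: ::

    from artiq.experiment import *

    class DMAPulsesSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("core_dma")
            self.setattr_device("ttl0")     # assumed TTLOut channel

        @kernel
        def run(self):
            self.core.reset()
            with self.core_dma.record("pulses"):
                for _ in range(50):
                    self.ttl0.pulse(100*ns)
                    delay(100*ns)
            handle = self.core_dma.get_handle("pulses")
            self.core.break_realtime()
            for _ in range(10):
                self.core_dma.playback_handle(handle)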

View File

@ -1,9 +1,9 @@
"""Driver for RTIO-enabled TTL edge counter.
Like for the TTL input PHYs, sensitivity can be configured over RTIO
(``gate_rising()``, etc.). In contrast to the former, however, the count is
As for the TTL input PHYs, sensitivity can be configured over RTIO
(:meth:`gate_rising<EdgeCounter.gate_rising>`, etc.). In contrast to the former, however, the count is
accumulated in gateware, and only a single input event is generated at the end
of each gate period::
of each gate period: ::
with parallel:
doppler_cool()
@ -17,12 +17,12 @@ of each gate period::
print("Readout counts:", self.pmt_counter.fetch_count())
For applications where the timestamps of the individual input events are not
required, this has two advantages over ``TTLInOut.count()`` beyond raw
throughput. First, it is easy to count events during multiple separate periods
without blocking to read back counts in between, as illustrated in the above
example. Secondly, as each count total only takes up a single input event, it
is much easier to acquire counts on several channels in parallel without
risking input FIFO overflows::
required, this has two advantages over :meth:`TTLInOut.count<artiq.coredevice.ttl.TTLInOut.count>`
beyond raw throughput. First, it is easy to count events during multiple separate
periods without blocking to read back counts in between, as illustrated in the
above example. Secondly, as each count total only takes up a single input event,
it is much easier to acquire counts on several channels in parallel without
risking input RTIO overflows: ::
# Using the TTLInOut driver, pmt_1 input events are only processed
# after pmt_0 is done counting. To avoid RTIOOverflows, a round-robin
@ -35,8 +35,6 @@ risking input FIFO overflows::
counts_0 = self.pmt_0.count(now_mu()) # blocks
counts_1 = self.pmt_1.count(now_mu())
#
# Using gateware counters, only a single input event each is
# generated, greatly reducing the load on the input FIFOs:
@ -47,7 +45,7 @@ risking input FIFO overflows::
counts_0 = self.pmt_0_counter.fetch_count() # blocks
counts_1 = self.pmt_1_counter.fetch_count()
See :mod:`artiq.gateware.rtio.phy.edge_counter` and
See the sources of :mod:`artiq.gateware.rtio.phy.edge_counter` and
:meth:`artiq.gateware.eem.DIO.add_std` for the gateware components.
"""
@ -176,13 +174,13 @@ class EdgeCounter:
"""Emit an RTIO event at the current timeline position to set the
gateware configuration.
For most use cases, the `gate_*` wrappers will be more convenient.
For most use cases, the ``gate_*`` wrappers will be more convenient.
:param count_rising: Whether to count rising signal edges.
:param count_falling: Whether to count falling signal edges.
:param send_count_event: If `True`, an input event with the current
:param send_count_event: If ``True``, an input event with the current
counter value is generated on the next clock cycle (once).
:param reset_to_zero: If `True`, the counter value is reset to zero on
:param reset_to_zero: If ``True``, the counter value is reset to zero on
the next clock cycle (once).
"""
config = int32(0)
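As a hedged sketch (not the actual driver code), a ``gate_*``-style wrapper can be expressed in terms of :meth:`set_config` roughly as follows: ::

    from artiq.experiment import kernel, delay, now_mu

    @kernel
    def gate_rising_sketch(counter, duration):
        # open the gate: count rising edges only, reset the count to zero
        counter.set_config(count_rising=True, count_falling=False,
                           send_count_event=False, reset_to_zero=True)
        delay(duration)
        # close the gate and request one count event for fetch_count()
        counter.set_config(count_rising=False, count_falling=False,
                           send_count_event=True, reset_to_zero=False)
        return now_mu()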

View File

@ -6,12 +6,26 @@ import os
from artiq import __artiq_dir__ as artiq_dir
from artiq.coredevice.runtime import source_loader
"""
This file provides class definitions for all the exceptions declared in `EmbeddingMap` in `artiq.compiler.embedding`.
For Python builtin exceptions, use the `builtins` module.
For ARTIQ-specific exceptions, inherit from the `Exception` class.
"""
ZeroDivisionError = builtins.ZeroDivisionError
ValueError = builtins.ValueError
IndexError = builtins.IndexError
RuntimeError = builtins.RuntimeError
AssertionError = builtins.AssertionError
AttributeError = builtins.AttributeError
IndexError = builtins.IndexError
IOError = builtins.IOError
KeyError = builtins.KeyError
NotImplementedError = builtins.NotImplementedError
OverflowError = builtins.OverflowError
RuntimeError = builtins.RuntimeError
TimeoutError = builtins.TimeoutError
TypeError = builtins.TypeError
ValueError = builtins.ValueError
ZeroDivisionError = builtins.ZeroDivisionError
OSError = builtins.OSError
class CoreException:
@ -137,7 +151,7 @@ class RTIOOverflow(Exception):
class RTIODestinationUnreachable(Exception):
"""Raised with a RTIO operation could not be completed due to a DRTIO link
"""Raised when a RTIO operation could not be completed due to a DRTIO link
being down.
"""
artiq_builtin = True
@ -157,13 +171,17 @@ class SubkernelError(Exception):
class ClockFailure(Exception):
"""Raised when RTIO PLL has lost lock."""
artiq_builtin = True
class I2CError(Exception):
"""Raised when a I2C transaction fails."""
pass
artiq_builtin = True
class SPIError(Exception):
"""Raised when a SPI transaction fails."""
pass
artiq_builtin = True
class UnwrapNoneError(Exception):
"""Raised when unwrapping a none Option."""
artiq_builtin = True

View File

@ -1,4 +1,4 @@
"""RTIO driver for the Fastino 32channel, 16 bit, 2.5 MS/s per channel,
"""RTIO driver for the Fastino 32-channel, 16-bit, 2.5 MS/s per channel
streaming DAC.
"""
from numpy import int32, int64
@ -17,22 +17,22 @@ class Fastino:
to the DAC RTIO addresses, if a channel is not "held" by setting its bit
using :meth:`set_hold`, the next frame will contain the update. For the
DACs held, the update is triggered explicitly by setting the corresponding
bit using :meth:`set_update`. Update is self-clearing. This enables atomic
bit using :meth:`update`. Update is self-clearing. This enables atomic
DAC updates synchronized to a frame edge.
The `log2_width=0` RTIO layout uses one DAC channel per RTIO address and a
dense RTIO address space. The RTIO words are narrow (32 bit) and
The ``log2_width=0`` RTIO layout uses one DAC channel per RTIO address and a
dense RTIO address space. The RTIO words are narrow (32-bit) and
few-channel updates are efficient. There is the least amount of DAC state
tracking in kernels, at the cost of more DMA and RTIO data.
The setting here and in the RTIO PHY (gateware) must match.
Other `log2_width` (up to `log2_width=5`) settings pack multiple
Other ``log2_width`` (up to ``log2_width=5``) settings pack multiple
(in powers of two) DAC channels into one group and into one RTIO write.
The RTIO data width increases accordingly. The `log2_width`
The RTIO data width increases accordingly. The ``log2_width``
LSBs of the RTIO address for a DAC channel write must be zero and the
address space is sparse. For `log2_width=5` the RTIO data is 512 bit wide.
address space is sparse. For ``log2_width=5`` the RTIO data is 512-bit wide.
If `log2_width` is zero, the :meth:`set_dac`/:meth:`set_dac_mu` interface
If ``log2_width`` is zero, the :meth:`set_dac`/:meth:`set_dac_mu` interface
must be used. If non-zero, the :meth:`set_group`/:meth:`set_group_mu`
interface must be used.
@ -63,15 +63,16 @@ class Fastino:
* disables RESET, DAC_CLR, enables AFE_PWR
* clears error counters, enables error counting
* turns LEDs off
* clears `hold` and `continuous` on all channels
* clears ``hold`` and ``continuous`` on all channels
* clears and resets interpolators to unit rate change on all
channels
It does not change set channel voltages and does not reset the PLLs or clock
domains.
Note: On Fastino gateware before v0.2 this may lead to 0 voltage being emitted
transiently.
.. warning::
On Fastino gateware before v0.2 this may lead to 0 voltage being emitted
transiently.
"""
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=1)
delay_mu(self.t_frame)
@ -115,7 +116,7 @@ class Fastino:
"""Write DAC data in machine units.
:param dac: DAC channel to write to (0-31).
:param data: DAC word to write, 16 bit unsigned integer, in machine
:param data: DAC word to write, 16-bit unsigned integer, in machine
units.
"""
self.write(dac, data)
@ -124,9 +125,9 @@ class Fastino:
def set_group_mu(self, dac: TInt32, data: TList(TInt32)):
"""Write a group of DAC channels in machine units.
:param dac: First channel in DAC channel group (0-31). The `log2_width`
:param dac: First channel in DAC channel group (0-31). The ``log2_width``
LSBs must be zero.
:param data: List of DAC data pairs (2x16 bit unsigned) to write,
:param data: List of DAC data pairs (2x16-bit unsigned) to write,
in machine units. Data exceeding group size is ignored.
If the list length is less than group size, the remaining
DAC channels within the group are cleared to 0 (machine units).
@ -137,10 +138,10 @@ class Fastino:
@portable
def voltage_to_mu(self, voltage):
"""Convert SI Volts to DAC machine units.
"""Convert SI volts to DAC machine units.
:param voltage: Voltage in SI Volts.
:return: DAC data word in machine units, 16 bit integer.
:param voltage: Voltage in SI volts.
:return: DAC data word in machine units, 16-bit integer.
"""
data = int32(round((0x8000/10.)*voltage)) + int32(0x8000)
if data < 0 or data > 0xffff:
@ -149,9 +150,9 @@ class Fastino:
@portable
def voltage_group_to_mu(self, voltage, data):
"""Convert SI Volts to packed DAC channel group machine units.
"""Convert SI volts to packed DAC channel group machine units.
:param voltage: List of SI Volt voltages.
:param voltage: List of voltages in SI volts.
:param data: List of DAC channel data pairs to write to.
Half the length of `voltage`.
"""
@ -185,7 +186,7 @@ class Fastino:
def update(self, update):
"""Schedule channels for update.
:param update: Bit mask of channels to update (32 bit).
:param update: Bit mask of channels to update (32-bit).
"""
self.write(0x20, update)
@ -193,7 +194,7 @@ class Fastino:
def set_hold(self, hold):
"""Set channels to manual update.
:param hold: Bit mask of channels to hold (32 bit).
:param hold: Bit mask of channels to hold (32-bit).
"""
self.write(0x21, hold)
@ -214,9 +215,9 @@ class Fastino:
@kernel
def set_leds(self, leds):
"""Set the green user-defined LEDs
"""Set the green user-defined LEDs.
:param leds: LED status, 8 bit integer each bit corresponding to one
:param leds: LED status, 8-bit integer each bit corresponding to one
green LED.
"""
self.write(0x23, leds)
@ -245,16 +246,16 @@ class Fastino:
def stage_cic(self, rate) -> TInt32:
"""Compute and stage interpolator configuration.
This method approximates the desired interpolation rate using a 10 bit
floating point representation (6 bit mantissa, 4 bit exponent) and
This method approximates the desired interpolation rate using a 10-bit
floating point representation (6-bit mantissa, 4-bit exponent) and
then determines an optimal interpolation gain compensation exponent
to avoid clipping. Gains for rates that are powers of two are accurately
compensated. Other rates lead to overall less than unity gain (but more
than 0.5 gain).
The overall gain including gain compensation is
`actual_rate**order/2**ceil(log2(actual_rate**order))`
where `order = 3`.
The overall gain including gain compensation is ``actual_rate ** order /
2 ** ceil(log2(actual_rate ** order))``
where ``order = 3``.
Returns the actual interpolation rate.
"""
@ -293,7 +294,7 @@ class Fastino:
their output is supposed to be constant.
This method resets and settles the affected interpolators. There will be
no output updates for the next `order = 3` input samples.
no output updates for the next ``order = 3`` input samples.
Affected channels will only accept one input sample per input sample
period. This method synchronizes the input sample period to the current
frame on the affected channels.

View File

@ -102,7 +102,7 @@ class Grabber:
this call or the next.
If the timeout is reached before data is available, the exception
GrabberTimeoutException is raised.
:exc:`GrabberTimeoutException` is raised.
:param timeout_mu: Timestamp at which a timeout will occur. Set to -1
(default) to disable timeout.

View File

@ -43,7 +43,7 @@ def i2c_poll(busno, busaddr):
"""Poll I2C device at address.
:param busno: I2C bus number
:param busaddr: 8 bit I2C device address (LSB=0)
:param busaddr: 8-bit I2C device address (LSB=0)
:returns: True if the poll was ACKed
"""
i2c_start(busno)
@ -57,7 +57,7 @@ def i2c_write_byte(busno, busaddr, data, ack=True):
"""Write one byte to a device.
:param busno: I2C bus number
:param busaddr: 8 bit I2C device address (LSB=0)
:param busaddr: 8-bit I2C device address (LSB=0)
:param data: Data byte to be written
:param ack: Expect I2C ACK of the byte written; if ``False``, allow NACK
"""
@ -76,7 +76,7 @@ def i2c_read_byte(busno, busaddr):
"""Read one byte from a device.
:param busno: I2C bus number
:param busaddr: 8 bit I2C device address (LSB=0)
:param busaddr: 8-bit I2C device address (LSB=0)
:returns: Byte read
"""
i2c_start(busno)
@ -95,10 +95,10 @@ def i2c_write_many(busno, busaddr, addr, data, ack_last=True):
"""Transfer multiple bytes to a device.
:param busno: I2C bus number
:param busaddr: 8 bit I2C device address (LSB=0)
:param addr: 8 bit data address
:param busaddr: 8-bit I2C device address (LSB=0)
:param addr: 8-bit data address
:param data: Data bytes to be written
:param ack_last: Expect I2C ACK of the last byte written. If `False`,
:param ack_last: Expect I2C ACK of the last byte written. If ``False``,
the last byte may be NACKed (e.g. EEPROM full page writes).
"""
n = len(data)
@ -121,8 +121,8 @@ def i2c_read_many(busno, busaddr, addr, data):
"""Transfer multiple bytes from a device.
:param busno: I2C bus number
:param busaddr: 8 bit I2C device address (LSB=0)
:param addr: 8 bit data address
:param busaddr: 8-bit I2C device address (LSB=0)
:param addr: 8-bit data address
:param data: List of integers to be filled with the data read.
One entry per byte.
"""
@ -147,7 +147,7 @@ class I2CSwitch:
PCA954X (or other) type detection is done by the CPU during I2C init.
I2C transactions not real-time, and are performed by the CPU without
I2C transactions are not real-time, and are performed by the CPU without
involving RTIO.
On the KC705, this chip is used for selecting the I2C buses on the two FMC
@ -176,7 +176,7 @@ class I2CSwitch:
class TCA6424A:
"""Driver for the TCA6424A I2C I/O expander.
I2C transactions not real-time, and are performed by the CPU without
I2C transactions are not real-time, and are performed by the CPU without
involving RTIO.
On the NIST QC2 hardware, this chip is used for switching the directions
@ -212,7 +212,7 @@ class TCA6424A:
class PCF8574A:
"""Driver for the PCF8574 I2C remote 8-bit I/O expander.
I2C transactions not real-time, and are performed by the CPU without
I2C transactions are not real-time, and are performed by the CPU without
involving RTIO.
"""
def __init__(self, dmgr, busno=0, address=0x7c, core_device="core"):

View File

@ -1,4 +1,4 @@
"""RTIO driver for Mirny (4 channel GHz PLLs)
"""RTIO driver for Mirny (4-channel GHz PLLs)
"""
from artiq.language.core import kernel, delay, portable
@ -82,7 +82,7 @@ class Mirny:
@kernel
def read_reg(self, addr):
"""Read a register"""
"""Read a register."""
self.bus.set_config_mu(
SPI_CONFIG | spi.SPI_INPUT | spi.SPI_END, 24, SPIT_RD, SPI_CS
)
@ -91,7 +91,7 @@ class Mirny:
@kernel
def write_reg(self, addr, data):
"""Write a register"""
"""Write a register."""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24, SPIT_WR, SPI_CS)
self.bus.write((addr << 25) | WE | ((data & 0xFFFF) << 8))
@ -101,9 +101,9 @@ class Mirny:
Initialize and detect Mirny.
Select the clock source based on the board's hardware revision.
Raise ValueError if the board's hardware revision is not supported.
Raise :exc:`ValueError` if the board's hardware revision is not supported.
:param blind: Verify presence and protocol compatibility. Raise ValueError on failure.
:param blind: Verify presence and protocol compatibility. Raise :exc:`ValueError` on failure.
"""
reg0 = self.read_reg(0)
self.hw_rev = reg0 & 0x3
@ -138,7 +138,7 @@ class Mirny:
def set_att_mu(self, channel, att):
"""Set digital step attenuator in machine units.
:param att: Attenuation setting, 8 bit digital.
:param att: Attenuation setting, 8-bit digital.
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 16, SPIT_WR, SPI_CS)
self.bus.write(((channel | 8) << 25) | (att << 16))
@ -149,7 +149,7 @@ class Mirny:
This method will write the attenuator settings of the selected channel.
.. seealso:: :meth:`set_att_mu`
See also :meth:`Mirny.set_att_mu`.
:param channel: Attenuator channel (0-3).
:param att: Attenuation setting in dB. Higher value is more
@ -160,7 +160,7 @@ class Mirny:
@kernel
def write_ext(self, addr, length, data, ext_div=SPIT_WR):
"""Perform SPI write to a prefixed address"""
"""Perform SPI write to a prefixed address."""
self.bus.set_config_mu(SPI_CONFIG, 8, SPIT_WR, SPI_CS)
self.bus.write(addr << 25)
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, length, ext_div, SPI_CS)

View File

@ -16,31 +16,31 @@ SPI_CS_SR = 2
@portable
def adc_ctrl(channel=1, softspan=0b111, valid=1):
"""Build a LTC2335-16 control word"""
"""Build a LTC2335-16 control word."""
return (valid << 7) | (channel << 3) | softspan
@portable
def adc_softspan(data):
"""Return the softspan configuration index from a result packet"""
"""Return the softspan configuration index from a result packet."""
return data & 0x7
@portable
def adc_channel(data):
"""Return the channel index from a result packet"""
"""Return the channel index from a result packet."""
return (data >> 3) & 0x7
@portable
def adc_data(data):
"""Return the ADC value from a result packet"""
"""Return the ADC value from a result packet."""
return (data >> 8) & 0xffff
@portable
def adc_value(data, v_ref=5.):
"""Convert a ADC result packet to SI units (Volt)"""
"""Convert a ADC result packet to SI units (volts)."""
softspan = adc_softspan(data)
data = adc_data(data)
g = 625
@ -107,7 +107,7 @@ class Novogorny:
def configure(self, data):
"""Set up the ADC sequencer.
:param data: List of 8 bit control words to write into the sequencer
:param data: List of 8-bit control words to write into the sequencer
table.
"""
if len(data) > 1:
@ -137,12 +137,10 @@ class Novogorny:
@kernel
def sample(self, next_ctrl=0):
"""Acquire a sample
.. seealso:: :meth:`sample_mu`
"""Acquire a sample. See also :meth:`Novogorny.sample_mu`.
:param next_ctrl: ADC control word for the next sample
:return: The ADC result packet (Volt)
:return: The ADC result packet (volts)
"""
return adc_value(self.sample_mu(), self.v_ref)
@ -151,7 +149,7 @@ class Novogorny:
"""Acquire a burst of samples.
If the burst is too long and the sample rate too high, there will be
RTIO input overflows.
:exc:`RTIOOverflow` exceptions.
High sample rates lead to gain errors since the impedance between the
instrumentation amplifier and the ADC is high.

View File

@ -85,7 +85,7 @@ SERVO_T_CYCLE = (32+12+192+24+4)*ns # Must match gateware ADC parameters
class Phaser:
"""Phaser 4-channel, 16-bit, 1 GS/s DAC coredevice driver.
Phaser contains a 4 channel, 1 GS/s DAC chip with integrated upconversion,
Phaser contains a 4-channel, 1 GS/s DAC chip with integrated upconversion,
quadrature modulation compensation and interpolation features.
The coredevice RTIO PHY and the Phaser gateware come in different modes
@ -109,9 +109,9 @@ class Phaser:
**Base mode**
The coredevice produces 2 IQ (in-phase and quadrature) data streams with 25
MS/s and 14 bit per quadrature. Each data stream supports 5 independent
numerically controlled IQ oscillators (NCOs, DDSs with 32 bit frequency, 16
bit phase, 15 bit amplitude, and phase accumulator clear functionality)
MS/s and 14 bits per quadrature. Each data stream supports 5 independent
numerically controlled IQ oscillators (NCOs, DDSs with 32-bit frequency,
16-bit phase, 15-bit amplitude, and phase accumulator clear functionality)
added together. See :class:`PhaserChannel` and :class:`PhaserOscillator`.
Together with a data clock, framing marker, a checksum and metadata for
@ -119,30 +119,28 @@ class Phaser:
FastLink via a single EEM connector from coredevice to Phaser.
On Phaser in the FPGA the data streams are buffered and interpolated
from 25 MS/s to 500 MS/s 16 bit followed by a 500 MS/s digital upconverter
from 25 MS/s to 500 MS/s 16-bit followed by a 500 MS/s digital upconverter
with adjustable frequency and phase. The interpolation passband is 20 MHz
wide, passband ripple is less than 1e-3 amplitude, stopband attenuation
is better than 75 dB at offsets > 15 MHz and better than 90 dB at offsets
> 30 MHz.
The four 16 bit 500 MS/s DAC data streams are sent via a 32 bit parallel
The four 16-bit 500 MS/s DAC data streams are sent via a 32-bit parallel
LVDS bus operating at 1 Gb/s per pin pair and processed in the DAC (Texas
Instruments DAC34H84). On the DAC 2x interpolation, sinx/x compensation,
quadrature modulator compensation, fine and coarse mixing as well as group
delay capabilities are available. If desired, these features may be
configured via the `dac` dictionary.
configured via the ``dac`` dictionary.
The latency/group delay from the RTIO events setting
:class:`PhaserOscillator` or :class:`PhaserChannel` DUC parameters all the
way to the DAC outputs is deterministic. This enables deterministic
absolute phase with respect to other RTIO input and output events
(see `get_next_frame_mu()`).
(see :meth:`get_next_frame_mu()`).
**Miqro mode**
See :class:`Miqro`
Here the DAC operates in 4x interpolation.
See :class:`Miqro`. Here the DAC operates in 4x interpolation.
**Analog flow**
@ -171,7 +169,7 @@ class Phaser:
and Q datastreams from the DUC by the IIR output. The IIR state is updated at
the 3.788 MHz ADC sampling rate.
Each channel IIR features 4 profiles, each consisting of the [b0, b1, a1] filter
Each channel IIR features 4 profiles, each consisting of the ``[b0, b1, a1]`` filter
coefficients as well as an output offset. The coefficients and offset can be
set for each profile individually and the profiles each have their own ``y0``,
``y1`` output registers (the ``x0``, ``x1`` inputs are shared). To avoid
@ -185,25 +183,25 @@ class Phaser:
still ingests samples and updates its input ``x0`` and ``x1`` registers, but
does not update the ``y0``, ``y1`` output registers.
After power-up the servo is disabled, in profile 0, with coefficients [0, 0, 0]
After power-up the servo is disabled, in profile 0, with coefficients ``[0, 0, 0]``
and hold is enabled. If older gateware without the servo is loaded onto the
Phaser FPGA, the device simply behaves as if the servo is disabled and none of
the servo functions have any effect.
.. note:: Various register settings of the DAC and the quadrature
upconverters are available to be modified through the `dac`, `trf0`,
`trf1` dictionaries. These can be set through the device database
(`device_db.py`). The settings are frozen during instantiation of the
class and applied during `init()`. See the :class:`DAC34H84` and
:class:`TRF372017` source for details.
upconverters are available to be modified through the ``dac``, ``trf0``,
``trf1`` dictionaries. These can be set through the device database
(``device_db.py``). The settings are frozen during instantiation of the
class and applied during ``init()``. See the :class:`DAC34H84` and
:class:`TRF372017` source for details.
.. note:: To establish deterministic latency between RTIO time base and DAC
output, the DAC FIFO read pointer value (`fifo_offset`) must be
fixed. If `tune_fifo_offset=True` (the default) a value with maximum
output, the DAC FIFO read pointer value (``fifo_offset``) must be
fixed. If ``tune_fifo_offset=True`` (the default), a value with maximum
margin is determined automatically by `dac_tune_fifo_offset` each time
`init()` is called. This value should be used for the `fifo_offset` key
of the `dac` settings of Phaser in `device_db.py` and automatic
tuning should be disabled by `tune_fifo_offset=False`.
:meth:`init` is called. This value should be used for the ``fifo_offset`` key
of the ``dac`` settings of Phaser in ``device_db.py`` and automatic
tuning should be disabled by setting ``tune_fifo_offset=False``.
:param channel: Base RTIO channel number
:param core_device: Core device name (default: "core")
@ -219,9 +217,9 @@ class Phaser:
:param trf1: Channel 1 TRF372017 quadrature upconverter settings as a
dictionary.
Attributes:
**Attributes:**
* :attr:`channel`: List of two :class:`PhaserChannel`
* :attr:`channel`: List of two instances of :class:`PhaserChannel`
To access oscillators, digital upconverters, PLL/VCO analog
quadrature upconverters and attenuators.
"""
@ -463,8 +461,8 @@ class Phaser:
def write8(self, addr, data):
"""Write data to FPGA register.
:param addr: Address to write to (7 bit)
:param data: Data to write (8 bit)
:param addr: Address to write to (7-bit)
:param data: Data to write (8-bit)
"""
rtio_output((self.channel_base << 8) | (addr & 0x7f) | 0x80, data)
delay_mu(int64(self.t_frame))
@ -473,8 +471,8 @@ class Phaser:
def read8(self, addr) -> TInt32:
"""Read from FPGA register.
:param addr: Address to read from (7 bit)
:return: Data read (8 bit)
:param addr: Address to read from (7-bit)
:return: Data read (8-bit)
"""
rtio_output((self.channel_base << 8) | (addr & 0x7f), 0)
response = rtio_input_data(self.channel_base)
@ -482,13 +480,13 @@ class Phaser:
@kernel
def write16(self, addr, data: TInt32):
"""Write 16 bit to a sequence of FPGA registers."""
"""Write 16 bits to a sequence of FPGA registers."""
self.write8(addr, data >> 8)
self.write8(addr + 1, data)
@kernel
def write32(self, addr, data: TInt32):
"""Write 32 bit to a sequence of FPGA registers."""
"""Write 32 bits to a sequence of FPGA registers."""
for offset in range(4):
byte = data >> 24
self.write8(addr + offset, byte)
@ -496,7 +494,7 @@ class Phaser:
@kernel
def read32(self, addr) -> TInt32:
"""Read 32 bit from a sequence of FPGA registers."""
"""Read 32 bits from a sequence of FPGA registers."""
data = 0
for offset in range(4):
data <<= 8
@ -508,15 +506,15 @@ class Phaser:
def set_leds(self, leds):
"""Set the front panel LEDs.
:param leds: LED settings (6 bit)
:param leds: LED settings (6-bit)
"""
self.write8(PHASER_ADDR_LED, leds)
@kernel
def set_fan_mu(self, pwm):
"""Set the fan duty cycle.
"""Set the fan duty cycle in machine units.
:param pwm: Duty cycle in machine units (8 bit)
:param pwm: Duty cycle in machine units (8-bit)
"""
self.write8(PHASER_ADDR_FAN, pwm)
@ -581,10 +579,10 @@ class Phaser:
@kernel
def measure_frame_timestamp(self):
"""Measure the timestamp of an arbitrary frame and store it in `self.frame_tstamp`.
"""Measure the timestamp of an arbitrary frame and store it in ``self.frame_tstamp``.
To be used as reference for aligning updates to the FastLink frames.
See `get_next_frame_mu()`.
See :meth:`get_next_frame_mu()`.
"""
rtio_output(self.channel_base << 8, 0) # read any register
self.frame_tstamp = rtio_input_timestamp(now_mu() + 4 * self.t_frame, self.channel_base)
@ -592,10 +590,10 @@ class Phaser:
@kernel
def get_next_frame_mu(self):
"""Return the timestamp of the frame strictly after `now_mu()`.
"""Return the timestamp of the frame strictly after :meth:`~artiq.language.core.now_mu()`.
Register updates (DUC, DAC, TRF, etc.) scheduled at this timestamp and multiples
of `self.t_frame` later will have deterministic latency to output.
of ``self.t_frame`` later will have deterministic latency to output.
"""
n = int64((now_mu() - self.frame_tstamp) / self.t_frame)
return self.frame_tstamp + (n + 1) * self.t_frame
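A usage sketch of the frame alignment described above; the DUC frequency is arbitrary and ``duc_stb()`` is assumed to be used to apply the staged settings, as elsewhere in this driver: ::

    from artiq.experiment import kernel, at_mu, delay_mu, MHz

    @kernel
    def frame_aligned_duc_sketch(phaser):
        phaser.measure_frame_timestamp()
        at_mu(phaser.get_next_frame_mu())
        phaser.channel[0].set_duc_frequency(10*MHz)
        delay_mu(phaser.t_frame)
        phaser.duc_stb()    # apply the staged DUC settings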
@ -658,7 +656,7 @@ class Phaser:
@kernel
def dac_write(self, addr, data):
"""Write 16 bit to a DAC register.
"""Write 16 bits to a DAC register.
:param addr: Register address
:param data: Register data to write
@ -708,16 +706,16 @@ class Phaser:
def dac_sync(self):
"""Trigger DAC synchronisation for both output channels.
The DAC sif_sync is de-asserts, then asserted. The synchronisation is
The DAC ``sif_sync`` is de-asserted, then asserted. The synchronisation is
triggered on assertion.
By default, the fine-mixer (NCO) and QMC are synchronised. This
includes applying the latest register settings.
The synchronisation sources may be configured through the `syncsel_x`
fields in the `dac` configuration dictionary (see `__init__()`).
The synchronisation sources may be configured through the ``syncsel_x``
fields in the ``dac`` configuration dictionary (see :class:`Phaser`).
.. note:: Synchronising the NCO clears the phase-accumulator
.. note:: Synchronising the NCO clears the phase-accumulator.
"""
config1f = self.dac_read(0x1f)
delay(.4*ms)
@ -726,11 +724,11 @@ class Phaser:
@kernel
def set_dac_cmix(self, fs_8_step):
"""Set the DAC coarse mixer frequency for both channels
"""Set the DAC coarse mixer frequency for both channels.
Use of the coarse mixer requires the DAC mixer to be enabled. The mixer
can be configured via the `dac` configuration dictionary (see
`__init__()`).
can be configured via the ``dac`` configuration dictionary (see
:class:`Phaser`).
The selected coarse mixer frequency becomes active without explicit
synchronisation.
@ -763,8 +761,8 @@ class Phaser:
def dac_iotest(self, pattern) -> TInt32:
"""Performs a DAC IO test according to the datasheet.
:param pattern: List of four int32 containing the pattern
:return: Bit error mask (16 bits)
:param pattern: List of four int32s containing the pattern
:return: Bit error mask (16-bit)
"""
if len(pattern) != 4:
raise ValueError("pattern length out of bounds")
@ -803,9 +801,9 @@ class Phaser:
@kernel
def dac_tune_fifo_offset(self):
"""Scan through `fifo_offset` and configure midpoint setting.
"""Scan through ``fifo_offset`` and configure midpoint setting.
:return: Optimal `fifo_offset` setting with maximum margin to write
:return: Optimal ``fifo_offset`` setting with maximum margin to write
pointer.
"""
# expect two or three error free offsets:
@ -865,7 +863,7 @@ class PhaserChannel:
Attributes:
* :attr:`oscillator`: List of five :class:`PhaserOscillator`.
* :attr:`oscillator`: List of five instances of :class:`PhaserOscillator`.
* :attr:`miqro`: A :class:`Miqro`.
.. note:: The amplitude sum of the oscillators must be less than one to
@ -879,7 +877,7 @@ class PhaserChannel:
changes in oscillator parameters, the overshoot can lead to clipping
or overflow after the interpolation. Either band-limit any changes
in the oscillator parameters or back off the amplitude sufficiently.
Miqro is not affected by this. But both the oscillators and Miqro can
Miqro is not affected by this, but both the oscillators and Miqro can
be affected by intrinsic overshoot of the interpolator on the DAC.
"""
kernel_invariants = {"index", "phaser", "trf_mmap"}
@ -899,7 +897,7 @@ class PhaserChannel:
The data is split across multiple registers and thus the data
is only valid if constant.
:return: DAC data as 32 bit IQ. I/DACA/DACC in the 16 LSB,
:return: DAC data as 32-bit IQ. I/DACA/DACC in the 16 LSB,
Q/DACB/DACD in the 16 MSB
"""
return self.phaser.read32(PHASER_ADDR_DAC0_DATA + (self.index << 4))
@ -908,7 +906,7 @@ class PhaserChannel:
def set_dac_test(self, data: TInt32):
"""Set the DAC test data.
:param data: 32 bit IQ test data, I/DACA/DACC in the 16 LSB,
:param data: 32-bit IQ test data, I/DACA/DACC in the 16 LSB,
Q/DACB/DACD in the 16 MSB
"""
self.phaser.write32(PHASER_ADDR_DAC0_TEST + (self.index << 4), data)
@ -930,7 +928,7 @@ class PhaserChannel:
def set_duc_frequency_mu(self, ftw):
"""Set the DUC frequency.
:param ftw: DUC frequency tuning word (32 bit)
:param ftw: DUC frequency tuning word (32-bit)
"""
self.phaser.write32(PHASER_ADDR_DUC0_F + (self.index << 4), ftw)
@ -948,7 +946,7 @@ class PhaserChannel:
def set_duc_phase_mu(self, pow):
"""Set the DUC phase offset.
:param pow: DUC phase offset word (16 bit)
:param pow: DUC phase offset word (16-bit)
"""
addr = PHASER_ADDR_DUC0_P + (self.index << 4)
self.phaser.write8(addr, pow >> 8)
@ -970,10 +968,10 @@ class PhaserChannel:
This method stages the new NCO frequency, but does not apply it.
Use of the DAC-NCO requires the DAC mixer and NCO to be enabled. These
can be configured via the `dac` configuration dictionary (see
`__init__()`).
can be configured via the ``dac`` configuration dictionary (see
:class:`Phaser`).
:param ftw: NCO frequency tuning word (32 bit)
:param ftw: NCO frequency tuning word (32-bit)
"""
self.phaser.dac_write(0x15 + (self.index << 1), ftw >> 16)
self.phaser.dac_write(0x14 + (self.index << 1), ftw)
@ -985,8 +983,8 @@ class PhaserChannel:
This method stages the new NCO frequency, but does not apply it.
Use of the DAC-NCO requires the DAC mixer and NCO to be enabled. These
can be configured via the `dac` configuration dictionary (see
`__init__()`).
can be configured via the ``dac`` configuration dictionary (see
:class:`Phaser`).
:param frequency: NCO frequency in Hz (passband from -400 MHz
to 400 MHz, wrapping around at +- 500 MHz)
@ -1001,14 +999,13 @@ class PhaserChannel:
By default, the new NCO phase applies on completion of the SPI
transfer. This also causes a staged NCO frequency to be applied.
Different triggers for applying NCO settings may be configured through
the `syncsel_mixerxx` fields in the `dac` configuration dictionary (see
`__init__()`).
the ``syncsel_mixerxx`` fields in the ``dac`` configuration dictionary (see
:class:`Phaser`).
Use of the DAC-NCO requires the DAC mixer and NCO to be enabled. These
can be configured via the `dac` configuration dictionary (see
`__init__()`).
can be configured via the ``dac`` configuration dictionary.
:param pow: NCO phase offset word (16 bit)
:param pow: NCO phase offset word (16-bit)
"""
self.phaser.dac_write(0x12 + self.index, pow)
@ -1019,12 +1016,11 @@ class PhaserChannel:
By default, the new NCO phase applies on completion of the SPI
transfer. This also causes a staged NCO frequency to be applied.
Different triggers for applying NCO settings may be configured through
the `syncsel_mixerxx` fields in the `dac` configuration dictionary (see
`__init__()`).
the ``syncsel_mixerxx`` fields in the ``dac`` configuration dictionary (see
:class:`Phaser`).
Use of the DAC-NCO requires the DAC mixer and NCO to be enabled. These
can be configured via the `dac` configuration dictionary (see
`__init__()`).
can be configured via the ``dac`` configuration dictionary.
:param phase: NCO phase in turns
"""
@ -1035,7 +1031,7 @@ class PhaserChannel:
def set_att_mu(self, data):
"""Set channel attenuation.
:param data: Attenuator data in machine units (8 bit)
:param data: Attenuator data in machine units (8-bit)
"""
div = 34 # 30 ns min period
t_xfer = self.phaser.core.seconds_to_mu((8 + 1)*div*4*ns)
@ -1082,7 +1078,7 @@ class PhaserChannel:
def trf_write(self, data, readback=False):
"""Write 32 bits to quadrature upconverter register.
:param data: Register data (32 bit) containing encoded address
:param data: Register data (32-bit) containing encoded address
:param readback: Whether to return the read back MISO data
"""
div = 34 # 50 ns min period
@ -1114,7 +1110,7 @@ class PhaserChannel:
:param addr: Register address to read (0 to 7)
:param cnt_mux_sel: Report VCO counter min or max frequency
:return: Register data (32 bit)
:return: Register data (32-bit)
"""
self.trf_write(0x80000008 | (addr << 28) | (cnt_mux_sel << 27))
# single clk pulse with ~LE to start readback
@ -1189,13 +1185,13 @@ class PhaserChannel:
* :math:`b_0` and :math:`b_1` are the feedforward gains for the two
delays
.. seealso:: :meth:`set_iir`
See also :meth:`PhaserChannel.set_iir`.
:param profile: Profile to set (0 to 3)
:param b0: b0 filter coefficient (16 bit signed)
:param b1: b1 filter coefficient (16 bit signed)
:param a1: a1 filter coefficient (16 bit signed)
:param offset: Output offset (16 bit signed)
:param b0: b0 filter coefficient (16-bit signed)
:param b1: b1 filter coefficient (16-bit signed)
:param a1: a1 filter coefficient (16-bit signed)
:param offset: Output offset (16-bit signed)
"""
if (profile < 0) or (profile > 3):
raise ValueError("invalid profile index")
@ -1240,7 +1236,7 @@ class PhaserChannel:
integrator gain limit is infinite. Same sign as ``ki``.
:param x_offset: IIR input offset. Used as the negative
setpoint when stabilizing to a desired input setpoint. Will
be converted to an equivalent output offset and added to y_offset.
be converted to an equivalent output offset and added to ``y_offset``.
:param y_offset: IIR output offset.
"""
NORM = 1 << SERVO_COEFF_SHIFT
@ -1296,7 +1292,7 @@ class PhaserOscillator:
def set_frequency_mu(self, ftw):
"""Set Phaser MultiDDS frequency tuning word.
:param ftw: Frequency tuning word (32 bit)
:param ftw: Frequency tuning word (32-bit)
"""
rtio_output(self.base_addr, ftw)
@ -1314,8 +1310,8 @@ class PhaserOscillator:
def set_amplitude_phase_mu(self, asf=0x7fff, pow=0, clr=0):
"""Set Phaser MultiDDS amplitude, phase offset and accumulator clear.
:param asf: Amplitude (15 bit)
:param pow: Phase offset word (16 bit)
:param asf: Amplitude (15-bit)
:param pow: Phase offset word (16-bit)
:param clr: Clear the phase accumulator (persistent)
"""
data = (asf & 0x7fff) | ((clr & 1) << 15) | (pow << 16)
@ -1346,38 +1342,42 @@ class Miqro:
**Oscillators**
* There are n_osc = 16 oscillators with oscillator IDs 0..n_osc-1.
* There are ``n_osc = 16`` oscillators with oscillator IDs ``0`` to ``n_osc-1``.
* Each oscillator outputs one tone at any given time
* I/Q (quadrature, a.k.a. complex) 2x16 bit signed data
* I/Q (quadrature, a.k.a. complex) 2x16-bit signed data
at tau = 4 ns sample intervals, 250 MS/s, Nyquist 125 MHz, bandwidth 200 MHz
(from f = -100..+100 MHz, taking into account the interpolation anti-aliasing
filters in subsequent interpolators),
* 32 bit frequency (f) resolution (~ 1/16 Hz),
* 16 bit unsigned amplitude (a) resolution
* 16 bit phase offset (p) resolution
* 32-bit frequency (f) resolution (~ 1/16 Hz),
* 16-bit unsigned amplitude (a) resolution
* 16-bit phase offset (p) resolution
* The output phase p' of each oscillator at time t (boot/reset/initialization of the
device at t=0) is then p' = f*t + p (mod 1 turn) where f and p are the (currently
* The output phase ``p'`` of each oscillator at time ``t`` (boot/reset/initialization of the
device at ``t=0``) is then ``p' = f*t + p (mod 1 turn)`` where ``f`` and ``p`` are the (currently
active) profile frequency and phase offset.
* Note: The terms "phase coherent" and "phase tracking" are defined to refer to this
choice of oscillator output phase p'. Note that the phase offset p is not relative to
(on top of previous phase/profiles/oscillator history).
It is "absolute" in the sense that frequency f and phase offset p fully determine
oscillator output phase p' at time t. This is unlike typical DDS behavior.
.. note ::
The terms "phase coherent" and "phase tracking" are defined to refer to this
choice of oscillator output phase ``p'``. Note that the phase offset ``p`` is not relative to
(not on top of) the previous phase/profile/oscillator history.
It is "absolute" in the sense that frequency ``f`` and phase offset ``p`` fully determine
oscillator output phase ``p'`` at time ``t``. This is unlike typical DDS behavior.
* Frequency, phase, and amplitude of each oscillator are configurable by selecting one of
n_profile = 32 profiles 0..n_profile-1. This selection is fast and can be done for
each pulse. The phase coherence defined above is guaranteed for each
``n_profile = 32`` profiles ``0`` to ``n_profile-1``. This selection is fast and can be
done for each pulse. The phase coherence defined above is guaranteed for each
profile individually.
* Note: one profile per oscillator (usually profile index 0) should be reserved
for the NOP (no operation, identity) profile, usually with zero amplitude.
* Data for each profile for each oscillator can be configured
individually. Storing profile data should be considered "expensive".
* Note: The annotation that some operation is "expensive" does not mean it is
impossible, just that it may take a significant amount of time and
resources to execute such that it may be impractical when used often or
during fast pulse sequences. They are intended for use in calibration and
initialization.
.. note::
To refer to an operation as "expensive" does not mean it is impossible,
merely that it may take a significant amount of time and resources to
execute, such that it may be impractical when used often or during fast
pulse sequences. They are intended for use in calibration and initialization.
**Summation**
@ -1394,18 +1394,18 @@ class Miqro:
the RF output.
* Selected profiles become active simultaneously (on the same output sample) when
triggering the shaper with the first shaper output sample.
* The shaper reads (replays) window samples from a memory of size n_window = 1 << 10.
* The shaper reads (replays) window samples from a memory of size ``n_window = 1 << 10``.
* The window memory can be segmented by choosing different start indices
to support different windows.
* Each window memory segment starts with a header determining segment
length and interpolation parameters.
* The window samples are interpolated by a factor (rate change) between 1 and
r = 1 << 12.
``r = 1 << 12``.
* The interpolation order is constant, linear, quadratic, or cubic. This
corresponds to interpolation modes from rectangular window (1st order CIC)
or zero order hold) to Parzen window (4th order CIC or cubic spline).
* This results in support for single shot pulse lengths (envelope support) between
tau and a bit more than r * n_window * tau = (1 << 12 + 10) tau ~ 17 ms.
tau and a bit more than ``r * n_window * tau = (1 << 12 + 10) tau ~ 17 ms``.
* Windows can be configured to be head-less and/or tail-less, meaning, they
do not feed zero-amplitude samples into the shaper before and after
each window respectively. This is used to implement pulses with arbitrary
@ -1413,18 +1413,18 @@ class Miqro:
**Overall properties**
* The DAC may upconvert the signal by applying a frequency offset f1 with
phase p1.
* The DAC may upconvert the signal by applying a frequency offset ``f1`` with
phase ``p1``.
* In the Upconverter Phaser variant, the analog quadrature upconverter
applies another frequency of f2 and phase p2.
applies another frequency of ``f2`` and phase ``p2``.
* The resulting phase of the signal from one oscillator at the SMA output is
(f + f1 + f2)*t + p + s(t - t0) + p1 + p2 (mod 1 turn)
where s(t - t0) is the phase of the interpolated
shaper output, and t0 is the trigger time (fiducial of the shaper).
``(f + f1 + f2)*t + p + s(t - t0) + p1 + p2 (mod 1 turn)``
where ``s(t - t0)`` is the phase of the interpolated
shaper output, and ``t0`` is the trigger time (fiducial of the shaper).
Unsurprisingly the frequency is the derivative of the phase.
* Group delays between pulse parameter updates are matched across oscillators,
shapers, and channels.
* The minimum time to change profiles and phase offsets is ~128 ns (estimate, TBC).
* The minimum time to change profiles and phase offsets is ``~128 ns`` (estimate, TBC).
This is the minimum pulse interval.
The sustained pulse rate of the RTIO PHY/Fastlink is one pulse per Fastlink frame
(may be increased, TBC).
@ -1455,9 +1455,9 @@ class Miqro:
:param oscillator: Oscillator index (0 to 15)
:param profile: Profile index (0 to 31)
:param ftw: Frequency tuning word (32 bit signed integer on a 250 MHz clock)
:param asf: Amplitude scale factor (16 bit unsigned integer)
:param pow_: Phase offset word (16 bit integer)
:param ftw: Frequency tuning word (32-bit signed integer on a 250 MHz clock)
:param asf: Amplitude scale factor (16-bit unsigned integer)
:param pow_: Phase offset word (16-bit integer)
"""
if oscillator >= 16:
raise ValueError("invalid oscillator index")
@ -1481,7 +1481,7 @@ class Miqro:
:param amplitude: Amplitude in units of full scale (0. to 1.)
:param phase: Phase in turns. See :class:`Miqro` for a definition of
phase in this context.
:return: The quantized 32 bit frequency tuning word
:return: The quantized 32-bit frequency tuning word
"""
ftw = int32(round(frequency*((1 << 30)/(62.5*MHz))))
asf = int32(round(amplitude*0xffff))
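The quantization above can be checked on the host directly: ::

    from numpy import int32
    from artiq.language.units import MHz

    frequency, amplitude = 10*MHz, 0.5
    ftw = int32(round(frequency*((1 << 30)/(62.5*MHz))))    # 171798692
    asf = int32(round(amplitude*0xffff))                    # 32768 == 0x8000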
@ -1493,7 +1493,7 @@ class Miqro:
@kernel
def set_window_mu(self, start, iq, rate=1, shift=0, order=3, head=1, tail=1):
"""Store a window segment (machine units)
"""Store a window segment (machine units).
:param start: Window start address (0 to 0x3ff)
:param iq: List of IQ window samples. Each window sample is an integer
@ -1540,7 +1540,7 @@ class Miqro:
@kernel
def set_window(self, start, iq, period=4*ns, order=3, head=1, tail=1):
"""Store a window segment
"""Store a window segment.
:param start: Window start address (0 to 0x3ff)
:param iq: List of IQ window samples. Each window sample is a pair of
@ -1577,7 +1577,7 @@ class Miqro:
@kernel
def encode(self, window, profiles, data):
"""Encode window and profile selection
"""Encode window and profile selection.
:param window: Window start address (0 to 0x3ff)
:param profiles: List of profile indices for the oscillators. Maximum

View File

@ -25,7 +25,7 @@ def rtio_input_data(channel: TInt32) -> TInt32:
@syscall(flags={"nowrite"})
def rtio_input_timestamped_data(timeout_mu: TInt64,
channel: TInt32) -> TTuple([TInt64, TInt32]):
"""Wait for an input event up to timeout_mu on the given channel, and
"""Wait for an input event up to ``timeout_mu`` on the given channel, and
return a tuple of timestamp and attached data, or (-1, 0) if the timeout is
reached."""
raise NotImplementedError("syscall not simulated")
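A kernel fragment sketching the timeout convention described above (the channel number, deadline and function name are illustrative): ::

    from artiq.experiment import kernel, now_mu
    from artiq.coredevice.rtio import rtio_input_timestamped_data

    @kernel
    def read_with_timeout_sketch(channel):
        deadline = now_mu() + 100000        # ~100 us at 1 ns per machine unit
        timestamp, data = rtio_input_timestamped_data(deadline, channel)
        # (-1, 0) is returned if no event arrives before the deadline
        return timestamp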

View File

@ -16,13 +16,13 @@ SPI_CS_PGIA = 1 # separate SPI bus, CS used as RCLK
@portable
def adc_mu_to_volt(data, gain=0, corrected_fs=True):
"""Convert ADC data in machine units to Volts.
"""Convert ADC data in machine units to volts.
:param data: 16 bit signed ADC word
:param data: 16-bit signed ADC word
:param gain: PGIA gain setting (0: 1, ..., 3: 1000)
:param corrected_fs: use corrected ADC FS reference.
Should be True for Samplers' revisions after v2.1. False for v2.1 and earlier.
:return: Voltage in Volts
Should be ``True`` for Sampler revisions after v2.1. ``False`` for v2.1 and earlier.
:return: Voltage in volts
"""
if gain == 0:
volt_per_lsb = 20.48 / (1 << 16) if corrected_fs else 20. / (1 << 16)
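A worked example of this scaling (gain 0, i.e. PGIA gain 1, corrected full scale): ::

    data = 0x4000                        # 16-bit signed ADC word
    volt_per_lsb = 20.48 / (1 << 16)     # corrected_fs=True, gain 0
    print(data * volt_per_lsb)           # 5.12 V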
@ -40,7 +40,7 @@ def adc_mu_to_volt(data, gain=0, corrected_fs=True):
class Sampler:
"""Sampler ADC.
Controls the LTC2320-16 8 channel 16 bit ADC with SPI interface and
Controls the LTC2320-16 8-channel 16-bit ADC with SPI interface and
the switchable gain instrumentation amplifiers.
:param spi_adc_device: ADC SPI bus device name
@ -119,12 +119,12 @@ class Sampler:
Perform a conversion and transfer the samples.
This assumes that the input FIFO of the ADC SPI RTIO channel is deep
enough to buffer the samples (half the length of `data` deep).
enough to buffer the samples (half the length of ``data`` deep).
If it is not, there will be RTIO input overflows.
:param data: List of data samples to fill. Must have even length.
Samples are always read from the last channel (channel 7) down.
The `data` list will always be filled with the last item
The ``data`` list will always be filled with the last item
holding the sample from channel 7.
"""
self.cnv.pulse(30*ns) # t_CNVH
@ -142,7 +142,7 @@ class Sampler:
def sample(self, data):
"""Acquire a set of samples.
.. seealso:: :meth:`sample_mu`
See also :meth:`Sampler.sample_mu`.
:param data: List of floating point data samples to fill.
"""

View File

@ -16,8 +16,8 @@ def shuttler_volt_to_mu(volt):
class Config:
"""Shuttler configuration registers interface.
The configuration registers control waveform phase auto-clear, and pre-DAC
gain & offset values for calibration with ADC on the Shuttler AFE card.
The configuration registers control waveform phase auto-clear, pre-DAC
gain and offset values for calibration with ADC on the Shuttler AFE card.
To find the calibrated DAC code, the Shuttler Core first multiplies the
output data with pre-DAC gain, then adds the offset.
@ -84,8 +84,7 @@ class Config:
def set_offset(self, channel, offset):
Set the 16-bit pre-DAC offset register of a Shuttler Core channel.
.. seealso::
:meth:`shuttler_volt_to_mu`
See also :meth:`shuttler_volt_to_mu`.
:param channel: Shuttler Core channel to be configured.
:param offset: Shuttler Core channel offset.
@ -114,13 +113,13 @@ class DCBias:
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
and `t` corresponds to time in seconds.
This class controls the cubic spline `a(t)`, in which
.. math::
a(t) = p_0 + p_1t + \\frac{p_2t^2}{2} + \\frac{p_3t^3}{6}
And `a(t)` is in Volt.
and `a(t)` is measured in volts.
:param channel: RTIO channel number of this DC-bias spline interface.
:param core_device: Core device name.
@ -137,7 +136,7 @@ class DCBias:
"""Set the DC-bias spline waveform.
Given `a(t)` as defined in :class:`DCBias`, the coefficients should be
configured by the following formulae.
configured by the following formulae:
.. math::
T &= 8*10^{-9}
@ -154,8 +153,10 @@ class DCBias:
and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
machine unit conversion.
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
.. note::
The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering
mechanism.
:param a0: The :math:`a_0` coefficient in machine unit.
:param a1: The :math:`a_1` coefficient in machine unit.
@ -189,7 +190,7 @@ class DDS:
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
and `t` corresponds to time in seconds.
This class controls the cubic spline `b(t)` and quadratic spline `c(t)`,
in which
@ -198,7 +199,7 @@ class DDS:
c(t) &= r_0 + r_1t + \\frac{r_2t^2}{2}
And `b(t)` is in Volt, `c(t)` is in number of turns. Note that `b(t)`
`b(t)` is in volts, `c(t)` is in number of turns. Note that `b(t)`
contributes to a constant gain of :math:`g=1.64676`.
:param channel: RTIO channel number of this DC-bias spline interface.
@ -244,13 +245,13 @@ class DDS:
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
:param b0: The :math:`b_0` coefficient in machine unit.
:param b1: The :math:`b_1` coefficient in machine unit.
:param b2: The :math:`b_2` coefficient in machine unit.
:param b3: The :math:`b_3` coefficient in machine unit.
:param c0: The :math:`c_0` coefficient in machine unit.
:param c1: The :math:`c_1` coefficient in machine unit.
:param c2: The :math:`c_2` coefficient in machine unit.
:param b0: The :math:`b_0` coefficient in machine units.
:param b1: The :math:`b_1` coefficient in machine units.
:param b2: The :math:`b_2` coefficient in machine units.
:param b3: The :math:`b_3` coefficient in machine units.
:param c0: The :math:`c_0` coefficient in machine units.
:param c1: The :math:`c_1` coefficient in machine units.
:param c2: The :math:`c_2` coefficient in machine units.
"""
coef_words = [
b0,
@ -292,8 +293,8 @@ class Trigger:
"""Triggers coefficient update of (a) Shuttler Core channel(s).
Each bit corresponds to a Shuttler waveform generator core. Setting
`trig_out` bits commits the pending coefficient update (from
`set_waveform` in :class:`DCBias` and :class:`DDS`) to the Shuttler Core
``trig_out`` bits commits the pending coefficient update (from
``set_waveform`` in :class:`DCBias` and :class:`DDS`) to the Shuttler Core
synchronously.
:param trig_out: Coefficient update trigger bits. The MSB corresponds
@ -336,8 +337,8 @@ _AD4115_REG_SETUPCON0 = 0x20
class Relay:
"""Shuttler AFE relay switches.
It controls the AFE relay switches and the LEDs. Switch on the relay to
enable AFE output; And off to disable the output. The LEDs indicates the
This class controls the AFE relay switches and the LEDs. Switch the relay on to
enable AFE output; off to disable the output. The LEDs indicate the
relay status.
.. note::
@ -357,7 +358,7 @@ class Relay:
def init(self):
"""Initialize SPI device.
Configures the SPI bus to 16-bits, write-only, simultaneous relay
Configures the SPI bus to 16 bits, write-only, simultaneous relay
switches and LED control.
"""
self.bus.set_config_mu(
@ -365,10 +366,10 @@ class Relay:
@kernel
def enable(self, en: TInt32):
"""Enable/Disable relay switches of corresponding channels.
"""Enable/disable relay switches of corresponding channels.
Each bit corresponds to the relay switch of a channel. Asserting a bit
turns on the corresponding relay switch; Deasserting the same bit
turns on the corresponding relay switch; deasserting the same bit
turns off the switch instead.
:param en: Switch enable bits. The MSB corresponds to Channel 15, LSB
@ -403,12 +404,12 @@ class ADC:
def reset(self):
"""AD4115 reset procedure.
This performs a write operation of 96 serial clock cycles with DIN
held at high. It resets the entire device, including the register
Performs a write operation of 96 serial clock cycles with DIN
held at high. This resets the entire device, including the register
contents.
.. note::
The datasheet only requires 64 cycles, but reasserting `CS_n` right
The datasheet only requires 64 cycles, but reasserting ``CS_n`` right
after the transfer appears to interrupt the start-up sequence.
"""
self.bus.set_config_mu(ADC_SPI_CONFIG, 32, SPIT_ADC_WR, CS_ADC)
@ -420,7 +421,7 @@ class ADC:
@kernel
def read8(self, addr: TInt32) -> TInt32:
"""Read from 8 bit register.
"""Read from 8-bit register.
:param addr: Register address.
:return: Read-back register content.
@ -433,7 +434,7 @@ class ADC:
@kernel
def read16(self, addr: TInt32) -> TInt32:
"""Read from 16 bit register.
"""Read from 16-bit register.
:param addr: Register address.
:return: Read-back register content.
@ -446,7 +447,7 @@ class ADC:
@kernel
def read24(self, addr: TInt32) -> TInt32:
"""Read from 24 bit register.
"""Read from 24-bit register.
:param addr: Register address.
:return: Read-back register content.
@ -459,7 +460,7 @@ class ADC:
@kernel
def write8(self, addr: TInt32, data: TInt32):
"""Write to 8 bit register.
"""Write to 8-bit register.
:param addr: Register address.
:param data: Data to be written.
@ -470,7 +471,7 @@ class ADC:
@kernel
def write16(self, addr: TInt32, data: TInt32):
"""Write to 16 bit register.
"""Write to 16-bit register.
:param addr: Register address.
:param data: Data to be written.
@ -481,7 +482,7 @@ class ADC:
@kernel
def write24(self, addr: TInt32, data: TInt32):
"""Write to 24 bit register.
"""Write to 24-bit register.
:param addr: Register address.
:param data: Data to be written.
@ -494,11 +495,11 @@ class ADC:
def read_ch(self, channel: TInt32) -> TFloat:
"""Sample a Shuttler channel on the AFE.
It performs a single conversion using profile 0 and setup 0, on the
selected channel. The sample is then recovered and converted to volt.
Performs a single conversion using profile 0 and setup 0 on the
selected channel. The sample is then recovered and converted to volts.
:param channel: Shuttler channel to be sampled.
:return: Voltage sample in volt.
:return: Voltage sample in volts.
"""
# Always configure Profile 0 for single conversion
self.write16(_AD4115_REG_CH0, 0x8000 | ((channel * 2 + 1) << 4))
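A usage sketch, assuming a ``shuttler0_adc`` placeholder device and an AFE output that is already switched on: ::

    volt = self.shuttler0_adc.read_ch(0)   # profile 0 / setup 0 single conversion
    # volt now holds the sampled output of Shuttler channel 0, in volts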
@ -519,7 +520,7 @@ class ADC:
@kernel
def standby(self):
"""Place the ADC in standby mode and disables power down the clock.
"""Place the ADC in standby mode and disable power down the clock.
The ADC can be returned to single conversion mode by calling
:meth:`single_conversion`.
@ -536,13 +537,7 @@ class ADC:
.. note::
The AD4115 datasheet suggests placing the ADC in standby mode
before power-down. This is to prevent accidental entry into the
power-down mode.
.. seealso::
:meth:`standby`
:meth:`power_up`
power-down mode. See also :meth:`standby` and :meth:`power_up`.
"""
self.write16(_AD4115_REG_ADCMODE, 0x8030)
@ -552,8 +547,7 @@ class ADC:
The ADC should be in power-down mode before calling this method.
.. seealso::
:meth:`power_down`
See also :meth:`power_down`.
"""
self.reset()
# Although the datasheet claims 500 us reset wait time, only waiting
@ -564,22 +558,18 @@ class ADC:
def calibrate(self, volts, trigger, config, samples=[-5.0, 0.0, 5.0]):
"""Calibrate the Shuttler waveform generator using the ADC on the AFE.
It finds the average slope rate and average offset by samples, and
compensate by writing the pre-DAC gain and offset registers in the
Finds the average slope rate and average offset by samples, and
compensates by writing the pre-DAC gain and offset registers in the
configuration registers.
.. note::
If the pre-calibration slope rate < 1, the calibration procedure
will introduce a pre-DAC gain compensation. However, this may
saturate the pre-DAC voltage code. (See :class:`Config` notes).
If the pre-calibration slope rate is less than 1, the calibration
procedure will introduce a pre-DAC gain compensation. However, this
may saturate the pre-DAC voltage code (see :class:`Config` notes).
Shuttler cannot cover the entire +/- 10 V range in this case.
See also :meth:`Config.set_gain` and :meth:`Config.set_offset`.
.. seealso::
:meth:`Config.set_gain`
:meth:`Config.set_offset`
:param volts: A list of all 16 cubic DC-bias spline.
:param volts: A list of all 16 cubic DC-bias splines.
(See :class:`DCBias`)
:param trigger: The Shuttler spline coefficient update trigger.
:param config: The Shuttler Core configuration registers.
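A hedged sketch of a calibration call, assuming the sixteen :class:`DCBias` devices were gathered into a list (here ``self.dcbias``, a placeholder) in ``build()``: ::

    # samples each channel at -5 V, 0 V and +5 V by default and writes the
    # resulting pre-DAC gain/offset compensation into the Config registers
    self.shuttler0_adc.calibrate(self.dcbias, self.shuttler0_trigger, self.shuttler0_config)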

View File

@ -4,7 +4,7 @@ Driver for generic SPI on RTIO.
This ARTIQ coredevice driver corresponds to the "new" MiSoC SPI core (v2).
Output event replacement is not supported and issuing commands at the same
time is an error.
time results in collision errors.
"""
from artiq.language.core import syscall, kernel, portable, delay_mu
@ -51,7 +51,7 @@ class SPIMaster:
event (``SPI_INPUT`` set), then :meth:`read` the ``data``.
* If ``SPI_END`` was not set, repeat the transfer sequence.
A **transaction** consists of one or more **transfers**. The chip select
A *transaction* consists of one or more *transfers*. The chip select
pattern is asserted for the entire length of the transaction. All but the
last transfer are submitted with ``SPI_END`` cleared in the configuration
register.
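An illustrative kernel fragment (bus name, chip select and data are placeholders; ``SPI_END`` is the flag from :mod:`artiq.coredevice.spi2`): a two-transfer transaction keeps ``SPI_END`` cleared on the first transfer so that chip select stays asserted until the last one: ::

    self.spi.set_config(0, 32, 10*MHz, 1)        # first transfer: SPI_END cleared
    self.spi.write(0x12345678)
    self.spi.set_config(SPI_END, 32, 10*MHz, 1)  # last transfer: SPI_END set
    self.spi.write(0x9abcdef0)                   # chip select deasserts after this one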
@ -138,10 +138,10 @@ class SPIMaster:
* :const:`SPI_LSB_FIRST`: LSB is the first bit on the wire (reset=0)
* :const:`SPI_HALF_DUPLEX`: 3-wire SPI, in/out on ``mosi`` (reset=0)
:param flags: A bit map of `SPI_*` flags.
:param flags: A bit map of :const:`SPI_*` flags.
:param length: Number of bits to write during the next transfer.
(reset=1)
:param freq: Desired SPI clock frequency. (reset=f_rtio/2)
:param freq: Desired SPI clock frequency. (reset= ``f_rtio/2``)
:param cs: Bit pattern of chip selects to assert.
Or number of the chip select to assert if ``cs`` is decoded
downstream. (reset=0)
@ -152,16 +152,15 @@ class SPIMaster:
def set_config_mu(self, flags, length, div, cs):
"""Set the ``config`` register (in SPI bus machine units).
.. seealso:: :meth:`set_config`
See also :meth:`set_config`.
:param flags: A bit map of `SPI_*` flags.
:param length: Number of bits to write during the next transfer.
(reset=1)
:param div: Counter load value to divide the RTIO
clock by to generate the SPI clock. (minimum=2, reset=2)
``f_rtio_clk/f_spi == div``. If ``div`` is odd,
the setup phase of the SPI clock is one coarse RTIO clock cycle
longer than the hold phase.
clock by to generate the SPI clock; ``f_rtio_clk/f_spi == div``.
If ``div`` is odd, the setup phase of the SPI clock is one
coarse RTIO clock cycle longer than the hold phase. (minimum=2, reset=2)
:param cs: Bit pattern of chip selects to assert.
Or number of the chip select to assert if ``cs`` is decoded
downstream. (reset=0)
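As a worked machine-unit example (numbers purely illustrative): with a 125 MHz coarse RTIO clock, ``div=10`` gives a 12.5 MHz SPI clock, and a single 32-bit transfer then occupies ``(32 + 1) * 10 = 330`` RTIO clock cycles of :attr:`xfer_duration_mu`: ::

    self.spi.set_config_mu(SPI_END, 32, 10, 1)   # f_spi = f_rtio_clk / 10
    self.spi.write(0xdeadbeef)                   # one single-transfer transaction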
@ -188,7 +187,7 @@ class SPIMaster:
experiments and are known.
This method is portable and can also be called from e.g.
:meth:`__init__`.
``__init__``.
.. warning:: If this method is called while recording a DMA
sequence, the playback of the sequence will not update the
@ -208,7 +207,7 @@ class SPIMaster:
* The ``data`` register and the shift register are 32 bits wide.
* Data writes take one ``ref_period`` cycle.
* A transaction consisting of a single transfer (``SPI_END``) takes
:attr:`xfer_duration_mu` ``=(n + 1)*div`` cycles RTIO time where
:attr:`xfer_duration_mu` ``= (n + 1) * div`` cycles RTIO time, where
``n`` is the number of bits and ``div`` is the SPI clock divider.
* Transfers in a multi-transfer transaction take up to one SPI clock
cycle less time depending on multiple parameters. Advanced users may

View File

@ -24,7 +24,7 @@ def y_mu_to_full_scale(y):
@portable
def adc_mu_to_volts(x, gain, corrected_fs=True):
"""Convert servo ADC data from machine units to Volt."""
"""Convert servo ADC data from machine units to volts."""
val = (x >> 1) & 0xffff
mask = 1 << 15
val = -(val & mask) + (val & ~mask)
@ -155,7 +155,7 @@ class SUServo:
This method advances the timeline by one servo memory access.
It does not support RTIO event replacement.
:param enable (int): Enable servo operation. Enabling starts servo
:param int enable: Enable servo operation. Enabling starts servo
iterations beginning with the ADC sampling stage. The first DDS
update will happen about two servo cycles (~2.3 µs) after enabling
the servo. The delay is deterministic.
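A minimal enable sequence, with ``suservo0`` as a placeholder device name: ::

    self.suservo0.init()                  # initializes servo, Sampler and Urukuls
    self.suservo0.set_config(enable=1)    # first DDS update ~2 servo cycles later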
@ -198,7 +198,7 @@ class SUServo:
consistent and valid data, stop the servo before using this method.
:param adc: ADC channel number (0-7)
:return: 17 bit signed X0
:return: 17-bit signed X0
"""
# State memory entries are 25 bits. Due to the pre-adder dynamic
# range, X0/X1/OFFSET are only 24 bits. Finally, the RTIO interface
@ -288,12 +288,12 @@ class Channel:
def set_dds_mu(self, profile, ftw, offs, pow_=0):
"""Set profile DDS coefficients in machine units.
.. seealso:: :meth:`set_amplitude`
See also :meth:`Channel.set_dds`.
:param profile: Profile number (0-31)
:param ftw: Frequency tuning word (32 bit unsigned)
:param offs: IIR offset (17 bit signed)
:param pow_: Phase offset word (16 bit)
:param ftw: Frequency tuning word (32-bit unsigned)
:param offs: IIR offset (17-bit signed)
:param pow_: Phase offset word (16-bit)
"""
base = (self.servo_channel << 8) | (profile << 3)
self.servo.write(base + 0, ftw >> 16)
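For reference, the SI-unit counterpart :meth:`Channel.set_dds` performs these conversions internally; a sketch with placeholder names and values (``offset`` given in full-scale units, as the negative of the ADC setpoint): ::

    self.suservo0_ch0.set_dds(profile=0, frequency=100*MHz, offset=-0.3)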
@ -327,7 +327,7 @@ class Channel:
See :meth:`set_dds_mu` for setting the complete DDS profile.
:param profile: Profile number (0-31)
:param offs: IIR offset (17 bit signed)
:param offs: IIR offset (17-bit signed)
"""
base = (self.servo_channel << 8) | (profile << 3)
self.servo.write(base + 4, offs)
@ -375,15 +375,15 @@ class Channel:
* :math:`b_0` and :math:`b_1` are the feedforward gains for the two
delays
.. seealso:: :meth:`set_iir`
See also :meth:`Channel.set_iir`.
:param profile: Profile number (0-31)
:param adc: ADC channel to take IIR input from (0-7)
:param a1: 18 bit signed A1 coefficient (Y1 coefficient,
:param a1: 18-bit signed A1 coefficient (Y1 coefficient,
feedback, integrator gain)
:param b0: 18 bit signed B0 coefficient (recent,
:param b0: 18-bit signed B0 coefficient (recent,
X0 coefficient, feed forward, proportional gain)
:param b1: 18 bit signed B1 coefficient (old,
:param b1: 18-bit signed B1 coefficient (old,
X1 coefficient, feed forward, proportional gain)
:param dly: IIR update suppression time. In units of IIR cycles
(~1.2 µs, 0-255).
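In SI units the same profile is typically filled in with :meth:`Channel.set_iir`; a sketch with purely illustrative gains: ::

    # kp: proportional gain, ki: integrator gain in 1/s, delay in s
    self.suservo0_ch0.set_iir(profile=0, adc=0, kp=-0.1, ki=-300.0, g=0.0, delay=0.0)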
@ -499,7 +499,7 @@ class Channel:
consistent and valid data, stop the servo before using this method.
:param profile: Profile number (0-31)
:return: 17 bit unsigned Y0
:return: 17-bit unsigned Y0
"""
return self.servo.read(STATE_SEL | (self.servo_channel << 5) | profile)
@ -535,7 +535,7 @@ class Channel:
This method advances the timeline by one servo memory access.
:param profile: Profile number (0-31)
:param y: 17 bit unsigned Y0
:param y: 17-bit unsigned Y0
"""
# State memory is 25 bits wide and signed.
# Reads interact with the 18 MSBs (coefficient memory width)

View File

@ -27,7 +27,7 @@ class TTLOut:
This should be used with output-only channels.
:param channel: channel number
:param channel: Channel number
"""
kernel_invariants = {"core", "channel", "target_o"}
@ -109,7 +109,7 @@ class TTLInOut:
API is active (e.g. the gate is open, or the input events have not been
fully read out), another API must not be used simultaneously.
:param channel: channel number
:param channel: Channel number
"""
kernel_invariants = {"core", "channel", "gate_latency_mu",
"target_o", "target_oe", "target_sens", "target_sample"}
@ -145,7 +145,7 @@ class TTLInOut:
"""Set the direction to output at the current position of the time
cursor.
There must be a delay of at least one RTIO clock cycle before any
A delay of at least one RTIO clock cycle is necessary before any
other command can be issued.
This method only configures the direction at the FPGA. When using
@ -158,7 +158,7 @@ class TTLInOut:
"""Set the direction to input at the current position of the time
cursor.
There must be a delay of at least one RTIO clock cycle before any
A delay of at least one RTIO clock cycle is necessary before any
other command can be issued.
This method only configures the direction at the FPGA. When using
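In kernel code this typically looks like the following, with ``ttl0`` as a placeholder channel: ::

    self.ttl0.output()
    delay(1*us)            # at least one RTIO clock cycle before the next command
    self.ttl0.pulse(2*us)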
@ -326,17 +326,18 @@ class TTLInOut:
:return: The number of events before the timeout elapsed (0 if none
observed).
Examples:
**Examples:**
To count events on channel ``ttl_input``, up to the current timeline
position::
position: ::
ttl_input.count(now_mu())
If other events are scheduled between the end of the input gate
period and when the number of events is counted, using ``now_mu()``
as timeout consumes an unnecessary amount of timeline slack. In
such cases, it can be beneficial to pass a more precise timestamp,
for example::
period and when the number of events is counted, using
:meth:`~artiq.language.core.now_mu()` as timeout consumes an
unnecessary amount of timeline slack. In such cases, it can be
beneficial to pass a more precise timestamp, for example: ::
gate_end_mu = ttl_input.gate_rising(100 * us)
@ -350,7 +351,7 @@ class TTLInOut:
num_rising_edges = ttl_input.count(gate_end_mu)
The ``gate_*()`` family of methods return the cursor at the end
of the window, allowing this to be expressed in a compact fashion::
of the window, allowing this to be expressed in a compact fashion: ::
ttl_input.count(ttl_input.gate_rising(100 * us))
"""
@ -441,7 +442,7 @@ class TTLInOut:
was being watched.
The time cursor is not modified by this function. This function
always makes the slack negative.
always results in negative slack.
"""
rtio_output(self.target_sens, 0)
success = True

View File

@ -130,7 +130,7 @@ class CPLD:
:param spi_device: SPI bus device name
:param io_update_device: IO update RTIO TTLOut channel name
:param dds_reset_device: DDS reset RTIO TTLOut channel name
:param sync_device: AD9910 SYNC_IN RTIO TTLClockGen channel name
:param sync_device: AD9910 ``SYNC_IN`` RTIO TTLClockGen channel name
:param refclk: Reference clock (SMA, MMCX or on-board 100 MHz oscillator)
frequency in Hz
:param clk_sel: Reference clock selection. For hardware revision >= 1.3
@ -143,9 +143,9 @@ class CPLD:
1: divide-by-1; 2: divide-by-2; 3: divide-by-4.
On Urukul boards with CPLD gateware before v1.3.1 only the default
(0, i.e. variant dependent divider) is valid.
:param sync_sel: SYNC (multi-chip synchronisation) signal source selection.
0 corresponds to SYNC_IN being supplied by the FPGA via the EEM
connector. 1 corresponds to SYNC_OUT from DDS0 being distributed to the
:param sync_sel: ``SYNC`` (multi-chip synchronisation) signal source selection.
0 corresponds to ``SYNC_IN`` being supplied by the FPGA via the EEM
connector. 1 corresponds to ``SYNC_OUT`` from DDS0 being distributed to the
other chips.
:param rf_sw: Initial CPLD RF switch register setting (default: 0x0).
Knowledge of this state is not transferred between experiments.
@ -153,8 +153,8 @@ class CPLD:
0x00000000). See also :meth:`get_att_mu` which retrieves the hardware
state without side effects. Knowledge of this state is not transferred
between experiments.
:param sync_div: SYNC_IN generator divider. The ratio between the coarse
RTIO frequency and the SYNC_IN generator frequency (default: 2 if
:param sync_div: ``SYNC_IN`` generator divider. The ratio between the coarse
RTIO frequency and the ``SYNC_IN`` generator frequency (default: 2 if
``sync_device`` was specified).
:param core_device: Core device name
@ -204,7 +204,7 @@ class CPLD:
See :func:`urukul_cfg` for possible flags.
:param cfg: 24 bit data to be written. Will be stored at
:param cfg: 24-bit data to be written. Will be stored at
:attr:`cfg_reg`.
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24,
@ -237,7 +237,7 @@ class CPLD:
Resets the DDS I/O interface and verifies correct CPLD gateware
version.
Does not pulse the DDS MASTER_RESET as that confuses the AD9910.
Does not pulse the DDS ``MASTER_RESET`` as that confuses the AD9910.
:param blind: Do not attempt to verify presence and compatibility.
"""
@ -283,7 +283,7 @@ class CPLD:
def cfg_switches(self, state: TInt32):
"""Configure all four RF switches through the configuration register.
:param state: RF switch state as a 4 bit integer.
:param state: RF switch state as a 4-bit integer.
"""
self.cfg_write((self.cfg_reg & ~0xf) | state)
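For example, to turn on the RF switches of channels 0 and 2 and turn off those of channels 1 and 3 (``urukul0_cpld`` is a placeholder name): ::

    self.urukul0_cpld.cfg_switches(0b0101)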
@ -326,11 +326,10 @@ class CPLD:
@kernel
def set_all_att_mu(self, att_reg: TInt32):
"""Set all four digital step attenuators (in machine units).
"""Set all four digital step attenuators (in machine units).
See also :meth:`set_att_mu`.
.. seealso:: :meth:`set_att_mu`
:param att_reg: Attenuator setting string (32 bit)
:param att_reg: Attenuator setting string (32-bit)
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 32,
SPIT_ATT_WR, CS_ATT)
@ -342,8 +341,7 @@ class CPLD:
"""Set digital step attenuator in SI units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`set_att_mu`
See also :meth:`set_att_mu`.
:param channel: Attenuator channel (0-3).
:param att: Attenuation setting in dB. Higher value is more
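A usage sketch with placeholder names and values: ::

    # 10 dB of attenuation on channel 0; the other three channels are rewritten
    # with their previously stored settings
    self.urukul0_cpld.set_att(0, 10.0*dB)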
@ -359,9 +357,9 @@ class CPLD:
The result is stored and will be used in future calls of
:meth:`set_att_mu` and :meth:`set_att`.
.. seealso:: :meth:`get_channel_att_mu`
See also :meth:`get_channel_att_mu`.
:return: 32 bit attenuator settings
:return: 32-bit attenuator settings
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_INPUT, 32,
SPIT_ATT_RD, CS_ATT)
@ -380,7 +378,7 @@ class CPLD:
The result is stored and will be used in future calls of
:meth:`set_att_mu` and :meth:`set_att`.
.. seealso:: :meth:`get_att_mu`
See also :meth:`get_att_mu`.
:param channel: Attenuator channel (0-3).
:return: 8-bit digital attenuation setting:
@ -392,7 +390,7 @@ class CPLD:
def get_channel_att(self, channel: TInt32) -> TFloat:
"""Get digital step attenuator value for a channel in SI units.
.. seealso:: :meth:`get_channel_att_mu`
See also :meth:`get_channel_att_mu`.
:param channel: Attenuator channel (0-3).
:return: Attenuation setting in dB. Higher value is more
@ -403,14 +401,14 @@ class CPLD:
@kernel
def set_sync_div(self, div: TInt32):
"""Set the SYNC_IN AD9910 pulse generator frequency
"""Set the ``SYNC_IN`` AD9910 pulse generator frequency
and align it to the current RTIO timestamp.
The SYNC_IN signal is derived from the coarse RTIO clock
The ``SYNC_IN`` signal is derived from the coarse RTIO clock
and the divider must be a power of two.
Configure ``sync_sel == 0``.
:param div: SYNC_IN frequency divider. Must be a power of two.
:param div: ``SYNC_IN`` frequency divider. Must be a power of two.
Minimum division ratio is 2. Maximum division ratio is 16.
"""
ftw_max = 1 << 4
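For instance, with ``sync_sel == 0``, deriving ``SYNC_IN`` at half the coarse RTIO frequency (placeholder device name): ::

    self.urukul0_cpld.set_sync_div(2)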

View File

@ -1,7 +1,7 @@
"""RTIO driver for the Zotino 32-channel, 16-bit 1MSPS DAC.
Output event replacement is not supported and issuing commands at the same
time is an error.
time results in a collision error.
"""
from artiq.language.core import kernel

View File

@ -348,7 +348,7 @@ class _DACWidget(QtWidgets.QFrame):
grid.setHorizontalSpacing(0)
grid.setVerticalSpacing(0)
self.setLayout(grid)
label = QtWidgets.QLabel("{} ch{}".format(title, channel))
label = QtWidgets.QLabel("{}_ch{}".format(title, channel))
label.setAlignment(QtCore.Qt.AlignCenter)
grid.addWidget(label, 1, 1)
@ -373,7 +373,7 @@ class _DACWidget(QtWidgets.QFrame):
return (self.title, self.channel)
def to_model_path(self):
return "dac/{} ch{}".format(self.title, self.channel)
return "dac/{}_ch{}".format(self.title, self.channel)
_WidgetDesc = namedtuple("_WidgetDesc", "uid comment cls arguments")
@ -391,8 +391,6 @@ def setup_from_ddb(ddb):
comment = v.get("comment")
if v["type"] == "local":
if v["module"] == "artiq.coredevice.ttl":
if "ttl_urukul" in k:
continue
channel = v["arguments"]["channel"]
force_out = v["class"] == "TTLOut"
widget = _WidgetDesc(k, comment, _TTLWidget, (channel, force_out, k))
@ -861,9 +859,10 @@ class _MonInjDock(QDockWidgetCloseDetect):
def delete_widget(self, index, checked):
widget = self.flow.itemAt(index).widget()
widget.hide()
self.manager.dm.setup_monitoring(False, widget)
self.flow.layout.takeAt(index)
widget.setParent(self.manager.main_window)
widget.hide()
def add_channels(self):
channels = self.channel_dialog.channels
@ -871,9 +870,9 @@ class _MonInjDock(QDockWidgetCloseDetect):
def layout_widgets(self, widgets):
for widget in sorted(widgets, key=lambda w: w.sort_key()):
widget.show()
self.manager.dm.setup_monitoring(True, widget)
self.flow.addWidget(widget)
widget.show()
def restore_widgets(self):
if self.widget_uids is not None:
@ -948,7 +947,7 @@ class MonInj:
del self.docks[name]
self.update_closable()
dock.delete_all_widgets()
dock.hide() # dock may be parent, only delete on exit
dock.deleteLater()
def update_closable(self):
flags = (QtWidgets.QDockWidget.DockWidgetMovable |

View File

@ -105,6 +105,7 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(nextafter),
api!(pow),
api!(round),
api!(rint),
api!(sin),
api!(sinh),
api!(sqrt),
@ -172,4 +173,12 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(spi_set_config = ::nrt_bus::spi::set_config),
api!(spi_write = ::nrt_bus::spi::write),
api!(spi_read = ::nrt_bus::spi::read),
/*
* syscall for unit tests
* Used in `artiq.tests.coredevice.test_exceptions.ExceptionTest.test_raise_exceptions_kernel`
* This syscall checks that the exception IDs used in the Python `EmbeddingMap` (in `artiq.compiler.embedding`)
* match the `EXCEPTION_ID_LOOKUP` defined in the firmware (`artiq::firmware::ksupport::eh_artiq`)
*/
api!(test_exception_id_sync = ::eh_artiq::test_exception_id_sync)
];

View File

@ -328,6 +328,7 @@ extern fn stop_fn(_version: c_int,
}
}
// Must be kept in sync with `artiq.compiler.embedding`
static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
("RuntimeError", 0),
("RTIOUnderflow", 1),
@ -340,7 +341,7 @@ static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
("ZeroDivisionError", 8),
("IndexError", 9),
("UnwrapNoneError", 10),
("SubkernelError", 11)
("SubkernelError", 11),
];
pub fn get_exception_id(name: &str) -> u32 {
@ -352,3 +353,29 @@ pub fn get_exception_id(name: &str) -> u32 {
unimplemented!("unallocated internal exception id")
}
/// Takes as input exception id from host
/// Generates a new exception with:
/// * `id` set to `exn_id`
/// * `message` set to corresponding exception name from `EXCEPTION_ID_LOOKUP`
///
/// The message is matched on the host to ensure the correct exception is being referred to.
/// This test checks the synchronization of exception ids for runtime errors
#[no_mangle]
pub extern "C-unwind" fn test_exception_id_sync(exn_id: u32) {
let message = EXCEPTION_ID_LOOKUP
.iter()
.find_map(|&(name, id)| if id == exn_id { Some(name) } else { None })
.unwrap_or("unallocated internal exception id");
let exn = Exception {
id: exn_id,
file: file!().as_c_slice(),
line: 0,
column: 0,
function: "test_exception_id_sync".as_c_slice(),
message: message.as_c_slice(),
param: [0, 0, 0]
};
unsafe { raise(&exn) };
}

View File

@ -484,17 +484,23 @@ extern "C-unwind" fn subkernel_load_run(id: u32, destination: u8, run: bool) {
extern "C-unwind" fn subkernel_await_finish(id: u32, timeout: i64) {
send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout });
recv!(SubkernelAwaitFinishReply { status } => {
match status {
SubkernelStatus::NoError => (),
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
recv(move |request| {
if let SubkernelAwaitFinishReply = request { }
else if let SubkernelError(status) = request {
match status {
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation"),
SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
}
} else {
send(&Log(format_args!("unexpected reply: {:?}\n", request)));
loop {}
}
})
}
@ -512,23 +518,28 @@ extern fn subkernel_send_message(id: u32, is_return: bool, destination: u8,
extern "C-unwind" fn subkernel_await_message(id: i32, timeout: i64, tags: &CSlice<u8>, min: u8, max: u8) -> u8 {
send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() });
recv!(SubkernelMsgRecvReply { status, count } => {
match status {
SubkernelStatus::NoError => {
if count < &min || count > &max {
raise!("SubkernelError",
"Received less or more arguments than expected");
}
*count
recv(move |request| {
if let SubkernelMsgRecvReply { count } = request {
if count < &min || count > &max {
raise!("SubkernelError",
"Received less or more arguments than expected");
}
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation")
*count
} else if let SubkernelError(status) = request {
match status {
SubkernelStatus::IncorrectState => raise!("SubkernelError",
"Subkernel not running"),
SubkernelStatus::Timeout => raise!("SubkernelError",
"Subkernel timed out"),
SubkernelStatus::CommLost => raise!("SubkernelError",
"Lost communication with satellite"),
SubkernelStatus::OtherError => raise!("SubkernelError",
"An error occurred during subkernel operation"),
SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
}
} else {
send(&Log(format_args!("unexpected reply: {:?}\n", request)));
loop {}
}
})
// RpcRecvRequest should be called `count` times after this to receive message data

View File

@ -123,8 +123,8 @@ pub enum Packet {
SubkernelLoadRunRequest { source: u8, destination: u8, id: u32, run: bool },
SubkernelLoadRunReply { destination: u8, succeeded: bool },
SubkernelFinished { destination: u8, id: u32, with_exception: bool, exception_src: u8 },
SubkernelExceptionRequest { destination: u8 },
SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
SubkernelExceptionRequest { source: u8, destination: u8 },
SubkernelException { destination: u8, last: bool, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelMessage { source: u8, destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
SubkernelMessageAck { destination: u8 },
}
@ -367,14 +367,17 @@ impl Packet {
exception_src: reader.read_u8()?
},
0xc9 => Packet::SubkernelExceptionRequest {
source: reader.read_u8()?,
destination: reader.read_u8()?
},
0xca => {
let destination = reader.read_u8()?;
let last = reader.read_bool()?;
let length = reader.read_u16()?;
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
reader.read_exact(&mut data[0..length as usize])?;
Packet::SubkernelException {
destination: destination,
last: last,
length: length,
data: data
@ -663,12 +666,14 @@ impl Packet {
writer.write_bool(with_exception)?;
writer.write_u8(exception_src)?;
},
Packet::SubkernelExceptionRequest { destination } => {
Packet::SubkernelExceptionRequest { source, destination } => {
writer.write_u8(0xc9)?;
writer.write_u8(source)?;
writer.write_u8(destination)?;
},
Packet::SubkernelException { last, length, data } => {
Packet::SubkernelException { destination, last, length, data } => {
writer.write_u8(0xca)?;
writer.write_u8(destination)?;
writer.write_bool(last)?;
writer.write_u16(length)?;
writer.write_all(&data[0..length as usize])?;
@ -693,18 +698,20 @@ impl Packet {
pub fn routable_destination(&self) -> Option<u8> {
// only for packets that could be re-routed, not only forwarded
match self {
Packet::DmaAddTraceRequest { destination, .. } => Some(*destination),
Packet::DmaAddTraceReply { destination, .. } => Some(*destination),
Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination),
Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination),
Packet::DmaPlaybackRequest { destination, .. } => Some(*destination),
Packet::DmaPlaybackReply { destination, .. } => Some(*destination),
Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination),
Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination),
Packet::SubkernelMessage { destination, .. } => Some(*destination),
Packet::SubkernelMessageAck { destination, .. } => Some(*destination),
Packet::DmaPlaybackStatus { destination, .. } => Some(*destination),
Packet::SubkernelFinished { destination, .. } => Some(*destination),
Packet::DmaAddTraceRequest { destination, .. } => Some(*destination),
Packet::DmaAddTraceReply { destination, .. } => Some(*destination),
Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination),
Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination),
Packet::DmaPlaybackRequest { destination, .. } => Some(*destination),
Packet::DmaPlaybackReply { destination, .. } => Some(*destination),
Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination),
Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination),
Packet::SubkernelMessage { destination, .. } => Some(*destination),
Packet::SubkernelMessageAck { destination, .. } => Some(*destination),
Packet::SubkernelExceptionRequest { destination, .. } => Some(*destination),
Packet::SubkernelException { destination, .. } => Some(*destination),
Packet::DmaPlaybackStatus { destination, .. } => Some(*destination),
Packet::SubkernelFinished { destination, .. } => Some(*destination),
_ => None
}
}

View File

@ -11,12 +11,12 @@ pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff;
pub const KSUPPORT_HEADER_SIZE: usize = 0x74;
#[derive(Debug)]
pub enum SubkernelStatus {
NoError,
pub enum SubkernelStatus<'a> {
Timeout,
IncorrectState,
CommLost,
OtherError
Exception(eh::eh_artiq::Exception<'a>),
OtherError,
}
#[derive(Debug)]
@ -106,10 +106,11 @@ pub enum Message<'a> {
SubkernelLoadRunRequest { id: u32, destination: u8, run: bool },
SubkernelLoadRunReply { succeeded: bool },
SubkernelAwaitFinishRequest { id: u32, timeout: i64 },
SubkernelAwaitFinishReply { status: SubkernelStatus },
SubkernelAwaitFinishReply,
SubkernelMsgSend { id: u32, destination: Option<u8>, count: u8, tag: &'a [u8], data: *const *const () },
SubkernelMsgRecvRequest { id: i32, timeout: i64, tags: &'a [u8] },
SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 },
SubkernelMsgRecvReply { count: u8 },
SubkernelError(SubkernelStatus<'a>),
Log(fmt::Arguments<'a>),
LogSlice(&'a str)

View File

@ -95,7 +95,9 @@ pub mod subkernel {
use board_artiq::drtio_routing::RoutingTable;
use board_misoc::clock;
use proto_artiq::{drtioaux_proto::{PayloadStatus, MASTER_PAYLOAD_MAX_SIZE}, rpc_proto as rpc};
use io::Cursor;
use io::{Cursor, ProtoRead};
use eh::eh_artiq::Exception;
use cslice::CSlice;
use rtio_mgt::drtio;
use sched::{Io, Mutex, Error as SchedError};
@ -226,8 +228,8 @@ pub mod subkernel {
if subkernel.state == SubkernelState::Running {
subkernel.state = SubkernelState::Finished {
status: match with_exception {
true => FinishStatus::Exception(exception_src),
false => FinishStatus::Ok,
true => FinishStatus::Exception(exception_src),
false => FinishStatus::Ok,
}
}
}
@ -256,6 +258,49 @@ pub mod subkernel {
}
}
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
let len = reader.read_u32()? as usize;
if len == usize::MAX {
let data = reader.read_u32()?;
Ok(unsafe { CSlice::new(data as *const u8, len) })
} else {
let pos = reader.position();
let slice = unsafe {
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
CSlice::new(ptr, len)
};
reader.set_position(pos + len);
Ok(slice)
}
}
pub fn read_exception(buffer: &[u8]) -> Result<Exception, Error>
{
let mut reader = Cursor::new(buffer);
let mut byte = reader.read_u8()?;
// to sync
while byte != 0x5a {
byte = reader.read_u8()?;
}
// skip sync bytes, 0x09 indicates exception
while byte != 0x09 {
byte = reader.read_u8()?;
}
let _len = reader.read_u32()?;
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
Ok(Exception {
id: reader.read_u32()?,
message: read_exception_string(&mut reader)?,
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
file: read_exception_string(&mut reader)?,
line: reader.read_u32()?,
column: reader.read_u32()?,
function: read_exception_string(&mut reader)?
})
}
pub fn retrieve_finish_status(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
routing_table: &RoutingTable, id: u32) -> Result<SubkernelFinished, Error> {
let _lock = subkernel_mutex.lock(io)?;

View File

@ -119,17 +119,19 @@ pub mod drtio {
},
// (potentially) routable packets
drtioaux::Packet::DmaAddTraceRequest { destination, .. } |
drtioaux::Packet::DmaAddTraceReply { destination, .. } |
drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } |
drtioaux::Packet::DmaRemoveTraceReply { destination, .. } |
drtioaux::Packet::DmaPlaybackRequest { destination, .. } |
drtioaux::Packet::DmaPlaybackReply { destination, .. } |
drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } |
drtioaux::Packet::SubkernelLoadRunReply { destination, .. } |
drtioaux::Packet::SubkernelMessage { destination, .. } |
drtioaux::Packet::SubkernelMessageAck { destination, .. } |
drtioaux::Packet::DmaPlaybackStatus { destination, .. } |
drtioaux::Packet::SubkernelFinished { destination, .. } => {
drtioaux::Packet::DmaAddTraceReply { destination, .. } |
drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } |
drtioaux::Packet::DmaRemoveTraceReply { destination, .. } |
drtioaux::Packet::DmaPlaybackRequest { destination, .. } |
drtioaux::Packet::DmaPlaybackReply { destination, .. } |
drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } |
drtioaux::Packet::SubkernelLoadRunReply { destination, .. } |
drtioaux::Packet::SubkernelMessage { destination, .. } |
drtioaux::Packet::SubkernelMessageAck { destination, .. } |
drtioaux::Packet::SubkernelExceptionRequest { destination, .. } |
drtioaux::Packet::SubkernelException { destination, .. } |
drtioaux::Packet::DmaPlaybackStatus { destination, .. } |
drtioaux::Packet::SubkernelFinished { destination, .. } => {
if *destination == 0 {
false
} else {
@ -612,9 +614,9 @@ pub mod drtio {
let mut remote_data: Vec<u8> = Vec::new();
loop {
let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
&drtioaux::Packet::SubkernelExceptionRequest { destination: destination })?;
&drtioaux::Packet::SubkernelExceptionRequest { source: 0, destination: destination })?;
match reply {
drtioaux::Packet::SubkernelException { last, length, data } => {
drtioaux::Packet::SubkernelException { destination: 0, last, length, data } => {
remote_data.extend(&data[0..length as usize]);
if last {
return Ok(remote_data);
@ -710,11 +712,30 @@ fn read_device_map() -> DeviceMap {
device_map
}
fn toggle_sed_spread(val: u8) {
unsafe { csr::rtio_core::sed_spread_enable_write(val); }
}
fn setup_sed_spread() {
config::read_str("sed_spread_enable", |r| {
match r {
Ok("1") => { info!("SED spreading enabled"); toggle_sed_spread(1); },
Ok("0") => { info!("SED spreading disabled"); toggle_sed_spread(0); },
Ok(_) => {
warn!("sed_spread_enable value not supported (only 1, 0 allowed), disabling by default");
toggle_sed_spread(0);
},
Err(_) => { info!("SED spreading disabled by default"); toggle_sed_spread(0) },
}
});
}
pub fn startup(io: &Io, aux_mutex: &Mutex,
routing_table: &Urc<RefCell<drtio_routing::RoutingTable>>,
up_destinations: &Urc<RefCell<[bool; drtio_routing::DEST_COUNT]>>,
ddma_mutex: &Mutex, subkernel_mutex: &Mutex) {
set_device_map(read_device_map());
setup_sed_spread();
drtio::startup(io, aux_mutex, routing_table, up_destinations, ddma_mutex, subkernel_mutex);
unsafe {
csr::rtio_core::reset_phy_write(1);

View File

@ -126,19 +126,6 @@ macro_rules! unexpected {
($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*))));
}
#[cfg(has_drtio)]
macro_rules! propagate_subkernel_exception {
( $exception:ident, $stream:ident ) => {
error!("Exception in subkernel");
match $stream {
None => return Ok(true),
Some(ref mut $stream) => {
$stream.write_all($exception)?;
}
}
}
}
// Persistent state
#[derive(Debug)]
struct Congress {
@ -686,23 +673,26 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
&kern::SubkernelAwaitFinishRequest{ id, timeout } => {
let res = subkernel::await_finish(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table,
id, timeout);
let status = match res {
let response = match res {
Ok(ref res) => {
if res.comm_lost {
kern::SubkernelStatus::CommLost
} else if let Some(exception) = &res.exception {
propagate_subkernel_exception!(exception, stream);
// will not be called after exception is served
kern::SubkernelStatus::OtherError
kern::SubkernelError(kern::SubkernelStatus::CommLost)
} else if let Some(raw_exception) = &res.exception {
let exception = subkernel::read_exception(raw_exception);
if let Ok(exception) = exception {
kern::SubkernelError(kern::SubkernelStatus::Exception(exception))
} else {
kern::SubkernelError(kern::SubkernelStatus::OtherError)
}
} else {
kern::SubkernelStatus::NoError
kern::SubkernelAwaitFinishReply
}
},
Err(SubkernelError::Timeout) => kern::SubkernelStatus::Timeout,
Err(SubkernelError::IncorrectState) => kern::SubkernelStatus::IncorrectState,
Err(_) => kern::SubkernelStatus::OtherError
Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
};
kern_send(io, &kern::SubkernelAwaitFinishReply { status: status })
kern_send(io, &response)
}
#[cfg(has_drtio)]
&kern::SubkernelMsgSend { id, destination, count, tag, data } => {
@ -712,72 +702,80 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
#[cfg(has_drtio)]
&kern::SubkernelMsgRecvRequest { id, timeout, tags } => {
let message_received = subkernel::message_await(io, subkernel_mutex, id as u32, timeout);
let (status, count) = match message_received {
Ok(ref message) => (kern::SubkernelStatus::NoError, message.count),
Err(SubkernelError::Timeout) => (kern::SubkernelStatus::Timeout, 0),
Err(SubkernelError::IncorrectState) => (kern::SubkernelStatus::IncorrectState, 0),
Err(SubkernelError::SubkernelFinished) => {
let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex,
routing_table, id as u32)?;
if res.comm_lost {
(kern::SubkernelStatus::CommLost, 0)
} else if let Some(exception) = &res.exception {
propagate_subkernel_exception!(exception, stream);
(kern::SubkernelStatus::OtherError, 0)
if let Err(SubkernelError::SubkernelFinished) = message_received {
let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex,
routing_table, id as u32)?;
if res.comm_lost {
kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::CommLost))?;
} else if let Some(raw_exception) = &res.exception {
let exception = subkernel::read_exception(raw_exception);
if let Ok(exception) = exception {
kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::Exception(exception)))?;
} else {
(kern::SubkernelStatus::OtherError, 0)
kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
}
} else {
kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
}
Err(_) => (kern::SubkernelStatus::OtherError, 0)
};
kern_send(io, &kern::SubkernelMsgRecvReply { status: status, count: count})?;
if let Ok(message) = message_received {
// receive code almost identical to RPC recv, except we are not reading from a stream
let mut reader = Cursor::new(message.data);
let mut current_tags = tags;
let mut i = 0;
loop {
// kernel has to consume all arguments in the whole message
let slot = kern_recv(io, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected root value slot from kernel CPU, not {:?}", other)
}
})?;
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
if size == 0 {
return Ok(0 as *mut ())
}
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
Ok(kern_recv(io, |reply| {
} else {
let message = match message_received {
Ok(ref message) => kern::SubkernelMsgRecvReply { count: message.count },
Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
Err(SubkernelError::SubkernelFinished) => unreachable!(), // taken care of above
Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
};
kern_send(io, &message)?;
if let Ok(message) = message_received {
// receive code almost identical to RPC recv, except we are not reading from a stream
let mut reader = Cursor::new(message.data);
let mut current_tags = tags;
let mut i = 0;
loop {
// kernel has to consume all arguments in the whole message
let slot = kern_recv(io, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other)
"expected root value slot from kernel CPU, not {:?}", other)
}
})?)
});
match res {
Ok(new_tags) => {
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
})?;
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
if size == 0 {
return Ok(0 as *mut ())
}
},
Err(_) => unexpected!("expected valid subkernel message data")
};
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
Ok(kern_recv(io, |reply| {
match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other)
}
})?)
});
match res {
Ok(new_tags) => {
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
}
},
Err(_) => unexpected!("expected valid subkernel message data")
};
}
}
Ok(())
} else {
// if timed out, no data has been received, exception should be raised by kernel
Ok(())
}
Ok(())
},
request => unexpected!("unexpected request {:?} from kernel CPU", request)

View File

@ -1,23 +1,22 @@
use core::mem;
use alloc::{string::String, format, vec::Vec, collections::btree_map::BTreeMap};
use cslice::AsCSlice;
use cslice::{CSlice, AsCSlice};
use board_artiq::{drtioaux, drtio_routing::RoutingTable, mailbox, spi};
use board_misoc::{csr, clock, i2c};
use proto_artiq::{
drtioaux_proto::PayloadStatus,
kernel_proto as kern,
kernel_proto as kern,
session_proto::Reply::KernelException as HostKernelException,
rpc_proto as rpc};
use eh::eh_artiq;
use io::Cursor;
use io::{Cursor, ProtoRead};
use kernel::eh_artiq::StackPointerBacktrace;
use ::{cricon_select, RtioMaster};
use cache::Cache;
use dma::{Manager as DmaManager, Error as DmaError};
use routing::{Router, Sliceable, SliceMeta};
use SAT_PAYLOAD_MAX_SIZE;
use MASTER_PAYLOAD_MAX_SIZE;
mod kernel_cpu {
@ -69,6 +68,7 @@ enum KernelState {
SubkernelAwaitFinish { max_time: i64, id: u32 },
DmaUploading { max_time: u64 },
DmaAwait { max_time: u64 },
SubkernelRetrievingException { destination: u8 },
}
#[derive(Debug)]
@ -134,10 +134,13 @@ struct MessageManager {
struct Session {
kernel_state: KernelState,
log_buffer: String,
last_exception: Option<Sliceable>,
source: u8, // which destination requested running the kernel
last_exception: Option<Sliceable>, // exceptions raised locally
external_exception: Vec<u8>, // exceptions from sub-subkernels
// which destination requested running the kernel
source: u8,
messages: MessageManager,
subkernels_finished: Vec<u32> // ids of subkernels finished
// ids of subkernels finished (with exception)
subkernels_finished: Vec<(u32, Option<u8>)>,
}
#[derive(Debug)]
@ -277,6 +280,7 @@ impl Session {
kernel_state: KernelState::Absent,
log_buffer: String::new(),
last_exception: None,
external_exception: Vec::new(),
source: 0,
messages: MessageManager::new(),
subkernels_finished: Vec::new()
@ -428,9 +432,9 @@ impl Manager {
}
}
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta {
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> SliceMeta {
match self.session.last_exception.as_mut() {
Some(exception) => exception.get_slice_sat(data_slice),
Some(exception) => exception.get_slice_master(data_slice),
None => SliceMeta { destination: 0, len: 0, status: PayloadStatus::FirstAndLast }
}
}
@ -517,7 +521,7 @@ impl Manager {
return;
}
match self.process_external_messages() {
match self.process_external_messages(router, routing_table, rank, destination) {
Ok(()) => (),
Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages
Err(Error::KernelException(exception)) => {
@ -549,20 +553,42 @@ impl Manager {
}
}
fn process_external_messages(&mut self) -> Result<(), Error> {
fn check_finished_kernels(&mut self, id: u32, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) {
for (i, (status, exception_source)) in self.session.subkernels_finished.iter().enumerate() {
if *status == id {
if exception_source.is_none() {
kern_send(&kern::SubkernelAwaitFinishReply).unwrap();
self.session.kernel_state = KernelState::Running;
self.session.subkernels_finished.swap_remove(i);
} else {
let destination = exception_source.unwrap();
self.session.external_exception = Vec::new();
self.session.kernel_state = KernelState::SubkernelRetrievingException { destination: destination };
router.route(drtioaux::Packet::SubkernelExceptionRequest {
source: self_destination, destination: destination
}, &routing_table, rank, self_destination);
}
break;
}
}
}
fn process_external_messages(&mut self, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) -> Result<(), Error> {
match &self.session.kernel_state {
KernelState::MsgAwait { id, max_time, tags } => {
if *max_time > 0 && clock::get_ms() > *max_time as u64 {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::Timeout, count: 0 })?;
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
self.session.kernel_state = KernelState::Running;
return Ok(())
}
if let Some(message) = self.session.messages.get_incoming(*id) {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::NoError, count: message.count })?;
kern_send(&kern::SubkernelMsgRecvReply { count: message.count })?;
let tags = tags.clone();
self.session.kernel_state = KernelState::Running;
pass_message_to_kernel(&message, &tags)
} else {
let id = *id;
self.check_finished_kernels(id, router, routing_table, rank, self_destination);
Err(Error::AwaitingMessage)
}
},
@ -576,19 +602,11 @@ impl Manager {
},
KernelState::SubkernelAwaitFinish { max_time, id } => {
if *max_time > 0 && clock::get_ms() > *max_time as u64 {
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::Timeout })?;
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
self.session.kernel_state = KernelState::Running;
} else {
let mut i = 0;
for status in &self.session.subkernels_finished {
if *status == *id {
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::NoError })?;
self.session.kernel_state = KernelState::Running;
self.session.subkernels_finished.swap_remove(i);
break;
}
i += 1;
}
let id = *id;
self.check_finished_kernels(id, router, routing_table, rank, self_destination);
}
Ok(())
}
@ -606,6 +624,9 @@ impl Manager {
}
Ok(())
}
KernelState::SubkernelRetrievingException { destination: _ } => {
Err(Error::AwaitingMessage)
}
_ => Ok(())
}
}
@ -628,16 +649,30 @@ impl Manager {
}
pub fn remote_subkernel_finished(&mut self, id: u32, with_exception: bool, exception_source: u8) {
if with_exception {
unsafe { kernel_cpu::stop() }
self.session.kernel_state = KernelState::Absent;
unsafe { self.cache.unborrow() }
self.last_finished = Some(SubkernelFinished {
source: self.session.source, id: self.current_id,
with_exception: true, exception_source: exception_source
})
let exception_src = if with_exception { Some(exception_source) } else { None };
self.session.subkernels_finished.push((id, exception_src));
}
pub fn received_exception(&mut self, exception_data: &[u8], last: bool, router: &mut Router, routing_table: &RoutingTable,
rank: u8, self_destination: u8) {
if let KernelState::SubkernelRetrievingException { destination } = self.session.kernel_state {
self.session.external_exception.extend_from_slice(exception_data);
if last {
if let Ok(exception) = read_exception(&self.session.external_exception) {
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Exception(exception))).unwrap();
} else {
kern_send(
&kern::SubkernelError(kern::SubkernelStatus::OtherError)).unwrap();
}
self.session.kernel_state = KernelState::Running;
} else {
/* fetch another slice */
router.route(drtioaux::Packet::SubkernelExceptionRequest {
source: self_destination, destination: destination
}, routing_table, rank, self_destination);
}
} else {
self.session.subkernels_finished.push(id);
warn!("Received unsolicited exception data");
}
}
@ -655,6 +690,7 @@ impl Manager {
(_, KernelState::DmaAwait { .. }) |
(_, KernelState::MsgSending) |
(_, KernelState::SubkernelAwaitLoad) |
(_, KernelState::SubkernelRetrievingException { .. }) |
(_, KernelState::SubkernelAwaitFinish { .. }) => {
// We're standing by; ignore the message.
return Ok(None)
@ -822,6 +858,48 @@ impl Drop for Manager {
}
}
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
let len = reader.read_u32()? as usize;
if len == usize::MAX {
let data = reader.read_u32()?;
Ok(unsafe { CSlice::new(data as *const u8, len) })
} else {
let pos = reader.position();
let slice = unsafe {
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
CSlice::new(ptr, len)
};
reader.set_position(pos + len);
Ok(slice)
}
}
fn read_exception(buffer: &[u8]) -> Result<eh_artiq::Exception, Error>
{
let mut reader = Cursor::new(buffer);
let mut byte = reader.read_u8()?;
// to sync
while byte != 0x5a {
byte = reader.read_u8()?;
}
// skip sync bytes, 0x09 indicates exception
while byte != 0x09 {
byte = reader.read_u8()?;
}
let _len = reader.read_u32()?;
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
Ok(eh_artiq::Exception {
id: reader.read_u32()?,
message: read_exception_string(&mut reader)?,
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
file: read_exception_string(&mut reader)?,
line: reader.read_u32()?,
column: reader.read_u32()?,
function: read_exception_string(&mut reader)?
})
}
fn kern_recv<R, F>(f: F) -> Result<R, Error>
where F: FnOnce(&kern::Message) -> Result<R, Error> {
if mailbox::receive() == 0 {

View File

@ -14,7 +14,7 @@ extern crate io;
extern crate eh;
use core::convert::TryFrom;
use board_misoc::{csr, ident, clock, uart_logger, i2c, pmp};
use board_misoc::{csr, ident, clock, config, uart_logger, i2c, pmp};
#[cfg(has_si5324)]
use board_artiq::si5324;
#[cfg(has_si549)]
@ -70,6 +70,10 @@ fn drtiosat_tsc_loaded() -> bool {
}
}
fn toggle_sed_spread(val: u8) {
unsafe { csr::drtiosat::sed_spread_enable_write(val); }
}
#[derive(Clone, Copy)]
pub enum RtioMaster {
Drtio,
@ -100,13 +104,13 @@ pub fn cricon_read() -> RtioMaster {
#[cfg(has_drtio_routing)]
macro_rules! forward {
($routing_table:expr, $destination:expr, $rank:expr, $repeaters:expr, $packet:expr) => {{
($router:expr, $routing_table:expr, $destination:expr, $rank:expr, $self_destination:expr, $repeaters:expr, $packet:expr) => {{
let hop = $routing_table.0[$destination as usize][$rank as usize];
if hop != 0 {
let repno = (hop - 1) as usize;
if repno < $repeaters.len() {
if $packet.expects_response() {
return $repeaters[repno].aux_forward($packet);
return $repeaters[repno].aux_forward($packet, $router, $routing_table, $rank, $self_destination);
} else {
let res = $repeaters[repno].aux_send($packet);
// allow the satellite to parse the packet before next
@ -122,7 +126,7 @@ macro_rules! forward {
#[cfg(not(has_drtio_routing))]
macro_rules! forward {
($routing_table:expr, $destination:expr, $rank:expr, $repeaters:expr, $packet:expr) => {}
($router:expr, $routing_table:expr, $destination:expr, $rank:expr, $self_destination:expr, $repeaters:expr, $packet:expr) => {}
}
fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmgr: &mut KernelManager,
@ -197,7 +201,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
let repno = hop - 1;
match _repeaters[repno].aux_forward(&drtioaux::Packet::DestinationStatusRequest {
destination: destination
}) {
}, router, _routing_table, *rank, *self_destination) {
Ok(()) => (),
Err(drtioaux::Error::LinkDown) => drtioaux::send(0, &drtioaux::Packet::DestinationDownReply)?,
Err(e) => {
@ -251,7 +255,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
drtioaux::Packet::MonitorRequest { destination: _destination, channel, probe } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let value;
#[cfg(has_rtio_moninj)]
unsafe {
@ -268,7 +272,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
drtioaux::send(0, &reply)
},
drtioaux::Packet::InjectionRequest { destination: _destination, channel, overrd, value } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
#[cfg(has_rtio_moninj)]
unsafe {
csr::rtio_moninj::inj_chan_sel_write(channel as _);
@ -278,7 +282,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
Ok(())
},
drtioaux::Packet::InjectionStatusRequest { destination: _destination, channel, overrd } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let value;
#[cfg(has_rtio_moninj)]
unsafe {
@ -294,22 +298,22 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
},
drtioaux::Packet::I2cStartRequest { destination: _destination, busno } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = i2c::start(busno).is_ok();
drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded })
}
drtioaux::Packet::I2cRestartRequest { destination: _destination, busno } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = i2c::restart(busno).is_ok();
drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded })
}
drtioaux::Packet::I2cStopRequest { destination: _destination, busno } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = i2c::stop(busno).is_ok();
drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded })
}
drtioaux::Packet::I2cWriteRequest { destination: _destination, busno, data } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
match i2c::write(busno, data) {
Ok(ack) => drtioaux::send(0,
&drtioaux::Packet::I2cWriteReply { succeeded: true, ack: ack }),
@ -318,7 +322,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
}
drtioaux::Packet::I2cReadRequest { destination: _destination, busno, ack } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
match i2c::read(busno, ack) {
Ok(data) => drtioaux::send(0,
&drtioaux::Packet::I2cReadReply { succeeded: true, data: data }),
@ -327,25 +331,25 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
}
drtioaux::Packet::I2cSwitchSelectRequest { destination: _destination, busno, address, mask } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = i2c::switch_select(busno, address, mask).is_ok();
drtioaux::send(0, &drtioaux::Packet::I2cBasicReply { succeeded: succeeded })
}
drtioaux::Packet::SpiSetConfigRequest { destination: _destination, busno, flags, length, div, cs } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = spi::set_config(busno, flags, length, div, cs).is_ok();
drtioaux::send(0,
&drtioaux::Packet::SpiBasicReply { succeeded: succeeded })
},
drtioaux::Packet::SpiWriteRequest { destination: _destination, busno, data } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = spi::write(busno, data).is_ok();
drtioaux::send(0,
&drtioaux::Packet::SpiBasicReply { succeeded: succeeded })
}
drtioaux::Packet::SpiReadRequest { destination: _destination, busno } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
match spi::read(busno) {
Ok(data) => drtioaux::send(0,
&drtioaux::Packet::SpiReadReply { succeeded: true, data: data }),
@ -355,7 +359,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
drtioaux::Packet::AnalyzerHeaderRequest { destination: _destination } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let header = analyzer.get_header();
drtioaux::send(0, &drtioaux::Packet::AnalyzerHeader {
total_byte_count: header.total_byte_count,
@ -365,7 +369,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
drtioaux::Packet::AnalyzerDataRequest { destination: _destination } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
let meta = analyzer.get_data(&mut data_slice);
drtioaux::send(0, &drtioaux::Packet::AnalyzerData {
@ -376,7 +380,7 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}
drtioaux::Packet::DmaAddTraceRequest { source, destination, id, status, length, trace } => {
forward!(_routing_table, destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, destination, *rank, *self_destination, _repeaters, &packet);
*self_destination = destination;
let succeeded = dmamgr.add(source, id, status, &trace, length as usize).is_ok();
router.send(drtioaux::Packet::DmaAddTraceReply {
@ -384,19 +388,19 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::DmaAddTraceReply { source, destination: _destination, id, succeeded } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
dmamgr.ack_upload(kernelmgr, source, id, succeeded, router, *rank, *self_destination, _routing_table);
Ok(())
}
drtioaux::Packet::DmaRemoveTraceRequest { source, destination: _destination, id } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let succeeded = dmamgr.erase(source, id).is_ok();
router.send(drtioaux::Packet::DmaRemoveTraceReply {
destination: source, succeeded: succeeded
}, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::DmaPlaybackRequest { source, destination: _destination, id, timestamp } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
// no DMA with a running kernel
let succeeded = !kernelmgr.is_running() && dmamgr.playback(source, id, timestamp).is_ok();
router.send(drtioaux::Packet::DmaPlaybackReply {
@ -404,27 +408,27 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
}, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::DmaPlaybackReply { destination: _destination, succeeded } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
if !succeeded {
kernelmgr.ddma_nack();
}
Ok(())
}
drtioaux::Packet::DmaPlaybackStatus { source: _, destination: _destination, id, error, channel, timestamp } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
dmamgr.remote_finished(kernelmgr, id, error, channel, timestamp);
Ok(())
}
drtioaux::Packet::SubkernelAddDataRequest { destination, id, status, length, data } => {
forward!(_routing_table, destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, destination, *rank, *self_destination, _repeaters, &packet);
*self_destination = destination;
let succeeded = kernelmgr.add(id, status, &data, length as usize).is_ok();
drtioaux::send(0,
&drtioaux::Packet::SubkernelAddDataReply { succeeded: succeeded })
}
drtioaux::Packet::SubkernelLoadRunRequest { source, destination: _destination, id, run } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let mut succeeded = kernelmgr.load(id).is_ok();
// allow preloading a kernel with delayed run
if run {
@ -441,35 +445,41 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
_routing_table, *rank, *self_destination)
}
drtioaux::Packet::SubkernelLoadRunReply { destination: _destination, succeeded } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
// received if local subkernel started another, remote subkernel
kernelmgr.subkernel_load_run_reply(succeeded, *self_destination);
Ok(())
}
drtioaux::Packet::SubkernelFinished { destination: _destination, id, with_exception, exception_src } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
kernelmgr.remote_subkernel_finished(id, with_exception, exception_src);
Ok(())
}
drtioaux::Packet::SubkernelExceptionRequest { destination: _destination } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
drtioaux::Packet::SubkernelExceptionRequest { source, destination: _destination } => {
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
let meta = kernelmgr.exception_get_slice(&mut data_slice);
drtioaux::send(0, &drtioaux::Packet::SubkernelException {
router.send(drtioaux::Packet::SubkernelException {
destination: source,
last: meta.status.is_last(),
length: meta.len,
data: data_slice,
})
}, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::SubkernelException { destination: _destination, last, length, data } => {
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
kernelmgr.received_exception(&data[..length as usize], last, router, _routing_table, *rank, *self_destination);
Ok(())
}
drtioaux::Packet::SubkernelMessage { source, destination: _destination, id, status, length, data } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
kernelmgr.message_handle_incoming(status, length as usize, id, &data);
router.send(drtioaux::Packet::SubkernelMessageAck {
destination: source
}, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::SubkernelMessageAck { destination: _destination } => {
forward!(_routing_table, _destination, *rank, _repeaters, &packet);
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
if kernelmgr.message_ack_slice() {
let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
if let Some(meta) = kernelmgr.message_get_slice(&mut data_slice) {
@ -754,6 +764,18 @@ pub extern fn main() -> i32 {
init_rtio_crg();
config::read_str("sed_spread_enable", |r| {
match r {
Ok("1") => { info!("SED spreading enabled"); toggle_sed_spread(1); },
Ok("0") => { info!("SED spreading disabled"); toggle_sed_spread(0); },
Ok(_) => {
warn!("sed_spread_enable value not supported (only 1, 0 allowed), disabling by default");
toggle_sed_spread(0);
},
Err(_) => { info!("SED spreading disabled by default"); toggle_sed_spread(0) },
}
});
#[cfg(has_drtio_eem)]
drtio_eem::init();

View File

@ -75,6 +75,11 @@ impl Repeater {
if rep_link_rx_up(self.repno) {
if let Ok(Some(drtioaux::Packet::EchoReply)) = drtioaux::recv(self.auxno) {
info!("[REP#{}] remote replied after {} packets", self.repno, ping_count);
// clear the aux buffer
let max_time = clock::get_ms() + 200;
while clock::get_ms() < max_time {
let _ = drtioaux::recv(self.auxno);
}
self.state = RepeaterState::Up;
if let Err(e) = self.sync_tsc() {
error!("[REP#{}] failed to sync TSC ({})", self.repno, e);
@ -181,10 +186,31 @@ impl Repeater {
}
}
pub fn aux_forward(&self, request: &drtioaux::Packet) -> Result<(), drtioaux::Error<!>> {
pub fn aux_forward(&self, request: &drtioaux::Packet, router: &mut Router,
routing_table: &drtio_routing::RoutingTable, rank: u8,
self_destination: u8) -> Result<(), drtioaux::Error<!>> {
self.aux_send(request)?;
let reply = self.recv_aux_timeout(200)?;
drtioaux::send(0, &reply).unwrap();
loop {
let reply = self.recv_aux_timeout(200)?;
match reply {
// async/locally requested packets to be consumed or routed
// these may arrive while a packet is being forwarded
drtioaux::Packet::DmaPlaybackStatus { .. } |
drtioaux::Packet::SubkernelFinished { .. } |
drtioaux::Packet::SubkernelMessage { .. } |
drtioaux::Packet::SubkernelMessageAck { .. } |
drtioaux::Packet::SubkernelLoadRunReply { .. } |
drtioaux::Packet::SubkernelException { .. } |
drtioaux::Packet::DmaAddTraceReply { .. } |
drtioaux::Packet::DmaPlaybackReply { .. } => {
router.route(reply, routing_table, rank, self_destination);
}
_ => {
drtioaux::send(0, &reply).unwrap();
break;
}
}
}
Ok(())
}

View File

@ -4,7 +4,6 @@ use board_artiq::{drtioaux, drtio_routing};
use board_misoc::csr;
use core::cmp::min;
use proto_artiq::drtioaux_proto::PayloadStatus;
use SAT_PAYLOAD_MAX_SIZE;
use MASTER_PAYLOAD_MAX_SIZE;
/* represents data that has to be sent with the aux protocol */
@ -57,7 +56,6 @@ impl Sliceable {
self.data.extend(data);
}
get_slice_fn!(get_slice_sat, SAT_PAYLOAD_MAX_SIZE);
get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE);
}

View File

@ -158,11 +158,11 @@ class Client:
if reply[0] != "OK":
return reply[0], None
length = int(reply[1])
json_str = self.fsocket.read(length).decode("ascii")
return "OK", json_str
json_bytes = self.fsocket.read(length)
return "OK", json_bytes
def main():
def get_argparser():
parser = argparse.ArgumentParser()
parser.add_argument("--server", default="afws.m-labs.hk", help="server to connect to (default: %(default)s)")
parser.add_argument("--port", default=80, type=int, help="port to connect to (default: %(default)d)")
@ -183,8 +183,11 @@ def main():
act_get_json.add_argument("variant", nargs="?", default=None, help="variant to get (can be omitted if user is authorised to build only one)")
act_get_json.add_argument("-o", "--out", default=None, help="output JSON file")
act_get_json.add_argument("-f", "--force", action="store_true", help="overwrite file if it already exists")
args = parser.parse_args()
return parser
def main():
args = get_argparser().parse_args()
client = Client(args.server, args.port, args.cert)
try:
if args.action == "build":
@ -256,7 +259,7 @@ def main():
variant = args.variant
else:
variant = client.get_single_variant(error_msg="User can get JSON of more than 1 variant - need to specify")
result, json_str = client.get_json(variant)
result, json_bytes = client.get_json(variant)
if result != "OK":
if result == "UNAUTHORIZED":
print(f"You are not authorized to get JSON of variant {variant}. Your firmware subscription may have expired. Contact helpdesk\x40m-labs.hk.")
@ -265,10 +268,10 @@ def main():
if not args.force and os.path.exists(args.out):
print(f"File {args.out} already exists. You can use -f to overwrite the existing file.")
sys.exit(1)
with open(args.out, "w") as f:
f.write(json_str)
with open(args.out, "wb") as f:
f.write(json_bytes)
else:
print(json_str)
sys.stdout.buffer.write(json_bytes)
else:
raise ValueError
finally:

View File

@ -1,10 +1,4 @@
#!/usr/bin/env python3
"""
Client to send commands to :mod:`artiq_master` and display results locally.
The client can perform actions such as accessing/setting datasets,
scanning devices, scheduling experiments, and looking for experiments/devices.
"""
import argparse
import logging
@ -40,10 +34,10 @@ def get_argparser():
parser = argparse.ArgumentParser(description="ARTIQ CLI client")
parser.add_argument(
"-s", "--server", default="::1",
help="hostname or IP of the master to connect to")
help="hostname or IP of the master to connect to (default: %(default)s)")
parser.add_argument(
"--port", default=None, type=int,
help="TCP port to use to connect to the master")
help="TCP port to use to connect to the master (default: %(default)s)")
parser.add_argument("--version", action="version",
version="ARTIQ v{}".format(artiq_version),
help="print the ARTIQ version number")
@ -59,7 +53,8 @@ def get_argparser():
help="priority (higher value means sooner "
"scheduling, default: %(default)s)")
parser_add.add_argument("-t", "--timed", default=None, type=str,
help="set a due date for the experiment")
help="set a due date for the experiment "
"(default: %(default)s)")
parser_add.add_argument("-f", "--flush", default=False,
action="store_true",
help="flush the pipeline before preparing "

View File

@ -32,31 +32,31 @@ def get_argparser():
help="print the ARTIQ version number")
parser.add_argument(
"-s", "--server", default="::1",
help="hostname or IP of the master to connect to")
help="hostname or IP of the master to connect to (default: %(default)s)")
parser.add_argument(
"--port-notify", default=3250, type=int,
help="TCP port to connect to for notifications")
help="TCP port to connect to for notifications (default: %(default)s)")
parser.add_argument(
"--port-control", default=3251, type=int,
help="TCP port to connect to for control")
help="TCP port to connect to for control (default: %(default)s)")
parser.add_argument(
"--port-broadcast", default=1067, type=int,
help="TCP port to connect to for broadcasts")
help="TCP port to connect to for broadcasts (default: %(default)s)")
parser.add_argument(
"--db-file", default=None,
help="database file for local GUI settings")
help="database file for local GUI settings (default: %(default)s)")
parser.add_argument(
"-p", "--load-plugin", dest="plugin_modules", action="append",
help="Python module to load on startup")
parser.add_argument(
"--analyzer-proxy-timeout", default=5, type=float,
help="connection timeout to core analyzer proxy")
help="connection timeout to core analyzer proxy (default: %(default)s)")
parser.add_argument(
"--analyzer-proxy-timer", default=5, type=float,
help="retry timer to core analyzer proxy")
help="retry timer to core analyzer proxy (default: %(default)s)")
parser.add_argument(
"--analyzer-proxy-timer-backoff", default=1.1, type=float,
help="retry timer backoff multiplier to core analyzer proxy")
help="retry timer backoff multiplier to core analyzer proxy, (default: %(default)s)")
common_args.verbosity_args(parser)
return parser

View File

@ -833,7 +833,7 @@ def process(output, primary_description, satellites):
processor(peripheral)
def main():
def get_argparser():
parser = argparse.ArgumentParser(
description="ARTIQ device database template builder")
parser.add_argument("--version", action="version",
@ -847,8 +847,11 @@ def main():
default=[], metavar=("DESTINATION", "DESCRIPTION"), type=str,
help="add DRTIO satellite at the given destination number with "
"devices from the given JSON description")
return parser
args = parser.parse_args()
def main():
args = get_argparser().parse_args()
primary_description = jsondesc.load(args.primary_description)

View File

@ -30,7 +30,8 @@ Valid actions:
* firmware: write firmware to flash
* load: load main gateware bitstream into device (volatile but fast)
* erase: erase flash memory
* start: trigger the target to (re)load its gateware bitstream from flash
* start: trigger the target to (re)load its gateware bitstream from flash.
If your core device is reachable by network, prefer 'artiq_coremgmt reboot'.
Prerequisites:

View File

@ -40,21 +40,21 @@ def get_argparser():
group = parser.add_argument_group("databases")
group.add_argument("--device-db", default="device_db.py",
help="device database file (default: '%(default)s')")
help="device database file (default: %(default)s)")
group.add_argument("--dataset-db", default="dataset_db.mdb",
help="dataset file (default: '%(default)s')")
help="dataset file (default: %(default)s)")
group = parser.add_argument_group("repository")
group.add_argument(
"-g", "--git", default=False, action="store_true",
help="use the Git repository backend")
help="use the Git repository backend (default: %(default)s)")
group.add_argument(
"-r", "--repository", default="repository",
help="path to the repository (default: '%(default)s')")
help="path to the repository (default: %(default)s)")
group.add_argument(
"--experiment-subdir", default="",
help=("path to the experiment folder from the repository root "
"(default: '%(default)s')"))
"(default: %(default)s)"))
log_args(parser)
parser.add_argument("--name",

View File

@ -11,7 +11,7 @@ def get_argparser():
description="ARTIQ session manager. "
"Automatically runs the master, dashboard and "
"local controller manager on the current machine. "
"The latter requires the artiq-comtools package to "
"The latter requires the ``artiq-comtools`` package to "
"be installed.")
parser.add_argument("--version", action="version",
version="ARTIQ v{}".format(artiq_version),

View File

@ -1,6 +1,7 @@
from types import SimpleNamespace
from migen import *
from migen.genlib.cdc import MultiReg
from migen.genlib.resetsync import AsyncResetSynchronizer
from misoc.interconnect.csr import *
@ -51,6 +52,7 @@ class SyncRTIO(Module):
def __init__(self, tsc, channels, lane_count=8, fifo_depth=128):
self.cri = cri.Interface()
self.async_errors = Record(async_errors_layout)
self.sed_spread_enable = Signal()
chan_fine_ts_width = max(max(rtlink.get_fine_ts_width(channel.interface.o)
for channel in channels),
@ -61,10 +63,11 @@ class SyncRTIO(Module):
self.submodules.outputs = ClockDomainsRenamer("rio")(
SED(channels, tsc.glbl_fine_ts_width,
lane_count=lane_count, fifo_depth=fifo_depth,
enable_spread=True, fifo_high_watermark=0.75,
report_buffer_space=True, interface=self.cri))
fifo_high_watermark=0.75, report_buffer_space=True,
interface=self.cri))
self.comb += self.outputs.coarse_timestamp.eq(tsc.coarse_ts)
self.sync += self.outputs.minimum_coarse_timestamp.eq(tsc.coarse_ts + 16)
self.specials += MultiReg(self.sed_spread_enable, self.outputs.enable_spread, "rio")
self.submodules.inputs = ClockDomainsRenamer("rio")(
InputCollector(tsc, channels, interface=self.cri))
@ -78,6 +81,7 @@ class DRTIOSatellite(Module):
self.reset = CSRStorage(reset=1)
self.reset_phy = CSRStorage(reset=1)
self.tsc_loaded = CSR()
self.sed_spread_enable = CSRStorage()
# master interface in the sys domain
self.cri = cri.Interface()
self.async_errors = Record(async_errors_layout)
@ -143,7 +147,7 @@ class DRTIOSatellite(Module):
self.rt_packet, tsc, self.async_errors)
def get_csrs(self):
return ([self.reset, self.reset_phy, self.tsc_loaded] +
return ([self.reset, self.sed_spread_enable, self.reset_phy, self.tsc_loaded] +
self.link_layer.get_csrs() + self.link_stats.get_csrs() +
self.rt_errors.get_csrs())

View File

@ -3,7 +3,7 @@ from operator import and_
from migen import *
from migen.genlib.resetsync import AsyncResetSynchronizer
from migen.genlib.cdc import BlindTransfer
from migen.genlib.cdc import MultiReg, BlindTransfer
from misoc.interconnect.csr import *
from artiq.gateware.rtio import cri
@ -18,6 +18,7 @@ class Core(Module, AutoCSR):
self.cri = cri.Interface()
self.reset = CSR()
self.reset_phy = CSR()
self.sed_spread_enable = CSRStorage()
self.async_error = CSR(3)
self.collision_channel = CSRStatus(16)
self.busy_channel = CSRStatus(16)
@ -67,6 +68,7 @@ class Core(Module, AutoCSR):
self.submodules += outputs
self.comb += outputs.coarse_timestamp.eq(tsc.coarse_ts)
self.sync += outputs.minimum_coarse_timestamp.eq(tsc.coarse_ts + 12)
self.specials += MultiReg(self.sed_spread_enable.storage, outputs.enable_spread, "rio")
inputs = ClockDomainsRenamer("rio")(InputCollector(tsc, channels,
quash_channels=quash_channels,

View File

@ -12,7 +12,7 @@ __all__ = ["SED"]
class SED(Module):
def __init__(self, channels, glbl_fine_ts_width,
lane_count=8, fifo_depth=128, fifo_high_watermark=1.0, enable_spread=True,
lane_count=8, fifo_depth=128, fifo_high_watermark=1.0,
quash_channels=[], report_buffer_space=False, interface=None):
seqn_width = layouts.seqn_width(lane_count, fifo_depth)
@ -23,7 +23,6 @@ class SED(Module):
layouts.fifo_payload(channels),
[channel.interface.o.delay for channel in channels],
glbl_fine_ts_width,
enable_spread=enable_spread,
quash_channels=quash_channels,
interface=interface)
self.submodules.fifos = FIFOs(lane_count, fifo_depth, fifo_high_watermark,
@ -47,6 +46,10 @@ class SED(Module):
self.cri.o_buffer_space.eq(self.fifos.buffer_space)
]
@property
def enable_spread(self):
return self.lane_dist.enable_spread
@property
def cri(self):
return self.lane_dist.cri

View File

@ -10,7 +10,7 @@ __all__ = ["LaneDistributor"]
class LaneDistributor(Module):
def __init__(self, lane_count, seqn_width, layout_payload,
compensation, glbl_fine_ts_width,
enable_spread=True, quash_channels=[], interface=None):
quash_channels=[], interface=None):
if lane_count & (lane_count - 1):
raise NotImplementedError("lane count must be a power of 2")
@ -28,6 +28,8 @@ class LaneDistributor(Module):
self.output = [Record(layouts.fifo_ingress(seqn_width, layout_payload))
for _ in range(lane_count)]
self.enable_spread = Signal()
# # #
o_status_wait = Signal()
@ -173,12 +175,11 @@ class LaneDistributor(Module):
]
# current lane has reached high watermark, spread events by switching to the next.
if enable_spread:
self.sync += [
If(current_lane_high_watermark | ~current_lane_writable,
force_laneB.eq(1)
),
If(do_write,
force_laneB.eq(0)
)
]
self.sync += [
If(self.enable_spread & (current_lane_high_watermark | ~current_lane_writable),
force_laneB.eq(1)
),
If(do_write,
force_laneB.eq(0)
)
]

View File

@ -217,7 +217,10 @@ class Satellite(BaseSoC, AMPSoC):
# satellite (master-controlled) RTIO
self.submodules.local_io = SyncRTIO(self.rtio_tsc, rtio_channels, lane_count=sed_lanes)
self.comb += self.drtiosat.async_errors.eq(self.local_io.async_errors)
self.comb += [
self.drtiosat.async_errors.eq(self.local_io.async_errors),
self.local_io.sed_spread_enable.eq(self.drtiosat.sed_spread_enable.storage)
]
# subkernel RTIO
self.submodules.rtio = rtio.KernelInitiator(self.rtio_tsc)

View File

@ -120,7 +120,8 @@ class StandaloneBase(MiniSoC, AMPSoC):
self.config["HAS_SI549"] = None
self.config["WRPLL_REF_CLK"] = "SMA_CLKIN"
else:
self.submodules += SMAClkinForward(self.platform)
if self.platform.hw_rev == "v2.0":
self.submodules += SMAClkinForward(self.platform)
self.config["HAS_SI5324"] = None
self.config["SI5324_SOFT_RESET"] = None
@ -596,7 +597,10 @@ class SatelliteBase(BaseSoC, AMPSoC):
# satellite (master-controlled) RTIO
self.submodules.local_io = SyncRTIO(self.rtio_tsc, rtio_channels, lane_count=sed_lanes)
self.comb += self.drtiosat.async_errors.eq(self.local_io.async_errors)
self.comb += [
self.drtiosat.async_errors.eq(self.local_io.async_errors),
self.local_io.sed_spread_enable.eq(self.drtiosat.sed_spread_enable.storage)
]
# subkernel RTIO
self.submodules.rtio = rtio.KernelInitiator(self.rtio_tsc)
@ -842,6 +846,8 @@ def main():
has_shuttler = any(peripheral["type"] == "shuttler" for peripheral in description["peripherals"])
if has_shuttler and (description["drtio_role"] == "standalone"):
raise ValueError("Shuttler requires DRTIO, please switch role to master")
if description["enable_wrpll"] and description["hw_rev"] in ["v1.0", "v1.1"]:
raise ValueError("Kasli {} does not support WRPLL".format(description["hw_rev"]))
soc = cls(description, gateware_identifier_str=args.gateware_identifier_str, **soc_kasli_argdict(args))
args.variant = description["variant"]

View File

@ -468,7 +468,10 @@ class _SatelliteBase(BaseSoC, AMPSoC):
# DRTIO
self.submodules.local_io = SyncRTIO(self.rtio_tsc, rtio_channels)
self.comb += self.drtiosat.async_errors.eq(self.local_io.async_errors)
self.comb += [
self.drtiosat.async_errors.eq(self.local_io.async_errors),
self.local_io.sed_spread_enable.eq(self.drtiosat.sed_spread_enable.storage)
]
# subkernel RTIO
self.submodules.rtio = rtio.KernelInitiator(self.rtio_tsc)

View File

@ -14,6 +14,7 @@ def simulate(input_events, compensation=None, wait=True):
if compensation is None:
compensation = [0]*256
dut = lane_distributor.LaneDistributor(LANE_COUNT, 8, layout, compensation, 3)
dut.comb += dut.enable_spread.eq(1)
output = []
access_results = []

View File

@ -27,6 +27,7 @@ class DUT(Module):
self.sed.coarse_timestamp.eq(self.sed.coarse_timestamp + 1),
self.sed.minimum_coarse_timestamp.eq(self.sed.coarse_timestamp + 16)
]
self.comb += self.sed.enable_spread.eq(0)
def simulate(input_events, **kwargs):
@ -110,6 +111,6 @@ class TestSED(unittest.TestCase):
input_events += [(now, 1), (now, 0)]
ttl_changes, access_results = simulate(input_events,
lane_count=2, fifo_depth=2, enable_spread=False)
lane_count=2, fifo_depth=2)
self.assertEqual([r[0] for r in access_results], ["ok"]*len(input_events))
self.assertEqual(ttl_changes, list(range(40, 40+40*20, 10)))

View File

@ -28,7 +28,7 @@ def kernel(arg=None, flags={}):
This decorator marks an object's method for execution on the core
device.
When a decorated method is called from the Python interpreter, the :attr:`core`
When a decorated method is called from the Python interpreter, the ``core``
attribute of the object is retrieved and used as core device driver. The
core device driver will typically compile, transfer and run the method
(kernel) on the device.
@ -41,7 +41,7 @@ def kernel(arg=None, flags={}):
- if the method is a regular Python method (not a kernel), it generates
a remote procedure call (RPC) for execution on the host.
The decorator takes an optional parameter that defaults to :attr`core` and
The decorator takes an optional parameter that defaults to ``core`` and
specifies the name of the attribute to use as core device driver.
This decorator must be present in the global namespace of all modules using
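A minimal sketch of typical usage (the class, device, and method names here are illustrative, not taken from this diff): ::

    class Example(EnvExperiment):
        def build(self):
            self.setattr_device("core")   # provides the ``core`` attribute used as the core device driver

        @kernel
        def run(self):
            # calling run() from the host compiles this method and executes it
            # on the core device through self.core
            self.core.reset()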
@ -134,7 +134,7 @@ def portable(arg=None, flags={}):
def rpc(arg=None, flags={}):
"""
This decorator marks a function for execution on the host interpreter.
This is also the default behavior of ARTIQ; however, this decorator allows
This is also the default behavior of ARTIQ; however, this decorator allows for
specifying additional flags.
"""
if arg is None:
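As an illustration, a host-side RPC marked asynchronous with the ``"async"`` flag (method name and body invented for the example): ::

    @rpc(flags={"async"})
    def report(self, value):
        # runs on the host; with the "async" flag, the calling kernel does not
        # wait for this RPC to complete and no return value is transferred
        print("measured:", value)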
@ -256,7 +256,7 @@ _time_manager = _DummyTimeManager()
def set_time_manager(time_manager):
"""Set the time manager used for simulating kernels by running them
directly inside the Python interpreter. The time manager responds to the
entering and leaving of interleave/parallel/sequential blocks, delays, etc. and
entering and leaving of parallel/sequential blocks, delays, etc. and
provides a time-stamped logging facility for events.
"""
global _time_manager
@ -280,7 +280,7 @@ class _Parallel:
The execution time of a parallel block is the execution time of its longest
statement. A parallel block may contain sequential blocks, which themselves
may contain interleave blocks, etc.
may contain parallel blocks, etc.
"""
def __enter__(self):
_time_manager.enter_parallel()
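A short sketch of nested timing blocks consistent with this description (``ttl0`` and ``ttl1`` are assumed example output devices): ::

    with parallel:
        # both branches start at the same RTIO time; the parallel block lasts
        # as long as its longest branch
        self.ttl0.pulse(10*us)
        with sequential:
            self.ttl1.pulse(2*us)
            self.ttl1.pulse(2*us)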
@ -340,5 +340,5 @@ def watchdog(timeout):
class TerminationRequested(Exception):
"""Raised by ``pause`` when the user has requested termination."""
"""Raised by :meth:`pause` when the user has requested termination."""
pass
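A common host-side pattern, sketched here assuming the experiment has requested the ``scheduler`` device and that ``do_one_iteration`` is a hypothetical experiment step: ::

    def run(self):
        try:
            while True:
                self.do_one_iteration()
                self.scheduler.pause()   # raises TerminationRequested if termination was requested
        except TerminationRequested:
            print("Termination requested, ending gracefully")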

View File

@ -28,7 +28,7 @@ class DefaultMissing(Exception):
class CancelledArgsError(Exception):
"""Raised by the ``interactive`` context manager when an interactive
"""Raised by the :meth:`~artiq.language.environment.HasEnvironment.interactive` context manager when an interactive
arguments request is cancelled."""
pass
@ -117,7 +117,7 @@ class NumberValue(_SimpleArgProcessor):
``int`` will also result in an error unless these conditions are met.
When ``scale`` is not specified, and the unit is a common one (i.e.
defined in ``artiq.language.units``), then the scale is obtained from
defined in :mod:`~artiq.language.units`), then the scale is obtained from
the unit using a simple string match. For example, milliseconds (``"ms"``)
units set the scale to 0.001. No unit (default) corresponds to a scale of
1.0.
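For example (a sketch; the argument name is arbitrary): ::

    # displayed as "10 ms" in the GUI; the stored value is 0.010 (seconds),
    # since the "ms" unit sets the scale to 0.001
    self.setattr_argument("pulse_duration", NumberValue(10*ms, unit="ms"))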
@ -321,7 +321,8 @@ class HasEnvironment:
:param key: Name of the argument.
:param processor: A description of how to process the argument, such
as instances of ``BooleanValue`` and ``NumberValue``.
as instances of :class:`~artiq.language.environment.BooleanValue` and
:class:`~artiq.language.environment.NumberValue`.
:param group: An optional string that defines what group the argument
belongs to, for user interface purposes.
:param tooltip: An optional string to describe the argument in more
@ -347,7 +348,8 @@ class HasEnvironment:
"""Request arguments from the user interactively.
This context manager returns a namespace object on which the method
`setattr_argument` should be called, with the usual semantics.
:meth:`~artiq.language.environment.HasEnvironment.setattr_argument` should be called,
with the usual semantics.
When the context manager terminates, the experiment is blocked
and the user is presented with the requested argument widgets.
@ -355,7 +357,7 @@ class HasEnvironment:
the namespace contains the values of the arguments.
If the interactive arguments request is cancelled, raises
``CancelledArgsError``."""
:exc:`~artiq.language.environment.CancelledArgsError`."""
interactive_arglist = []
namespace = SimpleNamespace()
def setattr_argument(key, processor=None, group=None, tooltip=None):
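A sketch of the intended usage, assuming this runs in an experiment's ``run()`` on the host (the argument name is invented and the optional window title is omitted): ::

    def run(self):
        try:
            with self.interactive() as interactive:
                interactive.setattr_argument("proceed", BooleanValue(True))
            print("User chose:", interactive.proceed)
        except CancelledArgsError:
            print("Interactive request cancelled")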
@ -478,7 +480,7 @@ class HasEnvironment:
This function is used to get additional information for displaying the dataset.
See ``set_dataset`` for documentation of metadata items.
See :meth:`set_dataset` for documentation of metadata items.
"""
try:
return self.__dataset_mgr.get_metadata(key)

View File

@ -1,20 +1,6 @@
"""
Implementation and management of scan objects.
A scan object (e.g. :class:`artiq.language.scan.RangeScan`) represents a
one-dimensional sweep of a numerical range. Multi-dimensional scans are
constructed by combining several scan objects, for example using
:class:`artiq.language.scan.MultiScanManager`.
Iterate on a scan object to scan it, e.g. ::
for variable in self.scan:
do_something(variable)
Iterating multiple times on the same scan object is possible, with the scan
yielding the same values each time. Iterating concurrently on the
same scan object (e.g. via nested loops) is also supported, and the
iterators are independent from each other.
"""
import random
@ -32,6 +18,21 @@ __all__ = ["ScanObject",
class ScanObject:
"""
Represents a one-dimensional sweep of a numerical range. Multi-dimensional scans are
constructed by combining several scan objects, for example using
:class:`MultiScanManager`.
Iterate on a scan object to scan it, e.g. ::
for variable in self.scan:
do_something(variable)
Iterating multiple times on the same scan object is possible, with the scan
yielding the same values each time. Iterating concurrently on the
same scan object (e.g. via nested loops) is also supported, and the
iterators are independent from each other.
"""
def __iter__(self):
raise NotImplementedError
@ -163,7 +164,7 @@ class Scannable:
takes a scan object.
When ``scale`` is not specified, and the unit is a common one (i.e.
defined in ``artiq.language.units``), then the scale is obtained from
defined in :mod:`artiq.language.units`), then the scale is obtained from
the unit using a simple string match. For example, milliseconds (``"ms"``)
units set the scale to 0.001. No unit (default) corresponds to a scale of
1.0.

View File

@ -477,16 +477,16 @@ class Scheduler:
return self.notifier.raw_view
def check_pause(self, rid):
"""Returns ``True`` if there is a condition that could make ``pause``
"""Returns ``True`` if there is a condition that could make :meth:`pause`
not return immediately (termination requested or higher priority run).
The typical purpose of this function is to check from a kernel
whether returning control to the host and pausing would have an effect,
in order to avoid the cost of switching kernels in the common case
where ``pause`` does nothing.
where :meth:`pause` does nothing.
This function does not have side effects, and does not have to be
followed by a call to ``pause``.
followed by a call to :meth:`pause`.
"""
for pipeline in self._pipelines.values():
if rid in pipeline.pool.runs:
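The usual pattern, sketched with a hypothetical kernel work method and assuming the ``scheduler`` device has been requested: ::

    @kernel
    def run_chunk(self):
        # exit the kernel only once pausing would actually have an effect
        while not self.scheduler.check_pause():
            self.acquire_data_point()

    def run(self):
        while True:
            self.run_chunk()
            self.scheduler.pause()   # yields to higher-priority runs; raises TerminationRequested on termination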

View File

@ -362,7 +362,13 @@ def main():
write_results()
raise
finally:
device_mgr.notify_run_end()
try:
device_mgr.notify_run_end()
except:
# Also write results immediately if the `notify_run_end`
# callbacks produce an exception
write_results()
raise
put_completed()
elif action == "analyze":
try:

View File

@ -573,3 +573,29 @@ class NumpyQuotingTest(ExperimentCase):
def test_issue_1871(self):
"""Ensure numpy.array() does not break NumPy math functions"""
self.create(_NumpyQuoting).run()
class _IntBoundary(EnvExperiment):
def build(self):
self.setattr_device("core")
self.int32_min = numpy.iinfo(numpy.int32).min
self.int32_max = numpy.iinfo(numpy.int32).max
self.int64_min = numpy.iinfo(numpy.int64).min
self.int64_max = numpy.iinfo(numpy.int64).max
@kernel
def test_int32_bounds(self, min_val: TInt32, max_val: TInt32):
return min_val == self.int32_min and max_val == self.int32_max
@kernel
def test_int64_bounds(self, min_val: TInt64, max_val: TInt64):
return min_val == self.int64_min and max_val == self.int64_max
@kernel
def run(self):
self.test_int32_bounds(self.int32_min, self.int32_max)
self.test_int64_bounds(self.int64_min, self.int64_max)
class IntBoundaryTest(ExperimentCase):
def test_int_boundary(self):
self.create(_IntBoundary).run()

View File

@ -0,0 +1,59 @@
import unittest
import artiq.coredevice.exceptions as exceptions
from artiq.experiment import *
from artiq.test.hardware_testbench import ExperimentCase
from artiq.compiler.embedding import EmbeddingMap
from artiq.coredevice.core import test_exception_id_sync
"""
Test sync in exceptions raised between host and kernel
Check `artiq.compiler.embedding` and `artiq::firmware::ksupport::eh_artiq`
Considers the following two cases:
1) Exception raised on kernel and passed to host
2) Exception raised in a host function called from kernel
Ensures same exception is raised on both kernel and host in either case
"""
exception_names = EmbeddingMap().str_reverse_map
class _TestExceptionSync(EnvExperiment):
def build(self):
self.setattr_device("core")
@rpc
def _raise_exception_host(self, id):
exn = exception_names[id].split('.')[-1].split(':')[-1]
exn = getattr(exceptions, exn)
raise exn
@kernel
def raise_exception_host(self, id):
self._raise_exception_host(id)
@kernel
def raise_exception_kernel(self, id):
test_exception_id_sync(id)
class ExceptionTest(ExperimentCase):
def test_raise_exceptions_kernel(self):
exp = self.create(_TestExceptionSync)
for id, name in list(exception_names.items())[::-1]:
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_kernel(id)
self.assertEqual(str(ctx.exception).strip("'"), name)
def test_raise_exceptions_host(self):
exp = self.create(_TestExceptionSync)
for id, name in exception_names.items():
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_host(id)

View File

@ -141,132 +141,21 @@ class SchedulerCase(unittest.TestCase):
high_priority = 3
middle_priority = 2
low_priority = 1
# "late" is far in the future, beyond any reasonable test timeout.
late = time() + 100000
early = time() + 1
expect = [
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": low_priority,
"pipeline": "main",
"due_date": None,
"status": "pending",
"expid": expid_bg,
"flush": False
},
"key": 0
},
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": high_priority,
"pipeline": "main",
"due_date": late,
"status": "pending",
"expid": expid_empty,
"flush": False
},
"key": 1
},
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": middle_priority,
"pipeline": "main",
"due_date": early,
"status": "pending",
"expid": expid_empty,
"flush": False
},
"key": 2
},
{
"path": [0],
"action": "setitem",
"value": "preparing",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "prepare_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "running",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "preparing",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "prepare_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "paused",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "running",
"key": "status"
},
{
middle_priority_done = asyncio.Event()
def notify(mod):
# Watch for the "middle_priority" experiment's completion
if mod == {
"path": [2],
"action": "setitem",
"value": "run_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "running",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "analyzing",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "deleting",
"key": "status"
},
{
"path": [],
"action": "delitem",
"key": 2
},
]
done = asyncio.Event()
expect_idx = 0
def notify(mod):
nonlocal expect_idx
self.assertEqual(mod, expect[expect_idx])
expect_idx += 1
if expect_idx >= len(expect):
done.set()
}:
middle_priority_done.set()
scheduler.notifier.publish = notify
scheduler.start(loop=loop)
@ -275,7 +164,9 @@ class SchedulerCase(unittest.TestCase):
scheduler.submit("main", expid_empty, high_priority, late)
scheduler.submit("main", expid_empty, middle_priority, early)
loop.run_until_complete(done.wait())
# Check that the "middle_priority" experiment finishes. This would hang for a
# long time (until "late") if the due times were not being respected.
loop.run_until_complete(middle_priority_done.wait())
scheduler.notifier.publish = None
loop.run_until_complete(scheduler.stop())

View File

@ -0,0 +1,252 @@
Building and developing ARTIQ
=============================
.. warning::
This section is only for software or FPGA developers who want to modify ARTIQ. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we usually provide board binaries for you; you can use the AFWS client to get updated versions if necessary, as described in :ref:`obtaining-binaries`. It is not necessary to build them yourself.
The easiest way to obtain an ARTIQ development environment is via the `Nix package manager <https://nixos.org/nix/>`_ on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
ARTIQ itself does not depend on Nix, and it is also possible to obtain everything from source (look into the ``flake.nix`` file to see what resources are used, and run the commands manually, adapting to your system) - but Nix makes the process a lot easier.
Installing Vivado
-----------------
It is necessary to independently install AMD's `Vivado <https://www.xilinx.com/support/download.html>`_, which requires a login for download and can't be automatically obtained by package managers. The "appropriate" Vivado version to use for building gateware and firmware can vary: some versions contain bugs that lead to hidden or visible failures, while others work fine. Refer to the ``flake.nix`` file from the ARTIQ repository to determine which version is used at M-Labs.
.. tip::
Text-search ``flake.nix`` for a mention of ``/opt/Xilinx/Vivado``. Given e.g. the line ::
profile = "set -e; source /opt/Xilinx/Vivado/2022.2/settings64.sh"
the intended Vivado version is 2022.2.
Download and run the official installer. If using NixOS, note that this will require an FHS chroot environment; the ARTIQ flake provides such an environment, which you can enter with the command ``vivado-env`` from the development environment (i.e. after ``nix develop``). Other tips:
- Be aware that Vivado is notoriously not a lightweight piece of software; you will likely need **at least 70 GB** of free space to install it.
- If you do not want to write to ``/opt``, you can install into a folder of your home directory.
- During the Vivado installation, uncheck ``Install cable drivers`` (they are not required, as we use better open source alternatives).
- If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
- Vivado installation issues are not uncommon. Searching for similar problems on `the M-Labs forum <https://forum.m-labs.hk/>`_ or `Vivado's own support forums <https://support.xilinx.com/s/topic/0TO2E000000YKXwWAO/installation-and-licensing>`_ might be helpful when looking for solutions.
.. _system-description:
System description file
-----------------------
ARTIQ gateware and firmware binaries are dependent on the system configuration. In other words, a specific set of ARTIQ binaries is bound to the exact arrangement of real-time hardware it was generated for: the core device itself, its role in a DRTIO context (master, satellite, or standalone), the (real-time) peripherals in use, the physical EEM ports they will be connected to, and various other basic specifications. This information is normally provided to the software in the form of a JSON file called the system description or system configuration file.
.. warning::
System configuration files are only used with Kasli and Kasli-SoC boards. KC705 and ZC706 ARTIQ configurations, due to their relative rarity and specialization, are handled on a case-by-case basis and selected through a variant name such as ``nist_clock``, with no system description file necessary. See below in :ref:`building` for where to find the list of supported variants. Writing new KC705 or ZC706 variants is not a trivial task, and not particularly recommended, unless you are an FPGA developer and know what you're doing.
If you already have your system configuration file on hand, you can edit it to reflect any changes in configuration. If you purchased your original system from M-Labs, or recently purchased new hardware to add to it, you can obtain your up-to-date system configuration file through AFWS at any time using the command ``$ afws_client get_json`` (see :ref:`AFWS client<afws-client>`). If you are starting from scratch, a close reading of ``coredevice_generic.schema.json`` in ``artiq/coredevice`` will be helpful.
System descriptions do not need to be very complex. At its most basic, a system description looks something like: ::
{
"target": "kasli",
"variant": "example",
"hw_rev": "v2.0",
"base": "master",
"peripherals": [
{
"type": "grabber",
"ports": [0]
}
]
}
Only these five fields are required, and the ``peripherals`` list can in principle be empty. A limited number of more extensive examples can currently be found in `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq/src/branch/master>`_, as well as in the main repository under ``artiq/examples/kasli_shuttler``. Once your system description file is complete, you can use ``artiq_ddb_template`` (see also :ref:`Utilities <ddb-template-tool>`) to test it and to generate a template for the corresponding :ref:`device database <device-db>`.
DRTIO descriptions
^^^^^^^^^^^^^^^^^^
Note that in DRTIO systems it is necessary to create one description file *per core device*. Satellites and their connected peripherals must be described separately. Satellites also need to be reflashed separately, albeit only if their personal system descriptions have changed. (The layout of satellites relative to the master is configurable on the fly and will be established much later, in the routing table; see :ref:`drtio-routing`. It is not necessary to rebuild or reflash if only changing the DRTIO routing table).
In contrast, only one device database should be generated even for a DRTIO system. Use a command of the form: ::
$ artiq_ddb_template -s 1 <satellite1>.json -s 2 <satellite2>.json <master>.json
The numbers designate the respective satellite's destination number, which must correspond to the destination numbers used when generating the routing table later.
Common system description changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include:
- ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`)
- ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources, see :ref:`sequence-errors`)
- various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later).
Nix development environment
---------------------------
* Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity.
* Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options.
* Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Check out the most recent release (``git checkout release-[number]``) for a stable version.
* If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (note that the edits must be made in the main ARTIQ flake, even if you are working with Zynq, see also tip below).
* Run ``nix develop`` at the root of the repository, where ``flake.nix`` is.
* Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`.
.. note::
You can also target legacy versions of ARTIQ; use Git to check out older release branches. Note however that older releases of ARTIQ required different development and build processes, which are best worked out by (also) consulting the corresponding older versions of the manual.
Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- :mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by running ``nix develop`` again in the same location.
.. tip::
If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ.
If you intend to modify the source housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: ::
$ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH
Note that this only applies for incremental builds. If you want to use ``nix build``, or make changes to the dependencies, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``.
Building only standard binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are working with original ARTIQ, and you only want to build a set of standard binaries (i.e. without changing the source code), you can also enter the development shell without cloning the repository, using ``nix develop`` as follows: ::
$ nix develop git+https://github.com/m-labs/artiq.git\?ref=release-[number]#boards
Leave off ``\?ref=release-[number]`` to prefer the current beta version instead of a numbered release.
.. note::
Adding ``#boards`` makes use of the ARTIQ flake's provided ``artiq-boards-shell``, a lighter environment optimized for building firmware and flashing boards, which can also be accessed by running ``nix develop .#boards`` if you have already cloned the repository. Developers should be aware that in this shell the current copy of the ARTIQ sources is not added to your ``PYTHONPATH``. Run ``nix flake show`` and read ``flake.nix`` carefully to understand the different available shells.
The equivalent command exists for ARTIQ-Zynq: ::
$ nix develop git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]
but if you are building ARTIQ-Zynq without intending to change the source, it is not actually necessary to enter the development environment at all; Nix is capable of accessing the official flake remotely for the build itself, eliminating the requirement for any particular environment.
This is equally possible for original ARTIQ, but not as useful, as the development environment (specifically the ``#boards`` shell) is still the easiest way to access the necessary tools for flashing the board. On the other hand, with Zynq, it is normally recommended to boot from SD card, which requires no further special tools. As long as you have a functioning Nix installation with flakes enabled, you can progress directly to the building instructions below.
.. _building:
Building ARTIQ
--------------
For general troubleshooting and debugging, especially with a 'fresh' board, see also :ref:`connecting-uart`.
Kasli or KC705 (ARTIQ original)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For Kasli, if you have your system description file on hand, you can at this point build both firmware and gateware with a command of the form: ::
$ python -m artiq.gateware.targets.kasli <description>.json
With KC705, use: ::
$ python -m artiq.gateware.targets.kc705 -V <variant>
This will create a directory ``artiq_kasli`` or ``artiq_kc705`` containing the binaries in a subdirectory named after your description file or variant. Flash the board as described in :ref:`writing-flash`, adding the option ``--srcbuild``, e.g., assuming your board is already connected by JTAG USB: ::
$ artiq_flash --srcbuild [-t kc705] -d artiq_<board>/<variant>
.. note::
To see supported KC705 variants, run: ::
$ python -m artiq.gateware.targets.kc705 --help
Look for the option ``-V VARIANT, --variant VARIANT``.
Kasli-SoC or ZC706 (ARTIQ on Zynq)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The building process for Zynq devices is a little more complex. The easiest method is to leverage ``nix build`` and the ``makeArtiqZynqPackage`` utility provided by the official flake. The ensuing command is rather long, because it uses a multi-clause expression in the Nix language to describe the desired result; it can be executed piece-by-piece using the `Nix REPL <https://nix.dev/manual/nix/2.18/command-ref/new-cli/nix3-repl.html>`_, but ``nix build`` provides a lot of useful conveniences.
For Kasli-SoC, run: ::
$ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="kasli_soc"; variant="<variant>"; json=<path/to/description.json>;}).kasli_soc-<variant>-sd'
Replace ``<variant>`` with ``master``, ``satellite``, or ``standalone``, depending on your targeted DRTIO role. Remove ``?ref=release-[number]`` to use the current beta version rather than a numbered release. If you have cloned the repository and prefer to use your local copy of the flake, replace the corresponding clause with ``builtins.getFlake "/absolute/path/to/your/artiq-zynq"``.
For ZC706, you can use a command of the same form: ::
$ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="zc706"; variant="<variant>";}).zc706-<variant>-sd'
or you can use the more direct version: ::
$ nix build --print-build-logs git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]#zc706-<variant>-sd
(This is possible for ZC706 because no system description file needs to be specified in the arguments.)
.. note::
To see supported ZC706 variants, you can run the following at the root of the repository: ::
$ src/gateware/zc706.py --help
Look for the option ``-V VARIANT, --variant VARIANT``. If you have not cloned the repository or are not in the development environment, try: ::
$ nix flake show git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number] | grep "package 'zc706.*sd"
to see the list of suitable build targets directly.
Any of these commands should produce a directory ``result`` which contains a file ``boot.bin``. As described in :ref:`writing-flash`, if your core device is currently accessible over the network, it can be flashed with :mod:`~artiq.frontend.artiq_coremgmt`. If it is not connected to the network:
1. Power off the board, extract the SD card and load ``boot.bin`` onto it manually.
2. Insert the SD card back into the board.
3. Ensure that the DIP switches (labeled BOOT MODE) are set correctly, to SD.
4. Power the board back on.
Optionally, the SD card may also be loaded at the same time with an additional file ``config.txt``, which can contain preset configuration values in the format ``key=value``, one per line. The keys are those used with :mod:`~artiq.frontend.artiq_coremgmt`. This allows e.g. presetting an IP address and any other configuration information.
After a successful boot, the "FPGA DONE" light should be illuminated and the board should respond to ping when plugged into Ethernet.
.. _zynq-jtag-boot :
Booting over JTAG/Ethernet
""""""""""""""""""""""""""
It is also possible to boot Zynq devices over USB and Ethernet. Flip the DIP switches to JTAG. The scripts ``remote_run.sh`` and ``local_run.sh`` in the ARTIQ-Zynq repository, intended for use with a remote JTAG server or a local connection to the core device respectively, are used at M-Labs to accomplish this. Both make use of the netboot tool ``artiq_netboot``, see also its source `here <https://git.m-labs.hk/M-Labs/artiq-netboot>`__, which is included in the ARTIQ-Zynq development environment. Adapt the relevant script to your system or read it closely to understand the options and the commands being run; note for example that ``remote_run.sh`` as written only supports ZC706.
You will need to generate the gateware, firmware and bootloader first, either through ``nix build`` or incrementally as below. After an incremental build add the option ``-i`` when running either of the scripts. If using ``nix build``, note that target names of the form ``<board>-<variant>-jtag`` (run ``nix flake show`` to see all targets) will output the three necessary files without combining them into ``boot.bin``.
.. warning::
A known Xilinx hardware bug on Zynq prevents repeatedly loading the SZL bootloader over JTAG (i.e. repeated calls of the ``*_run.sh`` scripts) without a POR reset. On Kasli-SoC, you can physically set a jumper on the ``PS_POR_B`` pins of your board and use the M-Labs `POR reset script <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/kasli_soc_por.py>`_.
Zynq incremental build
^^^^^^^^^^^^^^^^^^^^^^
The ``boot.bin`` file used in a Zynq SD card boot is in practice the combination of several files, normally ``top.bit`` (the gateware), ``runtime`` or ``satman`` (the firmware) and ``szl.elf`` (an open-source bootloader for Zynq `written by M-Labs <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/szl>`_, used in ARTIQ in place of Xilinx's FSBL). In some circumstances, especially if you are developing ARTIQ, you may prefer to construct these components separately. Be sure that you have cloned the repository and entered the development environment as described above.
To compile the gateware and firmware, enter the ``src`` directory and run two commands as follows:
For Kasli-SoC:
::
$ gateware/kasli_soc.py -g ../build/gateware <description.json>
$ make TARGET=kasli_soc GWARGS="path/to/description.json" <fw-type>
For ZC706:
::
$ gateware/zc706.py -g ../build/gateware -V <variant>
$ make TARGET=zc706 GWARGS="-V <variant>" <fw-type>
where ``fw-type`` is ``runtime`` for standalone or DRTIO master builds and ``satman`` for DRTIO satellites. Both the gateware and the firmware will generate into the ``../build`` destination directory. At this stage you can :ref:`boot from JTAG <zynq-jtag-boot>`; either of the ``*_run.sh`` scripts will expect the gateware and firmware files at their default locations, and the ``szl.elf`` bootloader is retrieved automatically.
.. warning::
Note that in between runs of ``make`` it is necessary to manually clear ``build``, even for different targets, or ``make`` will do nothing.
If you prefer to boot from SD card, you will need to construct your own ``boot.bin``. Build ``szl.elf`` from source by running a command of the form: ::
$ nix build git+https://git.m-labs.hk/m-labs/zynq-rs#<board>-szl
For easiest access run this command in the ``build`` directory. The ``szl.elf`` file will be in the subdirectory ``result``. To combine all three files into the boot image, create a file called ``boot.bif`` in ``build`` with the following contents: ::
the_ROM_image:
{
[bootloader]result/szl.elf
gateware/top.bit
firmware/armv7-none-eabihf/release/<fw-type>
}
Save this file. Now use ``mkbootimage`` to create ``boot.bin``: ::
$ mkbootimage boot.bif boot.bin
Boot from SD card as above.


@ -1,34 +1,52 @@
Compiler
========
The ARTIQ compiler transforms the Python code of the kernels into machine code executable on the core device. For limited purposes (normally, obtaining executable binaries of idle and startup kernels), it can be accessed through :mod:`~artiq.frontend.artiq_compile`. Otherwise it is invoked automatically whenever a function with an applicable decorator is called.
ARTIQ kernel code accepts *nearly,* but not quite, a strict subset of Python 3. The necessities of real-time operation impose a harsher set of limitations; as a result, many Python features are necessarily omitted, and there are some specific discrepancies (see also :ref:`compiler-pitfalls`).
In general, ARTIQ Python supports only statically typed variables; it implements no heap allocation or garbage collection systems, essentially disallowing any heap-based data structures (although lists and arrays remain available in a stack-based form); and it cannot use runtime dispatch, meaning that, for example, all elements of an array must be of the same type. Nonetheless, technical details aside, a basic knowledge of Python is entirely sufficient to write ARTIQ experiments.
.. note::
The ARTIQ compiler is now in its second iteration. The third generation, known as NAC3, is `currently in development <https://git.m-labs.hk/M-Labs/nac3>`_, and available for pre-alpha experimental use. NAC3 represents a major overhaul of ARTIQ compilation, and will feature much faster compilation speeds, a greatly improved type system, and more predictable and transparent operation. It is compatible with ARTIQ firmware starting at ARTIQ-7. Instructions for installation and basic usage differences can also be found `on the M-Labs Forum <https://forum.m-labs.hk/d/392-nac3-new-artiq-compiler-3-prealpha-release>`_. While NAC3 is a work in progress and many important features remain unimplemented, installation and feedback are welcome.
ARTIQ Python code
-----------------
A variety of short experiments can be found in the subfolders of ``artiq/examples``, especially under ``kc705_nist_clock/repository`` and ``no_hardware/repository``. Reading through these will give you a general idea of what ARTIQ Python is capable of and how to use it.
Functions and decorators
^^^^^^^^^^^^^^^^^^^^^^^^
The ARTIQ compiler recognizes several specialized decorators, which determine the way the decorated function will be compiled and handled.
``@kernel`` (see :meth:`~artiq.language.core.kernel`) designates kernel functions, which will be compiled for and executed on the core device; the basic setup and background for kernels is detailed on the :doc:`getting_started_core` page. ``@subkernel`` (:meth:`~artiq.language.core.subkernel`) designates subkernel functions, which are largely similar to kernels except that they are executed on satellite devices in a DRTIO setting, with some associated limitations; they are described in more detail on the :doc:`using_drtio_subkernels` page.
``@rpc`` (:meth:`~artiq.language.core.rpc`) designates functions to be executed on the host machine, which are compiled and run in regular Python, outside of the core device's real-time limitations. Notably, functions without decorators are assumed to be host-bound by default, and treated identically to an explicitly marked ``@rpc``. As a result, the explicit decorator is only really necessary when specifying additional flags (for example, ``flags={"async"}``, see below).
``@portable`` (:meth:`~artiq.language.core.portable`) designates functions to be executed *on the same device they are called.* In other words, when called from a kernel, a portable is executed as a kernel; when called from a subkernel, it is executed as a kernel, on the same satellite device as the calling subkernel; when called from a host function, it is executed on the host machine.
``@host_only`` (:meth:`~artiq.language.core.host_only`) functions are executed fully on the host, similarly to ``@rpc``, but calling them from a kernel as an RPC will be refused by the compiler. It can be used to mark functions which should only ever be called by the host.
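As an illustrative sketch (not one of the manual's examples; the method names are arbitrary), the different kinds of functions might appear together in a single experiment as follows: ::
    from artiq.experiment import *

    class DecoratorSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @portable
        def squared(self, x):
            # executed on whichever device calls it (host or core device)
            return x * x

        def report(self, value):
            # no decorator: called from a kernel, this runs on the host as an RPC
            print("result:", value)

        @kernel
        def run(self):
            # compiled for and executed on the core device
            self.core.reset()
            self.report(self.squared(4))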
.. warning::
ARTIQ goes to some lengths to cache code used in experiments correctly, so that experiments run according to the state of the code when they were started, even if the source is changed during the run time. Python itself annoyingly fails to implement this (see also `issue #416 <https://github.com/m-labs/artiq/issues/416>`_), necessitating a workaround on ARTIQ's part. One particular downstream limitation is that the ARTIQ compiler is unable to recognize decorators with path prefixes, i.e.: ::
import artiq.experiment as aq
[...]
@aq.kernel
def run(self):
pass
will fail to compile. As long as ``from artiq.experiment import *`` is used as in the examples, this is never an issue. If prefixes are strongly preferred, a possible workaround is to import decorators separately, as e.g. ``from artiq.language.core import kernel``.
.. _compiler-types:
ARTIQ types
^^^^^^^^^^^
Python/NumPy types correspond to ARTIQ types as follows:
+---------------+-------------------------+
| Python | ARTIQ |
@ -43,6 +61,10 @@ The Python types correspond to ARTIQ type annotations as follows:
+---------------+-------------------------+
| str | TStr |
+---------------+-------------------------+
| bytes | TBytes |
+---------------+-------------------------+
| bytearray | TByteArray |
+---------------+-------------------------+
| list of T | TList(T) |
+---------------+-------------------------+
| NumPy array | TArray(T, num_dims) |
@ -56,54 +78,111 @@ The Python types correspond to ARTIQ type annotations as follows:
| numpy.float64 | TFloat |
+---------------+-------------------------+
Integers are 32-bit by default but may be converted to 64-bit with ``numpy.int64``.
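As a short sketch of kernel-side code (assuming ``numpy`` has been imported as ``np``): ::
    x = 10            # 32-bit signed integer by default
    y = np.int64(x)   # converted to a 64-bit signed integer
    z = y << 40       # evaluated in 64-bit arithmetic; would overflow if x were used directly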
The ARTIQ compiler can be thought of as overriding all built-in Python types, and types in kernel code cannot always be assumed to behave as they would in host Python. In particular, normally heap-allocated types such as arrays, lists, and strings are very limited in what they support. Strings must be constant and lists and arrays must be of constant size. Methods like ``append``, ``push``, and ``pop`` are unavailable as a matter of principle, and will not compile. Certain types, notably dictionaries, have no ARTIQ implementation and cannot be used in kernels at all.
.. tip::
Instead of pushing or appending, preallocate for the maximum number of elements you expect with a list comprehension, i.e. ``x = [0 for _ in range(1024)]``, and then keep a variable ``n`` noting the last filled element of the array. Afterwards, ``x[0:n]`` will give you a list with that number of elements.
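A minimal sketch of this pattern, with an arbitrary stopping condition for illustration: ::
    data = [0 for _ in range(1024)]   # preallocate for the maximum expected number of elements
    n = 0
    while n < 1024 and n * n < 500:   # arbitrary stopping condition
        data[n] = n * n
        n += 1
    squares = data[0:n]               # list containing only the elements actually filled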
Multidimensional arrays are allowed (using NumPy syntax). Element-wise operations (e.g. ``+``, ``/``), matrix multiplication (``@``) and multidimensional indexing are supported; slices and views (currently) are not.
User-defined classes are supported, provided their attributes are of other supported types (attributes that are not used in the kernel are ignored and thus unrestricted). When several instances of a user-defined class are referenced from the same kernel, every attribute must have the same type in every instance of the class.
Basic ARTIQ Python
^^^^^^^^^^^^^^^^^^
Basic Python features can broadly be used inside kernels without further complications. This includes loops (``for`` / ``while`` / ``break`` / ``continue``), conditionals (``if`` / ``else`` / ``elif``), functions, exceptions, ``try`` / ``except`` / ``else`` blocks, and statically typed variables of any supported types.
Kernel code can call host functions without any additional ceremony. However, such functions are assumed to return ``None``, and if a value other than ``None`` is returned, an exception is raised. To call a host function returning a value other than ``None`` its return type must be annotated, using the standard Python syntax, e.g.: ::
def return_four() -> TInt32:
return 4
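Called from a kernel, such an annotated host function might be used as in the following sketch: ::
    @kernel
    def run(self):
        self.core.reset()
        n = return_four()   # executed on the host; the annotation tells the compiler the return type
        delay(n * ms)       # the returned value can then be used in kernel code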
Kernels can freely modify attributes of objects shared with the host. However, by necessity, these modifications are actually applied to local copies of the objects, as the latency of immediate writeback would be unsupportable in a real-time environment. Instead, modifications are written back *when the kernel completes;* notably, this means RPCs called by a kernel itself will only have access to the unmodified host version of the object, as the kernel hasn't finished execution yet. In some cases, accessing data on the host is better handled by calling RPCs specifically to make the desired modifications.
.. warning::
Kernels *cannot and should not* return lists, arrays, or strings they have created, or any objects containing them; in the absence of a heap, the way these values are allocated means they cannot outlive the kernels they are created in. Trying to do so will normally be discovered by lifetime tracking and result in compilation errors, but in certain cases lifetime tracking will fail to detect a problem and experiments will encounter memory corruption at runtime. For example: ::
def func(a):
return a
class ProblemReturn1(EnvExperiment):
def build(self):
self.setattr_device("core")
@kernel
def run(self):
# results in memory corruption
return func([1, 2, 3])
will compile, **but corrupts at runtime.** On the other hand, lists, arrays, or strings can and should be used as inputs for RPCs, and this is the preferred method of returning data to the host. In this way the data is inherently read and sent before the kernel completes and there are no allocation issues.
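For example, a kernel might hand its results to the host through an RPC along the following lines (a sketch; the dataset name is arbitrary): ::
    def store_results(self, results):
        # executed on the host; the received list can be stored or processed freely
        self.set_dataset("results", results, broadcast=True)

    @kernel
    def run(self):
        self.core.reset()
        data = [0 for _ in range(8)]
        for i in range(8):
            data[i] = i * i
        self.store_results(data)   # the list is read and sent to the host at this call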
Available built-in functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ARTIQ makes various useful built-in and mathematical functions from Python, NumPy, and SciPy available in kernel code. They are not guaranteed to be perfectly equivalent to their host namesakes (for example, ``numpy.rint()`` normally rounds-to-even, but in kernel code rounds toward zero) but their behavior should be basically predictable.
.. list-table::
    :header-rows: 1

    * - Reference
      - Functions
    * - `Python built-ins <https://docs.python.org/3/library/functions.html>`_
      - - ``len()``, ``round()``, ``abs()``, ``min()``, ``max()``
        - ``print()`` (with caveats; see below)
        - all basic type conversions (``int()``, ``float()`` etc.)
    * - `NumPy mathematic utilities <https://numpy.org/doc/stable/reference/routines.math.html>`_
      - - ``sqrt()``, ``cbrt()``
        - ``fabs()``, ``fmax()``, ``fmin()``
        - ``floor()``, ``ceil()``, ``trunc()``, ``rint()``
    * - `NumPy exponents and logarithms <https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms>`_
      - - ``exp()``, ``exp2()``, ``expm1()``
        - ``log()``, ``log2()``, ``log10()``
    * - `NumPy trigonometric and hyperbolic functions <https://numpy.org/doc/stable/reference/routines.math.html#trigonometric-functions>`_
      - - ``sin()``, ``cos()``, ``tan()``
        - ``arcsin()``, ``arccos()``, ``arctan()``
        - ``sinh()``, ``cosh()``, ``tanh()``
        - ``arcsinh()``, ``arccosh()``, ``arctanh()``
        - ``hypot()``, ``arctan2()``
    * - `NumPy floating point routines <https://numpy.org/doc/stable/reference/routines.math.html#floating-point-routines>`_
      - - ``copysign()``, ``nextafter()``
    * - `SciPy special functions <https://docs.scipy.org/doc/scipy/reference/special.html>`_
      - - ``erf()``, ``erfc()``
        - ``gamma()``, ``gammaln()``
        - ``j0()``, ``j1()``, ``y0()``, ``y1()``
Basic NumPy array handling (``np.array()``, ``numpy.transpose()``, ``numpy.full()``, ``@``, element-wise operations, etc.) is also available. NumPy functions are implicitly broadcast when applied to arrays.
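A brief kernel-side sketch (assuming ``numpy`` is imported as ``np``): ::
    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = np.transpose(a)
    c = a @ b               # matrix multiplication
    d = np.sin(a) + 1.0     # NumPy functions broadcast element-wise over arrays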
Print and logging functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^
ARTIQ offers two native built-in logging functions: ``rtio_log()``, which prints to the :ref:`RTIO log <rtio-analyzer>`, as retrieved by :mod:`~artiq.frontend.artiq_coreanalyzer`, and ``core_log()``, which prints directly to the core log, regardless of context or network connection status. Both exist for debugging purposes, especially in contexts where a ``print()`` RPC is not suitable, such as in idle/startup kernels or when debugging delicate RTIO slack issues which may be significantly affected by the overhead of ``print()``.
``print()`` itself is in practice an RPC to the regular host Python ``print()``, i.e. with output either in the terminal of :mod:`~artiq.frontend.artiq_run` or in the client logs when using :mod:`~artiq.frontend.artiq_dashboard` or :mod:`~artiq.frontend.artiq_compile`. This means on one hand that it should not be used in idle, startup, or subkernels, and on the other hand that it suffers from some of the timing limitations of any other RPC, especially if the RPC queue is full. Accordingly, it is important to be aware that the timing of ``print()`` outputs can't reliably be used to debug timing in kernels, and especially not the timing of other RPCs.
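For instance, a kernel might mark the progress of a loop in the RTIO log as in this sketch (the log channel name is arbitrary): ::
    @kernel
    def run(self):
        self.core.reset()
        for i in range(4):
            delay(5*ms)
            rtio_log("loop_marker", i)   # recorded at the current timeline position, retrievable with the analyzer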
.. _compiler-pitfalls:
Pitfalls
--------
Arbitrary-length integers are not supported at all on the core device; all integers are either 32-bit or 64-bit. This especially affects calculations that result in a 32-bit signed overflow. If the compiler detects a constant that can't fit into 32 bits, the entire expression will be upgraded to 64-bit arithmetic, but if all constants are small, 32-bit arithmetic is used even if the result will overflow. Overflows are not detected.
The result of calling the builtin ``round`` function is different when used with the builtin ``float`` type and the ``numpy.float64`` type on the host interpreter; ``round(1.0)`` returns an integer value 1, whereas ``round(numpy.float64(1.0))`` returns a floating point value ``numpy.float64(1.0)``. Since both ``float`` and ``numpy.float64`` are mapped to the builtin ``float`` type on the core device, this can lead to problems in functions marked ``@portable``; the workaround is to explicitly cast the argument of ``round`` to ``float``: ``round(float(numpy.float64(1.0)))`` returns an integer on the core device as well as on the host interpreter.
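A sketch of that workaround in a portable function: ::
    @portable
    def to_nearest_int(x):
        # casting to float first makes round() return an integer on the host and on the core device alike
        return round(float(x))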
Empty lists do not have valid list element types, so they cannot be used in the kernel.
In kernels, the lifetime of allocated values (e.g. lists or NumPy arrays) might not be correctly tracked across function calls (see `#1497 <https://github.com/m-labs/artiq/issues/1497>`_ and `#1677 <https://github.com/m-labs/artiq/issues/1677>`_), as in the ``ProblemReturn1`` example shown in the warning above.
Flags and optimizations
-----------------------
The ARTIQ compiler runs many optimizations, most of which perform well on code that has pristine Python semantics. It also contains more powerful, and more invasive, optimizations that require opt-in to activate.
Asynchronous RPCs
^^^^^^^^^^^^^^^^^
If an RPC returns no value, it can be invoked in a way that does not block until the RPC finishes execution, but only until it is queued. (Submitting asynchronous RPCs too rapidly, as well as submitting asynchronous RPCs with arguments that are too large, can still block until completion.)
To define an asynchronous RPC, use the ``@rpc`` annotation with a flag: ::
@ -111,17 +190,12 @@ To define an asynchronous RPC, use the ``@rpc`` annotation with a flag: ::
def record_result(x):
self.results.append(x)
Fast-math flags
^^^^^^^^^^^^^^^
The compiler does not normally perform algebraically equivalent transformations on floating-point expressions, because this can dramatically change the result. However, it can be instructed to do so if all of the following are true:
* Arguments and results will not be Not-a-Number or infinite;
* The sign of a zero value is insignificant;
* Any algebraically equivalent transformations, such as reassociation or replacing division with multiplication by reciprocal, are legal to perform.
@ -134,7 +208,7 @@ If this is the case for a given kernel, a ``fast-math`` flag can be specified to
This flag particularly benefits loops with I/O delays performed in fractional seconds rather than machine units, as well as updates to DDS phase and frequency.
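For reference, the flag is passed to the ``@kernel`` decorator, roughly as in this sketch: ::
    @kernel(flags={"fast-math"})
    def scaled(self, x, y, z):
        # the compiler may now freely reassociate and rearrange these floating-point operations
        return x / z + y / z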
Kernel invariants
^^^^^^^^^^^^^^^^^
The compiler attempts to remove or hoist out of loops any redundant memory load operations, as well as propagate known constants into function bodies, which can enable further optimization. However, it must make conservative assumptions about code that it is unable to observe, because such code can change the value of the attribute, making the optimization invalid.
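Kernel invariants are declared by listing the relevant attribute names in a ``kernel_invariants`` set on the object, roughly as in this sketch: ::
    class Stepper:
        kernel_invariants = {"interval"}

        def __init__(self, interval):
            self.interval = interval   # promised never to change while a kernel is running

        @kernel
        def wait_step(self):
            delay(self.interval)       # the compiler may treat self.interval as a constant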
@ -171,7 +245,7 @@ In the synthetic example above, the compiler will be able to detect that the res
delay(self.worker.interval / 5.0)
self.worker.work()
In the synthetic example above, the compiler will be able to detect that the result of evaluating ``self.worker.interval / 5.0`` never changes, even though it neither knows the value of ``self.worker.interval`` beforehand nor can see through the ``self.worker.work()`` function call, and thus can hoist the expensive floating-point division out of the loop, transforming the code for ``loop`` into an equivalent of the following: ::
@kernel
def loop(self):


@ -29,26 +29,17 @@ builtins.__in_sphinx__ = True
# we cannot use autodoc_mock_imports (does not help with argparse)
mock_modules = ["artiq.gui.waitingspinnerwidget",
"artiq.gui.flowlayout",
"artiq.gui.state",
"artiq.gui.log",
"artiq.gui.models",
"artiq.compiler.module",
"artiq.compiler.embedding",
"artiq.dashboard.waveform",
"artiq.dashboard.interactive_args",
"qasync", "pyqtgraph", "matplotlib", "lmdb",
"numpy", "dateutil", "dateutil.parser", "prettytable", "PyQt5",
"h5py", "serial", "scipy", "scipy.interpolate",
"llvmlite", "Levenshtein", "pythonparser",
"sipyco", "sipyco.pc_rpc", "sipyco.sync_struct",
"sipyco.asyncio_tools", "sipyco.logging_tools",
"sipyco.broadcast", "sipyco.packed_exceptions",
"sipyco.keepalive", "sipyco.pipe_ipc"]
"artiq.coredevice.jsondesc",
"qasync", "lmdb", "dateutil.parser", "prettytable", "PyQt5",
"h5py", "llvmlite", "pythonparser", "tqdm", "jsonschema"]
for module in mock_modules:
sys.modules[module] = Mock()
# https://stackoverflow.com/questions/29992444/sphinx-autodoc-skips-classes-inherited-from-mock
class MockApplets:
class AppletsDock:
@ -79,6 +70,7 @@ extensions = [
'sphinx.ext.napoleon',
'sphinxarg.ext',
'sphinxcontrib.wavedrom', # see also below for config
"sphinxcontrib.jquery",
]
mathjax_path = "https://m-labs.hk/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML.js"
@ -146,6 +138,25 @@ pygments_style = 'sphinx'
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, Sphinx will warn about *all* references where the target cannot be found.
nitpicky = True
# (type, target) regex tuples to ignore when generating warnings in 'nitpicky' mode
# i.e. objects that are not documented in this manual and do not need to be
nitpick_ignore_regex = [
(r'py:.*', r'numpy..*'),
(r'py:.*', r'sipyco..*'),
('py:const', r'.*'), # no constants are documented anyway
('py:attr', r'.*'), # nor attributes
(r'py:.*', r'artiq.gateware.*'),
('py:mod', r'artiq.test.*'),
('py:mod', r'artiq.applets.*'),
('py:class', 'dac34H84'),
('py:class', 'trf372017'),
('py:class', r'list(.*)'),
('py:class', 'NoneType'),
('py:meth', 'pause')
]
# -- Options for HTML output ----------------------------------------------

doc/manual/configuring.rst (new file)
@ -0,0 +1,129 @@
Networking and configuration
============================
.. _core-device-networking:
Setting up core device networking
---------------------------------
For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have a SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion.
You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network. Kasli-SoC already directly features RJ45 10/100/1000 Ethernet.
IP address and ping
^^^^^^^^^^^^^^^^^^^
If you purchased a Kasli or Kasli-SoC device from M-Labs, it will arrive with an IP address already set, normally the address requested in the web shop at time of purchase. If you did not specify an address at purchase, the default IP M-Labs uses is ``192.168.1.75``. If you did not obtain your hardware from M-Labs, or if you have just reflashed your core device, see :ref:`networking-tips` below.
Once you know the IP, check that you can ping your device: ::
$ ping <IP_address>
If ping fails, check that the Ethernet LED is ON; on Kasli, it is the LED next to the SFP0 connector. As a next step, try connecting to the serial port to read the UART log. See :ref:`connecting-UART`.
Core management tool
^^^^^^^^^^^^^^^^^^^^
The tool used to configure the core device is the command-line utility :mod:`~artiq.frontend.artiq_coremgmt`. In order for it to connect to your core device, it is necessary to supply it somehow with the correct IP address for your core device. This can be done directly through use of the ``-D`` option, for example in: ::
$ artiq_coremgmt -D <IP_address> log
.. note::
This command reads and displays the core log. If you have recently rebooted or reflashed your core device, you should see the startup logs in your terminal.
Normally, however, the core device IP is supplied through the *device database* for your system, which comes in the form of a Python script called ``device_db.py`` (see also :ref:`device-db`). If you purchased a system from M-Labs, the ``device_db.py`` for your system will have been provided for you, either on the USB stick, inside ``~/artiq`` on your NUC, or sent by email.
Make sure the field ``core_addr`` at the top of the file is set to your core device's correct IP address, and always execute :mod:`~artiq.frontend.artiq_coremgmt` from the same directory the device database is placed in.
Once you can reach your core device, the IP can be changed at any time by running: ::
$ artiq_coremgmt [-D old_IP] config write -s ip <new_IP>
and then rebooting the device: ::
$ artiq_coremgmt [-D old_IP] reboot
Make sure to correspondingly edit your ``device_db.py`` after rebooting.
.. _networking-tips:
Tips and troubleshooting
^^^^^^^^^^^^^^^^^^^^^^^^
For Kasli-SoC:
If the ``ip`` config is not set, Kasli-SoC firmware defaults to using the IP address ``192.168.1.56``.
For ZC706:
If the ``ip`` config is not set, ZC706 firmware defaults to using the IP address ``192.168.1.52``.
For Kasli or KC705:
If the ``ip`` config field is not set or set to ``use_dhcp``, the device will attempt to obtain an IP address and default gateway using DHCP. The chosen IP address will be in log output, which can be accessed via the :ref:`UART log <connecting-UART>`.
If a static IP address is preferred, it can be flashed directly (OpenOCD must be installed and configured, as in :doc:`flashing`), along with, as necessary, default gateway, IPv6, and/or MAC address: ::
$ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]
$ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, default gateway and IPv4 prefix length may also be omitted. On any board, once a device can be reached by :mod:`~artiq.frontend.artiq_coremgmt`, these values can be set and edited at any time, following the procedure for IP above.
Regarding IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (potentially unrelated) IPv6 address defined by using the ``ip6`` configuration key.
If problems persist, see the :ref:`network troubleshooting <faq-networking>` section of the FAQ.
.. _core-device-config:
Configuring the core device
---------------------------
.. note::
The following steps are optional, and you only need to execute them if they are necessary for your specific system. To learn more about how ARTIQ works and how to use it first, you might skip to the first tutorial page, :doc:`rtio`. For all configuration options, the core device generally must be restarted for changes to take effect.
Flash idle and/or startup kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *idle kernel* is the kernel (that is, a piece of code running on the core device; see :doc:`rtio` for further explanation) which the core device runs in between experiments and whenever not connected to the host. It is saved directly to the core device's flash storage in compiled form. Potential uses include cleanup of the environment between experiments, state maintenance for certain hardware, or anything else that should run continuously whenever the system is not otherwise occupied.
To flash an idle kernel, first write an idle experiment. Note that since the idle kernel runs regardless of whether the core device is connected to the host, remote procedure calls or RPCs (functions called by a kernel to run on the host) are forbidden and the ``run()`` method must be a kernel marked with ``@kernel``. Once written, you can compile and flash your idle experiment: ::
$ artiq_compile idle.py
$ artiq_coremgmt config write -f idle_kernel idle.elf
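For reference, a minimal idle experiment might look like the following sketch (the ``led0`` device name is an assumption and depends on your device database): ::
    from artiq.experiment import *

    class IdleKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
            self.core.reset()
            while True:
                self.led0.pulse(250*ms)   # blink an LED to show the device is idling
                delay(250*ms)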
The *startup kernel* is a kernel executed once and only once immediately whenever the core device powers on. Uses include initializing DDSes and setting TTL directions. For DRTIO systems, the startup kernel should wait until the desired destinations, including local RTIO, are up, using ``self.core.get_rtio_destination_status`` (see :meth:`~artiq.coredevice.core.Core.get_rtio_destination_status`).
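In a DRTIO system, the start of such a startup kernel might be sketched as follows (destination ``0`` is used here as a stand-in for whichever destinations your system needs to wait for): ::
    @kernel
    def run(self):
        self.core.reset()
        # wait until the destination reports ready before using any of its channels
        while not self.core.get_rtio_destination_status(0):
            pass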
To flash a startup kernel, proceed as with the idle kernel, but using the ``startup_kernel`` key in the :mod:`~artiq.frontend.artiq_coremgmt` command.
.. note::
Subkernels (see :doc:`using_drtio_subkernels`) are allowed in idle (and startup) experiments without any additional ceremony. :mod:`~artiq.frontend.artiq_compile` will produce a ``.tar`` rather than a ``.elf``; simply substitute ``idle.tar`` for ``idle.elf`` in the ``artiq_coremgmt config write`` command.
Select the RTIO clock source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The core device may use any of an external clock signal, its internal clock with external frequency reference, or its internal clock with internal crystal reference. Clock source and timing are set at power-up. To find out what clock signal you are using, check the startup logs with ``artiq_coremgmt log``.
The default is to use an internal 125MHz clock. To select a source, use a command of the form: ::
$ artiq_coremgmt config write -s rtio_clock int_125 # internal 125MHz clock (default)
$ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125 # external 10MHz reference used to synthesize internal 125MHz
See :ref:`core-device-clocking` for availability of specific options.
.. _config-rtiomap:
Set up resolving RTIO channels to their names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This feature allows you to print the channels' respective names alongside their numbers in RTIO error messages. To enable it, run the :mod:`~artiq.frontend.artiq_rtiomap` tool and write its result into the device config at the ``device_map`` key: ::
$ artiq_rtiomap dev_map.bin
$ artiq_coremgmt config write -f device_map dev_map.bin
More information on the ``artiq_rtiomap`` utility can be found on the :ref:`Utilities <rtiomap-tool>` page.
Enable event spreading
^^^^^^^^^^^^^^^^^^^^^^
This feature changes the logic used for queueing RTIO output events in the core device for a more efficient use of FPGA resources, at the cost of introducing nondeterminism and potential unpredictability in certain timing errors (specifically gateware :ref:`sequence errors<sequence-errors>`). It can be enabled with the config key ``sed_spread_enable``. See :ref:`sed-event-spreading`.
Load the DRTIO routing table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are using DRTIO and the default routing table (for a star topology) is not suitable to your needs, you will first need to prepare and load a different routing table. See :ref:`Using DRTIO <drtio-routing>`.


@ -1,82 +1,152 @@
Core device
===========
The core device is an FPGA-based hardware component that contains a softcore or hardcore CPU tightly coupled with the so-called RTIO core, which runs in gateware and provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler and communicates with peripherals (TTL, DDS, etc.) through the RTIO core, as described in :doc:`rtio`. This architecture provides high timing resolution, low latency, low jitter, high-level programming capabilities, and good integration with the rest of the Python experiment code.
While it is possible to use the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, most use cases will require it.
.. _configuration-storage:
.. _core-device-flash-storage:
Configuration storage
---------------------
The core device reserves some storage space (either flash or directly on SD card, depending on target board) to store configuration data. The configuration data is organized as a list of key-value records, accessible either through :mod:`~artiq.frontend.artiq_mkfs` and :mod:`~artiq.frontend.artiq_flash` or, preferably in most cases, the ``config`` option of the :mod:`~artiq.frontend.artiq_coremgmt` core management tool (see below). Information can be stored to keys of any name, but the specific keys currently used and referenced by ARTIQ are summarized below:
``idle_kernel``
Stores (compiled ``.tar`` or ``.elf`` binary of) idle kernel. See :ref:`core-device-config`.
``startup_kernel``
Stores (compiled ``.tar`` or ``.elf`` binary of) startup kernel. See :ref:`core-device-config`.
``ip``
Sets IP address of core device. For this and other networking options see also :ref:`core-device-networking`.
``mac``
Sets MAC address of core device. Unnecessary on Kasli or Kasli-SoC, which can obtain it from EEPROM.
``ipv4_default_route``
Sets IPv4 default route.
``ip6``
Sets IPv6 address of core device. Can be set irrespective of and used simultaneously as IPv4 address above.
``ipv6_default_route``
Sets IPv6 default route.
``sed_spread_enable``
If set to ``1``, will activate :ref:`sed-event-spreading` in this core device. Needs to be set separately for satellite devices in a DRTIO setting.
``log_level``
Sets core device log level. Possible levels are ``TRACE``, ``DEBUG``, ``INFO``, ``WARN``, ``ERROR``, and ``OFF``. Note that enabling higher log levels will produce some core device slowdown.
``uart_log_level``
Sets UART log level, with same options. Printing a large number of messages to UART log will produce a significant slowdown.
``rtio_clock``
Sets the RTIO clock; see :ref:`core-device-clocking`.
``routing_table``
Sets the routing table in DRTIO systems; see :ref:`drtio-routing`. If not set, a star topology is assumed.
``device_map``
If set, allows the core log to connect RTIO channels to device names and use device names as well as channel numbers in log output. A correctly formatted table can be automatically generated with :mod:`~artiq.frontend.artiq_rtiomap`, see :ref:`Utilities<rtiomap-tool>`.
``net_trace``
If set to ``1``, will activate net trace (print all packets sent and received to UART and core log). This will considerably slow down all network response from the core. Not applicable for ARTIQ-Zynq (Kasli-SoC, ZC706).
``panic_reset``
If set to ``1``, core device will restart automatically. Not applicable for ARTIQ-Zynq.
``no_flash_boot``
If set to ``1``, will disable flash boot. Network boot is attempted if possible. Not applicable for ARTIQ-Zynq.
``boot``
Allows full firmware/gateware (``boot.bin``) to be written with :mod:`~artiq.frontend.artiq_coremgmt`, on ARTIQ-Zynq systems only.
Common configuration commands
-----------------------------
The flash storage area is one sector (typically 64 kB) large and is organized as a list of key-value records.
To write, then read, the value ``test_value`` in the key ``my_key``::
$ artiq_coremgmt config write -s my_key test_value
$ artiq_coremgmt config read my_key
b'test_value'
.. _board-ports:
You do not need to remove a record in order to change its value. Just overwrite it::
$ artiq_coremgmt config write -s my_key some_value
$ artiq_coremgmt config write -s my_key some_other_value
$ artiq_coremgmt config read my_key
b'some_other_value'
You can write several records at once::
$ artiq_coremgmt config write -s key1 value1 -f key2 filename -s key3 value3
You can also write entire files in a record using the ``-f`` option. This is useful for instance to write the startup and idle kernels into the flash storage::
$ artiq_coremgmt config write -f idle_kernel idle.elf
$ artiq_coremgmt config read idle_kernel | head -c9
b'\x7fELF
The same option is used to write ``boot.bin`` in ARTIQ-Zynq. Note that the ``boot`` key is write-only.
See also the full reference of :mod:`~artiq.frontend.artiq_coremgmt` in :ref:`Utilities <core-device-management-tool>`.
.. _core-device-clocking:
Clocking
--------
The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If the latter is chosen, an external reference must be provided (via the front panel SMA input on Kasli boards). Valid configuration options include:
* ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock,
* ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option),
* ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock.
* ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock.
The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the :ref:`JSON description file <system-description>`. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.
If not using WRPLL, PLL can also be bypassed entirely with the options
* ``ext0_bypass`` (input clock used directly)
* ``ext0_bypass_125`` (explicit alias)
* ``ext0_bypass_100`` (explicit alias)
Bypassing the PLL ensures the skews between input clock, downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching the master.
Board details
-------------
FPGA board ports
****************
^^^^^^^^^^^^^^^^
All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging.
Kasli
-----
Kasli and Kasli-SoC
^^^^^^^^^^^^^^^^^^^
`Kasli <https://github.com/m-labs/sinara/wiki/Kasli>`_ is a versatile core device designed for ARTIQ as part of the `Sinara <https://github.com/m-labs/sinara/wiki>`_ family of boards. All variants support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) connected directly to it.
`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli is based on a Xilinx Artix-7 FPGA, and Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is based on a Zynq-7000 SoC, notably including an ARM CPU allowing for much heavier software computations at high speeds. They are architecturally very different but supply similar feature sets. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original v1.0 remains supported by ARTIQ.
Standalone variants
+++++++++++++++++++
Kasli can be connected to the network using a 1000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC).
Kasli is connected to the network using a 1000Base-X SFP module. `No-name <https://www.fs.com>`_ BiDi (1000Base-BX) modules have been used successfully. The SFP module for the network should be installed into the SFP0 cage.
The other SFP cages are not used.
The DRTIO line rate depends upon the RTIO clock frequency running, e.g., at 125MHz the line rate is 2.5Gbps, at 150MHz 3.0Gbps, etc. See below for information on RTIO clocks.
The RTIO clock frequency is 125MHz or 150MHz, which is generated by the Si5324.
KC705 and ZC706
^^^^^^^^^^^^^^^
DRTIO master variants
+++++++++++++++++++++
Two high-end evaluation kits are also supported as alternative ARTIQ core device target boards, respectively the Kintex7 `KC705 <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and Zynq-SoC `ZC706 <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_, both from Xilinx. ZC706, like Kasli-SoC, runs on the ARTIQ-Zynq port. Both are supported in several set variants, namely NIST CLOCK and QC2 (FMC), either available in master, satellite, or standalone variants. See also :doc:`building_developing` for more on system variants.
Kasli can be used as a DRTIO master that provides local RTIO channels and can additionally control one DRTIO satellite.
The RTIO clock frequency is 125MHz or 150MHz, which is generated by the Si5324. The DRTIO line rate is 2.5Gbps or 3Gbps.
As with the standalone configuration, the SFP module for the Ethernet network should be installed into the SFP0 cage. The DRTIO connections are on SFP1 and SFP2, and optionally on the SATA connector.
DRTIO satellite/repeater variants
+++++++++++++++++++++++++++++++++
Kasli can be used as a DRTIO satellite with a 125MHz or 150MHz RTIO clock and a 2.5Gbps or 3Gbps DRTIO line rate.
The DRTIO upstream connection is on SFP0 or optionally on the SATA connector, and the remaining SFPs are downstream ports.
KC705
-----
An alternative target board for the ARTIQ core device is the KC705 development board from Xilinx. It supports the NIST CLOCK and QC2 hardware (FMC).
Common problems
+++++++++++++++
Common KC705 problems
"""""""""""""""""""""
* The SW13 switches on the board need to be set to 00001.
* When connected, the CLOCK adapter breaks the JTAG chain due to TDI not being connected to TDO on the FMC mezzanine.
* On some boards, the JTAG USB connector is not correctly soldered.
VADJ
++++
""""
With the NIST CLOCK and QC2 adapters, for safe operation of the DDS buses (to prevent damage to the IO banks of the FPGA), the FMC VADJ rail of the KC705 should be changed to 3.3V. Plug the Texas Instruments USB-TO-GPIO PMBus adapter into the PMBus connector in the corner of the KC705 and use the Fusion Digital Power Designer software to configure (requires Windows). Write to chip number U55 (address 52), channel 4, which is the VADJ rail, to make it 3.3V instead of 2.5V. Power cycle the KC705 board to check that the startup voltage on the VADJ rail is now 3.3V.
Variant details
---------------
NIST CLOCK
++++++++++
^^^^^^^^^^
With the KC705 CLOCK hardware, the TTL lines are mapped as follows:
+--------------------+-----------------------+--------------+
| RTIO channel | TTL line | Capability |
@ -116,11 +186,21 @@ The board has RTIO SPI buses mapped as follows:
The DDS bus is on channel 27.
The ZC706 variant is identical except for the following differences:
- The SMA GPIO on channel 18 is replaced by an Input+Output capable PMOD1_0 line.
- Since there is no SDIO on the programmable logic side, channel 26 is instead occupied by an additional SPI:
+--------------+------------------+--------------+--------------+--------------+
| RTIO channel | CS_N | MOSI | MISO | CLK |
+==============+==================+==============+==============+==============+
| 26 | PMOD_SPI_CS_N | PMOD_SPI_MOSI| PMOD_SPI_MISO| PMOD_SPI_CLK |
+--------------+------------------+--------------+--------------+--------------+
NIST QC2
++++++++
^^^^^^^^
With the KC705 QC2 hardware, the TTL lines are mapped as follows:
+--------------------+-----------------------+--------------+
| RTIO channel | TTL line | Capability |
@ -154,32 +234,11 @@ The board has RTIO SPI buses mapped as follows:
There are two DDS buses on channels 50 (LPC, DDS0-DDS11) and 51 (HPC, DDS12-DDS23).
The QC2 hardware uses TCA6424A I2C I/O expanders to define the directions of its TTL buffers. There is one such expander per FMC card, and they are selected using the PCA9548 on the KC705.
To avoid I/O contention, the startup kernel should first program the TCA6424A expanders and then call ``output()`` on all ``TTLInOut`` channels that should be configured as outputs. See :mod:`artiq.coredevice.i2c` for more details.
The ZC706 is identical except for the following differences:
- The SMA GPIO is once again replaced with PMOD1_0.
- The first four TTLs also have edge counters, on channels 52, 53, 54, and 55.
Clocking
++++++++
The KC705 in standalone variants supports an internal 125 MHz RTIO clock (based on its crystal oscillator, or external reference for PLL for DRTIO variants) and an external clock, that can be selected using the ``rtio_clock`` configuration entry. Valid values are:
* ``int_125`` - internal crystal oscillator, 125 MHz output (default),
* ``ext0_bypass`` - external clock.
KC705 in DRTIO variants and Kasli generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. Valid values are:
* ``int_125`` - internal crystal oscillator using PLL, 125 MHz output (default),
* ``int_100`` - internal crystal oscillator using PLL, 100 MHz output,
* ``int_150`` - internal crystal oscillator using PLL, 150 MHz output,
* ``ext0_synth0_10to125`` - external 10 MHz reference using PLL, 125 MHz output,
* ``ext0_synth0_80to125`` - external 80 MHz reference using PLL, 125 MHz output,
* ``ext0_synth0_100to125`` - external 100 MHz reference using PLL, 125 MHz output,
* ``ext0_synth0_125to125`` - external 125 MHz reference using PLL, 125 MHz output,
* ``ext0_bypass``, ``ext0_bypass_125``, ``ext0_bypass_100`` - external clock - with explicit aliases available.
The selected option can be observed in the core device boot logs.
Options ``rtio_clock=int_XXX`` and ``rtio_clock=ext0_synth0_XXXXX`` generate the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference (depending on exact option). ``rtio_clock=ext0_bypass`` bypasses that PLL and the user must supply the RTIO clock (typically 125 MHz) at the Kasli front panel SMA input. Bypassing the PLL ensures the skews between input clock, Kasli downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching Kasli.


@ -1,32 +1,31 @@
Core drivers reference
Core real-time drivers
======================
These drivers are for the core device and the peripherals closely integrated into it, which do not use the controller mechanism.
System drivers
--------------
:mod:`artiq.coredevice.core` module
+++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.core
:members:
:mod:`artiq.coredevice.exceptions` module
+++++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.exceptions
:members:
:mod:`artiq.coredevice.dma` module
++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.dma
:members:
:mod:`artiq.coredevice.cache` module
++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.cache
:members:
@ -36,25 +35,25 @@ Digital I/O drivers
-------------------
:mod:`artiq.coredevice.ttl` module
++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.ttl
:members:
:mod:`artiq.coredevice.edge_counter` module
++++++++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.edge_counter
:members:
:mod:`artiq.coredevice.spi2` module
+++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.spi2
:members:
:mod:`artiq.coredevice.i2c` module
++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.i2c
:members:
@ -63,49 +62,49 @@ RF generation drivers
---------------------
:mod:`artiq.coredevice.urukul` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.urukul
:members:
:mod:`artiq.coredevice.ad9910` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.ad9910
:members:
:mod:`artiq.coredevice.ad9912` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.ad9912
:members:
:mod:`artiq.coredevice.ad9914` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.ad9914
:members:
:mod:`artiq.coredevice.mirny` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.mirny
:members:
:mod:`artiq.coredevice.almazny` module
++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.almazny
:members:
:mod:`artiq.coredevice.adf5356` module
+++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.adf5356
:members:
:mod:`artiq.coredevice.phaser` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.phaser
:members:
@ -114,37 +113,37 @@ DAC/ADC drivers
---------------
:mod:`artiq.coredevice.ad53xx` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.ad53xx
:members:
:mod:`artiq.coredevice.zotino` module
+++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.zotino
:members:
:mod:`artiq.coredevice.sampler` module
++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.sampler
:members:
:mod:`artiq.coredevice.novogorny` module
++++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.novogorny
:members:
:mod:`artiq.coredevice.fastino` module
++++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.fastino
:members:
:mod:`artiq.coredevice.shuttler` module
++++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.shuttler
:members:
@ -154,14 +153,14 @@ Miscellaneous
-------------
:mod:`artiq.coredevice.suservo` module
++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.suservo
:members:
:mod:`artiq.coredevice.grabber` module
++++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: artiq.coredevice.grabber
:members:


@ -1,28 +1,34 @@
Core language reference
=======================
Core language and environment
=============================
The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in :mod:`artiq.experiment` and can be imported with ``from artiq.experiment import *``.
The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in ``artiq.experiment`` and can be imported with ``from artiq.experiment import *``.
:mod:`artiq.language.core` module
---------------------------------
.. automodule:: artiq.language.core
:member-order: bysource
:members:
:mod:`artiq.language.environment` module
----------------------------------------
.. automodule:: artiq.language.environment
:member-order: bysource
:members:
:mod:`artiq.language.scan` module
----------------------------------------
.. automodule:: artiq.language.scan
:member-order: bysource
:members:
:mod:`artiq.language.units` module
----------------------------------
This module contains floating point constants that correspond to common physical units (ns, MHz, ...).
They are provided for convenience (e.g write ``MHz`` instead of ``1000000.0``) and code clarity purposes.
.. displays nothing, but makes references work
.. automodule:: artiq.language.units
:members:
This module contains floating point constants that correspond to common physical units (ns, MHz, ...). They are provided for convenience (e.g. write ``MHz`` instead of ``1000000.0``) and for code clarity.


@ -10,9 +10,9 @@ Default network ports
+---------------------------------+--------------+
| Core device (analyzer) | 1382 |
+---------------------------------+--------------+
| Moninj (core device or proxy) | 1383 |
| MonInj (core device or proxy) | 1383 |
+---------------------------------+--------------+
| Moninj (proxy control) | 1384 |
| MonInj (proxy control) | 1384 |
+---------------------------------+--------------+
| Core analyzer proxy (proxy) | 1385 |
+---------------------------------+--------------+


@ -1,22 +0,0 @@
.. _developing-artiq:
Developing ARTIQ
^^^^^^^^^^^^^^^^
.. warning::
This section is only for software or FPGA developers who want to modify ARTIQ. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we normally provide board binaries for you.
The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier.
* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment, which can be entered with the command `vivado-env`). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
* During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
* Install the `Nix package manager <http://nixos.org/nix/>`_, version 2.4 or later. Prefer a single-user installation for simplicity.
* If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly.
* Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``).
* Clone the ARTIQ Git repository and run ``nix develop`` at the root (where ``flake.nix`` is).
* Make the current source code of ARTIQ available to the Python interpreter by running ``export PYTHONPATH=`pwd`:$PYTHONPATH``.
* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli <description>.json``, using a JSON system description file.
* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/<your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the flake's development environment.
* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).


@ -1,39 +1,41 @@
Developing a Network Device Support Package (NDSP)
==================================================
Most ARTIQ devices are interfaced through "controllers" that expose RPC interfaces to the network (based on SiPyCo). The master never does direct I/O to the devices, but issues RPCs to the controllers when needed. As opposed to running everything on the master, this architecture has those main advantages:
Besides the specialized real-time hardware that most of ARTIQ is concerned with controlling and managing, ARTIQ also easily handles more conventional 'slow' devices. This is done through *controllers*, based on `SiPyCo <https://github.com/m-labs/sipyco>`_ (manual hosted `here <https://m-labs.hk/artiq/sipyco-manual/>`_), which expose remote procedure call (RPC) interfaces to the network. This allows experiments to issue RPCs to the controllers as necessary, without needing to do direct I/O to the devices. Some advantages of this architecture include:
* Each driver can be run on a different machine, which alleviates cabling issues and OS compatibility problems.
* Controllers/drivers can be run on different machines, alleviating cabling issues and OS compatibility problems.
* Reduces the impact of driver crashes.
* Reduces the impact of driver memory leaks.
This mechanism is for "slow" devices that are directly controlled by a PC, typically over a non-realtime channel such as USB.
Certain devices (such as the PDQ2) may still perform real-time operations by having certain controls physically connected to the core device (for example, the trigger and frame selection signals on the PDQ2). For handling such cases, parts of the support infrastructure may be kernels executed on the core device.
Certain devices (such as the PDQ2) may still perform real-time operations by having certain controls physically connected to the core device (for example, the trigger and frame selection signals on the PDQ2). For handling such cases, parts of the NDSPs may be kernels executed on the core device.
.. seealso::
Some NDSPs for particular devices have already been written and made available to the public. A (non-exhaustive) list can be found on the page :doc:`list_of_ndsps`.
A network device support package (NDSP) is composed of several parts:
Components of a NDSP
--------------------
1. The `driver`, which contains the Python API functions to be called over the network, and performs the I/O to the device. The top-level module of the driver is called ``artiq.devices.XXX.driver``.
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and is called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it.
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data must be transferred over the network API, that would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters.
Full support for a specific device, called a network device support package or NDSP, requires several parts:
1. The `driver`, which contains the Python API functions to be called over the network and performs the I/O to the device. The top-level module of the driver should be called ``artiq.devices.XXX.driver``.
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and should be called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it.
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data, which would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters, must be transferred over the network API.
4. An optional `mediator`, which is code executed on the client that supplements the network API. A mediator may contain kernels that control real-time signals such as TTL lines connected to the device. Simple devices use the network API directly and do not have a mediator. Mediator modules are called ``artiq.devices.XXX.mediator`` and their public classes are exported at the ``artiq.devices.XXX`` level (via ``__init__.py``) for direct import and use by the experiments.
The driver and controller
-------------------------
A controller is a piece of software that receives commands from a client over the network (or the ``localhost`` interface), drives a device, and returns information about the device to the client. The mechanism used is remote procedure calls (RPCs) using ``sipyco.pc_rpc``, which makes the network layers transparent for the driver's user.
As an example, we will develop a controller for a "device" that is very easy to work with: the console from which the controller is run. The operation that the driver will implement (and offer as an RPC) is writing a message to that console.
The controller we will develop is for a "device" that is very easy to work with: the console from which the controller is run. The operation that the driver will implement is writing a message to that console.
For using RPC, the functions that a driver provides must be the methods of a single object. We will thus define a class that provides our message-printing method: ::
To use RPCs, the functions that a driver provides must be the methods of a single object. We will thus define a class that provides our message-printing method: ::
class Hello:
def message(self, msg):
print("message: " + msg)
For a more complex driver, you would put this class definition into a separate Python module called ``driver``.
For a more complex driver, we would place this class definition into a separate Python module called ``driver``. In this example, for simplicity, we can include it in the controller module.
To turn it into a server, we use ``sipyco.pc_rpc``. Import the function we will use: ::
For the controller itself, we will turn this method into a server using ``sipyco.pc_rpc``. Import the function we will use: ::
from sipyco.pc_rpc import simple_server_loop
@ -45,13 +47,14 @@ and add a ``main`` function that is run when the program is executed: ::
if __name__ == "__main__":
main()
:tip: Defining the ``main`` function instead of putting its code directly in the ``if __name__ == "__main__"`` body enables the controller to be used as well as a setuptools entry point.
.. tip::
Defining the ``main`` function instead of putting its code directly in the ``if __name__ == "__main__"`` body enables the controller to be used as a setuptools entry point as well.
The parameters ``::1`` and ``3249`` are respectively the address to bind the server to (IPv6 localhost) and the TCP port to use. Then add a line: ::
The parameters ``::1`` and ``3249`` are respectively the address to bind the server to (in this case, we use IPv6 localhost) and the TCP port. Add a line: ::
#!/usr/bin/env python3
at the beginning of the file, save it to ``aqctl_hello.py`` and set its execution permissions: ::
at the beginning of the file, save it as ``aqctl_hello.py``, and set its execution permissions: ::
$ chmod 755 aqctl_hello.py
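Putting the pieces together, the whole of ``aqctl_hello.py`` at this point amounts to something like the following sketch (consolidated for readability): ::

    #!/usr/bin/env python3

    from sipyco.pc_rpc import simple_server_loop


    class Hello:
        def message(self, msg):
            print("message: " + msg)


    def main():
        # serve the Hello object as RPC target "hello" on IPv6 localhost, port 3249
        simple_server_loop({"hello": Hello()}, "::1", 3249)

    if __name__ == "__main__":
        main()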
@ -59,16 +62,18 @@ Run it as: ::
$ ./aqctl_hello.py
and verify that you can connect to the TCP port: ::
In a different console, verify that you can connect to the TCP port: ::
$ telnet ::1 3249
Trying ::1...
Connected to ::1.
Escape character is '^]'.
:tip: Use the key combination Ctrl-AltGr-9 to get the ``telnet>`` prompt, and enter ``close`` to quit Telnet. Quit the controller with Ctrl-C.
.. tip::
Also verify that a target (service) named "hello" (as passed in the first argument to ``simple_server_loop``) exists using the ``sipyco_rpctool`` program from the ARTIQ front-end tools: ::
To exit telnet, use the escape character combination (Ctrl + ]) to access the ``telnet>`` prompt, and then enter ``quit`` or ``close`` to close the connection.
Also verify that a target (i.e. available service for RPC) named "hello" exists: ::
$ sipyco_rpctool ::1 3249 list-targets
Target(s): hello
@ -76,7 +81,7 @@ Also verify that a target (service) named "hello" (as passed in the first argume
The client
----------
Clients are small command-line utilities that expose certain functionalities of the drivers. The ``sipyco_rpctool`` utility contains a generic client that can be used in most cases, and developing a custom client is not required. Try these commands ::
Clients are small command-line utilities that expose certain functionalities of the drivers. The ``sipyco_rpctool`` utility contains a generic client that can be used in most cases, and developing a custom client is not required. You have already used it above in the form of ``list-targets``. Try these commands: ::
$ sipyco_rpctool ::1 3249 list-methods
$ sipyco_rpctool ::1 3249 call message test
@ -98,21 +103,54 @@ In case you are developing a NDSP that is complex enough to need a custom client
if __name__ == "__main__":
main()
Run it as before, while the controller is running. You should see the message appearing on the controller's terminal: ::
Run it as before, making sure the controller is running first. You should see the message appear in the controller's terminal: ::
$ ./aqctl_hello.py
message: Hello World!
When using the driver in an experiment, the ``Client`` instance can be returned by the environment mechanism (via the ``get_device`` and ``attr_device`` methods of :class:`artiq.language.environment.HasEnvironment`) and used normally as a device.
We see that the client has made a request to the server, which has, through the driver, performed the requisite I/O with the "device" (its console), resulting in the operation we wanted. Success!
:warning: RPC servers operate on copies of objects provided by the client, and modifications to mutable types are not written back. For example, if the client passes a list as a parameter of an RPC method, and that method ``append()s`` an element to the list, the element is not appended to the client's list.
.. warning::
Note that RPC servers operate on copies of objects provided by the client, and modifications to mutable types are not written back. For example, if the client passes a list as a parameter of an RPC method, and that method ``append()s`` an element to the list, the element is not appended to the client's list.
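To illustrate, with ``remote`` standing for a connected ``sipyco.pc_rpc.Client`` and ``append_to_list`` a hypothetical driver method (neither is part of the example above): ::

    # driver (server) side - hypothetical method
    def append_to_list(self, lst):
        lst.append(42)      # modifies only the server-side copy of the list

    # client side
    data = []
    remote.append_to_list(data)
    print(data)             # prints [] - the appended element is not written back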
To access this driver in an experiment, we can retrieve the ``Client`` instance with the ``get_device`` and ``setattr_device`` methods of :class:`artiq.language.environment.HasEnvironment`, and then use it like any other device (provided the controller is running and accessible at the time).
.. _ndsp-integration:
Integration with ARTIQ experiments
----------------------------------
Generally we will want to add the device to our :ref:`device database <device-db>` so that we can add it to an experiment with ``self.setattr_device`` and so the controller can be started and stopped automatically by a controller manager (the :mod:`~artiq_comtools.artiq_ctlmgr` utility from ``artiq-comtools``). To do so, add an entry to your device database in this format: ::
device_db.update({
"hello": {
"type": "controller",
"host": "::1",
"port": 3249,
"command": "python /abs/path/to/aqctl_hello.py -p {port}"
},
})
Now it can be added using ``self.setattr_device("hello")`` in the ``build()`` phase of the experiment, and its methods accessed via: ::
self.hello.message("Hello world!")
.. note::
In order to be correctly started and stopped by a controller manager, your controller must additionally implement a ``ping()`` method, which should simply return ``True``, e.g. ::
def ping(self):
return True
Remote execution support
------------------------
If you wish to support remote execution in your controller, you may do so by simply replacing ``simple_server_loop`` with :class:`sipyco.remote_exec.simple_rexec_server_loop`.
Command-line arguments
----------------------
Use the Python ``argparse`` module to make the bind address(es) and port configurable on the controller, and the server address, port and message configurable on the client.
We suggest naming the controller parameters ``--bind`` (which adds a bind address in addition to a default binding to localhost), ``--no-bind-localhost`` (which disables the default binding to localhost), and ``--port``, so that those parameters stay consistent across controllers. Use ``-s/--server`` and ``--port`` on the client. The ``sipyco.common_args.simple_network_args`` library function adds such arguments for the controller, and the ``sipyco.common_args.bind_address_from_args`` function processes them.
Use the Python ``argparse`` module to make the bind address(es) and port configurable on the controller, and the server address, port and message configurable on the client. We suggest naming the controller parameters ``--bind`` (which adds a bind address in addition to a default binding to localhost), ``--no-bind-localhost`` (which disables the default binding to localhost), and ``--port``, so that those parameters stay consistent across controllers. Use ``-s/--server`` and ``--port`` on the client. The :meth:`sipyco.common_args.simple_network_args` library function adds such arguments for the controller, and the :meth:`sipyco.common_args.bind_address_from_args` function processes them.
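As a rough sketch of how these helpers combine with the ``Hello`` class from earlier (the description string and layout here are illustrative assumptions): ::

    import argparse

    from sipyco.pc_rpc import simple_server_loop
    from sipyco.common_args import simple_network_args, bind_address_from_args


    def get_argparser():
        parser = argparse.ArgumentParser(description="hello world controller")
        simple_network_args(parser, 3249)   # adds --bind, --no-bind-localhost and --port
        return parser


    def main():
        args = get_argparser().parse_args()
        # Hello class as defined earlier
        simple_server_loop({"hello": Hello()},
                           bind_address_from_args(args), args.port)

    if __name__ == "__main__":
        main()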
The controller's code would contain something similar to this: ::
@ -132,7 +170,7 @@ We suggest that you define a function ``get_argparser`` that returns the argumen
Logging
-------
For the debug, information and warning messages, use the ``logging`` Python module and print the log on the standard error output (the default setting). The logging level is by default "WARNING", meaning that only warning messages and more critical messages will get printed (and no debug nor information messages). By calling ``sipyco.common_args.verbosity_args`` with the parser as argument, you add support for the ``--verbose`` (``-v``) and ``--quiet`` (``-q``) arguments in the parser. Each occurrence of ``-v`` (resp. ``-q``) in the arguments will increase (resp. decrease) the log level of the logging module. For instance, if only one ``-v`` is present in the arguments, then more messages (info, warning and above) will get printed. If only one ``-q`` is present in the arguments, then only errors and critical messages will get printed. If ``-qq`` is present in the arguments, then only critical messages will get printed, but no debug/info/warning/error.
For debug, information and warning messages, use the ``logging`` Python module and print the log on the standard error output (the default setting). As in standard Python logging, there are five levels, from most to least critical: ``CRITICAL``, ``ERROR``, ``WARNING``, ``INFO``, and ``DEBUG``. By default, the logging level starts at ``WARNING``, meaning that only messages of level WARNING and above are printed (neither INFO nor DEBUG messages appear). By calling ``sipyco.common_args.verbosity_args`` with the parser as argument, you add support for the ``--verbose`` (``-v``) and ``--quiet`` (``-q``) arguments in your controller. Each occurrence of ``-v`` (resp. ``-q``) in the arguments increases (resp. decreases) the log level of the logging module. For instance, with a single ``-v``, messages of level INFO and above are printed; with a single ``-q``, only ERROR and above; with ``-qq``, only CRITICAL.
The program below exemplifies how to use logging: ::
@ -168,46 +206,26 @@ The program below exemplifies how to use logging: ::
if __name__ == "__main__":
main()
Additional guidelines
---------------------
Integration with ARTIQ experiments
----------------------------------
Command line and options
^^^^^^^^^^^^^^^^^^^^^^^^
To integrate the controller into an experiment, simply add an entry into the ``device_db.py`` following the example shown below for the ``aqctl_hello.py`` from above: ::
* Controllers should be able to operate in "simulation" mode, specified with ``--simulation``, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file.
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter names.
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it.
device_db.update({
"hello": {
"type": "controller",
"host": "::1",
"port": 3249,
"command": "python /abs/path/to/aqctl_hello.py -p {port}"
},
})
From the experiment this can now be added using ``self.setattr_device("hello")`` in the ``build()`` phase of the experiment, and methods accessed via: ::
self.hello.message("Hello world!")
To automatically initiate controllers the ``artiq_ctlmgr`` utility from ``artiq-comtools`` can be used. By default, this initiates all controllers hosted on the local address of master connection.
Remote execution support
------------------------
If you wish to support remote execution in your controller, you may do so by simply replacing ``simple_server_loop`` with :class:`sipyco.remote_exec.simple_rexec_server_loop`.
General guidelines
------------------
Style
^^^^^
* Do not use ``__del__`` to implement the cleanup code of your driver. Instead, define a ``close`` method, and call it using a ``try...finally...`` block in the controller.
* Format your source code according to PEP8. We suggest using ``flake8`` to check for compliance.
* Use new-style formatting (``str.format``) except for logging where it is not well supported, and double quotes for strings.
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter name.
* Controllers must be able to operate in "simulation" mode, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file.
* The simulation mode is entered whenever the ``--simulation`` option is specified.
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it.
* Use docstrings for all public methods of the driver (note that those will be retrieved by ``sipyco_rpctool``).
* Choose a free default TCP port and add it to the default port list in this manual.
* Choose a free default TCP port and add it to the :doc:`default port list<default_network_ports>` in this manual.
Hosting your code
-----------------
We suggest that you create a Git repository for your code, and publish it on https://git.m-labs.hk/, GitLab, GitHub, or a similar website of your choosing. Then send us a message or pull request for your NDSP to be added to the list in this manual.
We suggest that you create a Git repository for your code, and publish it on https://git.m-labs.hk/, GitLab, GitHub, or a similar website of your choosing. Then send us a message or pull request for your NDSP to be added to :doc:`the list in this manual <list_of_ndsps>`.


@ -1,17 +1,11 @@
Distributed Real Time Input/Output (DRTIO)
==========================================
DRTIO system
============
DRTIO is a time and data transfer system that allows ARTIQ RTIO channels to be distributed among several satellite devices synchronized and controlled by a central core device.
DRTIO is the time and data transfer system that allows ARTIQ RTIO channels to be distributed among several satellite devices, synchronized and controlled by a central core device. The main source of DRTIO traffic is the remote control of RTIO output and input channels. The protocol is optimized to maximize throughput and minimize latency, and handles flow control and error conditions (underflows, overflows, etc.)
The link is a high speed duplex serial line operating at 1Gbps or more, over copper or optical fiber.
The DRTIO protocol also supports auxiliary traffic, which is low-priority and non-realtime, e.g., to override and monitor TTL I/Os. Auxiliary traffic never interrupts or delays the main traffic, so it cannot cause unexpected latencies or exceptions (e.g. RTIO underflows).
The main source of DRTIO traffic is the remote control of RTIO output and input channels. The protocol is optimized to maximize throughput and minimize latency, and handles flow control and error conditions (underflows, overflows, etc.)
The DRTIO protocol also supports auxiliary, low-priority and non-realtime traffic. The auxiliary channel supports overriding and monitoring TTL I/Os. Auxiliary traffic never interrupts or delays the main traffic, so that it cannot cause unexpected poor performance (e.g. RTIO underflows).
Time transfer and clock syntonization is typically done over the serial link alone. The DRTIO code is organized as much as possible to support porting to different types of transceivers (Xilinx MGTs, Altera MGTs, soft transceivers running off regular FPGA IOs, etc.) and different synchronization mechanisms.
The lower layers of DRTIO are similar to White Rabbit, with the following main differences:
The lower layers of DRTIO are similar to `White Rabbit <https://white-rabbit.web.cern.ch/>`_, with the following main differences:
* lower latency
* deterministic latency
@ -20,80 +14,31 @@ The lower layers of DRTIO are similar to White Rabbit, with the following main d
* no Ethernet compatibility
* only star or tree topologies are supported
From ARTIQ kernels, DRTIO channels are used in the same way as local RTIO channels.
.. _using-drtio:
Using DRTIO
-----------
Time transfer and clock syntonization is typically done over the serial link alone. The DRTIO code is written as much as possible to support porting to different types of transceivers (Xilinx MGTs, Altera MGTs, soft transceivers running off regular FPGA IOs, etc.) and different synchronization mechanisms.
Terminology
+++++++++++
-----------
In a system of interconnected DRTIO devices, each RTIO core (driving RTIO PHYs; for example a RTIO core would connect to a large bank of TTL signals) is assigned a number and is called a *destination*. One DRTIO device normally contains one RTIO core.
In a system of interconnected DRTIO devices, each RTIO core (controlling a certain set of associated RTIO channels) is assigned a number and called a *destination*. One DRTIO device normally contains one RTIO core.
On one DRTIO device, the immediate path that a RTIO request must take is called a *hop*: the request can be sent to the local RTIO core, or to another device downstream. Each possible hop is assigned a number. Hop 0 is normally the local RTIO core, and hops 1 and above correspond to the respective downstream ports of the device.
DRTIO devices are arranged in a tree topology, with the core device at the root. For each device, its distance from the root (in number of devices that are crossed) is called its *rank*. The root has rank 0, the devices immediately connected to it have rank 1, and so on.
The routing table
+++++++++++++++++
-----------------
The routing table defines, for each destination, the list of hops ("route") that must be taken from the root in order to reach it.
It is stored in a binary format that can be manipulated with the :ref:`artiq_route utility <routing-table-tool>`. The binary file is then programmed into the flash storage of the core device under the ``routing_table`` key. It is automatically distributed to downstream devices when the connections are established. Modifying the routing table requires rebooting the core device for the new table to be taken into account.
All routes must end with the local RTIO core of the last device (0).
The local RTIO core of the core device is a destination like any other, and it needs to be explicitly part of the routing table for kernels to be able to access it.
If no routing table is programmed, the core device takes a default routing table for a star topology (i.e. with no devices of rank 2 or above), with destination 0 being the core device's local RTIO core and destinations 1 and above corresponding to devices on the respective downstream ports.
Here is an example of creating and programming a routing table for a chain of 3 devices: ::
# create an empty routing table
$ artiq_route rt.bin init
# set destination 0 to the local RTIO core
$ artiq_route rt.bin set 0 0
# for destination 1, first use hop 1 (the first downstream port)
# then use the local RTIO core of that second device.
$ artiq_route rt.bin set 1 1 0
# for destination 2, use hop 1 and reach the second device as
# before, then use hop 1 on that device to reach the third
# device, and finally use the local RTIO core (hop 0) of the
# third device.
$ artiq_route rt.bin set 2 1 1 0
$ artiq_route rt.bin show
0: 0
1: 1 0
2: 1 1 0
$ artiq_coremgmt config write -f routing_table rt.bin
Addressing distributed RTIO cores from kernels
++++++++++++++++++++++++++++++++++++++++++++++
Remote RTIO channels are accessed in the same way as local ones. Bits 16-24 of the RTIO channel number define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
Link establishment
++++++++++++++++++
After devices have booted, it takes several seconds for all links in a DRTIO system to become established (especially with the long locking times of low-bandwidth PLLs that are used for jitter reduction purposes). Kernels should not attempt to access destinations until all required links are up (when this happens, the ``RTIODestinationUnreachable`` exception is raised). ARTIQ provides the method :meth:`~artiq.coredevice.core.Core.get_rtio_destination_status` that determines whether a destination can be reached. We recommend calling it in a loop in your startup kernel for each important destination, to delay startup until they all can be reached.
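For instance, a startup kernel might spin until the relevant destinations are reachable; a minimal sketch, assuming destination 1 is the satellite of interest: ::

    from artiq.experiment import *

    class WaitForSatellites(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            self.core.reset()
            # spin until destination 1 reports as reachable
            while not self.core.get_rtio_destination_status(1):
                pass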
Latency
+++++++
Each hop increases the RTIO latency of a destination by a significant amount; that latency is however constant and can be compensated for in kernels. To limit latency in a system, fully utilize the downstream ports of devices to reduce the depth of the tree, instead of creating chains.
It is stored in a binary format that can be generated and manipulated with the utility :mod:`~artiq.frontend.artiq_route`, see :ref:`drtio-routing`. The binary file is programmed into the flash storage of the core device under the ``routing_table`` key. It is automatically distributed to downstream devices when the connections are established. Modifying the routing table requires rebooting the core device for the new table to be taken into account.
Internal details
----------------
Bits 16-24 of the RTIO channel number (assigned to a respective device in the initial :ref:`system description JSON <system-description>`, and specified again for use by the ARTIQ front-end in the device database) define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
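For illustration, a hypothetical helper that composes a channel number from these two fields: ::

    def drtio_channel(destination, channel):
        # destination occupies bits 16-24, the channel within it bits 0-15
        return (destination << 16) | channel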
Real-time and auxiliary packets
+++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
DRTIO is a packet-based protocol that uses two types of packets:
@ -101,9 +46,9 @@ DRTIO is a packet-based protocol that uses two types of packets:
* auxiliary packets, which are lower-bandwidth and are used for ancillary tasks such as housekeeping and monitoring/injection. Auxiliary packets are low-priority and their transmission has no impact on the timing of real-time packets (however, transmission of real-time packets slows down the transmission of auxiliary packets). In the ARTIQ DRTIO implementation, the contents of the auxiliary packets are read and written directly by the firmware, with the gateware simply handling the transmission of the raw data.
Link layer
++++++++++
^^^^^^^^^^
The lower layer of the DRTIO protocol stack is the link layer, which is responsible for delimiting real-time and auxiliary packets, and assisting with the establishment of a fixed-latency high speed serial transceiver link.
The lower layer of the DRTIO protocol stack is the link layer, which is responsible for delimiting real-time and auxiliary packets and assisting with the establishment of a fixed-latency high speed serial transceiver link.
DRTIO uses the IBM (Widmer and Franaszek) 8b/10b encoding. D characters (the encoded 8b symbols) always transmit real-time packet data, whereas K characters are used for idling and transmitting auxiliary packet data.
@ -113,7 +58,8 @@ A real-time packet is defined by a series of D characters containing the packet'
K characters, which are transmitted whenever there is no real-time data to transmit and to delimit real-time packets, are chosen using a 3-bit K selection word. If this K character is the first character in the set of N characters processed by the transceiver in the logic clock cycle, the mapping between the K selection word and the 8b/10b K space contains commas. If the K character is any of the subsequent characters processed by the transceiver, a different mapping is used that does not contain any commas. This scheme allows the receiver to align its logic clock with that of the transmitter, simply by shifting its logic clock so that commas are received into the first character position.
.. note:: Due to the shoddy design of transceiver hardware, this simple process of clock and comma alignment is difficult to perform in practice. The paper "High-speed, fixed-latency serial links with Xilinx FPGAs" (by Xue LIU, Qing-xu DENG, Bo-ning HOU and Ze-ke WANG) discusses techniques that can be used. The ARTIQ implementation simply keeps resetting the receiver until the comma is aligned, since relatively long lock times are acceptable.
.. note::
Due to the shoddy design of transceiver hardware, this simple process of clock and comma alignment is difficult to perform in practice. The paper "High-speed, fixed-latency serial links with Xilinx FPGAs" (by Xue LIU, Qing-xu DENG, Bo-ning HOU and Ze-ke WANG) discusses techniques that can be used. The ARTIQ implementation simply keeps resetting the receiver until the comma is aligned, since relatively long lock times are acceptable.
The series of K selection words is then used to form auxiliary packets and the idle pattern. When there is no auxiliary packet to transfer, or to delimit auxiliary packets, the K selection word ``100`` is used. To transfer data from an auxiliary packet, the K selection word ``0ab`` is used, with ``ab`` containing two bits of data from the packet. An auxiliary packet is delimited by at least one ``100`` K selection word.
@ -122,23 +68,23 @@ Both real-time traffic and K selection words are scrambled in order to make the
Due to the use of K characters both as delimiters for real-time packets and as information carrier for auxiliary packets, auxiliary traffic is guaranteed a minimum bandwidth simply by having a maximum size limit on real-time packets.
Clocking
++++++++
^^^^^^^^
At the DRTIO satellite device, the recovered and aligned transceiver clock is used for clocking RTIO channels, after appropriate jitter filtering using devices such as the Si5324. The same clock is also used for clocking the DRTIO transmitter (loop timing), which simplifies clock domain transfers and allows for precise round-trip-time measurements to be done.
RTIO clock synchronization
++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^
As part of the DRTIO link initialization, a real-time packet is sent by the core device to each satellite device to make it load its timestamp counter with the timestamp value carried in the packet.
RTIO outputs
++++++++++++
^^^^^^^^^^^^
Controlling a remote RTIO output involves placing the RTIO event into the buffer of the destination. The core device maintains a cache of the buffer space available in each destination. If, according to the cache, there is space available, then a packet containing the event information (timestamp, address, channel, data) is sent immediately and the cached value is decremented by one. If, according to the cache, no space is available, then the core device sends a request for the space available in the destination and updates the cache. The process repeats until at least one remote buffer entry is available for the event, at which point a packet containing the event information is sent as before.
Detecting underflow conditions is the responsibility of the core device; should an underflow occur then no DRTIO packet is transmitted. Sequence errors are handled similarly.
RTIO inputs
+++++++++++
^^^^^^^^^^^
The core device sends a request to the satellite for reading data from one of its channels. The request contains a timeout, which is the RTIO timestamp to wait for until an input event appears. The satellite then replies with either an input event (containing timestamp and data), a timeout, or an overflow error.


@ -1,53 +1,102 @@
The environment
===============
Environment
===========
Experiments interact with an environment that consists of devices, arguments and datasets. Access to the environment is handled by the class :class:`artiq.language.environment.EnvExperiment` that experiments should derive from.
ARTIQ experiments exist in an environment, which consists of devices, arguments, and datasets. Access to the environment is handled through the :class:`~artiq.language.environment.HasEnvironment` manager provided by the :class:`~artiq.language.environment.EnvExperiment` class that experiments should derive from.
.. _device-db:
The device database
-------------------
The device database contains information about the devices available in a ARTIQ installation, what drivers to use, what controllers to use and on what machine, and where the devices are connected.
Information about available devices is provided to ARTIQ through a file called the device database, typically named ``device_db.py``, which many of the ARTIQ front-end tools require access to in order to run. The device database specifies:
The master (or :mod:`~artiq.frontend.artiq_run`) instantiates the device drivers (and the RPC clients in the case of controllers) for the experiments based on the contents of the device database.
* what devices are available to an ARTIQ installation
* what drivers to use
* what controllers to use
* how and where to contact each device, i.e.
The device database is stored in the memory of the master and is generated by a Python script typically called ``device_db.py``. That script must define a global variable ``device_db`` with the contents of the database. The device database is a Python dictionary whose keys are the device names, and values can have several types.
- at which RTIO channel(s) each local device can be reached
- at which network address each controller can be reached
as well as, if present, how and where to contact the core device itself (e.g., its IP address, often by a field named ``core_addr``).
This is stored in a Python dictionary whose keys are the device names, which the file must define as a global variable with the name ``device_db``. Examples for various system configurations can be found inside the subfolders of ``artiq/examples``. A typical device database entry looks like this: ::
"led": {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 19}
},
Note that the key (the name of the device) is ``led`` and the value is itself a Python dictionary. Names will later be used to gain access to a device through methods such as ``self.setattr_device("led")``. While in this case ``led`` can be replaced with another name, provided it is used consistently, some names (in particular, ``core``) are used internally by ARTIQ and will cause problems if changed. It is often more convenient to use aliases for renaming purposes, see below.
.. note::
The device database is generated and stored in the memory of the master when the master is first started. Changes to the ``device_db.py`` file will not immediately affect a running master. In order to update the device database, right-click in the Explorer window and select 'Scan device database', or run the command ``artiq_client scan-devices``.
.. warning::
It is important to understand that the device database does not *set* your system configuration, only *describe* it. If you change the devices available to your system, it is usually necessary to edit the device database, but editing the database will not change what devices are available to your system.
Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, real-time, e.g. your Sinara hardware) must be connected to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.
While controllers can be added to and removed from your device database on an *ad hoc* basis, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will be provided with new binaries and necessary assistance.) See :doc:`building_developing`.
Adding or removing real-time hardware is a difference in *system configuration*, which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON file usually called the :ref:`system description file <system-description>`. The device database generally provides ARTIQ with the information that may change from one running instance to another, e.g. device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines the information that is *not* permitted to change, e.g. which device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.
If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add controllers, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the :mod:`~artiq.frontend.artiq_ddb_template` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly. See also the corresponding entry in :ref:`Utilities <ddb-template-tool>`.
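For instance, assuming a system description file named ``my_system.json`` (the filename here is only an example), a template device database can be generated with: ::

    $ artiq_ddb_template my_system.json > device_db.py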
Local devices
+++++++++++++
^^^^^^^^^^^^^
Local device entries are dictionaries that contain a ``type`` field set to ``local``. They correspond to device drivers that are created locally on the master (as opposed to going through the controller mechanism). The fields ``module`` and ``class`` determine the location of the Python class that the driver consists of. The ``arguments`` field is another (possibly empty) dictionary that contains arguments to pass to the device driver constructor.
Local device entries are dictionaries which contain a ``type`` field set to ``local``. They correspond to device drivers that are created locally on the master as opposed to using the controller mechanism; this is normally the real-time hardware of the system, including the core, which is itself considered a local device. The ``led`` example above is a local device entry.
The fields ``module`` and ``class`` determine the location of the Python class of the driver. The ``arguments`` field is another (possibly empty) dictionary that contains arguments to pass to the device driver constructor. ``arguments`` is often used to specify the RTIO channel number of a peripheral, which must match the channel number in gateware.
On Kasli and Kasli-SoC, the allocation of RTIO channels to EEM ports is done automatically when the gateware is compiled, and while conceptually simple (channels are assigned one after the other, from zero upwards, for each device entry in the system description file) it is not entirely straightforward (different devices require different numbers of RTIO channels). Again, the easiest way to handle this when writing a new device database is automatically, using :mod:`~artiq.frontend.artiq_ddb_template`.
.. _environment-ctlrs:
Controllers
+++++++++++
^^^^^^^^^^^
Controller entries are dictionaries whose ``type`` field is set to ``controller``. When an experiment requests such a device, a RPC client (see ``sipyco.pc_rpc``) is created and connected to the appropriate controller. Controller entries are also used by controller managers to determine what controllers to run.
Controller entries are dictionaries which contain a ``type`` field set to ``controller``. When an experiment requests such a device, an RPC client (see ``sipyco.pc_rpc``) is created and connected to the appropriate controller. Controller entries are also used by controller managers to determine what controllers to run. For an example, see :ref:`the NDSP development page <ndsp-integration>`.
The ``best_effort`` field is a boolean that determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matches the ``port`` field) and an appropriate bind address for the controller's listening socket.
The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matching the ``port`` field) and an appropriate bind address for the controller's listening socket.
An optional ``best_effort`` boolean field determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. ``BestEffortClient`` is very similar to ``Client``, but suppresses network errors and automatically retries connections in the background. If no ``best_effort`` field is present, ``Client`` is used by default.
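For instance, the ``hello`` entry from :ref:`ndsp-integration` could be extended with these optional fields (a sketch): ::

    device_db.update({
        "hello": {
            "type": "controller",
            "best_effort": True,   # use BestEffortClient: retry connections in the background
            "host": "::1",
            "port": 3249,
            "target": "hello",     # RPC target name, as listed by sipyco_rpctool list-targets
            "command": "python /abs/path/to/aqctl_hello.py -p {port}"
        },
    })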
Aliases
+++++++
^^^^^^^
If an entry is a string, that string is used as a key for another lookup in the device database.
Arguments
---------
Arguments are values that parameterize the behavior of an experiment and are set before the experiment is executed.
Requesting the values of arguments can only be done in the build phase of an experiment. The value requests are also used to define the GUI widgets shown in the explorer when the experiment is selected.
Arguments are values that parameterize the behavior of an experiment. ARTIQ supports both interactive arguments, requested and supplied at some point while an experiment is running, and submission-time arguments, requested in the build phase and set before the experiment is executed. For more on arguments in practice, see the tutorial section :ref:`mgmt-arguments`. For supported argument types, see the reference for :mod:`artiq.language.environment`; for specific methods, see the reference for :class:`~artiq.language.environment.HasEnvironment`.
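A minimal sketch of requesting submission-time arguments in the build phase (argument names here are arbitrary): ::

    from artiq.experiment import *

    class ArgumentsDemo(EnvExperiment):
        def build(self):
            # these requests also define the GUI widgets shown at submission time
            self.setattr_argument("count", NumberValue(100))
            self.setattr_argument("say_hello", BooleanValue(True))

        def run(self):
            for _ in range(int(self.count)):
                if self.say_hello:
                    print("Hello")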
.. _environment-datasets:
Datasets
--------
Datasets are values (possibly arrays) that are read and written by experiments and live in a key-value store.
Datasets are values, kept in a key-value store, that are read and written by experiments. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`data interfaces tutorial <mgmt-datasets>`.
A dataset may be broadcasted, that is, distributed to all clients connected to the master. For example, the ARTIQ GUI may plot it while the experiment is in progress to give rapid feedback to the user. Broadcasted datasets live in a global key-value store; experiments should use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for example, a periodic calibration experiment may update a dataset read by payload experiments. Broadcasted datasets are replaced when a new dataset with the same key (name) is produced.
A dataset may be broadcast (``broadcast=True``), that is, distributed to all clients connected to the master. This is useful e.g. for the ARTIQ dashboard to plot results while an experiment is in progress and give rapid feedback to the user. Broadcasted datasets live in a global key-value store owned by the master. Care should be taken that experiments use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for instance, a periodic calibration experiment might update a dataset read by payload experiments.
Datasets can attach metadata for numerical formatting with the ``unit``, ``scale`` and ``precision`` parameters. In experiment code, values are assumed to be in the SI unit. In code, setting a dataset with a value of `1000` and the unit `kV` represents the quantity `1 kV`. It is recommended to use the globals defined by `artiq.language.units` and write `1*kV` instead of `1000` for the value. In dashboards and clients these globals are not available. There, setting a dataset with a value of `1` and the unit `kV` simply represents the quantity `1 kV`. ``precision`` refers to the max number of decimal places to display. This parameter does not affect the underlying value, and is only used for display purposes.
Broadcasted datasets are replaced when a new dataset with the same key (name) is produced. By default, they are erased when the master halts. Broadcasted datasets may be made persistent (``persistent=True``, which also implies ``broadcast=True``), in which case the master stores them in a LMDB database typically called ``dataset_db.mdb``, where they are saved across master restarts.
Broadcasted datasets may be persistent: the master stores them in a file typically called ``dataset_db.pyon`` so they are saved across master restarts.
By default, datasets are archived in the ``results`` HDF5 output for that run, although this can be disabled (``archive=False``). They can be viewed and analyzed with the ARTIQ browser, or with an HDF5 viewer of your choice.
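A minimal sketch of the host-side dataset calls described above (key names are arbitrary): ::

    from artiq.experiment import *

    class DatasetDemo(EnvExperiment):
        def build(self):
            pass

        def run(self):
            # broadcast to all clients and keep across master restarts
            self.set_dataset("calib.reference_level", 0.5, broadcast=True, persistent=True)
            # stored only in the HDF5 results for this run (the default behavior)
            self.set_dataset("shots", 100)
            print(self.get_dataset("calib.reference_level"))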
Datasets and units
^^^^^^^^^^^^^^^^^^
Datasets accept metadata for numerical formatting with the ``unit``, ``scale`` and ``precision`` parameters of ``set_dataset``.
.. note::
In experiment code, values are assumed to be in the SI base unit. Setting a dataset with a value of ``1000`` and the unit ``kV`` represents the quantity ``1 kV``. It is recommended to use the globals defined by :mod:`artiq.language.units` and write ``1*kV`` instead of ``1000`` for the value.
In dashboards and clients these globals are not available. However, setting a dataset with a value of ``1`` and the unit ``kV`` simply represents the quantity ``1 kV``.
``precision`` refers to the max number of decimal places to display. This parameter does not affect the underlying value, and is only used for display purposes.
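For example, in experiment code (a sketch using the ``kV`` constant from :mod:`artiq.language.units`): ::

    # value given in SI base units (volts), displayed as 5 kV with two decimal places
    self.set_dataset("bias_voltage", 5*kV, unit="kV", precision=2, broadcast=True)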
Datasets produced by an experiment run may be archived in the HDF5 output for that run.


@ -1,32 +1,184 @@
.. Copyright (C) 2014, 2015 Robert Jordens <jordens@gmail.com>
FAQ (How do I...)
=================
FAQ
###
use this documentation?
-----------------------
How do I ...
============
The content of this manual is arranged in rough reading order. If you start at the beginning and make your way through section by section, you should form a pretty good idea of how ARTIQ works and how to use it. Otherwise:
**If you are just starting out,** and would like to get ARTIQ set up on your computer and your core device, start with :doc:`installing`, :doc:`flashing`, and :doc:`configuring`, in that order.
**If you have a working ARTIQ setup** (or someone else has set it up for you), start with the tutorials: read :doc:`rtio`, then progress to :doc:`getting_started_core`, :doc:`getting_started_mgmt`, and :doc:`using_data_interfaces`. If your system is in a DRTIO configuration, :doc:`DRTIO and subkernels <using_drtio_subkernels>` will also be helpful.
Pages like :doc:`management_system` and :doc:`core_device` describe **specific components of the ARTIQ ecosystem** in more detail. If you want to understand more about device and dataset databases, for example, read the :doc:`environment` page; if you want to understand the ARTIQ Python dialect and everything it does or does not support, read the :doc:`compiler` page.
Reference pages, like :doc:`main_frontend_tools` and :doc:`core_drivers_reference`, contain the detailed documentation of the individual methods and command-line tools ARTIQ provides. They are heavily interlinked throughout the rest of the documentation: whenever a method, tool, or exception is mentioned by name, like :class:`~artiq.frontend.artiq_run`, :meth:`~artiq.language.core.now_mu`, or :exc:`~artiq.coredevice.exceptions.RTIOUnderflow`, it can normally be clicked on to directly access the reference material. Notice also that the online version of this manual is searchable; see the 'Search docs' bar at left.
.. _build-documentation:
build this documentation?
-------------------------
To generate this manual from source, you can use ``nix build`` directives, for example: ::
$ nix build git+https://github.com/m-labs/artiq.git\?ref=release-[number]#artiq-manual-html
Substitute ``artiq-manual-pdf`` to get the LaTeX PDF version. The results will be in ``result``.
The manual is written in `reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_; you can find the source files in the ARTIQ repository under ``doc/manual``. If you spot a mistake, a typo, or something that's out of date or missing -- in particular, if you want to add something to this FAQ -- feel free to clone the repository, edit the source RST files, and make a pull request with your version of an improvement. (If you're not a fan of or not familiar with command-line Git, both GitHub and Gitea support making edits and pull requests directly in the web interface; tutorial materials are easy to find online.) The second best thing is to open an issue to make M-Labs aware of the problem.
.. _faq-networking:
troubleshoot networking problems?
---------------------------------
Diagnosis aids:
- Can you ``ping`` the device?
- Is the Ethernet LED on?
- Is the ERROR LED on?
- Is there anything unusual recorded in :ref:`the UART log <connecting-UART>`?
Some things to consider:
- Is the ``core_addr`` field of your ``device_db.py`` set correctly?
- Did your device flash and boot successfully? Were the binaries generated for the correct board hardware version?
- Are your core device's IP address and networking configurations definitely set correctly? Check the UART log to confirm, and talk to your network administrator about what the correct choices are.
- Is your core device configured for an external reference clock? If so, it cannot function correctly without one. Is the external reference clock plugged in?
- Are Ethernet and (on Kasli only) SFP0 plugged in all the way? Are they working? Try different cables and SFP adapters; M-Labs tests with CAT6 cables, but lower categories should be supported too.
- Are your PC and your crate in the same subnet?
- Is some other device in your network already using the configured IP address? Turn off the core device and try pinging the configured IP address; if it responds, you have a culprit. One of the two will need a different networking configuration.
- Are there restrictions or issues in your router or subnet that are preventing the core device from connecting? It may help to try connecting the core device to your PC directly.
fix 'no startup kernel found' / 'no idle kernel found' in the core log?
-----------------------------------------------------------------------
Don't. Note that these are ``INFO`` messages, and not ``ERROR`` or even ``WARN``. If you haven't flashed an idle or startup kernel yet, this is normal, and will not cause any problems; between experiments the core device will simply do nothing. The same applies to most other messages in the style of 'no configuration found' or 'falling back to default'. Your system will generally run just fine on its defaults until you get around to setting these configurations, though certain features may be limited until properly set up. See :doc:`configuring` and the list of keys in :ref:`core-device-flash-storage`.
fix 'Mismatch between gateware and software versions'?
------------------------------------------------------
Either reflash your core device with a newer version of ARTIQ (see :doc:`flashing`) or update your software (see :ref:`installing-upgrading`), depending on which is out of date.
.. note::
You can check the specific versions you are using at any time by comparing the gateware version given in the core startup log and the output given by adding ``--version`` to any of the standard ARTIQ front-end commands. This is especially useful when e.g. seeking help in the forum or at the helpdesk, where your running ARTIQ version is often crucial information to diagnose a problem.
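For example: ::

    $ artiq_run --version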
Minor version mismatches are common, even in stable ARTIQ versions, but should not cause any issues. The ARTIQ release system ensures breaking changes are strictly limited to new release versions, or to the beta branch (which explicitly makes no promises of stability). Updates that *are* applied to the stable version are usually bug fixes, documentation improvements, or other quality-of-life changes. As long as gateware and software are using the same stable release version of ARTIQ, even if there is a minor mismatch, no warning will be displayed.
change configuration settings of satellite devices?
---------------------------------------------------
Currently, it is not possible to reach satellites through ``artiq_coremgmt config``, although this is being worked on. On Kasli, use :class:`~artiq.frontend.artiq_mkfs` and :class:`~artiq.frontend.artiq_flash`; on Kasli-SoC, preload the SD card with a ``config.txt``, formatted as a list of ``key=value`` pairs, one per line.
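For example, a minimal ``config.txt`` might contain nothing more than a clock setting (the key-value pair shown is only illustrative; use whichever keys your satellite actually needs): ::

    rtio_clock=ext0_synth0_10to125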
Don't worry about individually flashing idle or startup kernels. If your idle or startup kernel contains subkernels, it will automatically compile as a ``.tar``, which you only need to flash to the master.
fix unreliable DRTIO master-satellite links?
--------------------------------------------
Inconsistent DRTIO connections, especially with odd or absent errors in the core logs, are often a symptom of overheating either in the master or satellite boards. Check the core device fans for failure or defects. Improve air circulation around the crate or attach additional fans to see if that improves or resolves the issue. In the long term, fan trays to be rack-mounted together with the crate are a clean solution to these kinds of problems.
add or remove EEM peripherals or DRTIO satellites?
--------------------------------------------------
Adding new real-time hardware to an ARTIQ system almost always means reflashing the core device; if you are adding new satellite core devices, they will have to be flashed as well. If you have obtained your upgrades from M-Labs or QUARTIQ, updated binaries and reflashing support will normally be offered to you directly. In any other case, track down your JSON system description file(s), bring them up to date with the updated state of your system, and see :doc:`building_developing`.
Once you have an updated set of binaries, reflash the core device, following the instructions in :doc:`flashing`. Be sure to update your device database before starting experimentation; run :mod:`~artiq.frontend.artiq_ddb_template` on your system description(s) to update the local devices, and copy over any aliases or entries for NDSP controllers you may have been using. Note that the device database is a Python file, and the generated file of local devices can also simply be imported into the final version, allowing for dynamic modifications, especially in complex systems that may have multiple device databases in use.
see command-line help?
----------------------
Like most terminal utilities, ARTIQ commands, tools and applets print their help messages directly into the terminal and exit when run with the flag ``--help`` or ``-h``: ::
$ artiq_run -h
This is the simplest and most direct way of accessing the same usage and reference material that is replicated in this manual on the pages :doc:`main_frontend_tools` and :doc:`utilities`.
.. _faq-find-examples:
find ARTIQ examples?
--------------------
The examples are installed in the ``examples`` folder of the ARTIQ package. You can find where the ARTIQ package is installed on your machine with: ::
The official examples are stored in the ``examples`` folder of the ARTIQ package. You can find the location of the ARTIQ package on your machine with: ::
python3 -c "import artiq; print(artiq.__path__[0])"
Copy the ``examples`` folder from that path into your home/user directory, and start experimenting!
Copy the ``examples`` folder from that path into your home or user directory, and start experimenting! (Note that some examples have dependencies not included with a standard ARTIQ install, like matplotlib and numba. To run those examples properly, make sure those modules are accessible.)
prevent my first RTIO command from causing an underflow?
--------------------------------------------------------
If you have progressed past this level and would like to see more in-depth code or real-life examples of how other groups have handled running experiments with ARTIQ, see the "Community code" directory on the M-Labs `resources page <https://m-labs.hk/experiment-control/resources/>`_.
The first RTIO event is programmed with a small timestamp above the value of the timecounter when the core device is reset. If the kernel needs more time than this timestamp to produce the event, an underflow will occur. You can prevent it by calling ``break_realtime`` just before programming the first event, or by adding a sufficient delay.
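A minimal sketch (``ttl0`` stands in for whichever output channel you drive first): ::

    @kernel
    def run(self):
        self.core.reset()
        self.core.break_realtime()  # move now_mu safely ahead of the wall clock
        self.ttl0.pulse(2*us)       # the first event now has positive slack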
fix ``failed to connect to moninj`` in the dashboard?
-----------------------------------------------------
If you are not resetting the core device, the time cursor stays where the previous experiment left it.
This and other similar messages almost always indicate that your device database lists controllers (for example, ``aqctl_moninj_proxy``) that either haven't been started or aren't reachable at the given host and port. See :ref:`mgmt-ctlmgr`, or simply run: ::
$ artiq_ctlmgr
to let the controller manager start the necessary controllers automatically.
diagnose and fix sequence errors?
---------------------------------
Go through your code, keeping manual track of SED lanes. See the following example: ::
@kernel
def run(self):
self.core.reset()
with parallel:
self.ttl0.on() # lane0
self.ttl_sma.pulse(800*us) # lane1(rising) lane1(falling)
with sequential:
self.ttl1.on() # lane2
self.ttl2.on() # lane3
self.ttl3.on() # lane4
self.ttl4.on() # lane5
delay(800*us)
self.ttl1.off() # lane5
self.ttl2.off() # lane6
self.ttl3.off() # lane7
self.ttl4.off() # lane0
self.ttl0.off() # lane1 -> clashes with the falling edge of ttl_sma,
# which is already at +800us
In most cases, as in this one, it's relatively easy to rearrange the generation of events so that they will be better spread out across SED lanes without sacrificing actual functionality. One possible solution for the above sequence looks like: ::
@kernel
def run(self):
self.core.reset()
self.ttl0.on() # lane0
self.ttl_sma.on() # lane1
self.ttl1.on() # lane2
self.ttl2.on() # lane3
self.ttl3.on() # lane4
self.ttl4.on() # lane5
delay(800*us)
self.ttl1.off() # lane5
self.ttl2.off() # lane6
self.ttl3.off() # lane7
self.ttl4.off() # lane0 (no clash: new timestamp is higher than last)
self.ttl_sma.off() # lane1
self.ttl0.off() # lane2
In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.pulse` is split up into its component :meth:`~artiq.coredevice.ttl.TTLInOut.on` and :meth:`~artiq.coredevice.ttl.TTLInOut.off` so that events can be generated more linearly. It can also be worth keeping in mind that delaying by even a single coarse RTIO cycle between events avoids switching SED lanes at all; in contexts where perfect simultaneity is not a priority, this is an easy way to avoid sequencing issues. See again :ref:`sequence-errors`.
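As a sketch of that single-cycle trick (8 ns corresponds to one coarse RTIO cycle on typical 125 MHz systems; substitute the coarse period of your own system): ::

    self.ttl1.off()
    delay(8*ns)        # one coarse RTIO cycle
    self.ttl2.off()    # strictly later timestamp: stays in the same SED lane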
understand applet commands?
---------------------------
The 'Command' field contains the exact terminal command used to open and operate the applet. The default ``${artiq_applet}`` prefix simply translates to something to the effect of ``python -m artiq.applets.``, intended to be immediately followed by the applet module name. The options suffixed after the module name are the same used in the command line, and a list of them can be shown by using the standard command line ``-h`` help flag: ::
$ python -m artiq.applets.plot_xy -h
in any terminal.
organize datasets in folders?
-----------------------------
Use the dot (".") in dataset names to separate folders. The GUI will automatically create and delete folders in the dataset tree display.
organize applets in groups?
---------------------------
Create groups by left-clicking within the applet list and selecting 'New Group'. Move applets in and out of groups by dragging them with the mouse. To unselect an applet or a group, use CTRL+click.
organize experiment windows in the dashboard?
---------------------------------------------
@ -37,74 +189,36 @@ Experiment windows can be organized by using the following hotkeys:
The windows will be organized in the order they were last interacted with.
write a generator feeding a kernel feeding an analyze function?
---------------------------------------------------------------
fix errors when restarting management system after a crash?
-----------------------------------------------------------
Like this::
On Windows in particular, abnormal shutdowns such as power outages or bluescreens sometimes corrupt the organizational files used by the management system, resulting in errors to the tune of ``ValueError: source code string cannot contain null bytes`` when restarting. The easiest way to handle these problems is to delete the corrupted files and start from scratch. Note that GUI configuration ``.pyon`` files are kept in the user configuration directory; see below at :ref:`gui-config-files`.
def run(self):
self.parse(self.pipe(iter(range(10))))
create and use variable-length arrays in kernels?
-------------------------------------------------
def pipe(self, gen):
for i in gen:
r = self.do(i)
yield r
def parse(self, gen):
for i in gen:
pass
@kernel
def do(self, i):
return i
create and use variable lengths arrays in kernels?
--------------------------------------------------
Don't. Preallocate everything. Or chunk it and e.g. read 100 events per
function call, push them upstream and retry until the gate time closes.
execute multiple slow controller RPCs in parallel without losing time?
----------------------------------------------------------------------
Use ``threading.Thread``: portable, fast, simple for one-shot calls.
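For example, a minimal sketch (the two controller drivers named here are hypothetical placeholders for whatever slow NDSP controllers your experiment uses): ::

    from threading import Thread

    def run(self):
        # start both slow RPCs at once, then wait for both to finish
        t1 = Thread(target=self.laser_controller.set_power, args=(1.0,))
        t2 = Thread(target=self.motor_controller.home)
        t1.start()
        t2.start()
        t1.join()
        t2.join()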
You can't, in general; see the corresponding notes under :ref:`compiler-types`. ARTIQ kernels do not support heap allocation, meaning in particular that lists, arrays, and strings must be of constant size. One option is to preallocate everything, as mentioned on the Compiler page; another option is to chunk it and e.g. read 100 events per function call, push them upstream and retry until the gate time closes.
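As a sketch of the preallocation approach (the buffer size of 100 and the ``ttl0`` input channel are illustrative assumptions): ::

    import numpy as np

    def prepare(self):
        # fixed-size buffer, allocated on the host; the kernel only fills it in
        self.timestamps = [np.int64(0)] * 100

    @kernel
    def count_edges(self):
        n = 0
        gate_end_mu = self.ttl0.gate_rising(10*ms)
        ts = self.ttl0.timestamp_mu(gate_end_mu)
        while ts >= 0 and n < len(self.timestamps):
            self.timestamps[n] = ts
            n += 1
            ts = self.ttl0.timestamp_mu(gate_end_mu)
        return n

The kernel returns how many entries of the buffer are actually valid; any variable-length post-processing is then done on the host.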
write part of my experiment as a coroutine/asyncio task/generator?
------------------------------------------------------------------
You can not change the API that your experiment exposes: ``build()``,
``prepare()``, ``run()`` and ``analyze()`` need to be regular functions, not
generators or asyncio coroutines. That would make reusing your own code in
sub-experiments difficult and fragile. You can however wrap your own
generators/coroutines/tasks in regular functions that you then expose as part
of the API.
You cannot change the API that your experiment exposes: :meth:`~artiq.language.environment.HasEnvironment.build`, :meth:`~artiq.language.environment.Experiment.prepare`, :meth:`~artiq.language.environment.Experiment.run` and :meth:`~artiq.language.environment.Experiment.analyze` need to be regular functions, not generators or asyncio coroutines. That would make reusing your own code in sub-experiments difficult and fragile. You can however wrap your own generators/coroutines/tasks in regular functions that you then expose as part of the API.
determine the pyserial URL to attach to a device by its serial number?
----------------------------------------------------------------------
determine the pyserial URL to connect to a device by its serial number?
-----------------------------------------------------------------------
You can list your system's serial devices and print their vendor/product
id and serial number by running::
You can list your system's serial devices and print their vendor/product id and serial number by running::
$ python3 -m serial.tools.list_ports -v
It will give you the ``/dev/ttyUSBxx`` (or the ``COMxx`` for Windows) device
names.
The ``hwid:`` field gives you the string you can pass via the ``hwgrep://``
feature of pyserial
`serial_for_url() <http://pyserial.sourceforge.net/pyserial_api.html#serial.serial_for_url>`_
in order to open a serial device.
This will give you the ``/dev/ttyUSBxx`` (or ``COMxx`` for Windows) device names. The ``hwid:`` field gives you the string you can pass via the ``hwgrep://`` feature of pyserial `serial_for_url() <https://pythonhosted.org/pyserial/pyserial_api.html#serial.serial_for_url>`_ in order to open a serial device.
The preferred way to specify a serial device is to make use of the ``hwgrep://``
URL: it allows to select the serial device by its USB vendor ID, product
ID and/or serial number. Those never change, unlike the device file name.
The preferred way to specify a serial device is to make use of the ``hwgrep://`` URL: it allows for selecting the serial device by its USB vendor ID, product
ID and/or serial number. These never change, unlike the device file name.
For instance, if you want to specify the Vendor/Product ID and the USB Serial Number, you can do:
``-d "hwgrep://<VID>:<PID> SNR=<serial_number>"``.
for example:
``-d "hwgrep://0403:faf0 SNR=83852734"``
For instance, if you want to specify the Vendor/Product ID and the USB Serial Number, you can do: ::
$ -d "hwgrep://<VID>:<PID> SNR=<serial_number>"``.
run unit tests?
---------------
@ -124,9 +238,44 @@ The core device tests require the following TTL devices and connections:
If TTL devices are missing, the corresponding tests are skipped.
find the dashboard and browser configuration files are stored?
--------------------------------------------------------------
.. _gui-config-files:
find the dashboard and browser configuration files?
---------------------------------------------------
::
python -c "from artiq.tools import get_user_config_dir; print(get_user_config_dir())"
Additional Resources
====================
Other related documentation
---------------------------
- the `Sinara wiki <https://github.com/sinara-hw/meta/wiki>`_
- the `SiPyCo manual <https://m-labs.hk/artiq/sipyco-manual/>`_
- the `Migen manual <https://m-labs.hk/migen/manual/>`_
- in a pinch, the `M-Labs internal docs <https://git.m-labs.hk/sinara-hw/assembly>`_
For more advanced questions, sometimes the `list of publications <https://m-labs.hk/experiment-control/publications/>`_ about experiments performed using ARTIQ may be interesting. See also the official M-Labs `resources <https://m-labs.hk/experiment-control/resources/>`_ page, especially the section on community code.
"Help, I've done my best and I can't get any further!"
------------------------------------------------------
- If you have an active M-Labs AFWS/support subscription, you can email helpdesk@ at any time for personalized assistance. Please include the following information:
- Your installed ARTIQ version (add ``--version`` to any of the standard ARTIQ commands)
- The variant name of your system (refer to the sticker on the crate if you aren't sure)
- The recent output of your core log, either through ``artiq_coremgmt`` (if you're able to contact your device by network), or over UART following :ref:`the guide here <connecting-UART>`
- How your problem happened, and what you've already tried to fix it
- Compare your materials with the examples; see also :ref:`finding ARTIQ examples <faq-find-examples>` above.
- Check the list of `active issues <https://github.com/m-labs/artiq/issues>`_ on the ARTIQ GitHub repository for possible known problems with ARTIQ. Search through the closed issues to see if your question or concern has been addressed before.
- Search the `M-Labs forum <https://forum.m-labs.hk/>`_ for similar problems, or make a post asking for help yourself.
- Look into the `Mattermost live chat <https://chat.m-labs.hk>`_ or the bridged IRC channel.
- Read the open source code and its docstrings and figure it out.
- If you're reasonably certain you've identified a bug, or if you'd like to suggest a feature that should be included in future ARTIQ releases, `file a GitHub issue <https://github.com/m-labs/artiq/issues/new/choose>`_ yourself, following one of the provided templates.
- In some odd cases, you may want to see the `mailing list archive <https://www.mail-archive.com/artiq@lists.m-labs.hk/>`_; the ARTIQ mailing list was shut down at the end of 2020 and was last regularly used during the time of ARTIQ-2 and 3, but for some older ARTIQ features, or to understand a development thought process, you may still find relevant information there.
In any situation, if you found the manual unclear or unhelpful, you might consider following the :ref:`directions for contribution <build-documentation>` and editing it to be more helpful for future readers.

129
doc/manual/flashing.rst Normal file
View File

@ -0,0 +1,129 @@
(Re)flashing your core device
=============================
.. note::
If you have purchased a pre-assembled system from M-Labs or QUARTIQ, the gateware and firmware of your devices will already be flashed to the newest version of ARTIQ. Flashing your device is only necessary if you obtained your hardware in a different way, or if you want to change your system configuration or upgrade your ARTIQ version after the original purchase. Otherwise, skip straight to :doc:`configuring`.
.. _obtaining-binaries:
Obtaining board binaries
------------------------
If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware for your system that corresponds to your currently installed version of ARTIQ using the ARTIQ firmware service (AFWS). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. The client :mod:`~artiq.frontend.afws_client` is included in all ARTIQ installations.
Run the command::
$ afws_client <username> build <afws_directory> <variant>
Replace ``<username>`` with the login name that was given to you with the subscription, ``<variant>`` with the name of your system variant, and ``<afws_directory>`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email. For more information about :mod:`~artiq.frontend.afws_client` see also the corresponding entry on the :ref:`Utilities <afws-client>` page.
For certain configurations (KC705 or ZC706 only) it is also possible to source firmware from `the M-Labs Hydra server <https://nixbld.m-labs.hk/project/artiq>`_ (in ``main`` and ``zynq`` respectively).
Without a subscription, you may build the firmware yourself from the open source code. See the section :doc:`building_developing`.
Installing and configuring OpenOCD
----------------------------------
.. warning::
These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility :mod:`~artiq.frontend.artiq_flash`. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.
ARTIQ supplies the utility :mod:`~artiq.frontend.artiq_flash`, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the latter is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.
.. note::
With Conda, install ``openocd`` as follows: ::
$ conda install -c m-labs openocd
Some additional steps are necessary to ensure that OpenOCD can communicate with the FPGA board:
On Linux
^^^^^^^^
First ensure that the current user belongs to the ``plugdev`` group (i.e. ``plugdev`` is listed when you run ``$ groups``). If it does not, run ``$ sudo adduser $USER plugdev`` and re-login.
If you installed OpenOCD on Linux using Nix, use the ``which`` command to determine the path to OpenOCD, and then copy the udev rules: ::
$ which openocd
/nix/store/2bmsssvk3d0y5hra06pv54s2324m4srs-openocd-mlabs-0.10.0/bin/openocd
$ sudo cp /nix/store/2bmsssvk3d0y5hra06pv54s2324m4srs-openocd-mlabs-0.10.0/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
$ sudo udevadm trigger
NixOS users should configure OpenOCD through ``/etc/nixos/configuration.nix`` instead.
Linux using Conda
^^^^^^^^^^^^^^^^^
If you are using a Conda environment ``artiq``, then execute the statements below. If you are using a different environment, you will have to replace ``artiq`` with the name of your environment::
$ sudo cp ~/.conda/envs/artiq/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
$ sudo udevadm trigger
On Windows
^^^^^^^^^^
A third-party tool, `Zadig <http://zadig.akeo.ie/>`_, is necessary. It is also included with the MSYS2 offline installer and available from the Start Menu as ``Zadig Driver Installer``. Use it as follows:
1. Make sure the FPGA board's JTAG USB port is connected to your computer.
2. Activate Options → List All Devices.
3. Select the "Digilent Adept USB Device (Interface 0)" or "FTDI Quad-RS232 HS" (or similar)
device from the drop-down list.
4. Select WinUSB from the spinner list.
5. Click "Install Driver" or "Replace Driver".
You may need to repeat these steps every time you plug the FPGA board into a port it has not previously been plugged into, even on the same system.
.. _writing-flash:
Writing the flash
-----------------
First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you can simply connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses :mod:`~artiq.frontend.artiq_coremgmt` to flash over network, an Ethernet connection and an IP address, supplied either with the ``-D`` option or in your :ref:`device database <device-db>`, are sufficient.
For Kasli-SoC or ZC706:
::
$ artiq_coremgmt [-D IP_address] config write -f boot <afws_directory>/boot.bin
$ artiq_coremgmt reboot
If the device is not reachable due to corrupted firmware or networking problems, extract the SD card and copy ``boot.bin`` onto it manually.
For Kasli:
::
$ artiq_flash -d <afws_directory>
For KC705:
::
$ artiq_flash -t kc705 -d <afws_directory>
The SW13 switches need to be set to 00001.
Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for :mod:`~artiq.frontend.artiq_flash` in the :ref:`Utilities <flashing-loading-tool>` reference.
.. _connecting-uart:
Connecting to the UART log
--------------------------
A UART is a peripheral device for asynchronous serial communication; in the case of core device boards, it allows the reading of the UART log, which is used for debugging, especially when problems with booting or networking disallow checking core logs with ``artiq_coremgmt log``. If you had no issues flashing your board you can proceed directly to :doc:`configuring`.
Otherwise, ensure your core device is connected to your PC with a data micro-USB cable, as above, and wait at least fifteen seconds after startup to try to connect. To help find the correct port to connect to, you can list your system's serial devices by running: ::
$ python -m serial.tools.list_ports -v
This will give you the list of ``/dev/ttyUSBx`` or ``COMx`` device names (on Linux and Windows respectively). Most commonly, the correct option is the third, i.e. index number 2, but it can vary.
On Linux:
Run the commands: ::
stty 115200 < /dev/ttyUSBx
cat /dev/ttyUSBx
When you restart or reflash the core device you should see the startup logs in the terminal. If you encounter issues, try other ``ttyUSBx`` names, and make certain that your user is part of the ``dialout`` group (run ``groups`` in a terminal to check).
On Windows:
Use a program such as PuTTY to connect to the COM port. Connect to every available COM port at first, restart the core device, see which port produces meaningful output, and close the others. It may be necessary to install the `FTDI drivers <https://ftdichip.com/drivers/>`_ first.
Note that the correct parameters for the serial port are 115200bps 8-N-1 for every core device.

View File

@ -1,10 +1,5 @@
Getting started with the core language
======================================
.. _connecting-to-the-core-device:
Connecting to the core device
-----------------------------
Getting started with the core device
====================================
As a very first step, we will turn on a LED on the core device. Create a file ``led.py`` containing the following: ::
@ -14,23 +9,25 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
class LED(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("led")
self.setattr_device("led0")
@kernel
def run(self):
self.core.reset()
self.led.on()
self.led0.on()
The central part of our code is our ``LED`` class, which derives from :class:`artiq.language.environment.EnvExperiment`. Among other features, :class:`~artiq.language.environment.EnvExperiment` calls our :meth:`~artiq.language.environment.Experiment.build` method and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` method that interfaces to the device database to create the appropriate device drivers and make those drivers accessible as ``self.core`` and ``self.led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host). The decorator uses ``self.core`` internally, which is why we request the core device using :meth:`~artiq.language.environment.HasEnvironment.setattr_device` like any other.
The central part of our code is our ``LED`` class, which derives from :class:`~artiq.language.environment.EnvExperiment`. Almost all experiments should derive from this class, which provides access to the environment as well as including the necessary experiment framework from the base-level :class:`~artiq.language.environment.Experiment`. It will call our :meth:`~artiq.language.environment.HasEnvironment.build` at the right time and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` we use to gain access to our devices ``core`` and ``led0``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method is a kernel and must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host).
You will need to supply the correct device database for your core device; it is generated by a Python script typically called ``device_db.py`` (see also :ref:`device_db`). If you purchased a system from M-Labs, the device database is provided either on the USB stick or inside ~/artiq on the NUC; otherwise, you can also find examples in the ``examples`` folder of ARTIQ, sorted inside the corresponding subfolder for your core device. Copy ``device_db.py`` into the same directory as ``led.py`` (or use the ``--device-db`` option of ``artiq_run``). The field ``core_addr``, placed at the top of the file, needs to match the IP address of your core device so your computer can communicate with it. If you purchased a pre-assembled system it is normally already set correctly.
Before you can run the example experiment, you will need to supply ARTIQ with the device database for your system, just as you did when configuring the core device. Make sure ``device_db.py`` is in the same directory as ``led.py``. Check once again that the field ``core_addr``, placed at the top of the file, matches the current IP address of your core device.
If you don't have a ``device_db.py`` for your system, consult :ref:`device-db` to find out how to construct one. You can also find example device databases in the ``examples`` folder of ARTIQ, sorted into corresponding subfolders by core device, which you can edit to match your system.
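If the device database is not in the same directory as ``led.py``, you can also point :mod:`~artiq.frontend.artiq_run` at it explicitly (the path below is illustrative): ::

    $ artiq_run --device-db ~/my-system/device_db.py led.py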
.. note::
To access the examples, you can find where the ARTIQ package is installed on your machine with: ::
To access the examples, find where the ARTIQ package is installed on your machine with: ::
python3 -c "import artiq; print(artiq.__path__[0])"
Run your code using ``artiq_run``, which is part of the ARTIQ front-end tools: ::
Run your code using :mod:`~artiq.frontend.artiq_run`, which is one of the ARTIQ front-end tools: ::
$ artiq_run led.py
@ -39,9 +36,9 @@ The process should terminate quietly and the LED of the device should turn on. C
Host/core device interaction (RPC)
----------------------------------
A method or function running on the core device (which we call a "kernel") may communicate with the host by calling non-kernel functions that may accept parameters and may return a value. The "remote procedure call" (RPC) mechanisms handle automatically the communication between the host and the device of which function to call, with which parameters, and what the returned value is.
A method or function running on the core device (which we call a "kernel") may communicate with the host by calling non-kernel functions that may accept parameters and may return a value. The "remote procedure call" (RPC) mechanisms automatically handle the communication between the host and the device, conveying between them what function to call, what parameters to call it with, and the resulting value, once returned.
Modify the code as follows: ::
Modify ``led.py`` as follows: ::
def input_led_state() -> TBool:
return input("Enter desired LED state: ") == "1"
@ -49,7 +46,7 @@ Modify the code as follows: ::
class LED(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("led")
self.setattr_device("led0")
@kernel
def run(self):
@ -57,9 +54,9 @@ Modify the code as follows: ::
s = input_led_state()
self.core.break_realtime()
if s:
self.led.on()
self.led0.on()
else:
self.led.off()
self.led0.off()
You can then turn the LED off and on by entering 0 or 1 at the prompt that appears: ::
@ -69,15 +66,11 @@ You can then turn the LED off and on by entering 0 or 1 at the prompt that appea
$ artiq_run led.py
Enter desired LED state: 0
What happens is the ARTIQ compiler notices that the :meth:`input_led_state` function does not have a ``@kernel`` decorator (:func:`~artiq.language.core.kernel`) and thus must be executed on the host. When the core device calls it, it sends a request to the host to execute it. The host displays the prompt, collects user input, and sends the result back to the core device, which sets the LED state accordingly.
What happens is that the ARTIQ compiler notices that the ``input_led_state`` function does not have a ``@kernel`` decorator (:func:`~artiq.language.core.kernel`) and thus must be executed on the host. When the function is called on the core device, it sends a request to the host, which executes it. The core device waits until the host returns, and then continues the kernel; in this case, the host displays the prompt, collects user input, and the core device sets the LED state accordingly.
RPC functions must always return a value of the same type. When they return a value that is not ``None``, the compiler should be informed in advance of the type of the value, which is what the ``-> TBool`` annotation is for.
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :func:`self.led.on()` or :func:`self.led.off()` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`.
These events would fail because the RPC to :meth:`input_led_state()` can take an arbitrary amount of time and therefore the deadline for submission of RTIO events would have long passed when :func:`self.led.on()` or :func:`self.led.off()` are called.
The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change.
It advances the timeline far enough to ensure that events can meet the submission deadline.
The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. See also :ref:`compiler-types`.
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :doc:`rtio` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
Real-time Input/Output (RTIO)
-----------------------------
@ -102,23 +95,15 @@ Create a new file ``rtio.py`` containing the following: ::
delay(2*us)
self.ttl0.pulse(2*us)
In its :meth:`~artiq.language.environment.Experiment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database.
In ARTIQ, TTL is used roughly synonymous with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.
In its :meth:`~artiq.language.environment.HasEnvironment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database. In ARTIQ, TTL is used roughly synonymous with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.
When :meth:`~artiq.language.environment.Experiment.run`, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to.
Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode.
There are no input-only TTL channels.
When :meth:`~artiq.language.environment.Experiment.run`, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to. Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode. There are no input-only TTL channels.
The experiment then drives one million 2 µs long pulses separated by 2 µs each.
Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run.py rtio.py``.
Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%.
This is not what you would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled.
The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc, all of which are hard to predict and variable.
Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.
The experiment then drives one million 2 µs long pulses separated by 2 µs each. Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run rtio.py``. Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%. This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable. Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.
Instead, inside the core device, output timing is generated by the gateware and the CPU only programs switching commands with certain timestamps that the CPU computes.
This guarantees precise timing as long as the CPU can keep generating timestamps that are increasing fast enough. In case it fails to do that (and attempts to program an event with a timestamp smaller than the current RTIO clock timestamp), a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` exception is raised. The kernel causing it may catch it (using a regular ``try... except...`` construct), or it will be propagated to the host.
This guarantees precise timing as long as the CPU can keep generating timestamps that are increasing fast enough. In the case that it fails to do so (and attempts to program an event with a timestamp smaller than the current RTIO clock timestamp), :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` is raised. The kernel causing it may catch it (using a regular ``try... except...`` construct), or allow it to propagate to the host.
Try reducing the period of the generated waveform until the CPU cannot keep up with the generation of switching events and the underflow exception is raised. Then try catching it: ::
@ -147,23 +132,32 @@ Try reducing the period of the generated waveform until the CPU cannot keep up w
Parallel and sequential blocks
------------------------------
It is often necessary that several pulses overlap one another. This can be expressed through the use of ``with parallel`` constructs, in which the events generated by the individual statements are executed at the same time. The duration of the ``parallel`` block is the duration of its longest statement.
It is often necessary for several pulses to overlap one another. This can be expressed through the use of the ``with parallel`` construct, in which the events generated by individual statements are scheduled to execute at the same time, rather than sequentially. The duration of the ``parallel`` block is the duration of its longest statement.
Try the following code and observe the generated pulses on a 2-channel oscilloscope or logic analyzer: ::
for i in range(1000000):
with parallel:
self.ttl0.pulse(2*us)
self.ttl1.pulse(4*us)
delay(4*us)
from artiq.experiment import *
ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches.
It simply remembers the position on the timeline when entering the ``parallel`` block and then seeks back to that position after submitting the events generated by each statement.
In other words, the statements in the ``parallel`` block are actually executed sequentially, only the RTIO events generated by them are scheduled to be executed in parallel.
Note that if a statement takes a lot of CPU time to execute (this is different from the events scheduled by a statement taking a long time), it may cause a subsequent statement to miss the deadline for timely submission of its events.
This then causes a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` exception to be raised.
class Tutorial(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("ttl0")
self.setattr_device("ttl1")
Within a parallel block, some statements can be made sequential again using a ``with sequential`` construct. Observe the pulses generated by this code: ::
@kernel
def run(self):
self.core.reset()
for i in range(1000000):
with parallel:
self.ttl0.pulse(2*us)
self.ttl1.pulse(4*us)
delay(4*us)
ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches. It simply remembers its position on the timeline (``now_mu``) when entering the ``parallel`` block and resets to that position after each individual statement. At the end of the block, the cursor is advanced to the furthest position it reached during the block. In other words, the statements in a ``parallel`` block are actually executed sequentially. Only the RTIO events generated by the statements are *scheduled* in parallel.
Remember that while ``now_mu`` resets at the beginning of each statement in a ``parallel`` block, the wall clock advances regardless. If a particular statement takes a long time to execute (which is different from -- and unrelated to! -- the events *scheduled* by the statement taking a long time), the wall clock may advance past the reset value, putting any subsequent statements inside the block into a situation of negative slack (i.e., resulting in :exc:`~artiq.coredevice.exceptions.RTIOUnderflow`). Sometimes underflows may be avoided simply by reordering statements within the parallel block. This especially applies to input methods, which necessarily block CPU progress until the wall clock has caught up to or overtaken the cursor.
Within a parallel block, some statements can be scheduled sequentially again using a ``with sequential`` block. Observe the pulses generated by this code: ::
for i in range(1000000):
with parallel:
@ -174,22 +168,35 @@ Within a parallel block, some statements can be made sequential again using a ``
self.ttl1.pulse(4*us)
delay(4*us)
Particular care needs to be taken when working with ``parallel`` blocks in cases where a large number of RTIO events are generated, as it is possible to create sequencing errors (`RTIO sequence error`). Sequence errors do not halt execution of the kernel for performance reasons and instead are reported in the core log. If the ``aqctl_corelog`` process has been started with ``artiq_ctlmgr``, then these errors will be posted to the master log. However, if an experiment is executed through ``artiq_run``, these errors will not be visible outside of the core log.
.. warning::
``with parallel`` specifically 'parallelizes' the *top-level* statements inside a block. Consider as an example:
A sequence error is caused when the scalable event dispatcher (SED) cannot queue an RTIO event due to its timestamp being the same as or earlier than another event in its queue. By default, the SED has 8 lanes, which allows ``parallel`` events to work without sequence errors in most cases; however, if many (>8) events are queued with conflicting timestamps, this error can surface.
.. code-block::
:linenos:
These errors can usually be overcome by reordering the generation of the events. Alternatively, the number of SED lanes can be increased in the gateware.
for i in range(1000000):
with parallel:
self.ttl0.pulse(2*us)
if True:
self.ttl1.pulse(2*us)
self.ttl2.pulse(2*us)
delay(4*us)
.. _rtio-analyzer-example:
This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of line #4; it will not repeat the reset at the deeper indentation level for #5 or #6.
In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 us. Internally, lines #5 and #6, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To schedule #5 and #6 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to cause sequencing issues in the gateware; see also :ref:`sequence-errors`.
.. _rtio-analyzer:
RTIO analyzer
-------------
The core device records the real-time I/O waveforms into a circular buffer. It is possible to dump any Python object so that it appears alongside the waveforms using the ``rtio_log`` function, which accepts a channel name (i.e. a log target) as the first argument: ::
The core device records all real-time I/O waveforms, as well as the variation of RTIO slack, into a circular buffer, the contents of which can be extracted using :mod:`~artiq.frontend.artiq_coreanalyzer`. Try for example: ::
from artiq.experiment import *
class Tutorial(EnvExperiment):
def build(self):
self.setattr_device("core")
@ -198,18 +205,37 @@ The core device records the real-time I/O waveforms into a circular buffer. It i
@kernel
def run(self):
self.core.reset()
for i in range(100):
self.ttl0.pulse(...)
rtio_log("ttl0", "i", i)
delay(...)
for i in range(5):
self.ttl0.pulse(0.1 * ms)
delay(0.1 * ms)
Afterwards, the recorded data can be extracted and written to a VCD file using ``artiq_coreanalyzer -w rtio.vcd`` (see: :ref:`core-device-rtio-analyzer-tool`). VCD files can be viewed using third-party tools such as GtkWave.
When using :mod:`~artiq.frontend.artiq_run`, the recorded buffer data can be extracted directly into the terminal, using a command in the form of: ::
$ artiq_coreanalyzer -p
.. note::
The first time this command is run, it will retrieve the entire contents of the analyzer buffer, which may include every experiment you have run so far. For a more manageable introduction, run the analyzer once to clear the buffer, run the experiment, and then run the analyzer a second time, so that only the data from this single experiment is displayed.
This will produce a list of the exact output events submitted to RTIO, printed in chronological order, along with the state of both ``now_mu`` and ``rtio_counter_mu``. While useful in diagnosing some specific gateware errors (in particular, :ref:`sequencing issues <sequence-errors>`), it isn't the most readable of formats. An alternate is to export to VCD, which can be viewed using third-party tools such as GTKWave. Run the experiment again, and use a command in the form of: ::
$ artiq_coreanalyzer -w <file_name>.vcd
The ``<file_name>.vcd`` file should be immediately created and written. Check the directory the command was run in to find it.
.. tip::
Tutorials on GTKWave options (or other third-party tools) and how best to view VCD files can be found online. By default, the data in a trace like ``rtio_slack`` will probably be presented in a raw form. To see a stepped wave as in the ARTIQ dashboard, look for options to interpret the data as a real number, then as an analog signal.
Pay attention to the timescale of the waveform dock in your chosen viewer; if you have set your signals to display but nothing is visible, it is likely zoomed in or out much too far.
The easiest way to view recorded analyzer data, however, is directly in the ARTIQ dashboard, a feature which will be presented later in :ref:`interactivity-waveform`.
.. _getting-started-dma:
Direct Memory Access (DMA)
--------------------------
DMA allows you to store fixed sequences of pulses in system memory, and have the DMA core in the FPGA play them back at high speed. Pulse sequences that are too fast for the CPU (i.e. would cause RTIO underflows) can still be generated using DMA. The only modification of the sequence that the DMA core supports is shifting it in time (so it can be played back at any position of the timeline), everything else is fixed at the time of recording the sequence.
DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up' to ``now_mu``. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.
Try this: ::
@ -225,7 +251,7 @@ Try this: ::
@kernel
def record(self):
with self.core_dma.record("pulses"):
# all RTIO operations now go to the "pulses"
# DMA buffer, instead of being executed immediately.
for i in range(50):
self.ttl0.pulse(100*ns)
@ -244,137 +270,7 @@ Try this: ::
# each playback advances the timeline by 50*(100+100) ns
self.core_dma.playback_handle(pulses_handle)
Distributed Direct Memory Access (DDMA)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default on DRTIO systems, all events recorded by the DMA core are kept and played back on the master.
With distributed DMA, RTIO events that should be played back on remote destinations are distributed to the corresponding satellites. In some cases (typically, large buffers on several satellites with high event throughput), it allows for better performance and higher bandwidth, as the RTIO events do not have to be sent over the DRTIO link(s) during playback.
To enable distributed DMA, simply provide an ``enable_ddma=True`` argument for the :meth:`~artiq.coredevice.dma.CoreDMA.record` method - taking a snippet from the previous example: ::
@kernel
def record(self):
with self.core_dma.record("pulses", enable_ddma=True):
# all RTIO operations now go to the "pulses"
# DMA buffer, instead of being executed immediately.
for i in range(50):
self.ttl0.pulse(100*ns)
delay(100*ns)
This argument is ignored on standalone systems, as it does not apply there.
Enabling DDMA on a purely local sequence on a DRTIO system introduces overhead during trace recording, which comes from the additional processing done on the record, so careful use is advised.
Due to the extra time that communicating with relevant satellites takes, an additional delay before playback may be necessary to prevent a :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` when playing back a DDMA-enabled sequence.
Subkernels
----------
Subkernels refer to kernels running on a satellite device. This allows you to offload some of the processing and control over remote RTIO devices, freeing up resources on the master.
Subkernels behave for the most part like regular kernels: they accept arguments and can return values. However, there are a few caveats:
- they do not support RPCs,
- they do not support DRTIO,
- their return value must be fully annotated with an ARTIQ type,
- their arguments should be annotated, and only basic ARTIQ types are supported,
- while ``self`` is allowed, there is no attribute writeback - any changes to it will be discarded when the subkernel is done,
- they can raise exceptions, but they cannot be caught by the master,
- they begin execution as soon as possible when called, and they can be awaited.
To define a subkernel, use the subkernel decorator (``@subkernel(destination=X)``). The destination is the satellite number as defined in the routing table, and must be between 1 and 255. To call a subkernel, call it like a normal function; to await its result, use the ``subkernel_await(function, [timeout])`` built-in function.
For example, a subkernel performing integer addition: ::
from artiq.experiment import *
@subkernel(destination=1)
def subkernel_add(a: TInt32, b: TInt32) -> TInt32:
return a + b
class SubkernelExperiment(EnvExperiment):
def build(self):
self.setattr_device("core")
@kernel
def run(self):
subkernel_add(2, 2)
result = subkernel_await(subkernel_add)
assert result == 4
Subkernel execution may sometimes take a long time. By default, the await function will wait forever; if a timeout is needed, ``subkernel_await()`` accepts an optional second argument. The value is interpreted in milliseconds; if it is negative, the timeout is disabled.
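For instance, building on the addition example above: ::

    result = subkernel_await(subkernel_add, 2000)  # give up after 2000 ms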
Subkernels are compiled after the main kernel and then immediately uploaded to satellites. When called, the master instructs the appropriate satellite to load the subkernel into its kernel core and run it. If the subkernel is complex and its binary relatively big, the delay between the call and actually running the subkernel may be substantial; if that delay has to be minimized, ``subkernel_preload(function)`` should be used before the call.
While ``self`` is accepted as an argument for subkernels, it is embedded into the compiled data. Any changes made by the main kernel or other subkernels, will not be available.
Subkernels can call other kernels and subkernels. For a more complex example: ::
    from artiq.experiment import *

    class SubkernelExperiment(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")
            self.setattr_device("ttl8")  # assuming it's on satellite

        @subkernel(destination=1)
        def add_and_pulse(self, a: TInt32, b: TInt32) -> TInt32:
            c = a + b
            self.pulse_ttl(c)
            return c

        @subkernel(destination=1)
        def pulse_ttl(self, delay: TInt32) -> TNone:
            self.ttl8.pulse(delay*us)

        @kernel
        def run(self):
            subkernel_preload(self.add_and_pulse)
            self.core.reset()
            delay(10*ms)
            self.add_and_pulse(2, 2)
            self.ttl0.pulse(15*us)
            result = subkernel_await(self.add_and_pulse)
            assert result == 4
            self.pulse_ttl(20)
Without the preload, the delay after the core reset would need to be longer. Preloading itself is still an operation that can take some time, depending on the connection. Notice that the method ``pulse_ttl()`` can be called both from within another subkernel and on its own.
In general, subkernels do not have to be awaited, but awaiting is required to retrieve returned values and exceptions.
.. note::
When a subkernel is running, RTIO devices on that satellite are not available to the master, regardless of whether the subkernel actually uses them. Control is returned to the master only after the subkernel finishes; to be sure that a device can be used, await the subkernel before performing any RTIO operations on the affected satellite.
Only output events are redirected to the DMA core. Input methods inside a ``with dma`` block will be called as they would be outside of the block, in the current real-time context, and input events will be buffered normally, not to DMA.
Message passing
^^^^^^^^^^^^^^^
Besides arguments and return values, subkernels can also pass messages between each other or the master with the built-in ``subkernel_send()`` and ``subkernel_recv()`` functions. This can be used for communication between subkernels, or for passing additional or partially computed data. Consider the following example: ::
    from artiq.experiment import *

    @subkernel(destination=1)
    def simple_message() -> TInt32:
        data = subkernel_recv("message", TInt32)
        return data + 20

    class MessagePassing(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            simple_message()
            subkernel_send(1, "message", 150)
            result = subkernel_await(simple_message)
            assert result == 170
The ``subkernel_send(destination, name, value)`` function takes three arguments: the destination, the name of the message (which will be used to match it to a ``subkernel_recv()``), and the value to pass.
The ``subkernel_recv(name, type, [timeout])`` function takes two mandatory arguments: the message name (matching the name given in ``subkernel_send()``) and the expected type. Optionally, it accepts a third argument, a timeout for the operation in milliseconds. If the value is negative, the timeout is disabled; the default is no timeout.
The name acts as a link between the two functions and must match exactly for a successful message transaction. The type of the value sent by ``subkernel_send()`` is checked against the type declared in the ``subkernel_recv()`` with the same name, to avoid misinterpretation of the data. To help catch typos, the compiler also checks that every message name has both a sending and a receiving function. However, it cannot help if the wrong names are used at runtime; the receiver will wait for a matching message only for the duration of the timeout.
A message can only be received while a subkernel is running, and incoming messages are buffered until they are requested, so the sending order cannot cause a deadlock. However, a subkernel may time out or wait forever if the destination or name does not match (e.g. if the message was sent to the wrong destination, or under a different name than expected, even if the types match).
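As a sketch of message passing between two subkernels (assuming a DRTIO system with satellites at destinations 1 and 2 in the routing table; names and values are arbitrary): ::

    from artiq.experiment import *

    @subkernel(destination=1)
    def produce() -> TNone:
        subkernel_send(2, "measurement", 42)

    @subkernel(destination=2)
    def consume() -> TInt32:
        # wait at most 100 ms for a message named "measurement"
        return subkernel_recv("measurement", TInt32, 100)

    class SatelliteMessaging(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            consume()    # starts on destination 2 and waits for the message
            produce()
            assert subkernel_await(consume) == 42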
For more documentation on the methods used, see the :mod:`artiq.coredevice.dma` reference.


@ -1,24 +1,36 @@
Getting started with the management system
==========================================
Using the management system
===========================
The management system is the high-level part of ARTIQ that schedules the experiments, distributes and stores the results, and manages devices and parameters.
In practice, rather than managing experiments by executing :mod:`~artiq.frontend.artiq_run` over and over, most use cases are better served by using the ARTIQ *management system*. This is the high-level application part of ARTIQ, which can be used to schedule experiments, manage devices and parameters, and distribute and store results. It also allows for distributed use of ARTIQ, with a single master coordinating demands on the system issued over the network by multiple clients. Using this system, multiple users on different machines can schedule experiments or analyze results on the same ARTIQ system, potentially simultaneously, without interfering with each other.
The manipulations described in this tutorial can be carried out using a single computer, without any special hardware.
The management system consists of at least two parts:
Starting your first experiment with the master
----------------------------------------------
a. the **ARTIQ master,** which runs on a single machine, facilitates communication with the core device and peripherals, and is responsible for most of the actual duties of the system,
b. one or more **ARTIQ clients,** which may be local or remote and which communicate only with the master. Both a GUI (the **dashboard**) and a straightforward **command line client** are provided, with many of the same capabilities.
In the previous tutorial, we used the ``artiq_run`` utility to execute our experiments, which is a simple stand-alone tool that bypasses the ARTIQ management system. We will now see how to run an experiment using the master (the central program in the management system that schedules and executes experiments) and the dashboard (that connects to the master and controls it).
as well as, optionally,
First, create a folder ``~/artiq-master`` and copy the file ``device_db.py`` (containing the device database) found in the ``examples/master`` directory from the ARTIQ sources. The master uses those files in the same way as ``artiq_run``.
c. one or more **controller managers**, which coordinate the operation of certain (generally, non-realtime) classes of device and provide certain services to the clients,
d. and one or more instances of the **ARTIQ browser**, a GUI application designed to facilitate the analysis of experiment results and datasets.
Then create a ``~/artiq-master/repository`` sub-folder to contain experiments. The master scans this ``repository`` folder to determine what experiments are available (the name of the folder can be changed using ``-r``).
In this tutorial, we will explore the basic operation of the management system. Because the various components of the management system run wholly on the host machine, and not on the core device (in other words, they do not inherently involve any kernel functions), it is not necessary to have a core device or any specialized hardware set up to use it. The examples in this tutorial can all be carried out using only your host computer.
Running your first experiment with the master
---------------------------------------------
Until now, we have executed experiments using :mod:`~artiq.frontend.artiq_run`, which is a simple standalone tool that bypasses the management system. We will now see how to run an experiment using a master and a client. In this arrangement, the master is responsible for communicating with the core device, scheduling and keeping track of experiments, and carrying out RPCs the core device may call for. Clients submit experiments to the master to be scheduled and, if necessary, query the master about the state of experiments and their results.
First, create a folder called ``~/artiq-master``. Copy into it the ``device_db.py`` for your system (your device database, exactly as in :doc:`getting_started_core`); the master uses the device database in the same way as :mod:`~artiq.frontend.artiq_run` when communicating with the core device.
.. tip::
Since no devices are actually used in these examples, you can also use a device database in the model of the ``device_db.py`` from ``examples/no_hardware``, which uses resources from ``artiq/sim`` instead of referencing or requiring any real local hardware.
Secondly, create a subfolder ``~/artiq-master/repository`` to contain experiments. By default, the master scans for a folder of this name to determine what experiments are available; if you'd prefer to use a different name, this can be changed by running ``artiq_master -r [folder name]`` instead of ``artiq_master`` below. Experiments don't have to be in the repository to be submitted to the master, but the repository contains those experiments the master is automatically aware of.
Create a very simple experiment in ``~/artiq-master/repository`` and save it as ``mgmt_tutorial.py``: ::
    from artiq.experiment import *

    class MgmtTutorial(EnvExperiment):
        """Management tutorial"""
        def build(self):
@ -27,104 +39,204 @@ Create a very simple experiment in ``~/artiq-master/repository`` and save it as
        def run(self):
            print("Hello World")
Start the master with: ::
$ cd ~/artiq-master
$ artiq_master
This last command should not return, as the master keeps running.
This command should display ``ARTIQ master is now ready`` and not return, as the master keeps running. In another terminal, use the client to request this experiment: ::
Now, start the dashboard with the following commands in another terminal: ::
$ artiq_client submit repository/mgmt_tutorial.py
This command should print a message in the format ``RID: 0``, telling you the scheduling ID assigned to the experiment by the master, and exit. Note that it doesn't matter *where* the client is run; the client does not require direct access to ``device_db.py`` or the repository folder, and only directly communicates with the master. Relatedly, the path to an experiment a client submits is given relative to the location of the *master*, not the client.
Return to the terminal where the master is running. You should see an output similar to: ::
INFO:worker(0,mgmt_tutorial.py):print:Hello World
In other words, a worker created by the master has executed the experiment and carried out the print instruction. Congratulations!
.. tip::
In order to run the master and the clients on different PCs, start the master with a ``--bind`` flag: ::
$ artiq_master --bind [hostname or IP to bind to]
and then use the option ``--server`` or ``-s`` for clients, as in: ::
$ artiq_client -s [hostname or IP of the master]
$ artiq_dashboard -s [hostname or IP of the master]
Both IPv4 and IPv6 are supported. See also the individual references :mod:`~artiq.frontend.artiq_master`, :mod:`~artiq.frontend.artiq_dashboard`, and :mod:`~artiq.frontend.artiq_client` for more details.
You may also notice that the master has created some other organizational files in its home directory, notably a folder ``results``, where an HDF5 record is preserved of every experiment that is submitted and run. The files in ``results`` will be discussed in greater detail in :doc:`using_data_interfaces`.
Running the dashboard and controller manager
--------------------------------------------
Submitting experiments with :mod:`~artiq.frontend.artiq_client` has some interesting qualities: for instance, experiments can be requested simultaneously by different clients and be relied upon to execute cleanly in sequence, which is useful in a distributed context. On the other hand, on a local level, it doesn't necessarily carry many practical advantages over using :mod:`~artiq.frontend.artiq_run`. The real convenience of the management system lies in its GUI, the dashboard. We will now try submitting an experiment using the dashboard.
First, start the controller manager: ::
$ artiq_ctlmgr
Like the master, this command should not return, as the controller manager keeps running. Note that the controller manager requires access to the device database, but not in the local directory -- it gets that access automatically by connecting to the master.
.. note::
We will not be using controllers in this part of the tutorial. Nonetheless, the dashboard will expect to be able to contact certain controllers given in the device database, and print error messages if this isn't the case (e.g. ``Is aqctl_moninj_proxy running?``). It is equally possible to check your device database and start the requisite controllers manually, or to temporarily delete their entries from ``device_db.py``, but it's normally quite convenient to let the controller manager handle things. The role and use of controller managers will be covered in more detail in :doc:`using_data_interfaces`.
In a third terminal, start the dashboard: ::
$ cd ~
$ artiq_dashboard
.. note:: The ``artiq_dashboard`` program uses a file called ``artiq_dashboard.pyon`` in the current directory to save and restore the GUI state (window/dock positions, last values entered by the user, etc.).
Like :mod:`~artiq.frontend.artiq_client`, the dashboard requires no direct access to the device database or the repository. It communicates with the master to receive information about available experiments and the state of the system.
The dashboard should display the list of experiments from the repository folder in a dock called "Explorer". There should be only the experiment we created. Select it and click "Submit", then look at the "Log" dock for the output from this simple experiment.
You should see the list of experiments from the ``repository`` in the dock called 'Explorer'. In our case, this will only be the single experiment we created, listed by the name we gave it in the docstring inside the triple quotes, "Management tutorial". Select it, and in the window that opens, click 'Submit'.
.. note:: Multiple clients may be connected at the same time, possibly on different machines, and will be synchronized. See the ``-s`` option of ``artiq_dashboard`` and the ``--bind`` option of ``artiq_master`` to use the network. Both IPv4 and IPv6 are supported.
This time you will find the output displayed directly in the dock called 'Log'. The dashboard log combines the master's console output, the dashboard's own logs, and the device logs of the core device itself (if there is one in use); normally, this is the only log it's necessary to check.
Adding an argument
------------------
Adding a new experiment
-----------------------
Experiments may have arguments whose values can be set in the dashboard and used in the experiment's code. Modify the experiment as follows: ::
Create a new file in your ``repository`` folder, called ``timed_tutorial.py``: ::
    from artiq.experiment import *
    import time

    class TimedTutorial(EnvExperiment):
        """Timed tutorial"""
        def build(self):
            pass # no devices used

        def run(self):
            print("Hello World")
            time.sleep(10)
            print("Goodnight World")
Save it. You will notice that it does not immediately appear in the 'Explorer' dock. For stability reasons, the master operates with a cached idea of the repository, and changes in the file system will often not be reflected until a *repository rescan* is triggered.
You can ask it to do this through the command-line client: ::
$ artiq_client scan-repository
or you can right-click in the Explorer and select 'Scan repository HEAD'. Now you should be able to select and submit the new experiment.
If you switch the 'Log' dock to its 'Schedule' tab while the experiment is still running, you will see the experiment appear, displaying its RID, status, priority, and other information. Click 'Submit' again while the first experiment is in progress, and a second iteration of the experiment will appear in the Schedule, queued up to execute next in line.
.. note::
You may have noted that experiments can be submitted with a due date, a priority level, a pipeline identifier, and other specific settings. Some of these are self-explanatory. Many are scheduling-related. For more information on experiment scheduling, see :ref:`experiment-scheduling`.
In the meantime, you can try out submitting either of the two experiments with different priority levels and take a look at the queues that ensue. If you are interested, you can try submitting experiments through the command line client at the same time, or even open a second dashboard in a different terminal. Observe that no matter the source, all submitted experiments will be accounted for and handled by the scheduler in an orderly way.
.. _mgmt-arguments:
Adding arguments
----------------
Experiments may have arguments, values which can be set in the dashboard on submission and used in the experiment's code. Create a new experiment called ``argument_tutorial.py``, and give it the following :meth:`~artiq.language.environment.HasEnvironment.build` and :meth:`~artiq.language.environment.Experiment.run` functions: ::
    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))

    def run(self):
        for i in range(self.count):
            print("Hello World", i)
The method :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` acts to set the argument and make its value accessible, similar to the effect of :meth:`~artiq.language.environment.HasEnvironment.setattr_device`. The second input sets the type of the argument; here, :class:`~artiq.language.environment.NumberValue` represents a floating point numerical value. To learn what other types are supported, see :class:`artiq.language.environment` and :class:`artiq.language.scan`.
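For illustration, a ``build()`` method combining several common argument types might look like this (a sketch; the argument names are arbitrary): ::

    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))
        self.setattr_argument("enable_feedback", BooleanValue(True))
        self.setattr_argument("comment", StringValue("no comment"))
        self.setattr_argument("mode", EnumerationValue(["fast", "precise"], "fast"))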
Rescan the repository as before. Open the new experiment in the dashboard. Above the submission options, you should now see a spin box that allows you to set the value of ``count``. Try setting it and submitting it.
Interactive arguments
---------------------
With standard arguments, it is only possible to use :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` in :meth:`~artiq.language.environment.HasEnvironment.build`; these arguments are always requested at submission time. However, it is also possible to use *interactive* arguments, which can be requested and supplied inside :meth:`~artiq.language.environment.Experiment.run`, while the experiment is being executed. Modify the experiment as follows (and push the result): ::
    def build(self):
        pass

    def run(self):
        repeat = True
        while repeat:
            print("Hello World")
            with self.interactive(title="Repeat?") as interactive:
                interactive.setattr_argument("repeat", BooleanValue(True))
            repeat = interactive.repeat
Close and reopen the submission window, or click on the button labeled 'Recompute all arguments', in order to update the submission parameters. Submit again. It should print once, then wait; you may notice in 'Schedule' that the experiment does not exit, but hangs at status 'running'.
Now, in the same dock as 'Explorer', navigate to the tab 'Interactive Args'. You can now choose and submit a value for 'repeat'. Every time an interactive argument is requested, the experiment pauses until an input is supplied.
.. note::
If you choose to 'Cancel' instead, a :exc:`~artiq.language.environment.CancelledArgsError` will be raised (which an experiment can catch, instead of halting).
In order to request and supply multiple interactive arguments at once, simply place them in the same ``with`` block; see also the example ``interactive.py`` in ``examples/no_hardware``.
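For example, a sketch requesting two interactive arguments in a single block (the argument names are arbitrary): ::

    def run(self):
        print("Hello World")
        with self.interactive(title="Next iteration?") as interactive:
            interactive.setattr_argument("repeat", BooleanValue(True))
            interactive.setattr_argument("count", NumberValue(precision=0, step=1))
        if interactive.repeat:
            print("Would repeat", interactive.count, "times")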
``NumberValue`` represents a floating point numeric argument. There are many other types, see :class:`artiq.language.environment` and :class:`artiq.language.scan`.
Use the command-line client to trigger a repository rescan: ::
artiq_client scan-repository
The dashboard should now display a spin box that allows you to set the value of the ``count`` argument. Try submitting the experiment as before.
.. _master-setting-up-git:
Setting up Git integration
--------------------------
So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with the tracking of modifications to experiments and with the traceability of a result to a particular version of an experiment.
So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with tracking modifications to experiments and with tracing results back to the particular version of an experiment that produced them.
.. note:: The workflow we will describe in this tutorial corresponds to a situation where the ARTIQ master machine is also used as a Git server where multiple users may push and pull code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head.
.. note::
The workflow we will describe in this tutorial corresponds to a situation where the computer running the ARTIQ master is also used as a Git server to which multiple users may contribute code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head. See the :ref:`Management system <mgmt-git-integration>` page for notes on alternate workflows.
We will use the current ``repository`` folder as working directory for making local modifications to the experiments, move it away from the master data directory, and create a new ``repository`` folder that holds the Git data used by the master. Stop the master with Ctrl-C and enter the following commands: ::
We will use our current ``repository`` folder as the working directory for making local modifications to the experiments, move it away from the master's data directory, and replace it with a new ``repository`` folder, which will hold only the Git data used by the master. Stop the master with Ctrl+C and enter the following commands: ::
$ cd ~/artiq-master
$ mv repository ~/artiq-work
$ mkdir repository
$ cd repository
$ git init --bare
$ git init --bare
Now, push data into the bare repository. Initialize a regular (non-bare) Git repository in our working directory: ::
Now initialize a regular (non-bare) Git repository in our working directory: ::
$ cd ~/artiq-work
$ git init
$ git init
Then commit our experiment: ::
Then add and commit our experiments: ::
$ git add mgmt_tutorial.py
$ git commit -m "First version of the tutorial experiment"
$ git add timed_tutorial.py
$ git commit -m "First version of the tutorial experiments"
and finally, push the commit into the master's bare repository: ::
and finally, connect the two repositories and push the commit upstream to the master's repository: ::
$ git remote add origin ~/artiq-master/repository
$ git push -u origin master
Start the master again with the ``-g`` flag, telling it to treat the contents of the ``repository`` folder (not ``artiq-work``) as a bare Git repository: ::
.. tip::
If you are not familiar with command-line Git and would like to understand these commands in more detail, search for some tutorials in basic use of Git; there are many available online.
Start the master again with the ``-g`` flag, which tells it to treat its ``repository`` folder as a bare Git repository: ::
$ cd ~/artiq-master
$ artiq_master -g
.. note:: You need at least one commit in the repository before you can start the master.
.. note::
Note that you need at least one commit in the repository before the master can be started.
There should be no errors displayed, and if you start the GUI again, you will find the experiment there.
Now you should be able to restart the dashboard and see your experiments there.
To complete the master configuration, we must tell Git to make the master rescan the repository when new data is added to it. Create a file ``~/artiq-master/repository/hooks/post-receive`` with the following contents: ::
To make things more convenient, we will make Git tell the master to rescan the repository whenever new data is pushed from downstream. Create a file ``~/artiq-master/repository/hooks/post-receive`` with the following contents: ::
#!/bin/sh
artiq_client scan-repository --async
Then set the execution permission on it: ::
Then set its execution permissions: ::
$ chmod 755 ~/artiq-master/repository/hooks/post-receive
$ chmod 755 repository/hooks/post-receive
.. note:: Remote machines may also push and pull into the master's bare repository using e.g. Git over SSH.
.. note::
Remote client machines may also push and pull into the master repository, using e.g. Git over SSH.
Let's now make a modification to the experiment. In the source present in the working directory, add an exclamation mark at the end of "Hello World". Before committing it, check that the experiment can still be executed correctly by running it directly from the filesystem using: ::
Let's now make a modification to the experiments. In the working directory ``artiq-work``, open ``mgmt_tutorial.py`` again and add an exclamation mark to the end of "Hello World". Before committing it, check that the experiment can still be executed correctly by submitting it directly from the working directory, using the command-line client: ::
$ artiq_client submit ~/artiq-work/mgmt_tutorial.py
.. note:: You may also use the "Open file outside repository" feature of the GUI, by right-clicking on the explorer.
.. note:: Submitting an experiment from the repository using the ``artiq_client`` command-line tool is done using the ``-R`` flag.
.. note::
Alternatively, right-click in the Explorer dock and select the 'Open file outside repository' option for the same effect.
Verify the log in the GUI. If you are happy with the result, commit the new version and push it into the master's repository: ::
@ -132,33 +244,17 @@ Verify the log in the GUI. If you are happy with the result, commit the new vers
$ git commit -a -m "More enthusiasm"
$ git push
.. note:: Notice that commands other than ``git push`` are not needed anymore.
Notice that commands other than ``git commit`` and ``git push`` are no longer necessary. The Git hook should cause a repository rescan automatically, and submitting the experiment in the dashboard should run the new version, with enthusiasm included.
The master should now run the new version from its repository.
The ARTIQ session
-----------------
As an exercise, add another experiment to the repository, commit and push the result, and verify that it appears in the GUI.
Often, you will want to run an instance of the controller manager and dashboard along with the ARTIQ master, whether or not you also intend to allow other clients to connect remotely. For convenience, all three can be started simultaneously with a single command: ::
Datasets
--------
$ artiq_session
Modify the ``run()`` method of the experiment as follows: ::
Arguments to the individual tools (including ``-s`` and ``--bind``) can still be specified using the ``-m``, ``-d`` and ``-c`` options for master, dashboard and manager respectively. Use an equals sign to avoid confusion in parsing, for example: ::
    def run(self):
        self.set_dataset("parabola", np.full(self.count, np.nan), broadcast=True)
        for i in range(self.count):
            self.mutate_dataset("parabola", i, i*i)
            time.sleep(0.5)
$ artiq_session -m=-g
.. note:: You need to import the ``time`` module, and the ``numpy`` module as ``np``.
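Putting the pieces together, the complete modified experiment might read as follows (a sketch consistent with the snippets above): ::

    import time

    import numpy as np

    from artiq.experiment import *

    class MgmtTutorial(EnvExperiment):
        """Management tutorial"""
        def build(self):
            self.setattr_argument("count", NumberValue(precision=0, step=1))

        def run(self):
            self.set_dataset("parabola", np.full(self.count, np.nan), broadcast=True)
            for i in range(self.count):
                self.mutate_dataset("parabola", i, i*i)
                time.sleep(0.5)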
Commit, push and submit the experiment as before. Go to the "Datasets" dock of the GUI and observe that a new dataset has been created. We will now create a new XY plot showing this new result.
Plotting in the ARTIQ dashboard is achieved by programs called "applets". Applets are independent programs that add simple GUI features and are run as separate processes (to achieve goals of modularity and resilience against poorly written applets). Users may write their own applets, or use those supplied with ARTIQ (in the ``artiq.applets`` module) that cover basic plotting.
Applets are configured through their command line to select parameters such as the names of the datasets to plot. The list of command-line options can be retrieved using the ``-h`` option; for example you can run ``python3 -m artiq.applets.plot_xy -h`` in a terminal.
In our case, create a new applet from the XY template by right-clicking on the applet list, and edit the applet command line so that it retrieves the ``parabola`` dataset (the command line should now be ``${artiq_applet}plot_xy parabola``). Run the experiment again, and observe how the points are added one by one to the plot.
After the experiment has finished executing, the results are written to a HDF5 file that resides in ``~/artiq-master/results/<date>/<hour>``. Open that file with HDFView or h5dump, and observe the data we just generated as well as the Git commit ID of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645`` that represents the data at a particular time in the Git repository). The list of Git commit IDs can be found using the ``git log`` command in ``~/artiq-work``.
.. note:: HDFView and h5dump are third-party tools not supplied with ARTIQ.
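If you prefer to inspect results programmatically, a minimal sketch using ``h5py`` (also a third-party library; the file name below is hypothetical and should be replaced with one of your actual result files): ::

    import h5py

    # replace with the path of an actual result file
    with h5py.File("results/2024-08-28/15/000000001-MgmtTutorial.h5", "r") as f:
        print(list(f))                        # top-level contents, including 'datasets'
        print(f["datasets"]["parabola"][()])  # the dataset generated by the experiment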
to start the session with the master in Git mode. See also :mod:`~artiq.frontend.artiq_session`.


@ -1,26 +1,62 @@
ARTIQ documentation
===================
ARTIQ manual
============
.. toctree::
:caption: Contents
:caption: Overview
:maxdepth: 2
introduction
releases
.. toctree::
:caption: Setting up ARTIQ
:maxdepth: 2
installing
developing
release_notes
flashing
configuring
building_developing
.. toctree::
:caption: Tutorials
:maxdepth: 2
rtio
getting_started_core
compiler
getting_started_mgmt
core_device
management_system
using_data_interfaces
using_drtio_subkernels
.. toctree::
:caption: ARTIQ components
:maxdepth: 2
environment
compiler
management_system
drtio
core_device
.. toctree::
:caption: Network devices
:maxdepth: 2
developing_a_ndsp
list_of_ndsps
default_network_ports
.. toctree::
:caption: References
:maxdepth: 2
main_frontend_tools
core_language_reference
core_drivers_reference
list_of_ndsps
developing_a_ndsp
mgmt_system_reference
utilities
default_network_ports
.. toctree::
:caption: Addenda
:maxdepth: 2
faq


@ -3,28 +3,32 @@ Installing ARTIQ
ARTIQ can be installed using the Nix (on Linux) or MSYS2 (on Windows) package managers. Using Conda is also possible on both platforms but not recommended.
.. _installing-nix-users:
Installing via Nix (Linux)
--------------------------
First, install the Nix package manager. Some distributions provide a package for the Nix package manager, otherwise, it can be installed via the script on the `Nix website <http://nixos.org/nix/>`_. Make sure you get Nix version 2.4 or higher.
First install the Nix package manager. Some distributions provide a package for it; otherwise, it can be installed via the script on the `Nix website <http://nixos.org/nix/>`_. Make sure you get Nix version 2.4 or higher. Prefer a single-user installation for simplicity.
Once Nix is installed, enable Flakes: ::
Once Nix is installed, enable flakes, for example by running: ::
$ mkdir -p ~/.config/nix
$ echo "experimental-features = nix-command flakes" > ~/.config/nix/nix.conf
$ echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
The easiest way to obtain ARTIQ is then to install it into the user environment with ``$ nix profile install git+https://github.com/m-labs/artiq.git``. Answer "Yes" to the questions about setting Nix configuration options. This provides a minimal installation of ARTIQ where the usual commands (``artiq_master``, ``artiq_dashboard``, ``artiq_run``, etc.) are available.
See also the different options for enabling flakes on `the NixOS wiki <https://nixos.wiki/wiki/flakes>`_.
This installation is however quite limited, as Nix creates a dedicated Python environment for the ARTIQ commands alone. This means that other useful Python packages that you may want (pandas, matplotlib, ...) are not available to them.
The easiest way to obtain ARTIQ is to install it into the user environment with ::
$ nix profile install git+https://github.com/m-labs/artiq.git?ref=release-8
Answer "Yes" to the questions about setting Nix configuration options (for more details see 'Troubleshooting' below.) You should now have a minimal installation of ARTIQ, where the usual front-end commands (:mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, :mod:`~artiq.frontend.artiq_dashboard`, etc.) are all available to you.
This installation is however quite limited, as Nix creates a dedicated Python environment for the ARTIQ commands alone. This means that other useful Python packages, which ARTIQ is not dependent on but which you may want to use in your experiments (pandas, matplotlib...), are not available.
Installing multiple packages and making them visible to the ARTIQ commands requires using the Nix language. Create an empty directory with a file ``flake.nix`` with the following contents:
::
{
inputs.extrapkg.url = "git+https://git.m-labs.hk/M-Labs/artiq-extrapkg.git";
inputs.extrapkg.url = "git+https://git.m-labs.hk/M-Labs/artiq-extrapkg.git?ref=release-8";
outputs = { self, extrapkg }:
let
pkgs = extrapkg.pkgs;
@ -47,8 +51,6 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
# The NixOS package collection contains many other packages that you may find
# interesting. Here are some examples:
#ps.pandas
#ps.numpy
#ps.scipy
#ps.numba
#ps.matplotlib
# or if you need Qt (will recompile):
@ -56,11 +58,13 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
#ps.bokeh
#ps.cirq
#ps.qiskit
# Note that NixOS also provides packages ps.numpy and ps.scipy, but it is
# not necessary to explicitly add these, since they are dependencies of
# ARTIQ and available with an ARTIQ install anyway.
]))
#artiq.korad_ka3005p
#artiq.novatech409b
# List desired non-Python packages here
#artiq.openocd-bscanspi # needed if and only if flashing boards
# Other potentially interesting non-Python packages from the NixOS package collection:
#pkgs.gtkwave
#pkgs.spyder
@ -78,14 +82,24 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
};
}
You can now spawn a shell containing these packages by running ``$ nix shell`` in the directory containing the ``flake.nix``. This should make both the ARTIQ commands and all the additional packages available to you. You can exit the shell with Control+D or with the command ``exit``. A first execution of ``$ nix shell`` may take some time, but for any future repetitions Nix will use cached packages and startup should be much faster.
Then spawn a shell containing the packages with ``$ nix shell``. The ARTIQ commands with all the additional packages should now be available.
You might be interested in creating multiple directories containing different ``flake.nix`` files which represent different sets of packages for different purposes. If you are familiar with Conda, using Nix in this way is similar to having multiple Conda environments.
You can exit the shell by typing Control-D. The next time ``$ nix shell`` is invoked, Nix uses the cached packages so the shell startup is fast.
To find more packages you can browse the `Nix package search <https://search.nixos.org/packages>`_ website. If your favorite package is not available with Nix, contact M-Labs using the helpdesk@ email.
You can create directories containing each a ``flake.nix`` that correspond to different sets of packages. If you are familiar with Conda, using Nix in this way is similar to having multiple Conda environments.
.. note::
If you find you prefer using flakes to your original ``nix profile`` installation, you can remove it from your system by running: ::
If your favorite package is not available with Nix, contact us using the helpdesk@ email.
$ nix profile list
finding the entry with its ``Original flake URL`` listed as the GitHub ARTIQ repository, noting its index number (in a fresh Nix system it will normally be the only entry, at index 0), and running: ::
$ nix profile remove [index]
While using flakes, ARTIQ is not 'installed' as such in any permanent way. However, Nix will preserve independent cached packages in ``/nix/store`` for each flake, which over time or with many different flakes and versions can take up large amounts of storage space. To clear this cache, run ``$ nix-collect-garbage``.
.. _installing-troubleshooting:
Troubleshooting
^^^^^^^^^^^^^^^
@ -93,9 +107,7 @@ Troubleshooting
"Do you want to allow configuration setting... (y/N)?"
""""""""""""""""""""""""""""""""""""""""""""""""""""""
When installing and initializing ARTIQ using commands like ``nix shell``, ``nix develop``, or ``nix profile install``, you may encounter prompts to modify certain configuration settings. These settings correspond to the ``nixConfig`` flag within the ARTIQ flake:
::
When installing and initializing ARTIQ using commands like ``nix shell``, ``nix develop``, or ``nix profile install``, you may encounter prompts to modify certain configuration settings. These settings correspond to the ``nixConfig`` flag within the ARTIQ flake: ::
do you want to allow configuration setting 'extra-sandbox-paths' to be set to '/opt' (y/N)?
do you want to allow configuration setting 'extra-substituters' to be set to 'https://nixbld.m-labs.hk' (y/N)?
@ -103,15 +115,11 @@ When installing and initializing ARTIQ using commands like ``nix shell``, ``nix
We recommend accepting these settings by responding with ``y``. If asked to permanently mark these values as trusted, choose ``y`` again. This action saves the configuration to ``~/.local/share/nix/trusted-settings.json``, allowing future prompts to be bypassed.
Alternatively, you can also use the option `accept-flake-config <https://nixos.org/manual/nix/stable/command-ref/conf-file#conf-accept-flake-config>`_ by appending ``--accept-flake-config`` to your nix command:
::
Alternatively, you can also use the option `accept-flake-config <https://nix.dev/manual/nix/stable/command-ref/conf-file#conf-accept-flake-config>`_ by appending ``--accept-flake-config`` to your nix command, for example: ::
nix develop --accept-flake-config
Or add the option to ``~/.config/nix/nix.conf`` to make the setting more permanent:
::
Or add the option to ``~/.config/nix/nix.conf`` to make the setting more permanent: ::
extra-experimental-features = flakes
accept-flake-config = true
@ -122,26 +130,18 @@ Or add the option to ``~/.config/nix/nix.conf`` to make the setting more permane
"Ignoring untrusted substituter, you are not a trusted user"
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
If the following message displays when running ``nix shell`` or ``nix develop``
::
If the following message displays when running ``nix shell`` or ``nix develop`` ::
warning: ignoring untrusted substituter 'https://nixbld.m-labs.hk', you are not a trusted user.
Run `man nix.conf` for more information on the `substituters` configuration option.
and Nix proceeds to build some packages from source, this means that you are using `multi-user mode <https://nixos.org/manual/nix/stable/installation/multi-user>`_ in Nix, for example when Nix is installed via ``pacman`` in Arch Linux.
By default, users accessing Nix in multi-user mode are "unprivileged" and cannot use untrusted substituters. To change this, edit ``/etc/nix/nix.conf`` and add the following line (or append to the key if the key already exists):
::
and Nix proceeds to build some packages from source, this means that you are using `multi-user mode <https://nix.dev/manual/nix/stable/installation/multi-user>`_ in Nix, which may be the case for example when Nix is installed via ``pacman`` in Arch Linux. By default, users accessing Nix in multi-user mode are "unprivileged" and cannot use untrusted substituters. To change this, edit ``/etc/nix/nix.conf`` and add the following line (or append to the key if the key already exists): ::
trusted-substituters = https://nixbld.m-labs.hk
This will add the substituter as a trusted substituter for all users using Nix.
Alternatively, add the following line:
::
Alternatively, add the following line: ::
trusted-users = <username> # Replace <username> with the user invoking `nix`
@ -154,49 +154,47 @@ This will set your user as a trusted user, allowing the use of any untrusted sub
Installing via MSYS2 (Windows)
------------------------------
We recommend using our `offline installer <https://nixbld.m-labs.hk/job/artiq/extra-beta/msys2-offline-installer/latest>`_, which contains all the necessary packages and no additional configuration is needed.
After installation, launch ``MSYS2 with ARTIQ`` from the Windows Start menu.
We recommend using our `offline installer <https://nixbld.m-labs.hk/job/artiq/extra/msys2-offline-installer/latest>`_, which contains all the necessary packages and requires no additional configuration. After installation, simply launch ``MSYS2 with ARTIQ`` from the Windows Start menu.
Alternatively, you may install `MSYS2 <https://msys2.org>`_, then edit ``C:\MINGW64\etc\pacman.conf`` and add at the end: ::
[artiq]
SigLevel = Optional TrustAll
Server = https://msys2.m-labs.hk/artiq-beta
Server = https://msys2.m-labs.hk/artiq
Launch ``MSYS2 CLANG64`` from the Windows Start menu to open the MSYS2 shell, and enter the following commands: ::
pacman -Syy
pacman -S mingw-w64-clang-x86_64-artiq
$ pacman -Syy
$ pacman -S mingw-w64-clang-x86_64-artiq
If your favorite package is not available with MSYS2, contact us using the helpdesk@ email.
As above in the Nix section, you may find yourself wanting to add other useful packages (pandas, matplotlib, etc.). MSYS2 uses a port of ArchLinux's ``pacman`` to manage (add, remove, and update) packages. To add a specific package, you can simply use a command of the form: ::
Installing via Conda (Windows, Linux) [DEPRECATED]
--------------------------------------------------
$ pacman -S <package name>
For more see the `MSYS2 documentation <https://www.msys2.org/docs/package-management/>`_ on package management. If your favorite package is not available with MSYS2, contact M-Labs using the helpdesk@ email.
Installing via Conda [DEPRECATED]
---------------------------------
.. warning::
Installing ARTIQ via Conda is not recommended. Instead, Linux users should install it via Nix and Windows users should install it via MSYS2. Conda support may be removed in future ARTIQ releases and M-Labs can only provide very limited technical support for Conda.
First, install `Anaconda <https://www.anaconda.com/distribution/>`_ or the more minimalistic `Miniconda <https://conda.io/en/latest/miniconda.html>`_.
After installing either Anaconda or Miniconda, open a new terminal (also known as command line, console, or shell and denoted here as lines starting with ``$``) and verify the following command works::
First, install `Anaconda <https://www.anaconda.com/download>`_ or the more minimalistic `Miniconda <https://conda.io/en/latest/miniconda.html>`_. After installing either Anaconda or Miniconda, open a new terminal and verify that the following command works::
$ conda
Executing just ``conda`` should print the help of the ``conda`` command. If your shell does not find the ``conda`` command, make sure that the Conda binaries are in your ``$PATH``. If ``$ echo $PATH`` does not show the Conda directories, add them: execute ``$ export PATH=$HOME/miniconda3/bin:$PATH`` if you installed Conda into ``~/miniconda3``.
Executing just ``conda`` should print the help of the ``conda`` command. If your shell cannot find the ``conda`` command, make sure that the Conda binaries are in your ``$PATH``. If ``$ echo $PATH`` does not show the Conda directories, add them: execute e.g. ``$ export PATH=$HOME/miniconda3/bin:$PATH`` if you installed Conda into ``~/miniconda3``.
Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with this script. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages, and list them at the beginning of the script.
Set up the Conda channel and install ARTIQ into a new Conda environment: ::
$ conda config --prepend channels https://conda.m-labs.hk/artiq-beta
$ conda config --prepend channels https://conda.m-labs.hk/artiq
$ conda config --append channels conda-forge
$ conda create -n artiq artiq
.. note::
If you do not need to flash boards, the ``artiq`` package is sufficient. The packages named ``artiq-board-*`` contain only firmware for the FPGA board, and you should not install them unless you are reflashing an FPGA board. Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with Conda. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages.
.. note::
On Windows, if the last command that creates and installs the ARTIQ environment fails with an error similar to "seeking backwards is not allowed", try to re-run the command with admin rights.
On Windows, if the last command that creates and installs the ARTIQ environment fails with an error similar to "seeking backwards is not allowed", try re-running the command with admin rights.
.. note::
For commercial use you might need a license for Anaconda/Miniconda or for using the Anaconda package channel. `Miniforge <https://github.com/conda-forge/miniforge>`_ might be an alternative in a commercial environment as it does not include the Anaconda package channel by default. If you want to use Anaconda/Miniconda/Miniforge in a commercial environment, please check the license and the latest terms of service.
@ -207,227 +205,36 @@ After the installation, activate the newly created environment by name. ::
This activation has to be performed in every new shell you open to make the ARTIQ tools from that environment available.
.. _installing-upgrading:
Upgrading ARTIQ
---------------
.. note::
Some ARTIQ examples also require matplotlib and numba, and they must be installed manually for running those examples. They are available in Conda.
When you upgrade ARTIQ, as well as updating the software on your host machine, it may also be necessary to reflash the gateware and firmware of your core device to keep them compatible. New numbered release versions in particular incorporate breaking changes and are not generally compatible. See :doc:`flashing` for instructions.
Upgrading ARTIQ (with Nix)
--------------------------
Upgrading with Nix
^^^^^^^^^^^^^^^^^^
Run ``$ nix profile upgrade`` if you installed ARTIQ into your user profile. If you used a ``flake.nix`` shell environment, make a back-up copy of the ``flake.lock`` file to enable rollback, then run ``$ nix flake update`` and re-enter ``$ nix shell``.
Run ``$ nix profile upgrade`` if you installed ARTIQ into your user profile. If you used a ``flake.nix`` shell environment, make a back-up copy of the ``flake.lock`` file to enable rollback, then run ``$ nix flake update`` and re-enter the environment with ``$ nix shell``.
To rollback to the previous version, respectively use ``$ nix profile rollback`` or restore the backed-up version of the ``flake.lock`` file.
You may need to reflash the gateware and firmware of the core device to keep it synchronized with the software.
Upgrading with MSYS2
^^^^^^^^^^^^^^^^^^^^
Upgrading ARTIQ (with MSYS2)
----------------------------
Run ``pacman -Syu`` to update all MSYS2 packages, including ARTIQ. If you get a message telling you that the shell session must be restarted after a partial update, open the shell again after the partial update and repeat the command. See the `MSYS2 <https://www.msys2.org/docs/updating/>`__ and `Pacman <https://wiki.archlinux.org/title/Pacman>`_ manuals for more information, including how to update individual packages if required.
Run ``pacman -Syu`` to update all MSYS2 packages including ARTIQ. If you get a message telling you that the shell session must be restarted after a partial update, open the shell again after the partial update and repeat the command. See the MSYS2 and Pacman manual for information on how to update individual packages if required.
Upgrading with Conda
^^^^^^^^^^^^^^^^^^^^
Upgrading ARTIQ (with Conda)
----------------------------
When upgrading ARTIQ or when testing different versions it is recommended that new Conda environments are created instead of upgrading the packages in existing environments. As a rule, keep previous environments around unless you are certain that they are no longer needed and the new environment is working correctly.
When upgrading ARTIQ or when testing different versions it is recommended that new Conda environments are created instead of upgrading the packages in existing environments.
Keep previous environments around until you are certain that they are not needed anymore and a new environment is known to work correctly.
To install the latest version, simply select a different environment name and run the installation commands again.
To install the latest version, just select a different environment name and run the installation command again.
Switching between Conda environments using commands such as ``$ conda deactivate artiq-6`` and ``$ conda activate artiq-5`` is the recommended way to roll back to previous versions of ARTIQ.
You may need to reflash the gateware and firmware of the core device to keep it synchronized with the software.
Switching between Conda environments using commands such as ``$ conda deactivate artiq-7`` and ``$ conda activate artiq-8`` is the recommended way to roll back to previous versions of ARTIQ.
You can list the environments you have created using::
$ conda env list
Flashing gateware and firmware into the core device
---------------------------------------------------
.. note::
If you have purchased a pre-assembled system from M-Labs or QUARTIQ, the gateware and firmware are already flashed and you can skip those steps, unless you want to replace them with a different version of ARTIQ.
You need to write three binary images onto the FPGA board:
1. The FPGA gateware bitstream
2. The bootloader
3. The ARTIQ runtime or satellite manager
Installing OpenOCD
^^^^^^^^^^^^^^^^^^
.. note::
This version of OpenOCD is not applicable to Kasli-SoC.
OpenOCD can be used to write the binary images into the core device FPGA board's flash memory.
With Nix, add ``aqmain.openocd-bscanspi`` to the shell packages. Be careful not to add ``pkgs.openocd`` instead - this would install OpenOCD from the NixOS package collection, which does not support ARTIQ boards.
With MSYS2, ``openocd`` and ``bscan-spi-bitstreams`` are included with ``artiq`` by default.
With Conda, install ``openocd`` as follows::
$ conda install -c m-labs openocd
.. _configuring-openocd:
Configuring OpenOCD
^^^^^^^^^^^^^^^^^^^
.. note::
These instructions are not applicable to Kasli-SoC.
Some additional steps are necessary to ensure that OpenOCD can communicate with the FPGA board.
On Linux, first ensure that the current user belongs to the ``plugdev`` group (i.e. ``plugdev`` shown when you run ``$ groups``). If it does not, run ``$ sudo adduser $USER plugdev`` and re-login.
If you installed OpenOCD on Linux using Nix, use the ``which`` command to determine the path to OpenOCD, and then copy the udev rules: ::
$ which openocd
/nix/store/2bmsssvk3d0y5hra06pv54s2324m4srs-openocd-mlabs-0.10.0/bin/openocd
$ sudo cp /nix/store/2bmsssvk3d0y5hra06pv54s2324m4srs-openocd-mlabs-0.10.0/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
$ sudo udevadm trigger
NixOS users should of course configure OpenOCD through ``/etc/nixos/configuration.nix`` instead.
If you installed OpenOCD on Linux using Conda and are using the Conda environment ``artiq``, then execute the statements below. If you are using a different environment, you will have to replace ``artiq`` with the name of your environment::
$ sudo cp ~/.conda/envs/artiq/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
$ sudo udevadm trigger
On Windows, a third-party tool, `Zadig <http://zadig.akeo.ie/>`_, is necessary. Use it as follows:
1. Make sure the FPGA board's JTAG USB port is connected to your computer.
2. Activate Options → List All Devices.
3. Select the "Digilent Adept USB Device (Interface 0)" or "FTDI Quad-RS232 HS" (or similar)
device from the drop-down list.
4. Select WinUSB from the spinner list.
5. Click "Install Driver" or "Replace Driver".
You may need to repeat these steps every time you plug the FPGA board into a port where it has not been plugged into previously on the same system.
Obtaining the board binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware that corresponds to the currently installed version of ARTIQ using AFWS (ARTIQ firmware service). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email.
Run the command::
$ afws_client [username] build [afws_directory] [variant]
Replace ``[username]`` with the login name that was given to you with the subscription, ``[variant]`` with the name of your system variant, and ``[afws_directory]`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email.
Without a subscription, you may build the firmware yourself from the open source code. See the section :ref:`Developing ARTIQ <developing-artiq>`.
Writing the flash
^^^^^^^^^^^^^^^^^
Then, you can write the flash:
* For Kasli::
$ artiq_flash -d [afws_directory]
The JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you simply need to connect your computer to the micro-USB connector on the Kasli front panel.
* For Kasli-SoC::
$ artiq_coremgmt [-D 192.168.1.75] config write -f boot [afws_directory]/boot.bin
If the Kasli-SoC won't boot due to corrupted firmware and ``artiq_coremgmt`` cannot access it, extract the SD card and replace ``boot.bin`` manually.
* For the KC705 board::
$ artiq_flash -t kc705 -d [afws_directory]
The SW13 switches need to be set to 00001.
Setting up the core device IP networking
----------------------------------------
For Kasli, insert a SFP/RJ45 transceiver (normally included with purchases from M-Labs and QUARTIQ) into the SFP0 port and connect it to an Ethernet port in your network. If the port is 10Mbps or 100Mbps and not 1000Mbps, make sure that the SFP/RJ45 transceiver supports the lower rate. Many SFP/RJ45 transceivers only support the 1000Mbps rate. If you do not have a SFP/RJ45 transceiver that supports 10Mbps and 100Mbps rates, you may instead use a gigabit Ethernet switch in the middle to perform rate conversion.
You can also insert other types of SFP transceivers into Kasli if you wish to use it directly in e.g. an optical fiber Ethernet network.
If you purchased a Kasli device from M-Labs, it usually comes with the IP address ``192.168.1.75``. Once you can reach this IP, it can be changed with: ::
$ artiq_coremgmt -D 192.168.1.75 config write -s ip [new IP]
and then reboot the device (with ``artiq_flash start`` or a power cycle).
If the ``ip`` config field is not set, or is set to ``use_dhcp``, the device will attempt to obtain an IP address and default gateway using DHCP. If a static IP address is wanted, install OpenOCD as before, and flash the IP, default gateway (and, if necessary, MAC and IPv6) addresses directly: ::
$ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx/xx -s ipv4_default_route xx.xx.xx.xx -s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx -s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
$ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
For Kasli devices, flashing a MAC address is not necessary as they can obtain it from their EEPROM.
If you only want to access the core device from the same subnet, you may omit the default gateway and IPv4 prefix length: ::
$ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx
If DHCP has been used the address can be found in the console output, which can be viewed using: ::
$ python -m misoc.tools.flterm /dev/ttyUSB2
Check that you can ping the device. If ping fails, check that the Ethernet link LED is ON - on Kasli, it is the LED next to the SFP0 connector. As a next step, look at the messages emitted on the UART during boot. Use a program such as flterm or PuTTY to connect to the device's serial port at 115200bps 8-N-1 and reboot the device. On Kasli, the serial port is on FTDI channel 2 with v1.1 hardware (with channel 0 being JTAG) and on FTDI channel 1 with v1.0 hardware. Note that on Windows you might need to install the `FTDI drivers <https://ftdichip.com/drivers/>`_ first.
If you want to use IPv6, the device also has a link-local address that corresponds to its EUI-64, and an additional arbitrary IPv6 address can be defined by using the ``ip6`` configuration key. All IPv4 and IPv6 addresses can be used at the same time.
Miscellaneous configuration of the core device
----------------------------------------------
These steps are optional. The core device usually needs to be restarted for changes to take effect.
* Load the idle kernel
The idle kernel is the kernel (some piece of code running on the core device) which the core device runs whenever it is not connected to a PC via Ethernet.
This kernel is therefore stored in the :ref:`core device configuration flash storage <core-device-flash-storage>`.
To flash the idle kernel, first compile the idle experiment. The idle experiment's ``run()`` method must be a kernel: it must be decorated with the ``@kernel`` decorator (see :ref:`next topic <connecting-to-the-core-device>` for more information about kernels). Since the core device is not connected to the PC, RPCs (calling Python code running on the PC from the kernel) are forbidden in the idle experiment. Then write it into the core device configuration flash storage: ::
$ artiq_compile idle.py
$ artiq_coremgmt config write -f idle_kernel idle.elf
.. note:: You can find more information about how to use the ``artiq_coremgmt`` utility on the :ref:`Utilities <core-device-management-tool>` page.
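As a concrete illustration, a minimal idle experiment might look like the following sketch, which assumes a hypothetical ``led0`` TTL output defined in your device database: ::

    from artiq.experiment import *

    class IdleKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")      # hypothetical TTL output from device_db.py

        @kernel
        def run(self):
            self.core.reset()
            while True:
                self.led0.pulse(250*ms)      # no RPCs: the PC is not connected
                delay(250*ms)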
* Load the startup kernel
The startup kernel is executed once when the core device powers up. It should initialize DDSes, set up TTL directions, etc. Proceed as with the idle kernel, but using the ``startup_kernel`` key in the ``artiq_coremgmt`` command.
For DRTIO systems, the startup kernel should wait until the desired destinations (including local RTIO) are up, using :meth:`artiq.coredevice.Core.get_rtio_destination_status`.
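For instance, a startup kernel on a DRTIO system might poll the destination status before initializing outputs (the ``ttl0`` device and the destination number below are hypothetical): ::

    from artiq.experiment import *

    class StartupKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")      # hypothetical TTL channel to initialize

        @kernel
        def run(self):
            self.core.reset()
            # wait until destination 0 (local RTIO) reports ready
            while not self.core.get_rtio_destination_status(0):
                pass
            self.ttl0.off()                  # drive a known initial state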
* Load the DRTIO routing table
If you are using DRTIO and the default routing table (for a star topology) is not suitable to your needs, prepare and load a different routing table. See :ref:`Using DRTIO <using-drtio>`.
* Select the RTIO clock source (KC705 and Kasli)
The KC705 may use either an external clock signal, or its internal clock with external frequency or internal crystal reference. The clock is selected at power-up. Setting the RTIO clock source to "ext0_bypass" would bypass the Si5324 synthesiser, requiring that an input clock be present. To select the source, use one of these commands: ::
$ artiq_coremgmt config write -s rtio_clock int_125 # internal 125MHz clock (default)
$ artiq_coremgmt config write -s rtio_clock ext0_bypass # external clock (bypass)
Other options include:
- ``ext0_synth0_10to125`` - external 10MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock,
- ``ext0_synth0_80to125`` - external 80MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock,
- ``ext0_synth0_100to125`` - external 100MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock,
- ``ext0_synth0_125to125`` - external 125MHz reference clock used by Si5324 to synthesize a 125MHz RTIO clock,
- ``int_100`` - internal crystal reference is used by Si5324 to synthesize a 100MHz RTIO clock,
- ``int_150`` - internal crystal reference is used by Si5324 to synthesize a 150MHz RTIO clock.
- ``ext0_bypass_125`` and ``ext0_bypass_100`` - explicit aliases for ``ext0_bypass``.
Availability of these options depends on the board and its configuration - a specific setting may or may not be supported.
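For example, to have the Si5324 synthesize the RTIO clock from an external 10MHz reference: ::

    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125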
* Set up resolution of RTIO channels to their names
This feature allows the channels' respective names to be printed alongside their numbers in RTIO error messages. To enable it, run the ``artiq_rtiomap`` tool and write its result into the device config at the ``device_map`` key: ::
$ artiq_rtiomap dev_map.bin
$ artiq_coremgmt config write -f device_map dev_map.bin
.. note:: You can find more information about how to use the ``artiq_rtiomap`` utility on the :ref:`Utilities <rtiomap-tool>` page.

List of available NDSPs
=======================
The following network device support packages are available for ARTIQ. If you would like to add yours to this list, just send us an email or a pull request.
.. csv-table::
:header: Equipment, Nix package, MSYS2 package, Documentation, URL
PDQ2, Not available, Not available, `HTML <https://pdq.readthedocs.io>`_, `GitHub <https://github.com/m-labs/pdq>`__
Lab Brick Digital Attenuator, ``lda``, ``lda``, Not available, `GitHub <https://github.com/m-labs/lda>`__
Novatech 409B, ``novatech409b``, ``novatech409b``, Not available, `GitHub <https://github.com/m-labs/novatech409b>`__
Thorlabs T-Cubes, ``thorlabs_tcube``, ``thorlabs_tcube``, Not available, `GitHub <https://github.com/m-labs/thorlabs_tcube>`__
Korad KA3005P, ``korad_ka3005p``, ``korad_ka3005p``, Not available, `GitHub <https://github.com/m-labs/korad_ka3005p>`__
Newfocus 8742, ``newfocus8742``, ``newfocus8742``, Not available, `GitHub <https://github.com/quartiq/newfocus8742>`__
Princeton Instruments PICam, Not available, Not available, Not available, `GitHub <https://github.com/quartiq/picam>`__
Anel HUT2 power distribution, ``hut2``, Not available, Not available, `GitHub <https://github.com/quartiq/hut2>`__
TOPTICA lasers, ``toptica-lasersdk-artiq``, Not available, Not available, `GitHub <https://github.com/quartiq/lasersdk-artiq>`__
HighFinesse wavemeters, ``highfinesse-net``, Not available, Not available, `GitHub <https://github.com/quartiq/highfinesse-net>`__
InfluxDB database, Not available, Not available, `HTML <https://gitlab.com/charlesbaynham/artiq_influx_generic>`__, `GitHub <https://gitlab.com/charlesbaynham/artiq_influx_generic>`__
MSYS2 packages all start with the ``mingw-w64-clang-x86_64-`` prefix.

Main front-end tools
====================
These are the top-level commands used to run and manage ARTIQ experiments. Not all of the ARTIQ front-end is described here (many additional useful commands are presented in this manual in :doc:`utilities`) but these together comprise the main points of access for using ARTIQ as a system.
.. Note that ARTIQ frontend has no docstrings and the ..automodule directives display nothing; they are there to make :mod: references function correctly, since sphinx-argparse does not support links to ..argparse directives in the same way.
:mod:`artiq.frontend.artiq_run`
-------------------------------
.. automodule:: artiq.frontend.artiq_run
.. argparse::
:ref: artiq.frontend.artiq_run.get_argparser
:prog: artiq_run
:nodefault:
.. _frontend-artiq-master:
:mod:`artiq.frontend.artiq_master`
----------------------------------
.. automodule:: artiq.frontend.artiq_master
.. argparse::
:ref: artiq.frontend.artiq_master.get_argparser
:prog: artiq_master
:nodefault:
.. _frontend-artiq-client:
:mod:`artiq.frontend.artiq_client`
----------------------------------
.. automodule:: artiq.frontend.artiq_client
.. argparse::
:ref: artiq.frontend.artiq_client.get_argparser
:prog: artiq_client
:nodefault:
.. _frontend-artiq-dashboard:
:mod:`artiq.frontend.artiq_dashboard`
-------------------------------------
.. automodule:: artiq.frontend.artiq_dashboard
.. argparse::
:ref: artiq.frontend.artiq_dashboard.get_argparser
:prog: artiq_dashboard
:nodefault:
.. _frontend-artiq-browser:
:mod:`artiq.frontend.artiq_browser`
-----------------------------------
.. automodule:: artiq.frontend.artiq_browser
.. argparse::
:ref: artiq.frontend.artiq_browser.get_argparser
:prog: artiq_browser
:nodefault:
:mod:`artiq.frontend.artiq_session`
-----------------------------------
.. automodule:: artiq.frontend.artiq_session
ARTIQ session manager. Automatically runs the master, dashboard and local controller manager on the current machine. The latter requires the ``artiq-comtools`` package to be installed.
When supplying arguments to individual front-end tools, use ``=`` to avoid ambiguity in argument parsing, e.g.: ::
artiq_session -m=-g -m=--device-db=alternate_device_db.py -c=-v
and so on.
.. argparse::
:ref: artiq.frontend.artiq_session.get_argparser
:prog: artiq_session
:nodescription:
:nodefault:
:mod:`artiq_comtools.artiq_ctlmgr`
----------------------------------
.. automodule:: artiq_comtools.artiq_ctlmgr
ARTIQ controller manager. Supplied in the separate package ``artiq-comtools``, which is included with a standard ARTIQ installation but can also be `installed standalone <https://github.com/m-labs/artiq-comtools>`_, with the intention of making it easier to run controllers and controller managers on machines where a full ARTIQ installation may not be necessary or convenient.
.. argparse::
:ref: artiq_comtools.artiq_ctlmgr.get_argparser
:prog: artiq_ctlmgr
:nodescription:
:nodefault:
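As a usage sketch, a controller manager on a secondary machine would typically be pointed at the master elsewhere on the network (the address below is a placeholder): ::

    $ artiq_ctlmgr -s 192.168.1.50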

Management system
=================
.. note::
The ARTIQ management system as described here is optional. Experiments can be run one-by-one using :mod:`~artiq.frontend.artiq_run`, and controllers can be run without a controller manager. For their very first steps with ARTIQ or in simple or particular cases, users do not need to deploy the management system. For an introduction to the system and how to use it, see :doc:`getting_started_mgmt`.
Components
----------
Master
^^^^^^
The :ref:`ARTIQ master <frontend-artiq-master>` is responsible for managing the dataset and device databases, the experiment repository, scheduling and running experiments, archiving results, and distributing real-time results. It is a headless component, and one or several clients (command-line or GUI) use the network to interact with it.
The master expects to be given a directory on startup, the experiment repository, containing these experiments which are automatically tracked and communicated to clients. By default, it simply looks for a directory called ``repository``. The ``-r`` flag can be used to substitute an alternate location.
It also expects access to a ``device_db.py``, with a corresponding flag ``--device-db`` to substitute a different file name. Additionally, it will reference or create certain files in the directory it is run in, among them ``dataset_db.mdb``, the LMDB database containing persistent datasets, ``last_rid.pyon``, which simply contains the last used RID, and the ``results`` directory.
.. note::
Because the other parts of the management system all seem to be able to access the information stored in these files, confusion can sometimes result about where it is really stored and how it is distributed. Device databases, datasets, results, and experiments are all solely kept and administered by the master, which communicates information to dashboards, browsers, and clients over the network whenever necessary.
Notably, clients and dashboards do not send in experiments to the master; they request them from the array of experiments the master knows about, primarily those in ``repository``, but also in the master's local file system, if 'Open file outside repository' is selected. This is true even if ``repository`` is configured as a Git repository and cloned on other machines.
The ARTIQ master should not be confused with the 'master' device in a DRTIO system, which is only a designation for the particular core device acting as central node in a distributed configuration of ARTIQ. The two concepts are otherwise unrelated.
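For instance, a master serving a repository and device database kept in a hypothetical ``~/artiq-work`` directory could be started with: ::

    $ cd ~/artiq-work
    $ artiq_master -r repository --device-db device_db.py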
Clients
^^^^^^^
The :ref:`command-line client <frontend-artiq-client>` connects to the master and permits modification and monitoring of the databases, reading the experiment schedule and log, and submitting experiments.
The :ref:`dashboard <frontend-artiq-dashboard>` connects to the master and is the main method of interacting with it. The main roles of the dashboard are scheduling of experiments, setting of their arguments, examining the schedule, displaying real-time results, and debugging TTL and DDS channels in real time.
The dashboard remembers and restores GUI state (window/dock positions, last values entered by the user, etc.) in between instances. This information is stored in a file called ``artiq_dashboard_{server}_{port}.pyon`` in the configuration directory (generally ``~/.config/artiq`` on Unix; on Windows, the same location as the data directory), organized into subfolders by ARTIQ version.
Browser
^^^^^^^
The :ref:`browser <frontend-artiq-browser>` is used to read ARTIQ ``results`` HDF5 files and run experiment :meth:`~artiq.language.environment.Experiment.analyze` functions, in particular to retrieve previous result databases, process them, and display them in ARTIQ applets. The browser also remembers and restores its GUI state; this is stored in a file called simply ``artiq_browser``, kept in the same configuration directory as the dashboard.
Controller manager
^^^^^^^^^^^^^^^^^^
The controller manager is provided in the ``artiq-comtools`` package (which is also made available separately from mainline ARTIQ, to allow independent use with minimal dependencies) and started with the :mod:`~artiq_comtools.artiq_ctlmgr` command. It is responsible for running and stopping controllers on a machine. One controller manager must be run by each network node that runs controllers.
A controller manager connects to the master and accesses the device database through it to determine what controllers need to be run. The local network address of the connection is used to filter for only those controllers allocated to the current node. Hostname resolution is supported. Changes to the device database are tracked and controllers will be stopped and started accordingly.
.. warning:: With some network setups, the current machine's hostname without the domain name resolves to a localhost address (127.0.0.1 or even 127.0.1.1). If you wish to use controllers across a network, make sure that the hostname you provide resolves to an IP address visible on the network (e.g. try providing the full hostname including the domain name).
.. _mgmt-git-integration:
Git integration
---------------
The master may use a Git repository to store experiment source code. Using Git has many advantages. For example, each result file (HDF5) contains the commit ID corresponding to the exact source code it was produced by, which helps reproducibility. Although the master also supports non-bare repositories, it is recommended to use a bare repository (e.g. ``git init --bare``) to easily support push transactions from clients.
You will want Git to notify the master every time the repository is pushed to (i.e. updated), so that the master knows to rescan the repository for new or changed experiments. This is most easily done with the ``post-receive`` hook, as described in :ref:`master-setting-up-git`.
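A typical setup, roughly following the steps detailed in :ref:`master-setting-up-git`, creates a bare repository and a ``post-receive`` hook that triggers a rescan: ::

    $ mkdir experiments
    $ cd experiments
    $ git init --bare
    $ printf '#!/bin/sh\nartiq_client scan-repository\n' > hooks/post-receive
    $ chmod 755 hooks/post-receive

The master can then be started with Git support enabled, e.g. ``artiq_master -g -r /path/to/experiments``.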
.. note::
If you plan to run the ARTIQ system entirely on a single machine, you may also consider using a non-bare repository and the ``post-commit`` hook to trigger repository scans every time you commit changes (locally). In this case, note that the ARTIQ master never uses the repository's working directory, but only what is committed. More precisely, when scanning the repository, it fetches the last (atomically) completed commit at that time of repository scan and checks it out in a temporary folder. This commit ID is used by default when subsequently submitting experiments. There is one temporary folder by commit ID currently referenced in the system, so concurrently running experiments from different repository revisions is fully supported by the master.
By default, the dashboard runs experiments from the repository, whereas the command-line client (``artiq_client submit``) runs experiments from the raw filesystem (which is useful for iterating rapidly without creating many disorganized commits). In order to run from the raw filesystem when using the dashboard, right-click in the Explorer window and select the option 'Open file outside repository'. In order to run from the repository when using the command-line client, simply pass the ``-R`` flag.
.. _experiment-scheduling:
Experiment scheduling
---------------------
Basics
^^^^^^
To make more efficient use of hardware resources, experiments are generally split into three phases and pipelined, such that potentially compute-intensive pre-computation or analysis phases may be executed in parallel with the bodies of other experiments, which access hardware.
.. seealso::
These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`, which additionally provides access to the methods of :class:`artiq.language.environment.HasEnvironment`.
There are three stages of a standard experiment users may write code in:
1. The **preparation** stage, which pre-fetches and pre-computes any data that is necessary to run the experiment. Users may implement this stage by overloading the :meth:`~artiq.language.environment.Experiment.prepare` method. It is not permitted to access hardware in this stage, as doing so may conflict with other experiments using the same devices.
2. The **run** stage, which corresponds to the body of the experiment and generally accesses hardware. Users must implement this stage and overload the :meth:`~artiq.language.environment.Experiment.run` method.
3. The **analysis** stage, where raw results collected in the running stage are post-processed and may lead to updates of the parameter database. This stage may be implemented by overloading the :meth:`~artiq.language.environment.Experiment.analyze` method.
Only the :meth:`~artiq.language.environment.Experiment.run` method implementation is mandatory; if the experiment does not fit into the pipelined scheduling model, it can leave one or both of the other methods empty (which is the default).
Consecutive experiments are then executed in a pipelined manner by the ARTIQ master's scheduler: first experiment A runs its preparation stage, then experiment A executes its running stage while experiment B executes its preparation stage, and so on.
.. note::
The next experiment (B) may start its :meth:`~artiq.language.environment.Experiment.run` before all events placed into (core device) RTIO buffers by the previous experiment (A) have been executed. These events may then execute while experiment B's :meth:`~artiq.language.environment.Experiment.run` is already in progress. Using :meth:`~artiq.coredevice.core.Core.reset` in experiment B will clear the RTIO buffers, discarding pending events, including those left over from A.
Interactions between events of different experiments can be avoided by preventing the :meth:`~artiq.language.environment.Experiment.run` method of experiment A from returning until all events have been executed. This is discussed in the section on RTIO :ref:`rtio-handover-synchronization`.
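As an illustrative sketch (the pre-computed values and timing are arbitrary), an experiment implementing all three stages might look like: ::

    from artiq.experiment import *

    class ThreeStageExample(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        def prepare(self):
            # preparation stage: pre-compute data, no hardware access allowed here
            self.delays = [i*us for i in range(1, 11)]

        @kernel
        def run(self):
            # run stage: the only mandatory stage, typically accesses hardware
            self.core.reset()
            for d in self.delays:
                delay(d)

        def analyze(self):
            # analysis stage: post-process results gathered during run()
            pass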
Priorities and timed runs
^^^^^^^^^^^^^^^^^^^^^^^^^
When determining what experiment should begin executing next (i.e. enter the preparation stage), the scheduler looks at the following factors, in decreasing order of precedence:
1. Experiments may be scheduled with a due date. This is considered the *earliest possible* time of their execution (rather than a deadline, or latest possible -- ARTIQ makes no guarantees about experiments being started or completed before any specified time). If a due date is set and it has not yet been reached, the experiment is not eligible for preparation.
2. The integer priority value specified by the user.
3. The due date itself. The earliest (reached) due date will be scheduled first.
4. The run identifier (RID), an integer that is incremented at each experiment submission. This ensures that, all else being equal, experiments are scheduled in the same order as they are submitted.
Multiple pipelines
^^^^^^^^^^^^^^^^^^
Experiments must be placed into a pipeline at submission time, set by the "Pipeline" field. The master supports multiple simultaneous pipelines, which will operate in parallel. Pipelines are identified by their names, and are automatically created (when an experiment is scheduled with a pipeline name that does not yet exist) and destroyed (when they run empty). By default, all experiments are submitted into the same pipeline, ``main``.
When using multiple pipelines it is the responsibility of the user to ensure that experiments scheduled in parallel will never conflict with those of another pipeline over resources (e.g. attempt to use the same devices simultaneously).
Pauses
^^^^^^
In the run stage, an experiment may yield to the scheduler by calling the :meth:`pause` method of the scheduler.
If there are other experiments with higher priority (e.g. a high-priority experiment has been newly submitted, or reached its due date and become eligible for execution), the higher-priority experiments are executed first, and then :meth:`pause` returns. If there are no such experiments, :meth:`pause` returns immediately. To check whether :meth:`pause` would in fact *not* return immediately, use :meth:`artiq.master.scheduler.Scheduler.check_pause`.
The experiment must place the hardware in a safe state and disconnect from the core device (typically, by calling ``self.core.comm.close()`` from the kernel, which is equivalent to :meth:`artiq.coredevice.core.Core.close`) before calling :meth:`pause`.
Accessing the :meth:`pause` and :meth:`~artiq.master.scheduler.Scheduler.check_pause` methods is done through a virtual device called ``scheduler`` that is accessible to all experiments. The scheduler virtual device is requested like regular devices using :meth:`~artiq.language.environment.HasEnvironment.get_device` (``self.get_device()``) or :meth:`~artiq.language.environment.HasEnvironment.setattr_device` (``self.setattr_device()``).
:meth:`~artiq.master.scheduler.Scheduler.check_pause` can be called (via RPC) from a kernel, but :meth:`pause` must not be.
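The following sketch shows the usual pattern (``measure_point()`` stands in for real hardware operations): check for pending higher-priority work from within the kernel, return to the host, close the connection, and only then pause: ::

    from artiq.experiment import *

    class PausableExperiment(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("scheduler")

        def run(self):
            while True:
                self.acquire()
                self.core.comm.close()    # place hardware in a safe state and disconnect
                self.scheduler.pause()    # yields until this experiment may resume

        @kernel
        def acquire(self):
            self.core.reset()
            while not self.scheduler.check_pause():   # RPC from the kernel to the master
                self.measure_point()

        @kernel
        def measure_point(self):
            delay(1*ms)    # placeholder for real RTIO operations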

Management system interface
===========================
ARTIQ makes certain provisions to allow interactions between different components when using the :doc:`management system <management_system>`. An experiment may make requests of the master or clients using virtual devices to represent the necessary line of communication; applets may interact with databases, the dashboard, and directly with the user (through argument widgets). This page collects the references for these features.
In experiments
--------------
``scheduler`` virtual device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The scheduler is exposed to the experiments via a virtual device called ``scheduler``. It can be requested like any other device, and the methods below, as well as :meth:`pause`, can be called on the returned object.
The scheduler virtual device also contains the attributes ``rid``, ``pipeline_name``, ``priority`` and ``expid``, which contain the corresponding information about the current run.
.. autoclass:: artiq.master.scheduler.Scheduler
:members:
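For instance, an experiment might inspect its own scheduling information and request graceful termination of another run (the RID below is a placeholder): ::

    # inside an experiment that called self.setattr_device("scheduler")
    print(self.scheduler.rid, self.scheduler.pipeline_name, self.scheduler.priority)
    self.scheduler.request_termination(1234)   # hypothetical RID of another experiment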
``ccb`` virtual device
^^^^^^^^^^^^^^^^^^^^^^
Client control broadcasts (CCBs) are requests made by experiments for clients to perform some action. Experiments do so by requesting the ``ccb`` virtual device and calling its ``issue`` method. The first argument of the issue method is the name of the broadcast, and any further positional and keyword arguments are passed to the broadcast.
CCBs are especially used by experiments to configure applets in the dashboard, for example for plotting purposes.
.. autoclass:: artiq.dashboard.applets_ccb.AppletsCCBDock
:members:
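For example, experiments commonly use the ``create_applet`` broadcast to have the dashboard open a plotting applet (the applet and dataset names below are arbitrary examples): ::

    # inside an experiment that called self.setattr_device("ccb")
    self.ccb.issue("create_applet", "histogram",
                   "${artiq_applet}plot_hist counts_dataset")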
In applets
----------
.. _applet-references:
Applet request interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^
Applet request interfaces allow applets to perform actions on the master database and set arguments in the dashboard. Applets may inherit from ``artiq.applets.simple.SimpleApplet`` and call the methods defined below through the ``req`` attribute.
Embedded applets should use ``AppletRequestIPC``, while standalone applets use ``AppletRequestRPC``. ``SimpleApplet`` automatically chooses the correct interface on initialization.
.. autoclass:: artiq.applets.simple._AppletRequestInterface
:members:
Applet entry area
^^^^^^^^^^^^^^^^^
Argument widgets can be used in applets through the :class:`~artiq.gui.applets.EntryArea` class. Below is a simple example code snippet: ::
entry_area = EntryArea()
# Create a new widget
entry_area.setattr_argument("bl", BooleanValue(True))
# Get the value of the widget (output: True)
print(entry_area.bl)
# Set the value
entry_area.set_value("bl", False)
# False
print(entry_area.bl)
The :class:`~artiq.gui.applets.EntryArea` object can then be added to a layout and integrated with the applet GUI. Multiple :class:`~artiq.gui.applets.EntryArea` objects can be used in a single applet.
.. class:: artiq.gui.applets.EntryArea
.. method:: setattr_argument(name, proc, group=None, tooltip=None)
Sets an argument as attribute. The names of the argument and of the
attribute are the same.
:param name: Argument name
:param proc: Argument processor, for example :class:`~artiq.language.environment.NumberValue`
:param group: Used to group together arguments in the GUI under a common category
:param tooltip: Tooltip displayed when hovering over the entry widget
.. method:: get_value(name)
Get the value of an entry widget.
:param name: Argument name
.. method:: get_values()
Get all values in the :class:`~artiq.gui.applets.EntryArea` as a dictionary. Names are stored as keys, and argument values as values.
.. method:: set_value(name, value)
Set the value of an entry widget. The change is temporary and will reset to default when the reset button is clicked.
:param name: Argument name
:param value: Object representing the new value of the argument. For :class:`~artiq.language.scan.Scannable` arguments, this parameter
should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject` will be set as the selected type when this function is called.
.. method:: set_values(values)
Set multiple values from a dictionary input. Calls :meth:`set_value` for each key-value pair.
:param values: Dictionary with names as keys and new argument values as values.

ARTIQ Releases
##############
ARTIQ follows a rolling release model, with beta, stable, and legacy channels. Different releases are saved as different branches on the M-Labs `ARTIQ repository <https://github.com/m-labs/artiq>`_. The ``master`` branch represents the beta version, where the next stable release of ARTIQ is in development at any given time. This branch is unstable and does not yet guarantee reliability or consistency, but may already offer new features and improvements; see the `beta release notes <https://github.com/m-labs/artiq/blob/master/RELEASE_NOTES.rst>`_ for the most up-to-date information. The ``release-[number]`` branches represent stable releases, of which the most recent is considered the current stable version, and the second-most recent the current legacy version.
To install the current stable version of ARTIQ, consult the *current* `Installing ARTIQ <https://m-labs.hk/artiq/manual/installing.html>`_ page. To install beta or legacy versions, consult the same page in their respective manuals. Instructions given in pre-legacy versions of the manual may or may not install their corresponding ARTIQ systems, and may or may not currently be supported (e.g. M-Labs does not host older ARTIQ versions for Conda, and Conda support will probably eventually be removed entirely). Regardless, all out-of-date versions remain available as complete source code on the repository.
The beta manual is hosted `here <https://m-labs.hk/artiq/manual-beta/>`_. The current manual is hosted `here <https://m-labs.hk/artiq/manual/>`__. The legacy manual is hosted `here <https://m-labs.hk/artiq/manual-legacy/>`__. Older versions of the manual can be rebuilt from the source files in ``doc/manual``, retrieved from the respective branch.
.. include:: ../../RELEASE_NOTES.rst
