Merge branch 'master' into nac3

Sébastien Bourdeauducq 2024-08-30 14:40:24 +08:00
commit 52196bb36e
64 changed files with 2232 additions and 1541 deletions


@@ -51,7 +51,7 @@ Closes #XXX
 ### Documentation Changes
-- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`cd doc/manual/; make html`) to ensure no errors.
+- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`nix build .#artiq-manual-html; nix build .#artiq-manual-pdf`) to ensure no errors.
 ### Git Logistics

.gitignore

@@ -11,6 +11,7 @@ __pycache__/
 .ipynb_checkpoints
 /doc/manual/_build
 /build
+/result
 /dist
 /*.egg-info
 /.coverage
@@ -22,7 +23,8 @@ __pycache__/
 /artiq/test/results
 /artiq/examples/*/results
 /artiq/examples/*/last_rid.pyon
-/artiq/examples/*/dataset_db.pyon
+/artiq/examples/*/dataset_db.mdb
+/artiq/examples/*/dataset_db.mdb-lock
 # when testing ad-hoc experiments at the root:
 /repository/


@@ -8,28 +8,27 @@ Reporting Issues/Bugs
 Thanks for `reporting issues to ARTIQ
 <https://github.com/m-labs/artiq/issues/new>`_! You can also discuss issues and
 ask questions on IRC (the #m-labs channel on OFTC), the `Mattermost chat
-<https://chat.m-labs.hk>`_, or on the `forum <https://forum.m-labs.hk>`_.
+<https://chat.m-labs.hk>`_, or in the `forum <https://forum.m-labs.hk>`_.
 The best bug reports are those which contain sufficient information. With
 accurate and comprehensive context, an issue can be resolved quickly and
 efficiently. Please consider adding the following data to your issue
 report if possible:
-* A clear and unique summary that fits into one line. Also check that
-  this issue has not yet been reported. If it has, add additional information there.
-* Precise steps to reproduce (list of actions that leads to the issue)
+* A clear and unique summary that fits into one line. Check that this
+  issue has not yet been reported; if it has, add additional information there.
+* Precise steps to reproduce (a list of actions that leads to the issue)
 * Expected behavior (what should happen)
 * Actual behavior (what happens instead)
-* Logging message, trace backs, screen shots where relevant
+* Logging message, tracebacks, screenshots, where applicable
 * Components involved (omit irrelevant parts):
-  * Operating System
-  * ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
-  * Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
-  * If using MSYS2, output of `pacman -Q`
+  * Operating system used
+  * ARTIQ version (run any command in the form of ``artiq_client --version``)
+  * Gateware and firmware loaded to the core device (in the output of
+    ``artiq_coremgmt [-D ....] log``)
   * Hardware involved
 For in-depth information on bug reporting, see:
 http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
@@ -39,10 +38,10 @@ https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines
 Contributing Code
 =================
-ARTIQ welcomes contributions. Write bite-sized patches that can stand alone,
-clean them up, write proper commit messages, add docstrings and unittests. Then
+ARTIQ welcomes contributions. Write bite-size patches that can stand alone,
+clean them up, write proper commit messages, add docstrings and unit tests;
 ``git rebase`` them onto the current master or merge the current master. Verify
-that the testsuite passes. Then submit a pull request. Expect your contribution
+that the test suite passes. Then submit a pull request. Expect your contribution
 to be held up to coding standards (e.g. use ``flake8`` to check yourself).
 Checklist for Code Contributions
@@ -52,7 +51,7 @@ Checklist for Code Contributions
 - Use correct spelling and grammar. Use your code editor to help you with
   syntax, spelling, and style
 - Style: PEP-8 (``flake8``)
-- Add, check docstrings and comments
+- Add or update docstrings and comments
 - Split your contribution into logically separate changes (``git rebase
   --interactive``). Merge (squash, fixup) commits that just fix previous commits
   or amend them. Remove unintended changes. Clean up your commits.
@@ -64,12 +63,37 @@ Checklist for Code Contributions
 - Review each of your commits for the above items (``git show``)
 - Update ``RELEASE_NOTES.md`` if there are noteworthy changes, especially if
   there are changes to existing APIs
-- Check, test, and update the documentation in `doc/`
-- Check, test, and update the unittests
+- Check, test, and update the documentation in ``doc/``
+- Check, test, and update the unit tests
 - Close and/or update issues
Contributing Documentation
==========================
ARTIQ welcomes documentation contributions. The ARTIQ manual is hosted online in HTML
form `here <https://m-labs.hk/artiq/manual/>`__ and in PDF form
`here <https://m-labs.hk/artiq/manual.pdf>`__. It is generated from source files
in ``doc/manual``, written in a variant of the
`reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_
markup language processed by `Sphinx <https://www.sphinx-doc.org/en/master/>`_, with
some of the additional reference material processed from inline documentation
in the ARTIQ source itself.
Write bite-size patches that can stand alone, clean them up, write proper commit
messages. Check that your edits render properly and compile without errors: ::
$ nix build .#artiq-manual-pdf
$ nix build .#artiq-manual-html
Elaborations, improvements, clarifications and corrections to any of the material
are happily accepted, but special attention is drawn to the manual
`FAQ <https://m-labs.hk/artiq/manual/faq.html>`_, where tips and solutions
are especially easy to add. See also the FAQ's own
`section on the subject <https://m-labs.hk/artiq/manual/faq.html#build-documentation>`_.
 Copyright and Sign-Off
-----------------------
+======================
 Authors retain copyright of their contributions to ARTIQ, but whenever possible
 should use the GNU LGPL version 3 license for them to be merged.
@@ -109,7 +133,7 @@ can certify the below:
     maintained indefinitely and may be redistributed consistent with
     this project or the open source license(s) involved.
-then you just add a line saying
+then add a line saying
     Signed-off-by: Random J Developer <random@developer.example.org>


@@ -6,6 +6,7 @@ Release notes
 ARTIQ-9 (Unreleased)
 --------------------
+* GUI state files are now automatically backed up upon successful loading.
 * Zotino monitoring in the dashboard now displays the values in volts.
 * afws_client now uses the "happy eyeballs" algorithm (RFC 6555) for a faster and more
   reliable connection to the server.
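The "happy eyeballs" behaviour mentioned in the release note above can be illustrated with a short sketch (illustrative only; afws_client's actual implementation may differ). In Python's asyncio, the RFC 6555 semantics of staggering connection attempts to the resolved IPv6/IPv4 addresses and keeping whichever socket completes first are enabled through the happy_eyeballs_delay parameter:

    # Sketch only, not afws_client code.
    import asyncio

    async def connect(host, port):
        # With happy_eyeballs_delay set, attempts to the resolved addresses are
        # started 0.25 s apart and the first successful connection wins.
        reader, writer = await asyncio.open_connection(
            host, port, happy_eyeballs_delay=0.25)
        return reader, writer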


@@ -5,3 +5,4 @@ def get_rev():
 def get_version():
     return os.getenv("VERSIONEER_OVERRIDE", default="10.0+unknown.beta")


@ -1,557 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2005-2010 ActiveState Software Inc.
# Copyright (c) 2013 Eddy Petrișor
"""Utilities for determining application-specific dirs.
See <http://github.com/ActiveState/appdirs> for details and usage.
"""
# Dev Notes:
# - MSDN on where to store app data files:
# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
__version_info__ = (1, 4, 1)
__version__ = '.'.join(map(str, __version_info__))
import sys
import os
PY3 = sys.version_info[0] == 3
if PY3:
unicode = str
if sys.platform.startswith('java'):
import platform
os_name = platform.java_ver()[3][0]
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
system = 'win32'
elif os_name.startswith('Mac'): # "Mac OS X", etc.
system = 'darwin'
else: # "Linux", "SunOS", "FreeBSD", etc.
# Setting this to "linux2" is not ideal, but only Windows or Mac
# are actually checked for and the rest of the module expects
# *sys.platform* style strings.
system = 'linux2'
else:
system = sys.platform
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
Mac OS X: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
That means, by default "~/.local/share/<AppName>".
"""
if system == "win32":
if appauthor is None:
appauthor = appname
const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
path = os.path.normpath(_get_win_folder(const))
if appname:
if appauthor is not False:
path = os.path.join(path, appauthor, appname)
else:
path = os.path.join(path, appname)
elif system == 'darwin':
path = os.path.expanduser('~/Library/Application Support/')
if appname:
path = os.path.join(path, appname)
else:
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
def site_data_dir(appname=None, appauthor=None, version=None, multipath=False):
"""Return full path to the user-shared data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"multipath" is an optional parameter only applicable to *nix
which indicates that the entire list of data dirs should be
returned. By default, the first item from XDG_DATA_DIRS is
returned, or '/usr/local/share/<AppName>',
if XDG_DATA_DIRS is not set
Typical user data directories are:
Mac OS X: /Library/Application Support/<AppName>
Unix: /usr/local/share/<AppName> or /usr/share/<AppName>
Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7.
For Unix, this is using the $XDG_DATA_DIRS[0] default.
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
"""
if system == "win32":
if appauthor is None:
appauthor = appname
path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
if appname:
if appauthor is not False:
path = os.path.join(path, appauthor, appname)
else:
path = os.path.join(path, appname)
elif system == 'darwin':
path = os.path.expanduser('/Library/Application Support')
if appname:
path = os.path.join(path, appname)
else:
# XDG default for $XDG_DATA_DIRS
# only first, if multipath is False
path = os.getenv('XDG_DATA_DIRS',
os.pathsep.join(['/usr/local/share', '/usr/share']))
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
if appname:
if version:
appname = os.path.join(appname, version)
pathlist = [os.sep.join([x, appname]) for x in pathlist]
if multipath:
path = os.pathsep.join(pathlist)
else:
path = pathlist[0]
return path
if appname and version:
path = os.path.join(path, version)
return path
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific config dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
Mac OS X: same as user_data_dir
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
Win *: same as user_data_dir
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
That means, by deafult "~/.config/<AppName>".
"""
if system in ["win32", "darwin"]:
path = user_data_dir(appname, appauthor, None, roaming)
else:
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
"""Return full path to the user-shared data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"multipath" is an optional parameter only applicable to *nix
which indicates that the entire list of config dirs should be
returned. By default, the first item from XDG_CONFIG_DIRS is
returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set
Typical user data directories are:
Mac OS X: same as site_data_dir
Unix: /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
$XDG_CONFIG_DIRS
Win *: same as site_data_dir
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
"""
if system in ["win32", "darwin"]:
path = site_data_dir(appname, appauthor)
if appname and version:
path = os.path.join(path, version)
else:
# XDG default for $XDG_CONFIG_DIRS
# only first, if multipath is False
path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
if appname:
if version:
appname = os.path.join(appname, version)
pathlist = [os.sep.join([x, appname]) for x in pathlist]
if multipath:
path = os.pathsep.join(pathlist)
else:
path = pathlist[0]
return path
def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
r"""Return full path to the user-specific cache dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"opinion" (boolean) can be False to disable the appending of
"Cache" to the base app data dir for Windows. See
discussion below.
Typical user cache directories are:
Mac OS X: ~/Library/Caches/<AppName>
Unix: ~/.cache/<AppName> (XDG default)
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache
On Windows the only suggestion in the MSDN docs is that local settings go in
the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming
app data dir (the default returned by `user_data_dir` above). Apps typically
put cache data somewhere *under* the given dir here. Some examples:
...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
...\Acme\SuperApp\Cache\1.0
OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
This can be disabled with the `opinion=False` option.
"""
if system == "win32":
if appauthor is None:
appauthor = appname
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
if appname:
if appauthor is not False:
path = os.path.join(path, appauthor, appname)
else:
path = os.path.join(path, appname)
if opinion:
path = os.path.join(path, "Cache")
elif system == 'darwin':
path = os.path.expanduser('~/Library/Caches')
if appname:
path = os.path.join(path, appname)
else:
path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
r"""Return full path to the user-specific log dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"opinion" (boolean) can be False to disable the appending of
"Logs" to the base app data dir for Windows, and "log" to the
base cache dir for Unix. See discussion below.
Typical user cache directories are:
Mac OS X: ~/Library/Logs/<AppName>
Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if defined
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs
On Windows the only suggestion in the MSDN docs is that local settings
go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
examples of what some windows apps use for a logs dir.)
OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
value for Windows and appends "log" to the user cache dir for Unix.
This can be disabled with the `opinion=False` option.
"""
if system == "darwin":
path = os.path.join(
os.path.expanduser('~/Library/Logs'),
appname)
elif system == "win32":
path = user_data_dir(appname, appauthor, version)
version = False
if opinion:
path = os.path.join(path, "Logs")
else:
path = user_cache_dir(appname, appauthor, version)
version = False
if opinion:
path = os.path.join(path, "log")
if appname and version:
path = os.path.join(path, version)
return path
class AppDirs(object):
"""Convenience wrapper for getting application dirs."""
def __init__(self, appname, appauthor=None, version=None, roaming=False,
multipath=False):
self.appname = appname
self.appauthor = appauthor
self.version = version
self.roaming = roaming
self.multipath = multipath
@property
def user_data_dir(self):
return user_data_dir(self.appname, self.appauthor,
version=self.version, roaming=self.roaming)
@property
def site_data_dir(self):
return site_data_dir(self.appname, self.appauthor,
version=self.version, multipath=self.multipath)
@property
def user_config_dir(self):
return user_config_dir(self.appname, self.appauthor,
version=self.version, roaming=self.roaming)
@property
def site_config_dir(self):
return site_config_dir(self.appname, self.appauthor,
version=self.version, multipath=self.multipath)
@property
def user_cache_dir(self):
return user_cache_dir(self.appname, self.appauthor,
version=self.version)
@property
def user_log_dir(self):
return user_log_dir(self.appname, self.appauthor,
version=self.version)
#---- internal support stuff
def _get_win_folder_from_registry(csidl_name):
"""This is a fallback technique at best. I'm not sure if using the
registry for this guarantees us the correct answer for all CSIDL_*
names.
"""
if PY3:
import winreg as _winreg
else:
import _winreg
shell_folder_name = {
"CSIDL_APPDATA": "AppData",
"CSIDL_COMMON_APPDATA": "Common AppData",
"CSIDL_LOCAL_APPDATA": "Local AppData",
}[csidl_name]
key = _winreg.OpenKey(
_winreg.HKEY_CURRENT_USER,
r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
)
dir, type = _winreg.QueryValueEx(key, shell_folder_name)
return dir
def _get_win_folder_with_pywin32(csidl_name):
from win32com.shell import shellcon, shell
dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
# Try to make this a unicode path because SHGetFolderPath does
# not return unicode strings when there is unicode data in the
# path.
try:
dir = unicode(dir)
# Downgrade to short path name if have highbit chars. See
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
has_high_char = False
for c in dir:
if ord(c) > 255:
has_high_char = True
break
if has_high_char:
try:
import win32api
dir = win32api.GetShortPathName(dir)
except ImportError:
pass
except UnicodeError:
pass
return dir
def _get_win_folder_with_ctypes(csidl_name):
import ctypes
csidl_const = {
"CSIDL_APPDATA": 26,
"CSIDL_COMMON_APPDATA": 35,
"CSIDL_LOCAL_APPDATA": 28,
}[csidl_name]
buf = ctypes.create_unicode_buffer(1024)
ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
# Downgrade to short path name if have highbit chars. See
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
has_high_char = False
for c in buf:
if ord(c) > 255:
has_high_char = True
break
if has_high_char:
buf2 = ctypes.create_unicode_buffer(1024)
if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
buf = buf2
return buf.value
def _get_win_folder_with_jna(csidl_name):
import array
from com.sun import jna
from com.sun.jna.platform import win32
buf_size = win32.WinDef.MAX_PATH * 2
buf = array.zeros('c', buf_size)
shell = win32.Shell32.INSTANCE
shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
# Downgrade to short path name if have highbit chars. See
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
has_high_char = False
for c in dir:
if ord(c) > 255:
has_high_char = True
break
if has_high_char:
buf = array.zeros('c', buf_size)
kernel = win32.Kernel32.INSTANCE
if kernel.GetShortPathName(dir, buf, buf_size):
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
return dir
if system == "win32":
try:
import win32com.shell
_get_win_folder = _get_win_folder_with_pywin32
except ImportError:
try:
from ctypes import windll
_get_win_folder = _get_win_folder_with_ctypes
except ImportError:
try:
import com.sun.jna
_get_win_folder = _get_win_folder_with_jna
except ImportError:
_get_win_folder = _get_win_folder_from_registry
#---- self test code
if __name__ == "__main__":
appname = "MyApp"
appauthor = "MyCompany"
props = ("user_data_dir", "site_data_dir",
"user_config_dir", "site_config_dir",
"user_cache_dir", "user_log_dir")
print("-- app dirs %s --" % __version__)
print("-- app dirs (with optional 'version')")
dirs = AppDirs(appname, appauthor, version="1.0")
for prop in props:
print("%s: %s" % (prop, getattr(dirs, prop)))
print("\n-- app dirs (without optional 'version')")
dirs = AppDirs(appname, appauthor)
for prop in props:
print("%s: %s" % (prop, getattr(dirs, prop)))
print("\n-- app dirs (without optional 'appauthor')")
dirs = AppDirs(appname)
for prop in props:
print("%s: %s" % (prop, getattr(dirs, prop)))
print("\n-- app dirs (with disabled 'appauthor')")
dirs = AppDirs(appname, appauthor=False)
for prop in props:
print("%s: %s" % (prop, getattr(dirs, prop)))


@@ -34,7 +34,7 @@ class NumberWidget(LayoutWidget):
         self.addWidget(self.number_area, 0, 0)
         self.unit_area = QtWidgets.QLabel()
-        self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop)
+        self.unit_area.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight | QtCore.Qt.AlignmentFlag.AlignTop)
         self.addWidget(self.unit_area, 0, 1)
         self.lcd_widget = QResponsiveLCDNumber()


@@ -6,6 +6,7 @@ import socket
 import re
 import linecache
 import os
+import builtins
 from enum import Enum
 from fractions import Fraction
 from collections import namedtuple
@@ -734,9 +735,10 @@ class CommKernel:
             self._write_int32(embedding_map.store_str(function))
         else:
             exn_type = type(exn)
-            if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
-                    hasattr(exn, "artiq_builtin"):
-                name = "0:{}".format(exn_type.__name__)
+            if exn_type in builtins.__dict__.values():
+                name = "0:{}".format(exn_type.__qualname__)
+            elif hasattr(exn, "artiq_builtin"):
+                name = "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
             else:
                 exn_id = embedding_map.store_object(exn_type)
                 name = "{}:{}.{}".format(exn_id,


@@ -25,6 +25,9 @@ def rtio_get_destination_status(destination: int32) -> bool:
 def rtio_get_counter() -> int64:
     raise NotImplementedError("syscall not simulated")
+@syscall
+def test_exception_id_sync(id: TInt32) -> TNone:
+    raise NotImplementedError("syscall not simulated")
 artiq_builtins = {
     "none": none,


@@ -57,7 +57,6 @@ class ClockFailure(Exception):
     """Raised when RTIO PLL has lost lock."""
     artiq_builtin = True
 @nac3
 class I2CError(Exception):
     """Raised when a I2C transaction fails."""
@@ -69,3 +68,8 @@ class SPIError(Exception):
     """Raised when a SPI transaction fails."""
     artiq_builtin = True
+
+class UnwrapNoneError(Exception):
+    """Raised when unwrapping a none Option."""
+    artiq_builtin = True


@@ -338,7 +338,7 @@ class _DACWidget(_MoninjWidget):
         self.offset_dacs = offset_dacs
         self.value = QtWidgets.QLabel()
-        self.value.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignTop)
+        self.value.setAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter | QtCore.Qt.AlignmentFlag.AlignTop)
         self.grid.addWidget(self.value, 2, 1, 6, 1)
         self.grid.setRowStretch(1, 1)


@@ -208,16 +208,19 @@ class _BaseWaveform(pg.PlotWidget):
             pixmapi = QtWidgets.QApplication.style().standardIcon(
                 QtWidgets.QStyle.StandardPixmap.SP_FileIcon)
             drag.setPixmap(pixmapi.pixmap(32))
-            drag.exec_(QtCore.Qt.MoveAction)
+            drag.exec(QtCore.Qt.DropAction.MoveAction)
         else:
             super().mouseMoveEvent(e)
     def wheelEvent(self, e):
-        if e.modifiers() & QtCore.Qt.ControlModifier:
+        if e.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
             super().wheelEvent(e)
+        else:
+            e.ignore()
     def mouseDoubleClickEvent(self, e):
-        pos = self.view_box.mapSceneToView(e.pos())
+        pos = self.view_box.mapSceneToView(e.position())
         self.cursorMove.emit(pos.x())
@@ -393,6 +396,13 @@ class LogWaveform(_BaseWaveform):
         self.plot_data_item.setData(x=[], y=[])
+# pg.GraphicsView ignores dragEnterEvent but not dragLeaveEvent
+# https://github.com/pyqtgraph/pyqtgraph/blob/1e98704eac6b85de9c35371079f561042e88ad68/pyqtgraph/widgets/GraphicsView.py#L388
+class _RefAxis(pg.PlotWidget):
+    def dragLeaveEvent(self, ev):
+        ev.ignore()
 class _WaveformView(QtWidgets.QWidget):
     cursorMove = QtCore.pyqtSignal(float)
@@ -408,7 +418,7 @@ class _WaveformView(QtWidgets.QWidget):
         layout.setSpacing(0)
         self.setLayout(layout)
-        self._ref_axis = pg.PlotWidget()
+        self._ref_axis = _RefAxis()
         self._ref_axis.hideAxis("bottom")
         self._ref_axis.hideAxis("left")
         self._ref_axis.hideButtons()


@@ -173,4 +173,12 @@ static mut API: &'static [(&'static str, *const ())] = &[
     api!(spi_set_config = ::nrt_bus::spi::set_config),
     api!(spi_write = ::nrt_bus::spi::write),
     api!(spi_read = ::nrt_bus::spi::read),
+
+    /*
+     * syscall for unit tests
+     * Used in `artiq.tests.coredevice.test_exceptions.ExceptionTest.test_raise_exceptions_kernel`
+     * This syscall checks that the exception IDs used in the Python `EmbeddingMap` (in `artiq.compiler.embedding`)
+     * match the `EXCEPTION_ID_LOOKUP` defined in the firmware (`artiq::firmware::ksupport::eh_artiq`)
+     */
+    api!(test_exception_id_sync = ::eh_artiq::test_exception_id_sync)
 ];
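A hedged sketch of how a host-side test could exercise this syscall (the real test is the one named in the comment above and may be structured differently; the syscall binding in kernel scope is an assumption, and the ID-to-class pairs are taken from this commit's firmware table):

    from artiq.experiment import EnvExperiment, kernel
    from artiq.coredevice.exceptions import RTIOUnderflow, RTIOOverflow, DMAError

    class ExceptionIdSyncCheck(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def raise_id(self, eid):
            # assumed to be exposed to kernels via the api! entry above
            test_exception_id_sync(eid)

        def run(self):
            # IDs follow EXCEPTION_ID_LOOKUP: RTIOUnderflow=0, RTIOOverflow=1, DMAError=3.
            # If compiler and firmware disagree, a different exception class
            # propagates out of the try block instead of being caught here.
            for eid, exn_type in [(0, RTIOUnderflow), (1, RTIOOverflow), (3, DMAError)]:
                try:
                    self.raise_id(eid)
                except exn_type:
                    pass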


@@ -328,19 +328,30 @@ extern fn stop_fn(_version: c_int,
     }
 }
-static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
-    ("RuntimeError", 0),
-    ("RTIOUnderflow", 1),
-    ("RTIOOverflow", 2),
-    ("RTIODestinationUnreachable", 3),
-    ("DMAError", 4),
-    ("I2CError", 5),
-    ("CacheError", 6),
-    ("SPIError", 7),
-    ("ZeroDivisionError", 8),
-    ("IndexError", 9),
-    ("UnwrapNoneError", 10),
-    ("SubkernelError", 11)
+// Must be kept in sync with `artiq.compiler.embedding`
+static EXCEPTION_ID_LOOKUP: [(&str, u32); 22] = [
+    ("RTIOUnderflow", 0),
+    ("RTIOOverflow", 1),
+    ("RTIODestinationUnreachable", 2),
+    ("DMAError", 3),
+    ("I2CError", 4),
+    ("CacheError", 5),
+    ("SPIError", 6),
+    ("SubkernelError", 7),
+    ("AssertionError", 8),
+    ("AttributeError", 9),
+    ("IndexError", 10),
+    ("IOError", 11),
+    ("KeyError", 12),
+    ("NotImplementedError", 13),
+    ("OverflowError", 14),
+    ("RuntimeError", 15),
+    ("TimeoutError", 16),
+    ("TypeError", 17),
+    ("ValueError", 18),
+    ("ZeroDivisionError", 19),
+    ("LinAlgError", 20),
+    ("UnwrapNoneError", 21),
 ];
 pub fn get_exception_id(name: &str) -> u32 {
@@ -352,3 +363,29 @@ pub fn get_exception_id(name: &str) -> u32 {
     unimplemented!("unallocated internal exception id")
 }
/// Takes as input exception id from host
/// Generates a new exception with:
/// * `id` set to `exn_id`
/// * `message` set to corresponding exception name from `EXCEPTION_ID_LOOKUP`
///
/// The message is matched on host to ensure correct exception is being referred
/// This test checks the synchronization of exception ids for runtime errors
#[no_mangle]
pub extern "C-unwind" fn test_exception_id_sync(exn_id: u32) {
let message = EXCEPTION_ID_LOOKUP
.iter()
.find_map(|&(name, id)| if id == exn_id { Some(name) } else { None })
.unwrap_or("unallocated internal exception id");
let exn = Exception {
id: exn_id,
file: file!().as_c_slice(),
line: 0,
column: 0,
function: "test_exception_id_sync".as_c_slice(),
message: message.as_c_slice(),
param: [0, 0, 0]
};
unsafe { raise(&exn) };
}
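For context, a sketch of the host-side counterpart that the sync comment refers to (the real table lives in artiq.compiler.embedding; the helper name used here is an assumption): the compiler preallocates the same names in the same order, so a numeric ID means the same exception on both sides.

    # Order must match EXCEPTION_ID_LOOKUP above; the list index is the ID.
    RUNTIME_EXCEPTION_NAMES = [
        "RTIOUnderflow", "RTIOOverflow", "RTIODestinationUnreachable",
        "DMAError", "I2CError", "CacheError", "SPIError", "SubkernelError",
        "AssertionError", "AttributeError", "IndexError", "IOError",
        "KeyError", "NotImplementedError", "OverflowError", "RuntimeError",
        "TimeoutError", "TypeError", "ValueError", "ZeroDivisionError",
        "LinAlgError", "UnwrapNoneError",
    ]

    def preallocate_exception_names(embedding_map):
        # Hypothetical helper; in ARTIQ the preallocation lives in
        # artiq.compiler.embedding.
        for eid, name in enumerate(RUNTIME_EXCEPTION_NAMES):
            embedding_map.assign_exception_id(name, eid)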


@@ -484,17 +484,23 @@ extern "C-unwind" fn subkernel_load_run(id: u32, destination: u8, run: bool) {
 extern "C-unwind" fn subkernel_await_finish(id: u32, timeout: i64) {
     send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout });
-    recv!(SubkernelAwaitFinishReply { status } => {
-        match status {
-            SubkernelStatus::NoError => (),
-            SubkernelStatus::IncorrectState => raise!("SubkernelError",
-                "Subkernel not running"),
-            SubkernelStatus::Timeout => raise!("SubkernelError",
-                "Subkernel timed out"),
-            SubkernelStatus::CommLost => raise!("SubkernelError",
-                "Lost communication with satellite"),
-            SubkernelStatus::OtherError => raise!("SubkernelError",
-                "An error occurred during subkernel operation")
+    recv(move |request| {
+        if let SubkernelAwaitFinishReply = request { }
+        else if let SubkernelError(status) = request {
+            match status {
+                SubkernelStatus::IncorrectState => raise!("SubkernelError",
+                    "Subkernel not running"),
+                SubkernelStatus::Timeout => raise!("SubkernelError",
+                    "Subkernel timed out"),
+                SubkernelStatus::CommLost => raise!("SubkernelError",
+                    "Lost communication with satellite"),
+                SubkernelStatus::OtherError => raise!("SubkernelError",
+                    "An error occurred during subkernel operation"),
+                SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
+            }
+        } else {
+            send(&Log(format_args!("unexpected reply: {:?}\n", request)));
+            loop {}
         }
     })
 }
@@ -512,23 +518,28 @@ extern fn subkernel_send_message(id: u32, is_return: bool, destination: u8,
 extern "C-unwind" fn subkernel_await_message(id: i32, timeout: i64, tags: &CSlice<u8>, min: u8, max: u8) -> u8 {
     send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() });
-    recv!(SubkernelMsgRecvReply { status, count } => {
-        match status {
-            SubkernelStatus::NoError => {
-                if count < &min || count > &max {
-                    raise!("SubkernelError",
-                        "Received less or more arguments than expected");
-                }
-                *count
-            }
-            SubkernelStatus::IncorrectState => raise!("SubkernelError",
-                "Subkernel not running"),
-            SubkernelStatus::Timeout => raise!("SubkernelError",
-                "Subkernel timed out"),
-            SubkernelStatus::CommLost => raise!("SubkernelError",
-                "Lost communication with satellite"),
-            SubkernelStatus::OtherError => raise!("SubkernelError",
-                "An error occurred during subkernel operation")
+    recv(move |request| {
+        if let SubkernelMsgRecvReply { count } = request {
+            if count < &min || count > &max {
+                raise!("SubkernelError",
+                    "Received less or more arguments than expected");
+            }
+            *count
+        } else if let SubkernelError(status) = request {
+            match status {
+                SubkernelStatus::IncorrectState => raise!("SubkernelError",
+                    "Subkernel not running"),
+                SubkernelStatus::Timeout => raise!("SubkernelError",
+                    "Subkernel timed out"),
+                SubkernelStatus::CommLost => raise!("SubkernelError",
+                    "Lost communication with satellite"),
+                SubkernelStatus::OtherError => raise!("SubkernelError",
+                    "An error occurred during subkernel operation"),
+                SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
+            }
+        } else {
+            send(&Log(format_args!("unexpected reply: {:?}\n", request)));
+            loop {}
         }
     })
     // RpcRecvRequest should be called `count` times after this to receive message data


@@ -123,8 +123,8 @@ pub enum Packet {
     SubkernelLoadRunRequest { source: u8, destination: u8, id: u32, run: bool },
     SubkernelLoadRunReply { destination: u8, succeeded: bool },
     SubkernelFinished { destination: u8, id: u32, with_exception: bool, exception_src: u8 },
-    SubkernelExceptionRequest { destination: u8 },
-    SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
+    SubkernelExceptionRequest { source: u8, destination: u8 },
+    SubkernelException { destination: u8, last: bool, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
     SubkernelMessage { source: u8, destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
     SubkernelMessageAck { destination: u8 },
 }
@@ -367,14 +367,17 @@ impl Packet {
                 exception_src: reader.read_u8()?
             },
             0xc9 => Packet::SubkernelExceptionRequest {
+                source: reader.read_u8()?,
                 destination: reader.read_u8()?
             },
             0xca => {
+                let destination = reader.read_u8()?;
                 let last = reader.read_bool()?;
                 let length = reader.read_u16()?;
-                let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
+                let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
                 reader.read_exact(&mut data[0..length as usize])?;
                 Packet::SubkernelException {
+                    destination: destination,
                     last: last,
                     length: length,
                     data: data
@@ -663,12 +666,14 @@ impl Packet {
                 writer.write_bool(with_exception)?;
                 writer.write_u8(exception_src)?;
             },
-            Packet::SubkernelExceptionRequest { destination } => {
+            Packet::SubkernelExceptionRequest { source, destination } => {
                 writer.write_u8(0xc9)?;
+                writer.write_u8(source)?;
                 writer.write_u8(destination)?;
             },
-            Packet::SubkernelException { last, length, data } => {
+            Packet::SubkernelException { destination, last, length, data } => {
                 writer.write_u8(0xca)?;
+                writer.write_u8(destination)?;
                 writer.write_bool(last)?;
                 writer.write_u16(length)?;
                 writer.write_all(&data[0..length as usize])?;
@@ -693,18 +698,20 @@ impl Packet {
     pub fn routable_destination(&self) -> Option<u8> {
         // only for packets that could be re-routed, not only forwarded
         match self {
             Packet::DmaAddTraceRequest { destination, .. } => Some(*destination),
             Packet::DmaAddTraceReply { destination, .. } => Some(*destination),
             Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination),
             Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination),
             Packet::DmaPlaybackRequest { destination, .. } => Some(*destination),
             Packet::DmaPlaybackReply { destination, .. } => Some(*destination),
             Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination),
             Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination),
             Packet::SubkernelMessage { destination, .. } => Some(*destination),
             Packet::SubkernelMessageAck { destination, .. } => Some(*destination),
+            Packet::SubkernelExceptionRequest { destination, .. } => Some(*destination),
+            Packet::SubkernelException { destination, .. } => Some(*destination),
             Packet::DmaPlaybackStatus { destination, .. } => Some(*destination),
             Packet::SubkernelFinished { destination, .. } => Some(*destination),
             _ => None
         }
     }


@@ -11,12 +11,12 @@ pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff;
 pub const KSUPPORT_HEADER_SIZE: usize = 0x74;
 #[derive(Debug)]
-pub enum SubkernelStatus {
-    NoError,
+pub enum SubkernelStatus<'a> {
     Timeout,
     IncorrectState,
     CommLost,
-    OtherError
+    Exception(eh::eh_artiq::Exception<'a>),
+    OtherError,
 }
 #[derive(Debug)]
@@ -106,10 +106,11 @@ pub enum Message<'a> {
     SubkernelLoadRunRequest { id: u32, destination: u8, run: bool },
     SubkernelLoadRunReply { succeeded: bool },
     SubkernelAwaitFinishRequest { id: u32, timeout: i64 },
-    SubkernelAwaitFinishReply { status: SubkernelStatus },
+    SubkernelAwaitFinishReply,
     SubkernelMsgSend { id: u32, destination: Option<u8>, count: u8, tag: &'a [u8], data: *const *const () },
     SubkernelMsgRecvRequest { id: i32, timeout: i64, tags: &'a [u8] },
-    SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 },
+    SubkernelMsgRecvReply { count: u8 },
+    SubkernelError(SubkernelStatus<'a>),
     Log(fmt::Arguments<'a>),
     LogSlice(&'a str)


@@ -95,7 +95,9 @@ pub mod subkernel {
     use board_artiq::drtio_routing::RoutingTable;
     use board_misoc::clock;
     use proto_artiq::{drtioaux_proto::{PayloadStatus, MASTER_PAYLOAD_MAX_SIZE}, rpc_proto as rpc};
-    use io::Cursor;
+    use io::{Cursor, ProtoRead};
+    use eh::eh_artiq::Exception;
+    use cslice::CSlice;
     use rtio_mgt::drtio;
     use sched::{Io, Mutex, Error as SchedError};
@@ -226,8 +228,8 @@ pub mod subkernel {
         if subkernel.state == SubkernelState::Running {
             subkernel.state = SubkernelState::Finished {
                 status: match with_exception {
                     true => FinishStatus::Exception(exception_src),
                     false => FinishStatus::Ok,
                 }
             }
         }
@@ -256,6 +258,49 @@ pub mod subkernel {
         }
     }
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
let len = reader.read_u32()? as usize;
if len == usize::MAX {
let data = reader.read_u32()?;
Ok(unsafe { CSlice::new(data as *const u8, len) })
} else {
let pos = reader.position();
let slice = unsafe {
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
CSlice::new(ptr, len)
};
reader.set_position(pos + len);
Ok(slice)
}
}
pub fn read_exception(buffer: &[u8]) -> Result<Exception, Error>
{
let mut reader = Cursor::new(buffer);
let mut byte = reader.read_u8()?;
// to sync
while byte != 0x5a {
byte = reader.read_u8()?;
}
// skip sync bytes, 0x09 indicates exception
while byte != 0x09 {
byte = reader.read_u8()?;
}
let _len = reader.read_u32()?;
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
Ok(Exception {
id: reader.read_u32()?,
message: read_exception_string(&mut reader)?,
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
file: read_exception_string(&mut reader)?,
line: reader.read_u32()?,
column: reader.read_u32()?,
function: read_exception_string(&mut reader)?
})
}
     pub fn retrieve_finish_status(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
             routing_table: &RoutingTable, id: u32) -> Result<SubkernelFinished, Error> {
         let _lock = subkernel_mutex.lock(io)?;


@@ -119,17 +119,19 @@ pub mod drtio {
             },
             // (potentially) routable packets
             drtioaux::Packet::DmaAddTraceRequest { destination, .. } |
             drtioaux::Packet::DmaAddTraceReply { destination, .. } |
             drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } |
             drtioaux::Packet::DmaRemoveTraceReply { destination, .. } |
             drtioaux::Packet::DmaPlaybackRequest { destination, .. } |
             drtioaux::Packet::DmaPlaybackReply { destination, .. } |
             drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } |
             drtioaux::Packet::SubkernelLoadRunReply { destination, .. } |
             drtioaux::Packet::SubkernelMessage { destination, .. } |
             drtioaux::Packet::SubkernelMessageAck { destination, .. } |
+            drtioaux::Packet::SubkernelExceptionRequest { destination, .. } |
+            drtioaux::Packet::SubkernelException { destination, .. } |
             drtioaux::Packet::DmaPlaybackStatus { destination, .. } |
             drtioaux::Packet::SubkernelFinished { destination, .. } => {
                 if *destination == 0 {
                     false
                 } else {
@@ -612,9 +614,9 @@ pub mod drtio {
             let mut remote_data: Vec<u8> = Vec::new();
             loop {
                 let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
-                    &drtioaux::Packet::SubkernelExceptionRequest { destination: destination })?;
+                    &drtioaux::Packet::SubkernelExceptionRequest { source: 0, destination: destination })?;
                 match reply {
-                    drtioaux::Packet::SubkernelException { last, length, data } => {
+                    drtioaux::Packet::SubkernelException { destination: 0, last, length, data } => {
                         remote_data.extend(&data[0..length as usize]);
                         if last {
                             return Ok(remote_data);


@@ -126,19 +126,6 @@ macro_rules! unexpected {
     ($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*))));
 }
-#[cfg(has_drtio)]
-macro_rules! propagate_subkernel_exception {
-    ( $exception:ident, $stream:ident ) => {
-        error!("Exception in subkernel");
-        match $stream {
-            None => return Ok(true),
-            Some(ref mut $stream) => {
-                $stream.write_all($exception)?;
-            }
-        }
-    }
-}
 // Persistent state
 #[derive(Debug)]
 struct Congress {
@@ -686,23 +673,26 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
         &kern::SubkernelAwaitFinishRequest{ id, timeout } => {
             let res = subkernel::await_finish(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table,
                 id, timeout);
-            let status = match res {
+            let response = match res {
                 Ok(ref res) => {
                     if res.comm_lost {
-                        kern::SubkernelStatus::CommLost
-                    } else if let Some(exception) = &res.exception {
-                        propagate_subkernel_exception!(exception, stream);
-                        // will not be called after exception is served
-                        kern::SubkernelStatus::OtherError
+                        kern::SubkernelError(kern::SubkernelStatus::CommLost)
+                    } else if let Some(raw_exception) = &res.exception {
+                        let exception = subkernel::read_exception(raw_exception);
+                        if let Ok(exception) = exception {
+                            kern::SubkernelError(kern::SubkernelStatus::Exception(exception))
+                        } else {
+                            kern::SubkernelError(kern::SubkernelStatus::OtherError)
+                        }
                     } else {
-                        kern::SubkernelStatus::NoError
+                        kern::SubkernelAwaitFinishReply
                     }
                 },
-                Err(SubkernelError::Timeout) => kern::SubkernelStatus::Timeout,
-                Err(SubkernelError::IncorrectState) => kern::SubkernelStatus::IncorrectState,
-                Err(_) => kern::SubkernelStatus::OtherError
+                Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
+                Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
+                Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
             };
-            kern_send(io, &kern::SubkernelAwaitFinishReply { status: status })
+            kern_send(io, &response)
         }
         #[cfg(has_drtio)]
         &kern::SubkernelMsgSend { id, destination, count, tag, data } => {
@ -712,72 +702,80 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
#[cfg(has_drtio)] #[cfg(has_drtio)]
&kern::SubkernelMsgRecvRequest { id, timeout, tags } => { &kern::SubkernelMsgRecvRequest { id, timeout, tags } => {
let message_received = subkernel::message_await(io, subkernel_mutex, id as u32, timeout); let message_received = subkernel::message_await(io, subkernel_mutex, id as u32, timeout);
let (status, count) = match message_received { if let Err(SubkernelError::SubkernelFinished) = message_received {
Ok(ref message) => (kern::SubkernelStatus::NoError, message.count), let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex,
Err(SubkernelError::Timeout) => (kern::SubkernelStatus::Timeout, 0), routing_table, id as u32)?;
Err(SubkernelError::IncorrectState) => (kern::SubkernelStatus::IncorrectState, 0), if res.comm_lost {
Err(SubkernelError::SubkernelFinished) => { kern_send(io,
let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex, &kern::SubkernelError(kern::SubkernelStatus::CommLost))?;
routing_table, id as u32)?; } else if let Some(raw_exception) = &res.exception {
if res.comm_lost { let exception = subkernel::read_exception(raw_exception);
(kern::SubkernelStatus::CommLost, 0) if let Ok(exception) = exception {
} else if let Some(exception) = &res.exception { kern_send(io,
propagate_subkernel_exception!(exception, stream); &kern::SubkernelError(kern::SubkernelStatus::Exception(exception)))?;
(kern::SubkernelStatus::OtherError, 0)
} else { } else {
(kern::SubkernelStatus::OtherError, 0) kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
} }
} else {
kern_send(io,
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
} }
Err(_) => (kern::SubkernelStatus::OtherError, 0) } else {
}; let message = match message_received {
kern_send(io, &kern::SubkernelMsgRecvReply { status: status, count: count})?; Ok(ref message) => kern::SubkernelMsgRecvReply { count: message.count },
if let Ok(message) = message_received { Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
// receive code almost identical to RPC recv, except we are not reading from a stream Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
let mut reader = Cursor::new(message.data); Err(SubkernelError::SubkernelFinished) => unreachable!(), // taken care of above
let mut current_tags = tags; Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
let mut i = 0; };
loop { kern_send(io, &message)?;
// kernel has to consume all arguments in the whole message if let Ok(message) = message_received {
let slot = kern_recv(io, |reply| { // receive code almost identical to RPC recv, except we are not reading from a stream
match reply { let mut reader = Cursor::new(message.data);
&kern::RpcRecvRequest(slot) => Ok(slot), let mut current_tags = tags;
other => unexpected!( let mut i = 0;
"expected root value slot from kernel CPU, not {:?}", other) loop {
} // kernel has to consume all arguments in the whole message
})?; let slot = kern_recv(io, |reply| {
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
if size == 0 {
return Ok(0 as *mut ())
}
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
Ok(kern_recv(io, |reply| {
match reply { match reply {
&kern::RpcRecvRequest(slot) => Ok(slot), &kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!( other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other) "expected root value slot from kernel CPU, not {:?}", other)
} }
})?) })?;
}); let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
match res { if size == 0 {
Ok(new_tags) => { return Ok(0 as *mut ())
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
} }
}, kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
Err(_) => unexpected!("expected valid subkernel message data") Ok(kern_recv(io, |reply| {
}; match reply {
&kern::RpcRecvRequest(slot) => Ok(slot),
other => unexpected!(
"expected nested value slot from kernel CPU, not {:?}", other)
}
})?)
});
match res {
Ok(new_tags) => {
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
i += 1;
if i < message.count {
// update the tag for next read
current_tags = new_tags;
} else {
// should be done by then
break;
}
},
Err(_) => unexpected!("expected valid subkernel message data")
};
}
} }
Ok(())
} else {
// if timed out, no data has been received, exception should be raised by kernel // if timed out, no data has been received, exception should be raised by kernel
Ok(())
} }
Ok(())
}, },
request => unexpected!("unexpected request {:?} from kernel CPU", request) request => unexpected!("unexpected request {:?} from kernel CPU", request)
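For context, the kernel-side counterpart of this exchange is the subkernel message-passing API described in the subkernel documentation; a minimal sketch (assuming the ``subkernel_send``/``subkernel_recv``/``subkernel_await`` builtins, with an illustrative destination number) looks like: ::

    from artiq.experiment import *

    class MessageToSubkernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @subkernel(destination=1)              # destination number is illustrative
        def consume(self) -> TInt32:
            # blocks in the SubkernelMsgRecvRequest path handled by the firmware above
            data = subkernel_recv("data", TInt32)
            return data + 1

        @kernel
        def run(self):
            self.core.reset()
            self.consume()                     # start the subkernel on the satellite
            subkernel_send(1, "data", 42)      # deliver the message it is waiting for
            result = subkernel_await(self.consume)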
@ -916,7 +914,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
Ok(()) => Ok(()) =>
info!("startup kernel finished"), info!("startup kernel finished"),
Err(Error::KernelNotFound) => Err(Error::KernelNotFound) =>
info!("no startup kernel found"), debug!("no startup kernel found"),
Err(err) => { Err(err) => {
congress.finished_cleanly.set(false); congress.finished_cleanly.set(false);
error!("startup kernel aborted: {}", err); error!("startup kernel aborted: {}", err);
@ -1011,7 +1009,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
drtio::clear_buffers(&io, &aux_mutex); drtio::clear_buffers(&io, &aux_mutex);
} }
Err(Error::KernelNotFound) => { Err(Error::KernelNotFound) => {
info!("no idle kernel found"); debug!("no idle kernel found");
while io.relinquish().is_ok() {} while io.relinquish().is_ok() {}
} }
Err(err) => { Err(err) => {
View File
@ -1,6 +1,6 @@
use core::mem; use core::mem;
use alloc::{string::String, format, vec::Vec, collections::btree_map::BTreeMap}; use alloc::{string::String, format, vec::Vec, collections::btree_map::BTreeMap};
use cslice::AsCSlice; use cslice::{CSlice, AsCSlice};
use board_artiq::{drtioaux, drtio_routing::RoutingTable, mailbox, spi}; use board_artiq::{drtioaux, drtio_routing::RoutingTable, mailbox, spi};
use board_misoc::{csr, clock, i2c}; use board_misoc::{csr, clock, i2c};
@ -10,14 +10,13 @@ use proto_artiq::{
session_proto::Reply::KernelException as HostKernelException, session_proto::Reply::KernelException as HostKernelException,
rpc_proto as rpc}; rpc_proto as rpc};
use eh::eh_artiq; use eh::eh_artiq;
use io::Cursor; use io::{Cursor, ProtoRead};
use kernel::eh_artiq::StackPointerBacktrace; use kernel::eh_artiq::StackPointerBacktrace;
use ::{cricon_select, RtioMaster}; use ::{cricon_select, RtioMaster};
use cache::Cache; use cache::Cache;
use dma::{Manager as DmaManager, Error as DmaError}; use dma::{Manager as DmaManager, Error as DmaError};
use routing::{Router, Sliceable, SliceMeta}; use routing::{Router, Sliceable, SliceMeta};
use SAT_PAYLOAD_MAX_SIZE;
use MASTER_PAYLOAD_MAX_SIZE; use MASTER_PAYLOAD_MAX_SIZE;
mod kernel_cpu { mod kernel_cpu {
@ -69,6 +68,7 @@ enum KernelState {
SubkernelAwaitFinish { max_time: i64, id: u32 }, SubkernelAwaitFinish { max_time: i64, id: u32 },
DmaUploading { max_time: u64 }, DmaUploading { max_time: u64 },
DmaAwait { max_time: u64 }, DmaAwait { max_time: u64 },
SubkernelRetrievingException { destination: u8 },
} }
#[derive(Debug)] #[derive(Debug)]
@ -134,10 +134,13 @@ struct MessageManager {
struct Session { struct Session {
kernel_state: KernelState, kernel_state: KernelState,
log_buffer: String, log_buffer: String,
last_exception: Option<Sliceable>, last_exception: Option<Sliceable>, // exceptions raised locally
source: u8, // which destination requested running the kernel external_exception: Vec<u8>, // exceptions from sub-subkernels
// which destination requested running the kernel
source: u8,
messages: MessageManager, messages: MessageManager,
subkernels_finished: Vec<u32> // ids of subkernels finished // ids of subkernels finished (with exception)
subkernels_finished: Vec<(u32, Option<u8>)>,
} }
#[derive(Debug)] #[derive(Debug)]
@ -277,6 +280,7 @@ impl Session {
kernel_state: KernelState::Absent, kernel_state: KernelState::Absent,
log_buffer: String::new(), log_buffer: String::new(),
last_exception: None, last_exception: None,
external_exception: Vec::new(),
source: 0, source: 0,
messages: MessageManager::new(), messages: MessageManager::new(),
subkernels_finished: Vec::new() subkernels_finished: Vec::new()
@ -428,9 +432,9 @@ impl Manager {
} }
} }
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta { pub fn exception_get_slice(&mut self, data_slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> SliceMeta {
match self.session.last_exception.as_mut() { match self.session.last_exception.as_mut() {
Some(exception) => exception.get_slice_sat(data_slice), Some(exception) => exception.get_slice_master(data_slice),
None => SliceMeta { destination: 0, len: 0, status: PayloadStatus::FirstAndLast } None => SliceMeta { destination: 0, len: 0, status: PayloadStatus::FirstAndLast }
} }
} }
@ -517,7 +521,7 @@ impl Manager {
return; return;
} }
match self.process_external_messages() { match self.process_external_messages(router, routing_table, rank, destination) {
Ok(()) => (), Ok(()) => (),
Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages
Err(Error::KernelException(exception)) => { Err(Error::KernelException(exception)) => {
@ -549,20 +553,42 @@ impl Manager {
} }
} }
fn process_external_messages(&mut self) -> Result<(), Error> { fn check_finished_kernels(&mut self, id: u32, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) {
for (i, (status, exception_source)) in self.session.subkernels_finished.iter().enumerate() {
if *status == id {
if exception_source.is_none() {
kern_send(&kern::SubkernelAwaitFinishReply).unwrap();
self.session.kernel_state = KernelState::Running;
self.session.subkernels_finished.swap_remove(i);
} else {
let destination = exception_source.unwrap();
self.session.external_exception = Vec::new();
self.session.kernel_state = KernelState::SubkernelRetrievingException { destination: destination };
router.route(drtioaux::Packet::SubkernelExceptionRequest {
source: self_destination, destination: destination
}, &routing_table, rank, self_destination);
}
break;
}
}
}
fn process_external_messages(&mut self, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) -> Result<(), Error> {
match &self.session.kernel_state { match &self.session.kernel_state {
KernelState::MsgAwait { id, max_time, tags } => { KernelState::MsgAwait { id, max_time, tags } => {
if *max_time > 0 && clock::get_ms() > *max_time as u64 { if *max_time > 0 && clock::get_ms() > *max_time as u64 {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::Timeout, count: 0 })?; kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
self.session.kernel_state = KernelState::Running; self.session.kernel_state = KernelState::Running;
return Ok(()) return Ok(())
} }
if let Some(message) = self.session.messages.get_incoming(*id) { if let Some(message) = self.session.messages.get_incoming(*id) {
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::NoError, count: message.count })?; kern_send(&kern::SubkernelMsgRecvReply { count: message.count })?;
let tags = tags.clone(); let tags = tags.clone();
self.session.kernel_state = KernelState::Running; self.session.kernel_state = KernelState::Running;
pass_message_to_kernel(&message, &tags) pass_message_to_kernel(&message, &tags)
} else { } else {
let id = *id;
self.check_finished_kernels(id, router, routing_table, rank, self_destination);
Err(Error::AwaitingMessage) Err(Error::AwaitingMessage)
} }
}, },
@ -576,19 +602,11 @@ impl Manager {
}, },
KernelState::SubkernelAwaitFinish { max_time, id } => { KernelState::SubkernelAwaitFinish { max_time, id } => {
if *max_time > 0 && clock::get_ms() > *max_time as u64 { if *max_time > 0 && clock::get_ms() > *max_time as u64 {
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::Timeout })?; kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
self.session.kernel_state = KernelState::Running; self.session.kernel_state = KernelState::Running;
} else { } else {
let mut i = 0; let id = *id;
for status in &self.session.subkernels_finished { self.check_finished_kernels(id, router, routing_table, rank, self_destination);
if *status == *id {
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::NoError })?;
self.session.kernel_state = KernelState::Running;
self.session.subkernels_finished.swap_remove(i);
break;
}
i += 1;
}
} }
Ok(()) Ok(())
} }
@ -606,6 +624,9 @@ impl Manager {
} }
Ok(()) Ok(())
} }
KernelState::SubkernelRetrievingException { destination: _ } => {
Err(Error::AwaitingMessage)
}
_ => Ok(()) _ => Ok(())
} }
} }
@ -628,16 +649,30 @@ impl Manager {
} }
pub fn remote_subkernel_finished(&mut self, id: u32, with_exception: bool, exception_source: u8) { pub fn remote_subkernel_finished(&mut self, id: u32, with_exception: bool, exception_source: u8) {
if with_exception { let exception_src = if with_exception { Some(exception_source) } else { None };
unsafe { kernel_cpu::stop() } self.session.subkernels_finished.push((id, exception_src));
self.session.kernel_state = KernelState::Absent; }
unsafe { self.cache.unborrow() }
self.last_finished = Some(SubkernelFinished { pub fn received_exception(&mut self, exception_data: &[u8], last: bool, router: &mut Router, routing_table: &RoutingTable,
source: self.session.source, id: self.current_id, rank: u8, self_destination: u8) {
with_exception: true, exception_source: exception_source if let KernelState::SubkernelRetrievingException { destination } = self.session.kernel_state {
}) self.session.external_exception.extend_from_slice(exception_data);
if last {
if let Ok(exception) = read_exception(&self.session.external_exception) {
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Exception(exception))).unwrap();
} else {
kern_send(
&kern::SubkernelError(kern::SubkernelStatus::OtherError)).unwrap();
}
self.session.kernel_state = KernelState::Running;
} else {
/* fetch another slice */
router.route(drtioaux::Packet::SubkernelExceptionRequest {
source: self_destination, destination: destination
}, routing_table, rank, self_destination);
}
} else { } else {
self.session.subkernels_finished.push(id); warn!("Received unsolicited exception data");
} }
} }
@ -655,6 +690,7 @@ impl Manager {
(_, KernelState::DmaAwait { .. }) | (_, KernelState::DmaAwait { .. }) |
(_, KernelState::MsgSending) | (_, KernelState::MsgSending) |
(_, KernelState::SubkernelAwaitLoad) | (_, KernelState::SubkernelAwaitLoad) |
(_, KernelState::SubkernelRetrievingException { .. }) |
(_, KernelState::SubkernelAwaitFinish { .. }) => { (_, KernelState::SubkernelAwaitFinish { .. }) => {
// We're standing by; ignore the message. // We're standing by; ignore the message.
return Ok(None) return Ok(None)
@ -822,6 +858,48 @@ impl Drop for Manager {
} }
} }
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
let len = reader.read_u32()? as usize;
if len == usize::MAX {
let data = reader.read_u32()?;
Ok(unsafe { CSlice::new(data as *const u8, len) })
} else {
let pos = reader.position();
let slice = unsafe {
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
CSlice::new(ptr, len)
};
reader.set_position(pos + len);
Ok(slice)
}
}
fn read_exception(buffer: &[u8]) -> Result<eh_artiq::Exception, Error>
{
let mut reader = Cursor::new(buffer);
let mut byte = reader.read_u8()?;
// to sync
while byte != 0x5a {
byte = reader.read_u8()?;
}
// skip sync bytes, 0x09 indicates exception
while byte != 0x09 {
byte = reader.read_u8()?;
}
let _len = reader.read_u32()?;
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
Ok(eh_artiq::Exception {
id: reader.read_u32()?,
message: read_exception_string(&mut reader)?,
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
file: read_exception_string(&mut reader)?,
line: reader.read_u32()?,
column: reader.read_u32()?,
function: read_exception_string(&mut reader)?
})
}
fn kern_recv<R, F>(f: F) -> Result<R, Error> fn kern_recv<R, F>(f: F) -> Result<R, Error>
where F: FnOnce(&kern::Message) -> Result<R, Error> { where F: FnOnce(&kern::Message) -> Result<R, Error> {
if mailbox::receive() == 0 { if mailbox::receive() == 0 {
View File
@ -455,15 +455,21 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
kernelmgr.remote_subkernel_finished(id, with_exception, exception_src); kernelmgr.remote_subkernel_finished(id, with_exception, exception_src);
Ok(()) Ok(())
} }
drtioaux::Packet::SubkernelExceptionRequest { destination: _destination } => { drtioaux::Packet::SubkernelExceptionRequest { source, destination: _destination } => {
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet); forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE]; let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
let meta = kernelmgr.exception_get_slice(&mut data_slice); let meta = kernelmgr.exception_get_slice(&mut data_slice);
drtioaux::send(0, &drtioaux::Packet::SubkernelException { router.send(drtioaux::Packet::SubkernelException {
destination: source,
last: meta.status.is_last(), last: meta.status.is_last(),
length: meta.len, length: meta.len,
data: data_slice, data: data_slice,
}) }, _routing_table, *rank, *self_destination)
}
drtioaux::Packet::SubkernelException { destination: _destination, last, length, data } => {
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
kernelmgr.received_exception(&data[..length as usize], last, router, _routing_table, *rank, *self_destination);
Ok(())
} }
drtioaux::Packet::SubkernelMessage { source, destination: _destination, id, status, length, data } => { drtioaux::Packet::SubkernelMessage { source, destination: _destination, id, status, length, data } => {
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet); forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
View File
@ -4,7 +4,6 @@ use board_artiq::{drtioaux, drtio_routing};
use board_misoc::csr; use board_misoc::csr;
use core::cmp::min; use core::cmp::min;
use proto_artiq::drtioaux_proto::PayloadStatus; use proto_artiq::drtioaux_proto::PayloadStatus;
use SAT_PAYLOAD_MAX_SIZE;
use MASTER_PAYLOAD_MAX_SIZE; use MASTER_PAYLOAD_MAX_SIZE;
/* represents data that has to be sent with the aux protocol */ /* represents data that has to be sent with the aux protocol */
@ -57,7 +56,6 @@ impl Sliceable {
self.data.extend(data); self.data.extend(data);
} }
get_slice_fn!(get_slice_sat, SAT_PAYLOAD_MAX_SIZE);
get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE); get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE);
} }
View File
@ -164,8 +164,10 @@ class Client:
if reply[0] != "OK": if reply[0] != "OK":
return reply[0], None return reply[0], None
length = int(reply[1]) length = int(reply[1])
json_str = (await self.reader.read(length)).decode("ascii") json_bytes = await self.reader.read(length)
return "OK", json_str if length != len(json_bytes):
raise ValueError(f"Received data length ({len(json_bytes)}) doesn't match expected length ({length})")
return "OK", json_bytes
def get_argparser(): def get_argparser():
@ -266,7 +268,7 @@ async def main_async():
variant = args.variant variant = args.variant
else: else:
variant = await client.get_single_variant(error_msg="User can get JSON of more than 1 variant - need to specify") variant = await client.get_single_variant(error_msg="User can get JSON of more than 1 variant - need to specify")
result, json_str = await client.get_json(variant) result, json_bytes = await client.get_json(variant)
if result != "OK": if result != "OK":
if result == "UNAUTHORIZED": if result == "UNAUTHORIZED":
print(f"You are not authorized to get JSON of variant {variant}. Your firmware subscription may have expired. Contact helpdesk\x40m-labs.hk.") print(f"You are not authorized to get JSON of variant {variant}. Your firmware subscription may have expired. Contact helpdesk\x40m-labs.hk.")
@ -275,10 +277,10 @@ async def main_async():
if not args.force and os.path.exists(args.out): if not args.force and os.path.exists(args.out):
print(f"File {args.out} already exists. You can use -f to overwrite the existing file.") print(f"File {args.out} already exists. You can use -f to overwrite the existing file.")
sys.exit(1) sys.exit(1)
with open(args.out, "w") as f: with open(args.out, "wb") as f:
f.write(json_str) f.write(json_bytes)
else: else:
print(json_str) sys.stdout.buffer.write(json_bytes)
else: else:
raise ValueError raise ValueError
finally: finally:
View File
@ -140,7 +140,12 @@ def main():
args.db_file = os.path.join(get_user_config_dir(), "artiq_browser.pyon") args.db_file = os.path.join(get_user_config_dir(), "artiq_browser.pyon")
widget_log_handler = log.init_log(args, "browser") widget_log_handler = log.init_log(args, "browser")
app = QtWidgets.QApplication(["ARTIQ Browser"]) forced_platform = []
if (QtGui.QGuiApplication.platformName() == "wayland" and
not os.getenv("QT_QPA_PLATFORM")):
# force XCB instead of Wayland due to applets not embedding
forced_platform = ["-platform", "xcb"]
app = QtWidgets.QApplication(["ARTIQ Browser"] + forced_platform)
loop = QEventLoop(app) loop = QEventLoop(app)
asyncio.set_event_loop(loop) asyncio.set_event_loop(loop)
atexit.register(loop.close) atexit.register(loop.close)
View File
@ -1,10 +1,4 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
"""
Client to send commands to :mod:`artiq_master` and display results locally.
The client can perform actions such as accessing/setting datasets,
scanning devices, scheduling experiments, and looking for experiments/devices.
"""
import argparse import argparse
import logging import logging
View File
@ -133,7 +133,12 @@ def main():
server=args.server.replace(":", "."), server=args.server.replace(":", "."),
port=args.port_notify)) port=args.port_notify))
app = QtWidgets.QApplication(["ARTIQ Dashboard"]) forced_platform = []
if (QtGui.QGuiApplication.platformName() == "wayland" and
not os.getenv("QT_QPA_PLATFORM")):
# force XCB instead of Wayland due to applets not embedding
forced_platform = ["-platform", "xcb"]
app = QtWidgets.QApplication(["ARTIQ Dashboard"] + forced_platform)
loop = QEventLoop(app) loop = QEventLoop(app)
asyncio.set_event_loop(loop) asyncio.set_event_loop(loop)
atexit.register(loop.close) atexit.register(loop.close)
@ -256,7 +261,7 @@ def main():
right_docks = [ right_docks = [
d_explorer, d_shortcuts, d_explorer, d_shortcuts,
d_datasets, d_applets, d_datasets, d_applets,
# d_waveform, d_interactive_args d_waveform, d_interactive_args
] ]
main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, right_docks[0]) main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, right_docks[0])
for d1, d2 in zip(right_docks, right_docks[1:]): for d1, d2 in zip(right_docks, right_docks[1:]):
View File
@ -11,7 +11,7 @@ def get_argparser():
description="ARTIQ session manager. " description="ARTIQ session manager. "
"Automatically runs the master, dashboard and " "Automatically runs the master, dashboard and "
"local controller manager on the current machine. " "local controller manager on the current machine. "
"The latter requires the artiq-comtools package to " "The latter requires the ``artiq-comtools`` package to "
"be installed.") "be installed.")
parser.add_argument("--version", action="version", parser.add_argument("--version", action="version",
version="ARTIQ v{}".format(artiq_version), version="ARTIQ v{}".format(artiq_version),
View File
@ -24,7 +24,7 @@ class VDragDropSplitter(QtWidgets.QSplitter):
e.accept() e.accept()
def dragMoveEvent(self, e): def dragMoveEvent(self, e):
pos = e.pos() pos = e.position()
src = e.source() src = e.source()
src_i = self.indexOf(src) src_i = self.indexOf(src)
self.setRubberBand(self.height()) self.setRubberBand(self.height())
@ -48,7 +48,7 @@ class VDragDropSplitter(QtWidgets.QSplitter):
def dropEvent(self, e): def dropEvent(self, e):
self.setRubberBand(-1) self.setRubberBand(-1)
pos = e.pos() pos = e.position()
src = e.source() src = e.source()
src_i = self.indexOf(src) src_i = self.indexOf(src)
for n in range(self.count()): for n in range(self.count()):
@ -81,7 +81,7 @@ class VDragScrollArea(QtWidgets.QScrollArea):
if e.type() == QtCore.QEvent.Type.DragMove: if e.type() == QtCore.QEvent.Type.DragMove:
val = self.verticalScrollBar().value() val = self.verticalScrollBar().value()
height = self.viewport().height() height = self.viewport().height()
y = e.pos().y() y = e.position().y()
self._direction = 0 self._direction = 0
if y < val + self._margin: if y < val + self._margin:
self._direction = -1 self._direction = -1
@ -119,7 +119,7 @@ class DragDropFlowLayoutWidget(QtWidgets.QWidget):
def mousePressEvent(self, event): def mousePressEvent(self, event):
if event.buttons() == QtCore.Qt.MouseButton.LeftButton \ if event.buttons() == QtCore.Qt.MouseButton.LeftButton \
and event.modifiers() == QtCore.Qt.KeyboardModifier.ShiftModifier: and event.modifiers() == QtCore.Qt.KeyboardModifier.ShiftModifier:
index = self._get_index(event.pos()) index = self._get_index(event.position())
if index == -1: if index == -1:
return return
drag = QtGui.QDrag(self) drag = QtGui.QDrag(self)
@ -136,7 +136,7 @@ class DragDropFlowLayoutWidget(QtWidgets.QWidget):
event.accept() event.accept()
def dropEvent(self, event): def dropEvent(self, event):
index = self._get_index(event.pos()) index = self._get_index(event.position())
source_layout = event.source() source_layout = event.source()
source_index = int(bytes(event.mimeData().data("index")).decode()) source_index = int(bytes(event.mimeData().data("index")).decode())
if source_layout == self: if source_layout == self:
View File
@ -23,13 +23,13 @@ class EntryTreeWidget(QtWidgets.QTreeWidget):
set_resize_mode = self.header().setSectionResizeMode set_resize_mode = self.header().setSectionResizeMode
else: else:
set_resize_mode = self.header().setResizeMode set_resize_mode = self.header().setResizeMode
set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents) set_resize_mode(0, QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
set_resize_mode(1, QtWidgets.QHeaderView.Stretch) set_resize_mode(1, QtWidgets.QHeaderView.ResizeMode.Stretch)
set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents) set_resize_mode(2, QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
self.header().setVisible(False) self.header().setVisible(False)
self.setSelectionMode(self.NoSelection) self.setSelectionMode(self.SelectionMode.NoSelection)
self.setHorizontalScrollMode(self.ScrollPerPixel) self.setHorizontalScrollMode(self.ScrollMode.ScrollPerPixel)
self.setVerticalScrollMode(self.ScrollPerPixel) self.setVerticalScrollMode(self.ScrollMode.ScrollPerPixel)
self.setStyleSheet("QTreeWidget {background: " + self.setStyleSheet("QTreeWidget {background: " +
self.palette().midlight().color().name() + " ;}") self.palette().midlight().color().name() + " ;}")
@ -109,7 +109,6 @@ class EntryTreeWidget(QtWidgets.QTreeWidget):
group = QtWidgets.QTreeWidgetItem([key]) group = QtWidgets.QTreeWidgetItem([key])
for col in range(3): for col in range(3):
group.setBackground(col, self.palette().mid()) group.setBackground(col, self.palette().mid())
group.setForeground(col, self.palette().brightText())
font = group.font(col) font = group.font(col)
font.setBold(True) font.setBold(True)
group.setFont(col, font) group.setFont(col, font)
View File
@ -1,6 +1,7 @@
import asyncio import asyncio
from collections import OrderedDict from collections import OrderedDict
import logging import logging
import shutil
from sipyco.asyncio_tools import TaskObject from sipyco.asyncio_tools import TaskObject
from sipyco import pyon from sipyco import pyon
@ -45,6 +46,8 @@ class StateManager(TaskObject):
logger.info("State database '%s' not found, using defaults", logger.info("State database '%s' not found, using defaults",
self.filename) self.filename)
return return
else:
shutil.copy2(self.filename, self.filename + ".backup")
# The state of one object may depend on the state of another, # The state of one object may depend on the state of another,
# e.g. the display state may create docks that are referenced in # e.g. the display state may create docks that are referenced in
# the area state. # the area state.
View File
@ -0,0 +1,59 @@
import unittest
import artiq.coredevice.exceptions as exceptions
from artiq.experiment import *
from artiq.test.hardware_testbench import ExperimentCase
from artiq.compiler.embedding import EmbeddingMap
from artiq.coredevice.core import test_exception_id_sync
"""
Test sync in exceptions raised between host and kernel
Check `artiq.compiler.embedding` and `artiq::firmware::ksupport::eh_artiq`
Considers the following two cases:
1) Exception raised on kernel and passed to host
2) Exception raised in a host function called from kernel
Ensures same exception is raised on both kernel and host in either case
"""
exception_names = EmbeddingMap().str_reverse_map
class _TestExceptionSync(EnvExperiment):
def build(self):
self.setattr_device("core")
@rpc
def _raise_exception_host(self, id):
exn = exception_names[id].split('.')[-1].split(':')[-1]
exn = getattr(exceptions, exn)
raise exn
@kernel
def raise_exception_host(self, id):
self._raise_exception_host(id)
@kernel
def raise_exception_kernel(self, id):
test_exception_id_sync(id)
class ExceptionTest(ExperimentCase):
def test_raise_exceptions_kernel(self):
exp = self.create(_TestExceptionSync)
for id, name in list(exception_names.items())[::-1]:
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_kernel(id)
self.assertEqual(str(ctx.exception).strip("'"), name)
def test_raise_exceptions_host(self):
exp = self.create(_TestExceptionSync)
for id, name in exception_names.items():
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_host(id)
View File
@ -3,7 +3,6 @@ import importlib.util
import importlib.machinery import importlib.machinery
import inspect import inspect
import logging import logging
import os
import pathlib import pathlib
import string import string
import sys import sys
@ -11,9 +10,9 @@ import sys
import numpy as np import numpy as np
from sipyco import pyon from sipyco import pyon
from platformdirs import user_config_dir
from artiq import __version__ as artiq_version from artiq import __version__ as artiq_version
from artiq.appdirs import user_config_dir
from artiq.language.environment import is_public_experiment from artiq.language.environment import is_public_experiment
from artiq.language import units from artiq.language import units
@ -198,6 +197,4 @@ def get_windows_drives():
def get_user_config_dir(): def get_user_config_dir():
major = artiq_version.split(".")[0] major = artiq_version.split(".")[0]
dir = user_config_dir("artiq", "m-labs", major) return user_config_dir("artiq", "m-labs", major, ensure_exists=True)
os.makedirs(dir, exist_ok=True)
return dir
View File
@ -75,7 +75,7 @@ Common system description changes
To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include: To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include:
- ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`) - ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`)
- ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources, see :ref:`sequence-errors` ) - ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources, see :ref:`sequence-errors`)
- various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later). - various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later).
Nix development environment Nix development environment
@ -84,21 +84,23 @@ Nix development environment
* Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity. * Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity.
* Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options. * Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options.
* Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Checkout the most recent release (``git checkout release-[number]``) for a stable version. * Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Checkout the most recent release (``git checkout release-[number]``) for a stable version.
* If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (once again text-search ``/opt/Xilinx/Vivado``). * If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (note that the edits must be made in the main ARTIQ flake, even if you are working with Zynq, see also tip below).
* Run ``nix develop`` at the root of the repository, where ``flake.nix`` is. * Run ``nix develop`` at the root of the repository, where ``flake.nix`` is.
* Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`. * Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`.
.. note:: .. note::
You can also target legacy versions of ARTIQ; use Git to checkout older release branches. Note however that older releases of ARTIQ required different processes for developing and building, which you are broadly more likely to figure out by (also) consulting corresponding older versions of the manual. You can also target legacy versions of ARTIQ; use Git to checkout older release branches. Note however that older releases of ARTIQ required different processes for developing and building, which you are broadly more likely to figure out by (also) consulting the corresponding older versions of the manual.
Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- ``artiq_run``, ``artiq_master``, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by re-running ``nix develop`` again in the same location. Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- :mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by re-running ``nix develop`` again in the same location.
.. tip:: .. tip::
If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ. If you intend to modify the gateware housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: :: If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ.
If you intend to modify the source housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: ::
$ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH $ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH
Note that this only applies for incremental builds. If you want to use ``nix build``, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``. Note that this only applies for incremental builds. If you want to use ``nix build``, or make changes to the dependencies, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``.
Building only standard binaries Building only standard binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -181,14 +183,14 @@ or you can use the more direct version: ::
to see the list of suitable build targets directly. to see the list of suitable build targets directly.
Any of these commands should produce a directory ``result`` which contains a file ``boot.bin``. As described in :ref:`writing-flash`, if your core device is currently accessible over the network, it can be flashed with ``artiq_coremgmt``. If it is not connected to the network: Any of these commands should produce a directory ``result`` which contains a file ``boot.bin``. As described in :ref:`writing-flash`, if your core device is currently accessible over the network, it can be flashed with :mod:`~artiq.frontend.artiq_coremgmt`. If it is not connected to the network:
1. Power off the board, extract the SD card and load ``boot.bin`` onto it manually. 1. Power off the board, extract the SD card and load ``boot.bin`` onto it manually.
2. Insert the SD card back into the board. 2. Insert the SD card back into the board.
3. Ensure that the DIP switches (labeled BOOT MODE) are set correctly, to SD. 3. Ensure that the DIP switches (labeled BOOT MODE) are set correctly, to SD.
4. Power the board back on. 4. Power the board back on.
Optionally, the SD card may also be loaded at the same time with an additional file ``config.txt``, which can contain preset configuration values in the format ``key=value``, one per line. The keys are those used with ``artiq_coremgmt``. This allows e.g. presetting an IP address and any other configuration information. Optionally, the SD card may also be loaded at the same time with an additional file ``config.txt``, which can contain preset configuration values in the format ``key=value``, one per line. The keys are those used with :mod:`~artiq.frontend.artiq_coremgmt`. This allows e.g. presetting an IP address and any other configuration information.
After a successful boot, the "FPGA DONE" light should be illuminated and the board should respond to ping when plugged into Ethernet. After a successful boot, the "FPGA DONE" light should be illuminated and the board should respond to ping when plugged into Ethernet.
View File
@ -1,11 +1,11 @@
Compiler Compiler
======== ========
The ARTIQ compiler transforms the Python code of the kernels into machine code executable on the core device. For limited purposes (normally, obtaining executable binaries of idle and startup kernels), it can be accessed through ``artiq_compile``. Otherwise it is invoked automatically whenever a function with an applicable decorator is called. The ARTIQ compiler transforms the Python code of the kernels into machine code executable on the core device. For limited purposes (normally, obtaining executable binaries of idle and startup kernels), it can be accessed through :mod:`~artiq.frontend.artiq_compile`. Otherwise it is invoked automatically whenever a function with an applicable decorator is called.
ARTIQ kernel code accepts *nearly,* but not quite, a strict subset of Python 3. The necessities of real-time operation impose a harsher set of limitations; as a result, many Python features are necessarily omitted, and there are some specific discrepancies (see also :ref:`compiler-pitfalls`). ARTIQ kernel code accepts *nearly,* but not quite, a strict subset of Python 3. The necessities of real-time operation impose a harsher set of limitations; as a result, many Python features are necessarily omitted, and there are some specific discrepancies (see also :ref:`compiler-pitfalls`).
In general, ARTIQ Python supports only statically typed variables; it implements no heap allocation or garbage collection systems, essentially disallowing any heap-based data structures (although lists and arrays remain available in a stack-based form); and it cannot use runtime dispatch, meaning that, for example, all elements of an array must be of the same type. Nonetheless, technical details aside, a basic knowledge of Python is entirely sufficient to write useful and coherent ARTIQ experiments. In general, ARTIQ Python supports only statically typed variables; it implements no heap allocation or garbage collection systems, essentially disallowing any heap-based data structures (although lists and arrays remain available in a stack-based form); and it cannot use runtime dispatch, meaning that, for example, all elements of an array must be of the same type. Nonetheless, technical details aside, a basic knowledge of Python is entirely sufficient to write ARTIQ experiments.
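As a brief illustration of these restrictions (a minimal sketch, independent of any particular hardware), variables keep a single inferred type and container elements must be homogeneous: ::

    from artiq.experiment import *

    class TypingExample(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            n = 0                  # inferred as a 32-bit integer for the whole kernel
            x = 1.5                # inferred as a float; assigning an int to x later is rejected
            data = [0.0] * 8       # fixed-size list; all elements share one type
            for i in range(8):
                data[i] = x * float(n + i)   # conversions are explicit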
.. note:: .. note::
The ARTIQ compiler is now in its second iteration. The third generation, known as NAC3, is `currently in development <https://git.m-labs.hk/M-Labs/nac3>`_, and available for pre-alpha experimental use. NAC3 represents a major overhaul of ARTIQ compilation, and will feature much faster compilation speeds, a greatly improved type system, and more predictable and transparent operation. It is compatible with ARTIQ firmware starting at ARTIQ-7. Instructions for installation and basic usage differences can also be found `on the M-Labs Forum <https://forum.m-labs.hk/d/392-nac3-new-artiq-compiler-3-prealpha-release>`_. While NAC3 is a work in progress and many important features remain unimplemented, installation and feedback is welcomed. The ARTIQ compiler is now in its second iteration. The third generation, known as NAC3, is `currently in development <https://git.m-labs.hk/M-Labs/nac3>`_, and available for pre-alpha experimental use. NAC3 represents a major overhaul of ARTIQ compilation, and will feature much faster compilation speeds, a greatly improved type system, and more predictable and transparent operation. It is compatible with ARTIQ firmware starting at ARTIQ-7. Instructions for installation and basic usage differences can also be found `on the M-Labs Forum <https://forum.m-labs.hk/d/392-nac3-new-artiq-compiler-3-prealpha-release>`_. While NAC3 is a work in progress and many important features remain unimplemented, installation and feedback is welcomed.
@ -20,7 +20,7 @@ Functions and decorators
The ARTIQ compiler recognizes several specialized decorators, which determine the way the decorated function will be compiled and handled. The ARTIQ compiler recognizes several specialized decorators, which determine the way the decorated function will be compiled and handled.
``@kernel`` (see :meth:`~artiq.language.core.kernel`) designates kernel functions, which will be compiled for and wholly executed on the core device; the basic setup and background for kernels is detailed on the :doc:`getting_started_core` page. ``@subkernel`` (:meth:`~artiq.language.core.subkernel`) designates subkernel functions, which are largely similar to kernels except that they are executed on satellite devices in a DRTIO setting, with some associated limitations; they are described in more detail on the :doc:`using_drtio_subkernels` page. ``@kernel`` (see :meth:`~artiq.language.core.kernel`) designates kernel functions, which will be compiled for and executed on the core device; the basic setup and background for kernels is detailed on the :doc:`getting_started_core` page. ``@subkernel`` (:meth:`~artiq.language.core.subkernel`) designates subkernel functions, which are largely similar to kernels except that they are executed on satellite devices in a DRTIO setting, with some associated limitations; they are described in more detail on the :doc:`using_drtio_subkernels` page.
``@rpc`` (:meth:`~artiq.language.core.rpc`) designates functions to be executed on the host machine, which are compiled and run in regular Python, outside of the core device's real-time limitations. Notably, functions without decorators are assumed to be host-bound by default, and treated identically to an explicitly marked ``@rpc``. As a result, the explicit decorator is only really necessary when specifying additional flags (for example, ``flags={"async"}``, see below). ``@rpc`` (:meth:`~artiq.language.core.rpc`) designates functions to be executed on the host machine, which are compiled and run in regular Python, outside of the core device's real-time limitations. Notably, functions without decorators are assumed to be host-bound by default, and treated identically to an explicitly marked ``@rpc``. As a result, the explicit decorator is only really necessary when specifying additional flags (for example, ``flags={"async"}``, see below).
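For instance (a minimal sketch assuming only the core device), the decorators combine as follows: ::

    from artiq.experiment import *

    class DecoratorDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        def report(self, x):
            # undecorated: runs on the host, treated like an explicit @rpc
            print("result:", x)

        @rpc(flags={"async"})
        def log_async(self, x):
            # asynchronous RPC: the kernel does not wait for completion
            print("logged:", x)

        @kernel
        def run(self):
            self.core.reset()
            self.report(42)
            self.log_async(43)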
@ -132,10 +132,10 @@ ARTIQ makes various useful built-in and mathematical functions from Python, NumP
* Functions * Functions
+ * `Python built-ins <https://docs.python.org/3/library/functions.html>`_ + * `Python built-ins <https://docs.python.org/3/library/functions.html>`_
* - ``len()``, ``round()``, ``abs()``, ``min()``, ``max()`` * - ``len()``, ``round()``, ``abs()``, ``min()``, ``max()``
- ``print()`` (for specifics see warning below) - ``print()`` (with caveats; see below)
- all basic type conversions (``int()``, ``float()`` etc.) - all basic type conversions (``int()``, ``float()`` etc.)
+ * `NumPy mathematic utilities <https://numpy.org/doc/stable/reference/routines.math.html>`_ + * `NumPy mathematic utilities <https://numpy.org/doc/stable/reference/routines.math.html>`_
* - ``sqrt()``, ``cbrt``` * - ``sqrt()``, ``cbrt()``
- ``fabs()``, ``fmax()``, ``fmin()`` - ``fabs()``, ``fmax()``, ``fmin()``
- ``floor()``, ``ceil()``, ``trunc()``, ``rint()`` - ``floor()``, ``ceil()``, ``trunc()``, ``rint()``
+ * `NumPy exponents and logarithms <https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms>`_ + * `NumPy exponents and logarithms <https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms>`_
@ -154,10 +154,14 @@ ARTIQ makes various useful built-in and mathematical functions from Python, NumP
- ``gamma()``, ``gammaln()`` - ``gamma()``, ``gammaln()``
- ``j0()``, ``j1()``, ``y0()``, ``y1()`` - ``j0()``, ``j1()``, ``y0()``, ``y1()``
Basic NumPy array handling (``np.array()``, ``numpy.transpose()``, ``numpy.full``, ``@``, element-wise operation, etc.) is also available. NumPy functions are implicitly broadcast when applied to arrays. Basic NumPy array handling (``np.array()``, ``numpy.transpose()``, ``numpy.full()``, ``@``, element-wise operation, etc.) is also available. NumPy functions are implicitly broadcast when applied to arrays.
.. warning:: Print and logging functions
A kernel ``print`` call normally specifically prints to the host machine, either the terminal of ``artiq_run`` or the dashboard log when using ``artiq_dashboard``. This makes it effectively an RPC, with some of the corresponding limitations. In subkernels and whenever compiled through ``artiq_compile``, where RPCs are not supported, it is instead considered a print to the local log (UART only in the case of satellites, UART and core log for master/standalone devices). ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ARTIQ offers two native built-in logging functions: ``rtio_log()``, which prints to the :ref:`RTIO log <rtio-analyzer>`, as retrieved by :mod:`~artiq.frontend.artiq_coreanalyzer`, and ``core_log()``, which prints directly to the core log, regardless of context or network connection status. Both exist for debugging purposes, especially in contexts where a ``print()`` RPC is not suitable, such as in idle/startup kernels or when debugging delicate RTIO slack issues which may be significantly affected by the overhead of ``print()``.
``print()`` itself is in practice an RPC to the regular host Python ``print()``, i.e. with output either in the terminal of :mod:`~artiq.frontend.artiq_run` or in the client logs when using :mod:`~artiq.frontend.artiq_dashboard` or :mod:`~artiq.frontend.artiq_client`. This means on one hand that it should not be used in idle kernels, startup kernels, or subkernels, and on the other hand that it suffers from some of the timing limitations of any other RPC, especially if the RPC queue is full. Accordingly, it is important to be aware that the timing of ``print()`` outputs can't reliably be used to debug timing in kernels, and especially not the timing of other RPCs.
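A minimal sketch of the distinction (channel and message strings are illustrative, signatures as described above): ::

    from artiq.experiment import *

    class LoggingDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            self.core.reset()
            delay(1*ms)
            rtio_log("marker", "start")   # recorded in the RTIO log, retrieved with artiq_coreanalyzer
            core_log("kernel running")    # written directly to the core log, no RPC involved
            print("done")                 # RPC to host print(), subject to the caveats above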
.. _compiler-pitfalls: .. _compiler-pitfalls:
View File
@ -23,21 +23,14 @@ import sphinx_rtd_theme
# we cannot use autodoc_mock_imports (does not help with argparse) # we cannot use autodoc_mock_imports (does not help with argparse)
mock_modules = ["artiq.gui.waitingspinnerwidget", mock_modules = ["artiq.gui.waitingspinnerwidget",
"artiq.gui.flowlayout", "artiq.gui.flowlayout",
"artiq.gui.state",
"artiq.gui.log",
"artiq.gui.models", "artiq.gui.models",
"artiq.compiler.module", "artiq.compiler.module",
"artiq.compiler.embedding", "artiq.compiler.embedding",
"artiq.dashboard.waveform", "artiq.dashboard.waveform",
"artiq.dashboard.interactive_args", "artiq.coredevice.jsondesc",
"qasync", "pyqtgraph", "matplotlib", "lmdb",
"numpy", "dateutil", "dateutil.parser", "prettytable", "PyQt6",
"h5py", "serial", "scipy", "scipy.interpolate",
"nac3artiq", "nac3artiq",
"sipyco", "sipyco.pc_rpc", "sipyco.sync_struct", "qasync", "lmdb", "dateutil.parser", "prettytable", "PyQt6",
"sipyco.asyncio_tools", "sipyco.logging_tools", "h5py", "llvmlite", "pythonparser", "tqdm", "jsonschema"]
"sipyco.broadcast", "sipyco.packed_exceptions",
"sipyco.keepalive", "sipyco.pipe_ipc"]
for module in mock_modules: for module in mock_modules:
sys.modules[module] = Mock() sys.modules[module] = Mock()
@ -72,7 +65,8 @@ extensions = [
'sphinx.ext.napoleon', 'sphinx.ext.napoleon',
'sphinxarg.ext', 'sphinxarg.ext',
'sphinxcontrib.wavedrom', # see also below for config 'sphinxcontrib.wavedrom', # see also below for config
"sphinxcontrib.jquery", 'sphinxcontrib.jquery',
'sphinxcontrib.tikz' # see also below for config
] ]
mathjax_path = "https://m-labs.hk/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML.js" mathjax_path = "https://m-labs.hk/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML.js"
@ -144,15 +138,15 @@ pygments_style = 'sphinx'
nitpicky = True nitpicky = True
# (type, target) regex tuples to ignore when generating warnings in 'nitpicky' mode # (type, target) regex tuples to ignore when generating warnings in 'nitpicky' mode
# i.e. objects that are not documented in this manual and do not need to be
nitpick_ignore_regex = [ nitpick_ignore_regex = [
(r'py:.*', r'numpy..*'), (r'py:.*', r'numpy..*'),
(r'py:.*', r'sipyco..*'), (r'py:.*', r'sipyco..*'),
('py:const', r'.*'), # no constants are documented anyway ('py:const', r'.*'), # no constants are documented anyway
('py.attr', r'.*'), # nor attributes ('py.attr', r'.*'), # nor attributes
(r'py:.*', r'artiq.gateware.*'), (r'py:.*', r'artiq.gateware.*'),
('py:mod', r'artiq.frontend.*'),
('py:mod', r'artiq.test.*'), ('py:mod', r'artiq.test.*'),
('py:mod', 'artiq.experiment'), ('py:mod', r'artiq.applets.*'),
('py:class', 'dac34H84'), ('py:class', 'dac34H84'),
('py:class', 'trf372017'), ('py:class', 'trf372017'),
('py:class', r'list(.*)'), ('py:class', r'list(.*)'),
@ -328,3 +322,10 @@ texinfo_documents = [
offline_skin_js_path = '_static/default.js' offline_skin_js_path = '_static/default.js'
offline_wavedrom_js_path = '_static/WaveDrom.js' offline_wavedrom_js_path = '_static/WaveDrom.js'
render_using_wavedrompy = True render_using_wavedrompy = True
# -- Options for sphinxcontrib-tikz ---------------------------------------
# tikz_proc_suite = pdf2svg
# tikz_transparent = True
# these are the defaults
tikz_tikzlibraries = 'positioning, shapes, arrows.meta'
View File
@ -24,7 +24,7 @@ If ping fails, check that the Ethernet LED is ON; on Kasli, it is the LED next t
Core management tool Core management tool
^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
The tool used to configure the core device is the command-line utility ``artiq_coremgmt``. In order for it to connect to your core device, it is necessary to supply it somehow with the correct IP address for your core device. This can be done directly through use of the ``-D`` option, for example in: :: The tool used to configure the core device is the command-line utility :mod:`~artiq.frontend.artiq_coremgmt`. In order for it to connect to your core device, it is necessary to supply it somehow with the correct IP address for your core device. This can be done directly through use of the ``-D`` option, for example in: ::
$ artiq_coremgmt -D <IP_address> log $ artiq_coremgmt -D <IP_address> log
@ -33,7 +33,7 @@ The tool used to configure the core device is the command-line utility ``artiq_c
Normally, however, the core device IP is supplied through the *device database* for your system, which comes in the form of a Python script called ``device_db.py`` (see also :ref:`device-db`). If you purchased a system from M-Labs, the ``device_db.py`` for your system will have been provided for you, either on the USB stick, inside ``~/artiq`` on your NUC, or sent by email. Normally, however, the core device IP is supplied through the *device database* for your system, which comes in the form of a Python script called ``device_db.py`` (see also :ref:`device-db`). If you purchased a system from M-Labs, the ``device_db.py`` for your system will have been provided for you, either on the USB stick, inside ``~/artiq`` on your NUC, or sent by email.
Make sure the field ``core_addr`` at the top of the file is set to your core device's correct IP address, and always execute ``artiq_coremgmt`` from the same directory the device database is placed in. Make sure the field ``core_addr`` at the top of the file is set to your core device's correct IP address, and always execute :mod:`~artiq.frontend.artiq_coremgmt` from the same directory the device database is placed in.
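For reference, the relevant excerpt of a ``device_db.py`` typically looks like the following (the address and ``ref_period`` shown are placeholders; use the values generated for your system): ::

    core_addr = "192.168.1.75"

    device_db = {
        "core": {
            "type": "local",
            "module": "artiq.coredevice.core",
            "class": "Core",
            "arguments": {"host": core_addr, "ref_period": 1e-9}
        },
        # ... peripheral entries generated for your system follow ...
    }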
Once you can reach your core device, the IP can be changed at any time by running: :: Once you can reach your core device, the IP can be changed at any time by running: ::
@ -63,34 +63,36 @@ For Kasli or KC705:
$ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx] $ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]
$ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start $ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, default gateway and IPv4 prefix length may also be omitted. On any board, once a device can be reached by ``artiq_coremgmt``, these values can be set and edited at any time, following the procedure for IP above. On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, default gateway and IPv4 prefix length may also be omitted. On any board, once a device can be reached by :mod:`~artiq.frontend.artiq_coremgmt`, these values can be set and edited at any time, following the procedure for IP above.
Regarding IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (potentially unrelated) IPv6 address defined by using the ``ip6`` configuration key. Regarding IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (potentially unrelated) IPv6 address defined by using the ``ip6`` configuration key.
If problems persist, see the :ref:`network troubleshooting <faq-networking>` section of the FAQ.
.. _core-device-config: .. _core-device-config:
Configuring the core device Configuring the core device
--------------------------- ---------------------------
.. note:: .. note::
The following steps are optional, and you only need to execute them if they are necessary for your specific system. To learn more about how ARTIQ works and how to use it first, you might skip to the next page, :doc:`rtio`. For all configuration options, the core device generally must be restarted for changes to take effect. The following steps are optional, and you only need to execute them if they are necessary for your specific system. To learn more about how ARTIQ works and how to use it first, you might skip to the first tutorial page, :doc:`rtio`. For all configuration options, the core device generally must be restarted for changes to take effect.
Flash idle and/or startup kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *idle kernel* is the kernel (that is, a piece of code running on the core device; see :doc:`rtio` for further explanation) which the core device runs in between experiments and whenever not connected to the host. It is saved directly to the core device's flash storage in compiled form. Potential uses include cleanup of the environment between experiments, state maintenance for certain hardware, or anything else that should run continuously whenever the system is not otherwise occupied.

To flash an idle kernel, first write an idle experiment. Note that since the idle kernel runs regardless of whether the core device is connected to the host, remote procedure calls or RPCs (functions called by a kernel to run on the host) are forbidden and the ``run()`` method must be a kernel marked with ``@kernel``. Once written, you can compile and flash your idle experiment: ::
    $ artiq_compile idle.py
    $ artiq_coremgmt config write -f idle_kernel idle.elf
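For reference, a minimal idle experiment might simply blink an LED; this sketch assumes that a ``led`` device is defined in your device database: ::

    from artiq.experiment import *

    class IdleKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led")

        @kernel
        def run(self):
            self.core.reset()
            while True:
                self.led.pulse(200*ms)
                delay(800*ms)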
The *startup kernel* is a kernel executed once and only once immediately whenever the core device powers on. Uses include initializing DDSes and setting TTL directions. For DRTIO systems, the startup kernel should wait until the desired destinations, including local RTIO, are up, using ``self.core.get_rtio_destination_status`` (see :meth:`~artiq.coredevice.core.Core.get_rtio_destination_status`).

To flash a startup kernel, proceed as with the idle kernel, but using the ``startup_kernel`` key in the :mod:`~artiq.frontend.artiq_coremgmt` command.

.. note::
    Subkernels (see :doc:`using_drtio_subkernels`) are allowed in idle (and startup) experiments without any additional ceremony. :mod:`~artiq.frontend.artiq_compile` will produce a ``.tar`` rather than a ``.elf``; simply substitute ``idle.tar`` for ``idle.elf`` in the ``artiq_coremgmt config write`` command.
Select the RTIO clock source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The default is to use an internal 125MHz clock. To select a source, write the desired option to the ``rtio_clock`` configuration key.
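For example, to synthesize the RTIO clock from an external 10MHz reference (one of the options listed in :ref:`core-device-clocking`): ::

    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125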
See :ref:`core-device-clocking` for availability of specific options.
.. _config-rtiomap:
Set up resolving RTIO channels to their names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This feature allows you to print the channels' respective names alongside their numbers in RTIO error messages. To enable it, run the :mod:`~artiq.frontend.artiq_rtiomap` tool and write its result into the device config at the ``device_map`` key: ::
    $ artiq_rtiomap dev_map.bin
    $ artiq_coremgmt config write -f device_map dev_map.bin
Core device
===========
The core device is an FPGA-based hardware component that contains a softcore or hardcore CPU tightly coupled with the so-called RTIO core, which runs in gateware and provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler and communicates with peripherals (TTL, DDS, etc.) through the RTIO core, as described in :doc:`rtio`. This architecture provides high timing resolution, low latency, low jitter, high-level programming capabilities, and good integration with the rest of the Python experiment code.
While it is possible to use the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, most use cases will require it.
.. _configuration-storage:

Configuration storage
---------------------

The core device reserves some storage space (either flash or directly on SD card, depending on target board) to store configuration data. The configuration data is organized as a list of key-value records, accessible either through :mod:`~artiq.frontend.artiq_mkfs` and :mod:`~artiq.frontend.artiq_flash` or, preferably in most cases, the ``config`` option of the :mod:`~artiq.frontend.artiq_coremgmt` core management tool (see below). Information can be stored to keys of any name, but the specific keys currently used and referenced by ARTIQ are summarized below:
``idle_kernel``
    Stores the idle kernel (as a compiled ``.tar`` or ``.elf`` binary). See :ref:`core-device-config`.
``routing_table``
    Sets the routing table in DRTIO systems; see :ref:`drtio-routing`. If not set, a star topology is assumed.

``device_map``
    If set, allows the core log to connect RTIO channels to device names and use device names as well as channel numbers in log output. A correctly formatted table can be automatically generated with :mod:`~artiq.frontend.artiq_rtiomap`, see :ref:`Utilities<rtiomap-tool>`.

``net_trace``
    If set to ``1``, will activate net trace (print all packets sent and received to UART and core log). This will considerably slow down all network response from the core. Not applicable for ARTIQ-Zynq (Kasli-SoC, ZC706).

``panic_reset``
``no_flash_boot``
    If set to ``1``, will disable flash boot. Network boot is attempted if possible. Not applicable for ARTIQ-Zynq.

``boot``
    Allows full firmware/gateware (``boot.bin``) to be written with :mod:`~artiq.frontend.artiq_coremgmt`, on ARTIQ-Zynq systems only.

Common configuration commands
-----------------------------
You do not need to remove a record in order to change its value; just overwrite it.
You can write several records at once::

    $ artiq_coremgmt config write -s key1 value1 -f key2 filename -s key3 value3

You can also write entire files in a record using the ``-f`` option. This is useful for instance to write the startup and idle kernels into the flash storage::
The same option is used to write ``boot.bin`` in ARTIQ-Zynq. Note that the ``boot`` key is write-only.
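Records can also be read back or removed; for instance: ::

    $ artiq_coremgmt config read ip
    $ artiq_coremgmt config remove idle_kernel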
See also the full reference of :mod:`~artiq.frontend.artiq_coremgmt` in :ref:`Utilities <core-device-management-tool>`.
.. _core-device-clocking:
Clocking
--------
The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If the latter is chosen, an external reference must be provided (via the front panel SMA input on Kasli boards). Valid configuration options include:
* ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock,
* ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option),
* ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock,
* ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock.
The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
As of ARTIQ 8, it is possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the :ref:`JSON description file <system-description>`. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.
If WRPLL is not used, the PLL can also be bypassed entirely with the options
* ``ext0_bypass`` (input clock used directly)
* ``ext0_bypass_125`` (explicit alias)
* ``ext0_bypass_100`` (explicit alias)
Bypassing the PLL ensures the skews between input clock, downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching the master.
Board details
-------------
All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging.
Kasli and Kasli-SoC
^^^^^^^^^^^^^^^^^^^
`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli is based on a Xilinx Artix-7 FPGA, and Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is based on a Zynq-7000 SoC, notably including an ARM CPU allowing for much heavier software computations at high speeds. They are architecturally very different but supply similar feature sets. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original v1.0 remains supported by ARTIQ.
Kasli can be connected to the network using a 1000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC).
The DRTIO line rate depends upon the RTIO clock frequency in use, e.g., at 125MHz the line rate is 2.5Gbps, at 150MHz 3.0Gbps, etc. See :ref:`core-device-clocking` for more information on RTIO clocks.
KC705 and ZC706
^^^^^^^^^^^^^^^

Two high-end evaluation kits are also supported as alternative ARTIQ core device target boards, respectively the Kintex-7 `KC705 <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and the Zynq-SoC `ZC706 <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_, both from Xilinx. ZC706, like Kasli-SoC, runs on the ARTIQ-Zynq port. Both are supported in several variants, namely NIST CLOCK and QC2 (FMC), each available as a master, satellite, or standalone system. See also :doc:`building_developing` for more on system variants.

Common KC705 problems
"""""""""""""""""""""
* The SW13 switches on the board need to be set to 00001.
* When connected, the CLOCK adapter breaks the JTAG chain due to TDI not being connected to TDO on the FMC mezzanine.
* On some boards, the JTAG USB connector is not correctly soldered.
VADJ
""""
With the NIST CLOCK and QC2 adapters, for safe operation of the DDS buses (to prevent damage to the IO banks of the FPGA), the FMC VADJ rail of the KC705 should be changed to 3.3V. Plug the Texas Instruments USB-TO-GPIO PMBus adapter into the PMBus connector in the corner of the KC705 and use the Fusion Digital Power Designer software to configure (requires Windows). Write to chip number U55 (address 52), channel 4, which is the VADJ rail, to make it 3.3V instead of 2.5V. Power cycle the KC705 board to check that the startup voltage on the VADJ rail is now 3.3V.
Variant details
---------------
NIST CLOCK
^^^^^^^^^^

With the KC705 CLOCK hardware, the TTL lines are mapped as follows:

+--------------------+-----------------------+--------------+
| RTIO channel       | TTL line              | Capability   |
The DDS bus is on channel 27.
The ZC706 variant is identical except for the following differences:
- The SMA GPIO on channel 18 is replaced by an Input+Output capable PMOD1_0 line.
- Since there is no SDIO on the programmable logic side, channel 26 is instead occupied by an additional SPI:
+--------------+------------------+--------------+--------------+--------------+
| RTIO channel | CS_N             | MOSI         | MISO         | CLK          |
+==============+==================+==============+==============+==============+
| 26           | PMOD_SPI_CS_N    | PMOD_SPI_MOSI| PMOD_SPI_MISO| PMOD_SPI_CLK |
+--------------+------------------+--------------+--------------+--------------+
NIST QC2
^^^^^^^^

With the KC705 QC2 hardware, the TTL lines are mapped as follows:

+--------------------+-----------------------+--------------+
| RTIO channel       | TTL line              | Capability   |
There are two DDS buses on channels 50 (LPC, DDS0-DDS11) and 51 (HPC, DDS12-DDS23).

The QC2 hardware uses TCA6424A I2C I/O expanders to define the directions of its TTL buffers. There is one such expander per FMC card, and they are selected using the PCA9548 on the KC705.
To avoid I/O contention, the startup kernel should first program the TCA6424A expanders and then call ``output()`` on all ``TTLInOut`` channels that should be configured as outputs. See :mod:`artiq.coredevice.i2c` for more details.
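For illustration, such a startup kernel might be structured as follows. The device names used here (``ttl0``, ``ttl1``) are placeholders, and the expander-programming step is only indicated schematically, since the exact call depends on the :mod:`artiq.coredevice.i2c` driver entries in your device database: ::

    from artiq.experiment import *

    class StartupKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")
            self.setattr_device("ttl1")

        @kernel
        def run(self):
            self.core.reset()
            # First program the TCA6424A I2C expanders to set the TTL buffer
            # directions (see artiq.coredevice.i2c for the driver interface).
            # ...
            # Only then declare the corresponding channels as outputs.
            self.ttl0.output()
            self.ttl1.output()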
The ZC706 is identical except for the following differences:
- The SMA GPIO is once again replaced with PMOD1_0.
- The first four TTLs also have edge counters, on channels 52, 53, 54, and 55.
Core real-time drivers
======================

These drivers are for the core device and the peripherals closely integrated into it, which do not use the controller mechanism.
Core language and environment
=============================

The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in ``artiq.experiment`` and can be imported with ``from artiq.experiment import *``.
:mod:`artiq.language.core` module
---------------------------------
Default network ports
=====================
+---------------------------------+--------------+
| Core device (analyzer)          | 1382         |
+---------------------------------+--------------+
| MonInj (core device or proxy)   | 1383         |
+---------------------------------+--------------+
| MonInj (proxy control)          | 1384         |
+---------------------------------+--------------+
| Core analyzer proxy (proxy)     | 1385         |
+---------------------------------+--------------+
Full support for a specific device, called a network device support package or NDSP, has several parts:
1. The `driver`, which contains the Python API functions to be called over the network and performs the I/O to the device. The top-level module of the driver should be called ``artiq.devices.XXX.driver``.
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and should be called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it.
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data, which would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters, must be transferred over the network API.
4. An optional `mediator`, which is code executed on the client that supplements the network API. A mediator may contain kernels that control real-time signals such as TTL lines connected to the device. Simple devices use the network API directly and do not have a mediator. Mediator modules are called ``artiq.devices.XXX.mediator`` and their public classes are exported at the ``artiq.devices.XXX`` level (via ``__init__.py``) for direct import and use by the experiments.
The driver and controller
-------------------------
Integration with ARTIQ experiments
----------------------------------

Generally we will want to add the device to our :ref:`device database <device-db>` so that we can add it to an experiment with ``self.setattr_device`` and so the controller can be started and stopped automatically by a controller manager (the :mod:`~artiq_comtools.artiq_ctlmgr` utility from ``artiq-comtools``). To do so, add an entry to your device database in this format: ::
    device_db.update({
        "hello": {
Command line and options
^^^^^^^^^^^^^^^^^^^^^^^^
* Controllers should be able to operate in "simulation" mode, specified with ``--simulation``, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file.
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter names.
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it. A minimal sketch is given below.
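A bare-bones parser following these conventions might look like this; the controller name and help strings are only illustrative: ::

    import argparse

    def get_argparser():
        parser = argparse.ArgumentParser(description="hello device controller")
        # Device identification to attach to, per the convention above.
        parser.add_argument("-d", "--device", default=None,
                            help="device to connect to, e.g. serial number or /dev entry")
        # Simulation mode: run without the hardware attached.
        parser.add_argument("--simulation", action="store_true",
                            help="run in simulation mode")
        return parser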
Style
-----
The routing table
-----------------
The routing table defines, for each destination, the list of hops ("route") that must be taken from the root in order to reach it.
It is stored in a binary format that can be generated and manipulated with the utility :mod:`~artiq.frontend.artiq_route`, see :ref:`drtio-routing`. The binary file is programmed into the flash storage of the core device under the ``routing_table`` key. It is automatically distributed to downstream devices when the connections are established. Modifying the routing table requires rebooting the core device for the new table to be taken into account.
Internal details
----------------
The device database is stored in a Python dictionary whose keys are the device names, for example: ::

    ...
        "arguments": {"channel": 19}
    },
Note that the key (the name of the device) is ``led`` and the value is itself a Python dictionary. Names will later be used to gain access to a device through methods such as ``self.setattr_device("led")``. While in this case ``led`` can be replaced with another name, provided it is used consistently, some names (in particular, ``core``) are used internally by ARTIQ and will cause problems if changed. It is often more convenient to use aliases for renaming purposes, see below.
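For reference, a complete entry of this kind might read as follows; the module, class and channel number shown are illustrative and depend on your system: ::

    device_db = {
        "led": {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": 19}
        },
    }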
.. note::
    The device database is generated and stored in the memory of the master when the master is first started. Changes to the ``device_db.py`` file will not immediately affect a running master. In order to update the device database, right-click in the Explorer window and select 'Scan device database', or run the command ``artiq_client scan-devices``.
.. warning::
    It is important to understand that the device database does not *set* your system configuration, only *describe* it. If you change the devices available to your system, it is usually necessary to edit the device database, but editing the database will not change what devices are available to your system.
Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, real-time, e.g. your Sinara hardware) must be connected to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.
While controllers can be added to and removed from your device database on an *ad hoc* basis, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will be provided with new binaries and necessary assistance.) See :doc:`building_developing`.
Adding or removing new real-time hardware is a difference in *system configuration,* which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON usually called the :ref:`system description file<system-description>`. The device database generally provides the information that may change from one run of ARTIQ to another, e.g., device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines that information which is *not* permitted to change, e.g., what device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.
If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add controllers, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the :mod:`~artiq.frontend.artiq_ddb_template` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly. See also the corresponding entry in :ref:`Utilities <ddb-template-tool>`.
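For instance, assuming a system description file named ``kasli_example.json`` (the file name is illustrative), the generated database can simply be redirected into a file: ::

    $ artiq_ddb_template kasli_example.json > device_db.py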
Local devices
^^^^^^^^^^^^^
Local device entries are dictionaries which contain a ``type`` field set to ``local``.
The fields ``module`` and ``class`` determine the location of the Python class of the driver. The ``arguments`` field is another (possibly empty) dictionary that contains arguments to pass to the device driver constructor. ``arguments`` is often used to specify the RTIO channel number of a peripheral, which must match the channel number in gateware.
On Kasli and Kasli-SoC, the allocation of RTIO channels to EEM ports is done automatically when the gateware is compiled, and while conceptually simple (channels are assigned one after the other, from zero upwards, for each device entry in the system description file) it is not entirely straightforward (different devices require different numbers of RTIO channels). Again, the easiest way to handle this when writing a new device database is automatically, using :mod:`~artiq.frontend.artiq_ddb_template`.
.. _environment-ctlrs:
Controllers
^^^^^^^^^^^
Controller entries are dictionaries which contain a ``type`` field set to ``controller``. When an experiment requests such a device, an RPC client (see ``sipyco.pc_rpc``) is created and connected to the appropriate controller. Controller entries are also used by controller managers to determine what controllers to run. For an example, see :ref:`the NDSP development page <ndsp-integration>`.
The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matching the ``port`` field) and an appropriate bind address for the controller's listening socket.
An optional ``best_effort`` boolean field determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. ``BestEffortClient`` is very similar to ``Client``, but suppresses network errors and automatically retries connections in the background. If no ``best_effort`` field is present, ``Client`` is used by default.
Aliases
^^^^^^^

If an entry is a string, that string is used as a key for another lookup in the device database.
Arguments
---------
Arguments are values that parameterize the behavior of an experiment. ARTIQ supports both interactive arguments, requested and supplied at some point while an experiment is running, and submission-time arguments, requested in the build phase and set before the experiment is executed. For more on arguments in practice, see the tutorial section :ref:`mgmt-arguments`. For supported argument types, see the reference for :mod:`artiq.language.environment`; for specific methods, see the reference for :class:`~artiq.language.environment.HasEnvironment`.
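As a brief illustration, submission-time arguments are typically requested with ``setattr_argument`` in the ``build()`` phase; the argument names and defaults below are arbitrary: ::

    from artiq.experiment import *

    class ArgumentsDemo(EnvExperiment):
        def build(self):
            # Requested at submission time, before the experiment runs.
            self.setattr_argument("count", NumberValue(100))
            self.setattr_argument("note", StringValue("calibration"))

        def run(self):
            print(self.count, self.note)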
.. _environment-datasets:
Datasets
--------
Datasets are values, kept in a key-value store, that are read and written by experiments. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`data interfaces tutorial <mgmt-datasets>`.
A dataset may be broadcast (``broadcast=True``), that is, distributed to all clients connected to the master. This is useful e.g. for the ARTIQ dashboard to plot results while an experiment is in progress and give rapid feedback to the user. Broadcasted datasets live in a global key-value store owned by the master. Care should be taken that experiments use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for instance, a periodic calibration experiment might update a dataset read by payload experiments.

Broadcasted datasets are replaced when a new dataset with the same key (name) is produced. By default, they are erased when the master halts. Broadcasted datasets may be made persistent (``persistent=True``, which also implies ``broadcast=True``), in which case the master stores them in an LMDB database typically called ``dataset_db.mdb``, where they are saved across master restarts.
By default, datasets are archived in the ``results`` HDF5 output for that run, although this can be opted against (``archive=False``).
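In code, these options are passed to ``set_dataset``; for instance (the dataset names are arbitrary): ::

    import numpy as np
    from artiq.experiment import *

    class DatasetsDemo(EnvExperiment):
        def build(self):
            pass

        def run(self):
            # Broadcast: pushed to all clients, e.g. for live plotting in the dashboard.
            self.set_dataset("parabola", np.full(10, np.nan), broadcast=True)
            # Persistent (implies broadcast): kept by the master across restarts.
            self.set_dataset("beam_calibration", 1.27, persistent=True)
            # Archived in the HDF5 results by default; archive=False opts out.
            self.set_dataset("debug_trace", [0, 1, 1, 0], archive=False)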
Datasets and units
^^^^^^^^^^^^^^^^^^
Extending RTIO
==============
.. warning::
This page is for users who want to extend or modify ARTIQ RTIO. Broadly speaking, one of the core intentions of ARTIQ is to provide a high-level, easy-to-use interface for experimentation, while the infrastructure handles the technological challenges of the high-resolution, timing-critical operations required. Rather than worrying about the details of timing in hardware, users can outline tasks quickly and efficiently in ARTIQ Python, and trust the system to carry out those tasks in real time. It is not normally, or indeed ever, necessary to make modifications on a gateware level.
However, ARTIQ is an open-source project, and welcomes interest and agency from its users, as well as from experienced developers. This page is intended to serve firstly as a broad introduction to the internal structure of ARTIQ, and secondly as a tutorial for how RTIO extensions in ARTIQ can be made. Experience with FPGAs or hardware description languages is not strictly necessary, but additional research on the topic will likely be required to make serious modifications of your own.
For instructions on setting up the ARTIQ development environment and on building gateware and firmware binaries, first see :doc:`building_developing` in the main part of the manual.
Introduction to the ARTIQ internal stack
----------------------------------------
.. tikz::
:align: center
:libs: arrows.meta
:xscale: 70
\definecolor{primary}{HTML}{0d3547} % ARTIQ blue
\pgfdeclarelayer{bg} % declare background layer
\pgfsetlayers{bg,main} % set layer order
\node[draw, dotted, thick] (frontend) at (0, 6) {Host machine: Compiler, master, GUI...};
\node[draw=primary, fill=white] (software) at (0, 5) {Software: \it{Kernel code}};
\node[draw=blue, fill=white, thick] (firmware) at (0, 4) {Firmware: \it{ARTIQ runtime}};
\node[draw=blue, fill=white, thick] (gateware) at (0, 3) {Gateware: \it{Migen and Misoc}};
\node[draw=primary, fill=white] (hardware) at (0, 2) {Hardware: \it{Sinara ecosystem}};
\begin{pgfonlayer}{bg}
\draw[primary, -Stealth, dotted, thick] (frontend.south) to [out=180, in=180] (firmware.west);
\draw[primary, -Stealth, dotted, thick] (frontend) to (software);
\end{pgfonlayer}
\draw[primary, -Stealth] (firmware) to (software);
\draw[primary, -Stealth] (gateware) to (firmware);
\draw[primary, -Stealth] (hardware) to (gateware);
Like any other modern piece of software, kernel code running on an ARTIQ core device rests upon a layered infrastructure, starting with the hardware: the physical carrier board and its peripherals. Generally, though not exclusively, this is the `Sinara device family <https://m-labs.hk/experiment-control/sinara-core/>`_, which is designed to work with ARTIQ. Other carrier boards, such as the Xilinx KC705 and ZC706, are also supported.
All of the ARTIQ core device carrier boards necessarily center around a physical field-programmable gate array, or FPGA. If you have never worked with FPGAs before, it is easiest to understand them as 'rearrangeable' circuits. Ideally, they are capable of approaching the tremendous speed and timing precision advantages of custom-designed, application-specific hardware, while still being reprogrammable, allowing development and revision to continue after manufacturing.
The 'configuration' of an FPGA, the circuit design it is programmed with, is its *gateware*. Gateware is not software, and is not written in programming languages. Rather, it is written in a *hardware description language,* of which the most common are VHDL and Verilog. The ARTIQ codebase uses a set of tools called `Migen <https://m-labs.hk/gateware/migen/>`_ to write hardware description in a subset of Python, which is later translated to Verilog behind the scenes. This has the advantage of preserving much of the flexibility and convenience of Python as a programming language, but shouldn't be mistaken for it *being* Python, or functioning like Python. (MiSoC, built on Migen, is used to implement softcore -- i.e. 'programmed', on-FPGA, not hardwired -- CPUs on Kasli and KC705. Zynq devices contain 'hardcore' ARM CPUs already and correspondingly make relatively less intensive use of MiSoC.)
The low-level software that runs directly on the core device's CPU, softcore or hardcore, is its *firmware.* This is the 'operating system' of the core device. The firmware is tasked, among other things, with handling the low-level communication between the core device and the host machine, as well as between the core devices in a DRTIO setting. It is written in bare-metal `Rust <https://www.rust-lang.org/>`__. There are currently two active versions of the ARTIQ firmware (the version used for ARTIQ-Zynq, NAR3, is more modern than that used on Kasli and KC705, and will likely eventually replace it) but they are functionally equivalent except for internal details.
Experiment kernels themselves -- ARTIQ Python, processed by the ARTIQ compiler and loaded from the host machine -- rest on top of and are framed and supported by the firmware, in the same way that application software on your PC rests on top of an operating system. Altogether, software kernels communicate with the firmware to set parameters for the gateware, which passes signals directly to the hardware.
These frameworks are built to be self-contained and extensible. To make additions to the gateware and software, for example, we do not need to make changes to the firmware; we can interact purely with the interfaces provided on either side.
Extending gateware logic
------------------------
As briefly explained in :doc:`rtio`, when we talk about RTIO infrastructure, we are primarily speaking of structures implemented in gateware. The FIFO banks which hold scheduled output events or recorded input events, for example, are in gateware. Sequence errors, overflow exceptions, event spreading, and so on, happen in the gateware. In some cases, you may want to make relatively simple, parametric changes to existing RTIO, like changing the sizes of certain queues. In this case, it can be as simple as tracking down the part of the code where this parameter is set, changing it, and :doc:`rebuilding the binaries <building_developing>`.
.. warning::
Note that FPGA resources are finite, and buffer sizes, lane counts, etc., are generally chosen to maximize available resources already, with different values depending on the core device in use. Depending on the peripherals you include (some are more resource-intensive than others), blanket increases will quickly outstrip the capacity of your FPGA and fail to build. Increasing the depth of a particular channel you know to be heavily used is more likely to succeed; the easiest way to find out is to attempt the build and observe what results.
Gateware in ARTIQ is housed in ``artiq/gateware`` on the main ARTIQ repository and (for Zynq-specific additions) in ``artiq-zynq/src/gateware`` on ARTIQ-Zynq. The starting point for figuring out your changes will often be the *target file*, which is core device-specific and which you may recognize as the primary module called when building gateware. Depending on your core device, simply track down the file named after it, as in ``kasli.py``, ``kasli_soc.py``, and so on. Note that the Kasli and Kasli-SoC targets are designed to take JSON description files as input, whereas their KC705 and ZC706 equivalents work with hardcoded variants instead.
To change parameters related to particular peripherals, see also the files ``eem.py`` and ``eem_7series.py``, which describe the core device's interface with other EEM cards in Migen terms, and contain ``add_std`` methods that in turn reference specific gateware modules and assign RTIO channels.
Adding a module to gateware
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To demonstrate how RTIO can be *extended,* on the other hand, we will develop a new interface entirely for the control of certain hardware -- in our case, for a simple example, the core device LEDs. If you haven't already, follow the instructions in :doc:`building_developing` to clone the ARTIQ repository and set up a development environment. The first part of our addition will be a module added to ``gateware/rtio/phy`` (PHY, for interaction with the physical layer), written in the Migen Fragmented Hardware Description Language (FHDL).
.. seealso::
To find reference material for FHDL and the Migen constructs we will use, see the Migen manual, in particular the page `The FHDL domain-specific language <https://m-labs.hk/migen/manual/fhdl.html>`_.
.. warning::
If you have never worked with a hardware description language before, it is important to understand that hardware description is fundamentally different to programming in a language like Python or Rust. At its most basic, a program is a set of instructions: a step-by-step guide to a task you want to see performed, where each step is written, and executed, principally in sequence. In contrast, hardware description is *a description*. It specifies the static state of a piece of hardware. There are no 'steps', and no chronological execution, only stated facts about how the system should be built.
The examples we will handle in this tutorial are simple, and you will likely find Migen much more readable than traditional languages like VHDL and Verilog, but keep in mind that we are describing how a system connects and interlocks its signals, *not* operations it should perform.
Normally, the PHY module used for LEDs is the ``Output`` of ``ttl_simple.py``. Take a look at its source code. Note that values like ``override`` and ``probes`` exist to support RTIO MonInj -- ``probes`` for monitoring, ``override`` for injection -- and are not involved with normal control of the output. Note also that ``pad``, among FPGA engineers, refers to an input/output pad, i.e. a physical connection through which signals are sent. ``pad_n`` is its negative pair, necessary only for certain kinds of TTLs and not applicable to LEDs.
Interface and signals
"""""""""""""""""""""
To get started, create a new file in ``gateware/rtio/phy``. Call it ``linked_led.py``. In it, create a class ``Output``, which will inherit from Migen's ``Module``, and give it an ``__init__`` method, which takes two pads as input: ::

    from migen import *


    class Output(Module):
        def __init__(self, pad0, pad1):
``pad0`` and ``pad1`` will represent output pads, in our case ultimately connecting to the board's user LEDs. On the other side, to receive output events from an RTIO FIFO queue, we will use an ``Interface`` provided by the ``rtlink`` module, also found in ``artiq/gateware``. Both output and input interfaces are available, and both can be combined into one link, but we are only handling output events. We use the ``data_width`` parameter to request an interface that is 2 bits wide: ::

    from migen import *

    from artiq.gateware.rtio import rtlink


    class Output(Module):
        def __init__(self, pad0, pad1):
            self.rtlink = rtlink.Interface(rtlink.OInterface(2))
In our example, rather than controlling both LEDs manually using ``on`` and ``off``, which is the functionality ``ttl_simple.py`` provides, we will control one LED manually and have the gateware determine the value of the other based on the first. This same logic would be easy (in fact, much easier) to implement in ARTIQ Python; the advantage of placing it in gateware is that logic in gateware is *extremely fast,* in effect 'instant', i.e., completed within a single clock cycle. Rather than waiting for a CPU to process and respond to instructions, a response can happen at the speed of a dedicated logic circuit.
.. note::
Naturally, the truth is more complicated, and depends heavily on how complex the logic in question is. An overlong chain of gateware logic will fail to settle within a single RTIO clock cycle, causing a wide array of potential problems that are difficult to diagnose and difficult to fix; the only solutions are to simplify the logic, deliberately split it across multiple clock cycles (correspondingly increasing latency for the operation), or to decrease the speed of the clock (increasing latency for *everything* the device does).
For now, it's enough to say that you are unlikely to encounter timing failures with the kind of simple logic demonstrated in this tutorial. Indeed, designing gateware logic to run in as few cycles as possible without 'failing timing' is an engineering discipline in itself, and much of what FPGA developers spend their time on.
In practice, of course, since ARTIQ explicitly allows scheduling simultaneous output events to different channels, there's still no reason to make gateware modifications to accomplish this. After all, leveraging the real-time capabilities of customized gateware without making it necessary to *write* it is much of the point of ARTIQ as a system. Only in more complex cases, such as directly binding inputs to outputs without feeding back through the CPU, might gateware-level additions become necessary.
For now, add two intermediate signals for our logic, instances of the Migen ``Signal`` construct: ::

    def __init__(self, pad0, pad1):
        self.rtlink = rtlink.Interface(rtlink.OInterface(2))

        reg = Signal()
        pad0_o = Signal()
.. note::
A gateware 'signal' is not a signal in the sense of being a piece of transmitted information. Rather, it represents a channel, which bits of information can be held in. To conceptualize a Migen ``Signal``, take it as a kind of register: a box that holds a certain number of bits, and can update those bits from an input, or broadcast them to an output connection. The number of bits is arbitrary, e.g., a ``Signal(2)`` will be two bits wide, but in our example we handle only single-bit registers.
These are our inputs, outputs, and intermediate signals. By convention, in Migen, these definitions are all made at the beginning of a module, and separated from the logic that interconnects them with a line containing the three symbols ``###``. See also ``ttl_simple.py`` and other modules.
Since hardware description is not linear or chronological, nothing conceptually prevents us from making these statements in any other order -- in fact, except for the practicalities of code execution, nothing particularly prevents us from defining the connections between the signals before we define the signals themselves -- but for readable and maintainable code, this format is vastly preferable.
Combinatorial and synchronous statements
""""""""""""""""""""""""""""""""""""""""
After the ``###`` separator, we will set the connecting logic. A Migen ``Module`` has several special attributes, to which different logical statements can be assigned. We will be using ``self.sync``, for synchronous statements, and ``self.comb``, for combinatorial statements. If a statement is *synchronous*, it is only updated once per clock cycle, i.e. when the clock ticks. If a statement is *combinatorial*, it is updated whenever one of its inputs changes, i.e. 'instantly'.
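As a generic illustration (this snippet is not part of our module, and the names are arbitrary), the two kinds of statement look like this in Migen: ::

    from migen import *


    class Illustration(Module):
        def __init__(self):
            a = Signal()
            b = Signal()
            data = Signal(2)   # a two-bit signal; data[0] and data[1] address its bits

            # Combinatorial: b follows a continuously, like a plain wire.
            self.comb += b.eq(a)

            # Synchronous: data only captures a new value at each tick of the
            # (default) system clock, and holds it steady in between.
            self.sync += data.eq(Cat(a, b))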
Add a synchronous block as follows: ::

    self.sync.rio_phy += [
        If(self.rtlink.o.stb,
            pad0_o.eq(self.rtlink.o.data[0] ^ pad0_o),
            reg.eq(self.rtlink.o.data[1])
        )
    ]
In other words, at every tick of the ``rio_phy`` clock, if the ``rtlink`` strobe signal (which is set to high when the data is valid, i.e., when an output event has just reached the PHY) is high, the ``pad0_o`` and ``reg`` registers are updated according to the input data on ``rtlink``.
.. note::
Notice that, in a standard synchronous block, it makes no difference how or how many times the inputs to an ``.eq()`` statement change or fluctuate. The output is updated *exactly once* per cycle, at the tick, according to the instantaneous state of the inputs in that moment. In between ticks and during the clock cycle, it remains stable at the last updated level, no matter the state of the inputs. This stability is vital for the broader functioning of synchronous circuits, even though 'waiting for the tick' adds latency to the update.
``reg`` is simply set equal to the incoming bit. ``pad0_o``, on the other hand, flips its old value if the input is ``1``, and keeps it if the input is ``0``. Note that ``^``, which you may know as the Python notation for a bitwise XOR operation, here simply represents an XOR gate. In summary, we can flip the value of ``pad0`` with the first bit of the interface, and set the value of ``reg`` with the other.
Add the combinatorial block as follows: ::

    self.comb += [
        pad0.eq(pad0_o),
        If(reg,
            pad1.eq(pad0_o)
        )
    ]
The output ``pad0`` is continuously connected to the value of the ``pad0_o`` register. The output of ``pad1`` is set equal to that of ``pad0``, but only if the ``reg`` register is high, or ``1``.
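For reference, the complete PHY module, assembled from the pieces developed above, should look something like this: ::

    from migen import *

    from artiq.gateware.rtio import rtlink


    class Output(Module):
        def __init__(self, pad0, pad1):
            self.rtlink = rtlink.Interface(rtlink.OInterface(2))
            reg = Signal()
            pad0_o = Signal()

            ###

            self.sync.rio_phy += [
                If(self.rtlink.o.stb,
                    pad0_o.eq(self.rtlink.o.data[0] ^ pad0_o),
                    reg.eq(self.rtlink.o.data[1])
                )
            ]

            self.comb += [
                pad0.eq(pad0_o),
                If(reg,
                    pad1.eq(pad0_o)
                )
            ]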
The module is now capable of accepting RTIO output events and applying them to the hardware outputs. What we can't yet do is generate these output events in an ARTIQ kernel. To do that, we need to add a core device driver.
Adding a core device driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have been writing ARTIQ experiments for any length of time, you will already be familiar with the core device drivers. Their reference is kept in this manual on the page :doc:`core_drivers_reference`; their methods are commonly used to manipulate the core device and its close peripherals. Source code for these drivers is kept in the directory ``artiq/coredevice``. Create a new file, again called ``linked_led.py``, in this directory.
The drivers are software, not gateware, and they are written in regular ARTIQ Python. They use methods given in ``coredevice/rtio.py`` to queue input and output events to RTIO channels. We will start with its ``__init__``, the method ``get_rtio_channels`` (which is formulaic, and exists only to be used by :meth:`~artiq.frontend.artiq_rtiomap`), and an output set method ``set_o``: ::
    from artiq.language.core import *
    from artiq.language.types import *
    from artiq.coredevice.rtio import rtio_output


    class LinkedLED:
        def __init__(self, dmgr, channel, core_device="core"):
            self.core = dmgr.get(core_device)
            self.channel = channel
            self.target_o = channel << 8

        @staticmethod
        def get_rtio_channels(channel, **kwargs):
            return [(channel, None)]

        @kernel
        def set_o(self, o):
            rtio_output(self.target_o, o)
.. note::
``rtio_output()`` is one of four methods given in ``coredevice/rtio.py``, which provides an interface with lower layers of the system. You can think of it ultimately as representing the other side of the ``Interface`` we requested in our Migen module. Notably, in between the two, events pass through the SED and its FIFO lanes, where they are held until the exact real-time moment the events were scheduled for, as originally described in :doc:`rtio`.
Now we can write the kernel API. In the gateware, bit 0 flips the value of the first pad: ::

    @kernel
    def flip_led(self):
        self.set_o(0b01)
and bit 1 connects the second pad to the first: ::

    @kernel
    def link_up(self):
        self.set_o(0b10)
There's no reason we can't do both at the same time: ::

    @kernel
    def flip_together(self):
        self.set_o(0b11)
Target and device database
^^^^^^^^^^^^^^^^^^^^^^^^^^
Our ``linked_led`` PHY module exists, but in order for it to be generated as part of a set of ARTIQ binaries, we need to add it to one of the target files. Find the target file for your core device, as described above. Each target file is structured differently; track down the part of the file where channels and PHY modules are assigned to the user LEDs. Depending on your core device, there may be two or more LEDs that are available. Look for lines similar to: ::
    for i in (0, 1):
        user_led = self.platform.request("user_led", i)
        phy = ttl_simple.Output(user_led)
        self.submodules += phy
        self.rtio_channels.append(rtio.Channel.from_phy(phy))
Edit the code so that, rather than assigning a separate PHY and channel to each LED, two of the LEDs are grouped together in ``linked_led``. You might use something like: ::

    print("Linked LEDs at:", len(self.rtio_channels))
    phy = linked_led.Output(self.platform.request("user_led", 0), self.platform.request("user_led", 1))
    self.submodules += phy
    self.rtio_channels.append(rtio.Channel.from_phy(phy))
Save the target file, under a different name if you prefer. Follow the instructions in :doc:`building_developing` to build a set of binaries, being sure to use your edited target file for the gateware, and flash your core device, for simplicity preferably in a standalone configuration without peripherals.
Now, before you can access your new core device driver from a kernel, it must be added to your device database. Find your ``device_db.py``. Delete the entries dedicated to the user LEDs that you have repurposed; if you tried to control those LEDs using the standard TTL interfaces now, the corresponding gateware would be missing anyway. Add an entry with your new driver, as in: ::
    device_db["leds"] = {
        "type": "local",
        "module": "artiq.coredevice.linked_led",
        "class": "LinkedLED",
        "arguments": {"channel": 0x000008}
    }
.. warning::
Channel numbers are assigned sequentially each time ``rtio_channels.append()`` is called. Since we assigned the channel for our linked LEDs in the same location as the old user LEDs, the correct channel number is likely simply the one previously used in your device database for the first LED. In any other case, however, the ``print()`` statement we added to the target file should tell us the exact canonical channel. Search through the console logs produced when generating the gateware to find the line starting with ``Linked LEDs at:``.
Depending on how your device database was written, note that the channel numbers for other peripherals, if they are present, *will have changed*, and :meth:`~artiq.frontend.artiq_ddb_template` will not generate their numbers correctly unless it is edited to match the new assignments of the user LEDs. For a longer-term gateware change, especially the addition of a new EEM card, ``artiq/frontend/artiq_ddb_template.py`` and ``artiq/coredevice/coredevice_generic.schema`` should be edited accordingly, so that system descriptions and device databases can continue to be parsed and generated correctly.
Test experiments
^^^^^^^^^^^^^^^^
Now the device ``leds`` can be called from your device database, and its corresponding driver accessed, just as with any other device. Try writing some miniature experiments, for instance ``flip.py``: ::
    from artiq.experiment import *


    class flip(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("leds")

        @kernel
        def run(self):
            self.core.reset()
            self.leds.flip_led()
and ``linkup.py``: ::
    from artiq.experiment import *


    class sync(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("leds")

        @kernel
        def run(self):
            self.core.reset()
            self.leds.link_up()
Run these and observe the results. Congratulations! You have successfully constructed an extension to the ARTIQ RTIO.
.. + 'Adding custom EEMs' and 'Merging support'
View File
@ -1,32 +1,179 @@
FAQ (How do I...)
=================

use this documentation?
-----------------------

The content of this manual is arranged in rough reading order. If you start at the beginning and make your way through section by section, you should form a pretty good idea of how ARTIQ works and how to use it. Otherwise:
**If you are just starting out,** and would like to get ARTIQ set up on your computer and your core device, start with :doc:`installing`, :doc:`flashing`, and :doc:`configuring`, in that order.
**If you have a working ARTIQ setup** (or someone else has set it up for you), start with the tutorials: read :doc:`rtio`, then progress to :doc:`getting_started_core`, :doc:`getting_started_mgmt`, and :doc:`using_data_interfaces`. If your system is in a DRTIO configuration, :doc:`DRTIO and subkernels <using_drtio_subkernels>` will also be helpful.
Pages like :doc:`management_system` and :doc:`core_device` describe **specific components of the ARTIQ ecosystem** in more detail. If you want to understand more about device and dataset databases, for example, read the :doc:`environment` page; if you want to understand the ARTIQ Python dialect and everything it does or does not support, read the :doc:`compiler` page.
Reference pages, like :doc:`main_frontend_tools` and :doc:`core_drivers_reference`, contain the detailed documentation of the individual methods and command-line tools ARTIQ provides. They are heavily interlinked throughout the rest of the documentation: whenever a method, tool, or exception is mentioned by name, like :class:`~artiq.frontend.artiq_run`, :meth:`~artiq.language.core.now_mu`, or :exc:`~artiq.coredevice.exceptions.RTIOUnderflow`, it can normally be clicked on to directly access the reference material. Notice also that the online version of this manual is searchable; see the 'Search docs' bar at left.
.. _build-documentation:
build this documentation?
-------------------------
To generate this manual from source, you can use ``nix build`` directives, for example: ::
$ nix build git+https://github.com/m-labs/artiq.git\?ref=release-[number]#artiq-manual-html
Substitute ``artiq-manual-pdf`` to get the LaTeX PDF version. The results will be in ``result``.
The manual is written in `reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_; you can find the source files in the ARTIQ repository under ``doc/manual``. If you spot a mistake, a typo, or something that's out of date or missing -- in particular, if you want to add something to this FAQ -- feel free to clone the repository, edit the source RST files, and make a pull request with your version of an improvement. (If you're not a fan of or not familiar with command-line Git, both GitHub and Gitea support making edits and pull requests directly in the web interface; tutorial materials are easy to find online.) The second best thing is to open an issue to make M-Labs aware of the problem.
.. _faq-networking:
troubleshoot networking problems?
---------------------------------
Diagnosis aids:
- Can you ``ping`` the device?
- Is the Ethernet LED on?
- Is the ERROR LED on?
- Is there anything unusual recorded in :ref:`the UART log <connecting-UART>`?
Some things to consider:
- Is the ``core_addr`` field of your ``device_db.py`` set correctly?
- Did your device flash and boot successfully? Were the binaries generated for the correct board hardware version?
- Are your core device's IP address and networking configurations definitely set correctly? Check the UART log to confirm, and talk to your network administrator about what the correct choices are.
- Is your core device configured for an external reference clock? If so, it cannot function correctly without one. Is the external reference clock plugged in?
- Are Ethernet and (on Kasli only) SFP0 plugged in all the way? Are they working? Try different cables and SFP adapters; M-Labs tests with CAT6 cables, but lower categories should be supported too.
- Are your PC and your crate in the same subnet?
- Is some other device in your network already using the configured IP address? Turn off the core device and try pinging the configured IP address; if it responds, you have a culprit. One of the two will need a different networking configuration.
- Are there restrictions or issues in your router or subnet that are preventing the core device from connecting? It may help to try connecting the core device to your PC directly.
fix 'Mismatch between gateware and software versions'?
------------------------------------------------------
Either reflash your core device with a newer version of ARTIQ (see :doc:`flashing`) or update your software (see :ref:`installing-upgrading`), depending on which is out of date.
.. note::
You can check the specific versions you are using at any time by comparing the gateware version given in the core startup log and the output given by adding ``--version`` to any of the standard ARTIQ front-end commands. This is especially useful when e.g. seeking help in the forum or at the helpdesk, where your running ARTIQ version is often crucial information to diagnose a problem.
Minor version mismatches are common, even in stable ARTIQ versions, but should not cause any issues. The ARTIQ release system ensures breaking changes are strictly limited to new release versions, or to the beta branch (which explicitly makes no promises of stability.) Updates that *are* applied to the stable version are usually bug fixes, documentation improvements, or other quality-of-life changes. As long as gateware and software are using the same stable release version of ARTIQ, even if there is a minor mismatch, no warning will be displayed.
change configuration settings of satellite devices?
---------------------------------------------------
Currently, it is not possible to reach satellites through ``artiq_coremgmt config``, although this is being worked on. On Kasli, use :class:`~artiq.frontend.artiq_mkfs` and :class:`~artiq.frontend.artiq_flash`; on Kasli-SoC, preload the SD card with a ``config.txt``, formatted as a list of ``key=value`` pairs, one per line.
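For instance, a ``config.txt`` along the following lines (the values shown are placeholders, and which keys are relevant depends on your hardware and its role; adapt them to your own system) could preset the network address and RTIO clock source: ::

    ip=192.168.1.76
    rtio_clock=ext0_synth0_125to125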
Don't worry about individually flashing idle or startup kernels. If your idle or startup kernel contains subkernels, it will automatically compile as a ``.tar``, which you only need to flash to the master.
fix unreliable DRTIO master-satellite links?
--------------------------------------------
Inconsistent DRTIO connections, especially with odd or absent errors in the core logs, are often a symptom of overheating either in the master or satellite boards. Check the core device fans for failure or defects. Improve air circulation around the crate or attach additional fans to see if that improves or resolves the issue. In the long term, fan trays to be rack-mounted together with the crate are a clean solution to these kinds of problems.
add or remove EEM peripherals or DRTIO satellites?
--------------------------------------------------
Adding new real-time hardware to an ARTIQ system almost always means reflashing the core device; if you are adding new satellite core devices, they will have to be flashed as well. If you have obtained your upgrades from M-Labs or QUARTIQ, updated binaries and reflashing support will normally be offered to you directly. In any other case, track down your JSON system description file(s), bring them up to date with the updated state of your system, and see :doc:`building_developing`.
Once you have an updated set of binaries, reflash the core device, following the instructions in :doc:`flashing`. Be sure to update your device database before starting experimentation; run :mod:`~artiq.frontend.artiq_ddb_template` on your system description(s) to update the local devices, and copy over any aliases or entries for NDSP controllers you may have been using. Note that the device database is a Python file, and the generated file of local devices can also simply be imported into the final version, allowing for dynamic modifications, especially in complex systems that may have multiple device databases in use.
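For example, regenerating the local (real-time) portion of a device database from an updated system description might look like the following, where the file names are placeholders for your own: ::

    $ artiq_ddb_template my_updated_system.json -o device_db.py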
see command-line help?
----------------------
Like most, if not all, terminal utilities, ARTIQ commands, tools and applets print their help messages directly into the terminal and exit when run with the flag ``--help`` or ``-h``: ::
$ artiq_run -h
This is the simplest and most direct way of accessing the same usage and reference material that is replicated in this manual on the pages :doc:`main_frontend_tools` and :doc:`utilities`.
.. _faq-find-examples:
find ARTIQ examples?
--------------------

The official examples are stored in the ``examples`` folder of the ARTIQ package. You can find the location of the ARTIQ package on your machine with: ::

    python3 -c "import artiq; print(artiq.__path__[0])"

Copy the ``examples`` folder from that path into your home or user directory, and start experimenting! (Note that some examples have dependencies not included with a standard ARTIQ install, like matplotlib and numba. To run those examples properly, make sure those modules are accessible.)

If you have progressed past this level and would like to see more in-depth code or real-life examples of how other groups have handled running experiments with ARTIQ, see the "Community code" directory on the M-Labs `resources page <https://m-labs.hk/experiment-control/resources/>`_.

fix ``failed to connect to moninj`` in the dashboard?
-----------------------------------------------------

This and other similar messages almost always indicate that your device database lists controllers (for example, ``aqctl_moninj_proxy``) that either haven't been started or aren't reachable at the given host and port. See :ref:`mgmt-ctlmgr`, or simply run: ::

    $ artiq_ctlmgr

to let the controller manager start the necessary controllers automatically.
diagnose and fix sequence errors?
---------------------------------
Go through your code, keeping manual track of SED lanes. See the following example: ::
    @kernel
    def run(self):
        self.core.reset()
        with parallel:
            self.ttl0.on()              # lane0
            self.ttl_sma.pulse(800*us)  # lane1(rising) lane1(falling)
            with sequential:
                self.ttl1.on()          # lane2
                self.ttl2.on()          # lane3
                self.ttl3.on()          # lane4
                self.ttl4.on()          # lane5
                delay(800*us)
                self.ttl1.off()         # lane5
                self.ttl2.off()         # lane6
                self.ttl3.off()         # lane7
                self.ttl4.off()         # lane0
                self.ttl0.off()         # lane1 -> clashes with the falling edge of ttl_sma,
                                        #          which is already at +800us
In most cases, as in this one, it's relatively easy to rearrange the generation of events so that they will be better spread out across SED lanes without sacrificing actual functionality. One possible solution for the above sequence looks like: ::
    @kernel
    def run(self):
        self.core.reset()
        self.ttl0.on()          # lane0
        self.ttl_sma.on()       # lane1
        self.ttl1.on()          # lane2
        self.ttl2.on()          # lane3
        self.ttl3.on()          # lane4
        self.ttl4.on()          # lane5
        delay(800*us)
        self.ttl1.off()         # lane5
        self.ttl2.off()         # lane6
        self.ttl3.off()         # lane7
        self.ttl4.off()         # lane0 (no clash: new timestamp is higher than last)
        self.ttl_sma.off()      # lane1
        self.ttl0.off()         # lane2
In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.pulse` is split up into its component :meth:`~artiq.coredevice.ttl.TTLInOut.on` and :meth:`~artiq.coredevice.ttl.TTLInOut.off` so that events can be generated more linearly. It can also be worth keeping in mind that delaying by even a single coarse RTIO cycle between events avoids switching SED lanes at all; in contexts where perfect simultaneity is not a priority, this is an easy way to avoid sequencing issues. See again :ref:`sequence-errors`.
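To illustrate the last point: assuming a typical 125 MHz coarse RTIO clock (one coarse cycle = 8 ns = 8 machine units on most systems; check the timing of your own hardware), spacing two events by a single coarse cycle is enough to keep them from switching SED lanes: ::

    self.ttl1.off()
    delay_mu(8)     # one coarse RTIO cycle; avoids opening a new SED lane
    self.ttl2.off()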
understand applet commands?
---------------------------
The 'Command' field contains the exact terminal command used to open and operate the applet. The default ``${artiq_applet}`` prefix simply translates to something to the effect of ``python -m artiq.applets.``, intended to be immediately followed by the applet module name. The options suffixed after the module name are the same used in the command line, and a list of them can be shown by using the standard command line ``-h`` help flag: ::
$ python -m artiq.applets.plot_xy -h
in any terminal.
organize datasets in folders?
-----------------------------

Use the dot (".") in dataset names to separate folders. The GUI will automatically create and delete folders in the dataset tree display.
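For example, the following (hypothetical dataset names) will appear in the tree as a ``photodiode`` folder containing two entries: ::

    self.set_dataset("photodiode.dark_counts", 12, broadcast=True)
    self.set_dataset("photodiode.flux", 4.56, broadcast=True)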
organize applets in groups?
---------------------------
Create groups by left-clicking within the applet list and selecting 'New Group'. Move applets in and out of groups by dragging them with the mouse. To unselect an applet or a group, use CTRL+click.
organize experiment windows in the dashboard?
---------------------------------------------
@ -37,74 +184,38 @@ Experiment windows can be organized by using the following hotkeys:
The windows will be organized in the order they were last interacted with.
fix errors when restarting the management system after a crash?
----------------------------------------------------------------

On Windows in particular, abnormal shutdowns such as power outages or bluescreens can sometimes corrupt the organizational files used by the management system, resulting in errors to the tune of ``ValueError: source code string cannot contain null bytes`` when restarting. The easiest way to handle these problems is to delete the corrupted files and start from scratch.

If the master itself fails to start, it may be necessary to delete the dataset database or even ``last_rid.pyon`` to restart properly, but if the dashboard or browser fail, the problem is probably in the GUI configuration files, where the state of the GUI (arrangement of docks, applets, etc.) is kept. These files are backed up once whenever they are successfully loaded. Navigate to the user configuration directory (see :ref:`gui-config-files`) and look for a file suffixed ``.backup``. To restore the GUI, simply delete the corrupted configuration file and rename the backup to replace it.

create and use variable-length arrays in kernels?
-------------------------------------------------

You can't, in general; see the corresponding notes under :ref:`compiler-types`. ARTIQ kernels do not support heap allocation, meaning in particular that lists, arrays, and strings must be of constant size. One option is to preallocate everything, as mentioned on the Compiler page; another option is to chunk it and e.g. read 100 events per function call, push them upstream and retry until the gate time closes.
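As a rough sketch of the preallocation approach (the sizes and names here are purely illustrative): ::

    @kernel
    def run(self):
        self.core.reset()
        data = [0] * 64        # fixed size, known at compile time
        for i in range(64):
            data[i] = i * 2    # fill entries in place instead of appending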
write part of my experiment as a coroutine/asyncio task/generator?
------------------------------------------------------------------

You cannot change the API that your experiment exposes: :meth:`~artiq.language.environment.HasEnvironment.build`, :meth:`~artiq.language.environment.Experiment.prepare`, :meth:`~artiq.language.environment.Experiment.run` and :meth:`~artiq.language.environment.Experiment.analyze` need to be regular functions, not generators or asyncio coroutines. That would make reusing your own code in sub-experiments difficult and fragile. You can however wrap your own generators/coroutines/tasks in regular functions that you then expose as part of the API.

determine the pyserial URL to connect to a device by its serial number?
------------------------------------------------------------------------

You can list your system's serial devices and print their vendor/product id and serial number by running: ::

    $ python3 -m serial.tools.list_ports -v

This will give you the ``/dev/ttyUSBxx`` (or ``COMxx`` for Windows) device names. The ``hwid:`` field gives you the string you can pass via the ``hwgrep://`` feature of pyserial `serial_for_url() <https://pythonhosted.org/pyserial/pyserial_api.html#serial.serial_for_url>`_ in order to open a serial device.

The preferred way to specify a serial device is to make use of the ``hwgrep://`` URL: it allows for selecting the serial device by its USB vendor ID, product ID and/or serial number. These never change, unlike the device file name.

For instance, if you want to specify the Vendor/Product ID and the USB Serial Number, you can do: ::

    $ -d "hwgrep://<VID>:<PID> SNR=<serial_number>"
run unit tests?
---------------
@ -124,9 +235,44 @@ The core device tests require the following TTL devices and connections:
If TTL devices are missing, the corresponding tests are skipped.
.. _gui-config-files:

find the dashboard and browser configuration files?
---------------------------------------------------

::

    python -c "from artiq.tools import get_user_config_dir; print(get_user_config_dir())"
Additional Resources
====================
Other related documentation
---------------------------
- the `Sinara wiki <https://github.com/sinara-hw/meta/wiki>`_
- the `SiPyCo manual <https://m-labs.hk/artiq/sipyco-manual/>`_
- the `Migen manual <https://m-labs.hk/migen/manual/>`_
- in a pinch, the `M-labs internal docs <https://git.m-labs.hk/sinara-hw/assembly>`_
For more advanced questions, sometimes the `list of publications <https://m-labs.hk/experiment-control/publications/>`_ about experiments performed using ARTIQ may be interesting. See also the official M-Labs `resources <https://m-labs.hk/experiment-control/resources/>`_ page, especially the section on community code.
"Help, I've done my best and I can't get any further!"
------------------------------------------------------
- If you have an active M-Labs AFWS/support subscription, you can email helpdesk@ at any time for personalized assistance. Please include the following information:
- Your installed ARTIQ version (add ``--version`` to any of the standard ARTIQ commands)
- The variant name of your system (refer to the sticker on the crate if you aren't sure)
- The recent output of your core log, either through ``artiq_coremgmt`` (if you're able to contact your device by network), or over UART following :ref:`the guide here <connecting-UART>`
- How your problem happened, and what you've already tried to fix it
- Compare your materials with the examples; see also :ref:`finding ARTIQ examples <faq-find-examples>` above.
- Check the list of `active issues <https://github.com/m-labs/artiq/issues>`_ on the ARTIQ GitHub repository for possible known problems with ARTIQ. Search through the closed issues to see if your question or concern has been addressed before.
- Search the `M-Labs forum <https://forum.m-labs.hk/>`_ for similar problems, or make a post asking for help yourself.
- Look into the `Mattermost live chat <https://chat.m-labs.hk>`_ or the bridged IRC channel.
- Read the open source code and its docstrings and figure it out.
- If you're reasonably certain you've identified a bug, or if you'd like to suggest a feature that should be included in future ARTIQ releases, `file a GitHub issue <https://github.com/m-labs/artiq/issues/new/choose>`_ yourself, following one of the provided templates.
- In some odd cases, you may want to see the `mailing list archive <https://www.mail-archive.com/artiq@lists.m-labs.hk/>`_; the ARTIQ mailing list was shut down at the end of 2020 and was last regularly used during the time of ARTIQ-2 and 3, but for some older ARTIQ features, or to understand a development thought process, you may still find relevant information there.
In any situation, if you found the manual unclear or unhelpful, you might consider following the :ref:`directions for contribution <build-documentation>` and editing it to be more helpful for future readers.
View File
@ -9,13 +9,13 @@
Obtaining board binaries
------------------------

If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware for your system that corresponds to your currently installed version of ARTIQ using the ARTIQ firmware service (AFWS). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. The client :mod:`~artiq.frontend.afws_client` is included in all ARTIQ installations.

Run the command::

    $ afws_client <username> build <afws_directory> <variant>

Replace ``<username>`` with the login name that was given to you with the subscription, ``<variant>`` with the name of your system variant, and ``<afws_directory>`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email. For more information about :mod:`~artiq.frontend.afws_client` see also the corresponding entry on the :ref:`Utilities <afws-client>` page.

For certain configurations (KC705 or ZC706 only) it is also possible to source firmware from `the M-Labs Hydra server <https://nixbld.m-labs.hk/project/artiq>`_ (in ``main`` and ``zynq`` respectively).
@ -25,9 +25,9 @@ Installing and configuring OpenOCD
----------------------------------

.. warning::
    These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility :mod:`~artiq.frontend.artiq_flash`. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.

ARTIQ supplies the utility :mod:`~artiq.frontend.artiq_flash`, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the second is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.

.. note::
@ -78,12 +78,12 @@ On Windows
Writing the flash
-----------------

First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you can simply connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses :mod:`~artiq.frontend.artiq_coremgmt` to flash over network, an Ethernet connection and an IP address, supplied either with the ``-D`` option or in your :ref:`device database <device-db>`, are sufficient.

For Kasli-SoC or ZC706:

::

    $ artiq_coremgmt [-D IP_address] config write -f boot <afws_directory>/boot.bin
    $ artiq_coremgmt reboot

If the device is not reachable due to corrupted firmware or networking problems, extract the SD card and copy ``boot.bin`` onto it manually.
@ -91,16 +91,16 @@ For Kasli-SoC or ZC706:
For Kasli:

::

    $ artiq_flash -d <afws_directory>

For KC705:

::

    $ artiq_flash -t kc705 -d <afws_directory>

The SW13 switches need to be set to 00001.

Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for :mod:`~artiq.frontend.artiq_flash` in the :ref:`Utilities <flashing-loading-tool>` reference.

.. _connecting-uart:
View File
@ -1,10 +1,5 @@
Getting started with the core device
====================================

As a very first step, we will turn on a LED on the core device. Create a file ``led.py`` containing the following: ::
@ -14,12 +9,12 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
            self.core.reset()
            self.led0.on()

The central part of our code is our ``LED`` class, which derives from :class:`~artiq.language.environment.EnvExperiment`. Almost all experiments should derive from this class, which provides access to the environment as well as including the necessary experiment framework from the base-level :class:`~artiq.language.environment.Experiment`. It will call our :meth:`~artiq.language.environment.HasEnvironment.build` at the right time and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` we use to gain access to our devices ``core`` and ``led0``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method is a kernel and must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host).
@ -32,7 +27,7 @@ If you don't have a ``device_db.py`` for your system, consult :ref:`device-db` t
    python3 -c "import artiq; print(artiq.__path__[0])"

Run your code using :mod:`~artiq.frontend.artiq_run`, which is one of the ARTIQ front-end tools: ::

    $ artiq_run led.py
@ -51,7 +46,7 @@ Modify ``led.py`` as follows: ::
    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
@ -59,9 +54,9 @@ Modify ``led.py`` as follows: ::
            s = input_led_state()
            self.core.break_realtime()
            if s:
                self.led0.on()
            else:
                self.led0.off()

You can then turn the LED off and on by entering 0 or 1 at the prompt that appears: ::
@ -75,7 +70,7 @@ What happens is that the ARTIQ compiler notices that the ``input_led_state`` fun
The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. See also :ref:`compiler-types`.

Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :doc:`rtio` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
Real-time Input/Output (RTIO)
-----------------------------
@ -100,19 +95,11 @@ Create a new file ``rtio.py`` containing the following: ::
            delay(2*us)
            self.ttl0.pulse(2*us)

In its :meth:`~artiq.language.environment.HasEnvironment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database. In ARTIQ, TTL is used roughly synonymously with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.

When :meth:`~artiq.language.environment.Experiment.run`, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to. Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode. There are no input-only TTL channels.

The experiment then drives one million 2 µs long pulses separated by 2 µs each. Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run rtio.py``. Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%. This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable. Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.

Instead, inside the core device, output timing is generated by the gateware and the CPU only programs switching commands with certain timestamps that the CPU computes.
@ -166,11 +153,7 @@ Try the following code and observe the generated pulses on a 2-channel oscillosc
            self.ttl1.pulse(4*us)
        delay(4*us)

ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches. It simply remembers its position on the timeline (``now_mu``) when entering the ``parallel`` block and resets to that position after each individual statement. At the end of the block, the cursor is advanced to the furthest position it reached during the block. In other words, the statements in a ``parallel`` block are actually executed sequentially. Only the RTIO events generated by the statements are *scheduled* in parallel.

Remember that while ``now_mu`` resets at the beginning of each statement in a ``parallel`` block, the wall clock advances regardless. If a particular statement takes a long time to execute (which is different from -- and unrelated to! -- the events *scheduled* by the statement taking a long time), the wall clock may advance past the reset value, putting any subsequent statements inside the block into a situation of negative slack (i.e., resulting in :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` ). Sometimes underflows may be avoided simply by reordering statements within the parallel block. This especially applies to input methods, which generally necessarily block CPU progress until the wall clock has caught up to or overtaken the cursor.
@ -186,32 +169,34 @@ Within a parallel block, some statements can be scheduled sequentially again usi
delay(4*us)
.. warning::
``with parallel`` specifically 'parallelizes' the *top-level* statements inside a block. Consider as an example:
.. code-block::
:linenos:
for i in range(1000000):
with parallel:
self.ttl0.pulse(2*us)
if True:
self.ttl1.pulse(2*us)
self.ttl2.pulse(2*us)
delay(4*us)
This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of line #4; it will not repeat the reset at the deeper indentation level for #5 or #6.
In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 us. Internally, lines #5 and #6, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To schedule #5 and #6 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
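As an illustrative sketch (channel names as in the example above), that nested arrangement would look like this: ::

    for i in range(1000000):
        with parallel:
            self.ttl0.pulse(2*us)
            if True:
                with parallel:
                    self.ttl1.pulse(2*us)
                    self.ttl2.pulse(2*us)
        delay(4*us)

Here ``ttl1`` and ``ttl2`` are reset to the same timeline position as ``ttl0``, so all three pulses overlap and the ``parallel`` block only advances the cursor by 2 us.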
Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to cause sequencing issues in the gateware; see also :ref:`sequence-errors`.
.. _rtio-analyzer:
RTIO analyzer
-------------
The core device records all real-time I/O waveforms, as well as the variation of RTIO slack, into a circular buffer, the contents of which can be extracted using :mod:`~artiq.frontend.artiq_coreanalyzer`. Try for example: ::
from artiq.experiment import *
class Tutorial(EnvExperiment):
def build(self):
self.setattr_device("core")
@ -220,19 +205,37 @@ The core device records the real-time I/O waveforms into a circular buffer. It i
@kernel
def run(self):
self.core.reset()
for i in range(5):
self.ttl0.pulse(0.1 * ms)
delay(0.1 * ms)
When using :mod:`~artiq.frontend.artiq_run`, the recorded buffer data can be extracted directly into the terminal, using a command in the form of: ::
$ artiq_coreanalyzer -p
.. note::
The first time this command is run, it will retrieve the entire contents of the analyzer buffer, which may include every experiment you have run so far. For a more manageable introduction, run the analyzer once to clear the buffer, run the experiment, and then run the analyzer a second time, so that only the data from this single experiment is displayed.
This will produce a list of the exact output events submitted to RTIO, printed in chronological order, along with the state of both ``now_mu`` and ``rtio_counter_mu``. While useful in diagnosing some specific gateware errors (in particular, :ref:`sequencing issues <sequence-errors>`), it isn't the most readable of formats. An alternative is to export to VCD, which can be viewed using third-party tools such as GTKWave. Run the experiment again, and use a command in the form of: ::
$ artiq_coreanalyzer -w <file_name>.vcd
The ``<file_name>.vcd`` file should be immediately created and written. Check the directory the command was run in to find it.
.. tip::
Tutorials on GTKWave options (or other third-party tools) and how best to view VCD files can be found online. By default, the data in a trace like ``rtio_slack`` will probably be presented in a raw form. To see a stepped wave as in the ARTIQ dashboard, look for options to interpret the data as a real number, then as an analog signal.
Pay attention to the timescale of the waveform dock in your chosen viewer; if you have set your signals to display but nothing is visible, it is likely zoomed in or out much too far.
The easiest way to view recorded analyzer data, however, is directly in the ARTIQ dashboard, a feature which will be presented later in :ref:`interactivity-waveform`.
.. _getting-started-dma:
Direct Memory Access (DMA)
--------------------------
DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up' to ``now_mu``. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.
Try this: ::
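A sketch of what such an experiment can look like, using the ``core_dma`` device's ``record``, ``get_handle`` and ``playback_handle`` methods (the class name, channel and sequence name here are illustrative): ::

    from artiq.experiment import *

    class DMAPulses(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("core_dma")
            self.setattr_device("ttl0")

        @kernel
        def record(self):
            with self.core_dma.record("pulses"):
                # all RTIO operations now go to the "pulses" DMA buffer
                for i in range(50):
                    self.ttl0.pulse(100*ns)
                    delay(100*ns)

        @kernel
        def run(self):
            self.core.reset()
            self.record()
            # fetch a handle to the recorded sequence so playback is fast
            pulses_handle = self.core_dma.get_handle("pulses")
            self.core.break_realtime()
            # replay the recorded sequence at the current timeline position
            self.core_dma.playback_handle(pulses_handle)

Once the handle has been fetched, ``playback_handle`` can be called repeatedly to replay the same sequence at different points on the timeline.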
@ -1,27 +1,31 @@
Using the management system
===========================
In practice, rather than managing experiments by executing :mod:`~artiq.frontend.artiq_run` over and over, most use cases are better served by using the ARTIQ *management system*. This is the high-level application part of ARTIQ, which can be used to schedule experiments, manage devices and parameters, and distribute and store results. It also allows for distributed use of ARTIQ, with a single master coordinating demands on the system issued over the network by multiple clients. Using this system, multiple users on different machines can schedule experiments or analyze results on the same ARTIQ system, potentially simultaneously, without interfering with each other.
The management system consists of at least two parts:
a. the **ARTIQ master,** which runs on a single machine, facilitates communication with the core device and peripherals, and is responsible for most of the actual duties of the system,
b. one or more **ARTIQ clients,** which may be local or remote and which communicate only with the master. Both a GUI (the **dashboard**) and a straightforward **command line client** are provided, with many of the same capabilities.
as well as, optionally,
c. one or more **controller managers**, which coordinate the operation of certain (generally, non-realtime) classes of device and provide certain services to the clients,
d. and one or more instances of the **ARTIQ browser**, a GUI application designed to facilitate the analysis of experiment results and datasets.
In this tutorial, we will explore the basic operation of the management system. Because the various components of the management system run wholly on the host machine, and not on the core device (in other words, they do not inherently involve any kernel functions), it is not necessary to have a core device or any specialized hardware set up to use it. The examples in this tutorial can all be carried out using only your host computer.
Running your first experiment with the master
---------------------------------------------
Until now, we have executed experiments using :mod:`~artiq.frontend.artiq_run`, which is a simple standalone tool that bypasses the management system. We will now see how to run an experiment using a master and a client. In this arrangement, the master is responsible for communicating with the core device, scheduling and keeping track of experiments, and carrying out RPCs the core device may call for. Clients submit experiments to the master to be scheduled and, if necessary, query the master about the state of experiments and their results.
First, create a folder called ``~/artiq-master``. Copy into it the ``device_db.py`` for your system (your device database, exactly as in :doc:`getting_started_core`); the master uses the device database in the same way as :mod:`~artiq.frontend.artiq_run` when communicating with the core device.
.. tip::
Since no devices are actually used in these examples, you can also use a device database in the model of the ``device_db.py`` from ``examples/no_hardware``, which uses resources from ``artiq/sim`` instead of referencing or requiring any real local hardware.
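A rough sketch of the core device entry in such a simulated database (see the actual ``examples/no_hardware/device_db.py`` for the full set of simulated devices): ::

    device_db = {
        "core": {
            "type": "local",
            "module": "artiq.sim.devices",
            "class": "Core",
            "arguments": {}
        },
        # ... further simulated devices go here ...
    }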
Secondly, create a subfolder ``~/artiq-master/repository`` to contain experiments. By default, the master scans for a folder of this name to determine what experiments are available; if you'd prefer to use a different name, this can be changed by running ``artiq_master -r [folder name]`` instead of ``artiq_master`` below. Experiments don't have to be in the repository to be submitted to the master, but the repository contains those experiments the master is automatically aware of.
Create a very simple experiment in ``~/artiq-master/repository`` and save it as ``mgmt_tutorial.py``: ::
@ -35,63 +39,117 @@ Create a very simple experiment in ``~/artiq-master/repository`` and save it as
def run(self):
print("Hello World")
Start the master with: ::
$ cd ~/artiq-master
$ artiq_master
This command should display ``ARTIQ master is now ready`` and not return, as the master keeps running. In another terminal, use the client to request this experiment: ::
$ artiq_client submit repository/mgmt_tutorial.py
This command should print a message in the format ``RID: 0``, telling you the scheduling ID assigned to the experiment by the master, and exit. Note that it doesn't matter *where* the client is run; the client does not require direct access to ``device_db.py`` or the repository folder, and only directly communicates with the master. Relatedly, the path to an experiment a client submits is given relative to the location of the *master*, not the client.
Return to the terminal where the master is running. You should see an output similar to: ::
INFO:worker(0,mgmt_tutorial.py):print:Hello World
In other words, a worker created by the master has executed the experiment and carried out the print instruction. Congratulations!
.. tip::
In order to run the master and the clients on different PCs, start the master with a ``--bind`` flag: ::
$ artiq_master --bind [hostname or IP to bind to]
and then use the option ``--server`` or ``-s`` for clients, as in: ::
$ artiq_client -s [hostname or IP of the master]
$ artiq_dashboard -s [hostname or IP of the master]
Both IPv4 and IPv6 are supported. See also the individual references :mod:`~artiq.frontend.artiq_master`, :mod:`~artiq.frontend.artiq_dashboard`, and :mod:`~artiq.frontend.artiq_client` for more details.
You may also notice that the master has created some other organizational files in its home directory, notably a folder ``results``, where a HDF5 record is preserved of every experiment that is submitted and run. The files in ``results`` will be discussed in greater detail in :doc:`using_data_interfaces`.
Running the dashboard and controller manager
--------------------------------------------
Submitting experiments with :mod:`~artiq.frontend.artiq_client` has some interesting qualities: for instance, experiments can be requested simultaneously by different clients and be relied upon to execute cleanly in sequence, which is useful in a distributed context. On the other hand, on a local level, it doesn't necessarily carry many practical advantages over using :mod:`~artiq.frontend.artiq_run`. The real convenience of the management system lies in its GUI, the dashboard. We will now try submitting an experiment using the dashboard.
First, start the controller manager: ::
$ artiq_ctlmgr
Like the master, this command should not return, as the controller manager keeps running. Note that the controller manager requires access to the device database, but not in the local directory -- it gets that access automatically by connecting to the master.
.. note::
We will not be using controllers in this part of the tutorial. Nonetheless, the dashboard will expect to be able to contact certain controllers given in the device database, and print error messages if this isn't the case (e.g. ``Is aqctl_moninj_proxy running?``). It is equally possible to check your device database and start the requisite controllers manually, or to temporarily delete their entries from ``device_db.py``, but it's normally quite convenient to let the controller manager handle things. The role and use of controller managers will be covered in more detail in :doc:`using_data_interfaces`.
In a third terminal, start the dashboard: ::
$ artiq_dashboard
Like :mod:`~artiq.frontend.artiq_client`, the dashboard requires no direct access to the device database or the repository. It communicates with the master to receive information about available experiments and the state of the system.
You should see the list of experiments from the ``repository`` in the dock called 'Explorer'. In our case, this will only be the single experiment we created, listed by the name we gave it in the docstring inside the triple quotes, "Management tutorial". Select it, and in the window that opens, click 'Submit'.
This time you will find the output displayed directly in the dock called 'Log'. The dashboard log combines the master's console output, the dashboard's own logs, and the device logs of the core device itself (if there is one in use); normally, this is the only log it's necessary to check.
Adding a new experiment
-----------------------
Create a new file in your ``repository`` folder, called ``timed_tutorial.py``: ::
from artiq.experiment import *
import time
class TimedTutorial(EnvExperiment):
"""Timed tutorial"""
def build(self):
pass # no devices used
def run(self):
print("Hello World")
time.sleep(10)
print("Goodnight World")
Save it. You will notice that it does not immediately appear in the 'Explorer' dock. For stability reasons, the master operates with a cached idea of the repository, and changes in the file system will often not be reflected until a *repository rescan* is triggered.
You can ask it to do this through the command-line client: ::
$ artiq_client scan-repository
or you can right-click in the Explorer and select 'Scan repository HEAD'. Now you should be able to select and submit the new experiment.
If you switch the 'Log' dock to its 'Schedule' tab while the experiment is still running, you will see the experiment appear, displaying its RID, status, priority, and other information. Click 'Submit' again while the first experiment is in progress, and a second iteration of the experiment will appear in the Schedule, queued up to execute next in line.
.. note::
You may have noted that experiments can be submitted with a due date, a priority level, a pipeline identifier, and other specific settings. Some of these are self-explanatory. Many are scheduling-related. For more information on experiment scheduling, see :ref:`experiment-scheduling`.
In the meantime, you can try out submitting either of the two experiments with different priority levels and take a look at the queues that ensue. If you are interested, you can try submitting experiments through the command line client at the same time, or even open a second dashboard in a different terminal. Observe that no matter the source, all submitted experiments will be accounted for and handled by the scheduler in an orderly way.
.. _mgmt-arguments:
Adding arguments
----------------
Experiments may have arguments, values which can be set in the dashboard on submission and used in the experiment's code. Create a new experiment called ``argument_tutorial.py``, and give it the following :meth:`~artiq.language.environment.HasEnvironment.build` and :meth:`~artiq.language.environment.Experiment.run` functions: ::
def build(self):
self.setattr_argument("count", NumberValue(precision=0, step=1))
def run(self):
for i in range(self.count):
print("Hello World", i)
The method :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` acts to set the argument and make its value accessible, similar to the effect of :meth:`~artiq.language.environment.HasEnvironment.setattr_device`. The second input sets the type of the argument; here, :class:`~artiq.language.environment.NumberValue` represents a floating point numerical value. To learn what other types are supported, see :class:`artiq.language.environment` and :class:`artiq.language.scan`.
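For instance, a ``build`` method along these lines would expose several different argument types in the submission window (the argument names here are purely illustrative): ::

    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))
        self.setattr_argument("message", StringValue(default="Hello World"))
        self.setattr_argument("shout", BooleanValue(default=False))
        self.setattr_argument("mode", EnumerationValue(["fast", "slow"], default="fast"))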
Rescan the repository as before. Open the new experiment in the dashboard. Above the submission options, you should now see a spin box that allows you to set the value of ``count``. Try setting it and submitting it.
Interactive arguments
---------------------
With standard arguments, it is only possible to use :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` in :meth:`~artiq.language.environment.HasEnvironment.build`; these arguments are always requested at submission time. However, it is also possible to use *interactive* arguments, which can be requested and supplied inside :meth:`~artiq.language.environment.Experiment.run`, while the experiment is being executed. Modify the experiment as follows (and push the result): ::
def build(self):
pass
@ -104,72 +162,81 @@ It is also possible to use interactive arguments, which may be requested and sup
interactive.setattr_argument("repeat", BooleanValue(True))
repeat = interactive.repeat
Close and reopen the submission window, or click on the button labeled 'Recompute all arguments', in order to update the submission parameters. Submit again. It should print once, then wait; you may notice in 'Schedule' that the experiment does not exit, but hangs at status 'running'.
Now, in the same dock as 'Explorer', navigate to the tab 'Interactive Args'. You can now choose and submit a value for 'repeat'. Every time an interactive argument is requested, the experiment pauses until an input is supplied.
.. note::
If you choose to 'Cancel' instead, an :exc:`~artiq.language.environment.CancelledArgsError` will be raised (which an experiment can catch, instead of halting).
While regular arguments are all requested simultaneously before submitting, interactive arguments can be requested at any point. In order to request multiple interactive arguments at once, place them within the same ``with`` block; see also the example ``interactive.py`` in the ``examples/no_hardware`` folder.
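A sketch of requesting two values at once, assuming the same ``self.interactive(...)`` context manager that produces the ``interactive`` object seen above (argument names are illustrative): ::

    def run(self):
        with self.interactive(title="Batch settings") as interactive:
            interactive.setattr_argument("repeat", BooleanValue(True))
            interactive.setattr_argument("n_points", NumberValue(precision=0, step=1))
        # values become available once the block exits and the user has responded
        print("repeat:", interactive.repeat, "points:", interactive.n_points)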
.. _master-setting-up-git:
Setting up Git integration
--------------------------
So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with tracking modifications to experiments and with tracing results back to the particular version of an experiment that produced them.
.. note::
The workflow we will describe in this tutorial corresponds to a situation where the computer running the ARTIQ master is also used as a Git server to which multiple users may contribute code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head. See the :ref:`Management system <mgmt-git-integration>` page for notes on alternate workflows.
We will use our current ``repository`` folder as the working directory for making local modifications to the experiments, move it away from the master's data directory, and replace it with a new ``repository`` folder, which will hold only the Git data used by the master. Stop the master with Ctrl+C and enter the following commands: ::
$ cd ~/artiq-master
$ mv repository ~/artiq-work
$ mkdir repository
$ cd repository
$ git init --bare
Now initialize a regular (non-bare) Git repository in our working directory: ::
$ cd ~/artiq-work
$ git init
Then add and commit our experiments: ::
$ git add mgmt_tutorial.py
$ git add timed_tutorial.py
$ git commit -m "First version of the tutorial experiments"
and finally, connect the two repositories and push the commit upstream to the master's repository: ::
$ git remote add origin ~/artiq-master/repository
$ git push -u origin master
.. tip::
If you are not familiar with command-line Git and would like to understand these commands in more detail, look for an introductory tutorial on basic Git usage; there are many available online.
Start the master again with the ``-g`` flag, which tells it to treat its ``repository`` folder as a bare Git repository: ::
$ cd ~/artiq-master
$ artiq_master -g
.. note::
You need at least one commit in the repository before the master can be started.
Now you should be able to restart the dashboard and see your experiments there.
To make things more convenient, we will make Git tell the master to rescan the repository whenever new data is pushed from downstream. Create a file ``~/artiq-master/repository/hooks/post-receive`` with the following contents: ::
#!/bin/sh
artiq_client scan-repository --async
Then set its execution permissions: ::
$ chmod 755 repository/hooks/post-receive
.. note::
Remote client machines may also push and pull into the master repository, using e.g. Git over SSH.
Let's now make a modification to the experiments. In the working directory ``artiq-work``, open ``mgmt_tutorial.py`` again and add an exclamation mark to the end of "Hello World". Before committing it, check that the experiment can still be executed correctly by submitting it directly from the working directory, using the command-line client: ::
$ artiq_client submit ~/artiq-work/mgmt_tutorial.py
.. note::
Alternatively, right-click in the Explorer dock and select the 'Open file outside repository' option for the same effect.
Verify the log in the GUI. If you are happy with the result, commit the new version and push it into the master's repository: ::
@ -177,86 +244,17 @@ Verify the log in the GUI. If you are happy with the result, commit the new vers
$ git commit -a -m "More enthusiasm"
$ git push
Notice that commands other than ``git commit`` and ``git push`` are no longer necessary. The Git hook should cause a repository rescan automatically, and submitting the experiment in the dashboard should run the new version, with enthusiasm included.
As an exercise, add another experiment to the repository, commit and push the result, and verify that it appears in the GUI.
.. _getting-started-datasets:
Datasets
--------
ARTIQ uses the concept of *datasets* to manage the data exchanged with experiments, both supplied *to* experiments (generally, from other experiments) and saved *from* experiments (i.e. results or records).
Modify the experiment as follows, once again using a single non-interactive argument: ::
def build(self):
self.setattr_argument("count", NumberValue(precision=0, step=1))
def run(self):
self.set_dataset("parabola", np.full(self.count, np.nan), broadcast=True)
for i in range(self.count):
self.mutate_dataset("parabola", i, i*i)
time.sleep(0.5)
.. tip::
You need to import the ``time`` module, and the ``numpy`` module as ``np``.
Commit, push and submit the experiment as before. Go to the "Datasets" dock of the GUI and observe that a new dataset has been created. Once the experiment has finished executing, navigate to ``~/artiq-master/`` in a terminal or file manager and see that a new directory has been created called ``results``. Your dataset should be stored as an HDF5 dump file in ``results`` under ``<date>/<hour>``.
.. note::
By default, datasets are primarily attributes of the experiments that run them, and are not shared with the master or the dashboard. The ``broadcast=True`` argument specifies that the dataset should be shared in real-time with the master, which is responsible for dispatching it to the clients. A more detailed description of dataset methods and their arguments can be found under :class:`~artiq.language.environment.HasEnvironment`.
Open the file for your first dataset with HDFView, h5dump, or any similar third-party tool, and observe the data we just generated as well as the Git commit ID of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645`` which represents a particular state of the Git repository). A list of Git commit IDs can be found by running the ``git log`` command in ``~/artiq-master/``.
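The same information can also be read programmatically, for example with ``h5py`` (the file name below is hypothetical, and the layout assumed is the usual one, with datasets stored under the ``datasets`` group): ::

    import h5py

    with h5py.File("000000000-MgmtTutorial.h5", "r") as f:
        print(f["datasets"]["parabola"][:])  # the values we generated
        print(f["expid"][()])                # experiment metadata, including arguments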
Applets
-------
Often, rather than the HDF5 dump, we would like to see our result datasets in readable graphical form, preferably immediately. In the ARTIQ dashboard, this is achieved by programs called "applets". Applets are independent programs that add simple GUI features and are run as separate processes (to achieve goals of modularity and resilience against poorly written applets). ARTIQ supplies several applets for basic plotting in the ``artiq.applets`` module, and users may write their own using the provided interfaces.
.. seealso::
For developing your own applets, see the references provided on the :ref:`management system page<applet-references>` of this manual.
For our ``parabola`` dataset, we will create an XY plot using the provided ``artiq.applets.plot_xy``. Applets are configured with simple command line options; we can find the list of available options using the ``-h`` flag. Try running: ::
$ python3 -m artiq.applets.plot_xy -h
In our case, we only need to supply our dataset as the y-values to be plotted. Navigate to the "Applet" dock in the dashboard. Right-click in the empty list and select "New applet from template" and "XY". This will generate a version of the applet command that shows all applicable options; edit the command so that it retrieves the ``parabola`` dataset and erase the unused options. The line should now be: ::
${artiq_applet}plot_xy parabola
Run the experiment again, and observe how the points are added one by one to the plot.
RTIO analyzer and the dashboard
-------------------------------
The :ref:`rtio-analyzer` is fully integrated into the dashboard. Navigate to the "Waveform" tab in the dashboard. After running the example experiment in that section, or any other experiment producing an analyzer trace, the waveform results will be directly displayed in this tab. It is also possible to save a trace, reopen it, or export it to VCD directly from the GUI.
Non-RTIO devices and the controller manager
-------------------------------------------
As described in :ref:`artiq-real-time-i-o-concepts`, there are two classes of equipment a laboratory typically finds itself needing to operate. So far, we have largely discussed ARTIQ in terms of one only: the kind of specialized hardware that requires the very high-resolution timing control ARTIQ provides. The other class comprises the broad range of regular, "slow" laboratory devices, which do *not* require nanosecond precision and can generally be operated perfectly well from a regular PC over a non-realtime channel such as USB.
To handle these "slow" devices, ARTIQ uses *controllers*, intermediate pieces of software which are responsible for the direct I/O to these devices and offer RPC interfaces to the network. Controllers can be started and run standalone, but are generally handled through the *controller manager*, available through the ``artiq-comtools`` package (normally automatically installed together with ARTIQ.) The controller manager in turn communicates with the ARTIQ master, and through it with clients or the GUI.
To start the controller manager (the master must already be running), the only command necessary is: ::
$ artiq_ctlmgr
Controllers may be run on a different machine from the master, or even on multiple different machines, alleviating cabling issues and OS compatibility problems. In this case, communication with the master happens over the network. If multiple machines are running controllers, they must each run their own controller manager (for which only ``artiq-comtools`` and its few dependencies are necessary, not the full ARTIQ installation.) Use the ``-s`` and ``--bind`` flags of ``artiq_ctlmgr`` to set IP addresses or hostnames to connect and bind to.
Note, however, that the controller for the particular device you are trying to connect to must first exist and be part of a complete Network Device Support Package, or NDSP. :doc:`Some NDSPs are already available <list_of_ndsps>`. If your device is not on this list, the system is designed to make it quite possible to write your own. For this, see the :doc:`developing_a_ndsp` page.
Once a device is correctly listed in ``device_db.py``, it can be added to an experiment using ``self.setattr_device([device_name])`` and the methods its API offers called straightforwardly as ``self.[device_name].[method_name]``. As long as the requisite controllers are running and available, the experiment can then be executed with ``artiq_run`` or through the management system.
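For illustration, a controller entry for the Lab Brick digital attenuator and its use in an experiment might look roughly like this (host, port and method names should be checked against the NDSP's own documentation): ::

    # in device_db.py
    device_db["lda"] = {
        "type": "controller",
        "best_effort": True,
        "host": "::1",
        "port": 3253,
        "command": "aqctl_lda -p {port} --bind {bind}"
    }

    # in an experiment
    def build(self):
        self.setattr_device("lda")

    def run(self):
        self.lda.set_attenuation(30)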
The ARTIQ session
-----------------
Often, you will want to run an instance of the controller manager and dashboard along with the ARTIQ master, whether or not you also intend to allow other clients to connect remotely. For convenience, all three can be started simultaneously with a single command: ::
$ artiq_session
Arguments to the individual tools (including ``-s`` and ``--bind``) can still be specified using the ``-m``, ``-d`` and ``-c`` options for master, dashboard and manager respectively. Use an equals sign to avoid confusion in parsing, for example: ::
$ artiq_session -m=-g
to start the session with the master in Git mode. See also :mod:`~artiq.frontend.artiq_session`.
@ -24,12 +24,14 @@ ARTIQ manual
rtio
getting_started_core
getting_started_mgmt
using_data_interfaces
using_drtio_subkernels
.. toctree::
:caption: ARTIQ components
:maxdepth: 2
overview
environment
compiler
management_system
@ -48,13 +50,15 @@ ARTIQ manual
:caption: References
:maxdepth: 2
main_frontend_tools
core_language_reference
core_drivers_reference
mgmt_system_reference
utilities
.. toctree::
:caption: Addenda
:maxdepth: 2
extending_rtio
faq
@ -19,7 +19,7 @@ The easiest way to obtain ARTIQ is to install it into the user environment with
$ nix profile install git+https://github.com/m-labs/artiq.git
Answer "Yes" to the questions about setting Nix configuration options (for more details see 'Troubleshooting' below.) You should now have a minimal installation of ARTIQ, where the usual front-end commands (:mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, :mod:`~artiq.frontend.artiq_dashboard`, etc.) are all available to you.
This installation is however quite limited, as Nix creates a dedicated Python environment for the ARTIQ commands alone. This means that other useful Python packages, which ARTIQ is not dependent on but which you may want to use in your experiments (pandas, matplotlib...), are not available.
@ -82,15 +82,23 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
};
}
.. note::
You might consider adding matplotlib and numba in particular, as these are required by certain ARTIQ example experiments.
You can now spawn a shell containing these packages by running ``$ nix shell`` in the directory containing the ``flake.nix``. This should make both the ARTIQ commands and all the additional packages available to you. You can exit the shell with Control+D or with the command ``exit``. A first execution of ``$ nix shell`` may take some time, but for any future repetitions Nix will use cached packages and startup should be much faster.
You might be interested in creating multiple directories containing different ``flake.nix`` files which represent different sets of packages for different purposes. If you are familiar with Conda, using Nix in this way is similar to having multiple Conda environments.
To find more packages you can browse the `Nix package search <https://search.nixos.org/packages>`_ website. If your favorite package is not available with Nix, contact M-Labs using the helpdesk@ email.
.. note::
If you find you prefer using flakes to your original ``nix profile`` installation, you can remove it from your system by running: ::
$ nix profile list
finding the entry with its ``Original flake URL`` listed as the GitHub ARTIQ repository, noting its index number (in a fresh Nix system it will normally be the only entry, at index 0), and running: ::
$ nix profile remove [index]
While using flakes, ARTIQ is not 'installed' as such in any permanent way. However, Nix will preserve independent cached packages in ``/nix/store`` for each flake, which over time or with many different flakes and versions can take up large amounts of storage space. To clear this cache, run ``$ nix-collect-garbage``.
.. _installing-troubleshooting:
Troubleshooting
@ -3,30 +3,19 @@ List of available NDSPs
The following network device support packages are available for ARTIQ. If you would like to add yours to this list, just send us an email or a pull request.
.. csv-table::
:header: Equipment, Nix package, MSYS2 package, Documentation, URL
PDQ2, Not available, Not available, `HTML <https://pdq.readthedocs.io>`_, `GitHub <https://github.com/m-labs/pdq>`__
Lab Brick Digital Attenuator, ``lda``, ``lda``, Not available, `GitHub <https://github.com/m-labs/lda>`__
Novatech 409B, ``novatech409b``, ``novatech409b``, Not available, `GitHub <https://github.com/m-labs/novatech409b>`__
Thorlabs T-Cubes, ``thorlabs_tcube``, ``thorlabs_tcube``, Not available, `GitHub <https://github.com/m-labs/thorlabs_tcube>`__
Korad KA3005P, ``korad_ka3005p``, ``korad_ka3005p``, Not available, `GitHub <https://github.com/m-labs/korad_ka3005p>`__
Newfocus 8742, ``newfocus8742``, ``newfocus8742``, Not available, `GitHub <https://github.com/quartiq/newfocus8742>`__
Princeton Instruments PICam, Not available, Not available, Not available, `GitHub <https://github.com/quartiq/picam>`__
Anel HUT2 power distribution, ``hut2``, Not available, Not available, `GitHub <https://github.com/quartiq/hut2>`__
TOPTICA lasers, ``toptica-lasersdk-artiq``, Not available, Not available, `GitHub <https://github.com/quartiq/lasersdk-artiq>`__
HighFinesse wavemeters, ``highfinesse-net``, Not available, Not available, `GitHub <https://github.com/quartiq/highfinesse-net>`__
InfluxDB database, Not available, Not available, `HTML <https://gitlab.com/charlesbaynham/artiq_influx_generic>`__, `GitHub <https://gitlab.com/charlesbaynham/artiq_influx_generic>`__
MSYS2 packages all start with the ``mingw-w64-clang-x86_64-`` prefix.
@ -0,0 +1,90 @@
Main front-end tools
====================
These are the top-level commands used to run and manage ARTIQ experiments. Not all of the ARTIQ front-end is described here (many additional useful commands are presented in this manual in :doc:`utilities`) but these together comprise the main points of access for using ARTIQ as a system.
.. Note that ARTIQ frontend has no docstrings and the ..automodule directives display nothing; they are there to make :mod: references function correctly, since sphinx-argparse does not support links to ..argparse directives in the same way.
:mod:`artiq.frontend.artiq_run`
-------------------------------
.. automodule:: artiq.frontend.artiq_run
.. argparse::
:ref: artiq.frontend.artiq_run.get_argparser
:prog: artiq_run
:nodefault:
.. _frontend-artiq-master:
:mod:`artiq.frontend.artiq_master`
----------------------------------
.. automodule:: artiq.frontend.artiq_master
.. argparse::
:ref: artiq.frontend.artiq_master.get_argparser
:prog: artiq_master
:nodefault:
.. _frontend-artiq-client:
:mod:`artiq.frontend.artiq_client`
----------------------------------
.. automodule:: artiq.frontend.artiq_client
.. argparse::
:ref: artiq.frontend.artiq_client.get_argparser
:prog: artiq_client
:nodefault:
.. _frontend-artiq-dashboard:
:mod:`artiq.frontend.artiq_dashboard`
-------------------------------------
.. automodule:: artiq.frontend.artiq_dashboard
.. argparse::
:ref: artiq.frontend.artiq_dashboard.get_argparser
:prog: artiq_dashboard
:nodefault:
.. _frontend-artiq-browser:
:mod:`artiq.frontend.artiq_browser`
-----------------------------------
.. automodule:: artiq.frontend.artiq_browser
.. argparse::
:ref: artiq.frontend.artiq_browser.get_argparser
:prog: artiq_browser
:nodefault:
:mod:`artiq.frontend.artiq_session`
-----------------------------------
.. automodule:: artiq.frontend.artiq_session
ARTIQ session manager. Automatically runs the master, dashboard and local controller manager on the current machine. The latter requires the ``artiq-comtools`` package to be installed.
When supplying arguments to individual front-end tools, use ``=`` to avoid ambiguity in argument parsing, e.g.: ::
artiq_session -m=-g -m=--device-db=alternate_device_db.py -c=-v
and so on.
.. argparse::
:ref: artiq.frontend.artiq_session.get_argparser
:prog: artiq_session
:nodescription:
:nodefault:
:mod:`artiq_comtools.artiq_ctlmgr`
----------------------------------
.. automodule:: artiq_comtools.artiq_ctlmgr
ARTIQ controller manager. Supplied in the separate package ``artiq-comtools``, which is included with a standard ARTIQ installation but can also be `installed standalone <https://github.com/m-labs/artiq-comtools>`_, with the intention of making it easier to run controllers and controller managers on machines where a full ARTIQ installation may not be necessary or convenient.
.. argparse::
:ref: artiq_comtools.artiq_ctlmgr.get_argparser
:prog: artiq_ctlmgr
:nodescription:
:nodefault:

View File

@ -10,43 +10,53 @@ Components
Master Master
^^^^^^ ^^^^^^
The :ref:`ARTIQ master <frontend-artiq-master>` is responsible for managing the parameter and device databases, the experiment repository, scheduling and running experiments, archiving results, and distributing real-time results. It is a headless component, and one or several clients (command-line or GUI) use the network to interact with it. The :ref:`ARTIQ master <frontend-artiq-master>` is responsible for managing the dataset and device databases, the experiment repository, scheduling and running experiments, archiving results, and distributing real-time results. It is a headless component, and one or several clients (command-line or GUI) use the network to interact with it.
It should not be confused with the 'master' device in a DRTIO system, which is only a designation for the particular core device acting as central node in a distributed configuration of ARTIQ. The two concepts are otherwise unrelated. The master expects to be given a directory on startup, the experiment repository, containing the experiments, which are automatically tracked and communicated to clients. By default, it simply looks for a directory called ``repository``. The ``-r`` flag can be used to substitute an alternate location.
Command-line client It also expects access to a ``device_db.py``, with a corresponding flag ``--device-db`` to substitute a different file name. Additionally, it will reference or create certain files in the directory it is run in, among them ``dataset_db.mdb``, the LMDB database containing persistent datasets, ``last_rid.pyon``, which simply contains the last used RID, and the ``results`` directory.
^^^^^^^^^^^^^^^^^^^
The :ref:`command-line client <frontend-artiq-client>` connects to the master and permits modification and monitoring of the databases, monitoring the experiment schedule and log, and submitting experiments. .. note::
Because the other parts of the management system all appear to have access to the information stored in these files, confusion can sometimes arise about where it is really stored and how it is distributed. Device databases, datasets, results, and experiments are all solely kept and administered by the master, which communicates information to dashboards, browsers, and clients over the network whenever necessary.
Dashboard Notably, clients and dashboards do not send experiments to the master; they request them from the array of experiments the master knows about, primarily those in ``repository``, but also those in the master's local file system, if 'Open file outside repository' is selected. This is true even if ``repository`` is configured as a Git repository and cloned on other machines.
^^^^^^^^^
The :ref:`dashboard <frontend-artiq-dashboard>` connects to the master and is the main method of interacting with it. The main features of the dashboard are scheduling of experiments, setting of their arguments, examining the schedule, displaying real-time results, and debugging TTL and DDS channels in real time. The ARTIQ master should not be confused with the 'master' device in a DRTIO system, which is only a designation for the particular core device acting as central node in a distributed configuration of ARTIQ. The two concepts are otherwise unrelated.
Clients
^^^^^^^
The :ref:`command-line client <frontend-artiq-client>` connects to the master and permits modification and monitoring of the databases, reading the experiment schedule and log, and submitting experiments.
The :ref:`dashboard <frontend-artiq-dashboard>` connects to the master and is the main method of interacting with it. The main roles of the dashboard are scheduling of experiments, setting of their arguments, examining the schedule, displaying real-time results, and debugging TTL and DDS channels in real time.
The dashboard remembers and restores GUI state (window/dock positions, last values entered by the user, etc.) in between instances. This information is stored in a file called ``artiq_dashboard_{server}_{port}.pyon`` in the configuration directory (e.g. generally ``~/.config/artiq`` for Unix, same as data directory for Windows), distinguished in subfolders by ARTIQ version. The dashboard remembers and restores GUI state (window/dock positions, last values entered by the user, etc.) in between instances. This information is stored in a file called ``artiq_dashboard_{server}_{port}.pyon`` in the configuration directory (e.g. generally ``~/.config/artiq`` for Unix, same as data directory for Windows), distinguished in subfolders by ARTIQ version.
Browser
^^^^^^^
The :ref:`browser <frontend-artiq-browser>` is used to read ARTIQ ``results`` HDF5 files and run experiment :meth:`~artiq.language.environment.Experiment.analyze` functions, in particular to retrieve previous result databases, process them, and display them in ARTIQ applets. The browser also remembers and restores its GUI state; this is stored in a file called simply ``artiq_browser``, kept in the same configuration directory as the dashboard.
Controller manager Controller manager
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
The controller manager is provided in the ``artiq-comtools`` package (which is also made available separately from mainline ARTIQ, to allow independent use with minimal dependencies) and started with the ``artiq_ctlmgr`` command. It is responsible for running and stopping controllers on a machine. One controller manager must be run by each network node that runs controllers. The controller manager is provided in the ``artiq-comtools`` package (which is also made available separately from mainline ARTIQ, to allow independent use with minimal dependencies) and started with the :mod:`~artiq_comtools.artiq_ctlmgr` command. It is responsible for running and stopping controllers on a machine. One controller manager must be run by each network node that runs controllers.
A controller manager connects to the master and accesses the device database through it to determine what controllers need to be run. The local network address of the connection is used to filter for only those controllers allocated to the current node. Hostname resolution is supported. Changes to the device database are tracked and controllers will be stopped and started accordingly. A controller manager connects to the master and accesses the device database through it to determine what controllers need to be run. The local network address of the connection is used to filter for only those controllers allocated to the current node. Hostname resolution is supported. Changes to the device database are tracked and controllers will be stopped and started accordingly.
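As a rough sketch (the device name, address, and controller command below are hypothetical), a controller entry in ``device_db.py`` that a controller manager would pick up might look like the following; the ``{port}`` and ``{bind}`` placeholders are filled in by the controller manager when it launches the command: ::

    device_db["hypothetical_stage"] = {
        "type": "controller",
        "host": "192.168.1.75",   # the controller manager running at this address starts it
        "port": 3251,
        "command": "aqctl_hypothetical_stage --bind {bind} --port {port}"
    }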
.. _mgmt-git-integration:
Git integration Git integration
--------------- ---------------
The master may use a Git repository to store experiment source code. Using Git has many advantages. For example, each result file (HDF5) contains the commit ID corresponding to the exact source code it was produced by, which helps reproducibility. The master may use a Git repository to store experiment source code. Using Git has many advantages. For example, each result file (HDF5) contains the commit ID corresponding to the exact source code it was produced by, which helps reproducibility. Although the master also supports non-bare repositories, it is recommended to use a bare repository (e.g. ``git init --bare``) to easily support push transactions from clients.
Although the master also supports non-bare repositories, it is recommended to use a bare repository (e.g. ``git init --bare``) to easily support push transactions from clients.
You will want Git to notify the master every time the repository is pushed to (e.g. updated), so that the master knows to rescan the repository for new or changed experiments. This is easiest done with the ``post-receive`` hook, as described in :ref:`master-setting-up-git`. You will want Git to notify the master every time the repository is pushed to (e.g. updated), so that the master knows to rescan the repository for new or changed experiments. This is easiest done with the ``post-receive`` hook, as described in :ref:`master-setting-up-git`.
.. note:: .. note::
If you plan to run the ARTIQ system entirely on a single machine, you may also consider using a non-bare repository and the ``post-commit`` hook to trigger repository scans every time you commit changes (locally). In this case, note that the ARTIQ master never uses the repository's working directory, but only what is committed. More precisely, when scanning the repository, it fetches the last (atomically) completed commit at that time of repository scan and checks it out in a temporary folder. This commit ID is used by default when subsequently submitting experiments. There is one temporary folder by commit ID currently referenced in the system, so concurrently running experiments from different repository revisions is fully supported by the master. If you plan to run the ARTIQ system entirely on a single machine, you may also consider using a non-bare repository and the ``post-commit`` hook to trigger repository scans every time you commit changes (locally). In this case, note that the ARTIQ master never uses the repository's working directory, but only what is committed. More precisely, when scanning the repository, it fetches the last (atomically) completed commit at that time of repository scan and checks it out in a temporary folder. This commit ID is used by default when subsequently submitting experiments. There is one temporary folder by commit ID currently referenced in the system, so concurrently running experiments from different repository revisions is fully supported by the master.
By default, the dashboard runs experiments from the repository, whereas the command-line client (``artiq_client submit``) runs experiments from the raw filesystem (which is useful for iterating rapidly without creating many disorganized commits). In order to run from the raw filesystem when using the dashboard, right-click in the Explorer window and select the option "Open file outside repository"; in order to run from the repository when using the command-line client, simply pass the ``-R`` flag. By default, the dashboard runs experiments from the repository, whereas the command-line client (``artiq_client submit``) runs experiments from the raw filesystem (which is useful for iterating rapidly without creating many disorganized commits). In order to run from the raw filesystem when using the dashboard, right-click in the Explorer window and select the option 'Open file outside repository'. In order to run from the repository when using the command-line client, simply pass the ``-R`` flag.
.. _experiment-scheduling: .. _experiment-scheduling:
@ -59,7 +69,7 @@ Basics
To make more efficient use of hardware resources, experiments are generally split into three phases and pipelined, such that potentially compute-intensive pre-computation or analysis phases may be executed in parallel with the bodies of other experiments, which access hardware. To make more efficient use of hardware resources, experiments are generally split into three phases and pipelined, such that potentially compute-intensive pre-computation or analysis phases may be executed in parallel with the bodies of other experiments, which access hardware.
.. seealso:: .. seealso::
These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`. These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`, which additionally provides access to the methods of :class:`artiq.language.environment.HasEnvironment`.
There are three stages of a standard experiment users may write code in: There are three stages of a standard experiment users may write code in:
@ -104,92 +114,3 @@ The experiment must place the hardware in a safe state and disconnect from the c
Accessing the :meth:`pause` and :meth:`~artiq.master.scheduler.Scheduler.check_pause` methods is done through a virtual device called ``scheduler`` that is accessible to all experiments. The scheduler virtual device is requested like regular devices using :meth:`~artiq.language.environment.HasEnvironment.get_device` (``self.get_device()``) or :meth:`~artiq.language.environment.HasEnvironment.setattr_device` (``self.setattr_device()``). Accessing the :meth:`pause` and :meth:`~artiq.master.scheduler.Scheduler.check_pause` methods is done through a virtual device called ``scheduler`` that is accessible to all experiments. The scheduler virtual device is requested like regular devices using :meth:`~artiq.language.environment.HasEnvironment.get_device` (``self.get_device()``) or :meth:`~artiq.language.environment.HasEnvironment.setattr_device` (``self.setattr_device()``).
:meth:`~artiq.master.scheduler.Scheduler.check_pause` can be called (via RPC) from a kernel, but :meth:`pause` must not be. :meth:`~artiq.master.scheduler.Scheduler.check_pause` can be called (via RPC) from a kernel, but :meth:`pause` must not be.
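A minimal sketch of the resulting pattern (class and device names hypothetical, assuming a standard ``core`` device) polls :meth:`~artiq.master.scheduler.Scheduler.check_pause` inside the kernel and only calls :meth:`pause` after returning to the host: ::

    from artiq.experiment import *

    class PausableLoop(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("scheduler")

        @kernel
        def work(self):
            while not self.scheduler.check_pause():  # RPC; safe to call from a kernel
                pass  # real-time operations would go here

        def run(self):  # host code
            while True:
                self.work()
                # pause() must be called on the host; it returns once the run is
                # resumed, or raises TerminationRequested if the run should end
                self.scheduler.pause()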
Scheduler API reference
-----------------------
The scheduler is exposed to the experiments via a virtual device called ``scheduler``. It can be requested like any regular device, and the methods below, as well as :meth:`pause`, can be called on the returned object.
The scheduler virtual device also contains the attributes ``rid``, ``pipeline_name``, ``priority`` and ``expid``, which contain the corresponding information about the current run.
.. autoclass:: artiq.master.scheduler.Scheduler
:members:
Client control broadcasts (CCBs)
--------------------------------
Client control broadcasts are requests made by experiments for clients to perform some action. Experiments do so by requesting the ``ccb`` virtual device and calling its ``issue`` method. The first argument of the issue method is the name of the broadcast, and any further positional and keyword arguments are passed to the broadcast.
CCBs are used by experiments to configure applets in the dashboard, for example for plotting purposes.
.. autoclass:: artiq.dashboard.applets_ccb.AppletsCCBDock
:members:
.. _applet-references:
Applet request interfaces
-------------------------
Applet request interfaces allow applets to perform actions on the master database and set arguments in the dashboard. Applets may inherit from ``artiq.applets.simple.SimpleApplet`` and call the methods defined below through the ``req`` attribute.
Embedded applets should use ``AppletRequestIPC`` while standalone applets use ``AppletRequestRPC``. ``SimpleApplet`` automatically chooses the correct interface on initialization.
.. autoclass:: artiq.applets.simple._AppletRequestInterface
:members:
Applet entry area
-----------------
Argument widgets can be used in applets through the :class:`~artiq.gui.applets.EntryArea` class. Below is a simple example code snippet: ::
entry_area = EntryArea()
# Create a new widget
entry_area.setattr_argument("bl", BooleanValue(True))
# Get the value of the widget (output: True)
print(entry_area.bl)
# Set the value
entry_area.set_value("bl", False)
# False
print(entry_area.bl)
The :class:`~artiq.gui.applets.EntryArea` object can then be added to a layout and integrated with the applet GUI. Multiple :class:`~artiq.gui.applets.EntryArea` objects can be used in a single applet.
.. class:: artiq.gui.applets.EntryArea
.. method:: setattr_argument(name, proc, group=None, tooltip=None)
Sets an argument as attribute. The names of the argument and of the
attribute are the same.
:param name: Argument name
:param proc: Argument processor, for example :class:`~artiq.language.environment.NumberValue`
:param group: Used to group together arguments in the GUI under a common category
:param tooltip: Tooltip displayed when hovering over the entry widget
.. method:: get_value(name)
Get the value of an entry widget.
:param name: Argument name
.. method:: get_values()
Get all values in the :class:`~artiq.gui.applets.EntryArea` as a dictionary. Names are stored as keys, and argument values as values.
.. method:: set_value(name, value)
Set the value of an entry widget. The change is temporary and will reset to default when the reset button is clicked.
:param name: Argument name
:param value: Object representing the new value of the argument. For :class:`~artiq.language.scan.Scannable` arguments, this parameter
should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject` will be set as the selected type when this function is called.
.. method:: set_values(values)
Set multiple values from a dictionary input. Calls :meth:`set_value` for each key-value pair.
:param values: Dictionary with names as keys and new argument values as values.

View File

@ -0,0 +1,99 @@
Management system interface
===========================
ARTIQ makes certain provisions to allow interactions between different components when using the :doc:`management system <management_system>`. An experiment may make requests of the master or clients using virtual devices to represent the necessary line of communication; applets may interact with databases, the dashboard, and directly with the user (through argument widgets). This page collects the references for these features.
In experiments
--------------
``scheduler`` virtual device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The scheduler is exposed to the experiments via a virtual device called ``scheduler``. It can be requested like any other device, and the methods below, as well as :meth:`pause`, can be called on the returned object.
The scheduler virtual device also contains the attributes ``rid``, ``pipeline_name``, ``priority`` and ``expid``, which contain the corresponding information about the current run.
.. autoclass:: artiq.master.scheduler.Scheduler
:members:
``ccb`` virtual device
^^^^^^^^^^^^^^^^^^^^^^
Client control broadcasts (CCBs) are requests made by experiments for clients to perform some action. Experiments do so by requesting the ``ccb`` virtual device and calling its ``issue`` method. The first argument of the issue method is the name of the broadcast, and any further positional and keyword arguments are passed to the broadcast.
CCBs are especially used by experiments to configure applets in the dashboard, for example for plotting purposes.
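For instance, a rough sketch of an experiment asking connected dashboards to open an XY plot applet for a (hypothetical) dataset ``parabola``: ::

    def build(self):
        self.setattr_device("ccb")

    def run(self):
        # "create_applet" is handled by the dashboard applets dock;
        # the last argument uses the usual applet command-line syntax
        self.ccb.issue("create_applet", "parabola_plot",
                       "${artiq_applet}plot_xy parabola")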
.. autoclass:: artiq.dashboard.applets_ccb.AppletsCCBDock
:members:
In applets
----------
.. _applet-references:
Applet request interfaces
^^^^^^^^^^^^^^^^^^^^^^^^^
Applet request interfaces allow applets to perform actions on the master database and set arguments in the dashboard. Applets may inherit from ``artiq.applets.simple.SimpleApplet`` and call the methods defined below through the ``req`` attribute.
Embedded applets should use ``AppletRequestIPC``, while standalone applets use ``AppletRequestRPC``. ``SimpleApplet`` automatically chooses the correct interface on initialization.
.. autoclass:: artiq.applets.simple._AppletRequestInterface
:members:
Applet entry area
^^^^^^^^^^^^^^^^^
Argument widgets can be used in applets through the :class:`~artiq.gui.applets.EntryArea` class. Below is a simple example code snippet: ::
entry_area = EntryArea()
# Create a new widget
entry_area.setattr_argument("bl", BooleanValue(True))
# Get the value of the widget (output: True)
print(entry_area.bl)
# Set the value
entry_area.set_value("bl", False)
# False
print(entry_area.bl)
The :class:`~artiq.gui.applets.EntryArea` object can then be added to a layout and integrated with the applet GUI. Multiple :class:`~artiq.gui.applets.EntryArea` objects can be used in a single applet.
.. class:: artiq.gui.applets.EntryArea
.. method:: setattr_argument(name, proc, group=None, tooltip=None)
Sets an argument as attribute. The names of the argument and of the
attribute are the same.
:param name: Argument name
:param proc: Argument processor, for example :class:`~artiq.language.environment.NumberValue`
:param group: Used to group together arguments in the GUI under a common category
:param tooltip: Tooltip displayed when hovering over the entry widget
.. method:: get_value(name)
Get the value of an entry widget.
:param name: Argument name
.. method:: get_values()
Get all values in the :class:`~artiq.gui.applets.EntryArea` as a dictionary. Names are stored as keys, and argument values as values.
.. method:: set_value(name, value)
Set the value of an entry widget. The change is temporary and will reset to default when the reset button is clicked.
:param name: Argument name
:param value: Object representing the new value of the argument. For :class:`~artiq.language.scan.Scannable` arguments, this parameter
should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject` will be set as the selected type when this function is called.
.. method:: set_values(values)
Set multiple values from a dictionary input. Calls :meth:`set_value` for each key-value pair.
:param values: Dictionary with names as keys and new argument values as values.

20
doc/manual/overview.rst Normal file
View File

@ -0,0 +1,20 @@
Overview diagram
================
.. Kept in a separate file for the luxury of good syntax highlighting
.. tikz:: Structure of an ARTIQ system. Note that most components can also be used independently of each other. Network-connected components can be distributed across multiple machines.
:align: left
:libs: positioning, shapes, arrows
:xscale: 100
:include: overview.tex
Footnotes:
1. As described in the :doc:`management tutorial <getting_started_mgmt>`, the clients can simply be run on the same machine as the master. It is common, however, to set up several other 'client machines' to allow distributed access to the system, each with an installation of ARTIQ and (potentially) a Git clone of the experiment repository. The communication between components is the same. Other ARTIQ commands and utilities can also be used from these machines.
2. It is common to have controllers for one or several 'slow' devices running on other machines. In this case, it's simply necessary to have one controller manager per machine. Note that running controllers and controller managers does not require a full ARTIQ install: the minimal package ``artiq-comtools`` can be installed instead. Otherwise, if these controllers are run side by side with the built-in controllers, they can be handled by the same controller manager, as depicted by the non-dashed lines.
3. The various other ARTIQ utilities may also connect to the master, read the device database, communicate with the core device, etc., depending on their specific functionalities. See their :doc:`individual references <utilities>`.
4. These are the :ref:`built-in controllers <built-in-ctlrs>` used by the ARTIQ dashboard to provide certain functionalities which require access to the core device. They can be found listed in :ref:`Utilities <utilities-ctrls>` (the commands prefixed with ``aqctl_`` rather than ``artiq_``). It is possible to use the ARTIQ management system without them, and the other abilities of the dashboard will still function, but important features will be unavailable.

212
doc/manual/overview.tex Normal file
View File

@ -0,0 +1,212 @@
[
thin/.style={line width=1.2pt}, % default line thickness
thick/.style={line width=1.8pt},
% ampersand replacement apparently necessary for sphinx latexpdf output (but not for html!)
ampersand replacement=\&
]
% artiq colors
\definecolor{brand}{HTML}{715ec7} % 'brand colour' violet
\definecolor{brand-light}{HTML}{a88cfd} % lighter brand colour
\definecolor{primary}{HTML}{0d3547}
\definecolor{secondary}{HTML}{1a6d93}
\definecolor{dingle}{HTML}{76c5d2}
\definecolor{rtm}{HTML}{3084bc} % RTM blue
% other colors used are tikz standard issue
% tikzstyle settings
\tikzstyle{every node} = [anchor = center, thin, minimum size = 1cm]
% the every matrix style is odd and can't be overridden (??) so its usefulness is limited
\tikzstyle{every matrix} = [anchor = north west, rounded corners]
\tikzstyle{matrices} = [thick, row sep = 0.3cm, column sep=0.3cm]
\tikzstyle{legend} = [draw = primary, thin, row sep=0.1cm, minimum width = 5cm, column sep = 0.2cm]
% particular node styles
\tikzstyle{label} = [minimum size = 0.5cm]
\tikzstyle{single} = [draw = brand, minimum width = 2 cm]
\tikzstyle{splits} = [draw = brand, rectangle split, solid]
\tikzstyle{linelabel} = [anchor = west, font=\sc\footnotesize]
\tikzstyle{colorbox} = [rectangle, very thin, draw=gray, sharp corners, minimum height = 0.4 cm, minimum width = 1.8 cm]
% color coding
\tikzstyle{line} = [thick]
\tikzstyle{network} = [line, draw = magenta]
\tikzstyle{git} = [line, draw = violet]
\tikzstyle{rtio} = [line, draw = brand-light]
\tikzstyle{drtio} = [line, draw = blue]
\tikzstyle{slow} = [line, draw = dingle]
\tikzstyle{jtag} = [line, draw = gray]
\tikzstyle{direct} = [line, draw = primary, thin]
\tikzstyle{master} = [draw = purple]
\tikzstyle{host} = [row sep = 0.5 cm, column sep = 0.7 cm]
\tikzstyle{pc} = [matrices, thin, inner sep = 0.2cm, draw = gray, dashed]
\tikzstyle{peripheral} = [matrices, draw = secondary]
\tikzstyle{core} = [matrices, draw = purple]
% PERIPHERALS (SLOW)
\matrix[peripheral](peri-slow) at (0, 15.75)
{
\node[label] (label){Peripherals (slow)};\\
\node [splits, rectangle split parts = 4](slow-list){
\nodepart{one} Optical translation stages
\nodepart{two} Optical rotation stages
\nodepart{three} Temperature controllers
\nodepart{four} etc...
};\\
};
% OTHER MACHINES
\matrix[pc](comtools) at (6.38, 15.75)
{
\node [splits, rectangle split parts = 3](remote-ctlrs){
\nodepart{one} Controller
\nodepart{two} Controller
\nodepart{three} etc...
}; \\
\node[single, minimum height = 0.5cm, font=\tt](remote-ctlmgr){
artiq\_ctlmgr\textsuperscript{2}
}; \\
};
\node[anchor = north west] (label) at (11, 16) {Client machines, NB: see footnote 1.};
\matrix[pc](clients) at (10.75, 15)
{
\node[single, draw=rtm, solid](remote-repo) {Cloned repository};
\&
\node [splits, master, rectangle split parts = 2, minimum width = 3 cm](remote-client){
\nodepart[font=\tt]{one} artiq\_client
\nodepart[font=\tt]{two} artiq\_dashboard
};
\\
};
% HOST MACHINE
\matrix[host](host-machine) at (0, 11)
{
\node[font=\tt, single](browser) {artiq\_browser};
\&
\node[single, master](results){Results};
\&
\node[splits, rectangle split parts = 2, master](master) {
\nodepart[font=\bf]{one} ARTIQ master
\nodepart{two} Datasets
};
\&
\node [single, master](repo){Repository};
\&
\node[font=\tt, single, master](ctlmgr) {artiq\_ctlmgr};
\\
\node[font=\tt, single](flash) {artiq\_flash};
\&
\node[font=\tt, single](run) {artiq\_run};
\&
\node[single, master](ddb) {Device database};
\&
\node[font=\tt, single](coremgmt) {artiq\_coremgmt\textsuperscript{3}};
\&
\node[single, master](builtin) {Built-in controllers\textsuperscript{4}};
\\
};
% BORDER LINE
\node[linelabel] (label1) at (-0.25, 6.3) {PCs AND REMOTE DEVICES};
\draw[primary, thick, dotted] (-0.5, 6) -- (19, 6);
\node[linelabel] (label2) at (0, 5.7) {REAL TIME HARDWARE};
% PERIPHERALS (FAST)
\matrix[peripheral](peri-fast) at (0, 5)
{
\node[label] (label){Peripherals (fast)};\\
\node [splits, rectangle split parts = 5](fast-list){
\nodepart{one} TTL in/out
\nodepart{two} DDS/Urukul
\nodepart{three} AWG/Phaser
\nodepart{four} DAC/Zotino
\nodepart{five} etc...
};\\
};
% CORE DEVICES
\matrix[core](core) at (6.75, 5)
{
\node[label] (label){Core device};\\
\node [splits, rectangle split parts = 2](core-list){
\nodepart{one} CPU
\nodepart{two} RTIO gateware
};\\
};
% SATELLITE
\matrix[core](satellite) at (6.25, 2)
{
\node[label] (label){Satellite core devices};\\
};
% legend
\matrix[legend] (legend) at (12, 5.5) {
\node [colorbox, fill=magenta] (network) {}; \& \node[label] (network-label) {Network}; \\
\node [colorbox, fill=brand-light] (rtio) {}; \& \node[label] (rtio-label) {RTIO}; \\
\node [colorbox, fill=dingle] (slow) {}; \& \node[label] (slow-label) {Slow IO}; \\
\node [colorbox, fill=blue] (drtio) {}; \& \node[label] (drtio-label) {DRTIO}; \\
\node [colorbox, fill=gray] (jtag) {}; \& \node[label] (jtag-label) {JTAG}; \\
\node [colorbox, fill=primary] (direct) {}; \& \node[label] (direct-label) {Local}; \\
\node [colorbox, fill=violet] (git) {}; \& \node[label] (git-label) {Git}; \\
};
%controllers
\draw[direct, dashed] (remote-ctlmgr.north) -- (remote-ctlrs);
\draw[slow] (slow-list.one east) -| +(0.5,0) |- (remote-ctlrs.one west);
\draw[slow] (slow-list.two east) -| +(0.8,0) |- (remote-ctlrs.two west);
\draw[slow] (slow-list.three east) -| +(1.1,0) |- (remote-ctlrs.three west);
\draw[direct] (remote-ctlrs.east) -| + (1, -2.25) -|(ctlmgr.70);
%ctlmgr
\draw[network, dashed] (remote-ctlmgr.west) -| +(-0.5, -1) -| (master.120);
% client
\draw[git] (remote-repo.south) -- +(0, -1) -| (repo.110);
\draw[network] (remote-client.south) -- +(0, -1.5) -| (master.90);
\draw[network] (remote-client.two east) -- +(0.7, 0) |- (builtin.east);
%host
\draw[direct] (browser) -- (results);
%master
\draw[direct] (results) -- (master);
\draw[direct] (master) -- (ddb);
\draw[direct] (master) -- (repo);
\draw[direct] (run) -- (ddb);
\draw[direct] (coremgmt) -- (ddb);
% ctlmgr
\draw[network] (master.60) -- +(0, 0.25) -| (ctlmgr.110);
\draw[network] (ctlmgr) -- (builtin);
% core connections
\draw[jtag] (flash.south) |- +(1, -1) -| (core.125);
\draw[network] (run.south) |- +(1, -0.75) -| (core.110);
\draw[network] (master.230) |- +(-1.25, -0.25) |- +(0, -2) -| (core.north);
\draw[network] (coremgmt.south) |- +(-0.75, -0.75) -| (core.70);
\draw[network] (builtin.south) |- +(-2, -1.1) -| (core.55);
%rtio
\node (branch) at (5, 2.5){};
\draw[rtio] (core-list.two west) |- +(-1, 0) |- (branch.center);
\draw[rtio] (branch.center) |- (fast-list.one east);
\draw[rtio] (branch.center) |- (fast-list.two east);
\draw[rtio] (branch.center) |- (fast-list.three east);
\draw[rtio] (branch.center) |- (fast-list.four east);
\draw[rtio] (branch.center) |- (fast-list.five east);
%drtio
\draw[drtio] (core-list.one east) |- +(1, 0) |- (satellite.east);
\draw[rtio] (satellite.west) -| +(-0.5, 0) |- (5, 2);

View File

@ -1,32 +1,19 @@
.. _artiq-real-time-i-o-concepts: ARTIQ Real-Time I/O concepts
ARTIQ Real-Time I/O Concepts
============================ ============================
The ARTIQ Real-Time I/O design employs several concepts to achieve its goals of high timing resolution on the nanosecond scale and low latency on the microsecond scale while still not sacrificing a readable and extensible language. The ARTIQ Real-Time I/O design employs several concepts to achieve its goals of high timing resolution on the nanosecond scale and low latency on the microsecond scale while still not sacrificing a readable and extensible language.
In a typical environment two very different classes of hardware need to be controlled. In a typical environment two very different classes of hardware need to be controlled. One class is the vast arsenal of diverse laboratory hardware that interfaces with and is controlled from a typical PC. The other is specialized real-time hardware that requires tight coupling and a low-latency interface to a CPU. The ARTIQ code that describes a given experiment is composed of two types of "programs": regular Python code that is executed on the host and ARTIQ *kernels* that are executed on a *core device*. The CPU that executes the ARTIQ kernels has direct access to specialized programmable I/O timing logic (part of the *gateware*). The two types of code can invoke each other and transitions between them are seamless.
One class is the vast arsenal of diverse laboratory hardware that interfaces with and is controlled from a typical PC.
The other is specialized real-time hardware that requires tight coupling and a low-latency interface to a CPU.
The ARTIQ code that describes a given experiment is composed of two types of "programs":
regular Python code that is executed on the host and ARTIQ *kernels* that are executed on a *core device*.
The CPU that executes the ARTIQ kernels has direct access to specialized programmable I/O timing logic (part of the *gateware*).
The two types of code can invoke each other and transitions between them are seamless.
The ARTIQ kernels do not interface with the real-time gateware directly. The ARTIQ kernels do not interface with the real-time gateware directly. That would lead to imprecise, indeterminate, and generally unpredictable timing. Instead the CPU operates at one end of a bank of FIFO (first-in-first-out) buffers while the real-time gateware at the other end guarantees the *all or nothing* level of excellent timing precision.
That would lead to imprecise, indeterminate, and generally unpredictable timing.
Instead the CPU operates at one end of a bank of FIFO (first-in-first-out) buffers while the real-time gateware at the other end guarantees the *all or nothing* level of excellent timing precision.
A FIFO for an output channel holds timestamps and event data describing when and what is to be executed. A FIFO for an output channel holds timestamps and event data describing when and what is to be executed. The CPU feeds events into this FIFO. A FIFO for an input channel contains timestamps and event data for events that have been recorded by the real-time gateware and are waiting to be read out by the CPU on the other end.
The CPU feeds events into this FIFO.
A FIFO for an input channel contains timestamps and event data for events that have been recorded by the real-time gateware and are waiting to be read out by the CPU on the other end.
Timeline and terminology Timeline and terminology
------------------------ ------------------------
The set of all input and output events on all channels constitutes the *timeline*. The set of all input and output events on all channels constitutes the *timeline*. A high-resolution wall clock (``rtio_counter_mu``) counts clock cycles and manages the precise timing of the events. Output events are executed when their timestamp matches the current clock value. Input events are recorded when they reach the gateware and stamped with the current clock value accordingly.
A high-resolution wall clock (``rtio_counter_mu``) counts clock cycles and manages the precise timing of the events. Output events are executed when their timestamp matches the current clock value. Input events are recorded when they reach the gateware and stamped with the current clock value accordingly.
The kernel runtime environment maintains a timeline cursor (called ``now_mu``) used as the timestamp when output events are submitted to the FIFOs. Both ``now_mu`` and ``rtio_counter_mu`` are counted in integer *machine units,* or mu, rather than SI units. The machine unit represents the maximum resolution of RTIO timing in an ARTIQ system. The duration of a machine unit is the *reference period* of the system, and may be changed by the user, but normally corresponds to a duration of one nanosecond. The kernel runtime environment maintains a timeline cursor (called ``now_mu``) used as the timestamp when output events are submitted to the FIFOs. Both ``now_mu`` and ``rtio_counter_mu`` are counted in integer *machine units,* or mu, rather than SI units. The machine unit represents the maximum resolution of RTIO timing in an ARTIQ system. The duration of a machine unit is the *reference period* of the system, and may be changed by the user, but normally corresponds to a duration of one nanosecond.
@ -34,34 +21,22 @@ The timeline cursor ``now_mu`` can be moved forward or backward on the timeline
RTIO timestamps, the timeline cursor, and the ``rtio_counter_mu`` wall clock are all counted relative to the core device startup/boot time. The wall clock keeps running across experiments. RTIO timestamps, the timeline cursor, and the ``rtio_counter_mu`` wall clock are all counted relative to the core device startup/boot time. The wall clock keeps running across experiments.
Absolute timestamps can be large numbers. Absolute timestamps can be large numbers. They are represented internally as 64-bit integers. With a typical one-nanosecond machine unit, this covers a range of hundreds of years. Conversions between such a large integer number and a floating point representation can cause loss of precision through cancellation. When computing the difference of absolute timestamps, use ``self.core.mu_to_seconds(t2-t1)``, not ``self.core.mu_to_seconds(t2)-self.core.mu_to_seconds(t1)`` (see :meth:`~artiq.coredevice.core.Core.mu_to_seconds`). When accumulating time, do it in machine units and not in SI units, so that rounding errors do not accumulate.
They are represented internally as 64-bit integers.
With a typical one-nanosecond machine unit, this covers a range of hundreds of years.
Conversions between such a large integer number and a floating point representation can cause loss of precision through cancellation.
When computing the difference of absolute timestamps, use ``self.core.mu_to_seconds(t2-t1)``, not ``self.core.mu_to_seconds(t2)-self.core.mu_to_seconds(t1)`` (see :meth:`~artiq.coredevice.core.Core.mu_to_seconds`).
When accumulating time, do it in machine units and not in SI units, so that rounding errors do not accumulate.
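For example, a small sketch (assuming a standard ``core`` device) of timing a section of kernel code while staying in machine units: ::

    t1 = self.core.get_rtio_counter_mu()
    # ... events of interest ...
    t2 = self.core.get_rtio_counter_mu()
    elapsed = self.core.mu_to_seconds(t2 - t1)   # subtract in mu, convert once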
.. note:: .. note::
Absolute timestamps are also referred to as *RTIO fine timestamps,* because they run on a significantly finer resolution than the timestamps provided by the so-called *coarse RTIO clock,* the actual clocking signal provided to or generated by the core device. Absolute timestamps are also referred to as *RTIO fine timestamps,* because they run on a significantly finer resolution than the timestamps of the so-called *coarse RTIO clock,* the actual clocking signal provided to or generated by the core device. The frequency of the coarse RTIO clock is set by the core device :ref:`clocking settings <core-device-clocking>` but is most commonly 125MHz, which corresponds to eight one-nanosecond machine units per coarse RTIO cycle.
The frequency of the coarse RTIO clock is set by the core device :ref:`clocking settings <core-device-clocking>` but is most commonly 125MHz, which corresponds to eight one-nanosecond machine units per coarse RTIO cycle.
The *coarse timestamp* of an event is its timestamp according to the lower resolution of the coarse clock. The *coarse timestamp* of an event is its timestamp according to the lower resolution of the coarse clock. It is in practice a truncated version of the fine timestamp. In general, ARTIQ offers *precision* on the fine level, but *operates* at the coarse level; this is rarely relevant to the user, but understanding it may clarify the behavior of some RTIO issues (e.g. sequence errors).
It is in practice a truncated version of the fine timestamp.
In general, ARTIQ offers *precision* on the fine level, but *operates* at the coarse level; this is rarely relevant to the user, but understanding it may clarify the behavior of some RTIO issues (e.g. sequence errors).
.. Related: https://github.com/m-labs/artiq/issues/1237 .. Related: https://github.com/m-labs/artiq/issues/1237
The following basic example shows how to place output events on the timeline. The following basic example shows how to place output events on the timeline. It emits a precisely timed 2 µs pulse::
It emits a precisely timed 2 µs pulse::
ttl.on() ttl.on()
delay(2*us) delay(2*us)
ttl.off() ttl.off()
The device ``ttl`` represents a single digital output channel (:class:`artiq.coredevice.ttl.TTLOut`). The device ``ttl`` represents a single digital output channel (:class:`artiq.coredevice.ttl.TTLOut`). The :meth:`artiq.coredevice.ttl.TTLOut.on` method places a rising edge on the timeline at the current cursor position (``now_mu``). Then the cursor is moved forward 2 µs and a falling edge is placed at the new cursor position. Later, when the wall clock reaches the respective timestamps, the RTIO gateware executes the two events.
The :meth:`artiq.coredevice.ttl.TTLOut.on` method places a rising edge on the timeline at the current cursor position (``now_mu``).
Then the cursor is moved forward 2 µs and a falling edge is placed at the new cursor position.
Later, when the wall clock reaches the respective timestamps, the RTIO gateware executes the two events.
The following diagram shows what is going on at the different levels of the software and gateware stack (assuming one machine unit of time is 1 ns): The following diagram shows what is going on at the different levels of the software and gateware stack (assuming one machine unit of time is 1 ns):
@ -88,7 +63,7 @@ This sequence is exactly equivalent to::
ttl.pulse(2*us) ttl.pulse(2*us)
This method :meth:`artiq.coredevice.ttl.TTLOut.pulse` advances the timeline cursor (using :func:`~artiq.language.core.delay` internally) by exactly the amount given. Other methods such as :meth:`~artiq.coredevice.ttl.TTLOut.on`, :meth:`~artiq.coredevice.ttl.TTLOut.off`, :meth:`~artiq.coredevice.ad9914.AD9914.set` do not modify the timeline cursor. The latter are called *zero-duration* methods. This method :meth:`artiq.coredevice.ttl.TTLOut.pulse` advances the timeline cursor (using :func:`~artiq.language.core.delay` internally) by exactly the amount given. Other methods such as :meth:`~artiq.coredevice.ttl.TTLOut.on`, :meth:`~artiq.coredevice.ttl.TTLOut.off`, :meth:`~artiq.coredevice.ad9914.AD9914.set` do not modify the timeline cursor. The latter are called *zero-duration* methods.
Output errors and exceptions Output errors and exceptions
---------------------------- ----------------------------
@ -96,11 +71,7 @@ Output errors and exceptions
Underflows Underflows
^^^^^^^^^^ ^^^^^^^^^^
An RTIO output event must always be programmed with a timestamp in the future. An RTIO output event must always be programmed with a timestamp in the future. In other words, the timeline cursor ``now_mu`` must be in advance of the current wall clock ``rtio_counter_mu``: the past cannot be altered. The following example tries to place a rising edge event on the timeline. If the current cursor is in the past, an :class:`artiq.coredevice.exceptions.RTIOUnderflow` exception is thrown. The experiment attempts to handle the exception by moving the cursor forward and repeating the programming of the rising edge::
In other words, the timeline cursor ``now_mu`` must be in advance of the current wall clock ``rtio_counter_mu``: the past cannot be altered.
The following example tries to place a rising edge event on the timeline.
If the current cursor is in the past, an :class:`artiq.coredevice.exceptions.RTIOUnderflow` exception is thrown.
The experiment attempts to handle the exception by moving the cursor forward and repeating the programming of the rising edge::
try: try:
ttl.on() ttl.on()
@ -109,8 +80,7 @@ The experiment attempts to handle the exception by moving the cursor forward and
delay(16.6667*ms) delay(16.6667*ms)
ttl.on() ttl.on()
Once the timeline cursor has overtaken the wall clock, the exception does not reoccur and the event can be scheduled successfully. Once the timeline cursor has overtaken the wall clock, the exception does not reoccur and the event can be scheduled successfully. This can also be thought of as adding positive slack to the system.
This can also be thought of as adding positive slack to the system.
.. wavedrom:: .. wavedrom::
@ -133,7 +103,7 @@ This can also be thought of as adding positive slack to the system.
To track down :class:`~artiq.coredevice.exceptions.RTIOUnderflow` exceptions in an experiment there are a few approaches: To track down :class:`~artiq.coredevice.exceptions.RTIOUnderflow` exceptions in an experiment there are a few approaches:
* Exception backtraces show where underflow has occurred while executing the code. * Exception backtraces show where underflow has occurred while executing the code.
* The :ref:`integrated logic analyzer <core-device-rtio-analyzer-tool>` shows the timeline context that led to the exception. The analyzer is always active and supports plotting of RTIO slack. This may be useful to spot where and how an experiment has 'run out' of positive slack. * The :ref:`integrated logic analyzer <rtio-analyzer>` shows the timeline context that led to the exception. The analyzer is always active and supports plotting of RTIO slack. This makes it possible to visually find where and how an experiment has 'run out' of positive slack.
.. _sequence-errors: .. _sequence-errors:
@ -149,7 +119,7 @@ If an event with a timestamp coarsely equal to or lesser than the previous times
By default, the ARTIQ SED has eight lanes, which normally suffices to avoid sequence errors, but problems may still occur if many (>8) events are issued to the gateware with interleaving timestamps. Due to the strict timing limitations imposed on RTIO gateware, it is not possible for the SED to rearrange events in a lane once submitted, nor to anticipate future events when making lane choices. This makes sequence errors fairly 'unintelligent', but also generally fairly easy to eliminate by manually rearranging the generation of events (*not* rearranging the timing of the events themselves, which is rarely necessary.) By default, the ARTIQ SED has eight lanes, which normally suffices to avoid sequence errors, but problems may still occur if many (>8) events are issued to the gateware with interleaving timestamps. Due to the strict timing limitations imposed on RTIO gateware, it is not possible for the SED to rearrange events in a lane once submitted, nor to anticipate future events when making lane choices. This makes sequence errors fairly 'unintelligent', but also generally fairly easy to eliminate by manually rearranging the generation of events (*not* rearranging the timing of the events themselves, which is rarely necessary.)
It is also possible to increase the number of SED lanes in the gateware, which will reduce the frequency of sequencing issues, but will also correspondingly put more stress on FPGA resources and timing. It is also possible to increase the number of SED lanes in the gateware, which will reduce the frequency of sequencing issues, but will correspondingly put more stress on FPGA resources and timing.
Other notes: Other notes:
@ -173,24 +143,20 @@ Like sequence errors, collisions originate in gateware and do not stop the execu
Busy errors Busy errors
^^^^^^^^^^^ ^^^^^^^^^^^
A busy error occurs when at least one output event could not be executed because the output channel was already busy executing an event. This differs from a collision error in that a collision is triggered when a sequence of events overwhelms *communication* with a channel, and a busy error is triggered when *execution* is overwhelmed. Busy errors are only possible in the context of single events with execution times longer than a coarse RTIO clock cycle; the exact parameters will depend on the nature of the output channel (e.g. specific peripheral device). A busy error occurs when at least one output event could not be executed because the output channel was already busy executing an event. This differs from a collision error in that a collision is triggered when a sequence of events overwhelms *communication* with a channel, and a busy error is triggered when *execution* is overwhelmed. Busy errors are only possible in the context of single events with execution times longer than a coarse RTIO clock cycle; the exact parameters will depend on the nature of the output channel (e.g. the specific peripheral device).
Offending event(s) are discarded and the problem is reported asynchronously via the core log. Offending event(s) are discarded and the problem is reported asynchronously via the core log.
Input channels and events Input channels and events
------------------------- -------------------------
Input channels detect events, timestamp them, and place them in a buffer for the experiment to read out. Input channels detect events, timestamp them, and place them in a buffer for the experiment to read out. The following example counts the rising edges occurring during a precisely timed 500 ns interval. If more than 20 rising edges are received, it outputs a pulse::
The following example counts the rising edges occurring during a precisely timed 500 ns interval.
If more than 20 rising edges are received, it outputs a pulse::
if input.count(input.gate_rising(500*ns)) > 20: if input.count(input.gate_rising(500*ns)) > 20:
delay(2*us) delay(2*us)
output.pulse(500*ns) output.pulse(500*ns)
Note that many input methods will necessarily involve the wall clock catching up to the timeline cursor or advancing before it. Note that many input methods will necessarily involve the wall clock catching up to the timeline cursor or advancing before it. This is to be expected: managing output events means working to plan the future, but managing input events means working to react to the past. For input channels, it is the past that is under discussion.
This is to be expected: managing output events means working to plan the future, but managing input events means working to react to the past.
For input channels, it is the past that is under discussion.
In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.gate_rising` waits for the duration of the 500ns interval (or *gate window*) and records an event for each rising edge. At the end of the interval it exits, leaving the timeline cursor at the end of the interval (``now_mu = rtio_counter_mu``). :meth:`~artiq.coredevice.ttl.TTLInOut.count` unloads these events from the input buffers and counts the number of events recorded, during which the wall clock necessarily advances (``rtio_counter_mu > now_mu``). Accordingly, before we place any further output events, a :func:`~artiq.language.core.delay` is necessary to re-establish positive slack. In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.gate_rising` waits for the duration of the 500ns interval (or *gate window*) and records an event for each rising edge. At the end of the interval it exits, leaving the timeline cursor at the end of the interval (``now_mu = rtio_counter_mu``). :meth:`~artiq.coredevice.ttl.TTLInOut.count` unloads these events from the input buffers and counts the number of events recorded, during which the wall clock necessarily advances (``rtio_counter_mu > now_mu``). Accordingly, before we place any further output events, a :func:`~artiq.language.core.delay` is necessary to re-establish positive slack.
@ -220,13 +186,7 @@ Similar situations arise with methods such as :meth:`TTLInOut.sample_get <artiq.
Overflow exceptions Overflow exceptions
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
The RTIO input channels buffer input events received while an input gate is open, or when using the sampling API (:meth:`TTLInOut.sample_input <artiq.coredevice.ttl.TTLInOut.sample_input>`) at certain points in time. The RTIO input channels buffer input events received while an input gate is open, or when using the sampling API (:meth:`TTLInOut.sample_input <artiq.coredevice.ttl.TTLInOut.sample_input>`) at certain points in time. The events are kept in a FIFO until the CPU reads them out via e.g. :meth:`~artiq.coredevice.ttl.TTLInOut.count`, :meth:`~artiq.coredevice.ttl.TTLInOut.timestamp_mu` or :meth:`~artiq.coredevice.ttl.TTLInOut.sample_get`. The size of these FIFOs is finite and specified in gateware; in practice, it is limited by the resources available to the FPGA, and therefore differs depending on the specific core device being used. If a FIFO is full and another event comes in, this causes an overflow condition. The condition is converted into an :class:`~artiq.coredevice.exceptions.RTIOOverflow` exception that is raised on a subsequent invocation of one of the readout methods. Overflow exceptions are generally best dealt with simply by reading out from the input buffers more frequently. In odd or particular cases, users may consider modifying the length of individual buffers in gateware.
The events are kept in a FIFO until the CPU reads them out via e.g. :meth:`~artiq.coredevice.ttl.TTLInOut.count`, :meth:`~artiq.coredevice.ttl.TTLInOut.timestamp_mu` or :meth:`~artiq.coredevice.ttl.TTLInOut.sample_get`.
The size of these FIFOs is finite and specified in gateware; in practice, it is limited by the resources available to the FPGA, and therefore differs depending on the specific core device being used.
If a FIFO is full and another event comes in, this causes an overflow condition.
The condition is converted into an :class:`~artiq.coredevice.exceptions.RTIOOverflow` exception that is raised on a subsequent invocation of one of the readout methods.
Overflow exceptions are generally best dealt with simply by reading out from the input buffers more frequently. In odd or particular cases, users may consider modifying the length of individual buffers in gateware.
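As a rough illustration (device name hypothetical, and assuming ``RTIOOverflow`` is imported from :mod:`artiq.coredevice.exceptions`), an overflow can at least be caught and handled gracefully: ::

    try:
        n = input.count(input.gate_rising(10*ms))
    except RTIOOverflow:
        # too many events accumulated before readout; recover by reading out
        # more often, e.g. by splitting the gate into several shorter windows
        n = -1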
.. note:: .. note::
It is not possible to provoke an :class:`~artiq.coredevice.exceptions.RTIOOverflow` on a RTIO output channel. While output buffers are also of finite size, and can be filled up, the CPU will simply stall the submission of further events until it is once again possible to buffer them. Among other things, this means that padding the timeline cursor with large amounts of positive slack is not always a valid strategy to avoid :class:`~artiq.coredevice.exceptions.RTIOOverflow` exceptions when generating fast event sequences. In practice only a fixed number of events can be generated in advance, and the rest of the processing will be carried out when the wall clock is much closer to ``now_mu``. It is not possible to provoke an :class:`~artiq.coredevice.exceptions.RTIOOverflow` on a RTIO output channel. While output buffers are also of finite size, and can be filled up, the CPU will simply stall the submission of further events until it is once again possible to buffer them. Among other things, this means that padding the timeline cursor with large amounts of positive slack is not always a valid strategy to avoid :class:`~artiq.coredevice.exceptions.RTIOOverflow` exceptions when generating fast event sequences. In practice only a fixed number of events can be generated in advance, and the rest of the processing will be carried out when the wall clock is much closer to ``now_mu``.
@ -251,8 +211,7 @@ Note that event spreading can be particularly helpful in DRTIO satellites, as it
Seamless handover Seamless handover
----------------- -----------------
The timeline cursor persists across kernel invocations. The timeline cursor persists across kernel invocations. This is demonstrated in the following example where a pulse is split across two kernels::
This is demonstrated in the following example where a pulse is split across two kernels::
def run(): def run():
k1() k1()

View File

@ -0,0 +1,202 @@
Data and user interfaces
========================
Beyond running single experiments, or basic use of the master, client, and dashboard, ARTIQ supports a much broader range of interactions between its different components. These integrations allow for extensive, flexible, and dynamic system control, with room for different workflows and methodologies depending on the needs of particular sets of experiments. We will now explore some of these further tools and systems.
.. note::
This page follows up directly on :doc:`getting_started_mgmt`, but discusses a broader range of features, most of which continue to rest on the foundation of the management system. Some sections (datasets, the browser) can still be tried out using your PC alone; others (MonInj, the RTIO analyzer) are only meaningful in relation to real-time hardware.
.. _mgmt-datasets:
Datasets and results
--------------------
ARTIQ uses the concept of *datasets* to manage the data exchanged with experiments, both supplied *to* experiments (generally, from other experiments) and saved *from* experiments (i.e. results or records). We will now see how to view and manipulate datasets, both in experiments and through the management system. Create a new experiment as follows: ::
    from artiq.experiment import *
    import numpy as np
    import time

    class Datasets(EnvExperiment):
        """Dataset tutorial"""
        def build(self):
            pass  # no devices used

        def run(self):
            self.set_dataset("parabola", np.full(10, np.nan), broadcast=True)
            for i in range(10):
                self.mutate_dataset("parabola", i, i*i)
                time.sleep(0.5)
Save it as ``dataset_tutorial.py``. Commit and push your changes, or rescan the repository, whichever is appropriate for your management system configuration. Submit the experiment. In the same window as 'Explorer', navigate to the 'Dataset' tab and observe that a new dataset has been created under the name ``parabola``. As the experiment runs, the values are progressively calculated and entered.
.. note::
By default, datasets are primarily attributes of the experiments that run them, and are not shared with the master or the dashboard. The ``broadcast=True`` argument specifies that a dataset should be shared in real-time with the master, which is then responsible for storing it in the dataset database ``dataset_db.mdb`` and dispatching it to any clients. For more about dataset options see the :ref:`Environment <environment-datasets>` page.
As long as ``archive=False`` is not explicitly set, datasets are among the information preserved by the master in the ``results`` folder. The files in ``results`` are organized in subdirectories based on the time they were executed, as ``results/<date>/<hour>/``; their individual filenames are a combination of the RID assigned to the execution and the name of the experiment module itself. As such, results are stored in a unique and identifiable location for each iteration of an experiment, even if a dataset is overwritten in the master.
You can open the result file for this experiment with HDFView, h5dump, or any similar third-party tool. Observe that it contains the dataset we just generated, as well as other useful information such as RID, run time, start time, and the Git commit ID of the repository at the time of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645``).
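If you prefer to inspect the file programmatically, a minimal sketch using ``h5py`` might look like this (the file name is a placeholder; substitute the RID and module name of your own run): ::

    import h5py

    # hypothetical path of the form results/<date>/<hour>/<RID>-<module>.h5
    with h5py.File("results/2024-08-30/15/000000042-dataset_tutorial.h5", "r") as f:
        parabola = f["datasets/parabola"][:]   # broadcast/archived datasets live under 'datasets'
        print(parabola)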
.. tip::
If you are not familiar with Git, try running ``git log`` in either of your connected Git repositories to see a history of commits in the repository which includes their respective hashes. As long as this history remains intact, you can use a hash of this kind to uniquely identify, and even retrieve, the state of the files in the repository at the time this experiment was run. In other words, when running experiments from a Git repository, it's always possible to retrieve the code that led to a particular set of results.
Applets
^^^^^^^
Most of the time, rather than the HDF dump, we would like to see our result datasets in a readable graphical form, preferably without opening any third-party applications. In the ARTIQ dashboard, this is achieved by programs called "applets". Applets provide simple, modular GUI features; they are run independently of the dashboard, as separate processes, for the sake of modularity and resilience. ARTIQ supplies several applets for basic plotting in the :mod:`artiq.applets` module, and provides interfaces so users can write their own.
.. seealso::
When developing your own applets, see also the references provided on the :ref:`Management system reference<applet-references>` page of this manual.
For our ``parabola`` dataset, we will create an XY plot using the provided :mod:`artiq.applets.plot_xy`. Applets are configured with simple command line options. To figure out what configurations are accepted, use the ``-h`` flag, as in: ::
$ python3 -m artiq.applets.plot_xy -h
In our case, we only need to supply our dataset to the applet to be plotted. Navigate to the "Applet" dock in the dashboard. Right-click in the empty list and select "New applet from template" and "XY". This will generate a version of the applet command which shows all the configuration options; edit the line so that it retrieves the ``parabola`` dataset and erase the unused options. It should look like: ::
${artiq_applet}plot_xy parabola
Run the experiment again, and observe how the points are added to the plot in the applet window as they are generated.
.. tip::
Datasets and applets can both be arranged in groups for organizational purposes. (In fact, so can arguments; see the reference for :meth:`~artiq.language.environment.HasEnvironment.setattr_argument`). For datasets, use a dot (``.``) in names to separate folders. For applets, left-click in the applet list to see the option 'Create Group'. You can drag and drop to move applets in and out of groups, or select a particular group with a click to create new applets in that group. Deselect applets or groups with CTRL+click.
.. tip::
You can close all open, undocked applets with the shortcut CTRL+ALT+W. Docked applets will remain where they are. This is a convenient way to clean up after exploratory work without destroying a carefully arranged workspace.
The ARTIQ browser
^^^^^^^^^^^^^^^^^
ARTIQ also possesses a second GUI, specifically targeted for the manipulation and analysis of datasets, called the ARTIQ browser. It is independent, and does not require either a running master or a core device to operate; a connection to the master is only necessary if you want to upload edited datasets back to the main management system. Open ``results`` in the browser by running: ::
$ cd ~/artiq-master
$ artiq_browser ./results
Navigate to the entry containing your ``parabola`` dataset in the file explorers on the left. To bring the dataset into the browser, click on the HDF5 file.
To open an experiment, click on 'Experiment' at the top left. Observe that instead of 'Submit', the option given is 'Analyze'. Where :mod:`~artiq.frontend.artiq_run` and :mod:`~artiq.frontend.artiq_master` ultimately call :meth:`~artiq.language.environment.Experiment.prepare`, :meth:`~artiq.language.environment.Experiment.run`, and :meth:`~artiq.language.environment.Experiment.analyze`, the browser limits itself to :meth:`~artiq.language.environment.Experiment.analyze`. Nonetheless, it still accepts arguments.
As described later in :ref:`experiment-scheduling`, only :meth:`~artiq.language.environment.Experiment.run` is obligatory for experiments to implement, and only :meth:`~artiq.language.environment.Experiment.run` is permitted to access hardware; the preparation and analysis stages occur before and after, and are limited to the host machine. The browser allows for re-running the post-experiment :meth:`~artiq.language.environment.Experiment.analyze`, potentially with different arguments or an edited algorithm, while accessing the datasets from opened ``results`` files.
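As a minimal sketch of what such a re-analysis might look like (the fit is purely illustrative): ::

    from artiq.experiment import *
    import numpy as np

    class ParabolaAnalysis(EnvExperiment):
        """Refit the parabola dataset"""
        def build(self):
            pass

        def run(self):
            pass  # not used by the browser

        def analyze(self):
            y = self.get_dataset("parabola")
            x = np.arange(len(y))
            # fit a second-order polynomial to the stored points
            self.set_dataset("parabola_fit", np.polyfit(x, y, 2))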
Notably, the browser does not merely act as an HDF5 viewer, but also allows the use of ARTIQ applets to plot and view the data. For this, see the lower left dock; applets can be opened, closed, and managed just as they are in the dashboard, once again accessing datasets from ``results``.
.. _mgmt-ctlmgr:
Non-RTIO devices and the controller manager
-------------------------------------------
As described in :doc:`rtio`, there are two classes of equipment a laboratory typically finds itself needing to operate. So far, we have largely discussed ARTIQ in terms of one only: specialized hardware which requires the very high-resolution timing control ARTIQ provides. The other class comprises the broad range of regular, "slow" laboratory devices, which do *not* require nanosecond precision and can generally be operated perfectly well from a regular PC over a non-realtime channel such as USB.
To handle these "slow" devices, ARTIQ uses *controllers*, intermediate pieces of software which are responsible for the direct I/O to these devices and offer RPC interfaces to the network. By convention, ARTIQ controllers are named with the prefix ``aqctl_``. Controllers can be started and run standalone, but are generally handled through the *controller manager*, :mod:`~artiq_comtools.artiq_ctlmgr`. The controller manager in turn communicates with the ARTIQ master, and through it with clients or the GUI.
Like clients, controllers do not need to be run on the same machine as the master. Various controllers in a device database may in fact be distributed across multiple machines, in whatever way is most convenient for the devices in question, alleviating cabling issues and OS compatibility problems. Each machine running controllers must run its own controller manager. Communication with the master happens over the network. Use the ``-s`` flag of :mod:`~artiq_comtools.artiq_ctlmgr` to set the IP address or hostname of a master to bind to.
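For example, on a secondary machine that only runs controllers (the master address is an assumption): ::

    $ artiq_ctlmgr -s 192.168.1.51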
.. tip::
The controller manager is made available through the ``artiq-comtools`` package, maintained separately from the main ARTIQ repository. It is considered a dependency of ARTIQ, and is normally included in any ARTIQ installation, but can also be installed independently. This is especially useful when controllers are widely distributed; instead of installing ARTIQ on every machine that runs controllers, only ``artiq-comtools`` and its much lighter set of dependencies are necessary. See the source repository `here <https://github.com/m-labs/artiq-comtools>`_.
We have already used the controller manager in the previous part of the tutorial. To run it, the only command necessary is: ::
$ artiq_ctlmgr
Note however that in order for the controller manager to be able to start a controller, the controller in question must first exist and be properly installed on the machine the manager is running on. For laboratory devices, this normally means it must be part of a complete Network Device Support Package, or NDSP. :doc:`Some NDSPs are already available <list_of_ndsps>`. If your device is not on this list, the protocol is designed to make it relatively simple to write your own; for more information and a tutorial, see the :doc:`developing_a_ndsp` page.
Once a device is correctly listed in ``device_db.py``, it can be added to an experiment using ``self.setattr_device([device_name])`` and the methods its API offers called straightforwardly as ``self.[device_name].[method_name]``. As long as the requisite controllers are running and available, the experiment can then be executed with :mod:`~artiq.frontend.artiq_run` or through the management system. To understand how to add controllers to the device database, see also :ref:`device-db`.
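As an illustration, a controller entry and its use in an experiment might look like the following sketch (the ``my_synth`` device, its NDSP, and its ``set_frequency()`` method are all hypothetical): ::

    # device_db.py (fragment) -- hypothetical NDSP controller entry
    device_db["my_synth"] = {
        "type": "controller",
        "host": "192.168.1.220",
        "port": 3280,
        "command": "aqctl_my_synth -p {port} --bind {bind}",
    }

    # experiment using the controller
    from artiq.experiment import *

    class SetSynth(EnvExperiment):
        def build(self):
            self.setattr_device("my_synth")

        def run(self):
            # plain host-side Python; the call goes to the controller over the network
            self.my_synth.set_frequency(10e6)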
.. _built-in-ctlrs:
ARTIQ built-in controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^
Certain built-in controllers are also included in a standard ARTIQ installation, and can be run directly in your ARTIQ shell. They are listed at the end of the :ref:`Utilities <utilities-ctrls>` reference (the commands prefixed with ``aqctl_`` rather than ``artiq_``) and included by default in device databases generated with :mod:`~artiq.frontend.artiq_ddb_template`.
Broadly speaking, these controllers are edge cases, serving as proxies for interactions between clients and the core device, which otherwise do not make direct contact with each other. Features like dashboard MonInj and the RTIO analyzer's Waveform tab, both discussed in more depth below, depend upon a respective proxy controller to function. A proxy controller is also used to communicate the core log to dashboards.
Although they are listed in the references for completeness' sake, there is normally no reason to run the built-in controllers independently. A controller manager run alongside the master (or anywhere else, provided the given addresses are edited accordingly; proxy controllers communicate with the core device by network just as the master does) is more than sufficient.
.. _interactivity-moninj:
Using MonInj
------------
One of ARTIQ's most convenient features is the Monitor/Injector, commonly known as MonInj. This feature allows for checking (monitoring) the state of various peripherals and setting (injecting) values for their parameters, directly and without any need to explicitly run an experiment for either. MonInj is integrated into ARTIQ on a gateware level, and (except in the case of injection on certain peripherals) can be used in parallel to running experiments, without interrupting them.
In order to use dashboard MonInj, ``aqctl_moninj_proxy`` or a local controller manager must be running. Given this, navigate to the dashboard's ``MonInj`` tab. Mouse over the second button at the top of the dock, which is labeled 'Add channels'. Clicking on it will open a small pop-up, which allows you to select RTIO channels from those currently available in your system.
.. note::
Multiple channels can be selected and added simultaneously. The button with a folder icon allows opening independent pop-up MonInj docks, into which channels can also be added. Configurations of docks and channels will be remembered between dashboard restarts.
.. warning::
Not all ARTIQ/Sinara real-time peripherals support both monitoring *and* injection, and some do not yet support either. Which peripherals belong to which categories has varied somewhat over the history of ARTIQ versions. Depending on the complexity of the peripheral, incorporating monitor or injection support represents a nontrivial engineering effort, which has generally only been undertaken when commissioned by particular research groups or users. The pop-up menu will display only channels that are valid targets for one or the other functionality.
For DDS/Urukul in particular, injection is supported by a slightly different implementation, which involves automatic submission of a miniature kernel which will override and terminate any other experiments currently executing. Accordingly, Urukul injection should be used carefully.
MonInj can always be tested using the user LEDs, which you can find in the folder ``ttl`` in the pop-up menu. Channels are listed according to the types and names given in ``device_db.py``. Add your LED channels to the main dock; their monitored values will be displayed automatically. Try running any experiment that has an effect on LED state to see the monitored values change.
Mouse over one of the LED channel fields to see the two buttons ``OVR``, for override, and ``LVL``, for level. Clicking 'Override' will cause MonInj to take direct control of the channel, overriding any experiments that may be running. Once the channel is overridden, its level can be changed directly from the dashboard, by clicking 'Level' to flip it back and forth.
Command-line monitor
^^^^^^^^^^^^^^^^^^^^
For those peripherals which support monitoring, the command-line :mod:`~artiq.frontend.artiq_rtiomon` utility can be used to see monitor output directly in the terminal. The command-line monitor does not require or interact with the management system or even the device database. Instead, it takes the core device IP address and a channel number as parameters and communicates with the core device directly.
.. tip::
To remember which channel numbers were assigned to which peripherals, check your device database, specifically the ``channel`` field in local entries.
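For example (the core device address and channel number are placeholders): ::

    $ artiq_rtiomon 192.168.1.75 0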
.. _interactivity-waveform:
Waveform
--------
The RTIO analyzer was briefly presented in :ref:`rtio-analyzer`. Like MonInj, it is directly accessible to the dashboard through its own proxy controller, :mod:`~artiq.frontend.aqctl_coreanalyzer_proxy`. To see it in action with the management system, navigate to the 'Waveform' tab of the dashboard. The dock should display several buttons and a currently empty list of waveforms, distinguishable only by the timeline along the top of the field. Use the 'Add channels' button, similar to that used by MonInj, to add waveforms to the list, for example ``rtio_slack`` and the ``led0`` user LED.
The circular arrow 'Fetch analyzer data' button has the same basic effect as using the command-line :mod:`~artiq.frontend.artiq_coreanalyzer`: it extracts the full contents of the circular analyzer buffer. In order to start from a clean slate, click the fetch button a few times, until the ``analyzer dump is empty aside from stop message`` warning appears. Try running a simple experiment, for example this one, which underflows: ::
    from artiq.experiment import *

    class BlinkToUnderflow(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
            self.core.reset()
            for i in range(1000):
                self.led0.pulse(.2*us)
                delay(.2*us)
Now fetch the analyzer data again (only once)! Visible waveforms should appear in their respective fields. If nothing is visible to you, the timescale is likely zoomed too far out; adjust by zooming with CTRL+scroll and moving along the timeline by dragging it with your mouse. On a clean slate, ``BlinkToUnderflow`` should represent the first RTIO events on the record, and the waveforms accordingly will be displayed at the very beginning of the timeline.
Eventually, you should be able to see the up-and-down 'square wave' pattern of the blinking LED, coupled with a steadily descending line in the RTIO slack, representing the progressive wearing away of the slack gained using ``self.core.reset()``. This kind of analysis can be especially useful in diagnosing underflows; with some practice, the waveform can be used to ascertain which parts of an experiment are consuming the greatest amounts of slack, thereby causing underflows down the line.
.. tip::
File options in the top left allow for saving and exporting RTIO traces and channel lists (including to VCD), as well as opening them from saved files.
RTIO logging
^^^^^^^^^^^^
It is possible to dump any Python object so that it appears alongside the waveforms, using the built-in ``rtio_log()`` function, which accepts a log name as its first parameter and an arbitrary number of objects along with it. Try adding it to the ``BlinkToUnderflow`` experiment: ::
    @kernel
    def run(self):
        self.core.reset()
        for i in range(1000):
            self.led0.pulse(.2*us)
            rtio_log("test_trace", "i", i)
            delay(.2*us)
Run this edited experiment. Fetch the analyzer data. Open the 'Add channels' pop-up again; ``test_trace`` should appear as an option now that the experiment has been run. Observe that every ``i`` is printed as a single-point event in a new waveform timeline.
Shortcuts
---------
The last notable tab of the dashboard is called 'Shortcuts'. To demonstrate its use, navigate to the 'Explorer' tab, left-click on an experiment, and select 'Set shortcut'. Binding an experiment to one of the available keys will cause it to be automatically submitted any time the key is pressed. The 'Shortcuts' tab simply displays the current set of bound experiments, and provides controls for opening a submission window or deleting the shortcut.
.. note::
Experiments submitted by shortcut will always use the arguments currently entered into the submission window, if one is open. If no window is currently open, they will simply use the values *last* entered into a submission window. This is true even if those values were never used to submit an experiment.
It is also possible to bring up a "Quick Open" dialogue where experiments can be called up by searching part of their name. This is bound to the hotkey CTRL+P. To immediately start the experiment with its default arguments, hit CTRL+ENTER.
@ -1,19 +1,17 @@
.. _drtio-and-subkernels:
Using DRTIO and subkernels Using DRTIO and subkernels
========================== ==========================
In larger or more spread-out systems, a single core device might not be suited to managing all the RTIO operations or channels necessary. For these situations ARTIQ supplies Distributed Real-Time IO, or DRTIO. This allows systems to be configured with some or all of their RTIO channels distributed to one or several *satellite* core devices, which are linked to the *master* core device. These remote channels are then accessible in kernels on the master device exactly like local channels. In larger or more spread-out systems, a single core device might not be suited to managing all the RTIO operations or channels necessary. For these situations ARTIQ supplies Distributed Real-Time IO, or DRTIO. This allows systems to be configured with some or all of their RTIO channels distributed to one or several *satellite* core devices, which are linked to the *master* core device. These remote channels are then accessible in kernels on the master device exactly like local channels.
While the components of a system, as well as the distribution of peripherals among satellites, are necessarily fixed in the system configuration, the specific topology of core and satellite links is flexible and can be changed whenever necessary. It is supplied to the core device by means of a routing table (see below). Kasli and Kasli-SoC devices use the SFP ports for DRTIO connections, whereunder links should be high-speed duplex serial lines operating 1Gbps or more. While the components of a system, as well as the distribution of peripherals among satellites, are necessarily fixed in the system configuration, the specific topology of master and satellite links is flexible and can be changed whenever necessary. It is supplied to the core device by means of a routing table (see below). Kasli and Kasli-SoC devices use SFP ports for DRTIO connections. Links should be high-speed duplex serial lines operating 1Gbps or more.
Certain peripheral cards with onboard FPGAs of their own (e.g. Shuttler) can be configured as satellites in a DRTIO setting, allowing them to run their own subkernels and make use of DDMA. In these cases, the EEM connection to the core device is used for DRTIO communication (DRTIO-over-EEM). Certain peripheral cards with onboard FPGAs of their own (e.g. Shuttler) can be configured as satellites in a DRTIO setting, allowing them to run their own subkernels and make use of DDMA. In these cases, the EEM connection to the core device is used for DRTIO communication (DRTIO-over-EEM).
.. note:: .. note::
As with other configuration changes (e.g. adding new hardware), if you are in possession of a non-distributed ARTIQ system and you'd like to expand it into a DRTIO setup, it's easily possible to do so, but you need to be sure that both master and satellite are (re)flashed with this in mind. As usual, if you obtained your hardware from M-Labs, you will normally be supplied with all the binaries you need, through ``afws_client`` or otherwise. As with other configuration changes (e.g. adding new hardware), if you are in possession of a non-distributed ARTIQ system and you'd like to expand it into a DRTIO setup, it's easily possible to do so, but you need to be sure that both master and satellite are (re)flashed with this in mind. As usual, if you obtained your hardware from M-Labs, you will normally be supplied with all the binaries you need, through :mod:`~artiq.frontend.afws_client` or otherwise.
.. note:: .. warning::
Do not confuse the DRTIO *master device* (used to mean the central controlling core device of a distributed system) with the *ARTIQ master* (the central piece of software of ARTIQ's management system, which interacts with ``artiq_client`` and the dashboard.) ``artiq_run`` can be used to run experiments on DRTIO systems just as easily as non-distributed ones, and the ARTIQ master interacts with the central core device regardless of whether it's configured as a DRTIO master or standalone. Do not confuse the DRTIO *master device* (used to mean the central controlling core device of a distributed system) with the *ARTIQ master* (the central piece of software of ARTIQ's management system, which interacts with :mod:`~artiq.frontend.artiq_client` and the dashboard.) :mod:`~artiq.frontend.artiq_run` can be used to run experiments on DRTIO systems just as easily as non-distributed ones, and the ARTIQ master interacts with the central core device regardless of whether it's configured as a DRTIO master or standalone.
Using DRTIO Using DRTIO
----------- -----------
@ -23,7 +21,7 @@ Using DRTIO
Configuring the routing table Configuring the routing table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, DRTIO assumes a routing table for a star topology (i.e. all satellites directly connected to the master), with destination 0 being the master device's local RTIO core and destinations 1 and above corresponding to devices on the master's respective downstream ports. To use any other topology, it is necessary to supply a corresponding routing table in the form of a binary file, written to flash storage under the key ``routing_table``. The binary file is easily generated in the correct format using ``artiq_route``. This example is for a chain of 3 devices: :: By default, DRTIO assumes a routing table for a star topology (i.e. all satellites directly connected to the master), with destination 0 being the master device's local RTIO core and destinations 1 and above corresponding to devices on the master's respective downstream ports. To use any other topology, it is necessary to supply a corresponding routing table in the form of a binary file, written to flash storage under the key ``routing_table``. The binary file is easily generated in the correct format using :mod:`~artiq.frontend.artiq_route`. This example is for a chain of 3 devices: ::
# create an empty routing table # create an empty routing table
$ artiq_route rt.bin init $ artiq_route rt.bin init
@ -89,16 +87,16 @@ Enabling DDMA on a purely local sequence on a DRTIO system introduces an overhea
Subkernels Subkernels
---------- ----------
Rather than only offloading the RTIO channels to satellites and limiting all processing to the master core device, it is fully possible to run kernels directly on satellite devices. These are referred to as *subkernels*. Using subkernels to process and control remote RTIO channels can free up resources on the core device. Rather than only offloading the RTIO channels to satellites and limiting all processing to the master core device, it is also possible to run kernels directly on satellite devices. These are referred to as *subkernels*. Using subkernels to process and control remote RTIO channels can free up resources on the core device.
Subkernels behave for the most part like regular kernels; they accept arguments, can return values, and are marked by the decorator ``@subkernel(destination=i)``, where ``i`` is the satellite's destination number as used in the routing table. To call a subkernel, call it like any other function. There are however a few caveats: Subkernels behave for the most part like regular kernels; they accept arguments, can return values, and are marked by the decorator ``@subkernel(destination=i)``, where ``i`` is the satellite's destination number as used in the routing table. To call a subkernel, call it like any other function. There are however a few caveats:
- subkernels do not support RPCs, - subkernels do not support RPCs,
- subkernels do not support (recursive) DRTIO (but they can call other subkernels and send messages to each other, see below), - subkernels do not support (recursive) DRTIO (but they can call other subkernels and send messages to each other, see below),
- they support DMA, for which DDMA is considered always enabled, - they support DMA, for which DDMA is considered always enabled,
- they can raise exceptions, which they may catch locally or propagate to the calling kernel,
- their return values must be fully annotated with an ARTIQ type, - their return values must be fully annotated with an ARTIQ type,
- their arguments should be annotated, and only basic ARTIQ types are supported, - their arguments should be annotated, and only basic ARTIQ types are supported,
- they can raise exceptions, but the exceptions cannot be caught by the master (they can only be caught locally or propagated directly to the host),
- while ``self`` is allowed as an argument, it is retrieved at compile time and exists as a purely local object afterwards. Any changes made by other kernels will not be visible, and changes made locally will not be applied anywhere else. - while ``self`` is allowed as an argument, it is retrieved at compile time and exists as a purely local object afterwards. Any changes made by other kernels will not be visible, and changes made locally will not be applied anywhere else.
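As a minimal sketch of the idea (the destination number, the ``ttl_sat0`` channel, and the use of ``subkernel_await()`` to wait for completion are assumptions for illustration): ::

    from artiq.experiment import *

    class SubkernelDemo(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl_sat0")    # assumed channel on satellite 1

        @subkernel(destination=1)
        def pulse_remote(self):
            # executes on the satellite's own CPU
            self.ttl_sat0.pulse(5*us)

        @kernel
        def run(self):
            self.core.reset()
            delay(10*ms)                       # leave time for the subkernel to be loaded
            self.pulse_remote()                # starts asynchronously on the satellite
            subkernel_await(self.pulse_remote) # block until it has finished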
Subkernels in practice Subkernels in practice
@ -176,7 +174,7 @@ Subkernels can call other kernels and subkernels. For a more complex example: ::
In this case, without the preload, the delay after the core reset would need to be longer. Depending on the connection, the call may still take some time in itself. Notice that the method ``pulse_ttl()`` can be called both within a subkernel and on its own. In this case, without the preload, the delay after the core reset would need to be longer. Depending on the connection, the call may still take some time in itself. Notice that the method ``pulse_ttl()`` can be called both within a subkernel and on its own.
.. note:: .. note::
Subkernels can call subkernels on any other satellite, not only their own. Care should however be taken that different kernels do not call subkernels on the same satellite, or only very cautiously. If, e.g., a newer call overrides a subkernel that another caller is awaiting, unpredictable timeouts or locks may result, as the original subkernel will never return. There is not currently any mechanism to check whether a particular satellite is 'busy'; it is up to the programmer to handle this correctly. Subkernels can call subkernels on any other satellite, not only their own. Care should however be taken that different kernels do not call subkernels on the same satellite, or only very cautiously. If, e.g., a newer call overrides a subkernel that another caller is awaiting, unpredictable timeouts or locks may result, as the original subkernel will never return. There is no mechanism to check whether a particular satellite is 'busy'; it is up to the programmer to handle this correctly.
Message passing Message passing
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
@ -4,11 +4,15 @@ Utilities
.. Sort these tool by some subjective combination of their .. Sort these tool by some subjective combination of their
typical sequence and expected frequency of use. typical sequence and expected frequency of use.
.. As in main_frontend_reference, automodule directives display nothing and are here to make :mod: links possible
.. _afws-client: .. _afws-client:
ARTIQ Firmware Service (AFWS) client ARTIQ Firmware Service (AFWS) client
------------------------------------ ------------------------------------
.. automodule:: artiq.frontend.afws_client
This tool serves as a client for building tailored firmware and gateware from M-Lab's servers and downloading the binaries in ready-to-flash format. It is necessary to have a valid subscription to AFWS to use it. Subscription also includes general helpdesk support and can be purchased or extended by contacting ``sales@``. One year of support is included with any Kasli carriers or crates containing them purchased from M-Labs. Additional one-time use is generally provided with purchase of additional cards to facilitate the system configuration change. This tool serves as a client for building tailored firmware and gateware from M-Lab's servers and downloading the binaries in ready-to-flash format. It is necessary to have a valid subscription to AFWS to use it. Subscription also includes general helpdesk support and can be purchased or extended by contacting ``sales@``. One year of support is included with any Kasli carriers or crates containing them purchased from M-Labs. Additional one-time use is generally provided with purchase of additional cards to facilitate the system configuration change.
.. argparse:: .. argparse::
@ -24,7 +28,9 @@ This tool serves as a client for building tailored firmware and gateware from M-
Static compiler Static compiler
--------------- ---------------
This tool compiles an experiment into a ELF file. It is primarily used to prepare binaries for the startup and idle kernels, loaded in non-volatile storage of the core device. Experiments compiled with this tool are not allowed to use RPCs, and their ``run`` entry point must be a kernel. .. automodule:: artiq.frontend.artiq_compile
Compiles an experiment into a ELF file (or a TAR file if the experiment involves subkernels). It is primarily used to prepare binaries for the startup and idle kernels, loaded in non-volatile storage of the core device. Experiments compiled with this tool are not allowed to use RPCs, and their ``run`` entry point must be a kernel.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_compile.get_argparser :ref: artiq.frontend.artiq_compile.get_argparser
@ -35,7 +41,9 @@ This tool compiles an experiment into a ELF file. It is primarily used to prepar
Flash storage image generator Flash storage image generator
----------------------------- -----------------------------
This tool compiles key/value pairs (e.g. configuration information) into a binary image suitable for flashing into the storage space of the core device. It can be used in combination with ``artiq_flash`` to configure the core device, but this is normally necessary at most to set the ``ip`` field; once the core device is reachable by network it is preferable to use ``artiq_coremgmt config``. .. automodule:: artiq.frontend.artiq_mkfs
Compiles key/value pairs (e.g. configuration information) into a binary image suitable for flashing into the storage space of the core device. It can be used in combination with ``artiq_flash`` to configure the core device, but this is normally necessary at most to set the ``ip`` field; once the core device is reachable by network it is preferable to use ``artiq_coremgmt config``. Not applicable to ARTIQ-Zynq, where preconfiguration is better achieved by loading ``config.txt`` onto the SD card.
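For example (the key/value pair is a placeholder): ::

    $ artiq_mkfs flash_storage.img -s ip 192.168.1.75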
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_mkfs.get_argparser :ref: artiq.frontend.artiq_mkfs.get_argparser
@ -48,9 +56,14 @@ This tool compiles key/value pairs (e.g. configuration information) into a binar
Flashing/Loading tool Flashing/Loading tool
--------------------- ---------------------
.. automodule:: artiq.frontend.artiq_flash
Allows for flashing and loading of various files onto the core device. Not applicable to ARTIQ-Zynq, where gateware and firmware should be loaded onto the core device with :mod:`~artiq.frontend.artiq_coremgmt`, directly copied onto the SD card, or (for developers) using the :ref:`ARTIQ netboot <zynq-jtag-boot>` utility.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_flash.get_argparser :ref: artiq.frontend.artiq_flash.get_argparser
:prog: artiq_flash :prog: artiq_flash
:nodescription:
:nodefault: :nodefault:
.. _core-device-management-tool: .. _core-device-management-tool:
@ -58,9 +71,11 @@ Flashing/Loading tool
Core device management tool Core device management tool
--------------------------- ---------------------------
The core management utility gives remote access to the core device logs, the :ref:`core-device-flash-storage`, and other management functions. .. automodule:: artiq.frontend.artiq_coremgmt
To use this tool, it is necessary to specify the IP address your core device can be contacted at. If no option is used, the utility will assume there is a file named ``device_db.py`` in the current directory containing the device database; otherwise, a device database file can be provided with ``--device-db`` or an address directly with ``--device`` (see also below). The core management utility gives remote access to the core device logs, the :ref:`core device flash storage <configuration-storage>`, and other management functions.
To use this tool, it is necessary to specify the IP address your core device can be contacted at. If no option is used, the utility will assume there is a file named ``device_db.py`` in the current directory containing the :ref:`device database <device-db>`; otherwise, a device database file can be provided with ``--device-db`` or an address directly with ``--device`` (see also below).
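For example, to read the core device log over the network (the address is a placeholder): ::

    $ artiq_coremgmt -D 192.168.1.75 log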
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_coremgmt.get_argparser :ref: artiq.frontend.artiq_coremgmt.get_argparser
@ -68,38 +83,19 @@ To use this tool, it is necessary to specify the IP address your core device can
:nodescription: :nodescription:
:nodefault: :nodefault:
Core device logging controller
------------------------------
.. argparse::
:ref: artiq.frontend.aqctl_corelog.get_argparser
:prog: aqctl_corelog
:nodefault:
.. _ddb-template-tool: .. _ddb-template-tool:
Device database template generator Device database template generator
---------------------------------- ----------------------------------
.. automodule:: artiq.frontend.artiq_ddb_template
This tool generates a basic template for a :ref:`device database <device-db>` given the JSON description file(s) for the system. Entries for :ref:`controllers <environment-ctlrs>` are not generated.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_ddb_template.get_argparser :ref: artiq.frontend.artiq_ddb_template.get_argparser
:prog: artiq_ddb_template :prog: artiq_ddb_template
:nodefault: :nodescription:
ARTIQ RTIO monitor
------------------
.. argparse::
:ref: artiq.frontend.artiq_rtiomon.get_argparser
:prog: artiq_rtiomon
:nodefault:
Moninj proxy
------------
.. argparse::
:ref: artiq.frontend.aqctl_moninj_proxy.get_argparser
:prog: aqctl_moninj_proxy
:nodefault: :nodefault:
.. _rtiomap-tool: .. _rtiomap-tool:
@ -107,17 +103,24 @@ Moninj proxy
RTIO channel name map tool RTIO channel name map tool
-------------------------- --------------------------
.. automodule:: artiq.frontend.artiq_rtiomap
This tool encodes the map of RTIO channel numbers to names in a format suitable for writing to the config key ``device_map``. See :ref:`config-rtiomap`.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_rtiomap.get_argparser :ref: artiq.frontend.artiq_rtiomap.get_argparser
:prog: artiq_rtiomap :prog: artiq_rtiomap
:nodescription:
:nodefault: :nodefault:
.. _core-device-rtio-analyzer-tool:
Core device RTIO analyzer tool Core device RTIO analyzer tool
------------------------------ ------------------------------
This tool converts core device RTIO logs to VCD waveform files that are readable by third-party tools such as GtkWave. See :ref:`rtio-analyzer-example` for an example, or ``artiq.test.coredevice.test_analyzer`` for a relevant unit test. When using the ARTIQ dashboard, recorded data can be viewed or exported directly in the integrated waveform analyzer (the "Waveform" dock). .. automodule:: artiq.frontend.artiq_coreanalyzer
This tool retrieves core device RTIO logs either as raw data or as VCD waveform files, which are readable by third-party tools such as GtkWave. See :ref:`rtio-analyzer` for an example, or :mod:`artiq.test.coredevice.test_analyzer` for a relevant unit test.
Using the management system, the respective functionality is provided by :mod:`~artiq.frontend.aqctl_coreanalyzer_proxy` and the dashboard's 'Waveform' tab; see :ref:`interactivity-waveform`.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_coreanalyzer.get_argparser :ref: artiq.frontend.artiq_coreanalyzer.get_argparser
@ -127,21 +130,60 @@ This tool converts core device RTIO logs to VCD waveform files that are readable
.. _routing-table-tool: .. _routing-table-tool:
Core device RTIO analyzer proxy
-------------------------------
This tool distributes the core analyzer dump to several clients such as the dashboard.
.. argparse::
:ref: artiq.frontend.aqctl_coreanalyzer_proxy.get_argparser
:prog: aqctl_coreanalyzer_proxy
:nodescription:
:nodefault:
DRTIO routing table manipulation tool DRTIO routing table manipulation tool
------------------------------------- -------------------------------------
.. automodule:: artiq.frontend.artiq_route
This tool allows for manipulation of a DRTIO routing table file, which can be transmitted to the core device using :mod:`artiq_coremgmt config write<artiq.frontend.artiq_coremgmt>`; see :ref:`drtio-routing`.
.. argparse:: .. argparse::
:ref: artiq.frontend.artiq_route.get_argparser :ref: artiq.frontend.artiq_route.get_argparser
:prog: artiq_route :prog: artiq_route
:nodescription:
:nodefault:
ARTIQ RTIO monitor
------------------
.. automodule:: artiq.frontend.artiq_rtiomon
Command-line interface for monitoring RTIO channels, as in the Monitor capacity of dashboard MonInj. See :ref:`interactivity-moninj`.
.. argparse::
:ref: artiq.frontend.artiq_rtiomon.get_argparser
:prog: artiq_rtiomon
:nodescription:
:nodefault:
.. _utilities-ctrls:
MonInj proxy
------------
.. automodule:: artiq.frontend.aqctl_moninj_proxy
.. argparse::
:ref: artiq.frontend.aqctl_moninj_proxy.get_argparser
:prog: aqctl_moninj_proxy
:nodefault:
Core device RTIO analyzer proxy
-------------------------------
.. automodule:: artiq.frontend.aqctl_coreanalyzer_proxy
.. argparse::
:ref: artiq.frontend.aqctl_coreanalyzer_proxy.get_argparser
:prog: aqctl_coreanalyzer_proxy
:nodefault:
Core device logging controller
------------------------------
.. automodule:: artiq.frontend.aqctl_corelog
.. argparse::
:ref: artiq.frontend.aqctl_corelog.get_argparser
:prog: aqctl_corelog
:nodefault: :nodefault:
@ -87,7 +87,7 @@
# keep llvm_x in sync with nac3 # keep llvm_x in sync with nac3
propagatedBuildInputs = [ pkgs.llvm_14 nac3.packages.x86_64-linux.nac3artiq-pgo sipyco.packages.x86_64-linux.sipyco pkgs.qt6.qtsvg artiq-comtools.packages.x86_64-linux.artiq-comtools ] propagatedBuildInputs = [ pkgs.llvm_14 nac3.packages.x86_64-linux.nac3artiq-pgo sipyco.packages.x86_64-linux.sipyco pkgs.qt6.qtsvg artiq-comtools.packages.x86_64-linux.artiq-comtools ]
++ (with pkgs.python3Packages; [ pyqtgraph pygit2 numpy dateutil scipy prettytable pyserial h5py pyqt6 qasync tqdm lmdb jsonschema ]); ++ (with pkgs.python3Packages; [ pyqtgraph pygit2 numpy dateutil scipy prettytable pyserial h5py pyqt6 qasync tqdm lmdb jsonschema platformdirs ]);
dontWrapQtApps = true; dontWrapQtApps = true;
postFixup = '' postFixup = ''
@ -257,7 +257,7 @@
inherit (pkgs.texlive) inherit (pkgs.texlive)
scheme-basic latexmk cmap collection-fontsrecommended fncychap scheme-basic latexmk cmap collection-fontsrecommended fncychap
titlesec tabulary varwidth framed fancyvrb float wrapfig parskip titlesec tabulary varwidth framed fancyvrb float wrapfig parskip
upquote capt-of needspace etoolbox booktabs; upquote capt-of needspace etoolbox booktabs pgf pgfplots;
}; };
artiq-frontend-dev-wrappers = pkgs.runCommandNoCC "artiq-frontend-dev-wrappers" {} artiq-frontend-dev-wrappers = pkgs.runCommandNoCC "artiq-frontend-dev-wrappers" {}
@ -293,8 +293,10 @@
version = artiqVersion; version = artiqVersion;
src = self; src = self;
buildInputs = with pkgs.python3Packages; [ buildInputs = with pkgs.python3Packages; [
sphinx sphinx_rtd_theme sphinx sphinx_rtd_theme sphinxcontrib-tikz
sphinx-argparse sphinxcontrib-wavedrom sphinx-argparse sphinxcontrib-wavedrom
] ++ [ latex-artiq-manual artiq-comtools.packages.x86_64-linux.artiq-comtools
pkgs.pdf2svg
]; ];
buildPhase = '' buildPhase = ''
export VERSIONEER_OVERRIDE=${artiqVersion} export VERSIONEER_OVERRIDE=${artiqVersion}
@ -313,9 +315,11 @@
version = artiqVersion; version = artiqVersion;
src = self; src = self;
buildInputs = with pkgs.python3Packages; [ buildInputs = with pkgs.python3Packages; [
sphinx sphinx_rtd_theme sphinx sphinx_rtd_theme sphinxcontrib-tikz
sphinx-argparse sphinxcontrib-wavedrom sphinx-argparse sphinxcontrib-wavedrom
] ++ [ latex-artiq-manual ]; ] ++ [ latex-artiq-manual artiq-comtools.packages.x86_64-linux.artiq-comtools
pkgs.pdf2svg
];
buildPhase = '' buildPhase = ''
export VERSIONEER_OVERRIDE=${artiq.version} export VERSIONEER_OVERRIDE=${artiq.version}
export SOURCE_DATE_EPOCH=${builtins.toString self.sourceInfo.lastModified} export SOURCE_DATE_EPOCH=${builtins.toString self.sourceInfo.lastModified}
@ -355,8 +359,9 @@
packages.x86_64-linux.vivadoEnv packages.x86_64-linux.vivadoEnv
packages.x86_64-linux.vivado packages.x86_64-linux.vivado
packages.x86_64-linux.openocd-bscanspi packages.x86_64-linux.openocd-bscanspi
pkgs.python3Packages.sphinx pkgs.python3Packages.sphinx_rtd_theme pkgs.python3Packages.sphinx pkgs.python3Packages.sphinx_rtd_theme pkgs.pdf2svg
pkgs.python3Packages.sphinx-argparse pkgs.python3Packages.sphinxcontrib-wavedrom latex-artiq-manual pkgs.python3Packages.sphinx-argparse pkgs.python3Packages.sphinxcontrib-wavedrom latex-artiq-manual
pkgs.python3Packages.sphinxcontrib-tikz
]; ];
shellHook = '' shellHook = ''
export QT_PLUGIN_PATH=${pkgs.qt6.qtbase}/${pkgs.qt6.qtbase.dev.qtPluginPrefix}:${pkgs.qt6.qtsvg}/${pkgs.qt6.qtbase.dev.qtPluginPrefix} export QT_PLUGIN_PATH=${pkgs.qt6.qtbase}/${pkgs.qt6.qtbase.dev.qtPluginPrefix}:${pkgs.qt6.qtsvg}/${pkgs.qt6.qtbase.dev.qtPluginPrefix}
@ -6,8 +6,8 @@ import sys
import versioneer import versioneer
if sys.version_info[:2] < (3, 9): if sys.version_info[:2] < (3, 11):
raise Exception("You need Python 3.9+") raise Exception("You need Python 3.11+")
console_scripts = [ console_scripts = [