forked from M-Labs/artiq

Compare commits: master ... drtio-core (20 commits)

Author | SHA1
---|---
occheung | dd7ac81b28
occheung | 33642a3c47
occheung | 3be22571ab
occheung | e2382c2cd5
occheung | eec8f38e35
occheung | c66c0f0bc1
occheung | 0f2b15c584
occheung | c17e0e2b71
occheung | 64c69f46c9
occheung | a1e392fb0e
occheung | cd6e5ff378
occheung | 18e18bdb46
occheung | 67a2c63d2d
occheung | b563533bc8
occheung | 47636ae973
occheung | c4cef77132
occheung | 259c5a2dc5
occheung | aa0fecbb73
occheung | 08c286f3b8
occheung | 86f692fe7b

@@ -51,7 +51,7 @@ Closes #XXX

### Documentation Changes

- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`nix build .#artiq-manual-html; nix build .#artiq-manual-pdf`) to ensure no errors.
- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`cd doc/manual/; make html`) to ensure no errors.

### Git Logistics

@@ -11,7 +11,6 @@ __pycache__/
.ipynb_checkpoints
/doc/manual/_build
/build
/result
/dist
/*.egg-info
/.coverage

@@ -24,8 +23,7 @@ __pycache__/
/artiq/test/results
/artiq/examples/*/results
/artiq/examples/*/last_rid.pyon
/artiq/examples/*/dataset_db.mdb
/artiq/examples/*/dataset_db.mdb-lock
/artiq/examples/*/dataset_db.pyon

# when testing ad-hoc experiments at the root:
/repository/

@@ -8,27 +8,27 @@ Reporting Issues/Bugs

Thanks for `reporting issues to ARTIQ
<https://github.com/m-labs/artiq/issues/new>`_! You can also discuss issues and
ask questions on IRC (the #m-labs channel on OFTC), the `Mattermost chat
<https://chat.m-labs.hk>`_, or in the `forum <https://forum.m-labs.hk>`_.
<https://chat.m-labs.hk>`_, or on the `forum <https://forum.m-labs.hk>`_.

The best bug reports are those which contain sufficient information. With
accurate and comprehensive context, an issue can be resolved quickly and
efficiently. Please consider adding the following data to your issue
report if possible:

* A clear and unique summary that fits into one line. Check that this
  issue has not yet been reported; if it has, add additional information there.
* Precise steps to reproduce (a list of actions that leads to the issue)
* A clear and unique summary that fits into one line. Also check that
  this issue has not yet been reported. If it has, add additional information there.
* Precise steps to reproduce (list of actions that leads to the issue)
* Expected behavior (what should happen)
* Actual behavior (what happens instead)
* Logging message, tracebacks, screenshots, where applicable
* Logging message, trace backs, screen shots where relevant
* Components involved (omit irrelevant parts):

  * Operating system used
  * ARTIQ version (run any command in the form of ``artiq_client --version``)
  * Gateware and firmware loaded to the core device (in the output of
    ``artiq_coremgmt [-D ....] log``)
  * Operating System
  * ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
  * Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
  * Hardware involved


For in-depth information on bug reporting, see:

http://www.chiark.greenend.org.uk/~sgtatham/bugs.html

@@ -38,10 +38,10 @@ https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines

Contributing Code
=================

ARTIQ welcomes contributions. Write bite-size patches that can stand alone,
clean them up, write proper commit messages, add docstrings and unit tests;
ARTIQ welcomes contributions. Write bite-sized patches that can stand alone,
clean them up, write proper commit messages, add docstrings and unittests. Then
``git rebase`` them onto the current master or merge the current master. Verify
that the test suite passes. Then submit a pull request. Expect your contribution
that the testsuite passes. Then submit a pull request. Expect your contribution
to be held up to coding standards (e.g. use ``flake8`` to check yourself).

Checklist for Code Contributions

@@ -51,7 +51,7 @@ Checklist for Code Contributions

- Use correct spelling and grammar. Use your code editor to help you with
  syntax, spelling, and style
- Style: PEP-8 (``flake8``)
- Add or update docstrings and comments
- Add, check docstrings and comments
- Split your contribution into logically separate changes (``git rebase
  --interactive``). Merge (squash, fixup) commits that just fix previous commits
  or amend them. Remove unintended changes. Clean up your commits.

@@ -63,37 +63,12 @@ Checklist for Code Contributions

- Review each of your commits for the above items (``git show``)
- Update ``RELEASE_NOTES.md`` if there are noteworthy changes, especially if
  there are changes to existing APIs
- Check, test, and update the documentation in ``doc/``
- Check, test, and update the unit tests
- Check, test, and update the documentation in `doc/`
- Check, test, and update the unittests
- Close and/or update issues


Contributing Documentation
==========================

ARTIQ welcomes documentation contributions. The ARTIQ manual is hosted online in HTML
form `here <https://m-labs.hk/artiq/manual/>`__ and in PDF form
`here <https://m-labs.hk/artiq/manual.pdf>`__. It is generated from source files
in ``doc/manual``, written in a variant of the
`reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_
markup language processed by `Sphinx <https://www.sphinx-doc.org/en/master/>`_, with
some of the additional reference material processed from inline documentation
in the ARTIQ source itself.

Write bite-size patches that can stand alone, clean them up, write proper commit
messages. Check that your edits render properly and compile without errors: ::

  $ nix build .#artiq-manual-pdf
  $ nix build .#artiq-manual-html

Elaborations, improvements, clarifications and corrections to any of the material
are happily accepted, but special attention is drawn to the manual
`FAQ <https://m-labs.hk/artiq/manual/faq.html>`_, where tips and solutions
are especially easy to add. See also the FAQ's own
`section on the subject <https://m-labs.hk/artiq/manual/faq.html#build-documentation>`_.

Copyright and Sign-Off
======================
----------------------

Authors retain copyright of their contributions to ARTIQ, but whenever possible
should use the GNU LGPL version 3 license for them to be merged.

@@ -133,7 +108,7 @@ can certify the below:

maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

then add a line saying
then you just add a line saying

Signed-off-by: Random J Developer <random@developer.example.org>

@@ -15,7 +15,7 @@ ARTIQ and its dependencies are available in the form of Nix packages (for Linux)

ARTIQ is supported by M-Labs and developed openly. Components, features, fixes, improvements, and extensions are often `funded <https://m-labs.hk/experiment-control/funding/>`_ by and developed for the partnering research groups.

Core technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `Migen-AXI <https://github.com/peteut/migen-axi>`_, `Rust <https://www.rust-lang.org/>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`VexRiscv <https://github.com/SpinalHDL/VexRiscv>`_, `LLVM <https://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt6 <https://www.qt.io/>`_.
Core technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `Migen-AXI <https://github.com/peteut/migen-axi>`_, `Rust <https://www.rust-lang.org/>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`VexRiscv <https://github.com/SpinalHDL/VexRiscv>`_, `LLVM <https://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt5 <https://www.qt.io/>`_.

| Website: https://m-labs.hk/experiment-control/artiq
| (US-hosted mirror: https://m-labs-intl.com/experiment-control/artiq)

@@ -6,14 +6,11 @@ Release notes

ARTIQ-9 (Unreleased)
--------------------

* GUI state files are now automatically backed up upon successful loading.
* Zotino monitoring in the dashboard now displays the values in volts.
* afws_client now uses the "happy eyeballs" algorithm (RFC 6555) for a faster and more
  reliable connection to the server.
* The Zadig driver installer was added to the MSYS2 offline installer.
* Fastino monitoring with Moninj is now supported.
* Qt6 support.
* Python 3.12 support.

ARTIQ-8
-------

@@ -4,4 +4,4 @@ def get_rev():
    return os.getenv("VERSIONEER_REV", default="unknown")

def get_version():
    return os.getenv("VERSIONEER_OVERRIDE", default="9.0+unknown.beta")
    return os.getenv("VERSIONEER_OVERRIDE", default="8.0+unknown.beta")
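
The two helpers above read the version and revision from environment variables each time they are called. A quick usage sketch (the ``artiq._version`` module path is an assumption based on the function names, not something this compare shows):

```python
# Illustrative only: override the reported version/revision via the environment.
import os

os.environ["VERSIONEER_OVERRIDE"] = "9.0.dev"
os.environ["VERSIONEER_REV"] = "dd7ac81b28"

from artiq import _version  # assumed location of the file shown above
print(_version.get_version())  # "9.0.dev"
print(_version.get_rev())      # "dd7ac81b28"
```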
@ -0,0 +1,557 @@
|
|||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright (c) 2005-2010 ActiveState Software Inc.
|
||||
# Copyright (c) 2013 Eddy Petrișor
|
||||
|
||||
"""Utilities for determining application-specific dirs.
|
||||
|
||||
See <http://github.com/ActiveState/appdirs> for details and usage.
|
||||
"""
|
||||
# Dev Notes:
|
||||
# - MSDN on where to store app data files:
|
||||
# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
|
||||
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
|
||||
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
|
||||
|
||||
__version_info__ = (1, 4, 1)
|
||||
__version__ = '.'.join(map(str, __version_info__))
|
||||
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
PY3 = sys.version_info[0] == 3
|
||||
|
||||
if PY3:
|
||||
unicode = str
|
||||
|
||||
if sys.platform.startswith('java'):
|
||||
import platform
|
||||
os_name = platform.java_ver()[3][0]
|
||||
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
|
||||
system = 'win32'
|
||||
elif os_name.startswith('Mac'): # "Mac OS X", etc.
|
||||
system = 'darwin'
|
||||
else: # "Linux", "SunOS", "FreeBSD", etc.
|
||||
# Setting this to "linux2" is not ideal, but only Windows or Mac
|
||||
# are actually checked for and the rest of the module expects
|
||||
# *sys.platform* style strings.
|
||||
system = 'linux2'
|
||||
else:
|
||||
system = sys.platform
|
||||
|
||||
|
||||
|
||||
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||
r"""Return full path to the user-specific data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"roaming" (boolean, default False) can be set True to use the Windows
|
||||
roaming appdata directory. That means that for users on a Windows
|
||||
network setup for roaming profiles, this user data will be
|
||||
sync'd on login. See
|
||||
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||
for a discussion of issues.
|
||||
|
||||
Typical user data directories are:
|
||||
Mac OS X: ~/Library/Application Support/<AppName>
|
||||
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
|
||||
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
|
||||
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
|
||||
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
|
||||
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
|
||||
|
||||
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
|
||||
That means, by default "~/.local/share/<AppName>".
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
|
||||
path = os.path.normpath(_get_win_folder(const))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('~/Library/Application Support/')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
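
A quick illustration of the function above (not part of the diff; the output is platform-dependent, and "MyApp"/"MyCompany" are example values, as also used by the self-test at the end of this file):

```python
# On Linux with XDG defaults this prints something like ~/.local/share/MyApp/1.0;
# on Windows, the appauthor ("MyCompany") becomes part of the path as documented above.
print(user_data_dir("MyApp", "MyCompany", version="1.0"))
```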
|
||||
|
||||
|
||||
def site_data_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||
"""Return full path to the user-shared data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"multipath" is an optional parameter only applicable to *nix
|
||||
which indicates that the entire list of data dirs should be
|
||||
returned. By default, the first item from XDG_DATA_DIRS is
|
||||
returned, or '/usr/local/share/<AppName>',
|
||||
if XDG_DATA_DIRS is not set
|
||||
|
||||
Typical user data directories are:
|
||||
Mac OS X: /Library/Application Support/<AppName>
|
||||
Unix: /usr/local/share/<AppName> or /usr/share/<AppName>
|
||||
Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
|
||||
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||
Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7.
|
||||
|
||||
For Unix, this is using the $XDG_DATA_DIRS[0] default.
|
||||
|
||||
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('/Library/Application Support')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
# XDG default for $XDG_DATA_DIRS
|
||||
# only first, if multipath is False
|
||||
path = os.getenv('XDG_DATA_DIRS',
|
||||
os.pathsep.join(['/usr/local/share', '/usr/share']))
|
||||
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||
if appname:
|
||||
if version:
|
||||
appname = os.path.join(appname, version)
|
||||
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||
|
||||
if multipath:
|
||||
path = os.pathsep.join(pathlist)
|
||||
else:
|
||||
path = pathlist[0]
|
||||
return path
|
||||
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||
r"""Return full path to the user-specific config dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"roaming" (boolean, default False) can be set True to use the Windows
|
||||
roaming appdata directory. That means that for users on a Windows
|
||||
network setup for roaming profiles, this user data will be
|
||||
sync'd on login. See
|
||||
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||
for a discussion of issues.
|
||||
|
||||
Typical user data directories are:
|
||||
Mac OS X: same as user_data_dir
|
||||
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
|
||||
Win *: same as user_data_dir
|
||||
|
||||
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
|
||||
That means, by default "~/.config/<AppName>".
|
||||
"""
|
||||
if system in ["win32", "darwin"]:
|
||||
path = user_data_dir(appname, appauthor, None, roaming)
|
||||
else:
|
||||
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||
"""Return full path to the user-shared data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"multipath" is an optional parameter only applicable to *nix
|
||||
which indicates that the entire list of config dirs should be
|
||||
returned. By default, the first item from XDG_CONFIG_DIRS is
|
||||
returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set
|
||||
|
||||
Typical user data directories are:
|
||||
Mac OS X: same as site_data_dir
|
||||
Unix: /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
|
||||
$XDG_CONFIG_DIRS
|
||||
Win *: same as site_data_dir
|
||||
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||
|
||||
For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False
|
||||
|
||||
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||
"""
|
||||
if system in ["win32", "darwin"]:
|
||||
path = site_data_dir(appname, appauthor)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
else:
|
||||
# XDG default for $XDG_CONFIG_DIRS
|
||||
# only first, if multipath is False
|
||||
path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
|
||||
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||
if appname:
|
||||
if version:
|
||||
appname = os.path.join(appname, version)
|
||||
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||
|
||||
if multipath:
|
||||
path = os.pathsep.join(pathlist)
|
||||
else:
|
||||
path = pathlist[0]
|
||||
return path
|
||||
|
||||
|
||||
def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||
r"""Return full path to the user-specific cache dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"opinion" (boolean) can be False to disable the appending of
|
||||
"Cache" to the base app data dir for Windows. See
|
||||
discussion below.
|
||||
|
||||
Typical user cache directories are:
|
||||
Mac OS X: ~/Library/Caches/<AppName>
|
||||
Unix: ~/.cache/<AppName> (XDG default)
|
||||
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
|
||||
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache
|
||||
|
||||
On Windows the only suggestion in the MSDN docs is that local settings go in
|
||||
the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming
|
||||
app data dir (the default returned by `user_data_dir` above). Apps typically
|
||||
put cache data somewhere *under* the given dir here. Some examples:
|
||||
...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
|
||||
...\Acme\SuperApp\Cache\1.0
|
||||
OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
|
||||
This can be disabled with the `opinion=False` option.
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
if opinion:
|
||||
path = os.path.join(path, "Cache")
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('~/Library/Caches')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||
r"""Return full path to the user-specific log dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"opinion" (boolean) can be False to disable the appending of
|
||||
"Logs" to the base app data dir for Windows, and "log" to the
|
||||
base cache dir for Unix. See discussion below.
|
||||
|
||||
Typical user cache directories are:
|
||||
Mac OS X: ~/Library/Logs/<AppName>
|
||||
Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if defined
|
||||
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
|
||||
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs
|
||||
|
||||
On Windows the only suggestion in the MSDN docs is that local settings
|
||||
go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
|
||||
examples of what some windows apps use for a logs dir.)
|
||||
|
||||
OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
|
||||
value for Windows and appends "log" to the user cache dir for Unix.
|
||||
This can be disabled with the `opinion=False` option.
|
||||
"""
|
||||
if system == "darwin":
|
||||
path = os.path.join(
|
||||
os.path.expanduser('~/Library/Logs'),
|
||||
appname)
|
||||
elif system == "win32":
|
||||
path = user_data_dir(appname, appauthor, version)
|
||||
version = False
|
||||
if opinion:
|
||||
path = os.path.join(path, "Logs")
|
||||
else:
|
||||
path = user_cache_dir(appname, appauthor, version)
|
||||
version = False
|
||||
if opinion:
|
||||
path = os.path.join(path, "log")
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
class AppDirs(object):
|
||||
"""Convenience wrapper for getting application dirs."""
|
||||
def __init__(self, appname, appauthor=None, version=None, roaming=False,
|
||||
multipath=False):
|
||||
self.appname = appname
|
||||
self.appauthor = appauthor
|
||||
self.version = version
|
||||
self.roaming = roaming
|
||||
self.multipath = multipath
|
||||
|
||||
@property
|
||||
def user_data_dir(self):
|
||||
return user_data_dir(self.appname, self.appauthor,
|
||||
version=self.version, roaming=self.roaming)
|
||||
|
||||
@property
|
||||
def site_data_dir(self):
|
||||
return site_data_dir(self.appname, self.appauthor,
|
||||
version=self.version, multipath=self.multipath)
|
||||
|
||||
@property
|
||||
def user_config_dir(self):
|
||||
return user_config_dir(self.appname, self.appauthor,
|
||||
version=self.version, roaming=self.roaming)
|
||||
|
||||
@property
|
||||
def site_config_dir(self):
|
||||
return site_config_dir(self.appname, self.appauthor,
|
||||
version=self.version, multipath=self.multipath)
|
||||
|
||||
@property
|
||||
def user_cache_dir(self):
|
||||
return user_cache_dir(self.appname, self.appauthor,
|
||||
version=self.version)
|
||||
|
||||
@property
|
||||
def user_log_dir(self):
|
||||
return user_log_dir(self.appname, self.appauthor,
|
||||
version=self.version)
|
||||
|
||||
|
||||
#---- internal support stuff
|
||||
|
||||
def _get_win_folder_from_registry(csidl_name):
|
||||
"""This is a fallback technique at best. I'm not sure if using the
|
||||
registry for this guarantees us the correct answer for all CSIDL_*
|
||||
names.
|
||||
"""
|
||||
if PY3:
|
||||
import winreg as _winreg
|
||||
else:
|
||||
import _winreg
|
||||
|
||||
shell_folder_name = {
|
||||
"CSIDL_APPDATA": "AppData",
|
||||
"CSIDL_COMMON_APPDATA": "Common AppData",
|
||||
"CSIDL_LOCAL_APPDATA": "Local AppData",
|
||||
}[csidl_name]
|
||||
|
||||
key = _winreg.OpenKey(
|
||||
_winreg.HKEY_CURRENT_USER,
|
||||
r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
|
||||
)
|
||||
dir, type = _winreg.QueryValueEx(key, shell_folder_name)
|
||||
return dir
|
||||
|
||||
|
||||
def _get_win_folder_with_pywin32(csidl_name):
|
||||
from win32com.shell import shellcon, shell
|
||||
dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
|
||||
# Try to make this a unicode path because SHGetFolderPath does
|
||||
# not return unicode strings when there is unicode data in the
|
||||
# path.
|
||||
try:
|
||||
dir = unicode(dir)
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in dir:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
try:
|
||||
import win32api
|
||||
dir = win32api.GetShortPathName(dir)
|
||||
except ImportError:
|
||||
pass
|
||||
except UnicodeError:
|
||||
pass
|
||||
return dir
|
||||
|
||||
|
||||
def _get_win_folder_with_ctypes(csidl_name):
|
||||
import ctypes
|
||||
|
||||
csidl_const = {
|
||||
"CSIDL_APPDATA": 26,
|
||||
"CSIDL_COMMON_APPDATA": 35,
|
||||
"CSIDL_LOCAL_APPDATA": 28,
|
||||
}[csidl_name]
|
||||
|
||||
buf = ctypes.create_unicode_buffer(1024)
|
||||
ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in buf:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
buf2 = ctypes.create_unicode_buffer(1024)
|
||||
if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
|
||||
buf = buf2
|
||||
|
||||
return buf.value
|
||||
|
||||
def _get_win_folder_with_jna(csidl_name):
|
||||
import array
|
||||
from com.sun import jna
|
||||
from com.sun.jna.platform import win32
|
||||
|
||||
buf_size = win32.WinDef.MAX_PATH * 2
|
||||
buf = array.zeros('c', buf_size)
|
||||
shell = win32.Shell32.INSTANCE
|
||||
shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
|
||||
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in dir:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
buf = array.zeros('c', buf_size)
|
||||
kernel = win32.Kernel32.INSTANCE
|
||||
if kernel.GetShortPathName(dir, buf, buf_size):
|
||||
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||
|
||||
return dir
|
||||
|
||||
if system == "win32":
|
||||
try:
|
||||
import win32com.shell
|
||||
_get_win_folder = _get_win_folder_with_pywin32
|
||||
except ImportError:
|
||||
try:
|
||||
from ctypes import windll
|
||||
_get_win_folder = _get_win_folder_with_ctypes
|
||||
except ImportError:
|
||||
try:
|
||||
import com.sun.jna
|
||||
_get_win_folder = _get_win_folder_with_jna
|
||||
except ImportError:
|
||||
_get_win_folder = _get_win_folder_from_registry
|
||||
|
||||
|
||||
#---- self test code
|
||||
|
||||
if __name__ == "__main__":
|
||||
appname = "MyApp"
|
||||
appauthor = "MyCompany"
|
||||
|
||||
props = ("user_data_dir", "site_data_dir",
|
||||
"user_config_dir", "site_config_dir",
|
||||
"user_cache_dir", "user_log_dir")
|
||||
|
||||
print("-- app dirs %s --" % __version__)
|
||||
|
||||
print("-- app dirs (with optional 'version')")
|
||||
dirs = AppDirs(appname, appauthor, version="1.0")
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (without optional 'version')")
|
||||
dirs = AppDirs(appname, appauthor)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (without optional 'appauthor')")
|
||||
dirs = AppDirs(appname)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (with disabled 'appauthor')")
|
||||
dirs = AppDirs(appname, appauthor=False)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
|

@@ -1,6 +1,6 @@
#!/usr/bin/env python3

from PyQt6 import QtWidgets, QtCore, QtGui
from PyQt5 import QtWidgets, QtCore, QtGui
from artiq.applets.simple import SimpleApplet
from artiq.tools import scale_from_metadata
from artiq.gui.tools import LayoutWidget
@ -17,7 +17,7 @@ class QCancellableLineEdit(QtWidgets.QLineEdit):
|
|||
editCancelled = QtCore.pyqtSignal()
|
||||
|
||||
def keyPressEvent(self, event):
|
||||
if event.key() == QtCore.Qt.Key.Key_Escape:
|
||||
if event.key() == QtCore.Qt.Key_Escape:
|
||||
self.editCancelled.emit()
|
||||
else:
|
||||
super().keyPressEvent(event)
|
||||
|
@ -34,7 +34,7 @@ class NumberWidget(LayoutWidget):
|
|||
self.addWidget(self.number_area, 0, 0)
|
||||
|
||||
self.unit_area = QtWidgets.QLabel()
|
||||
self.unit_area.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight | QtCore.Qt.AlignmentFlag.AlignTop)
|
||||
self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop)
|
||||
self.addWidget(self.unit_area, 0, 1)
|
||||
|
||||
self.lcd_widget = QResponsiveLCDNumber()
|
||||
|
@ -44,7 +44,7 @@ class NumberWidget(LayoutWidget):
|
|||
|
||||
self.edit_widget = QCancellableLineEdit()
|
||||
self.edit_widget.setValidator(QtGui.QDoubleValidator())
|
||||
self.edit_widget.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight | QtCore.Qt.AlignmentFlag.AlignVCenter)
|
||||
self.edit_widget.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter)
|
||||
self.edit_widget.editCancelled.connect(self.cancel_edit)
|
||||
self.edit_widget.returnPressed.connect(self.confirm_edit)
|
||||
self.number_area.addWidget(self.edit_widget)
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
import PyQt6 # make sure pyqtgraph imports Qt6
|
||||
import PyQt5 # make sure pyqtgraph imports Qt5
|
||||
import pyqtgraph
|
||||
|
||||
from artiq.applets.simple import SimpleApplet
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
import PyQt6 # make sure pyqtgraph imports Qt6
|
||||
from PyQt6.QtCore import QTimer
|
||||
import PyQt5 # make sure pyqtgraph imports Qt5
|
||||
from PyQt5.QtCore import QTimer
|
||||
import pyqtgraph
|
||||
|
||||
from artiq.applets.simple import TitleApplet
|
||||
|
|
|
@ -1,8 +1,8 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
import numpy as np
|
||||
import PyQt6 # make sure pyqtgraph imports Qt6
|
||||
from PyQt6.QtCore import QTimer
|
||||
import PyQt5 # make sure pyqtgraph imports Qt5
|
||||
from PyQt5.QtCore import QTimer
|
||||
import pyqtgraph
|
||||
|
||||
from artiq.applets.simple import TitleApplet
|
||||
|
|
|
@ -1,8 +1,8 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
import numpy as np
|
||||
from PyQt6 import QtWidgets
|
||||
from PyQt6.QtCore import QTimer
|
||||
from PyQt5 import QtWidgets
|
||||
from PyQt5.QtCore import QTimer
|
||||
import pyqtgraph
|
||||
|
||||
from artiq.applets.simple import SimpleApplet
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
from PyQt6 import QtWidgets
|
||||
from PyQt5 import QtWidgets
|
||||
|
||||
from artiq.applets.simple import SimpleApplet
|
||||
|
||||
|
|
|
@ -137,8 +137,9 @@ class AppletIPCClient(AsyncioChildComm):
|
|||
logger.error("unexpected action reply to embed request: %s",
|
||||
reply["action"])
|
||||
self.close_cb()
|
||||
else:
|
||||
return reply["size_w"], reply["size_h"]
|
||||
|
||||
def fix_initial_size(self):
|
||||
self.write_pyon({"action": "fix_initial_size"})
|
||||
|
||||
async def listen(self):
|
||||
data = None
|
||||
|
@ -272,7 +273,7 @@ class SimpleApplet:
|
|||
# HACK: if the window has a frame, there will be garbage
|
||||
# (usually white) displayed at its right and bottom borders
|
||||
# after it is embedded.
|
||||
self.main_widget.setWindowFlags(QtCore.Qt.WindowType.FramelessWindowHint)
|
||||
self.main_widget.setWindowFlags(QtCore.Qt.FramelessWindowHint)
|
||||
self.main_widget.show()
|
||||
win_id = int(self.main_widget.winId())
|
||||
self.loop.run_until_complete(self.ipc.embed(win_id))
|
||||
|

@@ -285,13 +286,12 @@
            # 2. applet creates native window without showing it, and
            #    gets its ID
            # 3. applet sends the ID to host, host embeds the widget
            #    and returns embedded size
            # 4. applet is resized to that given size
            # 5. applet shows the widget
            # 4. applet shows the widget
            # 5. parent resizes the widget
            win_id = int(self.main_widget.winId())
            size_w, size_h = self.loop.run_until_complete(self.ipc.embed(win_id))
            self.main_widget.resize(size_w, size_h)
            self.loop.run_until_complete(self.ipc.embed(win_id))
            self.main_widget.show()
            self.ipc.fix_initial_size()
        else:
            self.main_widget.show()
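
For orientation only (not part of this diff): an applet module typically drives ``SimpleApplet`` as sketched below. ``DemoWidget`` and the dataset name are placeholders, and the exact ``data_changed`` callback signature differs between ARTIQ versions, so treat this as an assumption-laden sketch rather than the project's reference example.

```python
#!/usr/bin/env python3
# Hypothetical applet; DemoWidget and "value" are placeholders, not from the diff.
from PyQt6 import QtWidgets  # or PyQt5, depending on which side of this compare is used

from artiq.applets.simple import SimpleApplet


class DemoWidget(QtWidgets.QLCDNumber):
    def data_changed(self, value, metadata, persist, mods):
        # Called by SimpleApplet when the subscribed dataset changes.
        # (Callback signature assumed; it differs between ARTIQ versions.)
        pass


def main():
    applet = SimpleApplet(DemoWidget)
    applet.add_dataset("value", "dataset to be displayed")
    applet.run()


if __name__ == "__main__":
    main()
```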
@ -1,12 +1,12 @@
|
|||
import logging
|
||||
import asyncio
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
from sipyco.pc_rpc import AsyncioClient as RPCClient
|
||||
|
||||
from artiq.tools import short_format
|
||||
from artiq.gui.tools import LayoutWidget
|
||||
from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
|
||||
from artiq.gui.models import DictSyncTreeSepModel
|
||||
|
||||
# reduced read-only version of artiq.dashboard.datasets
|
||||
|
@ -62,8 +62,8 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
def __init__(self, dataset_sub, dataset_ctl):
|
||||
QtWidgets.QDockWidget.__init__(self, "Datasets")
|
||||
self.setObjectName("Datasets")
|
||||
self.setFeatures(self.DockWidgetFeature.DockWidgetMovable |
|
||||
self.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
grid = LayoutWidget()
|
||||
self.setWidget(grid)
|
||||
|
@ -74,9 +74,9 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
grid.addWidget(self.search, 0, 0)
|
||||
|
||||
self.table = QtWidgets.QTreeView()
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
|
||||
self.table.setSelectionMode(
|
||||
QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
|
||||
QtWidgets.QAbstractItemView.SingleSelection)
|
||||
grid.addWidget(self.table, 1, 0)
|
||||
|
||||
metadata_grid = LayoutWidget()
|
||||
|
@ -85,13 +85,13 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
"rid start_time".split()):
|
||||
metadata_grid.addWidget(QtWidgets.QLabel(label), i, 0)
|
||||
v = QtWidgets.QLabel()
|
||||
v.setTextInteractionFlags(QtCore.Qt.TextInteractionFlag.TextSelectableByMouse)
|
||||
v.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse)
|
||||
metadata_grid.addWidget(v, i, 1)
|
||||
self.metadata[label] = v
|
||||
grid.addWidget(metadata_grid, 2, 0)
|
||||
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
upload_action = QtGui.QAction("Upload dataset to master",
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
upload_action = QtWidgets.QAction("Upload dataset to master",
|
||||
self.table)
|
||||
upload_action.triggered.connect(self.upload_clicked)
|
||||
self.table.addAction(upload_action)
|
||||
|
@ -112,8 +112,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
|
||||
def set_model(self, model):
|
||||
self.table_model = model
|
||||
self.table_model_filter = QtCore.QSortFilterProxyModel()
|
||||
self.table_model_filter.setRecursiveFilteringEnabled(True)
|
||||
self.table_model_filter = QRecursiveFilterProxyModel()
|
||||
self.table_model_filter.setSourceModel(self.table_model)
|
||||
self.table.setModel(self.table_model_filter)
|
||||
|
||||
|
|
|
@ -4,7 +4,7 @@ import os
|
|||
from functools import partial
|
||||
from collections import OrderedDict
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
import h5py
|
||||
|
||||
from sipyco import pyon
|
||||
|
@ -33,13 +33,13 @@ class _ArgumentEditor(EntryTreeWidget):
|
|||
recompute_arguments = QtWidgets.QPushButton("Recompute all arguments")
|
||||
recompute_arguments.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
|
||||
QtWidgets.QStyle.SP_BrowserReload))
|
||||
recompute_arguments.clicked.connect(self._recompute_arguments_clicked)
|
||||
|
||||
load = QtWidgets.QPushButton("Set arguments from HDF5")
|
||||
load.setToolTip("Set arguments from currently selected HDF5 file")
|
||||
load.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogApplyButton))
|
||||
QtWidgets.QStyle.SP_DialogApplyButton))
|
||||
load.clicked.connect(self._load_clicked)
|
||||
|
||||
buttons = LayoutWidget()
|
||||
|
@ -86,7 +86,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
|||
self.resize(100*qfm.averageCharWidth(), 30*qfm.lineSpacing())
|
||||
self.setWindowTitle(expurl)
|
||||
self.setWindowIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogContentsView))
|
||||
QtWidgets.QStyle.SP_FileDialogContentsView))
|
||||
self.setAcceptDrops(True)
|
||||
|
||||
self.layout = QtWidgets.QGridLayout()
|
||||
|
@ -126,22 +126,22 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
|||
|
||||
run = QtWidgets.QPushButton("Analyze")
|
||||
run.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
||||
QtWidgets.QStyle.SP_DialogOkButton))
|
||||
run.setToolTip("Run analysis stage (Ctrl+Return)")
|
||||
run.setShortcut("CTRL+RETURN")
|
||||
run.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
||||
run.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding)
|
||||
self.layout.addWidget(run, 2, 4)
|
||||
run.clicked.connect(self._run_clicked)
|
||||
self._run = run
|
||||
|
||||
terminate = QtWidgets.QPushButton("Terminate")
|
||||
terminate.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
|
||||
QtWidgets.QStyle.SP_DialogCancelButton))
|
||||
terminate.setToolTip("Terminate analysis (Ctrl+Backspace)")
|
||||
terminate.setShortcut("CTRL+BACKSPACE")
|
||||
terminate.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
||||
terminate.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding)
|
||||
self.layout.addWidget(terminate, 3, 4)
|
||||
terminate.clicked.connect(self._terminate_clicked)
|
||||
terminate.setEnabled(False)
|
||||
|
@ -316,7 +316,7 @@ class ExperimentsArea(QtWidgets.QMdiArea):
|
|||
asyncio.ensure_future(sub.load_hdf5_task(path))
|
||||
|
||||
def mousePressEvent(self, ev):
|
||||
if ev.button() == QtCore.Qt.MouseButton.LeftButton:
|
||||
if ev.button() == QtCore.Qt.LeftButton:
|
||||
self.select_experiment()
|
||||
|
||||
def paintEvent(self, event):
|
||||
|
@ -406,7 +406,7 @@ class ExperimentsArea(QtWidgets.QMdiArea):
|
|||
exc_info=True)
|
||||
dock = _ExperimentDock(self, expurl, {})
|
||||
asyncio.ensure_future(dock._recompute_arguments())
|
||||
dock.setAttribute(QtCore.Qt.WidgetAttribute.WA_DeleteOnClose)
|
||||
dock.setAttribute(QtCore.Qt.WA_DeleteOnClose)
|
||||
self.addSubWindow(dock)
|
||||
dock.show()
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, dock))
|
||||
|
|
|
@ -3,7 +3,7 @@ import os
|
|||
from datetime import datetime
|
||||
|
||||
import h5py
|
||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||
|
||||
from sipyco import pyon
|
||||
|
||||
|
@ -69,15 +69,15 @@ class ZoomIconView(QtWidgets.QListView):
|
|||
def __init__(self):
|
||||
QtWidgets.QListView.__init__(self)
|
||||
self._char_width = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
||||
self.setViewMode(self.ViewMode.IconMode)
|
||||
self.setViewMode(self.IconMode)
|
||||
w = self._char_width*self.default_size
|
||||
self.setIconSize(QtCore.QSize(w, int(w*self.aspect)))
|
||||
self.setFlow(self.Flow.LeftToRight)
|
||||
self.setResizeMode(self.ResizeMode.Adjust)
|
||||
self.setFlow(self.LeftToRight)
|
||||
self.setResizeMode(self.Adjust)
|
||||
self.setWrapping(True)
|
||||
|
||||
def wheelEvent(self, ev):
|
||||
if ev.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
|
||||
if ev.modifiers() & QtCore.Qt.ControlModifier:
|
||||
a = self._char_width*self.min_size
|
||||
b = self._char_width*self.max_size
|
||||
w = self.iconSize().width()*self.zoom_step**(
|
||||
|
@ -88,16 +88,16 @@ class ZoomIconView(QtWidgets.QListView):
|
|||
QtWidgets.QListView.wheelEvent(self, ev)
|
||||
|
||||
|
||||
class Hdf5FileSystemModel(QtGui.QFileSystemModel):
|
||||
class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
|
||||
def __init__(self):
|
||||
QtGui.QFileSystemModel.__init__(self)
|
||||
self.setFilter(QtCore.QDir.Filter.Drives | QtCore.QDir.Filter.NoDotAndDotDot |
|
||||
QtCore.QDir.Filter.AllDirs | QtCore.QDir.Filter.Files)
|
||||
QtWidgets.QFileSystemModel.__init__(self)
|
||||
self.setFilter(QtCore.QDir.Drives | QtCore.QDir.NoDotAndDotDot |
|
||||
QtCore.QDir.AllDirs | QtCore.QDir.Files)
|
||||
self.setNameFilterDisables(False)
|
||||
self.setIconProvider(ThumbnailIconProvider())
|
||||
|
||||
def data(self, idx, role):
|
||||
if role == QtCore.Qt.ItemDataRole.ToolTipRole:
|
||||
if role == QtCore.Qt.ToolTipRole:
|
||||
info = self.fileInfo(idx)
|
||||
h5 = open_h5(info)
|
||||
if h5 is not None:
|
||||
|
@ -114,7 +114,7 @@ class Hdf5FileSystemModel(QtGui.QFileSystemModel):
|
|||
except:
|
||||
logger.warning("unable to read metadata from %s",
|
||||
info.filePath(), exc_info=True)
|
||||
return QtGui.QFileSystemModel.data(self, idx, role)
|
||||
return QtWidgets.QFileSystemModel.data(self, idx, role)
|
||||
|
||||
|
||||
class FilesDock(QtWidgets.QDockWidget):
|
||||
|
@ -125,7 +125,7 @@ class FilesDock(QtWidgets.QDockWidget):
|
|||
def __init__(self, datasets, browse_root=""):
|
||||
QtWidgets.QDockWidget.__init__(self, "Files")
|
||||
self.setObjectName("Files")
|
||||
self.setFeatures(self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(self.DockWidgetMovable | self.DockWidgetFloatable)
|
||||
|
||||
self.splitter = QtWidgets.QSplitter()
|
||||
self.setWidget(self.splitter)
|
||||
|
@ -147,8 +147,8 @@ class FilesDock(QtWidgets.QDockWidget):
|
|||
self.rt.setRootIndex(rt_model.mapFromSource(
|
||||
self.model.setRootPath(browse_root)))
|
||||
self.rt.setHeaderHidden(True)
|
||||
self.rt.setSelectionBehavior(self.rt.SelectionBehavior.SelectRows)
|
||||
self.rt.setSelectionMode(self.rt.SelectionMode.SingleSelection)
|
||||
self.rt.setSelectionBehavior(self.rt.SelectRows)
|
||||
self.rt.setSelectionMode(self.rt.SingleSelection)
|
||||
self.rt.selectionModel().currentChanged.connect(
|
||||
self.tree_current_changed)
|
||||
self.rt.setRootIsDecorated(False)
|
||||
|
@ -252,7 +252,7 @@ class FilesDock(QtWidgets.QDockWidget):
|
|||
100,
|
||||
lambda: self.rt.scrollTo(
|
||||
self.rt.model().mapFromSource(self.model.index(path)),
|
||||
self.rt.ScrollHint.PositionAtCenter)
|
||||
self.rt.PositionAtCenter)
|
||||
)
|
||||
self.model.directoryLoaded.connect(scroll_when_loaded)
|
||||
idx = self.rt.model().mapFromSource(idx)
|
||||
|
|
|

@@ -88,34 +88,18 @@ class EmbeddingMap:
        self.subkernel_message_map[msg_type.name] = msg_id
        self.object_reverse_map[obj_id] = msg_id

        # Keep this list of exceptions in sync with `EXCEPTION_ID_LOOKUP` in `artiq::firmware::ksupport::eh_artiq`
        # The exceptions declared here must be defined in `artiq.coredevice.exceptions`
        # Verify synchronization by running the test cases in `artiq.test.coredevice.test_exceptions`
        self.preallocate_runtime_exception_names([
            "RTIOUnderflow",
            "RTIOOverflow",
            "RTIODestinationUnreachable",
            "DMAError",
            "I2CError",
            "CacheError",
            "SPIError",
            "SubkernelError",

            "0:AssertionError",
            "0:AttributeError",
            "0:IndexError",
            "0:IOError",
            "0:KeyError",
            "0:NotImplementedError",
            "0:OverflowError",
            "0:RuntimeError",
            "0:TimeoutError",
            "0:TypeError",
            "0:ValueError",
            "0:ZeroDivisionError",
            "0:LinAlgError",
            "UnwrapNoneError",
        ])
        self.preallocate_runtime_exception_names(["RuntimeError",
                                                  "RTIOUnderflow",
                                                  "RTIOOverflow",
                                                  "RTIODestinationUnreachable",
                                                  "DMAError",
                                                  "I2CError",
                                                  "CacheError",
                                                  "SPIError",
                                                  "0:ZeroDivisionError",
                                                  "0:IndexError",
                                                  "UnwrapNoneError",
                                                  "SubkernelError"])

    def preallocate_runtime_exception_names(self, names):
        for i, name in enumerate(names):
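
The truncated context at the end of this hunk (``for i, name in enumerate(names):``) suggests the helper assigns IDs in list order. A minimal sketch of that idea (illustrative only; the real method records the names through the embedding map, and its starting offset is not shown here):

```python
# Sketch: names earlier in the list receive lower preallocated exception IDs.
def preallocate(names):
    return {name: i for i, name in enumerate(names)}

print(preallocate(["RTIOUnderflow", "RTIOOverflow"]))
# {'RTIOUnderflow': 0, 'RTIOOverflow': 1}
```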
@ -1047,42 +1047,6 @@ class Builtin(Instruction):
|
|||
def opcode(self):
|
||||
return "builtin({})".format(self.op)
|
||||
|
||||
class BuiltinInvoke(Terminator):
|
||||
"""
|
||||
A builtin operation which can raise exceptions.
|
||||
|
||||
:ivar op: (string) operation name
|
||||
"""
|
||||
|
||||
"""
|
||||
:param op: (string) operation name
|
||||
:param normal: (:class:`BasicBlock`) normal target
|
||||
:param exn: (:class:`BasicBlock`) exceptional target
|
||||
"""
|
||||
def __init__(self, op, operands, typ, normal, exn, name=None):
|
||||
assert isinstance(op, str)
|
||||
for operand in operands: assert isinstance(operand, Value)
|
||||
assert isinstance(normal, BasicBlock)
|
||||
assert isinstance(exn, BasicBlock)
|
||||
if name is None:
|
||||
name = "BLTINV.{}".format(op)
|
||||
super().__init__(operands + [normal, exn], typ, name)
|
||||
self.op = op
|
||||
|
||||
def copy(self, mapper):
|
||||
self_copy = super().copy(mapper)
|
||||
self_copy.op = self.op
|
||||
return self_copy
|
||||
|
||||
def normal_target(self):
|
||||
return self.operands[-2]
|
||||
|
||||
def exception_target(self):
|
||||
return self.operands[-1]
|
||||
|
||||
def opcode(self):
|
||||
return "builtinInvokable({})".format(self.op)
|
||||
|
||||
class Closure(Instruction):
|
||||
"""
|
||||
A closure creation operation.
|
||||
|
|
|
@ -2544,22 +2544,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
|
|||
fn = types.get_method_function(fn)
|
||||
sid = ir.Constant(fn.sid, builtins.TInt32())
|
||||
if not builtins.is_none(fn.ret):
|
||||
if self.unwind_target is None:
|
||||
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
|
||||
else:
|
||||
after_invoke = self.add_block("invoke")
|
||||
ret = self.append(ir.BuiltinInvoke("subkernel_retrieve_return", [sid, timeout],
|
||||
fn.ret, after_invoke, self.unwind_target))
|
||||
self.current_block = after_invoke
|
||||
ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
|
||||
else:
|
||||
ret = ir.Constant(None, builtins.TNone())
|
||||
if self.unwind_target is None:
|
||||
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
|
||||
else:
|
||||
after_invoke = self.add_block("invoke")
|
||||
self.append(ir.BuiltinInvoke("subkernel_await_finish", [sid, timeout],
|
||||
builtins.TNone(), after_invoke, self.unwind_target))
|
||||
self.current_block = after_invoke
|
||||
self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
|
||||
return ret
|
||||
elif types.is_builtin(typ, "subkernel_preload"):
|
||||
if len(node.args) == 1 and len(node.keywords) == 0:
|
||||
|
@ -2606,14 +2594,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
|
|||
{"name": name, "recv": vartype, "send": msg.value_type},
|
||||
node.loc)
|
||||
self.engine.process(diag)
|
||||
if self.unwind_target is None:
|
||||
ret = self.append(ir.Builtin("subkernel_recv", [msg_id, timeout], vartype))
|
||||
else:
|
||||
after_invoke = self.add_block("invoke")
|
||||
ret = self.append(ir.BuiltinInvoke("subkernel_recv", [msg_id, timeout],
|
||||
vartype, after_invoke, self.unwind_target))
|
||||
self.current_block = after_invoke
|
||||
return ret
|
||||
return self.append(ir.Builtin("subkernel_recv", [msg_id, timeout], vartype))
|
||||
elif types.is_exn_constructor(typ):
|
||||
return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args])
|
||||
elif types.is_constructor(typ):
|
||||
|
@ -2965,7 +2946,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
|
|||
format_string += ")"
|
||||
elif builtins.is_exception(value.type):
|
||||
# message may not be an actual string...
|
||||
# so we cannot really print itInvoke
|
||||
# so we cannot really print it
|
||||
name = self.append(ir.GetAttr(value, "#__name__"))
|
||||
param1 = self.append(ir.GetAttr(value, "#__param0__"))
|
||||
param2 = self.append(ir.GetAttr(value, "#__param1__"))
|
||||
|
|
|
@ -1437,44 +1437,6 @@ class LLVMIRGenerator:
|
|||
else:
|
||||
assert False
|
||||
|
||||
def process_BuiltinInvoke(self, insn):
|
||||
llnormalblock = self.map(insn.normal_target())
|
||||
llunwindblock = self.map(insn.exception_target())
|
||||
if insn.op == "subkernel_retrieve_return":
|
||||
llsid = self.map(insn.operands[0])
|
||||
lltimeout = self.map(insn.operands[1])
|
||||
lltagptr = self._build_subkernel_tags([insn.type])
|
||||
llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
|
||||
self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
|
||||
[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
|
||||
llheadu, llunwindblock,
|
||||
name="subkernel.await.message")
|
||||
self.llbuilder.position_at_end(llheadu)
|
||||
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
|
||||
name="subkernel.arg.stack")
|
||||
return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
|
||||
elif insn.op == "subkernel_await_finish":
|
||||
llsid = self.map(insn.operands[0])
|
||||
lltimeout = self.map(insn.operands[1])
|
||||
return self.llbuilder.invoke(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
|
||||
llnormalblock, llunwindblock,
|
||||
name="subkernel.await.finish")
|
||||
elif insn.op == "subkernel_recv":
|
||||
llmsgid = self.map(insn.operands[0])
|
||||
lltimeout = self.map(insn.operands[1])
|
||||
lltagptr = self._build_subkernel_tags([insn.type])
|
||||
llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
|
||||
self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
|
||||
[llmsgid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
|
||||
llheadu, llunwindblock,
|
||||
name="subkernel.await.message")
|
||||
self.llbuilder.position_at_end(llheadu)
|
||||
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
|
||||
name="subkernel.arg.stack")
|
||||
return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
|
||||
else:
|
||||
assert False
|
||||
|
||||
def process_SubkernelAwaitArgs(self, insn):
|
||||
llmin = self.map(insn.operands[0])
|
||||
llmax = self.map(insn.operands[1])
|
||||
|
|
|
@ -3,7 +3,6 @@ import logging
|
|||
import traceback
|
||||
import numpy
|
||||
import socket
|
||||
import builtins
|
||||
from enum import Enum
|
||||
from fractions import Fraction
|
||||
from collections import namedtuple
|
||||
|

@@ -617,10 +616,9 @@ class CommKernel:
                self._write_int32(embedding_map.store_str(function))
            else:
                exn_type = type(exn)
                if exn_type in builtins.__dict__.values():
                    name = "0:{}".format(exn_type.__qualname__)
                elif hasattr(exn, "artiq_builtin"):
                    name = "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
                if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
                        hasattr(exn, "artiq_builtin"):
                    name = "0:{}".format(exn_type.__name__)
                else:
                    exn_id = embedding_map.store_object(exn_type)
                    name = "{}:{}.{}".format(exn_id,
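
As a side note (not part of the diff), the naming convention visible above can be summarized in a small standalone sketch: Python builtins and ARTIQ-declared exceptions travel with the pseudo-ID ``0`` and are resolved by name on the core device, while any other exception type is referenced through an ID allocated by the embedding map.

```python
# Simplified, illustrative sketch of the naming convention shown above;
# the real code allocates IDs through embedding_map.store_object().
import builtins

def exception_name(exn, exn_id=1):
    exn_type = type(exn)
    if exn_type in builtins.__dict__.values() or hasattr(exn, "artiq_builtin"):
        # builtin or ARTIQ-declared exception: ID 0, looked up by name
        return "0:{}".format(exn_type.__qualname__)
    # anything else: reference the type through an allocated object ID
    return "{}:{}.{}".format(exn_id, exn_type.__module__, exn_type.__qualname__)

print(exception_name(ZeroDivisionError()))  # -> "0:ZeroDivisionError"
```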
@ -1,5 +1,7 @@
from enum import Enum
import binascii
import logging
import io
import struct

from sipyco.keepalive import create_connection

@ -23,6 +25,8 @@ class Request(Enum):

    DebugAllocator = 8

    Flash = 9


class Reply(Enum):
    Success = 1

@ -46,15 +50,17 @@ class LogLevel(Enum):


class CommMgmt:
    def __init__(self, host, port=1380):
    def __init__(self, host, port=1380, drtio_dest=0):
        self.host = host
        self.port = port
        self.drtio_dest = drtio_dest

    def open(self):
        if hasattr(self, "socket"):
            return
        self.socket = create_connection(self.host, self.port)
        self.socket.sendall(b"ARTIQ management\n")
        self._write_int8(self.drtio_dest)
        endian = self._read(1)
        if endian == b"e":
            self.endian = "<"

@ -194,3 +200,20 @@ class CommMgmt:

    def debug_allocator(self):
        self._write_header(Request.DebugAllocator)

    def flash(self, bin_paths):
        self._write_header(Request.Flash)

        with io.BytesIO() as image_buf:
            for filename in bin_paths:
                with open(filename, "rb") as fi:
                    bin_ = fi.read()
                image_buf.write(struct.pack(self.endian + "I", len(bin_)))
                image_buf.write(bin_)

            crc = binascii.crc32(image_buf.getvalue())
            image_buf.write(struct.pack(self.endian + "I", crc))

            self._write_bytes(image_buf.getvalue())

        self._read_expect(Reply.RebootImminent)
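The new `flash()` request above frames its payload as a length-prefixed concatenation of binaries with a CRC32 appended at the end, then expects a reboot reply. A minimal hedged sketch of that framing follows; `parse_flash_image()` is illustrative only (it is not part of ARTIQ), it just shows how a receiver could unpack and validate such an image.

```python
# Hedged sketch of the flash image framing used by CommMgmt.flash() above:
# each binary is prefixed with its u32 length, and a CRC32 of everything
# written so far is appended at the end.
import binascii
import struct

def build_flash_image(binaries, endian="<"):
    body = b""
    for bin_ in binaries:
        body += struct.pack(endian + "I", len(bin_)) + bin_
    return body + struct.pack(endian + "I", binascii.crc32(body))

def parse_flash_image(image, endian="<"):
    body, (crc,) = image[:-4], struct.unpack(endian + "I", image[-4:])
    assert binascii.crc32(body) == crc, "corrupted flash image"
    binaries, offset = [], 0
    while offset < len(body):
        (length,) = struct.unpack_from(endian + "I", body, offset)
        offset += 4
        binaries.append(body[offset:offset + length])
        offset += length
    return binaries

image = build_flash_image([b"gateware", b"bootloader", b"firmware"])
assert parse_flash_image(image) == [b"gateware", b"bootloader", b"firmware"]
```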
@ -53,9 +53,6 @@ def rtio_get_destination_status(linkno: TInt32) -> TBool:
def rtio_get_counter() -> TInt64:
    raise NotImplementedError("syscall not simulated")

@syscall
def test_exception_id_sync(id: TInt32) -> TNone:
    raise NotImplementedError("syscall not simulated")

def get_target_cls(target):
    if target == "rv32g":
@ -2,31 +2,16 @@ import builtins
import linecache
import re
import os
from numpy.linalg import LinAlgError

from artiq import __artiq_dir__ as artiq_dir
from artiq.coredevice.runtime import source_loader

"""
This file provides class definition for all the exceptions declared in `EmbeddingMap` in `artiq.compiler.embedding`

For Python builtin exceptions, use the `builtins` module
For ARTIQ specific exceptions, inherit from `Exception` class
"""

AssertionError = builtins.AssertionError
AttributeError = builtins.AttributeError
IndexError = builtins.IndexError
IOError = builtins.IOError
KeyError = builtins.KeyError
NotImplementedError = builtins.NotImplementedError
OverflowError = builtins.OverflowError
RuntimeError = builtins.RuntimeError
TimeoutError = builtins.TimeoutError
TypeError = builtins.TypeError
ValueError = builtins.ValueError
ZeroDivisionError = builtins.ZeroDivisionError
OSError = builtins.OSError
ValueError = builtins.ValueError
IndexError = builtins.IndexError
RuntimeError = builtins.RuntimeError
AssertionError = builtins.AssertionError


class CoreException:

@ -172,18 +157,13 @@ class SubkernelError(Exception):

class ClockFailure(Exception):
    """Raised when RTIO PLL has lost lock."""
    artiq_builtin = True


class I2CError(Exception):
    """Raised when a I2C transaction fails."""
    artiq_builtin = True
    pass


class SPIError(Exception):
    """Raised when a SPI transaction fails."""
    artiq_builtin = True


class UnwrapNoneError(Exception):
    """Raised when unwrapping a none Option."""
    artiq_builtin = True
    pass
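The module docstring above requires every exception declared in the compiler's `EmbeddingMap` to resolve either to a Python builtin or to an ARTIQ-specific class defined here. A hedged sanity-check sketch of that requirement follows; the name list is an illustrative subset taken from the firmware table shown further down in this compare, not the authoritative mapping.

```python
# Hedged sketch: check that a set of exception names resolves to exception
# classes, either from this module or from builtins. The list below is only
# an illustrative subset of the firmware's EXCEPTION_ID_LOOKUP.
import builtins
import artiq.coredevice.exceptions as artiq_exn

names = ["RuntimeError", "RTIOUnderflow", "ZeroDivisionError",
         "IndexError", "UnwrapNoneError", "SubkernelError"]

for name in names:
    cls = getattr(artiq_exn, name, None) or getattr(builtins, name, None)
    assert cls is not None and issubclass(cls, BaseException), name
```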
@ -1,7 +1,7 @@
|
|||
import asyncio
|
||||
import logging
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
from artiq.gui import applets
|
||||
|
||||
|
@ -13,58 +13,58 @@ class AppletsCCBDock(applets.AppletsDock):
|
|||
def __init__(self, *args, **kwargs):
|
||||
applets.AppletsDock.__init__(self, *args, **kwargs)
|
||||
|
||||
sep = QtGui.QAction(self.table)
|
||||
sep = QtWidgets.QAction(self.table)
|
||||
sep.setSeparator(True)
|
||||
self.table.addAction(sep)
|
||||
|
||||
ccbp_group_menu = QtWidgets.QMenu(self.table)
|
||||
actiongroup = QtGui.QActionGroup(self.table)
|
||||
ccbp_group_menu = QtWidgets.QMenu()
|
||||
actiongroup = QtWidgets.QActionGroup(self.table)
|
||||
actiongroup.setExclusive(True)
|
||||
self.ccbp_group_none = QtGui.QAction("No policy", self.table)
|
||||
self.ccbp_group_none = QtWidgets.QAction("No policy", self.table)
|
||||
self.ccbp_group_none.setCheckable(True)
|
||||
self.ccbp_group_none.triggered.connect(lambda: self.set_ccbp(""))
|
||||
ccbp_group_menu.addAction(self.ccbp_group_none)
|
||||
actiongroup.addAction(self.ccbp_group_none)
|
||||
self.ccbp_group_ignore = QtGui.QAction("Ignore requests", self.table)
|
||||
self.ccbp_group_ignore = QtWidgets.QAction("Ignore requests", self.table)
|
||||
self.ccbp_group_ignore.setCheckable(True)
|
||||
self.ccbp_group_ignore.triggered.connect(lambda: self.set_ccbp("ignore"))
|
||||
ccbp_group_menu.addAction(self.ccbp_group_ignore)
|
||||
actiongroup.addAction(self.ccbp_group_ignore)
|
||||
self.ccbp_group_create = QtGui.QAction("Create applets", self.table)
|
||||
self.ccbp_group_create = QtWidgets.QAction("Create applets", self.table)
|
||||
self.ccbp_group_create.setCheckable(True)
|
||||
self.ccbp_group_create.triggered.connect(lambda: self.set_ccbp("create"))
|
||||
ccbp_group_menu.addAction(self.ccbp_group_create)
|
||||
actiongroup.addAction(self.ccbp_group_create)
|
||||
self.ccbp_group_enable = QtGui.QAction("Create and enable/disable applets",
|
||||
self.table)
|
||||
self.ccbp_group_enable = QtWidgets.QAction("Create and enable/disable applets",
|
||||
self.table)
|
||||
self.ccbp_group_enable.setCheckable(True)
|
||||
self.ccbp_group_enable.triggered.connect(lambda: self.set_ccbp("enable"))
|
||||
ccbp_group_menu.addAction(self.ccbp_group_enable)
|
||||
actiongroup.addAction(self.ccbp_group_enable)
|
||||
self.ccbp_group_action = QtGui.QAction("Group CCB policy", self.table)
|
||||
self.ccbp_group_action = QtWidgets.QAction("Group CCB policy", self.table)
|
||||
self.ccbp_group_action.setMenu(ccbp_group_menu)
|
||||
self.table.addAction(self.ccbp_group_action)
|
||||
self.table.itemSelectionChanged.connect(self.update_group_ccbp_menu)
|
||||
self.update_group_ccbp_menu()
|
||||
|
||||
ccbp_global_menu = QtWidgets.QMenu(self.table)
|
||||
actiongroup = QtGui.QActionGroup(self.table)
|
||||
ccbp_global_menu = QtWidgets.QMenu()
|
||||
actiongroup = QtWidgets.QActionGroup(self.table)
|
||||
actiongroup.setExclusive(True)
|
||||
self.ccbp_global_ignore = QtGui.QAction("Ignore requests", self.table)
|
||||
self.ccbp_global_ignore = QtWidgets.QAction("Ignore requests", self.table)
|
||||
self.ccbp_global_ignore.setCheckable(True)
|
||||
ccbp_global_menu.addAction(self.ccbp_global_ignore)
|
||||
actiongroup.addAction(self.ccbp_global_ignore)
|
||||
self.ccbp_global_create = QtGui.QAction("Create applets", self.table)
|
||||
self.ccbp_global_create = QtWidgets.QAction("Create applets", self.table)
|
||||
self.ccbp_global_create.setCheckable(True)
|
||||
self.ccbp_global_create.setChecked(True)
|
||||
ccbp_global_menu.addAction(self.ccbp_global_create)
|
||||
actiongroup.addAction(self.ccbp_global_create)
|
||||
self.ccbp_global_enable = QtGui.QAction("Create and enable/disable applets",
|
||||
self.table)
|
||||
self.ccbp_global_enable = QtWidgets.QAction("Create and enable/disable applets",
|
||||
self.table)
|
||||
self.ccbp_global_enable.setCheckable(True)
|
||||
ccbp_global_menu.addAction(self.ccbp_global_enable)
|
||||
actiongroup.addAction(self.ccbp_global_enable)
|
||||
ccbp_global_action = QtGui.QAction("Global CCB policy", self.table)
|
||||
ccbp_global_action = QtWidgets.QAction("Global CCB policy", self.table)
|
||||
ccbp_global_action.setMenu(ccbp_global_menu)
|
||||
self.table.addAction(ccbp_global_action)
|
||||
|
||||
|
@ -196,7 +196,7 @@ class AppletsCCBDock(applets.AppletsDock):
|
|||
logger.debug("Applet %s already exists and no update required", name)
|
||||
|
||||
if ccbp == "enable":
|
||||
applet.setCheckState(0, QtCore.Qt.CheckState.Checked)
|
||||
applet.setCheckState(0, QtCore.Qt.Checked)
|
||||
|
||||
def ccb_disable_applet(self, name, group=None):
|
||||
"""Disables an applet.
|
||||
|
@ -216,7 +216,7 @@ class AppletsCCBDock(applets.AppletsDock):
|
|||
return
|
||||
parent, applet = self.locate_applet(name, group, False)
|
||||
if applet is not None:
|
||||
applet.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
|
||||
applet.setCheckState(0, QtCore.Qt.Unchecked)
|
||||
|
||||
def ccb_disable_applet_group(self, group):
|
||||
"""Disables all the applets in a group.
|
||||
|
@ -246,7 +246,7 @@ class AppletsCCBDock(applets.AppletsDock):
|
|||
return
|
||||
else:
|
||||
wi = nwi
|
||||
wi.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
|
||||
wi.setCheckState(0, QtCore.Qt.Unchecked)
|
||||
|
||||
def ccb_notify(self, message):
|
||||
try:
|
||||
|
|
|
@ -2,11 +2,11 @@ import asyncio
|
|||
import logging
|
||||
|
||||
import numpy as np
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
from sipyco import pyon
|
||||
|
||||
from artiq.tools import scale_from_metadata, short_format, exc_to_warning
|
||||
from artiq.gui.tools import LayoutWidget
|
||||
from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
|
||||
from artiq.gui.models import DictSyncTreeSepModel
|
||||
|
||||
|
||||
|
@ -63,11 +63,11 @@ class CreateEditDialog(QtWidgets.QDialog):
|
|||
self.cancel = QtWidgets.QPushButton('&Cancel')
|
||||
self.buttons = QtWidgets.QDialogButtonBox(self)
|
||||
self.buttons.addButton(
|
||||
self.ok, QtWidgets.QDialogButtonBox.ButtonRole.AcceptRole)
|
||||
self.ok, QtWidgets.QDialogButtonBox.AcceptRole)
|
||||
self.buttons.addButton(
|
||||
self.cancel, QtWidgets.QDialogButtonBox.ButtonRole.RejectRole)
|
||||
self.cancel, QtWidgets.QDialogButtonBox.RejectRole)
|
||||
grid.setRowStretch(6, 1)
|
||||
grid.addWidget(self.buttons, 7, 0, 1, 3, alignment=QtCore.Qt.AlignmentFlag.AlignHCenter)
|
||||
grid.addWidget(self.buttons, 7, 0, 1, 3, alignment=QtCore.Qt.AlignHCenter)
|
||||
self.buttons.accepted.connect(self.accept)
|
||||
self.buttons.rejected.connect(self.reject)
|
||||
|
||||
|
@ -125,7 +125,7 @@ class CreateEditDialog(QtWidgets.QDialog):
|
|||
pyon.encode(result)
|
||||
except:
|
||||
pixmap = self.style().standardPixmap(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_MessageBoxWarning)
|
||||
QtWidgets.QStyle.SP_MessageBoxWarning)
|
||||
self.data_type.setPixmap(pixmap)
|
||||
self.ok.setEnabled(False)
|
||||
else:
|
||||
|
@ -181,8 +181,8 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
def __init__(self, dataset_sub, dataset_ctl):
|
||||
QtWidgets.QDockWidget.__init__(self, "Datasets")
|
||||
self.setObjectName("Datasets")
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
self.dataset_ctl = dataset_ctl
|
||||
|
||||
grid = LayoutWidget()
|
||||
|
@ -194,27 +194,27 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
grid.addWidget(self.search, 0, 0)
|
||||
|
||||
self.table = QtWidgets.QTreeView()
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
|
||||
self.table.setSelectionMode(
|
||||
QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
|
||||
QtWidgets.QAbstractItemView.SingleSelection)
|
||||
grid.addWidget(self.table, 1, 0)
|
||||
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
create_action = QtGui.QAction("New dataset", self.table)
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
create_action = QtWidgets.QAction("New dataset", self.table)
|
||||
create_action.triggered.connect(self.create_clicked)
|
||||
create_action.setShortcut("CTRL+N")
|
||||
create_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
create_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.table.addAction(create_action)
|
||||
edit_action = QtGui.QAction("Edit dataset", self.table)
|
||||
edit_action = QtWidgets.QAction("Edit dataset", self.table)
|
||||
edit_action.triggered.connect(self.edit_clicked)
|
||||
edit_action.setShortcut("RETURN")
|
||||
edit_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
edit_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.table.doubleClicked.connect(self.edit_clicked)
|
||||
self.table.addAction(edit_action)
|
||||
delete_action = QtGui.QAction("Delete dataset", self.table)
|
||||
delete_action = QtWidgets.QAction("Delete dataset", self.table)
|
||||
delete_action.triggered.connect(self.delete_clicked)
|
||||
delete_action.setShortcut("DELETE")
|
||||
delete_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
delete_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.table.addAction(delete_action)
|
||||
|
||||
self.table_model = Model(dict())
|
||||
|
@ -227,8 +227,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
|
|||
|
||||
def set_model(self, model):
|
||||
self.table_model = model
|
||||
self.table_model_filter = QtCore.QSortFilterProxyModel()
|
||||
self.table_model_filter.setRecursiveFilteringEnabled(True)
|
||||
self.table_model_filter = QRecursiveFilterProxyModel()
|
||||
self.table_model_filter.setSourceModel(self.table_model)
|
||||
self.table.setModel(self.table_model_filter)
|
||||
|
||||
|
|
|
@ -4,7 +4,7 @@ import os
|
|||
from functools import partial
|
||||
from collections import OrderedDict
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
import h5py
|
||||
|
||||
from sipyco import pyon
|
||||
|
@ -44,12 +44,12 @@ class _ArgumentEditor(EntryTreeWidget):
|
|||
recompute_arguments = QtWidgets.QPushButton("Recompute all arguments")
|
||||
recompute_arguments.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
|
||||
QtWidgets.QStyle.SP_BrowserReload))
|
||||
recompute_arguments.clicked.connect(dock._recompute_arguments_clicked)
|
||||
|
||||
load_hdf5 = QtWidgets.QPushButton("Load HDF5")
|
||||
load_hdf5.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
|
||||
QtWidgets.QStyle.SP_DialogOpenButton))
|
||||
load_hdf5.clicked.connect(dock._load_hdf5_clicked)
|
||||
|
||||
buttons = LayoutWidget()
|
||||
|
@ -101,7 +101,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
|||
self.resize(100 * qfm.averageCharWidth(), 30 * qfm.lineSpacing())
|
||||
self.setWindowTitle(expurl)
|
||||
self.setWindowIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogContentsView))
|
||||
QtWidgets.QStyle.SP_FileDialogContentsView))
|
||||
|
||||
self.layout = QtWidgets.QGridLayout()
|
||||
top_widget = QtWidgets.QWidget()
|
||||
|
@ -237,21 +237,21 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
|||
|
||||
submit = QtWidgets.QPushButton("Submit")
|
||||
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
||||
QtWidgets.QStyle.SP_DialogOkButton))
|
||||
submit.setToolTip("Schedule the experiment (Ctrl+Return)")
|
||||
submit.setShortcut("CTRL+RETURN")
|
||||
submit.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
||||
submit.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding)
|
||||
self.layout.addWidget(submit, 1, 4, 2, 1)
|
||||
submit.clicked.connect(self.submit_clicked)
|
||||
|
||||
reqterm = QtWidgets.QPushButton("Terminate instances")
|
||||
reqterm.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
|
||||
QtWidgets.QStyle.SP_DialogCancelButton))
|
||||
reqterm.setToolTip("Request termination of instances (Ctrl+Backspace)")
|
||||
reqterm.setShortcut("CTRL+BACKSPACE")
|
||||
reqterm.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
||||
reqterm.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding)
|
||||
self.layout.addWidget(reqterm, 3, 4)
|
||||
reqterm.clicked.connect(self.reqterm_clicked)
|
||||
|
||||
|
@ -306,7 +306,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
|||
def contextMenuEvent(self, event):
|
||||
menu = QtWidgets.QMenu(self)
|
||||
reset_sched = menu.addAction("Reset scheduler settings")
|
||||
action = menu.exec(self.mapToGlobal(event.pos()))
|
||||
action = menu.exec_(self.mapToGlobal(event.pos()))
|
||||
if action == reset_sched:
|
||||
asyncio.ensure_future(self._recompute_sched_options_task())
|
||||
|
||||
|
@ -423,7 +423,7 @@ class _QuickOpenDialog(QtWidgets.QDialog):
|
|||
QtWidgets.QDialog.done(self, r)
|
||||
|
||||
def _open_experiment(self, exp_name, modifiers):
|
||||
if modifiers & QtCore.Qt.KeyboardModifier.ControlModifier:
|
||||
if modifiers & QtCore.Qt.ControlModifier:
|
||||
try:
|
||||
self.manager.submit(exp_name)
|
||||
except:
|
||||
|
@ -467,10 +467,10 @@ class ExperimentManager:
|
|||
self.open_experiments = dict()
|
||||
|
||||
self.is_quick_open_shown = False
|
||||
quick_open_shortcut = QtGui.QShortcut(
|
||||
QtGui.QKeySequence("Ctrl+P"),
|
||||
quick_open_shortcut = QtWidgets.QShortcut(
|
||||
QtCore.Qt.CTRL + QtCore.Qt.Key_P,
|
||||
main_window)
|
||||
quick_open_shortcut.setContext(QtCore.Qt.ShortcutContext.ApplicationShortcut)
|
||||
quick_open_shortcut.setContext(QtCore.Qt.ApplicationShortcut)
|
||||
quick_open_shortcut.activated.connect(self.show_quick_open)
|
||||
|
||||
def set_dataset_model(self, model):
|
||||
|
@ -589,7 +589,7 @@ class ExperimentManager:
|
|||
del self.submission_arguments[expurl]
|
||||
dock = _ExperimentDock(self, expurl)
|
||||
self.open_experiments[expurl] = dock
|
||||
dock.setAttribute(QtCore.Qt.WidgetAttribute.WA_DeleteOnClose)
|
||||
dock.setAttribute(QtCore.Qt.WA_DeleteOnClose)
|
||||
self.main_window.centralWidget().addSubWindow(dock)
|
||||
dock.show()
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, expurl))
|
||||
|
|
|
@ -3,7 +3,7 @@ import logging
|
|||
import re
|
||||
from functools import partial
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
from artiq.gui.tools import LayoutWidget
|
||||
from artiq.gui.models import DictSyncTreeSepModel
|
||||
|
@ -37,8 +37,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
|
|||
self.file_list.doubleClicked.connect(self.accept)
|
||||
|
||||
buttons = QtWidgets.QDialogButtonBox(
|
||||
QtWidgets.QDialogButtonBox.StandardButton.Ok |
|
||||
QtWidgets.QDialogButtonBox.StandardButton.Cancel)
|
||||
QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel)
|
||||
grid.addWidget(buttons, 2, 0, 1, 2)
|
||||
buttons.accepted.connect(self.accept)
|
||||
buttons.rejected.connect(self.reject)
|
||||
|
@ -53,7 +52,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
|
|||
item = QtWidgets.QListWidgetItem()
|
||||
item.setText("..")
|
||||
item.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogToParent))
|
||||
QtWidgets.QStyle.SP_FileDialogToParent))
|
||||
self.file_list.addItem(item)
|
||||
|
||||
try:
|
||||
|
@ -65,9 +64,9 @@ class _OpenFileDialog(QtWidgets.QDialog):
|
|||
return
|
||||
for name in sorted(contents, key=lambda x: (x[-1] not in "\\/", x)):
|
||||
if name[-1] in "\\/":
|
||||
icon = QtWidgets.QStyle.StandardPixmap.SP_DirIcon
|
||||
icon = QtWidgets.QStyle.SP_DirIcon
|
||||
else:
|
||||
icon = QtWidgets.QStyle.StandardPixmap.SP_FileIcon
|
||||
icon = QtWidgets.QStyle.SP_FileIcon
|
||||
if name[-3:] != ".py":
|
||||
continue
|
||||
item = QtWidgets.QListWidgetItem()
|
||||
|
@ -164,8 +163,8 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
schedule_ctl, experiment_db_ctl, device_db_ctl):
|
||||
QtWidgets.QDockWidget.__init__(self, "Explorer")
|
||||
self.setObjectName("Explorer")
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
top_widget = LayoutWidget()
|
||||
self.setWidget(top_widget)
|
||||
|
@ -176,7 +175,7 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
|
||||
top_widget.addWidget(QtWidgets.QLabel("Revision:"), 0, 0)
|
||||
self.revision = QtWidgets.QLabel()
|
||||
self.revision.setTextInteractionFlags(QtCore.Qt.TextInteractionFlag.TextSelectableByMouse)
|
||||
self.revision.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse)
|
||||
top_widget.addWidget(self.revision, 0, 1)
|
||||
|
||||
self.stack = QtWidgets.QStackedWidget()
|
||||
|
@ -188,14 +187,14 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
|
||||
self.el = QtWidgets.QTreeView()
|
||||
self.el.setHeaderHidden(True)
|
||||
self.el.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectItems)
|
||||
self.el.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectItems)
|
||||
self.el.doubleClicked.connect(
|
||||
partial(self.expname_action, "open_experiment"))
|
||||
self.el_buttons.addWidget(self.el, 0, 0, colspan=2)
|
||||
|
||||
open = QtWidgets.QPushButton("Open")
|
||||
open.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
|
||||
QtWidgets.QStyle.SP_DialogOpenButton))
|
||||
open.setToolTip("Open the selected experiment (Return)")
|
||||
self.el_buttons.addWidget(open, 1, 0)
|
||||
open.clicked.connect(
|
||||
|
@ -203,7 +202,7 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
|
||||
submit = QtWidgets.QPushButton("Submit")
|
||||
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
||||
QtWidgets.QStyle.SP_DialogOkButton))
|
||||
submit.setToolTip("Schedule the selected experiment (Ctrl+Return)")
|
||||
self.el_buttons.addWidget(submit, 1, 1)
|
||||
submit.clicked.connect(
|
||||
|
@ -212,41 +211,41 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
self.explist_model = Model(dict())
|
||||
explist_sub.add_setmodel_callback(self.set_model)
|
||||
|
||||
self.el.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
open_action = QtGui.QAction("Open", self.el)
|
||||
self.el.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
open_action = QtWidgets.QAction("Open", self.el)
|
||||
open_action.triggered.connect(
|
||||
partial(self.expname_action, "open_experiment"))
|
||||
open_action.setShortcut("RETURN")
|
||||
open_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
open_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.el.addAction(open_action)
|
||||
submit_action = QtGui.QAction("Submit", self.el)
|
||||
submit_action = QtWidgets.QAction("Submit", self.el)
|
||||
submit_action.triggered.connect(
|
||||
partial(self.expname_action, "submit"))
|
||||
submit_action.setShortcut("CTRL+RETURN")
|
||||
submit_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
submit_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.el.addAction(submit_action)
|
||||
reqterm_action = QtGui.QAction("Request termination of instances", self.el)
|
||||
reqterm_action = QtWidgets.QAction("Request termination of instances", self.el)
|
||||
reqterm_action.triggered.connect(
|
||||
partial(self.expname_action, "request_inst_term"))
|
||||
reqterm_action.setShortcut("CTRL+BACKSPACE")
|
||||
reqterm_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
reqterm_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.el.addAction(reqterm_action)
|
||||
|
||||
set_shortcut_menu = QtWidgets.QMenu(self.el)
|
||||
set_shortcut_menu = QtWidgets.QMenu()
|
||||
for i in range(12):
|
||||
action = QtGui.QAction("F" + str(i+1), self.el)
|
||||
action = QtWidgets.QAction("F" + str(i + 1), self.el)
|
||||
action.triggered.connect(partial(self.set_shortcut, i))
|
||||
set_shortcut_menu.addAction(action)
|
||||
|
||||
set_shortcut_action = QtGui.QAction("Set shortcut", self.el)
|
||||
set_shortcut_action = QtWidgets.QAction("Set shortcut", self.el)
|
||||
set_shortcut_action.setMenu(set_shortcut_menu)
|
||||
self.el.addAction(set_shortcut_action)
|
||||
|
||||
sep = QtGui.QAction(self.el)
|
||||
sep = QtWidgets.QAction(self.el)
|
||||
sep.setSeparator(True)
|
||||
self.el.addAction(sep)
|
||||
|
||||
scan_repository_action = QtGui.QAction("Scan repository HEAD",
|
||||
scan_repository_action = QtWidgets.QAction("Scan repository HEAD",
|
||||
self.el)
|
||||
|
||||
def scan_repository():
|
||||
|
@ -254,14 +253,15 @@ class ExplorerDock(QtWidgets.QDockWidget):
|
|||
scan_repository_action.triggered.connect(scan_repository)
|
||||
self.el.addAction(scan_repository_action)
|
||||
|
||||
scan_ddb_action = QtGui.QAction("Scan device database", self.el)
|
||||
scan_ddb_action = QtWidgets.QAction("Scan device database", self.el)
|
||||
|
||||
def scan_ddb():
|
||||
asyncio.ensure_future(device_db_ctl.scan())
|
||||
scan_ddb_action.triggered.connect(scan_ddb)
|
||||
self.el.addAction(scan_ddb_action)
|
||||
|
||||
self.current_directory = ""
|
||||
open_file_action = QtGui.QAction("Open file outside repository",
|
||||
open_file_action = QtWidgets.QAction("Open file outside repository",
|
||||
self.el)
|
||||
open_file_action.triggered.connect(
|
||||
lambda: _OpenFileDialog(self, self.exp_manager,
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
import logging
|
||||
import asyncio
|
||||
|
||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||
|
||||
from artiq.gui.models import DictSyncModel
|
||||
from artiq.gui.entries import EntryTreeWidget, procdesc_to_entry
|
||||
|
@ -44,11 +44,11 @@ class _InteractiveArgsRequest(EntryTreeWidget):
|
|||
self.quickStyleClicked.connect(self.supply)
|
||||
cancel_btn = QtWidgets.QPushButton("Cancel")
|
||||
cancel_btn.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
|
||||
QtWidgets.QStyle.SP_DialogCancelButton))
|
||||
cancel_btn.clicked.connect(self.cancel)
|
||||
supply_btn = QtWidgets.QPushButton("Supply")
|
||||
supply_btn.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
||||
QtWidgets.QStyle.SP_DialogOkButton))
|
||||
supply_btn.clicked.connect(self.supply)
|
||||
buttons = LayoutWidget()
|
||||
buttons.addWidget(cancel_btn, 1, 1)
|
||||
|
@ -78,7 +78,7 @@ class _InteractiveArgsView(QtWidgets.QStackedWidget):
|
|||
QtWidgets.QStackedWidget.__init__(self)
|
||||
self.tabs = QtWidgets.QTabWidget()
|
||||
self.default_label = QtWidgets.QLabel("No pending interactive arguments requests.")
|
||||
self.default_label.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
self.default_label.setAlignment(QtCore.Qt.AlignCenter)
|
||||
font = QtGui.QFont(self.default_label.font())
|
||||
font.setItalic(True)
|
||||
self.default_label.setFont(font)
|
||||
|
@ -99,12 +99,9 @@ class _InteractiveArgsView(QtWidgets.QStackedWidget):
|
|||
self._insert_widget(i)
|
||||
|
||||
def _insert_widget(self, row):
|
||||
rid = self.model.data(self.model.index(row, 0),
|
||||
QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
title = self.model.data(self.model.index(row, 1),
|
||||
QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
arglist_desc = self.model.data(self.model.index(row, 2),
|
||||
QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
rid = self.model.data(self.model.index(row, 0), QtCore.Qt.DisplayRole)
|
||||
title = self.model.data(self.model.index(row, 1), QtCore.Qt.DisplayRole)
|
||||
arglist_desc = self.model.data(self.model.index(row, 2), QtCore.Qt.DisplayRole)
|
||||
inter_args_request = _InteractiveArgsRequest(rid, arglist_desc)
|
||||
inter_args_request.supplied.connect(self.supplied)
|
||||
inter_args_request.cancelled.connect(self.cancelled)
|
||||
|
@ -129,7 +126,7 @@ class InteractiveArgsDock(QtWidgets.QDockWidget):
|
|||
QtWidgets.QDockWidget.__init__(self, "Interactive Args")
|
||||
self.setObjectName("Interactive Args")
|
||||
self.setFeatures(
|
||||
self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)
|
||||
QtWidgets.QDockWidget.DockWidgetMovable | QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
self.interactive_args_rpc = interactive_args_rpc
|
||||
self.request_view = _InteractiveArgsView()
|
||||
self.request_view.supplied.connect(self.supply)
|
||||
|
|
|
@ -4,7 +4,7 @@ import textwrap
|
|||
from collections import namedtuple
|
||||
from functools import partial
|
||||
|
||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
from artiq.coredevice.comm_moninj import CommMonInj, TTLOverride, TTLProbe
|
||||
from artiq.coredevice.ad9912_reg import AD9912_SER_CONF
|
||||
|
@ -23,7 +23,7 @@ class _CancellableLineEdit(QtWidgets.QLineEdit):
|
|||
|
||||
def keyPressEvent(self, event):
|
||||
key = event.key()
|
||||
if key == QtCore.Qt.Key.Key_Escape:
|
||||
if key == QtCore.Qt.Key_Escape:
|
||||
self.esc_cb(event)
|
||||
QtWidgets.QLineEdit.keyPressEvent(self, event)
|
||||
|
||||
|
@ -31,8 +31,9 @@ class _CancellableLineEdit(QtWidgets.QLineEdit):
|
|||
class _MoninjWidget(QtWidgets.QFrame):
|
||||
def __init__(self, title):
|
||||
QtWidgets.QFrame.__init__(self)
|
||||
self.setFrameShape(QtWidgets.QFrame.Shape.Box)
|
||||
self.setFrameShadow(QtWidgets.QFrame.Shadow.Raised)
|
||||
self.setFrameShape(QtWidgets.QFrame.Box)
|
||||
self.setFrameShape(QtWidgets.QFrame.Box)
|
||||
self.setFrameShadow(QtWidgets.QFrame.Raised)
|
||||
self.setFixedHeight(100)
|
||||
self.setFixedWidth(150)
|
||||
self.grid = QtWidgets.QGridLayout()
|
||||
|
@ -42,9 +43,7 @@ class _MoninjWidget(QtWidgets.QFrame):
|
|||
self.setLayout(self.grid)
|
||||
title = elide(title, 20)
|
||||
label = QtWidgets.QLabel(title)
|
||||
label.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
label.setSizePolicy(QtWidgets.QSizePolicy.Policy.Ignored,
|
||||
QtWidgets.QSizePolicy.Policy.Preferred)
|
||||
label.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignTop)
|
||||
self.grid.addWidget(label, 1, 1)
|
||||
|
||||
|
||||
|
@ -61,7 +60,7 @@ class _TTLWidget(_MoninjWidget):
|
|||
self.grid.addWidget(self.stack, 2, 1)
|
||||
|
||||
self.direction = QtWidgets.QLabel()
|
||||
self.direction.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
self.direction.setAlignment(QtCore.Qt.AlignCenter)
|
||||
self.stack.addWidget(self.direction)
|
||||
|
||||
grid_cb = LayoutWidget()
|
||||
|
@ -81,7 +80,7 @@ class _TTLWidget(_MoninjWidget):
|
|||
self.stack.addWidget(grid_cb)
|
||||
|
||||
self.value = QtWidgets.QLabel()
|
||||
self.value.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
self.value.setAlignment(QtCore.Qt.AlignCenter)
|
||||
self.grid.addWidget(self.value, 3, 1)
|
||||
|
||||
self.grid.setRowStretch(1, 1)
|
||||
|
@ -216,11 +215,11 @@ class _DDSWidget(_MoninjWidget):
|
|||
grid_disp.layout.setVerticalSpacing(0)
|
||||
|
||||
self.value_label = QtWidgets.QLabel()
|
||||
self.value_label.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
self.value_label.setAlignment(QtCore.Qt.AlignCenter)
|
||||
grid_disp.addWidget(self.value_label, 0, 1, 1, 2)
|
||||
|
||||
unit = QtWidgets.QLabel("MHz")
|
||||
unit.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
unit.setAlignment(QtCore.Qt.AlignCenter)
|
||||
grid_disp.addWidget(unit, 0, 3, 1, 1)
|
||||
|
||||
self.data_stack.addWidget(grid_disp)
|
||||
|
@ -232,10 +231,10 @@ class _DDSWidget(_MoninjWidget):
|
|||
grid_edit.layout.setVerticalSpacing(0)
|
||||
|
||||
self.value_edit = _CancellableLineEdit(self)
|
||||
self.value_edit.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight)
|
||||
self.value_edit.setAlignment(QtCore.Qt.AlignRight)
|
||||
grid_edit.addWidget(self.value_edit, 0, 1, 1, 2)
|
||||
unit = QtWidgets.QLabel("MHz")
|
||||
unit.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
|
||||
unit.setAlignment(QtCore.Qt.AlignCenter)
|
||||
grid_edit.addWidget(unit, 0, 3, 1, 1)
|
||||
self.data_stack.addWidget(grid_edit)
|
||||
|
||||
|
@ -338,7 +337,7 @@ class _DACWidget(_MoninjWidget):
|
|||
self.offset_dacs = offset_dacs
|
||||
|
||||
self.value = QtWidgets.QLabel()
|
||||
self.value.setAlignment(QtCore.Qt.AlignmentFlag.AlignHCenter | QtCore.Qt.AlignmentFlag.AlignTop)
|
||||
self.value.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignTop)
|
||||
self.grid.addWidget(self.value, 2, 1, 6, 1)
|
||||
|
||||
self.grid.setRowStretch(1, 1)
|
||||
|
@ -761,7 +760,7 @@ class Model(DictSyncTreeSepModel):
|
|||
class _AddChannelDialog(QtWidgets.QDialog):
|
||||
def __init__(self, parent, model):
|
||||
QtWidgets.QDialog.__init__(self, parent=parent)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
self.setWindowTitle("Add channels")
|
||||
|
||||
layout = QtWidgets.QVBoxLayout()
|
||||
|
@ -771,15 +770,14 @@ class _AddChannelDialog(QtWidgets.QDialog):
|
|||
self._tree_view = QtWidgets.QTreeView()
|
||||
self._tree_view.setHeaderHidden(True)
|
||||
self._tree_view.setSelectionBehavior(
|
||||
QtWidgets.QAbstractItemView.SelectionBehavior.SelectItems)
|
||||
QtWidgets.QAbstractItemView.SelectItems)
|
||||
self._tree_view.setSelectionMode(
|
||||
QtWidgets.QAbstractItemView.SelectionMode.ExtendedSelection)
|
||||
QtWidgets.QAbstractItemView.ExtendedSelection)
|
||||
self._tree_view.setModel(self._model)
|
||||
layout.addWidget(self._tree_view)
|
||||
|
||||
self._button_box = QtWidgets.QDialogButtonBox(
|
||||
QtWidgets.QDialogButtonBox.StandardButton.Ok | \
|
||||
QtWidgets.QDialogButtonBox.StandardButton.Cancel
|
||||
QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel
|
||||
)
|
||||
self._button_box.setCenterButtons(True)
|
||||
self._button_box.accepted.connect(self.add_channels)
|
||||
|
@ -801,8 +799,8 @@ class _MonInjDock(QDockWidgetCloseDetect):
|
|||
def __init__(self, name, manager):
|
||||
QtWidgets.QDockWidget.__init__(self, "MonInj")
|
||||
self.setObjectName(name)
|
||||
self.setFeatures(self.DockWidgetFeature.DockWidgetMovable |
|
||||
self.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
grid = LayoutWidget()
|
||||
self.setWidget(grid)
|
||||
self.manager = manager
|
||||
|
@ -811,7 +809,7 @@ class _MonInjDock(QDockWidgetCloseDetect):
|
|||
newdock = QtWidgets.QToolButton()
|
||||
newdock.setToolTip("Create new moninj dock")
|
||||
newdock.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogNewFolder))
|
||||
QtWidgets.QStyle.SP_FileDialogNewFolder))
|
||||
newdock.clicked.connect(lambda: self.manager.create_new_dock())
|
||||
grid.addWidget(newdock, 0, 0)
|
||||
|
||||
|
@ -822,7 +820,7 @@ class _MonInjDock(QDockWidgetCloseDetect):
|
|||
dialog_btn.setToolTip("Add channels")
|
||||
dialog_btn.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogListView))
|
||||
QtWidgets.QStyle.SP_FileDialogListView))
|
||||
dialog_btn.clicked.connect(self.channel_dialog.open)
|
||||
grid.addWidget(dialog_btn, 0, 1)
|
||||
|
||||
|
@ -835,8 +833,7 @@ class _MonInjDock(QDockWidgetCloseDetect):
|
|||
self.flow = DragDropFlowLayoutWidget()
|
||||
scroll_area.setWidgetResizable(True)
|
||||
scroll_area.setWidget(self.flow)
|
||||
self.flow.setContextMenuPolicy(
|
||||
QtCore.Qt.ContextMenuPolicy.CustomContextMenu)
|
||||
self.flow.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
|
||||
self.flow.customContextMenuRequested.connect(self.custom_context_menu)
|
||||
|
||||
def custom_context_menu(self, pos):
|
||||
|
@ -844,7 +841,7 @@ class _MonInjDock(QDockWidgetCloseDetect):
|
|||
if index == -1:
|
||||
return
|
||||
menu = QtWidgets.QMenu()
|
||||
delete_action = QtGui.QAction("Delete widget", menu)
|
||||
delete_action = QtWidgets.QAction("Delete widget", menu)
|
||||
delete_action.triggered.connect(partial(self.delete_widget, index))
|
||||
menu.addAction(delete_action)
|
||||
menu.exec_(self.flow.mapToGlobal(pos))
|
||||
|
@ -933,7 +930,7 @@ class MonInj:
|
|||
dock = _MonInjDock(name, self)
|
||||
self.docks[name] = dock
|
||||
if add_to_area:
|
||||
self.main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, dock)
|
||||
self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
|
||||
dock.setFloating(True)
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, name))
|
||||
self.update_closable()
|
||||
|
@ -947,10 +944,10 @@ class MonInj:
|
|||
dock.deleteLater()
|
||||
|
||||
def update_closable(self):
|
||||
flags = (QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
flags = (QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
if len(self.docks) > 1:
|
||||
flags |= QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetClosable
|
||||
flags |= QtWidgets.QDockWidget.DockWidgetClosable
|
||||
for dock in self.docks.values():
|
||||
dock.setFeatures(flags)
|
||||
|
||||
|
@ -970,8 +967,7 @@ class MonInj:
|
|||
dock = _MonInjDock(name, self)
|
||||
self.docks[name] = dock
|
||||
dock.restore_state(dock_state)
|
||||
self.main_window.addDockWidget(
|
||||
QtCore.Qt.DockWidgetArea.RightDockWidgetArea, dock)
|
||||
self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, name))
|
||||
self.update_closable()
|
||||
|
||||
|
|
|
@ -3,7 +3,7 @@ import time
|
|||
from functools import partial
|
||||
import logging
|
||||
|
||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||
|
||||
from artiq.gui.models import DictSyncModel
|
||||
from artiq.tools import elide
|
||||
|
@ -61,31 +61,31 @@ class ScheduleDock(QtWidgets.QDockWidget):
|
|||
def __init__(self, schedule_ctl, schedule_sub):
|
||||
QtWidgets.QDockWidget.__init__(self, "Schedule")
|
||||
self.setObjectName("Schedule")
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
self.schedule_ctl = schedule_ctl
|
||||
|
||||
self.table = QtWidgets.QTableView()
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
|
||||
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
|
||||
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection)
|
||||
self.table.verticalHeader().setSectionResizeMode(
|
||||
QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
|
||||
QtWidgets.QHeaderView.ResizeToContents)
|
||||
self.table.verticalHeader().hide()
|
||||
self.setWidget(self.table)
|
||||
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
request_termination_action = QtGui.QAction("Request termination", self.table)
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
request_termination_action = QtWidgets.QAction("Request termination", self.table)
|
||||
request_termination_action.triggered.connect(partial(self.delete_clicked, True))
|
||||
request_termination_action.setShortcut("DELETE")
|
||||
request_termination_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
request_termination_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.table.addAction(request_termination_action)
|
||||
delete_action = QtGui.QAction("Delete", self.table)
|
||||
delete_action = QtWidgets.QAction("Delete", self.table)
|
||||
delete_action.triggered.connect(partial(self.delete_clicked, False))
|
||||
delete_action.setShortcut("SHIFT+DELETE")
|
||||
delete_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
delete_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
self.table.addAction(delete_action)
|
||||
terminate_pipeline = QtGui.QAction(
|
||||
terminate_pipeline = QtWidgets.QAction(
|
||||
"Gracefully terminate all in pipeline", self.table)
|
||||
terminate_pipeline.triggered.connect(self.terminate_pipeline_clicked)
|
||||
self.table.addAction(terminate_pipeline)
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
import logging
|
||||
from functools import partial
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
@ -11,8 +11,8 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
|||
def __init__(self, main_window, exp_manager):
|
||||
QtWidgets.QDockWidget.__init__(self, "Shortcuts")
|
||||
self.setObjectName("Shortcuts")
|
||||
self.setFeatures(self.DockWidgetFeature.DockWidgetMovable |
|
||||
self.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
layout = QtWidgets.QGridLayout()
|
||||
top_widget = QtWidgets.QWidget()
|
||||
|
@ -36,25 +36,25 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
|||
layout.addWidget(QtWidgets.QLabel("F" + str(i + 1)), row, 0)
|
||||
|
||||
label = QtWidgets.QLabel()
|
||||
label.setSizePolicy(QtWidgets.QSizePolicy.Policy.Ignored,
|
||||
QtWidgets.QSizePolicy.Policy.Ignored)
|
||||
label.setSizePolicy(QtWidgets.QSizePolicy.Ignored,
|
||||
QtWidgets.QSizePolicy.Ignored)
|
||||
layout.addWidget(label, row, 1)
|
||||
|
||||
clear = QtWidgets.QToolButton()
|
||||
clear.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogDiscardButton))
|
||||
QtWidgets.QStyle.SP_DialogDiscardButton))
|
||||
layout.addWidget(clear, row, 2)
|
||||
clear.clicked.connect(partial(self.set_shortcut, i, ""))
|
||||
|
||||
open = QtWidgets.QToolButton()
|
||||
open.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
|
||||
QtWidgets.QStyle.SP_DialogOpenButton))
|
||||
layout.addWidget(open, row, 3)
|
||||
open.clicked.connect(partial(self._open_experiment, i))
|
||||
|
||||
submit = QtWidgets.QPushButton("Submit")
|
||||
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
||||
QtWidgets.QStyle.SP_DialogOkButton))
|
||||
layout.addWidget(submit, row, 4)
|
||||
submit.clicked.connect(partial(self._activated, i))
|
||||
|
||||
|
@ -68,8 +68,8 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
|||
"open": open,
|
||||
"submit": submit
|
||||
}
|
||||
shortcut = QtGui.QShortcut("F" + str(i+1), main_window)
|
||||
shortcut.setContext(QtCore.Qt.ShortcutContext.ApplicationShortcut)
|
||||
shortcut = QtWidgets.QShortcut("F" + str(i + 1), main_window)
|
||||
shortcut.setContext(QtCore.Qt.ApplicationShortcut)
|
||||
shortcut.activated.connect(partial(self._activated, i))
|
||||
|
||||
def _activated(self, nr):
|
||||
|
|
|
@ -5,7 +5,7 @@ import bisect
|
|||
import itertools
|
||||
import math
|
||||
|
||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||
|
||||
import pyqtgraph as pg
|
||||
import numpy as np
|
||||
|
@ -130,7 +130,7 @@ class _BaseWaveform(pg.PlotWidget):
|
|||
self.setMinimumHeight(WAVEFORM_MIN_HEIGHT)
|
||||
self.setMaximumHeight(WAVEFORM_MAX_HEIGHT)
|
||||
self.setMenuEnabled(False)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
|
||||
self.name = name
|
||||
self.width = width
|
||||
|
@ -200,27 +200,24 @@ class _BaseWaveform(pg.PlotWidget):
|
|||
self.cursor_y = self.y_data[ind]
|
||||
|
||||
def mouseMoveEvent(self, e):
|
||||
if e.buttons() == QtCore.Qt.MouseButton.LeftButton \
|
||||
and e.modifiers() == QtCore.Qt.KeyboardModifier.ShiftModifier:
|
||||
if e.buttons() == QtCore.Qt.LeftButton \
|
||||
and e.modifiers() == QtCore.Qt.ShiftModifier:
|
||||
drag = QtGui.QDrag(self)
|
||||
mime = QtCore.QMimeData()
|
||||
drag.setMimeData(mime)
|
||||
pixmapi = QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileIcon)
|
||||
QtWidgets.QStyle.SP_FileIcon)
|
||||
drag.setPixmap(pixmapi.pixmap(32))
|
||||
drag.exec(QtCore.Qt.DropAction.MoveAction)
|
||||
drag.exec_(QtCore.Qt.MoveAction)
|
||||
else:
|
||||
super().mouseMoveEvent(e)
|
||||
|
||||
def wheelEvent(self, e):
|
||||
if e.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
|
||||
if e.modifiers() & QtCore.Qt.ControlModifier:
|
||||
super().wheelEvent(e)
|
||||
else:
|
||||
e.ignore()
|
||||
|
||||
|
||||
def mouseDoubleClickEvent(self, e):
|
||||
pos = self.view_box.mapSceneToView(e.position())
|
||||
pos = self.view_box.mapSceneToView(e.pos())
|
||||
self.cursorMove.emit(pos.x())
|
||||
|
||||
|
||||
|
@ -396,13 +393,6 @@ class LogWaveform(_BaseWaveform):
|
|||
self.plot_data_item.setData(x=[], y=[])
|
||||
|
||||
|
||||
# pg.GraphicsView ignores dragEnterEvent but not dragLeaveEvent
|
||||
# https://github.com/pyqtgraph/pyqtgraph/blob/1e98704eac6b85de9c35371079f561042e88ad68/pyqtgraph/widgets/GraphicsView.py#L388
|
||||
class _RefAxis(pg.PlotWidget):
|
||||
def dragLeaveEvent(self, ev):
|
||||
ev.ignore()
|
||||
|
||||
|
||||
class _WaveformView(QtWidgets.QWidget):
|
||||
cursorMove = QtCore.pyqtSignal(float)
|
||||
|
||||
|
@ -418,7 +408,7 @@ class _WaveformView(QtWidgets.QWidget):
|
|||
layout.setSpacing(0)
|
||||
self.setLayout(layout)
|
||||
|
||||
self._ref_axis = _RefAxis()
|
||||
self._ref_axis = pg.PlotWidget()
|
||||
self._ref_axis.hideAxis("bottom")
|
||||
self._ref_axis.hideAxis("left")
|
||||
self._ref_axis.hideButtons()
|
||||
|
@ -438,9 +428,8 @@ class _WaveformView(QtWidgets.QWidget):
|
|||
scroll_area = VDragScrollArea(self)
|
||||
scroll_area.setWidgetResizable(True)
|
||||
scroll_area.setContentsMargins(0, 0, 0, 0)
|
||||
scroll_area.setFrameShape(QtWidgets.QFrame.Shape.NoFrame)
|
||||
scroll_area.setVerticalScrollBarPolicy(
|
||||
QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
|
||||
scroll_area.setFrameShape(QtWidgets.QFrame.NoFrame)
|
||||
scroll_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
|
||||
layout.addWidget(scroll_area)
|
||||
|
||||
self._splitter = VDragDropSplitter(parent=scroll_area)
|
||||
|
@ -455,11 +444,10 @@ class _WaveformView(QtWidgets.QWidget):
|
|||
)
|
||||
self.confirm_delete_dialog.setText("Delete all waveforms?")
|
||||
self.confirm_delete_dialog.setStandardButtons(
|
||||
QtWidgets.QMessageBox.StandardButton.Ok |
|
||||
QtWidgets.QMessageBox.StandardButton.Cancel
|
||||
QtWidgets.QMessageBox.Ok | QtWidgets.QMessageBox.Cancel
|
||||
)
|
||||
self.confirm_delete_dialog.setDefaultButton(
|
||||
QtWidgets.QMessageBox.StandardButton.Ok
|
||||
QtWidgets.QMessageBox.Ok
|
||||
)
|
||||
|
||||
def setModel(self, model):
|
||||
|
@ -531,10 +519,10 @@ class _WaveformView(QtWidgets.QWidget):
|
|||
w.setStoppedX(self._stopped_x)
|
||||
w.cursorMove.connect(self.cursorMove)
|
||||
w.onCursorMove(self._cursor_x)
|
||||
action = QtGui.QAction("Delete waveform", w)
|
||||
action = QtWidgets.QAction("Delete waveform", w)
|
||||
action.triggered.connect(lambda: self._delete_waveform(w))
|
||||
w.addAction(action)
|
||||
action = QtGui.QAction("Delete all waveforms", w)
|
||||
action = QtWidgets.QAction("Delete all waveforms", w)
|
||||
action.triggered.connect(self.confirm_delete_dialog.open)
|
||||
w.addAction(action)
|
||||
return w
|
||||
|
@ -560,7 +548,7 @@ class _WaveformModel(QtCore.QAbstractTableModel):
|
|||
def columnCount(self, parent=QtCore.QModelIndex()):
|
||||
return len(self.headers)
|
||||
|
||||
def data(self, index, role=QtCore.Qt.ItemDataRole.DisplayRole):
|
||||
def data(self, index, role=QtCore.Qt.DisplayRole):
|
||||
if index.isValid():
|
||||
return self.backing_struct[index.row()][index.column()]
|
||||
return None
|
||||
|
@ -668,7 +656,7 @@ class Model(DictSyncTreeSepModel):
|
|||
class _AddChannelDialog(QtWidgets.QDialog):
|
||||
def __init__(self, parent, model):
|
||||
QtWidgets.QDialog.__init__(self, parent=parent)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
self.setWindowTitle("Add channels")
|
||||
|
||||
layout = QtWidgets.QVBoxLayout()
|
||||
|
@ -678,14 +666,14 @@ class _AddChannelDialog(QtWidgets.QDialog):
|
|||
self._tree_view = QtWidgets.QTreeView()
|
||||
self._tree_view.setHeaderHidden(True)
|
||||
self._tree_view.setSelectionBehavior(
|
||||
QtWidgets.QAbstractItemView.SelectionBehavior.SelectItems)
|
||||
QtWidgets.QAbstractItemView.SelectItems)
|
||||
self._tree_view.setSelectionMode(
|
||||
QtWidgets.QAbstractItemView.SelectionMode.ExtendedSelection)
|
||||
QtWidgets.QAbstractItemView.ExtendedSelection)
|
||||
self._tree_view.setModel(self._model)
|
||||
layout.addWidget(self._tree_view)
|
||||
|
||||
self._button_box = QtWidgets.QDialogButtonBox(
|
||||
QtWidgets.QDialogButtonBox.StandardButton.Ok | QtWidgets.QDialogButtonBox.StandardButton.Cancel
|
||||
QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel
|
||||
)
|
||||
self._button_box.setCenterButtons(True)
|
||||
self._button_box.accepted.connect(self.add_channels)
|
||||
|
@ -708,7 +696,7 @@ class WaveformDock(QtWidgets.QDockWidget):
|
|||
QtWidgets.QDockWidget.__init__(self, "Waveform")
|
||||
self.setObjectName("Waveform")
|
||||
self.setFeatures(
|
||||
self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)
|
||||
QtWidgets.QDockWidget.DockWidgetMovable | QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
self._channel_model = Model({})
|
||||
self._waveform_model = _WaveformModel()
|
||||
|
@ -736,14 +724,14 @@ class WaveformDock(QtWidgets.QDockWidget):
|
|||
self._menu_btn = QtWidgets.QPushButton()
|
||||
self._menu_btn.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogStart))
|
||||
QtWidgets.QStyle.SP_FileDialogStart))
|
||||
grid.addWidget(self._menu_btn, 0, 0)
|
||||
|
||||
self._request_dump_btn = QtWidgets.QToolButton()
|
||||
self._request_dump_btn.setToolTip("Fetch analyzer data from device")
|
||||
self._request_dump_btn.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
|
||||
QtWidgets.QStyle.SP_BrowserReload))
|
||||
self._request_dump_btn.clicked.connect(
|
||||
lambda: asyncio.ensure_future(exc_to_warning(self.proxy_client.trigger_proxy_task())))
|
||||
grid.addWidget(self._request_dump_btn, 0, 1)
|
||||
|
@ -755,7 +743,7 @@ class WaveformDock(QtWidgets.QDockWidget):
|
|||
self._add_btn.setToolTip("Add channels...")
|
||||
self._add_btn.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogListView))
|
||||
QtWidgets.QStyle.SP_FileDialogListView))
|
||||
self._add_btn.clicked.connect(self._add_channel_dialog.open)
|
||||
grid.addWidget(self._add_btn, 0, 2)
|
||||
|
||||
|
@ -775,7 +763,7 @@ class WaveformDock(QtWidgets.QDockWidget):
|
|||
self._reset_zoom_btn.setToolTip("Reset zoom")
|
||||
self._reset_zoom_btn.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_TitleBarMaxButton))
|
||||
QtWidgets.QStyle.SP_TitleBarMaxButton))
|
||||
self._reset_zoom_btn.clicked.connect(self._waveform_view.resetZoom)
|
||||
grid.addWidget(self._reset_zoom_btn, 0, 3)
|
||||
|
||||
|
@ -785,7 +773,7 @@ class WaveformDock(QtWidgets.QDockWidget):
|
|||
grid.addWidget(self._cursor_control, 0, 4, colspan=6)
|
||||
|
||||
def _add_async_action(self, label, coro):
|
||||
action = QtGui.QAction(label, self)
|
||||
action = QtWidgets.QAction(label, self)
|
||||
action.triggered.connect(
|
||||
lambda: asyncio.ensure_future(exc_to_warning(coro())))
|
||||
self._file_menu.addAction(action)
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
from PyQt6 import QtWidgets
|
||||
from PyQt5 import QtWidgets
|
||||
|
||||
from artiq.applets.simple import SimpleApplet
|
||||
|
||||
|
|
|
@ -513,6 +513,7 @@ dependencies = [
|
|||
"board_misoc",
|
||||
"build_misoc",
|
||||
"byteorder",
|
||||
"crc",
|
||||
"cslice",
|
||||
"dyld",
|
||||
"eh",
|
||||
|
@ -553,6 +554,8 @@ dependencies = [
|
|||
"board_artiq",
|
||||
"board_misoc",
|
||||
"build_misoc",
|
||||
"byteorder",
|
||||
"crc",
|
||||
"cslice",
|
||||
"eh",
|
||||
"io",
|
||||
|
|
|
@ -173,12 +173,4 @@ static mut API: &'static [(&'static str, *const ())] = &[
    api!(spi_set_config = ::nrt_bus::spi::set_config),
    api!(spi_write = ::nrt_bus::spi::write),
    api!(spi_read = ::nrt_bus::spi::read),

    /*
     * syscall for unit tests
     * Used in `artiq.tests.coredevice.test_exceptions.ExceptionTest.test_raise_exceptions_kernel`
     * This syscall checks that the exception IDs used in the Python `EmbeddingMap` (in `artiq.compiler.embedding`)
     * match the `EXCEPTION_ID_LOOKUP` defined in the firmware (`artiq::firmware::ksupport::eh_artiq`)
     */
    api!(test_exception_id_sync = ::eh_artiq::test_exception_id_sync)
];
@ -328,30 +328,19 @@ extern fn stop_fn(_version: c_int,
    }
}

// Must be kept in sync with `artiq.compiler.embedding`
static EXCEPTION_ID_LOOKUP: [(&str, u32); 22] = [
    ("RTIOUnderflow", 0),
    ("RTIOOverflow", 1),
    ("RTIODestinationUnreachable", 2),
    ("DMAError", 3),
    ("I2CError", 4),
    ("CacheError", 5),
    ("SPIError", 6),
    ("SubkernelError", 7),
    ("AssertionError", 8),
    ("AttributeError", 9),
    ("IndexError", 10),
    ("IOError", 11),
    ("KeyError", 12),
    ("NotImplementedError", 13),
    ("OverflowError", 14),
    ("RuntimeError", 15),
    ("TimeoutError", 16),
    ("TypeError", 17),
    ("ValueError", 18),
    ("ZeroDivisionError", 19),
    ("LinAlgError", 20),
    ("UnwrapNoneError", 21),
static EXCEPTION_ID_LOOKUP: [(&str, u32); 12] = [
    ("RuntimeError", 0),
    ("RTIOUnderflow", 1),
    ("RTIOOverflow", 2),
    ("RTIODestinationUnreachable", 3),
    ("DMAError", 4),
    ("I2CError", 5),
    ("CacheError", 6),
    ("SPIError", 7),
    ("ZeroDivisionError", 8),
    ("IndexError", 9),
    ("UnwrapNoneError", 10),
    ("SubkernelError", 11)
];

pub fn get_exception_id(name: &str) -> u32 {

@ -363,29 +352,3 @@ pub fn get_exception_id(name: &str) -> u32 {
        unimplemented!("unallocated internal exception id")
    }
}

/// Takes as input exception id from host
/// Generates a new exception with:
/// * `id` set to `exn_id`
/// * `message` set to corresponding exception name from `EXCEPTION_ID_LOOKUP`
///
/// The message is matched on host to ensure correct exception is being referred
/// This test checks the synchronization of exception ids for runtime errors
#[no_mangle]
pub extern "C-unwind" fn test_exception_id_sync(exn_id: u32) {
    let message = EXCEPTION_ID_LOOKUP
        .iter()
        .find_map(|&(name, id)| if id == exn_id { Some(name) } else { None })
        .unwrap_or("unallocated internal exception id");

    let exn = Exception {
        id: exn_id,
        file: file!().as_c_slice(),
        line: 0,
        column: 0,
        function: "test_exception_id_sync".as_c_slice(),
        message: message.as_c_slice(),
        param: [0, 0, 0]
    };
    unsafe { raise(&exn) };
}
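The removed `test_exception_id_sync` syscall exists so the host can verify that its exception IDs and the firmware table above stay in sync. A hedged Python sketch of the kind of consistency check it enables follows; the 12-entry mapping is copied by hand from the branch's table for illustration, and `raise_from_firmware()` / `check_one()` are stand-ins, not actual ARTIQ APIs.

```python
# Hedged sketch of the id <-> name consistency check that the syscall enables.
FIRMWARE_IDS = {
    0: "RuntimeError", 1: "RTIOUnderflow", 2: "RTIOOverflow",
    3: "RTIODestinationUnreachable", 4: "DMAError", 5: "I2CError",
    6: "CacheError", 7: "SPIError", 8: "ZeroDivisionError",
    9: "IndexError", 10: "UnwrapNoneError", 11: "SubkernelError",
}

def raise_from_firmware(exn_id):
    # stand-in for the syscall: the firmware raises an exception whose
    # message is the name it associates with exn_id
    raise RuntimeError(FIRMWARE_IDS.get(exn_id, "unallocated internal exception id"))

def check_one(exn_id, expected_name):
    try:
        raise_from_firmware(exn_id)
    except RuntimeError as exn:
        assert str(exn) == expected_name, (exn_id, str(exn), expected_name)

for exn_id, name in FIRMWARE_IDS.items():
    check_one(exn_id, name)
```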
@ -484,23 +484,17 @@ extern "C-unwind" fn subkernel_load_run(id: u32, destination: u8, run: bool) {

extern "C-unwind" fn subkernel_await_finish(id: u32, timeout: i64) {
    send(&SubkernelAwaitFinishRequest { id: id, timeout: timeout });
    recv(move |request| {
        if let SubkernelAwaitFinishReply = request { }
        else if let SubkernelError(status) = request {
            match status {
                SubkernelStatus::IncorrectState => raise!("SubkernelError",
                    "Subkernel not running"),
                SubkernelStatus::Timeout => raise!("SubkernelError",
                    "Subkernel timed out"),
                SubkernelStatus::CommLost => raise!("SubkernelError",
                    "Lost communication with satellite"),
                SubkernelStatus::OtherError => raise!("SubkernelError",
                    "An error occurred during subkernel operation"),
                SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
            }
        } else {
            send(&Log(format_args!("unexpected reply: {:?}\n", request)));
            loop {}
    recv!(SubkernelAwaitFinishReply { status } => {
        match status {
            SubkernelStatus::NoError => (),
            SubkernelStatus::IncorrectState => raise!("SubkernelError",
                "Subkernel not running"),
            SubkernelStatus::Timeout => raise!("SubkernelError",
                "Subkernel timed out"),
            SubkernelStatus::CommLost => raise!("SubkernelError",
                "Lost communication with satellite"),
            SubkernelStatus::OtherError => raise!("SubkernelError",
                "An error occurred during subkernel operation")
        }
    })
}

@ -518,28 +512,23 @@ extern fn subkernel_send_message(id: u32, is_return: bool, destination: u8,

extern "C-unwind" fn subkernel_await_message(id: i32, timeout: i64, tags: &CSlice<u8>, min: u8, max: u8) -> u8 {
    send(&SubkernelMsgRecvRequest { id: id, timeout: timeout, tags: tags.as_ref() });
    recv(move |request| {
        if let SubkernelMsgRecvReply { count } = request {
            if count < &min || count > &max {
                raise!("SubkernelError",
                    "Received less or more arguments than expected");
    recv!(SubkernelMsgRecvReply { status, count } => {
        match status {
            SubkernelStatus::NoError => {
                if count < &min || count > &max {
                    raise!("SubkernelError",
                        "Received less or more arguments than expected");
                }
                *count
            }
            *count
        } else if let SubkernelError(status) = request {
            match status {
                SubkernelStatus::IncorrectState => raise!("SubkernelError",
                    "Subkernel not running"),
                SubkernelStatus::Timeout => raise!("SubkernelError",
                    "Subkernel timed out"),
                SubkernelStatus::CommLost => raise!("SubkernelError",
                    "Lost communication with satellite"),
                SubkernelStatus::OtherError => raise!("SubkernelError",
                    "An error occurred during subkernel operation"),
                SubkernelStatus::Exception(e) => unsafe { crate::eh_artiq::raise(e) },
            }
        } else {
            send(&Log(format_args!("unexpected reply: {:?}\n", request)));
            loop {}
            SubkernelStatus::IncorrectState => raise!("SubkernelError",
                "Subkernel not running"),
            SubkernelStatus::Timeout => raise!("SubkernelError",
                "Subkernel timed out"),
            SubkernelStatus::CommLost => raise!("SubkernelError",
                "Lost communication with satellite"),
            SubkernelStatus::OtherError => raise!("SubkernelError",
                "An error occurred during subkernel operation")
        }
    })
    // RpcRecvRequest should be called `count` times after this to receive message data
|
|
|
@@ -114,6 +114,16 @@ pub unsafe fn write(mut addr: usize, mut data: &[u8]) {
    }
}

pub unsafe fn flash_binary(origin: usize, payload: &[u8]) {
    assert!((origin & (SECTOR_SIZE - 1)) == 0);
    let mut offset = 0;
    while offset < payload.len() {
        erase_sector(origin + offset);
        offset += SECTOR_SIZE;
    }
    write(origin, payload);
}

#[cfg(any(soc_platform = "kasli", soc_platform = "kc705"))]
pub unsafe fn reload () -> ! {
    csr::icap::iprog_write(1);
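
A usage sketch for the new flash_binary helper (hypothetical call site; mem::FLASH_BOOT_ADDRESS is the firmware partition origin used elsewhere in this branch): erase every sector covering the payload, then program it in one call.

    // Hypothetical caller; flash_binary asserts the origin is sector-aligned.
    unsafe fn reflash_firmware(firmware_payload: &[u8]) {
        spiflash::flash_binary(mem::FLASH_BOOT_ADDRESS, firmware_payload);
    }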
|
|
|
@@ -123,10 +123,28 @@ pub enum Packet {
    SubkernelLoadRunRequest { source: u8, destination: u8, id: u32, run: bool },
    SubkernelLoadRunReply { destination: u8, succeeded: bool },
    SubkernelFinished { destination: u8, id: u32, with_exception: bool, exception_src: u8 },
    SubkernelExceptionRequest { source: u8, destination: u8 },
    SubkernelException { destination: u8, last: bool, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    SubkernelExceptionRequest { destination: u8 },
    SubkernelException { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
    SubkernelMessage { source: u8, destination: u8, id: u32, status: PayloadStatus, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    SubkernelMessageAck { destination: u8 },

    CoreMgmtGetLogRequest { destination: u8, clear: bool },
    CoreMgmtClearLogRequest { destination: u8 },
    CoreMgmtSetLogLevelRequest { destination: u8, log_level: u8 },
    CoreMgmtSetUartLogLevelRequest { destination: u8, log_level: u8 },
    CoreMgmtConfigReadRequest { destination: u8, length: u16, key: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    CoreMgmtConfigReadContinue { destination: u8 },
    CoreMgmtConfigWriteRequest { destination: u8, last: bool, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    CoreMgmtConfigRemoveRequest { destination: u8, length: u16, key: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    CoreMgmtConfigEraseRequest { destination: u8 },
    CoreMgmtRebootRequest { destination: u8 },
    CoreMgmtAllocatorDebugRequest { destination: u8 },
    CoreMgmtFlashRequest { destination: u8, last: bool, length: u16, data: [u8; MASTER_PAYLOAD_MAX_SIZE] },
    CoreMgmtDropLinkAck { destination: u8 },
    CoreMgmtDropLink,
    CoreMgmtGetLogReply { last: bool, length: u16, data: [u8; SAT_PAYLOAD_MAX_SIZE] },
    CoreMgmtConfigReadReply { last: bool, length: u16, value: [u8; SAT_PAYLOAD_MAX_SIZE] },
    CoreMgmtReply { succeeded: bool },
}

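The new CoreMgmt* variants take aux packet type codes 0xd0 through 0xe0; the read_from/write_to arms added later in this diff spell out the byte layout. A small illustration of that layout (the helper functions here are illustrative, not part of the firmware; booleans are assumed to be the usual one-byte 0/1 encoding of write_bool/read_bool):

    // CoreMgmtGetLogRequest { destination, clear } -> [0xd0, destination, clear as u8]
    fn encode_get_log_request(destination: u8, clear: bool) -> [u8; 3] {
        [0xd0, destination, clear as u8]
    }

    // CoreMgmtReply { succeeded } -> [0xe0, succeeded as u8]
    fn decode_reply(frame: &[u8]) -> Option<bool> {
        match frame {
            [0xe0, succeeded] => Some(*succeeded != 0),
            _ => None,
        }
    }
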
||||
impl Packet {
|
||||
|
@ -367,17 +385,14 @@ impl Packet {
|
|||
exception_src: reader.read_u8()?
|
||||
},
|
||||
0xc9 => Packet::SubkernelExceptionRequest {
|
||||
source: reader.read_u8()?,
|
||||
destination: reader.read_u8()?
|
||||
},
|
||||
0xca => {
|
||||
let destination = reader.read_u8()?;
|
||||
let last = reader.read_bool()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut data[0..length as usize])?;
|
||||
Packet::SubkernelException {
|
||||
destination: destination,
|
||||
last: last,
|
||||
length: length,
|
||||
data: data
|
||||
|
@ -404,6 +419,111 @@ impl Packet {
|
|||
destination: reader.read_u8()?
|
||||
},
|
||||
|
||||
0xd0 => Packet::CoreMgmtGetLogRequest {
|
||||
destination: reader.read_u8()?,
|
||||
clear: reader.read_bool()?,
|
||||
},
|
||||
0xd1 => Packet::CoreMgmtClearLogRequest {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xd2 => Packet::CoreMgmtSetLogLevelRequest {
|
||||
destination: reader.read_u8()?,
|
||||
log_level: reader.read_u8()?,
|
||||
},
|
||||
0xd3 => Packet::CoreMgmtSetUartLogLevelRequest {
|
||||
destination: reader.read_u8()?,
|
||||
log_level: reader.read_u8()?,
|
||||
},
|
||||
0xd4 => {
|
||||
let destination = reader.read_u8()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut key: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut key[0..length as usize])?;
|
||||
Packet::CoreMgmtConfigReadRequest {
|
||||
destination: destination,
|
||||
length: length,
|
||||
key: key,
|
||||
}
|
||||
},
|
||||
0xd5 => Packet::CoreMgmtConfigReadContinue {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xd6 => {
|
||||
let destination = reader.read_u8()?;
|
||||
let last = reader.read_bool()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut data[0..length as usize])?;
|
||||
Packet::CoreMgmtConfigWriteRequest {
|
||||
destination: destination,
|
||||
last: last,
|
||||
length: length,
|
||||
data: data,
|
||||
}
|
||||
},
|
||||
0xd7 => {
|
||||
let destination = reader.read_u8()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut key: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut key[0..length as usize])?;
|
||||
Packet::CoreMgmtConfigRemoveRequest {
|
||||
destination: destination,
|
||||
length: length,
|
||||
key: key,
|
||||
}
|
||||
},
|
||||
0xd8 => Packet::CoreMgmtConfigEraseRequest {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xd9 => Packet::CoreMgmtRebootRequest {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xda => Packet::CoreMgmtAllocatorDebugRequest {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xdb => {
|
||||
let destination = reader.read_u8()?;
|
||||
let last = reader.read_bool()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut data: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut data[0..length as usize])?;
|
||||
Packet::CoreMgmtFlashRequest {
|
||||
destination: destination,
|
||||
last: last,
|
||||
length: length,
|
||||
data: data,
|
||||
}
|
||||
},
|
||||
0xdc => Packet::CoreMgmtDropLinkAck {
|
||||
destination: reader.read_u8()?,
|
||||
},
|
||||
0xdd => Packet::CoreMgmtDropLink,
|
||||
0xde => {
|
||||
let last = reader.read_bool()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut data: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut data[0..length as usize])?;
|
||||
Packet::CoreMgmtGetLogReply {
|
||||
last: last,
|
||||
length: length,
|
||||
data: data,
|
||||
}
|
||||
},
|
||||
0xdf => {
|
||||
let last = reader.read_bool()?;
|
||||
let length = reader.read_u16()?;
|
||||
let mut value: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
reader.read_exact(&mut value[0..length as usize])?;
|
||||
Packet::CoreMgmtConfigReadReply {
|
||||
last: last,
|
||||
length: length,
|
||||
value: value,
|
||||
}
|
||||
},
|
||||
0xe0 => Packet::CoreMgmtReply {
|
||||
succeeded: reader.read_bool()?,
|
||||
},
|
||||
|
||||
ty => return Err(Error::UnknownPacket(ty))
|
||||
})
|
||||
}
|
||||
|
@ -666,14 +786,12 @@ impl Packet {
|
|||
writer.write_bool(with_exception)?;
|
||||
writer.write_u8(exception_src)?;
|
||||
},
|
||||
Packet::SubkernelExceptionRequest { source, destination } => {
|
||||
Packet::SubkernelExceptionRequest { destination } => {
|
||||
writer.write_u8(0xc9)?;
|
||||
writer.write_u8(source)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::SubkernelException { destination, last, length, data } => {
|
||||
Packet::SubkernelException { last, length, data } => {
|
||||
writer.write_u8(0xca)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_bool(last)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&data[0..length as usize])?;
|
||||
|
@ -691,6 +809,103 @@ impl Packet {
|
|||
writer.write_u8(0xcc)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
|
||||
Packet::CoreMgmtGetLogRequest { destination, clear } => {
|
||||
writer.write_u8(0xd0)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_bool(clear)?;
|
||||
},
|
||||
Packet::CoreMgmtClearLogRequest { destination } => {
|
||||
writer.write_u8(0xd1)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtSetLogLevelRequest { destination, log_level } => {
|
||||
writer.write_u8(0xd2)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_u8(log_level)?;
|
||||
},
|
||||
Packet::CoreMgmtSetUartLogLevelRequest { destination, log_level } => {
|
||||
writer.write_u8(0xd3)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_u8(log_level)?;
|
||||
},
|
||||
Packet::CoreMgmtConfigReadRequest {
|
||||
destination,
|
||||
length,
|
||||
key,
|
||||
} => {
|
||||
writer.write_u8(0xd4)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&key[0..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtConfigReadContinue { destination } => {
|
||||
writer.write_u8(0xd5)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtConfigWriteRequest {
|
||||
destination,
|
||||
last,
|
||||
length,
|
||||
data,
|
||||
} => {
|
||||
writer.write_u8(0xd6)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_bool(last)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&data[0..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtConfigRemoveRequest {
|
||||
destination,
|
||||
length,
|
||||
key,
|
||||
} => {
|
||||
writer.write_u8(0xd7)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&key[0..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtConfigEraseRequest { destination } => {
|
||||
writer.write_u8(0xd8)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtRebootRequest { destination } => {
|
||||
writer.write_u8(0xd9)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtAllocatorDebugRequest { destination } => {
|
||||
writer.write_u8(0xda)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtFlashRequest { destination, last, length, data } => {
|
||||
writer.write_u8(0xdb)?;
|
||||
writer.write_u8(destination)?;
|
||||
writer.write_bool(last)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&data[..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtDropLinkAck { destination } => {
|
||||
writer.write_u8(0xdc)?;
|
||||
writer.write_u8(destination)?;
|
||||
},
|
||||
Packet::CoreMgmtDropLink =>
|
||||
writer.write_u8(0xdd)?,
|
||||
Packet::CoreMgmtGetLogReply { last, length, data } => {
|
||||
writer.write_u8(0xde)?;
|
||||
writer.write_bool(last)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&data[0..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtConfigReadReply { last, length, value } => {
|
||||
writer.write_u8(0xdf)?;
|
||||
writer.write_bool(last)?;
|
||||
writer.write_u16(length)?;
|
||||
writer.write_all(&value[0..length as usize])?;
|
||||
},
|
||||
Packet::CoreMgmtReply { succeeded } => {
|
||||
writer.write_u8(0xe0)?;
|
||||
writer.write_bool(succeeded)?;
|
||||
},
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
@ -698,20 +913,18 @@ impl Packet {
|
|||
pub fn routable_destination(&self) -> Option<u8> {
|
||||
// only for packets that could be re-routed, not only forwarded
|
||||
match self {
|
||||
Packet::DmaAddTraceRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaAddTraceReply { destination, .. } => Some(*destination),
|
||||
Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackReply { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelMessage { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelMessageAck { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelExceptionRequest { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelException { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackStatus { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelFinished { destination, .. } => Some(*destination),
|
||||
Packet::DmaAddTraceRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaAddTraceReply { destination, .. } => Some(*destination),
|
||||
Packet::DmaRemoveTraceRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaRemoveTraceReply { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackRequest { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackReply { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelLoadRunRequest { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelLoadRunReply { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelMessage { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelMessageAck { destination, .. } => Some(*destination),
|
||||
Packet::DmaPlaybackStatus { destination, .. } => Some(*destination),
|
||||
Packet::SubkernelFinished { destination, .. } => Some(*destination),
|
||||
_ => None
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -11,12 +11,12 @@ pub const KERNELCPU_LAST_ADDRESS: usize = 0x4fffffff;
pub const KSUPPORT_HEADER_SIZE: usize = 0x74;

#[derive(Debug)]
pub enum SubkernelStatus<'a> {
pub enum SubkernelStatus {
    NoError,
    Timeout,
    IncorrectState,
    CommLost,
    Exception(eh::eh_artiq::Exception<'a>),
    OtherError,
    OtherError
}

#[derive(Debug)]
|
||||
|
@@ -106,11 +106,10 @@ pub enum Message<'a> {
    SubkernelLoadRunRequest { id: u32, destination: u8, run: bool },
    SubkernelLoadRunReply { succeeded: bool },
    SubkernelAwaitFinishRequest { id: u32, timeout: i64 },
    SubkernelAwaitFinishReply,
    SubkernelAwaitFinishReply { status: SubkernelStatus },
    SubkernelMsgSend { id: u32, destination: Option<u8>, count: u8, tag: &'a [u8], data: *const *const () },
    SubkernelMsgRecvRequest { id: i32, timeout: i64, tags: &'a [u8] },
    SubkernelMsgRecvReply { count: u8 },
    SubkernelError(SubkernelStatus<'a>),
    SubkernelMsgRecvReply { status: SubkernelStatus, count: u8 },

    Log(fmt::Arguments<'a>),
    LogSlice(&'a str)
|
|
|
@@ -16,7 +16,9 @@ pub enum Error<T> {
    #[fail(display = "invalid UTF-8: {}", _0)]
    Utf8(Utf8Error),
    #[fail(display = "{}", _0)]
    Io(#[cause] IoError<T>)
    Io(#[cause] IoError<T>),
    #[fail(display = "drtio error")]
    DrtioError,
}

impl<T> From<IoError<T>> for Error<T> {

@@ -65,6 +67,8 @@ pub enum Request {

    Reboot,

    Flash { image: Vec<u8> },

    DebugAllocator,
}

@@ -123,6 +127,10 @@ impl Request {

        8 => Request::DebugAllocator,

        9 => Request::Flash {
            image: reader.read_bytes()?,
        },

        ty => return Err(Error::UnknownPacket(ty))
    })
}
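
With the new variant, a management client can push a full flash image over the management channel: type byte 9 followed by the image as a byte string. A hedged sketch of the client-side framing (assuming read_bytes() is the usual length-prefixed helper of the io protocol layer; the prefix width and endianness below are assumptions, not confirmed by this diff):

    // Hypothetical client-side framing for Request::Flash { image }.
    fn encode_flash_request(image: &[u8]) -> Vec<u8> {
        let mut frame = Vec::with_capacity(1 + 4 + image.len());
        frame.push(9);                                                // Request::Flash discriminant
        frame.extend_from_slice(&(image.len() as u32).to_be_bytes()); // assumed u32 length prefix
        frame.extend_from_slice(image);                               // image payload
        frame
    }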
|
|
|
@ -16,6 +16,7 @@ build_misoc = { path = "../libbuild_misoc" }
|
|||
failure = { version = "0.1", default-features = false }
|
||||
failure_derive = { version = "0.1", default-features = false }
|
||||
byteorder = { version = "1.0", default-features = false }
|
||||
crc = { version = "1.7", default-features = false }
|
||||
cslice = { version = "0.3" }
|
||||
log = { version = "=0.4.14", default-features = false }
|
||||
managed = { version = "^0.7.1", default-features = false, features = ["alloc", "map"] }
|
||||
|
|
|
@ -95,9 +95,7 @@ pub mod subkernel {
|
|||
use board_artiq::drtio_routing::RoutingTable;
|
||||
use board_misoc::clock;
|
||||
use proto_artiq::{drtioaux_proto::{PayloadStatus, MASTER_PAYLOAD_MAX_SIZE}, rpc_proto as rpc};
|
||||
use io::{Cursor, ProtoRead};
|
||||
use eh::eh_artiq::Exception;
|
||||
use cslice::CSlice;
|
||||
use io::Cursor;
|
||||
use rtio_mgt::drtio;
|
||||
use sched::{Io, Mutex, Error as SchedError};
|
||||
|
||||
|
@ -228,8 +226,8 @@ pub mod subkernel {
|
|||
if subkernel.state == SubkernelState::Running {
|
||||
subkernel.state = SubkernelState::Finished {
|
||||
status: match with_exception {
|
||||
true => FinishStatus::Exception(exception_src),
|
||||
false => FinishStatus::Ok,
|
||||
true => FinishStatus::Exception(exception_src),
|
||||
false => FinishStatus::Ok,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -258,49 +256,6 @@ pub mod subkernel {
|
|||
}
|
||||
}
|
||||
|
||||
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
|
||||
let len = reader.read_u32()? as usize;
|
||||
if len == usize::MAX {
|
||||
let data = reader.read_u32()?;
|
||||
Ok(unsafe { CSlice::new(data as *const u8, len) })
|
||||
} else {
|
||||
let pos = reader.position();
|
||||
let slice = unsafe {
|
||||
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
|
||||
CSlice::new(ptr, len)
|
||||
};
|
||||
reader.set_position(pos + len);
|
||||
Ok(slice)
|
||||
}
|
||||
}
|
||||
|
||||
pub fn read_exception(buffer: &[u8]) -> Result<Exception, Error>
|
||||
{
|
||||
let mut reader = Cursor::new(buffer);
|
||||
|
||||
let mut byte = reader.read_u8()?;
|
||||
// to sync
|
||||
while byte != 0x5a {
|
||||
byte = reader.read_u8()?;
|
||||
}
|
||||
// skip sync bytes, 0x09 indicates exception
|
||||
while byte != 0x09 {
|
||||
byte = reader.read_u8()?;
|
||||
}
|
||||
let _len = reader.read_u32()?;
|
||||
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
|
||||
Ok(Exception {
|
||||
id: reader.read_u32()?,
|
||||
message: read_exception_string(&mut reader)?,
|
||||
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
|
||||
file: read_exception_string(&mut reader)?,
|
||||
line: reader.read_u32()?,
|
||||
column: reader.read_u32()?,
|
||||
function: read_exception_string(&mut reader)?
|
||||
})
|
||||
}
|
||||
|
||||
|
||||
pub fn retrieve_finish_status(io: &Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, id: u32) -> Result<SubkernelFinished, Error> {
|
||||
let _lock = subkernel_mutex.lock(io)?;
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
#![feature(lang_items, panic_info_message, const_btree_new, iter_advance_by, never_type)]
|
||||
#![no_std]
|
||||
|
||||
extern crate crc;
|
||||
extern crate dyld;
|
||||
extern crate eh;
|
||||
#[macro_use]
|
||||
|
@@ -206,7 +207,13 @@ fn startup() {

    rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations, &ddma_mutex, &subkernel_mutex);

    io.spawn(4096, mgmt::thread);
    {
        let aux_mutex = aux_mutex.clone();
        let ddma_mutex = ddma_mutex.clone();
        let subkernel_mutex = subkernel_mutex.clone();
        let drtio_routing_table = drtio_routing_table.clone();
        io.spawn(4096, move |io| { mgmt::thread(io, &aux_mutex, &ddma_mutex, &subkernel_mutex, &drtio_routing_table) });
    }
    {
        let aux_mutex = aux_mutex.clone();
        let drtio_routing_table = drtio_routing_table.clone();

|
|
@ -1,10 +1,10 @@
|
|||
use log::{self, LevelFilter};
|
||||
use core::cell::RefCell;
|
||||
|
||||
use io::{Write, ProtoWrite, Error as IoError};
|
||||
use board_misoc::{config, spiflash};
|
||||
use logger_artiq::BufferLogger;
|
||||
use board_artiq::drtio_routing::RoutingTable;
|
||||
use io::{ProtoRead, Write, Error as IoError};
|
||||
use mgmt_proto::*;
|
||||
use sched::{Io, TcpListener, TcpStream, Error as SchedError};
|
||||
use sched::{Io, Mutex, TcpListener, TcpStream, Error as SchedError};
|
||||
use urc::Urc;
|
||||
|
||||
impl From<SchedError> for Error<SchedError> {
|
||||
fn from(value: SchedError) -> Error<SchedError> {
|
||||
|
@ -12,121 +12,614 @@ impl From<SchedError> for Error<SchedError> {
|
|||
}
|
||||
}
|
||||
|
||||
fn worker(io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
mod local_coremgmt {
|
||||
use alloc::{string::String, vec::Vec};
|
||||
use byteorder::{ByteOrder, NativeEndian};
|
||||
use crc::crc32;
|
||||
use log::LevelFilter;
|
||||
|
||||
use board_misoc::{config, mem, spiflash};
|
||||
use io::ProtoWrite;
|
||||
use logger_artiq::BufferLogger;
|
||||
|
||||
use super::*;
|
||||
|
||||
|
||||
pub fn get_log(io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
BufferLogger::with(|logger| {
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
Reply::LogContent(buffer.extract()).write_to(stream)
|
||||
})?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn clear_log(io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
BufferLogger::with(|logger| -> Result<(), IoError<SchedError>> {
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
Ok(buffer.clear())
|
||||
})?;
|
||||
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn pull_log(io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
BufferLogger::with(|logger| -> Result<(), IoError<SchedError>> {
|
||||
loop {
|
||||
// Do this *before* acquiring the buffer, since that sets the log level
|
||||
// to OFF.
|
||||
let log_level = log::max_level();
|
||||
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
if buffer.is_empty() { continue }
|
||||
|
||||
stream.write_string(buffer.extract())?;
|
||||
|
||||
if log_level == LevelFilter::Trace {
|
||||
// Hold exclusive access over the logger until we get positive
|
||||
// acknowledgement; otherwise we get an infinite loop of network
|
||||
// trace messages being transmitted and causing more network
|
||||
// trace messages to be emitted.
|
||||
//
|
||||
// Any messages unrelated to this management socket that arrive
|
||||
// while it is flushed are lost, but such is life.
|
||||
stream.flush()?;
|
||||
}
|
||||
|
||||
// Clear the log *after* flushing the network buffers, or we're just
|
||||
// going to resend all the trace messages on the next iteration.
|
||||
buffer.clear();
|
||||
}
|
||||
})?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn set_log_filter(_io: &Io, stream: &mut TcpStream, level: LevelFilter) -> Result<(), Error<SchedError>> {
|
||||
info!("changing log level to {}", level);
|
||||
log::set_max_level(level);
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn set_uart_log_filter(_io: &Io, stream: &mut TcpStream, level: LevelFilter) -> Result<(), Error<SchedError>> {
|
||||
info!("changing UART log level to {}", level);
|
||||
BufferLogger::with(|logger|
|
||||
logger.set_uart_log_level(level));
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn config_read(_io: &Io, stream: &mut TcpStream, key: &String) -> Result<(), Error<SchedError>>{
|
||||
config::read(key, |result| {
|
||||
match result {
|
||||
Ok(value) => Reply::ConfigData(&value).write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}
|
||||
})?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn config_write(_io: &Io, stream: &mut TcpStream, key: &String, value: &Vec<u8>) -> Result<(), Error<SchedError>> {
|
||||
match config::write(key, value) {
|
||||
Ok(_) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn config_remove(_io: &Io, stream: &mut TcpStream, key: &String) -> Result<(), Error<SchedError>> {
|
||||
match config::remove(key) {
|
||||
Ok(()) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn config_erase(_io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
match config::erase() {
|
||||
Ok(()) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn reboot(_io: &Io, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
Reply::RebootImminent.write_to(stream)?;
|
||||
stream.close()?;
|
||||
stream.flush()?;
|
||||
|
||||
warn!("restarting");
|
||||
unsafe { spiflash::reload(); }
|
||||
}
|
||||
|
||||
pub fn debug_allocator(_io: &Io, _stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
unsafe { println!("{}", ::ALLOC) }
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub fn flash(_io: &Io, stream: &mut TcpStream, image: &[u8]) -> Result<(), Error<SchedError>> {
|
||||
let (expected_crc, mut image) = {
|
||||
let (image, crc_slice) = image.split_at(image.len() - 4);
|
||||
(NativeEndian::read_u32(crc_slice), image)
|
||||
};
|
||||
|
||||
let actual_crc = crc32::checksum_ieee(image);
|
||||
|
||||
if actual_crc == expected_crc {
|
||||
let bin_origins = [
|
||||
("gateware" , 0 ),
|
||||
("bootloader", mem::ROM_BASE ),
|
||||
("firmware" , mem::FLASH_BOOT_ADDRESS),
|
||||
];
|
||||
|
||||
for (name, origin) in bin_origins {
|
||||
info!("Flashing {} binary...", name);
|
||||
let size = NativeEndian::read_u32(&image[..4]) as usize;
|
||||
image = &image[4..];
|
||||
|
||||
let (bin, remaining) = image.split_at(size);
|
||||
image = remaining;
|
||||
|
||||
unsafe { spiflash::flash_binary(origin, bin) };
|
||||
}
|
||||
|
||||
reboot(_io, stream)?;
|
||||
} else {
|
||||
error!("CRC failed in SDRAM (actual {:08x}, expected {:08x})", actual_crc, expected_crc);
|
||||
Reply::Error.write_to(stream)?;
|
||||
}
|
||||
Ok(())
|
||||
}
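
The flash handler above implies a container format for the uploaded image: three binaries (gateware, bootloader, firmware), each prefixed with a native-endian u32 length, followed by a CRC32 (checksum_ieee) over everything before it. A hypothetical host-side packing routine matching that layout (the function name and host context are not part of the firmware):

    // Pack [len|gateware][len|bootloader][len|firmware][crc32], native endian,
    // mirroring what local_coremgmt::flash() unpacks and verifies.
    fn pack_flash_image(gateware: &[u8], bootloader: &[u8], firmware: &[u8]) -> Vec<u8> {
        let mut image = Vec::new();
        for bin in [gateware, bootloader, firmware] {
            image.extend_from_slice(&(bin.len() as u32).to_ne_bytes());
            image.extend_from_slice(bin);
        }
        let crc = crc::crc32::checksum_ieee(&image);
        image.extend_from_slice(&crc.to_ne_bytes());
        image
    }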
|
||||
}
|
||||
|
||||
#[cfg(has_drtio)]
|
||||
mod remote_coremgmt {
|
||||
use alloc::{string::String, vec::Vec};
|
||||
use log::LevelFilter;
|
||||
|
||||
use board_artiq::{drtioaux, drtioaux::Packet};
|
||||
use io::ProtoWrite;
|
||||
use rtio_mgt::drtio;
|
||||
use proto_artiq::drtioaux_proto::MASTER_PAYLOAD_MAX_SIZE;
|
||||
|
||||
use super::*;
|
||||
|
||||
impl From<drtio::Error> for Error<SchedError> {
|
||||
fn from(_value: drtio::Error) -> Error<SchedError> {
|
||||
Error::DrtioError
|
||||
}
|
||||
}
|
||||
|
||||
pub fn get_log(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
let mut buffer = String::new();
|
||||
loop {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtGetLogRequest { destination, clear: false }
|
||||
);
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtGetLogReply { last, length, data }) => {
|
||||
buffer.push_str(
|
||||
core::str::from_utf8(&data[..length as usize]).map_err(|_| Error::DrtioError)?);
|
||||
if last {
|
||||
Reply::LogContent(&buffer).write_to(stream)?;
|
||||
return Ok(());
|
||||
}
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
return Err(drtio::Error::UnexpectedReply.into());
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
return Err(e.into());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn clear_log(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtClearLogRequest { destination }
|
||||
);
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn pull_log(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
loop {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtGetLogRequest { destination, clear: true }
|
||||
);
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtGetLogReply { last: _, length, data }) => {
|
||||
stream.write_bytes(&data[..length as usize])?;
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
return Err(drtio::Error::UnexpectedReply.into());
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
return Err(e.into());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn set_log_filter(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, level: LevelFilter) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtSetLogLevelRequest { destination, log_level: level as u8 }
|
||||
);
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn set_uart_log_filter(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, level: LevelFilter) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtSetUartLogLevelRequest { destination, log_level: level as u8 }
|
||||
);
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn config_read(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, key: &String) -> Result<(), Error<SchedError>> {
|
||||
let mut config_key: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
let len = key.len();
|
||||
config_key[..len].clone_from_slice(key.as_bytes());
|
||||
|
||||
let mut reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtConfigReadRequest {
|
||||
destination: destination,
|
||||
length: len as u16,
|
||||
key: config_key,
|
||||
}
|
||||
);
|
||||
|
||||
let mut buffer = Vec::<u8>::new();
|
||||
loop {
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtConfigReadReply { length, last, value }) => {
|
||||
buffer.extend(&value[..length as usize]);
|
||||
|
||||
if last {
|
||||
Reply::ConfigData(&buffer).write_to(stream)?;
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtConfigReadContinue {
|
||||
destination: destination,
|
||||
}
|
||||
);
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
return Err(drtio::Error::UnexpectedReply.into());
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
return Err(e.into());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn config_write(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, key: &String, value: &Vec<u8>) -> Result<(), Error<SchedError>> {
|
||||
let mut message = Vec::with_capacity(key.len() + value.len() + 4 * 2);
|
||||
message.write_string(key).unwrap();
|
||||
message.write_bytes(value).unwrap();
|
||||
|
||||
match drtio::partition_data(&message, |slice, status, len: usize| {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtConfigWriteRequest {
|
||||
destination: destination, length: len as u16, last: status.is_last(), data: *slice});
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => Ok(()),
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Err(drtio::Error::UnexpectedReply)
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Err(e)
|
||||
}
|
||||
}
|
||||
}) {
|
||||
Ok(()) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
},
|
||||
Err(e) => {
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
pub fn config_remove(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, key: &String) -> Result<(), Error<SchedError>> {
|
||||
let mut config_key: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
let len = key.len();
|
||||
config_key[..len].clone_from_slice(key.as_bytes());
|
||||
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtConfigRemoveRequest {
|
||||
destination: destination,
|
||||
length: key.len() as u16,
|
||||
key: config_key,
|
||||
});
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn config_erase(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtConfigEraseRequest {
|
||||
destination: destination,
|
||||
});
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::Success.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn reboot(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtRebootRequest {
|
||||
destination: destination,
|
||||
});
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => {
|
||||
Reply::RebootImminent.write_to(stream)?;
|
||||
Ok(())
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn debug_allocator(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, _stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtAllocatorDebugRequest {
|
||||
destination: destination,
|
||||
});
|
||||
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => Ok(()),
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Err(drtio::Error::UnexpectedReply.into())
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Err(e.into())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn flash(io: &Io, aux_mutex: &Mutex,
|
||||
ddma_mutex: &Mutex, subkernel_mutex: &Mutex,
|
||||
routing_table: &RoutingTable, linkno: u8,
|
||||
destination: u8, stream: &mut TcpStream, image: &[u8]) -> Result<(), Error<SchedError>> {
|
||||
|
||||
match drtio::partition_data(&image, |slice, status, len: usize| {
|
||||
let reply = drtio::aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&Packet::CoreMgmtFlashRequest {
|
||||
destination: destination, length: len as u16, last: status.is_last(), data: *slice});
|
||||
match reply {
|
||||
Ok(Packet::CoreMgmtReply { succeeded: true }) => Ok(()),
|
||||
Ok(Packet::CoreMgmtDropLink) => {
|
||||
if status.is_last() {
|
||||
drtioaux::send(
|
||||
linkno, &Packet::CoreMgmtDropLinkAck { destination: destination }
|
||||
).map_err(|_| drtio::Error::AuxError)
|
||||
} else {
|
||||
error!("received unexpected drop link packet");
|
||||
Err(drtio::Error::UnexpectedReply)
|
||||
}
|
||||
}
|
||||
Ok(packet) => {
|
||||
error!("received unexpected aux packet: {:?}", packet);
|
||||
Err(drtio::Error::UnexpectedReply)
|
||||
}
|
||||
Err(e) => {
|
||||
error!("aux packet error ({})", e);
|
||||
Err(e)
|
||||
}
|
||||
}
|
||||
}) {
|
||||
Ok(()) => {
|
||||
Reply::RebootImminent.write_to(stream)?;
|
||||
Ok(())
|
||||
},
|
||||
Err(e) => {
|
||||
Reply::Error.write_to(stream)?;
|
||||
Err(e.into())
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(has_drtio)]
macro_rules! process {
    ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $routing_table:ident, $tcp_stream:ident, $destination: ident, $func:ident $(, $param:expr)*) => {{
        let hop = $routing_table.0[$destination as usize][0];
        if hop == 0 {
            local_coremgmt::$func($io, $tcp_stream, $($param, )*)
        } else {
            let linkno = hop - 1;
            remote_coremgmt::$func($io, $aux_mutex, $ddma_mutex, $subkernel_mutex, $routing_table, linkno, $destination, $tcp_stream, $($param, )*)
        }
    }}
}

#[cfg(not(has_drtio))]
macro_rules! process {
    ($io:ident, $aux_mutex:ident, $ddma_mutex:ident, $subkernel_mutex:ident, $routing_table:ident, $tcp_stream:ident, $_destination: ident, $func:ident $(, $param:expr)*) => {{
        local_coremgmt::$func($io, $tcp_stream, $($param, )*)
    }}
}
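
Both process! definitions encode the same dispatch rule: look up the first hop for the requested destination in the routing table; hop 0 means the request is served locally, any other value selects DRTIO aux link hop - 1 and forwards to remote_coremgmt. A minimal restatement of that rule (illustrative helper, not firmware code):

    // None -> handle locally; Some(linkno) -> forward over that DRTIO aux link.
    fn dispatch_target(routing_table: &RoutingTable, destination: u8) -> Option<u8> {
        let hop = routing_table.0[destination as usize][0];
        if hop == 0 {
            None
        } else {
            Some(hop - 1)
        }
    }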
|
||||
|
||||
fn worker(io: &Io, _aux_mutex: &Mutex, _ddma_mutex: &Mutex, _subkernel_mutex: &Mutex,
|
||||
_routing_table: &RoutingTable, stream: &mut TcpStream) -> Result<(), Error<SchedError>> {
|
||||
read_magic(stream)?;
|
||||
let _destination = stream.read_u8()?;
|
||||
Write::write_all(stream, "e".as_bytes())?;
|
||||
info!("new connection from {}", stream.remote_endpoint());
|
||||
|
||||
loop {
|
||||
match Request::read_from(stream)? {
|
||||
Request::GetLog => {
|
||||
BufferLogger::with(|logger| {
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
Reply::LogContent(buffer.extract()).write_to(stream)
|
||||
})?;
|
||||
}
|
||||
Request::ClearLog => {
|
||||
BufferLogger::with(|logger| -> Result<(), Error<SchedError>> {
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
Ok(buffer.clear())
|
||||
})?;
|
||||
|
||||
Reply::Success.write_to(stream)?;
|
||||
}
|
||||
Request::PullLog => {
|
||||
BufferLogger::with(|logger| -> Result<(), Error<SchedError>> {
|
||||
loop {
|
||||
// Do this *before* acquiring the buffer, since that sets the log level
|
||||
// to OFF.
|
||||
let log_level = log::max_level();
|
||||
|
||||
let mut buffer = io.until_ok(|| logger.buffer())?;
|
||||
if buffer.is_empty() { continue }
|
||||
|
||||
stream.write_string(buffer.extract())?;
|
||||
|
||||
if log_level == LevelFilter::Trace {
|
||||
// Hold exclusive access over the logger until we get positive
|
||||
// acknowledgement; otherwise we get an infinite loop of network
|
||||
// trace messages being transmitted and causing more network
|
||||
// trace messages to be emitted.
|
||||
//
|
||||
// Any messages unrelated to this management socket that arrive
|
||||
// while it is flushed are lost, but such is life.
|
||||
stream.flush()?;
|
||||
}
|
||||
|
||||
// Clear the log *after* flushing the network buffers, or we're just
|
||||
// going to resend all the trace messages on the next iteration.
|
||||
buffer.clear();
|
||||
}
|
||||
})?;
|
||||
}
|
||||
Request::SetLogFilter(level) => {
|
||||
info!("changing log level to {}", level);
|
||||
log::set_max_level(level);
|
||||
Reply::Success.write_to(stream)?;
|
||||
}
|
||||
Request::SetUartLogFilter(level) => {
|
||||
info!("changing UART log level to {}", level);
|
||||
BufferLogger::with(|logger|
|
||||
logger.set_uart_log_level(level));
|
||||
Reply::Success.write_to(stream)?;
|
||||
}
|
||||
|
||||
Request::ConfigRead { ref key } => {
|
||||
config::read(key, |result| {
|
||||
match result {
|
||||
Ok(value) => Reply::ConfigData(&value).write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}
|
||||
})?;
|
||||
}
|
||||
Request::ConfigWrite { ref key, ref value } => {
|
||||
match config::write(key, value) {
|
||||
Ok(_) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
}
|
||||
Request::ConfigRemove { ref key } => {
|
||||
match config::remove(key) {
|
||||
Ok(()) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
|
||||
}
|
||||
Request::ConfigErase => {
|
||||
match config::erase() {
|
||||
Ok(()) => Reply::Success.write_to(stream),
|
||||
Err(_) => Reply::Error.write_to(stream)
|
||||
}?;
|
||||
}
|
||||
|
||||
Request::Reboot => {
|
||||
Reply::RebootImminent.write_to(stream)?;
|
||||
stream.close()?;
|
||||
stream.flush()?;
|
||||
|
||||
warn!("restarting");
|
||||
unsafe { spiflash::reload(); }
|
||||
}
|
||||
|
||||
Request::DebugAllocator =>
|
||||
unsafe { println!("{}", ::ALLOC) },
|
||||
};
|
||||
Request::GetLog => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, get_log),
|
||||
Request::ClearLog => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, clear_log),
|
||||
Request::PullLog => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, pull_log),
|
||||
Request::SetLogFilter(level) => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, set_log_filter, level),
|
||||
Request::SetUartLogFilter(level) => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, set_uart_log_filter, level),
|
||||
Request::ConfigRead { ref key } => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, config_read, key),
|
||||
Request::ConfigWrite { ref key, ref value } => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, config_write, key, value),
|
||||
Request::ConfigRemove { ref key } => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, config_remove, key),
|
||||
Request::ConfigErase => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, config_erase),
|
||||
Request::Reboot => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, reboot),
|
||||
Request::DebugAllocator => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, debug_allocator),
|
||||
Request::Flash { ref image } => process!(io, _aux_mutex, _ddma_mutex, _subkernel_mutex, _routing_table, stream, _destination, flash, &image[..]),
|
||||
}?;
|
||||
}
|
||||
}
|
||||
|
||||
pub fn thread(io: Io) {
|
||||
pub fn thread(io: Io, aux_mutex: &Mutex, ddma_mutex: &Mutex, subkernel_mutex: &Mutex, routing_table: &Urc<RefCell<RoutingTable>>) {
|
||||
let listener = TcpListener::new(&io, 8192);
|
||||
listener.listen(1380).expect("mgmt: cannot listen");
|
||||
info!("management interface active");
|
||||
|
||||
loop {
|
||||
let aux_mutex = aux_mutex.clone();
|
||||
let ddma_mutex = ddma_mutex.clone();
|
||||
let subkernel_mutex = subkernel_mutex.clone();
|
||||
let routing_table = routing_table.clone();
|
||||
let stream = listener.accept().expect("mgmt: cannot accept").into_handle();
|
||||
io.spawn(4096, move |io| {
|
||||
io.spawn(16384, move |io| {
|
||||
let routing_table = routing_table.borrow();
|
||||
let mut stream = TcpStream::from_handle(&io, stream);
|
||||
match worker(&io, &mut stream) {
|
||||
match worker(&io, &aux_mutex, &ddma_mutex, &subkernel_mutex, &routing_table, &mut stream) {
|
||||
Ok(()) => (),
|
||||
Err(Error::Io(IoError::UnexpectedEnd)) => (),
|
||||
Err(err) => error!("aborted: {}", err)
|
||||
|
|
|
@ -119,19 +119,17 @@ pub mod drtio {
|
|||
},
|
||||
// (potentially) routable packets
|
||||
drtioaux::Packet::DmaAddTraceRequest { destination, .. } |
|
||||
drtioaux::Packet::DmaAddTraceReply { destination, .. } |
|
||||
drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } |
|
||||
drtioaux::Packet::DmaRemoveTraceReply { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackRequest { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackReply { destination, .. } |
|
||||
drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } |
|
||||
drtioaux::Packet::SubkernelLoadRunReply { destination, .. } |
|
||||
drtioaux::Packet::SubkernelMessage { destination, .. } |
|
||||
drtioaux::Packet::SubkernelMessageAck { destination, .. } |
|
||||
drtioaux::Packet::SubkernelExceptionRequest { destination, .. } |
|
||||
drtioaux::Packet::SubkernelException { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackStatus { destination, .. } |
|
||||
drtioaux::Packet::SubkernelFinished { destination, .. } => {
|
||||
drtioaux::Packet::DmaAddTraceReply { destination, .. } |
|
||||
drtioaux::Packet::DmaRemoveTraceRequest { destination, .. } |
|
||||
drtioaux::Packet::DmaRemoveTraceReply { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackRequest { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackReply { destination, .. } |
|
||||
drtioaux::Packet::SubkernelLoadRunRequest { destination, .. } |
|
||||
drtioaux::Packet::SubkernelLoadRunReply { destination, .. } |
|
||||
drtioaux::Packet::SubkernelMessage { destination, .. } |
|
||||
drtioaux::Packet::SubkernelMessageAck { destination, .. } |
|
||||
drtioaux::Packet::DmaPlaybackStatus { destination, .. } |
|
||||
drtioaux::Packet::SubkernelFinished { destination, .. } => {
|
||||
if *destination == 0 {
|
||||
false
|
||||
} else {
|
||||
|
@@ -471,7 +469,7 @@ pub mod drtio {
        }
    }

    fn partition_data<F>(data: &[u8], send_f: F) -> Result<(), Error>
    pub fn partition_data<F>(data: &[u8], send_f: F) -> Result<(), Error>
        where F: Fn(&[u8; MASTER_PAYLOAD_MAX_SIZE], PayloadStatus, usize) -> Result<(), Error> {
        let mut i = 0;
        while i < data.len() {

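partition_data is made public here so the management code can reuse it: it walks the input in MASTER_PAYLOAD_MAX_SIZE chunks and reports each chunk's position through PayloadStatus and its length. The diff truncates the body, so as an illustration only, a generic loop in the same spirit:

    // Fixed-size chunking with a "last chunk" flag, analogous to partition_data.
    fn send_in_chunks<E, F>(data: &[u8], chunk: usize, mut send_f: F) -> Result<(), E>
        where F: FnMut(&[u8], bool) -> Result<(), E>
    {
        let mut i = 0;
        while i < data.len() {
            let end = core::cmp::min(i + chunk, data.len());
            send_f(&data[i..end], end == data.len())?;
            i = end;
        }
        Ok(())
    }
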
@ -614,9 +612,9 @@ pub mod drtio {
|
|||
let mut remote_data: Vec<u8> = Vec::new();
|
||||
loop {
|
||||
let reply = aux_transact(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table, linkno,
|
||||
&drtioaux::Packet::SubkernelExceptionRequest { source: 0, destination: destination })?;
|
||||
&drtioaux::Packet::SubkernelExceptionRequest { destination: destination })?;
|
||||
match reply {
|
||||
drtioaux::Packet::SubkernelException { destination: 0, last, length, data } => {
|
||||
drtioaux::Packet::SubkernelException { last, length, data } => {
|
||||
remote_data.extend(&data[0..length as usize]);
|
||||
if last {
|
||||
return Ok(remote_data);
|
||||
|
|
|
@ -126,6 +126,19 @@ macro_rules! unexpected {
|
|||
($($arg:tt)*) => (return Err(Error::Unexpected(format!($($arg)*))));
|
||||
}
|
||||
|
||||
#[cfg(has_drtio)]
|
||||
macro_rules! propagate_subkernel_exception {
|
||||
( $exception:ident, $stream:ident ) => {
|
||||
error!("Exception in subkernel");
|
||||
match $stream {
|
||||
None => return Ok(true),
|
||||
Some(ref mut $stream) => {
|
||||
$stream.write_all($exception)?;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Persistent state
|
||||
#[derive(Debug)]
|
||||
struct Congress {
|
||||
|
@ -673,26 +686,23 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
|
|||
&kern::SubkernelAwaitFinishRequest{ id, timeout } => {
|
||||
let res = subkernel::await_finish(io, aux_mutex, ddma_mutex, subkernel_mutex, routing_table,
|
||||
id, timeout);
|
||||
let response = match res {
|
||||
let status = match res {
|
||||
Ok(ref res) => {
|
||||
if res.comm_lost {
|
||||
kern::SubkernelError(kern::SubkernelStatus::CommLost)
|
||||
} else if let Some(raw_exception) = &res.exception {
|
||||
let exception = subkernel::read_exception(raw_exception);
|
||||
if let Ok(exception) = exception {
|
||||
kern::SubkernelError(kern::SubkernelStatus::Exception(exception))
|
||||
} else {
|
||||
kern::SubkernelError(kern::SubkernelStatus::OtherError)
|
||||
}
|
||||
kern::SubkernelStatus::CommLost
|
||||
} else if let Some(exception) = &res.exception {
|
||||
propagate_subkernel_exception!(exception, stream);
|
||||
// will not be called after exception is served
|
||||
kern::SubkernelStatus::OtherError
|
||||
} else {
|
||||
kern::SubkernelAwaitFinishReply
|
||||
kern::SubkernelStatus::NoError
|
||||
}
|
||||
},
|
||||
Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
|
||||
Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
|
||||
Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
|
||||
Err(SubkernelError::Timeout) => kern::SubkernelStatus::Timeout,
|
||||
Err(SubkernelError::IncorrectState) => kern::SubkernelStatus::IncorrectState,
|
||||
Err(_) => kern::SubkernelStatus::OtherError
|
||||
};
|
||||
kern_send(io, &response)
|
||||
kern_send(io, &kern::SubkernelAwaitFinishReply { status: status })
|
||||
}
|
||||
#[cfg(has_drtio)]
|
||||
&kern::SubkernelMsgSend { id, destination, count, tag, data } => {
|
||||
|
@ -702,80 +712,72 @@ fn process_kern_message(io: &Io, aux_mutex: &Mutex,
|
|||
#[cfg(has_drtio)]
|
||||
&kern::SubkernelMsgRecvRequest { id, timeout, tags } => {
|
||||
let message_received = subkernel::message_await(io, subkernel_mutex, id as u32, timeout);
|
||||
if let Err(SubkernelError::SubkernelFinished) = message_received {
|
||||
let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex,
|
||||
routing_table, id as u32)?;
|
||||
if res.comm_lost {
|
||||
kern_send(io,
|
||||
&kern::SubkernelError(kern::SubkernelStatus::CommLost))?;
|
||||
} else if let Some(raw_exception) = &res.exception {
|
||||
let exception = subkernel::read_exception(raw_exception);
|
||||
if let Ok(exception) = exception {
|
||||
kern_send(io,
|
||||
&kern::SubkernelError(kern::SubkernelStatus::Exception(exception)))?;
|
||||
let (status, count) = match message_received {
|
||||
Ok(ref message) => (kern::SubkernelStatus::NoError, message.count),
|
||||
Err(SubkernelError::Timeout) => (kern::SubkernelStatus::Timeout, 0),
|
||||
Err(SubkernelError::IncorrectState) => (kern::SubkernelStatus::IncorrectState, 0),
|
||||
Err(SubkernelError::SubkernelFinished) => {
|
||||
let res = subkernel::retrieve_finish_status(io, aux_mutex, ddma_mutex, subkernel_mutex,
|
||||
routing_table, id as u32)?;
|
||||
if res.comm_lost {
|
||||
(kern::SubkernelStatus::CommLost, 0)
|
||||
} else if let Some(exception) = &res.exception {
|
||||
propagate_subkernel_exception!(exception, stream);
|
||||
(kern::SubkernelStatus::OtherError, 0)
|
||||
} else {
|
||||
kern_send(io,
|
||||
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
|
||||
(kern::SubkernelStatus::OtherError, 0)
|
||||
}
|
||||
} else {
|
||||
kern_send(io,
|
||||
&kern::SubkernelError(kern::SubkernelStatus::OtherError))?;
|
||||
}
|
||||
} else {
|
||||
let message = match message_received {
|
||||
Ok(ref message) => kern::SubkernelMsgRecvReply { count: message.count },
|
||||
Err(SubkernelError::Timeout) => kern::SubkernelError(kern::SubkernelStatus::Timeout),
|
||||
Err(SubkernelError::IncorrectState) => kern::SubkernelError(kern::SubkernelStatus::IncorrectState),
|
||||
Err(SubkernelError::SubkernelFinished) => unreachable!(), // taken care of above
|
||||
Err(_) => kern::SubkernelError(kern::SubkernelStatus::OtherError)
|
||||
};
|
||||
kern_send(io, &message)?;
|
||||
if let Ok(message) = message_received {
|
||||
// receive code almost identical to RPC recv, except we are not reading from a stream
|
||||
let mut reader = Cursor::new(message.data);
|
||||
let mut current_tags = tags;
|
||||
let mut i = 0;
|
||||
loop {
|
||||
// kernel has to consume all arguments in the whole message
|
||||
let slot = kern_recv(io, |reply| {
|
||||
Err(_) => (kern::SubkernelStatus::OtherError, 0)
|
||||
};
|
||||
kern_send(io, &kern::SubkernelMsgRecvReply { status: status, count: count})?;
|
||||
if let Ok(message) = message_received {
|
||||
// receive code almost identical to RPC recv, except we are not reading from a stream
|
||||
let mut reader = Cursor::new(message.data);
|
||||
let mut current_tags = tags;
|
||||
let mut i = 0;
|
||||
loop {
|
||||
// kernel has to consume all arguments in the whole message
|
||||
let slot = kern_recv(io, |reply| {
|
||||
match reply {
|
||||
&kern::RpcRecvRequest(slot) => Ok(slot),
|
||||
other => unexpected!(
|
||||
"expected root value slot from kernel CPU, not {:?}", other)
|
||||
}
|
||||
})?;
|
||||
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
|
||||
if size == 0 {
|
||||
return Ok(0 as *mut ())
|
||||
}
|
||||
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
|
||||
Ok(kern_recv(io, |reply| {
|
||||
match reply {
|
||||
&kern::RpcRecvRequest(slot) => Ok(slot),
|
||||
other => unexpected!(
|
||||
"expected root value slot from kernel CPU, not {:?}", other)
|
||||
"expected nested value slot from kernel CPU, not {:?}", other)
|
||||
}
|
||||
})?;
|
||||
let res = rpc::recv_return(&mut reader, current_tags, slot, &|size| -> Result<_, Error<SchedError>> {
|
||||
if size == 0 {
|
||||
return Ok(0 as *mut ())
|
||||
})?)
|
||||
});
|
||||
match res {
|
||||
Ok(new_tags) => {
|
||||
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
|
||||
i += 1;
|
||||
if i < message.count {
|
||||
// update the tag for next read
|
||||
current_tags = new_tags;
|
||||
} else {
|
||||
// should be done by then
|
||||
break;
|
||||
}
|
||||
kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
|
||||
Ok(kern_recv(io, |reply| {
|
||||
match reply {
|
||||
&kern::RpcRecvRequest(slot) => Ok(slot),
|
||||
other => unexpected!(
|
||||
"expected nested value slot from kernel CPU, not {:?}", other)
|
||||
}
|
||||
})?)
|
||||
});
|
||||
match res {
|
||||
Ok(new_tags) => {
|
||||
kern_send(io, &kern::RpcRecvReply(Ok(0)))?;
|
||||
i += 1;
|
||||
if i < message.count {
|
||||
// update the tag for next read
|
||||
current_tags = new_tags;
|
||||
} else {
|
||||
// should be done by then
|
||||
break;
|
||||
}
|
||||
},
|
||||
Err(_) => unexpected!("expected valid subkernel message data")
|
||||
};
|
||||
}
|
||||
},
|
||||
Err(_) => unexpected!("expected valid subkernel message data")
|
||||
};
|
||||
}
|
||||
Ok(())
|
||||
} else {
|
||||
// if timed out, no data has been received, exception should be raised by kernel
|
||||
Ok(())
|
||||
}
|
||||
Ok(())
|
||||
},
|
||||
|
||||
request => unexpected!("unexpected request {:?} from kernel CPU", request)
|
||||
|
@ -914,7 +916,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
|
|||
Ok(()) =>
|
||||
info!("startup kernel finished"),
|
||||
Err(Error::KernelNotFound) =>
|
||||
debug!("no startup kernel found"),
|
||||
info!("no startup kernel found"),
|
||||
Err(err) => {
|
||||
congress.finished_cleanly.set(false);
|
||||
error!("startup kernel aborted: {}", err);
|
||||
|
@ -1009,7 +1011,7 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
|
|||
drtio::clear_buffers(&io, &aux_mutex);
|
||||
}
|
||||
Err(Error::KernelNotFound) => {
|
||||
debug!("no idle kernel found");
|
||||
info!("no idle kernel found");
|
||||
while io.relinquish().is_ok() {}
|
||||
}
|
||||
Err(err) => {
|
||||
|
|
|
@ -15,6 +15,8 @@ build_misoc = { path = "../libbuild_misoc" }
|
|||
[dependencies]
log = { version = "0.4", default-features = false }
io = { path = "../libio", features = ["byteorder", "alloc"] }
byteorder = { version = "1.0", default-features = false }
crc = { version = "1.7", default-features = false }
cslice = { version = "0.3" }
board_misoc = { path = "../libboard_misoc", features = ["uart_console", "log"] }
board_artiq = { path = "../libboard_artiq", features = ["alloc"] }
|
||||
|
|
|
@ -1,22 +1,23 @@
|
|||
use core::mem;
|
||||
use alloc::{string::String, format, vec::Vec, collections::btree_map::BTreeMap};
|
||||
use cslice::{CSlice, AsCSlice};
|
||||
use cslice::AsCSlice;
|
||||
|
||||
use board_artiq::{drtioaux, drtio_routing::RoutingTable, mailbox, spi};
|
||||
use board_misoc::{csr, clock, i2c};
|
||||
use proto_artiq::{
|
||||
drtioaux_proto::PayloadStatus,
|
||||
kernel_proto as kern,
|
||||
kernel_proto as kern,
|
||||
session_proto::Reply::KernelException as HostKernelException,
|
||||
rpc_proto as rpc};
|
||||
use eh::eh_artiq;
|
||||
use io::{Cursor, ProtoRead};
|
||||
use io::Cursor;
|
||||
use kernel::eh_artiq::StackPointerBacktrace;
|
||||
|
||||
use ::{cricon_select, RtioMaster};
|
||||
use cache::Cache;
|
||||
use dma::{Manager as DmaManager, Error as DmaError};
|
||||
use routing::{Router, Sliceable, SliceMeta};
|
||||
use SAT_PAYLOAD_MAX_SIZE;
|
||||
use MASTER_PAYLOAD_MAX_SIZE;
|
||||
|
||||
mod kernel_cpu {
|
||||
|
@ -68,7 +69,6 @@ enum KernelState {
|
|||
SubkernelAwaitFinish { max_time: i64, id: u32 },
|
||||
DmaUploading { max_time: u64 },
|
||||
DmaAwait { max_time: u64 },
|
||||
SubkernelRetrievingException { destination: u8 },
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
|
@ -134,13 +134,10 @@ struct MessageManager {
|
|||
struct Session {
|
||||
kernel_state: KernelState,
|
||||
log_buffer: String,
|
||||
last_exception: Option<Sliceable>, // exceptions raised locally
|
||||
external_exception: Vec<u8>, // exceptions from sub-subkernels
|
||||
// which destination requested running the kernel
|
||||
source: u8,
|
||||
last_exception: Option<Sliceable>,
|
||||
source: u8, // which destination requested running the kernel
|
||||
messages: MessageManager,
|
||||
// ids of subkernels finished (with exception)
|
||||
subkernels_finished: Vec<(u32, Option<u8>)>,
|
||||
subkernels_finished: Vec<u32> // ids of subkernels finished
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
|
@ -280,7 +277,6 @@ impl Session {
|
|||
kernel_state: KernelState::Absent,
|
||||
log_buffer: String::new(),
|
||||
last_exception: None,
|
||||
external_exception: Vec::new(),
|
||||
source: 0,
|
||||
messages: MessageManager::new(),
|
||||
subkernels_finished: Vec::new()
|
||||
|
@ -432,9 +428,9 @@ impl Manager {
|
|||
}
|
||||
}
|
||||
|
||||
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; MASTER_PAYLOAD_MAX_SIZE]) -> SliceMeta {
|
||||
pub fn exception_get_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta {
|
||||
match self.session.last_exception.as_mut() {
|
||||
Some(exception) => exception.get_slice_master(data_slice),
|
||||
Some(exception) => exception.get_slice_sat(data_slice),
|
||||
None => SliceMeta { destination: 0, len: 0, status: PayloadStatus::FirstAndLast }
|
||||
}
|
||||
}
|
||||
|
@ -521,7 +517,7 @@ impl Manager {
|
|||
return;
|
||||
}
|
||||
|
||||
match self.process_external_messages(router, routing_table, rank, destination) {
|
||||
match self.process_external_messages() {
|
||||
Ok(()) => (),
|
||||
Err(Error::AwaitingMessage) => return, // kernel still waiting, do not process kernel messages
|
||||
Err(Error::KernelException(exception)) => {
|
||||
|
@ -553,42 +549,20 @@ impl Manager {
|
|||
}
|
||||
}
|
||||
|
||||
fn check_finished_kernels(&mut self, id: u32, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) {
|
||||
for (i, (status, exception_source)) in self.session.subkernels_finished.iter().enumerate() {
|
||||
if *status == id {
|
||||
if exception_source.is_none() {
|
||||
kern_send(&kern::SubkernelAwaitFinishReply).unwrap();
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
self.session.subkernels_finished.swap_remove(i);
|
||||
} else {
|
||||
let destination = exception_source.unwrap();
|
||||
self.session.external_exception = Vec::new();
|
||||
self.session.kernel_state = KernelState::SubkernelRetrievingException { destination: destination };
|
||||
router.route(drtioaux::Packet::SubkernelExceptionRequest {
|
||||
source: self_destination, destination: destination
|
||||
}, &routing_table, rank, self_destination);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn process_external_messages(&mut self, router: &mut Router, routing_table: &RoutingTable, rank: u8, self_destination: u8) -> Result<(), Error> {
|
||||
fn process_external_messages(&mut self) -> Result<(), Error> {
|
||||
match &self.session.kernel_state {
|
||||
KernelState::MsgAwait { id, max_time, tags } => {
|
||||
if *max_time > 0 && clock::get_ms() > *max_time as u64 {
|
||||
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
|
||||
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::Timeout, count: 0 })?;
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
return Ok(())
|
||||
}
|
||||
if let Some(message) = self.session.messages.get_incoming(*id) {
|
||||
kern_send(&kern::SubkernelMsgRecvReply { count: message.count })?;
|
||||
kern_send(&kern::SubkernelMsgRecvReply { status: kern::SubkernelStatus::NoError, count: message.count })?;
|
||||
let tags = tags.clone();
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
pass_message_to_kernel(&message, &tags)
|
||||
} else {
|
||||
let id = *id;
|
||||
self.check_finished_kernels(id, router, routing_table, rank, self_destination);
|
||||
Err(Error::AwaitingMessage)
|
||||
}
|
||||
},
|
||||
|
@ -602,11 +576,19 @@ impl Manager {
|
|||
},
|
||||
KernelState::SubkernelAwaitFinish { max_time, id } => {
|
||||
if *max_time > 0 && clock::get_ms() > *max_time as u64 {
|
||||
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Timeout))?;
|
||||
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::Timeout })?;
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
} else {
|
||||
let id = *id;
|
||||
self.check_finished_kernels(id, router, routing_table, rank, self_destination);
|
||||
let mut i = 0;
|
||||
for status in &self.session.subkernels_finished {
|
||||
if *status == *id {
|
||||
kern_send(&kern::SubkernelAwaitFinishReply { status: kern::SubkernelStatus::NoError })?;
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
self.session.subkernels_finished.swap_remove(i);
|
||||
break;
|
||||
}
|
||||
i += 1;
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
@ -624,9 +606,6 @@ impl Manager {
|
|||
}
|
||||
Ok(())
|
||||
}
|
||||
KernelState::SubkernelRetrievingException { destination: _ } => {
|
||||
Err(Error::AwaitingMessage)
|
||||
}
|
||||
_ => Ok(())
|
||||
}
|
||||
}
|
||||
|
@ -649,30 +628,16 @@ impl Manager {
|
|||
}
|
||||
|
||||
pub fn remote_subkernel_finished(&mut self, id: u32, with_exception: bool, exception_source: u8) {
|
||||
let exception_src = if with_exception { Some(exception_source) } else { None };
|
||||
self.session.subkernels_finished.push((id, exception_src));
|
||||
}
|
||||
|
||||
pub fn received_exception(&mut self, exception_data: &[u8], last: bool, router: &mut Router, routing_table: &RoutingTable,
|
||||
rank: u8, self_destination: u8) {
|
||||
if let KernelState::SubkernelRetrievingException { destination } = self.session.kernel_state {
|
||||
self.session.external_exception.extend_from_slice(exception_data);
|
||||
if last {
|
||||
if let Ok(exception) = read_exception(&self.session.external_exception) {
|
||||
kern_send(&kern::SubkernelError(kern::SubkernelStatus::Exception(exception))).unwrap();
|
||||
} else {
|
||||
kern_send(
|
||||
&kern::SubkernelError(kern::SubkernelStatus::OtherError)).unwrap();
|
||||
}
|
||||
self.session.kernel_state = KernelState::Running;
|
||||
} else {
|
||||
/* fetch another slice */
|
||||
router.route(drtioaux::Packet::SubkernelExceptionRequest {
|
||||
source: self_destination, destination: destination
|
||||
}, routing_table, rank, self_destination);
|
||||
}
|
||||
if with_exception {
|
||||
unsafe { kernel_cpu::stop() }
|
||||
self.session.kernel_state = KernelState::Absent;
|
||||
unsafe { self.cache.unborrow() }
|
||||
self.last_finished = Some(SubkernelFinished {
|
||||
source: self.session.source, id: self.current_id,
|
||||
with_exception: true, exception_source: exception_source
|
||||
})
|
||||
} else {
|
||||
warn!("Received unsolicited exception data");
|
||||
self.session.subkernels_finished.push(id);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -690,7 +655,6 @@ impl Manager {
|
|||
(_, KernelState::DmaAwait { .. }) |
|
||||
(_, KernelState::MsgSending) |
|
||||
(_, KernelState::SubkernelAwaitLoad) |
|
||||
(_, KernelState::SubkernelRetrievingException { .. }) |
|
||||
(_, KernelState::SubkernelAwaitFinish { .. }) => {
|
||||
// We're standing by; ignore the message.
|
||||
return Ok(None)
|
||||
|
@ -858,48 +822,6 @@ impl Drop for Manager {
|
|||
}
|
||||
}
|
||||
|
||||
fn read_exception_string<'a>(reader: &mut Cursor<&[u8]>) -> Result<CSlice<'a, u8>, Error> {
|
||||
let len = reader.read_u32()? as usize;
|
||||
if len == usize::MAX {
|
||||
let data = reader.read_u32()?;
|
||||
Ok(unsafe { CSlice::new(data as *const u8, len) })
|
||||
} else {
|
||||
let pos = reader.position();
|
||||
let slice = unsafe {
|
||||
let ptr = reader.get_ref().as_ptr().offset(pos as isize);
|
||||
CSlice::new(ptr, len)
|
||||
};
|
||||
reader.set_position(pos + len);
|
||||
Ok(slice)
|
||||
}
|
||||
}
|
||||
|
||||
fn read_exception(buffer: &[u8]) -> Result<eh_artiq::Exception, Error>
|
||||
{
|
||||
let mut reader = Cursor::new(buffer);
|
||||
|
||||
let mut byte = reader.read_u8()?;
|
||||
// to sync
|
||||
while byte != 0x5a {
|
||||
byte = reader.read_u8()?;
|
||||
}
|
||||
// skip sync bytes, 0x09 indicates exception
|
||||
while byte != 0x09 {
|
||||
byte = reader.read_u8()?;
|
||||
}
|
||||
let _len = reader.read_u32()?;
|
||||
// ignore the remaining exceptions, stack traces etc. - unwinding from another device would be unwise anyway
|
||||
Ok(eh_artiq::Exception {
|
||||
id: reader.read_u32()?,
|
||||
message: read_exception_string(&mut reader)?,
|
||||
param: [reader.read_u64()? as i64, reader.read_u64()? as i64, reader.read_u64()? as i64],
|
||||
file: read_exception_string(&mut reader)?,
|
||||
line: reader.read_u32()?,
|
||||
column: reader.read_u32()?,
|
||||
function: read_exception_string(&mut reader)?
|
||||
})
|
||||
}
|
||||
|
||||
fn kern_recv<R, F>(f: F) -> Result<R, Error>
|
||||
where F: FnOnce(&kern::Message) -> Result<R, Error> {
|
||||
if mailbox::receive() == 0 {
|
||||
|
|
|
@ -9,6 +9,8 @@ extern crate board_artiq;
|
|||
extern crate riscv;
|
||||
extern crate alloc;
|
||||
extern crate proto_artiq;
|
||||
extern crate byteorder;
|
||||
extern crate crc;
|
||||
extern crate cslice;
|
||||
extern crate io;
|
||||
extern crate eh;
|
||||
|
@ -21,6 +23,7 @@ use board_artiq::si5324;
|
|||
use board_artiq::si549;
|
||||
#[cfg(soc_platform = "kasli")]
|
||||
use board_misoc::irq;
|
||||
use board_misoc::spiflash;
|
||||
use board_artiq::{spi, drtioaux, drtio_routing};
|
||||
#[cfg(soc_platform = "efc")]
|
||||
use board_artiq::ad9117;
|
||||
|
@ -30,6 +33,7 @@ use board_artiq::drtio_eem;
|
|||
use riscv::register::{mcause, mepc, mtval};
|
||||
use dma::Manager as DmaManager;
|
||||
use kernel::Manager as KernelManager;
|
||||
use mgmt::Manager as CoreManager;
|
||||
use analyzer::Analyzer;
|
||||
|
||||
#[global_allocator]
|
||||
|
@ -41,6 +45,7 @@ mod dma;
|
|||
mod analyzer;
|
||||
mod kernel;
|
||||
mod cache;
|
||||
mod mgmt;
|
||||
|
||||
fn drtiosat_reset(reset: bool) {
|
||||
unsafe {
|
||||
|
@ -129,7 +134,7 @@ macro_rules! forward {
|
|||
($router:expr, $routing_table:expr, $destination:expr, $rank:expr, $self_destination:expr, $repeaters:expr, $packet:expr) => {}
|
||||
}
|
||||
|
||||
fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmgr: &mut KernelManager,
|
||||
fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmgr: &mut KernelManager, coremgr: &mut CoreManager,
|
||||
_repeaters: &mut [repeater::Repeater], _routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8,
|
||||
router: &mut routing::Router, self_destination: &mut u8, packet: drtioaux::Packet
|
||||
) -> Result<(), drtioaux::Error<!>> {
|
||||
|
@ -455,21 +460,15 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
|
|||
kernelmgr.remote_subkernel_finished(id, with_exception, exception_src);
|
||||
Ok(())
|
||||
}
|
||||
drtioaux::Packet::SubkernelExceptionRequest { source, destination: _destination } => {
|
||||
drtioaux::Packet::SubkernelExceptionRequest { destination: _destination } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
let mut data_slice: [u8; MASTER_PAYLOAD_MAX_SIZE] = [0; MASTER_PAYLOAD_MAX_SIZE];
|
||||
let mut data_slice: [u8; SAT_PAYLOAD_MAX_SIZE] = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
let meta = kernelmgr.exception_get_slice(&mut data_slice);
|
||||
router.send(drtioaux::Packet::SubkernelException {
|
||||
destination: source,
|
||||
drtioaux::send(0, &drtioaux::Packet::SubkernelException {
|
||||
last: meta.status.is_last(),
|
||||
length: meta.len,
|
||||
data: data_slice,
|
||||
}, _routing_table, *rank, *self_destination)
|
||||
}
|
||||
drtioaux::Packet::SubkernelException { destination: _destination, last, length, data } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
kernelmgr.received_exception(&data[..length as usize], last, router, _routing_table, *rank, *self_destination);
|
||||
Ok(())
|
||||
})
|
||||
}
|
||||
drtioaux::Packet::SubkernelMessage { source, destination: _destination, id, status, length, data } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
@ -495,6 +494,130 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
|
|||
Ok(())
|
||||
}
|
||||
|
||||
drtioaux::Packet::CoreMgmtGetLogRequest { destination: _destination, .. } |
|
||||
drtioaux::Packet::CoreMgmtClearLogRequest { destination: _destination } |
|
||||
drtioaux::Packet::CoreMgmtSetLogLevelRequest {destination: _destination, .. } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
error!("RISC-V satellite devices do not support buffered logging");
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: false })
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtSetUartLogLevelRequest { destination: _destination, .. } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
error!("RISC-V satellite devices has fixed UART log level fixed at TRACE");
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: false })
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtConfigReadRequest {
|
||||
destination: _destination,
|
||||
length,
|
||||
key,
|
||||
} => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
let mut value_slice = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
|
||||
let key_slice = &key[..length as usize];
|
||||
if !key_slice.is_ascii() {
|
||||
error!("invalid key");
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: false })
|
||||
} else {
|
||||
let key = core::str::from_utf8(key_slice).unwrap();
|
||||
if coremgr.fetch_config_value(key).is_ok() {
|
||||
let meta = coremgr.get_config_value_slice(&mut value_slice);
|
||||
drtioaux::send(
|
||||
0,
|
||||
&drtioaux::Packet::CoreMgmtConfigReadReply {
|
||||
length: meta.len as u16,
|
||||
last: meta.status.is_last(),
|
||||
value: value_slice,
|
||||
},
|
||||
)
|
||||
} else {
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: false })
|
||||
}
|
||||
}
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtConfigReadContinue {
|
||||
destination: _destination,
|
||||
} => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
let mut value_slice = [0; SAT_PAYLOAD_MAX_SIZE];
|
||||
let meta = coremgr.get_config_value_slice(&mut value_slice);
|
||||
drtioaux::send(
|
||||
0,
|
||||
&drtioaux::Packet::CoreMgmtConfigReadReply {
|
||||
length: meta.len as u16,
|
||||
last: meta.status.is_last(),
|
||||
value: value_slice,
|
||||
},
|
||||
)
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtConfigWriteRequest { destination: _destination, last, length, data } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
coremgr.add_config_data(&data, length as usize);
|
||||
if last {
|
||||
coremgr.write_config()
|
||||
} else {
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: true })
|
||||
}
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtConfigRemoveRequest { destination: _destination, length, key } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
let key = core::str::from_utf8(&key[..length as usize]).unwrap();
|
||||
let succeeded = config::remove(key)
|
||||
.map_err(|err| warn!("error on removing config: {:?}", err))
|
||||
.is_ok();
|
||||
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded })
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtConfigEraseRequest { destination: _destination } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
let succeeded = config::erase()
|
||||
.map_err(|err| warn!("error on erasing config: {:?}", err))
|
||||
.is_ok();
|
||||
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded })
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtRebootRequest { destination: _destination } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: true })?;
|
||||
warn!("restarting");
|
||||
unsafe { spiflash::reload(); }
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtFlashRequest { destination: _destination, last, length, data } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
coremgr.add_image_data(&data, length as usize);
|
||||
if last {
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtDropLink)
|
||||
} else {
|
||||
drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: true })
|
||||
}
|
||||
}
|
||||
drtioaux::Packet::CoreMgmtDropLinkAck { destination: _destination } => {
|
||||
forward!(router, _routing_table, _destination, *rank, *self_destination, _repeaters, &packet);
|
||||
|
||||
#[cfg(not(soc_platform = "efc"))]
|
||||
unsafe {
|
||||
csr::gt_drtio::txenable_write(0);
|
||||
}
|
||||
|
||||
#[cfg(has_drtio_eem)]
|
||||
unsafe {
|
||||
csr::eem_transceiver::txenable_write(0);
|
||||
}
|
||||
|
||||
coremgr.flash_image();
|
||||
warn!("restarting");
|
||||
unsafe { spiflash::reload(); }
|
||||
}
|
||||
|
||||
_ => {
|
||||
warn!("received unexpected aux packet");
|
||||
Ok(())
|
||||
|
@ -503,13 +626,13 @@ fn process_aux_packet(dmamgr: &mut DmaManager, analyzer: &mut Analyzer, kernelmg
|
|||
}
|
||||
|
||||
fn process_aux_packets(dma_manager: &mut DmaManager, analyzer: &mut Analyzer,
|
||||
kernelmgr: &mut KernelManager, repeaters: &mut [repeater::Repeater],
|
||||
kernelmgr: &mut KernelManager, coremgr: &mut CoreManager, repeaters: &mut [repeater::Repeater],
|
||||
routing_table: &mut drtio_routing::RoutingTable, rank: &mut u8, router: &mut routing::Router,
|
||||
destination: &mut u8) {
|
||||
let result =
|
||||
drtioaux::recv(0).and_then(|packet| {
|
||||
if let Some(packet) = packet.or_else(|| router.get_local_packet()) {
|
||||
process_aux_packet(dma_manager, analyzer, kernelmgr,
|
||||
process_aux_packet(dma_manager, analyzer, kernelmgr, coremgr,
|
||||
repeaters, routing_table, rank, router, destination, packet)
|
||||
} else {
|
||||
Ok(())
|
||||
|
@ -829,6 +952,7 @@ pub extern fn main() -> i32 {
|
|||
let mut dma_manager = DmaManager::new();
|
||||
let mut analyzer = Analyzer::new();
|
||||
let mut kernelmgr = KernelManager::new();
|
||||
let mut coremgr = CoreManager::new();
|
||||
|
||||
cricon_select(RtioMaster::Drtio);
|
||||
drtioaux::reset(0);
|
||||
|
@ -838,7 +962,7 @@ pub extern fn main() -> i32 {
|
|||
while drtiosat_link_rx_up() {
|
||||
drtiosat_process_errors();
|
||||
process_aux_packets(&mut dma_manager, &mut analyzer,
|
||||
&mut kernelmgr, &mut repeaters, &mut routing_table,
|
||||
&mut kernelmgr, &mut coremgr, &mut repeaters, &mut routing_table,
|
||||
&mut rank, &mut router, &mut destination);
|
||||
for rep in repeaters.iter_mut() {
|
||||
rep.service(&routing_table, rank, destination, &mut router);
|
||||
|
|
|
@ -0,0 +1,103 @@
use alloc::vec::Vec;
use byteorder::{ByteOrder, NativeEndian};
use crc::crc32;

use routing::{Sliceable, SliceMeta};
use board_artiq::drtioaux;
use board_misoc::{mem, config, spiflash};
use io::{Cursor, ProtoRead, ProtoWrite};
use proto_artiq::drtioaux_proto::SAT_PAYLOAD_MAX_SIZE;


pub struct Manager {
    config_payload: Cursor<Vec<u8>>,
    image_payload: Cursor<Vec<u8>>,
    last_value: Sliceable,
}

impl Manager {
    pub fn new() -> Manager {
        Manager {
            config_payload: Cursor::new(Vec::new()),
            image_payload: Cursor::new(Vec::new()),
            last_value: Sliceable::new(0, Vec::new()),
        }
    }

    pub fn fetch_config_value(&mut self, key: &str) -> Result<(), ()> {
        config::read(key, |result| result.map(
            |value| self.last_value = Sliceable::new(0, value.to_vec())
        )).map_err(|_err| warn!("read error: no such key"))
    }

    pub fn get_config_value_slice(&mut self, data_slice: &mut [u8; SAT_PAYLOAD_MAX_SIZE]) -> SliceMeta {
        self.last_value.get_slice_sat(data_slice)
    }

    pub fn add_config_data(&mut self, data: &[u8], data_len: usize) {
        self.config_payload.write_all(&data[..data_len]).unwrap();
    }

    pub fn clear_config_data(&mut self) {
        self.config_payload.get_mut().clear();
        self.config_payload.set_position(0);
    }

    pub fn write_config(&mut self) -> Result<(), drtioaux::Error<!>> {
        let key = match self.config_payload.read_string() {
            Ok(key) => key,
            Err(err) => {
                self.clear_config_data();
                error!("error on reading key: {:?}", err);
                return drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded: false });
            }
        };

        let value = self.config_payload.read_bytes().unwrap();

        let succeeded = config::write(&key, &value).map_err(|err| {
            error!("error on writing config: {:?}", err);
        }).is_ok();

        self.clear_config_data();

        drtioaux::send(0, &drtioaux::Packet::CoreMgmtReply { succeeded })
    }

    pub fn add_image_data(&mut self, data: &[u8], data_len: usize) {
        self.image_payload.write_all(&data[..data_len]).unwrap();
    }

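    // Expected image layout (as parsed below): the gateware, bootloader and
    // firmware binaries in that order, each prefixed with a native-endian u32
    // length, with a CRC32 (IEEE) of everything preceding it appended as the
    // last four bytes.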
pub fn flash_image(&self) {
|
||||
let image = &self.image_payload.get_ref()[..];
|
||||
|
||||
let (expected_crc, mut image) = {
|
||||
let (image, crc_slice) = image.split_at(image.len() - 4);
|
||||
(NativeEndian::read_u32(crc_slice), image)
|
||||
};
|
||||
|
||||
let actual_crc = crc32::checksum_ieee(image);
|
||||
|
||||
if actual_crc == expected_crc {
|
||||
let bin_origins = [
|
||||
("gateware" , 0 ),
|
||||
("bootloader", mem::ROM_BASE ),
|
||||
("firmware" , mem::FLASH_BOOT_ADDRESS),
|
||||
];
|
||||
|
||||
for (name, origin) in bin_origins {
|
||||
info!("flashing {} binary...", name);
|
||||
let size = NativeEndian::read_u32(&image[..4]) as usize;
|
||||
image = &image[4..];
|
||||
|
||||
let (bin, remaining) = image.split_at(size);
|
||||
image = remaining;
|
||||
|
||||
unsafe { spiflash::flash_binary(origin, bin) };
|
||||
}
|
||||
|
||||
} else {
|
||||
panic!("CRC failed in SDRAM (actual {:08x}, expected {:08x})", actual_crc, expected_crc);
|
||||
}
|
||||
}
|
||||
}
|
|
@ -4,6 +4,7 @@ use board_artiq::{drtioaux, drtio_routing};
|
|||
use board_misoc::csr;
|
||||
use core::cmp::min;
|
||||
use proto_artiq::drtioaux_proto::PayloadStatus;
|
||||
use SAT_PAYLOAD_MAX_SIZE;
|
||||
use MASTER_PAYLOAD_MAX_SIZE;
|
||||
|
||||
/* represents data that has to be sent with the aux protocol */
|
||||
|
@ -56,6 +57,7 @@ impl Sliceable {
|
|||
self.data.extend(data);
|
||||
}
|
||||
|
||||
get_slice_fn!(get_slice_sat, SAT_PAYLOAD_MAX_SIZE);
|
||||
get_slice_fn!(get_slice_master, MASTER_PAYLOAD_MAX_SIZE);
|
||||
}
|
||||
|
||||
|
|
|
@ -164,10 +164,8 @@ class Client:
|
|||
if reply[0] != "OK":
|
||||
return reply[0], None
|
||||
length = int(reply[1])
|
||||
json_bytes = await self.reader.read(length)
|
||||
if length != len(json_bytes):
|
||||
raise ValueError(f"Received data length ({len(json_bytes)}) doesn't match expected length ({length})")
|
||||
return "OK", json_bytes
|
||||
json_str = (await self.reader.read(length)).decode("ascii")
|
||||
return "OK", json_str
|
||||
|
||||
|
||||
def get_argparser():
|
||||
|
@ -268,7 +266,7 @@ async def main_async():
|
|||
variant = args.variant
|
||||
else:
|
||||
variant = await client.get_single_variant(error_msg="Multiple variants are available - specify which one to download")
|
||||
result, json_bytes = await client.get_json(variant)
|
||||
result, json_str = await client.get_json(variant)
|
||||
if result != "OK":
|
||||
if result == "UNAUTHORIZED":
|
||||
print(f"You are not authorized to get JSON of variant {variant}. Your firmware subscription may have expired. Contact helpdesk\x40m-labs.hk.")
|
||||
|
@ -277,10 +275,10 @@ async def main_async():
|
|||
if not args.force and os.path.exists(args.out):
|
||||
print(f"File {args.out} already exists. You can use -f to overwrite the existing file.")
|
||||
sys.exit(1)
|
||||
with open(args.out, "wb") as f:
|
||||
f.write(json_bytes)
|
||||
with open(args.out, "w") as f:
|
||||
f.write(json_str)
|
||||
else:
|
||||
sys.stdout.buffer.write(json_bytes)
|
||||
print(json_str)
|
||||
else:
|
||||
raise ValueError
|
||||
finally:
|
||||
|
|
|
@ -25,6 +25,9 @@ def get_argparser():
|
|||
help="Simulation - does not connect to device")
|
||||
parser.add_argument("core_addr", metavar="CORE_ADDR",
|
||||
help="hostname or IP address of the core device")
|
||||
parser.add_argument("-s", "--satellite", default=0,
|
||||
metavar="DRTIO_ID", type=int,
|
||||
help="the logged DRTIO destination")
|
||||
return parser
|
||||
|
||||
|
||||
|
@ -39,7 +42,7 @@ async def get_logs_sim(host):
|
|||
log_with_name("firmware.simulation", logging.INFO, "hello " + host)
|
||||
|
||||
|
||||
async def get_logs(host):
|
||||
async def get_logs(host, drtio_dest):
|
||||
try:
|
||||
reader, writer = await async_open_connection(
|
||||
host,
|
||||
|
@ -49,6 +52,7 @@ async def get_logs(host):
|
|||
max_fails=3,
|
||||
)
|
||||
writer.write(b"ARTIQ management\n")
|
||||
writer.write(drtio_dest.to_bytes(1))
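        # single byte selecting which DRTIO destination's log is requested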
|
||||
endian = await reader.readexactly(1)
|
||||
if endian == b"e":
|
||||
endian = "<"
|
||||
|
@ -96,7 +100,7 @@ def main():
|
|||
signal_handler.setup()
|
||||
try:
|
||||
get_logs_task = asyncio.ensure_future(
|
||||
get_logs_sim(args.core_addr) if args.simulation else get_logs(args.core_addr),
|
||||
get_logs_sim(args.core_addr) if args.simulation else get_logs(args.core_addr, args.satellite),
|
||||
loop=loop)
|
||||
try:
|
||||
server = Server({"corelog": PingTarget()}, None, True)
|
||||
|
|
|
@ -7,7 +7,7 @@ import os
|
|||
import logging
|
||||
import sys
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
from qasync import QEventLoop
|
||||
|
||||
from sipyco.asyncio_tools import atexit_register_coroutine
|
||||
|
@ -68,9 +68,9 @@ class Browser(QtWidgets.QMainWindow):
|
|||
browse_root, dataset_sub)
|
||||
smgr.register(self.experiments)
|
||||
self.experiments.setHorizontalScrollBarPolicy(
|
||||
QtCore.Qt.ScrollBarPolicy.ScrollBarAsNeeded)
|
||||
QtCore.Qt.ScrollBarAsNeeded)
|
||||
self.experiments.setVerticalScrollBarPolicy(
|
||||
QtCore.Qt.ScrollBarPolicy.ScrollBarAsNeeded)
|
||||
QtCore.Qt.ScrollBarAsNeeded)
|
||||
self.setCentralWidget(self.experiments)
|
||||
|
||||
self.files = files.FilesDock(dataset_sub, browse_root)
|
||||
|
@ -91,29 +91,29 @@ class Browser(QtWidgets.QMainWindow):
|
|||
|
||||
self.log = log.LogDock(None, "log")
|
||||
smgr.register(self.log)
|
||||
self.log.setFeatures(self.log.DockWidgetFeature.DockWidgetMovable |
|
||||
self.log.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.log.setFeatures(self.log.DockWidgetMovable |
|
||||
self.log.DockWidgetFloatable)
|
||||
|
||||
self.addDockWidget(QtCore.Qt.DockWidgetArea.LeftDockWidgetArea, self.files)
|
||||
self.addDockWidget(QtCore.Qt.DockWidgetArea.BottomDockWidgetArea, self.applets)
|
||||
self.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, self.datasets)
|
||||
self.addDockWidget(QtCore.Qt.DockWidgetArea.BottomDockWidgetArea, self.log)
|
||||
self.addDockWidget(QtCore.Qt.LeftDockWidgetArea, self.files)
|
||||
self.addDockWidget(QtCore.Qt.BottomDockWidgetArea, self.applets)
|
||||
self.addDockWidget(QtCore.Qt.RightDockWidgetArea, self.datasets)
|
||||
self.addDockWidget(QtCore.Qt.BottomDockWidgetArea, self.log)
|
||||
|
||||
g = self.menuBar().addMenu("&Experiment")
|
||||
a = QtGui.QAction("&Open", self)
|
||||
a = QtWidgets.QAction("&Open", self)
|
||||
a.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
|
||||
a.setShortcuts(QtGui.QKeySequence.StandardKey.Open)
|
||||
QtWidgets.QStyle.SP_DialogOpenButton))
|
||||
a.setShortcuts(QtGui.QKeySequence.Open)
|
||||
a.setStatusTip("Open an experiment")
|
||||
a.triggered.connect(self.experiments.select_experiment)
|
||||
g.addAction(a)
|
||||
|
||||
g = self.menuBar().addMenu("&View")
|
||||
a = QtGui.QAction("Cascade", self)
|
||||
a = QtWidgets.QAction("Cascade", self)
|
||||
a.setStatusTip("Cascade experiment windows")
|
||||
a.triggered.connect(self.experiments.cascadeSubWindows)
|
||||
g.addAction(a)
|
||||
a = QtGui.QAction("Tile", self)
|
||||
a = QtWidgets.QAction("Tile", self)
|
||||
a.setStatusTip("Tile experiment windows")
|
||||
a.triggered.connect(self.experiments.tileSubWindows)
|
||||
g.addAction(a)
|
||||
|
@ -140,12 +140,7 @@ def main():
|
|||
args.db_file = os.path.join(get_user_config_dir(), "artiq_browser.pyon")
|
||||
widget_log_handler = log.init_log(args, "browser")
|
||||
|
||||
forced_platform = []
|
||||
if (QtGui.QGuiApplication.platformName() == "wayland" and
|
||||
not os.getenv("QT_QPA_PLATFORM")):
|
||||
# force XCB instead of Wayland due to applets not embedding
|
||||
forced_platform = ["-platform", "xcb"]
|
||||
app = QtWidgets.QApplication(["ARTIQ Browser"] + forced_platform)
|
||||
app = QtWidgets.QApplication(["ARTIQ Browser"])
|
||||
loop = QEventLoop(app)
|
||||
asyncio.set_event_loop(loop)
|
||||
atexit.register(loop.close)
|
||||
|
|
|
@ -1,4 +1,10 @@
|
|||
#!/usr/bin/env python3
|
||||
"""
|
||||
Client to send commands to :mod:`artiq_master` and display results locally.
|
||||
|
||||
The client can perform actions such as accessing/setting datasets,
|
||||
scanning devices, scheduling experiments, and looking for experiments/devices.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
|
|
|
@ -1,7 +1,10 @@
|
|||
#!/usr/bin/env python3
|
||||
|
||||
import argparse
|
||||
import os
|
||||
import struct
|
||||
import tempfile
|
||||
import atexit
|
||||
|
||||
from sipyco import common_args
|
||||
|
||||
|
@ -9,6 +12,8 @@ from artiq import __version__ as artiq_version
|
|||
from artiq.master.databases import DeviceDB
|
||||
from artiq.coredevice.comm_kernel import CommKernel
|
||||
from artiq.coredevice.comm_mgmt import CommMgmt
|
||||
from artiq.frontend.bit2bin import bit2bin
|
||||
from misoc.tools.mkmscimg import insert_crc
|
||||
|
||||
|
||||
def get_argparser():
|
||||
|
@ -85,6 +90,17 @@ def get_argparser():
|
|||
t_boot = tools.add_parser("reboot",
|
||||
help="reboot the running system")
|
||||
|
||||
# flashing
|
||||
t_flash = tools.add_parser("flash",
|
||||
help="flash the running system")
|
||||
|
||||
p_directory = t_flash.add_argument("directory", metavar="DIRECTORY", type=str,
|
||||
help="directory that contains the binaries")
|
||||
|
||||
p_srcbuild = t_flash.add_argument("--srcbuild",
|
||||
help="board binaries directory is laid out as a source build tree",
|
||||
default=False, action="store_true")
|
||||
|
||||
# misc debug
|
||||
t_debug = tools.add_parser("debug",
|
||||
help="specialized debug functions")
|
||||
|
@ -95,6 +111,11 @@ def get_argparser():
|
|||
p_allocator = subparsers.add_parser("allocator",
|
||||
help="show heap layout")
|
||||
|
||||
# manage target
|
||||
p_drtio_dest = parser.add_argument("-s", "--satellite", default=0,
|
||||
metavar="DRTIO ID", type=int,
|
||||
help="specify DRTIO destination that receives this command")
|
||||
|
||||
return parser
|
||||
|
||||
|
||||
|
@ -107,7 +128,7 @@ def main():
|
|||
core_addr = ddb.get("core", resolve_alias=True)["arguments"]["host"]
|
||||
else:
|
||||
core_addr = args.device
|
||||
mgmt = CommMgmt(core_addr)
|
||||
mgmt = CommMgmt(core_addr, drtio_dest=args.satellite)
|
||||
|
||||
if args.tool == "log":
|
||||
if args.action == "set_level":
|
||||
|
@ -137,6 +158,43 @@ def main():
|
|||
mgmt.config_remove(key)
|
||||
if args.action == "erase":
|
||||
mgmt.config_erase()
|
||||
|
||||
if args.tool == "flash":
|
||||
def artifact_path(this_binary_dir, *path_filename):
|
||||
if args.srcbuild:
|
||||
# source tree - use path elements to locate file
|
||||
return os.path.join(this_binary_dir, *path_filename)
|
||||
else:
|
||||
# flat tree - all files in the same directory, discard path elements
|
||||
*_, filename = path_filename
|
||||
return os.path.join(this_binary_dir, filename)
|
||||
|
||||
def convert_gateware(bit_filename):
|
||||
bin_handle, bin_filename = tempfile.mkstemp(
|
||||
prefix="artiq_", suffix="_" + os.path.basename(bit_filename))
|
||||
with open(bit_filename, "rb") as bit_file, open(bin_handle, "wb") as bin_file:
|
||||
bit2bin(bit_file, bin_file)
|
||||
atexit.register(lambda: os.unlink(bin_filename))
|
||||
return bin_filename
|
||||
|
||||
gateware = convert_gateware(
|
||||
artifact_path(args.directory, "gateware", "top.bit"))
|
||||
bootloader = artifact_path(args.directory, "software", "bootloader", "bootloader.bin")
|
||||
|
||||
firmwares = []
|
||||
for firmware in "satman", "runtime":
|
||||
filename = artifact_path(args.directory, "software", firmware, firmware + ".fbi")
|
||||
if os.path.exists(filename):
|
||||
firmwares.append(filename)
|
||||
if not firmwares:
|
||||
raise FileNotFoundError("no firmware found")
|
||||
if len(firmwares) > 1:
|
||||
raise ValueError("more than one firmware file, please clean up your build directory. "
|
||||
"Found firmware files: {}".format(" ".join(firmwares)))
|
||||
firmware = firmwares[0]
|
||||
|
||||
bins = [ gateware, bootloader, firmware ]
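        # Order matters: the satellite-side flash routine (Manager::flash_image)
        # parses the binaries as gateware, bootloader, then firmware.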
|
||||
mgmt.flash(bins)
|
||||
|
||||
if args.tool == "reboot":
|
||||
mgmt.reboot()
|
||||
|
|
|
@ -7,7 +7,7 @@ import importlib
|
|||
import os
|
||||
import logging
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
from qasync import QEventLoop
|
||||
|
||||
from sipyco.pc_rpc import AsyncioClient, Client
|
||||
|
@ -95,15 +95,14 @@ class MdiArea(QtWidgets.QMdiArea):
|
|||
self.pixmap = QtGui.QPixmap(os.path.join(
|
||||
artiq_dir, "gui", "logo_ver.svg"))
|
||||
|
||||
self.setActivationOrder(
|
||||
QtWidgets.QMdiArea.WindowOrder.ActivationHistoryOrder)
|
||||
self.setActivationOrder(self.ActivationHistoryOrder)
|
||||
|
||||
self.tile = QtGui.QShortcut(
|
||||
self.tile = QtWidgets.QShortcut(
|
||||
QtGui.QKeySequence('Ctrl+Shift+T'), self)
|
||||
self.tile.activated.connect(
|
||||
lambda: self.tileSubWindows())
|
||||
|
||||
self.cascade = QtGui.QShortcut(
|
||||
self.cascade = QtWidgets.QShortcut(
|
||||
QtGui.QKeySequence('Ctrl+Shift+C'), self)
|
||||
self.cascade.activated.connect(
|
||||
lambda: self.cascadeSubWindows())
|
||||
|
@ -133,12 +132,7 @@ def main():
|
|||
server=args.server.replace(":", "."),
|
||||
port=args.port_notify))
|
||||
|
||||
forced_platform = []
|
||||
if (QtGui.QGuiApplication.platformName() == "wayland" and
|
||||
not os.getenv("QT_QPA_PLATFORM")):
|
||||
# force XCB instead of Wayland due to applets not embedding
|
||||
forced_platform = ["-platform", "xcb"]
|
||||
app = QtWidgets.QApplication(["ARTIQ Dashboard"] + forced_platform)
|
||||
app = QtWidgets.QApplication(["ARTIQ Dashboard"])
|
||||
loop = QEventLoop(app)
|
||||
asyncio.set_event_loop(loop)
|
||||
atexit.register(loop.close)
|
||||
|
@ -192,8 +186,8 @@ def main():
|
|||
main_window = MainWindow(args.server if server_name is None else server_name)
|
||||
smgr.register(main_window)
|
||||
mdi_area = MdiArea()
|
||||
mdi_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAsNeeded)
|
||||
mdi_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAsNeeded)
|
||||
mdi_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
|
||||
mdi_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
|
||||
main_window.setCentralWidget(mdi_area)
|
||||
|
||||
# create UI components
|
||||
|
@ -263,10 +257,10 @@ def main():
|
|||
d_datasets, d_applets,
|
||||
d_waveform, d_interactive_args
|
||||
]
|
||||
main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, right_docks[0])
|
||||
main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, right_docks[0])
|
||||
for d1, d2 in zip(right_docks, right_docks[1:]):
|
||||
main_window.tabifyDockWidget(d1, d2)
|
||||
main_window.addDockWidget(QtCore.Qt.DockWidgetArea.BottomDockWidgetArea, d_schedule)
|
||||
main_window.addDockWidget(QtCore.Qt.BottomDockWidgetArea, d_schedule)
|
||||
|
||||
# load/initialize state
|
||||
if os.name == "nt":
|
||||
|
|
|
@ -11,7 +11,7 @@ def get_argparser():
|
|||
description="ARTIQ session manager. "
|
||||
"Automatically runs the master, dashboard and "
|
||||
"local controller manager on the current machine. "
|
||||
"The latter requires the ``artiq-comtools`` package to "
|
||||
"The latter requires the artiq-comtools package to "
|
||||
"be installed.")
|
||||
parser.add_argument("--version", action="version",
|
||||
version="ARTIQ v{}".format(artiq_version),
|
||||
|
|
|
@ -9,7 +9,7 @@ from functools import partial
|
|||
from itertools import count
|
||||
from types import SimpleNamespace
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
|
||||
from sipyco.pipe_ipc import AsyncioParentComm
|
||||
from sipyco.logging_tools import LogParser
|
||||
|
@ -29,7 +29,7 @@ class EntryArea(EntryTreeWidget):
|
|||
reset_all_button.setToolTip("Reset all to default values")
|
||||
reset_all_button.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
|
||||
QtWidgets.QStyle.SP_BrowserReload))
|
||||
reset_all_button.clicked.connect(self.reset_all)
|
||||
buttons = LayoutWidget()
|
||||
buttons.layout.setColumnStretch(0, 1)
|
||||
|
@ -137,7 +137,7 @@ class AppletIPCServer(AsyncioParentComm):
|
|||
return
|
||||
self.write_pyon({"action": "mod", "mod": mod})
|
||||
|
||||
async def serve(self, embed_cb):
|
||||
async def serve(self, embed_cb, fix_initial_size_cb):
|
||||
self.dataset_sub.notify_cbs.append(self._on_mod)
|
||||
try:
|
||||
while True:
|
||||
|
@ -145,11 +145,10 @@ class AppletIPCServer(AsyncioParentComm):
|
|||
try:
|
||||
action = obj["action"]
|
||||
if action == "embed":
|
||||
size = embed_cb(obj["win_id"])
|
||||
if size is None:
|
||||
self.write_pyon({"action": "embed_done"})
|
||||
else:
|
||||
self.write_pyon({"action": "embed_done", "size_h": size.height(), "size_w": size.width()})
|
||||
embed_cb(obj["win_id"])
|
||||
self.write_pyon({"action": "embed_done"})
|
||||
elif action == "fix_initial_size":
|
||||
fix_initial_size_cb()
|
||||
elif action == "subscribe":
|
||||
self.datasets = obj["datasets"]
|
||||
self.dataset_prefixes = obj["dataset_prefixes"]
|
||||
|
@ -177,9 +176,9 @@ class AppletIPCServer(AsyncioParentComm):
|
|||
finally:
|
||||
self.dataset_sub.notify_cbs.remove(self._on_mod)
|
||||
|
||||
def start_server(self, embed_cb, *, loop=None):
|
||||
def start_server(self, embed_cb, fix_initial_size_cb, *, loop=None):
|
||||
self.server_task = asyncio.ensure_future(
|
||||
self.serve(embed_cb), loop=loop)
|
||||
self.serve(embed_cb, fix_initial_size_cb), loop=loop)
|
||||
|
||||
async def stop_server(self):
|
||||
if hasattr(self, "server_task"):
|
||||
|
@ -240,7 +239,7 @@ class _AppletDock(QDockWidgetCloseDetect):
|
|||
asyncio.ensure_future(
|
||||
LogParser(self._get_log_source).stream_task(
|
||||
self.ipc.process.stderr))
|
||||
self.ipc.start_server(self.embed)
|
||||
self.ipc.start_server(self.embed, self.fix_initial_size)
|
||||
finally:
|
||||
self.starting_stopping = False
|
||||
|
||||
|
@ -269,9 +268,11 @@ class _AppletDock(QDockWidgetCloseDetect):
|
|||
self.embed_widget = QtWidgets.QWidget.createWindowContainer(
|
||||
self.embed_window)
|
||||
self.setWidget(self.embed_widget)
|
||||
# return the size after embedding. Applet must resize to that,
|
||||
# otherwise the applet may not fit within the dock properly.
|
||||
return self.embed_widget.size()
|
||||
|
||||
# HACK: This function would not be needed if Qt window embedding
|
||||
# worked correctly.
|
||||
def fix_initial_size(self):
|
||||
self.embed_window.resize(self.embed_widget.size())
|
||||
|
||||
async def terminate(self, delete_self=True):
|
||||
if self.starting_stopping:
|
||||
|
@ -387,8 +388,8 @@ class _CompleterDelegate(QtWidgets.QStyledItemDelegate):
|
|||
completer = QtWidgets.QCompleter()
|
||||
completer.splitPath = lambda path: path.replace("/", ".").split(".")
|
||||
completer.setModelSorting(
|
||||
QtWidgets.QCompleter.ModelSorting.CaseSensitivelySortedModel)
|
||||
completer.setCompletionRole(QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
QtWidgets.QCompleter.CaseSensitivelySortedModel)
|
||||
completer.setCompletionRole(QtCore.Qt.DisplayRole)
|
||||
if hasattr(self, "model"):
|
||||
# "TODO: Optimize updates in the source model"
|
||||
# - Qt (qcompleter.cpp), never ceasing to disappoint.
|
||||
|
@ -419,8 +420,8 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
"""
|
||||
QtWidgets.QDockWidget.__init__(self, "Applets")
|
||||
self.setObjectName("Applets")
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
|
||||
self.main_window = main_window
|
||||
self.dataset_sub = dataset_sub
|
||||
|
@ -434,18 +435,18 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
self.table = QtWidgets.QTreeWidget()
|
||||
self.table.setColumnCount(2)
|
||||
self.table.setHeaderLabels(["Name", "Command"])
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
|
||||
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
|
||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
|
||||
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection)
|
||||
|
||||
self.table.header().setStretchLastSection(True)
|
||||
self.table.header().setSectionResizeMode(
|
||||
QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
|
||||
self.table.setTextElideMode(QtCore.Qt.TextElideMode.ElideNone)
|
||||
QtWidgets.QHeaderView.ResizeToContents)
|
||||
self.table.setTextElideMode(QtCore.Qt.ElideNone)
|
||||
|
||||
self.table.setDragEnabled(True)
|
||||
self.table.viewport().setAcceptDrops(True)
|
||||
self.table.setDropIndicatorShown(True)
|
||||
self.table.setDragDropMode(QtWidgets.QAbstractItemView.DragDropMode.InternalMove)
|
||||
self.table.setDragDropMode(QtWidgets.QAbstractItemView.InternalMove)
|
||||
|
||||
self.setWidget(self.table)
|
||||
|
||||
|
@ -453,44 +454,44 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
self.table.setItemDelegateForColumn(1, completer_delegate)
|
||||
dataset_sub.add_setmodel_callback(completer_delegate.set_model)
|
||||
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
new_action = QtGui.QAction("New applet", self.table)
|
||||
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
new_action = QtWidgets.QAction("New applet", self.table)
|
||||
new_action.triggered.connect(partial(self.new_with_parent, self.new))
|
||||
self.table.addAction(new_action)
|
||||
templates_menu = QtWidgets.QMenu(self.table)
|
||||
templates_menu = QtWidgets.QMenu()
|
||||
for name, template in _templates:
|
||||
spec = {"ty": "command", "command": template}
|
||||
action = QtGui.QAction(name, self.table)
|
||||
action = QtWidgets.QAction(name, self.table)
|
||||
action.triggered.connect(partial(
|
||||
self.new_with_parent, self.new, spec=spec))
|
||||
templates_menu.addAction(action)
|
||||
restart_action = QtGui.QAction("New applet from template", self.table)
|
||||
restart_action = QtWidgets.QAction("New applet from template", self.table)
|
||||
restart_action.setMenu(templates_menu)
|
||||
self.table.addAction(restart_action)
|
||||
restart_action = QtGui.QAction("Restart selected applet or group", self.table)
|
||||
restart_action = QtWidgets.QAction("Restart selected applet or group", self.table)
|
||||
restart_action.setShortcut("CTRL+R")
|
||||
restart_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
restart_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
restart_action.triggered.connect(self.restart)
|
||||
self.table.addAction(restart_action)
|
||||
delete_action = QtGui.QAction("Delete selected applet or group", self.table)
|
||||
delete_action = QtWidgets.QAction("Delete selected applet or group", self.table)
|
||||
delete_action.setShortcut("DELETE")
|
||||
delete_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
delete_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
delete_action.triggered.connect(self.delete)
|
||||
self.table.addAction(delete_action)
|
||||
close_nondocked_action = QtGui.QAction("Close non-docked applets", self.table)
|
||||
close_nondocked_action = QtWidgets.QAction("Close non-docked applets", self.table)
|
||||
close_nondocked_action.setShortcut("CTRL+ALT+W")
|
||||
close_nondocked_action.setShortcutContext(QtCore.Qt.ShortcutContext.ApplicationShortcut)
|
||||
close_nondocked_action.setShortcutContext(QtCore.Qt.ApplicationShortcut)
|
||||
close_nondocked_action.triggered.connect(self.close_nondocked)
|
||||
self.table.addAction(close_nondocked_action)
|
||||
|
||||
new_group_action = QtGui.QAction("New group", self.table)
|
||||
new_group_action = QtWidgets.QAction("New group", self.table)
|
||||
new_group_action.triggered.connect(partial(self.new_with_parent, self.new_group))
|
||||
self.table.addAction(new_group_action)
|
||||
|
||||
self.table.itemChanged.connect(self.item_changed)
|
||||
|
||||
# HACK
|
||||
self.table.setEditTriggers(QtWidgets.QAbstractItemView.EditTrigger.NoEditTriggers)
|
||||
self.table.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
|
||||
self.table.itemDoubleClicked.connect(self.open_editor)
|
||||
|
||||
def open_editor(self, item, column):
|
||||
|
@ -517,7 +518,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
del item.applet_code
|
||||
elif spec["ty"] == "code":
|
||||
item.setIcon(1, QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileIcon))
|
||||
QtWidgets.QStyle.SP_FileIcon))
|
||||
item.applet_code = spec["code"]
|
||||
else:
|
||||
raise ValueError
|
||||
|
@ -529,7 +530,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
|
||||
def create(self, item, name, spec):
|
||||
dock = _AppletDock(self.dataset_sub, self.dataset_ctl, self.expmgr, item.applet_uid, name, spec, self.extra_substitutes)
|
||||
self.main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, dock)
|
||||
self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
|
||||
dock.setFloating(True)
|
||||
asyncio.ensure_future(dock.start(), loop=self._loop)
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, item, dock))
|
||||
|
@ -546,7 +547,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
dock.spec = self.get_spec(item)
|
||||
|
||||
if column == 0:
|
||||
if item.checkState(0) == QtCore.Qt.CheckState.Checked:
|
||||
if item.checkState(0) == QtCore.Qt.Checked:
|
||||
if item.applet_dock is None:
|
||||
name = item.text(0)
|
||||
spec = self.get_spec(item)
|
||||
|
@ -571,7 +572,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
def on_dock_closed(self, item, dock):
|
||||
item.applet_geometry = dock.saveGeometry()
|
||||
asyncio.ensure_future(dock.terminate(), loop=self._loop)
|
||||
item.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
|
||||
item.setCheckState(0, QtCore.Qt.Unchecked)
|
||||
|
||||
def get_untitled(self):
|
||||
existing_names = set()
|
||||
|
@ -601,18 +602,18 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
name = self.get_untitled()
|
||||
item = QtWidgets.QTreeWidgetItem([name, ""])
|
||||
item.ty = "applet"
|
||||
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable |
|
||||
QtCore.Qt.ItemFlag.ItemIsUserCheckable |
|
||||
QtCore.Qt.ItemFlag.ItemIsEditable |
|
||||
QtCore.Qt.ItemFlag.ItemIsDragEnabled |
|
||||
QtCore.Qt.ItemFlag.ItemNeverHasChildren |
|
||||
QtCore.Qt.ItemFlag.ItemIsEnabled)
|
||||
item.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
|
||||
item.setFlags(QtCore.Qt.ItemIsSelectable |
|
||||
QtCore.Qt.ItemIsUserCheckable |
|
||||
QtCore.Qt.ItemIsEditable |
|
||||
QtCore.Qt.ItemIsDragEnabled |
|
||||
QtCore.Qt.ItemNeverHasChildren |
|
||||
QtCore.Qt.ItemIsEnabled)
|
||||
item.setCheckState(0, QtCore.Qt.Unchecked)
|
||||
item.applet_uid = uid
|
||||
item.applet_dock = None
|
||||
item.applet_geometry = None
|
||||
item.setIcon(0, QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_ComputerIcon))
|
||||
QtWidgets.QStyle.SP_ComputerIcon))
|
||||
self.set_spec(item, spec)
|
||||
if parent is None:
|
||||
self.table.addTopLevelItem(item)
|
||||
|
@ -625,15 +626,15 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
name = self.get_untitled()
|
||||
item = QtWidgets.QTreeWidgetItem([name, attr])
|
||||
item.ty = "group"
|
||||
item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable |
|
||||
QtCore.Qt.ItemFlag.ItemIsEditable |
|
||||
QtCore.Qt.ItemFlag.ItemIsUserCheckable |
|
||||
QtCore.Qt.ItemFlag.ItemIsAutoTristate |
|
||||
QtCore.Qt.ItemFlag.ItemIsDragEnabled |
|
||||
QtCore.Qt.ItemFlag.ItemIsDropEnabled |
|
||||
QtCore.Qt.ItemFlag.ItemIsEnabled)
|
||||
item.setFlags(QtCore.Qt.ItemIsSelectable |
|
||||
QtCore.Qt.ItemIsEditable |
|
||||
QtCore.Qt.ItemIsUserCheckable |
|
||||
QtCore.Qt.ItemIsTristate |
|
||||
QtCore.Qt.ItemIsDragEnabled |
|
||||
QtCore.Qt.ItemIsDropEnabled |
|
||||
QtCore.Qt.ItemIsEnabled)
|
||||
item.setIcon(0, QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DirIcon))
|
||||
QtWidgets.QStyle.SP_DirIcon))
|
||||
if parent is None:
|
||||
self.table.addTopLevelItem(item)
|
||||
else:
|
||||
|
@ -711,7 +712,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
cwi = wi.child(row)
|
||||
if cwi.ty == "applet":
|
||||
uid = cwi.applet_uid
|
||||
enabled = cwi.checkState(0) == QtCore.Qt.CheckState.Checked
|
||||
enabled = cwi.checkState(0) == QtCore.Qt.Checked
|
||||
name = cwi.text(0)
|
||||
spec = self.get_spec(cwi)
|
||||
geometry = cwi.applet_geometry
|
||||
|
@ -743,7 +744,7 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
geometry = QtCore.QByteArray(geometry)
|
||||
item.applet_geometry = geometry
|
||||
if enabled:
|
||||
item.setCheckState(0, QtCore.Qt.CheckState.Checked)
|
||||
item.setCheckState(0, QtCore.Qt.Checked)
|
||||
elif wis[0] == "group":
|
||||
_, name, attr, expanded, state_child = wis
|
||||
item = self.new_group(name, attr, parent=parent)
|
||||
|
@ -760,11 +761,11 @@ class AppletsDock(QtWidgets.QDockWidget):
|
|||
for i in range(wi.childCount()):
|
||||
cwi = wi.child(i)
|
||||
if cwi.ty == "applet":
|
||||
if cwi.checkState(0) == QtCore.Qt.CheckState.Checked:
|
||||
if cwi.checkState(0) == QtCore.Qt.Checked:
|
||||
if cwi.applet_dock is not None:
|
||||
if not cwi.applet_dock.isFloating():
|
||||
continue
|
||||
cwi.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
|
||||
cwi.setCheckState(0, QtCore.Qt.Unchecked)
|
||||
elif cwi.ty == "group":
|
||||
walk(cwi)
|
||||
walk(self.table.invisibleRootItem())
|
||||
|
|
|
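The applets.py hunks above differ only in how the Qt enum members are spelled: PyQt6 expects the fully qualified names, while PyQt5 accepts the unscoped shorthand. A minimal sketch, not taken from the repository, showing both spellings side by side: ::

    from PyQt6 import QtCore, QtWidgets

    app = QtWidgets.QApplication([])
    item = QtWidgets.QTreeWidgetItem(["untitled", ""])
    # PyQt6: enum members must be qualified with their enum class.
    item.setFlags(QtCore.Qt.ItemFlag.ItemIsSelectable |
                  QtCore.Qt.ItemFlag.ItemIsUserCheckable |
                  QtCore.Qt.ItemFlag.ItemIsEnabled)
    item.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
    # PyQt5 equivalent (shorthand names):
    #   item.setFlags(QtCore.Qt.ItemIsSelectable | ...)
    #   item.setCheckState(0, QtCore.Qt.Unchecked)

Note also that Qt 6 keeps only ``ItemIsAutoTristate``; the older ``ItemIsTristate`` alias that appears in the PyQt5 variant of ``new_group()`` was dropped.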
@ -1,4 +1,4 @@
|
|||
from PyQt6 import QtCore, QtWidgets, QtGui
|
||||
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||
|
||||
from artiq.gui.flowlayout import FlowLayout
|
||||
|
||||
|
@ -10,7 +10,7 @@ class VDragDropSplitter(QtWidgets.QSplitter):
|
|||
QtWidgets.QSplitter.__init__(self, parent=parent)
|
||||
self.setAcceptDrops(True)
|
||||
self.setContentsMargins(0, 0, 0, 0)
|
||||
self.setOrientation(QtCore.Qt.Orientation.Vertical)
|
||||
self.setOrientation(QtCore.Qt.Vertical)
|
||||
self.setChildrenCollapsible(False)
|
||||
|
||||
def resetSizes(self):
|
||||
|
@ -24,7 +24,7 @@ class VDragDropSplitter(QtWidgets.QSplitter):
|
|||
e.accept()
|
||||
|
||||
def dragMoveEvent(self, e):
|
||||
pos = e.position()
|
||||
pos = e.pos()
|
||||
src = e.source()
|
||||
src_i = self.indexOf(src)
|
||||
self.setRubberBand(self.height())
|
||||
|
@ -48,7 +48,7 @@ class VDragDropSplitter(QtWidgets.QSplitter):
|
|||
|
||||
def dropEvent(self, e):
|
||||
self.setRubberBand(-1)
|
||||
pos = e.position()
|
||||
pos = e.pos()
|
||||
src = e.source()
|
||||
src_i = self.indexOf(src)
|
||||
for n in range(self.count()):
|
||||
|
@ -78,10 +78,10 @@ class VDragScrollArea(QtWidgets.QScrollArea):
|
|||
self._speed = speed
|
||||
|
||||
def eventFilter(self, obj, e):
|
||||
if e.type() == QtCore.QEvent.Type.DragMove:
|
||||
if e.type() == QtCore.QEvent.DragMove:
|
||||
val = self.verticalScrollBar().value()
|
||||
height = self.viewport().height()
|
||||
y = e.position().y()
|
||||
y = e.pos().y()
|
||||
self._direction = 0
|
||||
if y < val + self._margin:
|
||||
self._direction = -1
|
||||
|
@ -89,7 +89,7 @@ class VDragScrollArea(QtWidgets.QScrollArea):
|
|||
self._direction = 1
|
||||
if not self._timer.isActive():
|
||||
self._timer.start()
|
||||
elif e.type() in (QtCore.QEvent.Type.Drop, QtCore.QEvent.Type.DragLeave):
|
||||
elif e.type() in (QtCore.QEvent.Drop, QtCore.QEvent.DragLeave):
|
||||
self._timer.stop()
|
||||
return False
|
||||
|
||||
|
@ -117,9 +117,9 @@ class DragDropFlowLayoutWidget(QtWidgets.QWidget):
|
|||
return -1
|
||||
|
||||
def mousePressEvent(self, event):
|
||||
if event.buttons() == QtCore.Qt.MouseButton.LeftButton \
|
||||
and event.modifiers() == QtCore.Qt.KeyboardModifier.ShiftModifier:
|
||||
index = self._get_index(event.position())
|
||||
if event.buttons() == QtCore.Qt.LeftButton \
|
||||
and event.modifiers() == QtCore.Qt.ShiftModifier:
|
||||
index = self._get_index(event.pos())
|
||||
if index == -1:
|
||||
return
|
||||
drag = QtGui.QDrag(self)
|
||||
|
@ -127,7 +127,7 @@ class DragDropFlowLayoutWidget(QtWidgets.QWidget):
|
|||
mime.setData("index", str(index).encode())
|
||||
drag.setMimeData(mime)
|
||||
pixmapi = QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileIcon)
|
||||
QtWidgets.QStyle.SP_FileIcon)
|
||||
drag.setPixmap(pixmapi.pixmap(32))
|
||||
drag.exec_(QtCore.Qt.MoveAction)
|
||||
event.accept()
|
||||
|
@ -136,7 +136,7 @@ class DragDropFlowLayoutWidget(QtWidgets.QWidget):
|
|||
event.accept()
|
||||
|
||||
def dropEvent(self, event):
|
||||
index = self._get_index(event.position())
|
||||
index = self._get_index(event.pos())
|
||||
source_layout = event.source()
|
||||
source_index = int(bytes(event.mimeData().data("index")).decode())
|
||||
if source_layout == self:
|
||||
|
|
|
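In the drag-and-drop handlers above, the only difference is ``e.position()`` versus ``e.pos()``: Qt 6 reports event coordinates as a ``QPointF`` through ``position()`` and deprecates the older ``pos()`` accessor. A small illustrative handler (class and names are placeholders): ::

    from PyQt6 import QtWidgets

    class DropArea(QtWidgets.QWidget):
        def __init__(self):
            super().__init__()
            self.setAcceptDrops(True)

        def dragEnterEvent(self, e):
            e.accept()

        def dropEvent(self, e):
            # Qt 6: position() returns a QPointF; convert when an
            # integer-based QPoint is needed (PyQt5 code used e.pos()).
            point = e.position().toPoint()
            print("dropped at", point.x(), point.y())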
@ -2,7 +2,7 @@ import logging
|
|||
from collections import OrderedDict
|
||||
from functools import partial
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
|
||||
from artiq.gui.tools import LayoutWidget, disable_scroll_wheel, WheelFilter
|
||||
from artiq.gui.scanwidget import ScanWidget
|
||||
|
@ -23,13 +23,13 @@ class EntryTreeWidget(QtWidgets.QTreeWidget):
|
|||
set_resize_mode = self.header().setSectionResizeMode
|
||||
else:
|
||||
set_resize_mode = self.header().setResizeMode
|
||||
set_resize_mode(0, QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
|
||||
set_resize_mode(1, QtWidgets.QHeaderView.ResizeMode.Stretch)
|
||||
set_resize_mode(2, QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
|
||||
set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents)
|
||||
set_resize_mode(1, QtWidgets.QHeaderView.Stretch)
|
||||
set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents)
|
||||
self.header().setVisible(False)
|
||||
self.setSelectionMode(self.SelectionMode.NoSelection)
|
||||
self.setHorizontalScrollMode(self.ScrollMode.ScrollPerPixel)
|
||||
self.setVerticalScrollMode(self.ScrollMode.ScrollPerPixel)
|
||||
self.setSelectionMode(self.NoSelection)
|
||||
self.setHorizontalScrollMode(self.ScrollPerPixel)
|
||||
self.setVerticalScrollMode(self.ScrollPerPixel)
|
||||
|
||||
self.setStyleSheet("QTreeWidget {background: " +
|
||||
self.palette().midlight().color().name() + " ;}")
|
||||
|
@ -82,14 +82,14 @@ class EntryTreeWidget(QtWidgets.QTreeWidget):
|
|||
reset_entry.setToolTip("Reset to default value")
|
||||
reset_entry.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
|
||||
QtWidgets.QStyle.SP_BrowserReload))
|
||||
reset_entry.clicked.connect(partial(self.reset_entry, key))
|
||||
|
||||
disable_other_scans = QtWidgets.QToolButton()
|
||||
widgets["disable_other_scans"] = disable_other_scans
|
||||
disable_other_scans.setIcon(
|
||||
QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogResetButton))
|
||||
QtWidgets.QStyle.SP_DialogResetButton))
|
||||
disable_other_scans.setToolTip("Disable other scans")
|
||||
disable_other_scans.clicked.connect(
|
||||
partial(self._disable_other_scans, key))
|
||||
|
@ -109,6 +109,7 @@ class EntryTreeWidget(QtWidgets.QTreeWidget):
|
|||
group = QtWidgets.QTreeWidgetItem([key])
|
||||
for col in range(3):
|
||||
group.setBackground(col, self.palette().mid())
|
||||
group.setForeground(col, self.palette().brightText())
|
||||
font = group.font(col)
|
||||
font.setBold(True)
|
||||
group.setFont(col, font)
|
||||
|
@ -539,8 +540,7 @@ class _ExplicitScan(LayoutWidget):
|
|||
|
||||
float_regexp = r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)"
|
||||
regexp = "(float)?( +float)* *".replace("float", float_regexp)
|
||||
self.value.setValidator(QtGui.QRegularExpressionValidator(
|
||||
QtCore.QRegularExpression(regexp)))
|
||||
self.value.setValidator(QtGui.QRegExpValidator(QtCore.QRegExp(regexp)))
|
||||
|
||||
self.value.setText(" ".join([str(x) for x in state["sequence"]]))
|
||||
def update(text):
|
||||
|
|
|
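The ``_ExplicitScan`` hunk swaps the validator classes: Qt 6 removes ``QRegExp``/``QRegExpValidator``, and the PCRE-based ``QRegularExpression``/``QRegularExpressionValidator`` take their place. A self-contained sketch of the same float-list validation, reusing the regular expression from the hunk above: ::

    from PyQt6 import QtCore, QtGui, QtWidgets

    app = QtWidgets.QApplication([])
    value = QtWidgets.QLineEdit()
    float_regexp = r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)"
    regexp = "(float)?( +float)* *".replace("float", float_regexp)
    value.setValidator(QtGui.QRegularExpressionValidator(
        QtCore.QRegularExpression(regexp), value))
    # PyQt5 equivalent:
    #   value.setValidator(QtGui.QRegExpValidator(QtCore.QRegExp(regexp), value))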
@ -39,8 +39,9 @@
|
|||
#############################################################################
|
||||
|
||||
|
||||
from PyQt6.QtCore import QPoint, QRect, QSize, Qt
|
||||
from PyQt6.QtWidgets import QLayout, QSizePolicy
|
||||
from PyQt5.QtCore import QPoint, QRect, QSize, Qt
|
||||
from PyQt5.QtWidgets import (QApplication, QLayout, QPushButton, QSizePolicy,
|
||||
QWidget)
|
||||
|
||||
|
||||
class FlowLayout(QLayout):
|
||||
|
@ -112,8 +113,8 @@ class FlowLayout(QLayout):
|
|||
|
||||
for item in self.itemList:
|
||||
wid = item.widget()
|
||||
spaceX = self.spacing() + wid.style().layoutSpacing(QSizePolicy.ControlType.PushButton, QSizePolicy.ControlType.PushButton, Qt.Orientation.Horizontal)
|
||||
spaceY = self.spacing() + wid.style().layoutSpacing(QSizePolicy.ControlType.PushButton, QSizePolicy.ControlType.PushButton, Qt.Orientation.Vertical)
|
||||
spaceX = self.spacing() + wid.style().layoutSpacing(QSizePolicy.PushButton, QSizePolicy.PushButton, Qt.Horizontal)
|
||||
spaceY = self.spacing() + wid.style().layoutSpacing(QSizePolicy.PushButton, QSizePolicy.PushButton, Qt.Vertical)
|
||||
nextX = x + item.sizeHint().width() + spaceX
|
||||
if nextX - spaceX > rect.right() and lineHeight > 0:
|
||||
x = rect.x()
|
||||
|
|
|
@ -2,7 +2,7 @@ import re
|
|||
|
||||
from functools import partial
|
||||
from typing import List, Tuple
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
from artiq.gui.tools import LayoutWidget
|
||||
|
||||
|
@ -19,7 +19,7 @@ class FuzzySelectWidget(LayoutWidget):
|
|||
|
||||
#: Raised when an entry has been selected, giving the label of the user
|
||||
#: choice and any additional QEvent.modifiers() (e.g. Ctrl key pressed).
|
||||
finished = QtCore.pyqtSignal(str, QtCore.Qt.KeyboardModifier)
|
||||
finished = QtCore.pyqtSignal(str, int)
|
||||
|
||||
def __init__(self,
|
||||
choices: List[Tuple[str, int]] = [],
|
||||
|
@ -138,16 +138,16 @@ class FuzzySelectWidget(LayoutWidget):
|
|||
first_action = None
|
||||
last_action = None
|
||||
for choice in filtered_choices:
|
||||
action = QtGui.QAction(choice, self.menu)
|
||||
action = QtWidgets.QAction(choice, self.menu)
|
||||
action.triggered.connect(partial(self._finish, action, choice))
|
||||
action.modifiers = QtCore.Qt.KeyboardModifier.NoModifier
|
||||
action.modifiers = 0
|
||||
self.menu.addAction(action)
|
||||
if not first_action:
|
||||
first_action = action
|
||||
last_action = action
|
||||
|
||||
if num_omitted > 0:
|
||||
action = QtGui.QAction("<{} not shown>".format(num_omitted),
|
||||
action = QtWidgets.QAction("<{} not shown>".format(num_omitted),
|
||||
self.menu)
|
||||
action.setEnabled(False)
|
||||
self.menu.addAction(action)
|
||||
|
@ -239,9 +239,9 @@ class _FocusEventFilter(QtCore.QObject):
|
|||
focus_lost = QtCore.pyqtSignal()
|
||||
|
||||
def eventFilter(self, obj, event):
|
||||
if event.type() == QtCore.QEvent.Type.FocusIn:
|
||||
if event.type() == QtCore.QEvent.FocusIn:
|
||||
self.focus_gained.emit()
|
||||
elif event.type() == QtCore.QEvent.Type.FocusOut:
|
||||
elif event.type() == QtCore.QEvent.FocusOut:
|
||||
self.focus_lost.emit()
|
||||
return False
|
||||
|
||||
|
@ -251,8 +251,8 @@ class _EscapeKeyFilter(QtCore.QObject):
|
|||
escape_pressed = QtCore.pyqtSignal()
|
||||
|
||||
def eventFilter(self, obj, event):
|
||||
if event.type() == QtCore.QEvent.Type.KeyPress:
|
||||
if event.key() == QtCore.Qt.Key.Key_Escape:
|
||||
if event.type() == QtCore.QEvent.KeyPress:
|
||||
if event.key() == QtCore.Qt.Key_Escape:
|
||||
self.escape_pressed.emit()
|
||||
return False
|
||||
|
||||
|
@ -266,13 +266,13 @@ class _UpDownKeyFilter(QtCore.QObject):
|
|||
self.last_item = last_item
|
||||
|
||||
def eventFilter(self, obj, event):
|
||||
if event.type() == QtCore.QEvent.Type.KeyPress:
|
||||
if event.key() == QtCore.Qt.Key.Key_Down:
|
||||
if event.type() == QtCore.QEvent.KeyPress:
|
||||
if event.key() == QtCore.Qt.Key_Down:
|
||||
self.menu.setActiveAction(self.first_item)
|
||||
self.menu.setFocus()
|
||||
return True
|
||||
|
||||
if event.key() == QtCore.Qt.Key.Key_Up:
|
||||
if event.key() == QtCore.Qt.Key_Up:
|
||||
self.menu.setActiveAction(self.last_item)
|
||||
self.menu.setFocus()
|
||||
return True
|
||||
|
@ -286,16 +286,16 @@ class _NonUpDownKeyFilter(QtCore.QObject):
|
|||
self.target = target
|
||||
|
||||
def eventFilter(self, obj, event):
|
||||
if event.type() == QtCore.QEvent.Type.KeyPress:
|
||||
if event.type() == QtCore.QEvent.KeyPress:
|
||||
k = event.key()
|
||||
if k in (QtCore.Qt.Key.Key_Return, QtCore.Qt.Key.Key_Enter):
|
||||
if k in (QtCore.Qt.Key_Return, QtCore.Qt.Key_Enter):
|
||||
action = obj.activeAction()
|
||||
if action is not None:
|
||||
action.modifiers = event.modifiers()
|
||||
return False
|
||||
if (k != QtCore.Qt.Key.Key_Down and k != QtCore.Qt.Key.Key_Up
|
||||
and k != QtCore.Qt.Key.Key_Enter
|
||||
and k != QtCore.Qt.Key.Key_Return):
|
||||
if (k != QtCore.Qt.Key_Down and k != QtCore.Qt.Key_Up
|
||||
and k != QtCore.Qt.Key_Enter
|
||||
and k != QtCore.Qt.Key_Return):
|
||||
QtWidgets.QApplication.sendEvent(self.target, event)
|
||||
return True
|
||||
return False
|
||||
|
|
|
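Two migration details show up in the fuzzy-select hunks above: ``QAction`` lives in ``QtGui`` under Qt 6 (it was a ``QtWidgets`` class in Qt 5), and the ``finished`` signal carries a typed ``Qt.KeyboardModifier`` instead of a plain ``int``. A brief sketch with placeholder names: ::

    from PyQt6 import QtCore, QtGui, QtWidgets

    class Selector(QtCore.QObject):
        # PyQt5 declared this signal as pyqtSignal(str, int).
        finished = QtCore.pyqtSignal(str, QtCore.Qt.KeyboardModifier)

    app = QtWidgets.QApplication([])
    menu = QtWidgets.QMenu()
    action = QtGui.QAction("some choice", menu)   # PyQt5: QtWidgets.QAction(...)
    action.modifiers = QtCore.Qt.KeyboardModifier.NoModifier   # PyQt5 stored 0
    menu.addAction(action)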
@ -1,8 +1,9 @@
|
|||
import logging
|
||||
import time
|
||||
import re
|
||||
from functools import partial
|
||||
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
|
||||
from sipyco.logging_tools import SourceFilter
|
||||
from artiq.gui.tools import (LayoutWidget, log_level_to_name,
|
||||
|
@ -19,7 +20,7 @@ class _ModelItem:
|
|||
class _LogFilterProxyModel(QtCore.QSortFilterProxyModel):
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.setFilterCaseSensitivity(QtCore.Qt.CaseSensitivity.CaseInsensitive)
|
||||
self.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive)
|
||||
self.setRecursiveFilteringEnabled(True)
|
||||
self.filter_level = 0
|
||||
|
||||
|
@ -27,13 +28,13 @@ class _LogFilterProxyModel(QtCore.QSortFilterProxyModel):
|
|||
source = self.sourceModel()
|
||||
index0 = source.index(source_row, 0, source_parent)
|
||||
index1 = source.index(source_row, 1, source_parent)
|
||||
level = source.data(index0, QtCore.Qt.ItemDataRole.UserRole)
|
||||
level = source.data(index0, QtCore.Qt.UserRole)
|
||||
|
||||
if level >= self.filter_level:
|
||||
regex = self.filterRegularExpression()
|
||||
index0_text = source.data(index0, QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
msg_text = source.data(index1, QtCore.Qt.ItemDataRole.DisplayRole)
|
||||
return (regex.match(index0_text).hasMatch() or regex.match(msg_text).hasMatch())
|
||||
regex = self.filterRegExp()
|
||||
index0_text = source.data(index0, QtCore.Qt.DisplayRole)
|
||||
msg_text = source.data(index1, QtCore.Qt.DisplayRole)
|
||||
return (regex.indexIn(index0_text) != -1 or regex.indexIn(msg_text) != -1)
|
||||
else:
|
||||
return False
|
||||
|
||||
|
@ -56,7 +57,7 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
timer.timeout.connect(self.timer_tick)
|
||||
timer.start(100)
|
||||
|
||||
self.fixed_font = QtGui.QFontDatabase.systemFont(QtGui.QFontDatabase.SystemFont.FixedFont)
|
||||
self.fixed_font = QtGui.QFontDatabase.systemFont(QtGui.QFontDatabase.FixedFont)
|
||||
|
||||
self.white = QtGui.QBrush(QtGui.QColor(255, 255, 255))
|
||||
self.black = QtGui.QBrush(QtGui.QColor(0, 0, 0))
|
||||
|
@ -65,8 +66,8 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
self.error_bg = QtGui.QBrush(QtGui.QColor(255, 150, 150))
|
||||
|
||||
def headerData(self, col, orientation, role):
|
||||
if (orientation == QtCore.Qt.Orientation.Horizontal
|
||||
and role == QtCore.Qt.ItemDataRole.DisplayRole):
|
||||
if (orientation == QtCore.Qt.Horizontal
|
||||
and role == QtCore.Qt.DisplayRole):
|
||||
return self.headers[col]
|
||||
return None
|
||||
|
||||
|
@ -154,9 +155,9 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
else:
|
||||
msgnum = item.parent.row
|
||||
|
||||
if role == QtCore.Qt.ItemDataRole.FontRole and index.column() == 1:
|
||||
if role == QtCore.Qt.FontRole and index.column() == 1:
|
||||
return self.fixed_font
|
||||
elif role == QtCore.Qt.ItemDataRole.BackgroundRole:
|
||||
elif role == QtCore.Qt.BackgroundRole:
|
||||
level = self.entries[msgnum][0]
|
||||
if level >= logging.ERROR:
|
||||
return self.error_bg
|
||||
|
@ -164,13 +165,13 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
return self.warning_bg
|
||||
else:
|
||||
return self.white
|
||||
elif role == QtCore.Qt.ItemDataRole.ForegroundRole:
|
||||
elif role == QtCore.Qt.ForegroundRole:
|
||||
level = self.entries[msgnum][0]
|
||||
if level <= logging.DEBUG:
|
||||
return self.debug_fg
|
||||
else:
|
||||
return self.black
|
||||
elif role == QtCore.Qt.ItemDataRole.DisplayRole:
|
||||
elif role == QtCore.Qt.DisplayRole:
|
||||
v = self.entries[msgnum]
|
||||
column = index.column()
|
||||
if item.parent is self:
|
||||
|
@ -183,7 +184,7 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
return ""
|
||||
else:
|
||||
return v[3][item.row+1]
|
||||
elif role == QtCore.Qt.ItemDataRole.ToolTipRole:
|
||||
elif role == QtCore.Qt.ToolTipRole:
|
||||
v = self.entries[msgnum]
|
||||
if item.parent is self:
|
||||
lineno = 0
|
||||
|
@ -192,7 +193,7 @@ class _Model(QtCore.QAbstractItemModel):
|
|||
return (log_level_to_name(v[0]) + ", " +
|
||||
time.strftime("%m/%d %H:%M:%S", time.localtime(v[2])) +
|
||||
"\n" + v[3][lineno])
|
||||
elif role == QtCore.Qt.ItemDataRole.UserRole:
|
||||
elif role == QtCore.Qt.UserRole:
|
||||
return self.entries[msgnum][0]
|
||||
|
||||
|
||||
|
@ -217,13 +218,13 @@ class LogDock(QDockWidgetCloseDetect):
|
|||
scrollbottom = QtWidgets.QToolButton()
|
||||
scrollbottom.setToolTip("Scroll to bottom")
|
||||
scrollbottom.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_ArrowDown))
|
||||
QtWidgets.QStyle.SP_ArrowDown))
|
||||
grid.addWidget(scrollbottom, 0, 3)
|
||||
scrollbottom.clicked.connect(self.scroll_to_bottom)
|
||||
|
||||
clear = QtWidgets.QToolButton()
|
||||
clear.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_DialogResetButton))
|
||||
QtWidgets.QStyle.SP_DialogResetButton))
|
||||
grid.addWidget(clear, 0, 4)
|
||||
clear.clicked.connect(lambda: self.model.clear())
|
||||
|
||||
|
@ -231,7 +232,7 @@ class LogDock(QDockWidgetCloseDetect):
|
|||
newdock = QtWidgets.QToolButton()
|
||||
newdock.setToolTip("Create new log dock")
|
||||
newdock.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||
QtWidgets.QStyle.StandardPixmap.SP_FileDialogNewFolder))
|
||||
QtWidgets.QStyle.SP_FileDialogNewFolder))
|
||||
# note the lambda, the default parameter is overriden otherwise
|
||||
newdock.clicked.connect(lambda: manager.create_new_dock())
|
||||
grid.addWidget(newdock, 0, 5)
|
||||
|
@ -239,27 +240,27 @@ class LogDock(QDockWidgetCloseDetect):
|
|||
|
||||
self.log = QtWidgets.QTreeView()
|
||||
self.log.setHorizontalScrollMode(
|
||||
QtWidgets.QAbstractItemView.ScrollMode.ScrollPerPixel)
|
||||
QtWidgets.QAbstractItemView.ScrollPerPixel)
|
||||
self.log.setVerticalScrollMode(
|
||||
QtWidgets.QAbstractItemView.ScrollMode.ScrollPerPixel)
|
||||
self.log.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAsNeeded)
|
||||
QtWidgets.QAbstractItemView.ScrollPerPixel)
|
||||
self.log.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
|
||||
grid.addWidget(self.log, 1, 0, colspan=6 if manager else 5)
|
||||
self.scroll_at_bottom = False
|
||||
self.scroll_value = 0
|
||||
|
||||
self.log.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
copy_action = QtGui.QAction("Copy entry to clipboard", self.log)
|
||||
self.log.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
copy_action = QtWidgets.QAction("Copy entry to clipboard", self.log)
|
||||
copy_action.triggered.connect(self.copy_to_clipboard)
|
||||
self.log.addAction(copy_action)
|
||||
clear_action = QtGui.QAction("Clear", self.log)
|
||||
clear_action = QtWidgets.QAction("Clear", self.log)
|
||||
clear_action.triggered.connect(lambda: self.model.clear())
|
||||
self.log.addAction(clear_action)
|
||||
|
||||
# If Qt worked correctly, this would be nice to have. Alas, resizeSections
|
||||
# is broken when the horizontal scrollbar is enabled.
|
||||
# sizeheader_action = QtGui.QAction("Resize header", self.log)
|
||||
# sizeheader_action = QtWidgets.QAction("Resize header", self.log)
|
||||
# sizeheader_action.triggered.connect(
|
||||
# lambda: self.log.header().resizeSections(QtWidgets.QHeaderView.ResizeMode.ResizeToContents))
|
||||
# lambda: self.log.header().resizeSections(QtWidgets.QHeaderView.ResizeToContents))
|
||||
# self.log.addAction(sizeheader_action)
|
||||
|
||||
cw = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
||||
|
@ -278,7 +279,7 @@ class LogDock(QDockWidgetCloseDetect):
|
|||
self.filter_level.currentIndexChanged.connect(self.apply_level_filter)
|
||||
|
||||
def apply_text_filter(self):
|
||||
self.proxy_model.setFilterRegularExpression(self.filter_freetext.text())
|
||||
self.proxy_model.setFilterRegExp(self.filter_freetext.text())
|
||||
|
||||
def apply_level_filter(self):
|
||||
self.proxy_model.apply_filter_level(self.filter_level.currentText())
|
||||
|
@ -365,7 +366,7 @@ class LogDockManager:
|
|||
dock = LogDock(self, name)
|
||||
self.docks[name] = dock
|
||||
if add_to_area:
|
||||
self.main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, dock)
|
||||
self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
|
||||
dock.setFloating(True)
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, name))
|
||||
self.update_closable()
|
||||
|
@ -378,8 +379,8 @@ class LogDockManager:
|
|||
self.update_closable()
|
||||
|
||||
def update_closable(self):
|
||||
flags = (QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
||||
flags = (QtWidgets.QDockWidget.DockWidgetMovable |
|
||||
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||
if len(self.docks) > 1:
|
||||
flags |= QtWidgets.QDockWidget.DockWidgetClosable
|
||||
for dock in self.docks.values():
|
||||
|
@ -395,7 +396,7 @@ class LogDockManager:
|
|||
dock = LogDock(self, name)
|
||||
self.docks[name] = dock
|
||||
dock.restore_state(dock_state)
|
||||
self.main_window.addDockWidget(QtCore.Qt.DockWidgetArea.RightDockWidgetArea, dock)
|
||||
self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
|
||||
dock.sigClosed.connect(partial(self.on_dock_closed, name))
|
||||
self.update_closable()
|
||||
|
||||
|
|
|
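The ``_LogFilterProxyModel`` hunks above are another instance of the ``QRegExp`` removal: Qt 6 filters through ``setFilterRegularExpression()``/``filterRegularExpression()`` and matches with ``QRegularExpression.match().hasMatch()``, where Qt 5 used ``setFilterRegExp()`` and ``QRegExp.indexIn()``. A minimal, self-contained sketch of the Qt 6 side (``row_matches`` is an illustrative helper, not repository code): ::

    from PyQt6 import QtCore

    class InsensitiveFilter(QtCore.QSortFilterProxyModel):
        def __init__(self):
            super().__init__()
            self.setFilterCaseSensitivity(QtCore.Qt.CaseSensitivity.CaseInsensitive)
            self.setRecursiveFilteringEnabled(True)

        def row_matches(self, text):
            regex = self.filterRegularExpression()
            return regex.match(text).hasMatch()
            # Qt 5 equivalent: self.filterRegExp().indexIn(text) != -1

    proxy = InsensitiveFilter()
    proxy.setFilterRegularExpression("rtio")
    print(proxy.row_matches("rtio underflow"))   # True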
@ -151,8 +151,15 @@
|
|||
inkscape:connector-curvature="0" /></g><path
|
||||
d="M 28.084,368.98 0,429.872 v 1.124 h 14.16 l 4.202,-8.945 H 43.57 l 4.195,8.945 h 14.16 v -1.124 L 33.753,368.98 Z m -5.438,41.259 8.215,-19.134 8.424,19.134 z"
|
||||
id="path493"
|
||||
style="fill:#ffffff;fill-opacity:1" /><path
|
||||
d="m 313.73062,383.49498 10.87999,-18.07999 c 1.49333,-2.18667 1.97333,-4.8 1.97333,-7.36 0,-8.79999 -6.18666,-12.79999 -14.34666,-12.79999 -8.21332,0 -14.18665,4 -14.18665,12.79999 0,7.09333 5.11999,12.53333 13.01332,11.94666 l -7.94666,13.49333 z m -1.49334,-30.07999 c 2.93334,0 4.69333,2.02667 4.69333,4.64 0,2.88 -1.59999,4.69333 -4.69333,4.69333 -2.82666,0 -4.69333,-1.97333 -4.69333,-4.69333 0,-2.61333 1.86667,-4.64 4.69333,-4.64 z"
|
||||
id="text1"
|
||||
style="font-size:53.3333px;font-family:Intro;-inkscape-font-specification:Intro;fill:#ffffff"
|
||||
aria-label="9" /></svg>
|
||||
style="fill:#ffffff;fill-opacity:1"
|
||||
inkscape:connector-curvature="0" /><g
|
||||
id="text3371"
|
||||
style="font-style:normal;font-variant:normal;font-weight:600;font-stretch:expanded;font-size:45px;line-height:125%;font-family:'Novecento sans wide';-inkscape-font-specification:'Novecento sans wide, Semi-Bold Expanded';text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"><g
|
||||
transform="translate(-1.7346398,0.84745763)"
|
||||
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:96px;line-height:25px;font-family:'Droid Sans Thai';-inkscape-font-specification:'Droid Sans Thai, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
|
||||
id="text861"
|
||||
aria-label="7"><path
|
||||
style="font-size:53.3333px;line-height:13.8889px;font-family:Intro;-inkscape-font-specification:Intro;fill:#ffffff"
|
||||
d="m 325.20597,358.80118 c 5.38667,-5.86667 2.18667,-16.74666 -9.43999,-16.74666 -11.62666,0 -14.87999,10.77333 -9.44,16.69332 -8.90666,6.4 -5.75999,21.91999 9.44,21.91999 15.09332,0 18.23999,-15.41332 9.43999,-21.86665 z m -9.43999,13.81332 c -6.4,0 -6.4,-9.27999 0,-9.27999 6.45333,0 6.45333,9.27999 0,9.27999 z m 0,-17.06665 c -4.64,0 -4.64,-5.97333 0,-5.97333 4.58666,0 4.58666,5.97333 0,5.97333 z"
|
||||
id="text248"
|
||||
aria-label="8" /></g> </g></svg>
|
||||
|
|
Before Width: | Height: | Size: 14 KiB After Width: | Height: | Size: 15 KiB |
(Image file changed: 14 KiB before, 15 KiB after.)
|
@ -1,4 +1,4 @@
|
|||
from PyQt6 import QtCore
|
||||
from PyQt5 import QtCore
|
||||
|
||||
from sipyco.sync_struct import Subscriber, process_mod
|
||||
|
||||
|
@ -91,15 +91,15 @@ class DictSyncModel(QtCore.QAbstractTableModel):
|
|||
return len(self.headers)
|
||||
|
||||
def data(self, index, role):
|
||||
if not index.isValid() or role != QtCore.Qt.ItemDataRole.DisplayRole:
|
||||
if not index.isValid() or role != QtCore.Qt.DisplayRole:
|
||||
return None
|
||||
else:
|
||||
k = self.row_to_key[index.row()]
|
||||
return self.convert(k, self.backing_store[k], index.column())
|
||||
|
||||
def headerData(self, col, orientation, role):
|
||||
if (orientation == QtCore.Qt.Orientation.Horizontal and
|
||||
role == QtCore.Qt.ItemDataRole.DisplayRole):
|
||||
if (orientation == QtCore.Qt.Horizontal and
|
||||
role == QtCore.Qt.DisplayRole):
|
||||
return self.headers[col]
|
||||
return None
|
||||
|
||||
|
@ -170,15 +170,15 @@ class ListSyncModel(QtCore.QAbstractTableModel):
|
|||
return len(self.headers)
|
||||
|
||||
def data(self, index, role):
|
||||
if not index.isValid() or role != QtCore.Qt.ItemDataRole.DisplayRole:
|
||||
if not index.isValid() or role != QtCore.Qt.DisplayRole:
|
||||
return None
|
||||
else:
|
||||
return self.convert(self.backing_store[index.row()],
|
||||
index.column())
|
||||
|
||||
def headerData(self, col, orientation, role):
|
||||
if (orientation == QtCore.Qt.Orientation.Horizontal and
|
||||
role == QtCore.Qt.ItemDataRole.DisplayRole):
|
||||
if (orientation == QtCore.Qt.Horizontal and
|
||||
role == QtCore.Qt.DisplayRole):
|
||||
return self.headers[col]
|
||||
return None
|
||||
|
||||
|
@ -271,8 +271,8 @@ class DictSyncTreeSepModel(QtCore.QAbstractItemModel):
|
|||
return len(self.headers)
|
||||
|
||||
def headerData(self, col, orientation, role):
|
||||
if (orientation == QtCore.Qt.Orientation.Horizontal and
|
||||
role == QtCore.Qt.ItemDataRole.DisplayRole):
|
||||
if (orientation == QtCore.Qt.Horizontal and
|
||||
role == QtCore.Qt.DisplayRole):
|
||||
return self.headers[col]
|
||||
return None
|
||||
|
||||
|
@ -394,19 +394,19 @@ class DictSyncTreeSepModel(QtCore.QAbstractItemModel):
|
|||
return key
|
||||
|
||||
def data(self, index, role):
|
||||
if not index.isValid() or (role != QtCore.Qt.ItemDataRole.DisplayRole
|
||||
and role != QtCore.Qt.ItemDataRole.ToolTipRole):
|
||||
if not index.isValid() or (role != QtCore.Qt.DisplayRole
|
||||
and role != QtCore.Qt.ToolTipRole):
|
||||
return None
|
||||
else:
|
||||
column = index.column()
|
||||
if column == 0 and role == QtCore.Qt.ItemDataRole.DisplayRole:
|
||||
if column == 0 and role == QtCore.Qt.DisplayRole:
|
||||
return index.internalPointer().name
|
||||
else:
|
||||
key = self.index_to_key(index)
|
||||
if key is None:
|
||||
return None
|
||||
else:
|
||||
if role == QtCore.Qt.ItemDataRole.DisplayRole:
|
||||
if role == QtCore.Qt.DisplayRole:
|
||||
convert = self.convert
|
||||
else:
|
||||
convert = self.convert_tooltip
|
||||
|
|
|
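The model hunks above, like most of this comparison, only toggle between the two enum spellings. For reference, a few of the renames that recur throughout these files (PyQt5 shorthand on the left, fully qualified PyQt6 name on the right; illustrative, not exhaustive): ::

    # PyQt5                              PyQt6
    # QtCore.Qt.DisplayRole              QtCore.Qt.ItemDataRole.DisplayRole
    # QtCore.Qt.UserRole                 QtCore.Qt.ItemDataRole.UserRole
    # QtCore.Qt.Horizontal               QtCore.Qt.Orientation.Horizontal
    # QtCore.Qt.Checked                  QtCore.Qt.CheckState.Checked
    # QtCore.Qt.LeftButton               QtCore.Qt.MouseButton.LeftButton
    # QtCore.Qt.ShiftModifier            QtCore.Qt.KeyboardModifier.ShiftModifier
    # QtCore.Qt.Key_Escape               QtCore.Qt.Key.Key_Escape
    # QtWidgets.QHeaderView.Stretch      QtWidgets.QHeaderView.ResizeMode.Stretch
    # QtWidgets.QStyle.SP_FileIcon       QtWidgets.QStyle.StandardPixmap.SP_FileIcon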
@ -1,6 +1,6 @@
|
|||
import logging
|
||||
|
||||
from PyQt6 import QtGui, QtCore, QtWidgets
|
||||
from PyQt5 import QtGui, QtCore, QtWidgets
|
||||
import numpy as np
|
||||
|
||||
from .ticker import Ticker
|
||||
|
@ -23,15 +23,15 @@ class ScanWidget(QtWidgets.QWidget):
|
|||
|
||||
self.ticker = Ticker()
|
||||
|
||||
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
||||
action = QtGui.QAction("V&iew range", self)
|
||||
self.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||
action = QtWidgets.QAction("V&iew range", self)
|
||||
action.setShortcut(QtGui.QKeySequence("CTRL+i"))
|
||||
action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
action.triggered.connect(self.viewRange)
|
||||
self.addAction(action)
|
||||
action = QtGui.QAction("Sna&p range", self)
|
||||
action = QtWidgets.QAction("Sna&p range", self)
|
||||
action.setShortcut(QtGui.QKeySequence("CTRL+p"))
|
||||
action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
||||
action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||
action.triggered.connect(self.snapRange)
|
||||
self.addAction(action)
|
||||
|
||||
|
@ -143,56 +143,56 @@ class ScanWidget(QtWidgets.QWidget):
|
|||
if ev.buttons() ^ ev.button(): # buttons changed
|
||||
ev.ignore()
|
||||
return
|
||||
if ev.modifiers() & QtCore.Qt.KeyboardModifier.ShiftModifier:
|
||||
if ev.modifiers() & QtCore.Qt.ShiftModifier:
|
||||
self._drag = "select"
|
||||
self.setStart(self._pixelToAxis(ev.position().x()))
|
||||
self.setStart(self._pixelToAxis(ev.x()))
|
||||
self.setStop(self._start)
|
||||
elif ev.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
|
||||
elif ev.modifiers() & QtCore.Qt.ControlModifier:
|
||||
self._drag = "zoom"
|
||||
self._offset = QtCore.QPoint(ev.position().x(), 0)
|
||||
self._offset = QtCore.QPoint(ev.x(), 0)
|
||||
self._rubber = QtWidgets.QRubberBand(
|
||||
QtWidgets.QRubberBand.Rectangle, self)
|
||||
self._rubber.setGeometry(QtCore.QRect(
|
||||
self._offset, QtCore.QPoint(ev.position().x(), self.height() - 1)))
|
||||
self._offset, QtCore.QPoint(ev.x(), self.height() - 1)))
|
||||
self._rubber.show()
|
||||
else:
|
||||
qfm = QtGui.QFontMetrics(self.font())
|
||||
if ev.position().y() <= 2.5*qfm.lineSpacing():
|
||||
if ev.y() <= 2.5*qfm.lineSpacing():
|
||||
self._drag = "axis"
|
||||
self._offset = ev.position().x() - self._axisView[0]
|
||||
self._offset = ev.x() - self._axisView[0]
|
||||
# testing should match inverse drawing order for start/stop
|
||||
elif abs(self._axisToPixel(self._stop) -
|
||||
ev.position().x()) < qfm.lineSpacing()/2:
|
||||
ev.x()) < qfm.lineSpacing()/2:
|
||||
self._drag = "stop"
|
||||
self._offset = ev.position().x() - self._axisToPixel(self._stop)
|
||||
self._offset = ev.x() - self._axisToPixel(self._stop)
|
||||
elif abs(self._axisToPixel(self._start) -
|
||||
ev.position().x()) < qfm.lineSpacing()/2:
|
||||
ev.x()) < qfm.lineSpacing()/2:
|
||||
self._drag = "start"
|
||||
self._offset = ev.position().x() - self._axisToPixel(self._start)
|
||||
self._offset = ev.x() - self._axisToPixel(self._start)
|
||||
else:
|
||||
self._drag = "both"
|
||||
self._offset = (ev.position().x() - self._axisToPixel(self._start),
|
||||
ev.position().x() - self._axisToPixel(self._stop))
|
||||
self._offset = (ev.x() - self._axisToPixel(self._start),
|
||||
ev.x() - self._axisToPixel(self._stop))
|
||||
|
||||
def mouseMoveEvent(self, ev):
|
||||
if not self._drag:
|
||||
ev.ignore()
|
||||
return
|
||||
if self._drag == "select":
|
||||
self.setStop(self._pixelToAxis(ev.position().x()))
|
||||
self.setStop(self._pixelToAxis(ev.x()))
|
||||
elif self._drag == "zoom":
|
||||
self._rubber.setGeometry(QtCore.QRect(
|
||||
self._offset, QtCore.QPoint(ev.position().x(), self.height() - 1)
|
||||
self._offset, QtCore.QPoint(ev.x(), self.height() - 1)
|
||||
).normalized())
|
||||
elif self._drag == "axis":
|
||||
self._setView(ev.position().x() - self._offset, self._axisView[1])
|
||||
self._setView(ev.x() - self._offset, self._axisView[1])
|
||||
elif self._drag == "start":
|
||||
self.setStart(self._pixelToAxis(ev.position().x() - self._offset))
|
||||
self.setStart(self._pixelToAxis(ev.x() - self._offset))
|
||||
elif self._drag == "stop":
|
||||
self.setStop(self._pixelToAxis(ev.position().x() - self._offset))
|
||||
self.setStop(self._pixelToAxis(ev.x() - self._offset))
|
||||
elif self._drag == "both":
|
||||
self.setStart(self._pixelToAxis(ev.position().x() - self._offset[0]))
|
||||
self.setStop(self._pixelToAxis(ev.position().x() - self._offset[1]))
|
||||
self.setStart(self._pixelToAxis(ev.x() - self._offset[0]))
|
||||
self.setStop(self._pixelToAxis(ev.x() - self._offset[1]))
|
||||
|
||||
def mouseReleaseEvent(self, ev):
|
||||
if self._drag == "zoom":
|
||||
|
@ -217,10 +217,10 @@ class ScanWidget(QtWidgets.QWidget):
|
|||
y = round(ev.angleDelta().y()/120.)
|
||||
if not y:
|
||||
return
|
||||
if ev.modifiers() & QtCore.Qt.KeyboardModifier.ShiftModifier:
|
||||
if ev.modifiers() & QtCore.Qt.ShiftModifier:
|
||||
self.setNum(max(1, self._num + y))
|
||||
else:
|
||||
self._zoom(self.zoomFactor**y, ev.position().x())
|
||||
self._zoom(self.zoomFactor**y, ev.x())
|
||||
|
||||
def resizeEvent(self, ev):
|
||||
if not ev.oldSize().isValid() or not ev.oldSize().width():
|
||||
|
@ -245,8 +245,8 @@ class ScanWidget(QtWidgets.QWidget):
|
|||
ticks, prefix, labels = self.ticker(self._pixelToAxis(0),
|
||||
self._pixelToAxis(self.width()))
|
||||
rect = QtCore.QRect(0, 0, self.width(), lineSpacing)
|
||||
painter.drawText(rect, QtCore.Qt.AlignmentFlag.AlignLeft, prefix)
|
||||
painter.drawText(rect, QtCore.Qt.AlignmentFlag.AlignRight, self.suffix)
|
||||
painter.drawText(rect, QtCore.Qt.AlignLeft, prefix)
|
||||
painter.drawText(rect, QtCore.Qt.AlignRight, self.suffix)
|
||||
|
||||
painter.translate(0, lineSpacing + ascent)
|
||||
|
||||
|
@ -264,7 +264,7 @@ class ScanWidget(QtWidgets.QWidget):
|
|||
painter.drawLine(int(p), 0, int(p), int(lineSpacing/2))
|
||||
painter.translate(0, int(lineSpacing/2))
|
||||
|
||||
for x, c in (self._start, QtCore.Qt.GlobalColor.blue), (self._stop, QtCore.Qt.GlobalColor.red):
|
||||
for x, c in (self._start, QtCore.Qt.blue), (self._stop, QtCore.Qt.red):
|
||||
x = self._axisToPixel(x)
|
||||
painter.setPen(c)
|
||||
painter.setBrush(c)
|
||||
|
|
|
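The ``ScanWidget`` mouse handlers above change for the same reason as the drag-and-drop code earlier: Qt 6 exposes pointer coordinates through ``ev.position()`` (a ``QPointF``) and deprecates ``ev.x()``/``ev.y()``. A small illustrative handler: ::

    from PyQt6 import QtWidgets

    class ClickReporter(QtWidgets.QWidget):
        def mousePressEvent(self, ev):
            x = ev.position().x()   # PyQt5: ev.x()
            y = ev.position().y()   # PyQt5: ev.y()
            print("pressed at", x, y)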
@ -1,6 +1,6 @@
|
|||
import re
|
||||
from math import inf, copysign
|
||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
||||
from PyQt5 import QtCore, QtGui, QtWidgets
|
||||
|
||||
|
||||
_float_acceptable = re.compile(
|
||||
|
@ -16,8 +16,8 @@ class ScientificSpinBox(QtWidgets.QDoubleSpinBox):
|
|||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self.setGroupSeparatorShown(False)
|
||||
self.setInputMethodHints(QtCore.Qt.InputMethodHint.ImhNone)
|
||||
self.setCorrectionMode(self.CorrectionMode.CorrectToPreviousValue)
|
||||
self.setInputMethodHints(QtCore.Qt.ImhNone)
|
||||
self.setCorrectionMode(self.CorrectToPreviousValue)
|
||||
# singleStep: resolution for step, buttons, accelerators
|
||||
# decimals: absolute rounding granularity
|
||||
# sigFigs: number of significant digits shown
|
||||
|
@ -69,11 +69,11 @@ class ScientificSpinBox(QtWidgets.QDoubleSpinBox):
|
|||
clean = clean.rsplit(self.suffix(), 1)[0]
|
||||
try:
|
||||
float(clean) # faster than matching
|
||||
return QtGui.QValidator.State.Acceptable, text, pos
|
||||
return QtGui.QValidator.Acceptable, text, pos
|
||||
except ValueError:
|
||||
if re.fullmatch(_float_intermediate, clean):
|
||||
return QtGui.QValidator.State.Intermediate, text, pos
|
||||
return QtGui.QValidator.State.Invalid, text, pos
|
||||
return QtGui.QValidator.Intermediate, text, pos
|
||||
return QtGui.QValidator.Invalid, text, pos
|
||||
|
||||
def stepBy(self, s):
|
||||
if abs(s) < 10: # unaccelerated buttons, keys, wheel/trackpad
|
||||
|
|
|
@ -1,7 +1,6 @@
|
|||
import asyncio
|
||||
from collections import OrderedDict
|
||||
import logging
|
||||
import shutil
|
||||
|
||||
from sipyco.asyncio_tools import TaskObject
|
||||
from sipyco import pyon
|
||||
|
@ -46,8 +45,6 @@ class StateManager(TaskObject):
|
|||
logger.info("State database '%s' not found, using defaults",
|
||||
self.filename)
|
||||
return
|
||||
else:
|
||||
shutil.copy2(self.filename, self.filename + ".backup")
|
||||
# The state of one object may depend on the state of another,
|
||||
# e.g. the display state may create docks that are referenced in
|
||||
# the area state.
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
import asyncio
|
||||
import logging
|
||||
|
||||
from PyQt6 import QtCore, QtWidgets
|
||||
from PyQt5 import QtCore, QtWidgets
|
||||
|
||||
|
||||
class DoubleClickLineEdit(QtWidgets.QLineEdit):
|
||||
|
@ -56,9 +56,9 @@ class WheelFilter(QtCore.QObject):
|
|||
self.ignore_with_modifier = ignore_with_modifier
|
||||
|
||||
def eventFilter(self, obj, event):
|
||||
if event.type() != QtCore.QEvent.Type.Wheel:
|
||||
if event.type() != QtCore.QEvent.Wheel:
|
||||
return False
|
||||
has_modifier = event.modifiers() != QtCore.Qt.KeyboardModifier.NoModifier
|
||||
has_modifier = event.modifiers() != QtCore.Qt.NoModifier
|
||||
if has_modifier == self.ignore_with_modifier:
|
||||
event.ignore()
|
||||
return True
|
||||
|
@ -66,7 +66,7 @@ class WheelFilter(QtCore.QObject):
|
|||
|
||||
|
||||
def disable_scroll_wheel(widget):
|
||||
widget.setFocusPolicy(QtCore.Qt.FocusPolicy.StrongFocus)
|
||||
widget.setFocusPolicy(QtCore.Qt.StrongFocus)
|
||||
widget.installEventFilter(WheelFilter(widget))
|
||||
|
||||
|
||||
|
@ -91,8 +91,8 @@ class LayoutWidget(QtWidgets.QWidget):
|
|||
async def get_open_file_name(parent, caption, dir, filter):
|
||||
"""like QtWidgets.QFileDialog.getOpenFileName(), but a coroutine"""
|
||||
dialog = QtWidgets.QFileDialog(parent, caption, dir, filter)
|
||||
dialog.setFileMode(dialog.FileMode.ExistingFile)
|
||||
dialog.setAcceptMode(dialog.AcceptMode.AcceptOpen)
|
||||
dialog.setFileMode(dialog.ExistingFile)
|
||||
dialog.setAcceptMode(dialog.AcceptOpen)
|
||||
fut = asyncio.Future()
|
||||
|
||||
def on_accept():
|
||||
|
@ -119,3 +119,20 @@ async def get_save_file_name(parent, caption, dir, filter, suffix=None):
|
|||
dialog.open()
|
||||
return await fut
|
||||
|
||||
|
||||
# Based on:
|
||||
# http://stackoverflow.com/questions/250890/using-qsortfilterproxymodel-with-a-tree-model
|
||||
class QRecursiveFilterProxyModel(QtCore.QSortFilterProxyModel):
|
||||
def filterAcceptsRow(self, source_row, source_parent):
|
||||
regexp = self.filterRegExp()
|
||||
if not regexp.isEmpty():
|
||||
source_index = self.sourceModel().index(
|
||||
source_row, self.filterKeyColumn(), source_parent)
|
||||
if source_index.isValid():
|
||||
for i in range(self.sourceModel().rowCount(source_index)):
|
||||
if self.filterAcceptsRow(i, source_index):
|
||||
return True
|
||||
key = self.sourceModel().data(source_index, self.filterRole())
|
||||
return regexp.indexIn(key) != -1
|
||||
return QtCore.QSortFilterProxyModel.filterAcceptsRow(
|
||||
self, source_row, source_parent)
|
||||
|
|
|
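The ``QRecursiveFilterProxyModel`` class kept in the PyQt5 version of ``tools.py`` walks child rows by hand; since Qt 5.10 the stock proxy model can do this itself via ``setRecursiveFilteringEnabled(True)``. A short sketch, with the source model left out: ::

    from PyQt6 import QtCore

    proxy = QtCore.QSortFilterProxyModel()
    # A source model would be attached with proxy.setSourceModel(...).
    # setRecursiveFilteringEnabled keeps a parent row visible whenever any
    # of its children matches, replacing the hand-written recursion above.
    proxy.setRecursiveFilteringEnabled(True)
    proxy.setFilterRegularExpression("ttl")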
@ -27,9 +27,9 @@ SOFTWARE.
|
|||
|
||||
import math
|
||||
|
||||
from PyQt6.QtCore import *
|
||||
from PyQt6.QtGui import *
|
||||
from PyQt6.QtWidgets import *
|
||||
from PyQt5.QtCore import *
|
||||
from PyQt5.QtGui import *
|
||||
from PyQt5.QtWidgets import *
|
||||
|
||||
|
||||
class QtWaitingSpinner(QWidget):
|
||||
|
@ -37,7 +37,7 @@ class QtWaitingSpinner(QWidget):
|
|||
super().__init__()
|
||||
|
||||
# WAS IN initialize()
|
||||
self._color = Qt.GlobalColor.black
|
||||
self._color = QColor(Qt.black)
|
||||
self._roundness = 100.0
|
||||
self._minimumTrailOpacity = 3.14159265358979323846
|
||||
self._trailFadePercentage = 80.0
|
||||
|
@ -54,17 +54,17 @@ class QtWaitingSpinner(QWidget):
|
|||
self.updateTimer()
|
||||
# END initialize()
|
||||
|
||||
self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground)
|
||||
self.setAttribute(Qt.WA_TranslucentBackground)
|
||||
|
||||
def paintEvent(self, QPaintEvent):
|
||||
painter = QPainter(self)
|
||||
painter.fillRect(self.rect(), Qt.GlobalColor.transparent)
|
||||
painter.setRenderHint(QPainter.RenderHint.Antialiasing, True)
|
||||
painter.fillRect(self.rect(), Qt.transparent)
|
||||
painter.setRenderHint(QPainter.Antialiasing, True)
|
||||
|
||||
if self._currentCounter >= self._numberOfLines:
|
||||
self._currentCounter = 0
|
||||
|
||||
painter.setPen(Qt.PenStyle.NoPen)
|
||||
painter.setPen(Qt.NoPen)
|
||||
for i in range(0, self._numberOfLines):
|
||||
painter.save()
|
||||
painter.translate(self._innerRadius + self._lineLength, self._innerRadius + self._lineLength)
|
||||
|
@ -76,7 +76,7 @@ class QtWaitingSpinner(QWidget):
|
|||
self._minimumTrailOpacity, self._color)
|
||||
painter.setBrush(color)
|
||||
painter.drawRoundedRect(QRect(0, int(-self._lineWidth / 2), self._lineLength, self._lineWidth), self._roundness,
|
||||
self._roundness, Qt.SizeMode.RelativeSize)
|
||||
self._roundness, Qt.RelativeSize)
|
||||
painter.restore()
|
||||
|
||||
def start(self):
|
||||
|
@ -136,7 +136,7 @@ class QtWaitingSpinner(QWidget):
|
|||
def setRoundness(self, roundness):
|
||||
self._roundness = max(0.0, min(100.0, roundness))
|
||||
|
||||
def setColor(self, color=Qt.GlobalColor.black):
|
||||
def setColor(self, color=Qt.black):
|
||||
self._color = QColor(color)
|
||||
|
||||
def setRevolutionsPerSecond(self, revolutionsPerSecond):
|
||||
|
|
|
@ -206,9 +206,10 @@ class GitBackend:
|
|||
a git hash
|
||||
"""
|
||||
commit, _ = self.git.resolve_refish(rev)
|
||||
commit_id = str(commit.id)
|
||||
logger.debug('Resolved git ref "%s" into "%s"', rev, commit_id)
|
||||
return commit_id
|
||||
|
||||
logger.debug('Resolved git ref "%s" into "%s"', rev, commit.hex)
|
||||
|
||||
return commit.hex
|
||||
|
||||
def request_rev(self, rev):
|
||||
rev = self._get_pinned_rev(rev)
|
||||
|
|
|
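The ``GitBackend`` hunk is not a Qt change: it tracks the pygit2 API, where ``Object.hex`` is deprecated in newer releases in favour of stringifying the ``Oid`` available as ``commit.id``. A hedged sketch, assuming a repository checked out at the placeholder path ``./repo``: ::

    import pygit2

    repo = pygit2.Repository("repo")            # path is a placeholder
    commit, _ref = repo.resolve_refish("HEAD")
    commit_id = str(commit.id)                  # older pygit2: commit.hex
    print(commit_id)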
@ -1,59 +0,0 @@
|
|||
import unittest
|
||||
import artiq.coredevice.exceptions as exceptions
|
||||
|
||||
from artiq.experiment import *
|
||||
from artiq.test.hardware_testbench import ExperimentCase
|
||||
from artiq.compiler.embedding import EmbeddingMap
|
||||
from artiq.coredevice.core import test_exception_id_sync
|
||||
|
||||
"""
|
||||
Test sync in exceptions raised between host and kernel
|
||||
Check `artiq.compiler.embedding` and `artiq::firmware::ksupport::eh_artiq`
|
||||
|
||||
Considers the following two cases:
|
||||
1) Exception raised on kernel and passed to host
|
||||
2) Exception raised in a host function called from kernel
|
||||
Ensures same exception is raised on both kernel and host in either case
|
||||
"""
|
||||
|
||||
exception_names = EmbeddingMap().str_reverse_map
|
||||
|
||||
|
||||
class _TestExceptionSync(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
|
||||
@rpc
|
||||
def _raise_exception_host(self, id):
|
||||
exn = exception_names[id].split('.')[-1].split(':')[-1]
|
||||
exn = getattr(exceptions, exn)
|
||||
raise exn
|
||||
|
||||
@kernel
|
||||
def raise_exception_host(self, id):
|
||||
self._raise_exception_host(id)
|
||||
|
||||
@kernel
|
||||
def raise_exception_kernel(self, id):
|
||||
test_exception_id_sync(id)
|
||||
|
||||
|
||||
class ExceptionTest(ExperimentCase):
|
||||
def test_raise_exceptions_kernel(self):
|
||||
exp = self.create(_TestExceptionSync)
|
||||
|
||||
for id, name in list(exception_names.items())[::-1]:
|
||||
name = name.split('.')[-1].split(':')[-1]
|
||||
with self.assertRaises(getattr(exceptions, name)) as ctx:
|
||||
exp.raise_exception_kernel(id)
|
||||
self.assertEqual(str(ctx.exception).strip("'"), name)
|
||||
|
||||
|
||||
def test_raise_exceptions_host(self):
|
||||
exp = self.create(_TestExceptionSync)
|
||||
|
||||
for id, name in exception_names.items():
|
||||
name = name.split('.')[-1].split(':')[-1]
|
||||
with self.assertRaises(getattr(exceptions, name)) as ctx:
|
||||
exp.raise_exception_host(id)
|
||||
|
|
@ -8,7 +8,7 @@ def catch(f):
|
|||
except Exception as e:
|
||||
print(e)
|
||||
|
||||
# CHECK-L: 19(0, 0, 0)
|
||||
# CHECK-L: 8(0, 0, 0)
|
||||
catch(lambda: 1/0)
|
||||
# CHECK-L: 10(10, 1, 0)
|
||||
# CHECK-L: 9(10, 1, 0)
|
||||
catch(lambda: [1.0][10])
|
||||
|
|
|
@ -10,7 +10,7 @@ def catch(f):
|
|||
except IndexError as ie:
|
||||
print(ie)
|
||||
|
||||
# CHECK-L: 19(0, 0, 0)
|
||||
# CHECK-L: 8(0, 0, 0)
|
||||
catch(lambda: 1/0)
|
||||
# CHECK-L: 10(10, 1, 0)
|
||||
# CHECK-L: 9(10, 1, 0)
|
||||
catch(lambda: [1.0][10])
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
# REQUIRES: exceptions
|
||||
|
||||
def f():
|
||||
# CHECK-L: Uncaught 19
|
||||
# CHECK-L: Uncaught 8
|
||||
# CHECK-L: at input.py:${LINE:+1}:
|
||||
1/0
|
||||
|
||||
|
|
|
@ -9,7 +9,7 @@ def g():
|
|||
try:
|
||||
f()
|
||||
except Exception as e:
|
||||
# CHECK-L: Uncaught 19
|
||||
# CHECK-L: Uncaught 8
|
||||
# CHECK-L: at input.py:${LINE:+1}:
|
||||
raise e
|
||||
|
||||
|
|
|
@ -2,6 +2,6 @@
|
|||
# RUN: OutputCheck %s --file-to-check=%t
|
||||
# REQUIRES: exceptions
|
||||
|
||||
# CHECK-L: Uncaught 19: cannot divide by zero (0, 0, 0)
|
||||
# CHECK-L: Uncaught 8: cannot divide by zero (0, 0, 0)
|
||||
# CHECK-L: at input.py:${LINE:+1}:
|
||||
1/0
|
||||
|
|
|
@ -44,8 +44,7 @@ class TestClient(unittest.TestCase):
|
|||
self.device_db_path, *args], encoding="utf8", env=get_env(),
|
||||
text=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
|
||||
while self.master.stdout.readline().strip() != "ARTIQ master is now ready.":
|
||||
if self.master.poll() is not None:
|
||||
raise IOError("master terminated unexpectedly")
|
||||
pass
|
||||
|
||||
def check_and_terminate_master(self):
|
||||
while not ("test content" in self.master.stdout.readline()):
|
||||
|
|
|
@ -3,6 +3,7 @@ import importlib.util
|
|||
import importlib.machinery
|
||||
import inspect
|
||||
import logging
|
||||
import os
|
||||
import pathlib
|
||||
import string
|
||||
import sys
|
||||
|
@ -10,9 +11,9 @@ import sys
|
|||
import numpy as np
|
||||
|
||||
from sipyco import pyon
|
||||
from platformdirs import user_config_dir
|
||||
|
||||
from artiq import __version__ as artiq_version
|
||||
from artiq.appdirs import user_config_dir
|
||||
from artiq.language.environment import is_public_experiment
|
||||
from artiq.language import units
|
||||
|
||||
|
@ -194,4 +195,6 @@ def get_windows_drives():
|
|||
|
||||
def get_user_config_dir():
|
||||
major = artiq_version.split(".")[0]
|
||||
return user_config_dir("artiq", "m-labs", major, ensure_exists=True)
|
||||
dir = user_config_dir("artiq", "m-labs", major)
|
||||
os.makedirs(dir, exist_ok=True)
|
||||
return dir
|
||||
|
|
|
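The frontend ``tools.py`` hunk swaps the vendored ``artiq.appdirs`` helper for the ``platformdirs`` package; recent ``platformdirs`` releases can also create the directory themselves through ``ensure_exists``, which removes the explicit ``os.makedirs`` call. A sketch with a placeholder major version: ::

    from platformdirs import user_config_dir

    major = "9"   # placeholder for artiq_version.split(".")[0]
    path = user_config_dir("artiq", "m-labs", major, ensure_exists=True)
    # Without ensure_exists (as in the older helper):
    #   path = user_config_dir("artiq", "m-labs", major)
    #   os.makedirs(path, exist_ok=True)
    print(path)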
@ -1,252 +0,0 @@
|
|||
Building and developing ARTIQ
|
||||
=============================
|
||||
|
||||
.. warning::
|
||||
This section is only for software or FPGA developers who want to modify ARTIQ. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we usually provide board binaries for you; you can use the AFWS client to get updated versions if necessary, as described in :ref:`obtaining-binaries`. It is not necessary to build them yourself.
|
||||
|
||||
The easiest way to obtain an ARTIQ development environment is via the `Nix package manager <https://nixos.org/nix/>`_ on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
|
||||
|
||||
ARTIQ itself does not depend on Nix, and it is also possible to obtain everything from source (look into the ``flake.nix`` file to see what resources are used, and run the commands manually, adapting to your system) - but Nix makes the process a lot easier.
|
||||
|
||||
Installing Vivado
|
||||
-----------------
|
||||
|
||||
It is necessary to independently install AMD's `Vivado <https://www.xilinx.com/support/download.html>`_, which requires a login for download and can't be automatically obtained by package managers. The "appropriate" Vivado version to use for building gateware and firmware can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs.
|
||||
|
||||
.. tip::
|
||||
Text-search ``flake.nix`` for a mention of ``/opt/Xilinx/Vivado``. Given e.g. the line ::
|
||||
|
||||
profile = "set -e; source /opt/Xilinx/Vivado/2022.2/settings64.sh"
|
||||
|
||||
the intended Vivado version is 2022.2.
|
||||
|
||||
Download and run the official installer. If using NixOS, note that this will require a FHS chroot environment; the ARTIQ flake provides such an environment, which you can enter with the command ``vivado-env`` from the development environment (i.e. after ``nix develop``). Other tips:
|
||||
|
||||
- Be aware that Vivado is notoriously not a lightweight piece of software and you will likely need **at least 70GB+** of free space to install it.
|
||||
- If you do not want to write to ``/opt``, you can install into a folder of your home directory.
|
||||
- During the Vivado installation, uncheck ``Install cable drivers`` (they are not required, as we use better open source alternatives).
|
||||
- If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
|
||||
- Vivado installation issues are not uncommon. Searching for similar problems on `the M-Labs forum <https://forum.m-labs.hk/>`_ or `Vivado's own support forums <https://support.xilinx.com/s/topic/0TO2E000000YKXwWAO/installation-and-licensing>`_ might be helpful when looking for solutions.
|
||||
|
||||
.. _system-description:
|
||||
|
||||
System description file
|
||||
-----------------------
|
||||
|
||||
ARTIQ gateware and firmware binaries are dependent on the system configuration. In other words, a specific set of ARTIQ binaries is bound to the exact arrangement of real-time hardware it was generated for: the core device itself, its role in a DRTIO context (master, satellite, or standalone), the (real-time) peripherals in use, the physical EEM ports they will be connected to, and various other basic specifications. This information is normally provided to the software in the form of a JSON file called the system description or system configuration file.
|
||||
|
||||
.. warning::
|
||||
|
||||
System configuration files are only used with Kasli and Kasli-SoC boards. KC705 and ZC706 ARTIQ configurations, due to their relative rarity and specialization, are handled on a case-by-case basis and selected through a variant name such as ``nist_clock``, with no system description file necessary. See below in :ref:`building` for where to find the list of supported variants. Writing new KC705 or ZC706 variants is not a trivial task, and not particularly recommended, unless you are an FPGA developer and know what you're doing.
|
||||
|
||||
If you already have your system configuration file on hand, you can edit it to reflect any changes in configuration. If you purchased your original system from M-Labs, or recently purchased new hardware to add to it, you can obtain your up-to-date system configuration file through AFWS at any time using the command ``$ afws_client get_json`` (see :ref:`AFWS client<afws-client>`). If you are starting from scratch, a close reading of ``coredevice_generic.schema.json`` in ``artiq/coredevice`` will be helpful.
|
||||
|
||||
System descriptions do not need to be very complex. At its most basic, a system description looks something like: ::
|
||||
|
||||
{
|
||||
"target": "kasli",
|
||||
"variant": "example",
|
||||
"hw_rev": "v2.0",
|
||||
"base": "master",
|
||||
"peripherals": [
|
||||
{
|
||||
"type": "grabber",
|
||||
"ports": [0]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
Only these five fields are required, and the ``peripherals`` list can in principle be empty. A limited number of more extensive examples can currently be found in `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq/src/branch/master>`_, as well as in the main repository under ``artiq/examples/kasli_shuttler``. Once your system description file is complete, you can use ``artiq_ddb_template`` (see also :ref:`Utilities <ddb-template-tool>`) to test it and to generate a template for the corresponding :ref:`device database <device-db>`.
|
||||
|
||||
DRTIO descriptions
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Note that in DRTIO systems it is necessary to create one description file *per core device*. Satellites and their connected peripherals must be described separately. Satellites also need to be reflashed separately, albeit only if their personal system descriptions have changed. (The layout of satellites relative to the master is configurable on the fly and will be established much later, in the routing table; see :ref:`drtio-routing`. It is not necessary to rebuild or reflash if only changing the DRTIO routing table).
|
||||
|
||||
In contrast, only one device database should be generated even for a DRTIO system. Use a command of the form: ::
|
||||
|
||||
$ artiq_ddb_template -s 1 <satellite1>.json -s 2 <satellite2>.json <master>.json
|
||||
|
||||
The numbers designate the respective satellite's destination number, which must correspond to the destination numbers used when generating the routing table later.
|
||||
|
||||
Common system description changes
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To add or remove peripherals from the system, add or remove their entries from the ``peripherals`` field. When replacing hardware with upgraded versions, update the corresponding ``hw_rev`` (hardware revision) field. Other fields to consider include:
|
||||
|
||||
- ``enable_wrpll`` (a simple boolean, see :ref:`core-device-clocking`)
|
||||
- ``sed_lanes`` (increasing the number of SED lanes can reduce sequence errors, but correspondingly consumes more FPGA resources, see :ref:`sequence-errors`)
|
||||
- various defaults (e.g. ``core_addr`` defines a default IP address, which can be freely reconfigured later).
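
For illustration only, a hypothetical Kasli description combining the optional fields above with the required ones (all values are placeholders, not recommendations): ::

    {
        "target": "kasli",
        "variant": "example",
        "hw_rev": "v2.0",
        "base": "master",
        "core_addr": "192.168.1.75",
        "enable_wrpll": true,
        "sed_lanes": 16,
        "peripherals": []
    }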
|
||||
|
||||
Nix development environment
|
||||
---------------------------
|
||||
|
||||
* Install `Nix <http://nixos.org/nix/>`_ if you haven't already. Prefer a single-user installation for simplicity.
|
||||
* Enable flakes in Nix, for example by adding ``experimental-features = nix-command flakes`` to ``nix.conf``; see the `NixOS Wiki on flakes <https://nixos.wiki/wiki/flakes>`_ for details and more options.
|
||||
* Clone `the ARTIQ Git repository <https://github.com/m-labs/artiq>`_, or `the ARTIQ-Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`__ for Zynq devices (Kasli-SoC or ZC706). By default, you are working with the ``master`` branch, which represents the beta version and is not stable (see :doc:`releases`). Check out the most recent release (``git checkout release-[number]``) for a stable version.
|
||||
* If your Vivado installation is not in its default location ``/opt``, open ``flake.nix`` and edit it accordingly (note that the edits must be made in the main ARTIQ flake, even if you are working with Zynq, see also tip below).
|
||||
* Run ``nix develop`` at the root of the repository, where ``flake.nix`` is.
|
||||
* Answer ``y``/'yes' to any Nix configuration questions if necessary, as in :ref:`installing-troubleshooting`.
|
||||
|
||||
.. note::
|
||||
You can also target legacy versions of ARTIQ; use Git to checkout older release branches. Note however that older releases of ARTIQ required different processes for developing and building, which you are broadly more likely to figure out by (also) consulting the corresponding older versions of the manual.
|
||||
|
||||
Once you have run ``nix develop`` you are in the ARTIQ development environment. All ARTIQ commands and utilities -- :mod:`~artiq.frontend.artiq_run`, :mod:`~artiq.frontend.artiq_master`, etc. -- should be available, as well as all the packages necessary to build or run ARTIQ itself. You can exit the environment at any time using Control+D or the ``exit`` command and re-enter it by re-running ``nix develop`` again in the same location.
|
||||
|
||||
.. tip::
    If you are developing for Zynq, you will have noted that the ARTIQ-Zynq repository consists largely of firmware. The firmware for Zynq (NAR3) is more modern than that used for current mainline ARTIQ, and is intended to eventually replace it; for now it constitutes most of the difference between the two ARTIQ variants. The gateware for Zynq, on the other hand, is largely imported from mainline ARTIQ.

    If you intend to modify the source housed in the original ARTIQ repository, but build and test the results on a Zynq device, clone both repositories and set your ``PYTHONPATH`` after entering the ARTIQ-Zynq development shell: ::

        $ export PYTHONPATH=/absolute/path/to/your/artiq:$PYTHONPATH

    Note that this applies only to incremental builds. If you want to use ``nix build``, or make changes to the dependencies, look into changing the inputs of the ``flake.nix`` instead. You can do this by replacing the URL of the GitHub ARTIQ repository with ``path:/absolute/path/to/your/artiq``; remember that Nix caches dependencies, so to incorporate new changes you will need to exit the development shell, update the Nix cache with ``nix flake update``, and re-run ``nix develop``.
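    For instance, assuming the ARTIQ-Zynq flake declares its ARTIQ input under the name ``artiq`` (check your own ``flake.nix``; this is only a sketch of the edit described above), the changed input could look like: ::

        inputs.artiq.url = "path:/absolute/path/to/your/artiq";
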
Building only standard binaries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are working with original ARTIQ, and you only want to build a set of standard binaries (i.e. without changing the source code), you can also enter the development shell without cloning the repository, using ``nix develop`` as follows: ::

    $ nix develop git+https://github.com/m-labs/artiq.git\?ref=release-[number]#boards

Leave off ``\?ref=release-[number]`` to use the current beta version instead of a numbered release.

.. note::
    Adding ``#boards`` makes use of the ARTIQ flake's provided ``artiq-boards-shell``, a lighter environment optimized for building firmware and flashing boards, which can also be accessed by running ``nix develop .#boards`` if you have already cloned the repository. Developers should be aware that in this shell the current copy of the ARTIQ sources is not added to your ``PYTHONPATH``. Run ``nix flake show`` and read ``flake.nix`` carefully to understand the different available shells.

The parallel command also exists for ARTIQ-Zynq: ::

    $ nix develop git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]

but if you are building ARTIQ-Zynq without intending to change the source, it is not actually necessary to enter the development environment at all; Nix can access the official flake remotely for the build itself, eliminating the requirement for any particular environment.

This is equally possible for original ARTIQ, but not as useful, as the development environment (specifically the ``#boards`` shell) is still the easiest way to access the necessary tools for flashing the board. With Zynq, on the other hand, it is normally recommended to boot from SD card, which requires no further special tools. As long as you have a functioning Nix installation with flakes enabled, you can proceed directly to the building instructions below.

.. _building:

Building ARTIQ
--------------

For general troubleshooting and debugging, especially with a 'fresh' board, see also :ref:`connecting-uart`.

Kasli or KC705 (ARTIQ original)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For Kasli, if you have your system description file on-hand, you can at this point build both firmware and gateware with a command of the form: ::

    $ python -m artiq.gateware.targets.kasli <description>.json

With KC705, use: ::

    $ python -m artiq.gateware.targets.kc705 -V <variant>

This will create a directory ``artiq_kasli`` or ``artiq_kc705`` containing the binaries in a subdirectory named after your description file or variant. Flash the board as described in :ref:`writing-flash`, adding the option ``--srcbuild``, e.g., assuming your board is already connected by JTAG USB: ::

    $ artiq_flash --srcbuild [-t kc705] -d artiq_<board>/<variant>

.. note::
    To see supported KC705 variants, run: ::

        $ python -m artiq.gateware.targets.kc705 --help

    Look for the option ``-V VARIANT, --variant VARIANT``.

Kasli-SoC or ZC706 (ARTIQ on Zynq)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The building process for Zynq devices is a little more complex. The easiest method is to leverage ``nix build`` and the ``makeArtiqZynqPackage`` utility provided by the official flake. The ensuing command is rather long, because it uses a multi-clause expression in the Nix language to describe the desired result; it can be executed piece by piece using the `Nix REPL <https://nix.dev/manual/nix/2.18/command-ref/new-cli/nix3-repl.html>`_, but ``nix build`` provides a lot of useful conveniences.

For Kasli-SoC, run: ::

    $ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="kasli_soc"; variant="<variant>"; json=<path/to/description.json>;}).kasli_soc-<variant>-sd'

Replace ``<variant>`` with ``master``, ``satellite``, or ``standalone``, depending on your targeted DRTIO role. Remove ``?ref=release-[number]`` to use the current beta version rather than a numbered release. If you have cloned the repository and prefer to use your local copy of the flake, replace the corresponding clause with ``builtins.getFlake "/absolute/path/to/your/artiq-zynq"``.

For ZC706, you can use a command of the same form: ::

    $ nix build --print-build-logs --impure --expr 'let fl = builtins.getFlake "git+https://git.m-labs.hk/m-labs/artiq-zynq?ref=release-[number]"; in (fl.makeArtiqZynqPackage {target="zc706"; variant="<variant>";}).zc706-<variant>-sd'

or you can use the more direct version: ::

    $ nix build --print-build-logs git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number]#zc706-<variant>-sd

(This is possible for ZC706 because there is no need to specify a system description file in the arguments.)

.. note::
    To see supported ZC706 variants, you can run the following at the root of the repository: ::

        $ src/gateware/zc706.py --help

    Look for the option ``-V VARIANT, --variant VARIANT``. If you have not cloned the repository or are not in the development environment, try: ::

        $ nix flake show git+https://git.m-labs.hk/m-labs/artiq-zynq\?ref=release-[number] | grep "package 'zc706.*sd"

    to see the list of suitable build targets directly.

Any of these commands should produce a directory ``result`` which contains a file ``boot.bin``. As described in :ref:`writing-flash`, if your core device is currently accessible over the network, it can be flashed with :mod:`~artiq.frontend.artiq_coremgmt`. If it is not connected to the network:

1. Power off the board, extract the SD card and load ``boot.bin`` onto it manually.
2. Insert the SD card back into the board.
3. Ensure that the DIP switches (labeled BOOT MODE) are set correctly, to SD.
4. Power the board back on.

Optionally, the SD card may also be loaded at the same time with an additional file ``config.txt``, which can contain preset configuration values in the format ``key=value``, one per line. The keys are those used with :mod:`~artiq.frontend.artiq_coremgmt`. This allows e.g. presetting an IP address and any other configuration information, as in the example below.
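
A short ``config.txt`` presetting networking information might look like the following sketch (the addresses are placeholders; substitute your own values): ::

    ip=192.168.1.75/24
    ipv4_default_route=192.168.1.1
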
After a successful boot, the "FPGA DONE" light should be illuminated and the board should respond to ping when plugged into Ethernet.

.. _zynq-jtag-boot:

Booting over JTAG/Ethernet
""""""""""""""""""""""""""

It is also possible to boot Zynq devices over USB and Ethernet. Flip the DIP switches to JTAG. The scripts ``remote_run.sh`` and ``local_run.sh`` in the ARTIQ-Zynq repository, intended for use with a remote JTAG server or a local connection to the core device respectively, are used at M-Labs to accomplish this. Both make use of the netboot tool ``artiq_netboot``, see also its source `here <https://git.m-labs.hk/M-Labs/artiq-netboot>`__, which is included in the ARTIQ-Zynq development environment. Adapt the relevant script to your system or read it closely to understand the options and the commands being run; note for example that ``remote_run.sh`` as written only supports ZC706.

You will need to generate the gateware, firmware and bootloader first, either through ``nix build`` or incrementally as below. After an incremental build, add the option ``-i`` when running either of the scripts. If using ``nix build``, note that target names of the form ``<board>-<variant>-jtag`` (run ``nix flake show`` to see all targets) will output the three necessary files without combining them into ``boot.bin``.

.. warning::
    A known Xilinx hardware bug on Zynq prevents repeatedly loading the SZL bootloader over JTAG (i.e. repeated calls of the ``*_run.sh`` scripts) without a POR reset. On Kasli-SoC, you can physically set a jumper on the ``PS_POR_B`` pins of your board and use the M-Labs `POR reset script <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/kasli_soc_por.py>`_.

Zynq incremental build
^^^^^^^^^^^^^^^^^^^^^^

The ``boot.bin`` file used in a Zynq SD card boot is in practice the combination of several files, normally ``top.bit`` (the gateware), ``runtime`` or ``satman`` (the firmware) and ``szl.elf`` (an open-source bootloader for Zynq `written by M-Labs <https://git.m-labs.hk/M-Labs/zynq-rs/src/branch/master/szl>`_, used in ARTIQ in place of Xilinx's FSBL). In some circumstances, especially if you are developing ARTIQ, you may prefer to construct these components separately. Be sure that you have cloned the repository and entered the development environment as described above.

To compile the gateware and firmware, enter the ``src`` directory and run two commands as follows:

For Kasli-SoC: ::

    $ gateware/kasli_soc.py -g ../build/gateware <description.json>
    $ make TARGET=kasli_soc GWARGS="path/to/description.json" <fw-type>

For ZC706: ::

    $ gateware/zc706.py -g ../build/gateware -V <variant>
    $ make TARGET=zc706 GWARGS="-V <variant>" <fw-type>

where ``<fw-type>`` is ``runtime`` for standalone or DRTIO master builds and ``satman`` for DRTIO satellites. Both the gateware and the firmware will be generated in the ``../build`` destination directory. At this stage you can :ref:`boot from JTAG <zynq-jtag-boot>`; either of the ``*_run.sh`` scripts will expect the gateware and firmware files at their default locations, and the ``szl.elf`` bootloader is retrieved automatically.

.. warning::
    Note that in between runs of ``make`` it is necessary to manually clear the ``build`` directory, even when switching between targets, or ``make`` will do nothing.

If you prefer to boot from SD card, you will need to construct your own ``boot.bin``. Build ``szl.elf`` from source by running a command of the form: ::

    $ nix build git+https://git.m-labs.hk/m-labs/zynq-rs#<board>-szl

For easiest access run this command in the ``build`` directory. The ``szl.elf`` file will be in the subdirectory ``result``. To combine all three files into the boot image, create a file called ``boot.bif`` in ``build`` with the following contents: ::

    the_ROM_image:
    {
        [bootloader]result/szl.elf
        gateware/top.bit
        firmware/armv7-none-eabihf/release/<fw-type>
    }

Save this file. Now use ``mkbootimage`` to create ``boot.bin``: ::

    $ mkbootimage boot.bif boot.bin

Boot from SD card as above.

@@ -1,49 +1,49 @@

Compiler
========

The ARTIQ compiler transforms the Python code of the kernels into machine code executable on the core device. For limited purposes (normally, obtaining executable binaries of idle and startup kernels), it can be accessed through :mod:`~artiq.frontend.artiq_compile`. Otherwise it is invoked automatically whenever a function with an applicable decorator is called.

ARTIQ kernel code accepts *nearly,* but not quite, a strict subset of Python 3. The necessities of real-time operation impose a harsher set of limitations; as a result, many Python features are necessarily omitted, and there are some specific discrepancies (see also :ref:`compiler-pitfalls`).

In general, ARTIQ Python supports only statically typed variables; it implements no heap allocation or garbage collection systems, essentially disallowing any heap-based data structures (although lists and arrays remain available in a stack-based form); and it cannot use runtime dispatch, meaning that, for example, all elements of an array must be of the same type. Nonetheless, technical details aside, a basic knowledge of Python is entirely sufficient to write ARTIQ experiments.

.. note::
    The ARTIQ compiler is now in its second iteration. The third generation, known as NAC3, is `currently in development <https://git.m-labs.hk/M-Labs/nac3>`_, and available for pre-alpha experimental use. NAC3 represents a major overhaul of ARTIQ compilation, and will feature much faster compilation speeds, a greatly improved type system, and more predictable and transparent operation. It is compatible with ARTIQ firmware starting at ARTIQ-7. Instructions for installation and basic usage differences can also be found `on the M-Labs Forum <https://forum.m-labs.hk/d/392-nac3-new-artiq-compiler-3-prealpha-release>`_. While NAC3 is a work in progress and many important features remain unimplemented, installation and feedback is welcomed.

ARTIQ Python code
-----------------

A variety of short experiments can be found in the subfolders of ``artiq/examples``, especially under ``kc705_nist_clock/repository`` and ``no_hardware/repository``. Reading through these will give you a general idea of what ARTIQ Python is capable of and how to use it.

Functions and decorators
^^^^^^^^^^^^^^^^^^^^^^^^

The ARTIQ compiler recognizes several specialized decorators, which determine the way the decorated function will be compiled and handled.

``@kernel`` (see :meth:`~artiq.language.core.kernel`) designates kernel functions, which will be compiled for and executed on the core device; the basic setup and background for kernels is detailed on the :doc:`getting_started_core` page. ``@subkernel`` (:meth:`~artiq.language.core.subkernel`) designates subkernel functions, which are largely similar to kernels except that they are executed on satellite devices in a DRTIO setting, with some associated limitations; they are described in more detail on the :doc:`using_drtio_subkernels` page.

``@rpc`` (:meth:`~artiq.language.core.rpc`) designates functions to be executed on the host machine, which are compiled and run in regular Python, outside of the core device's real-time limitations. Notably, functions without decorators are assumed to be host-bound by default, and treated identically to an explicitly marked ``@rpc``. As a result, the explicit decorator is only really necessary when specifying additional flags (for example, ``flags={"async"}``, see below).

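As a brief sketch of what such a flagged declaration can look like (the method name and body are illustrative only): ::

    @rpc(flags={"async"})
    def log_result(self, x):
        # Runs on the host; an asynchronous RPC returns nothing and the kernel
        # does not wait for it to complete before continuing.
        print("result:", x)
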
``@portable`` (:meth:`~artiq.language.core.portable`) designates functions to be executed *on the same device they are called.* In other words, when called from a kernel, a portable is executed as a kernel; when called from a subkernel, it is executed as a kernel, on the same satellite device as the calling subkernel; when called from a host function, it is executed on the host machine.

``@host_only`` (:meth:`~artiq.language.core.host_only`) functions are executed fully on the host, similarly to ``@rpc``, but calling them from a kernel as an RPC will be refused by the compiler. It can be used to mark functions which should only ever be called by the host.

.. warning::
    ARTIQ goes to some lengths to cache code used in experiments correctly, so that experiments run according to the state of the code when they were started, even if the source is changed during the run time. Python itself annoyingly fails to implement this (see also `issue #416 <https://github.com/m-labs/artiq/issues/416>`_), necessitating a workaround on ARTIQ's part. One particular downstream limitation is that the ARTIQ compiler is unable to recognize decorators with path prefixes, i.e.: ::

        import artiq.experiment as aq

        [...]

        @aq.kernel
        def run(self):
            pass

    will fail to compile. As long as ``from artiq.experiment import *`` is used as in the examples, this is never an issue. If prefixes are strongly preferred, a possible workaround is to import decorators separately, as e.g. ``from artiq.language.core import kernel``.

.. _compiler-types:

ARTIQ types
^^^^^^^^^^^

Python/NumPy types correspond to ARTIQ types as follows:

@@ -76,16 +76,16 @@ Python/NumPy types correspond to ARTIQ types as follows:

| numpy.int64   | TInt64                  |
+---------------+-------------------------+
| numpy.float64 | TFloat                  |
+---------------+-------------------------+

Integers are 32-bit by default but may be converted to 64-bit with ``numpy.int64``.

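A minimal sketch of the difference (the names and values are placeholders): ::

    a = 10               # 32-bit integer (TInt32) by default
    b = numpy.int64(10)  # 64-bit integer (TInt64)
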
The ARTIQ compiler can be thought of as overriding all built-in Python types, and types in kernel code cannot always be assumed to behave as they would in host Python. In particular, normally heap-allocated types such as arrays, lists, and strings are very limited in what they support. Strings must be constant and lists and arrays must be of constant size. Methods like ``append``, ``push``, and ``pop`` are unavailable as a matter of principle, and will not compile. Certain types, notably dictionaries, have no ARTIQ implementation and cannot be used in kernels at all.

.. tip::
    Instead of pushing or appending, preallocate for the maximum number of elements you expect with a list comprehension, i.e. ``x = [0 for _ in range(1024)]``, and then keep a variable ``n`` noting the last filled element of the array. Afterwards, ``x[0:n]`` will give you a list with that number of elements.
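
A minimal sketch of this pattern (the values used are placeholders): ::

    x = [0 for _ in range(1024)]  # preallocate for the maximum expected size
    n = 0
    for value in [3, 1, 4]:       # stand-in for however the data is actually produced
        x[n] = value
        n += 1
    # x[0:n] now holds only the elements actually filled in
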
Multidimensional arrays are allowed (using NumPy syntax). Element-wise operations (e.g. ``+``, ``/``), matrix multiplication (``@``) and multidimensional indexing are supported; slices and views (currently) are not.

User-defined classes are supported, provided their attributes are of other supported types (attributes that are not used in the kernel are ignored and thus unrestricted). When several instances of a user-defined class are referenced from the same kernel, every attribute must have the same type in every instance of the class.

@@ -99,11 +99,11 @@ Kernel code can call host functions without any additional ceremony. However, su

    def return_four() -> TInt32:
        return 4

Kernels can freely modify attributes of objects shared with the host. However, by necessity, these modifications are actually applied to local copies of the objects, as the latency of immediate writeback would be unsupportable in a real-time environment. Instead, modifications are written back *when the kernel completes;* notably, this means RPCs called by a kernel itself will only have access to the unmodified host version of the object, as the kernel hasn't finished execution yet. In some cases, accessing data on the host is better handled by calling RPCs specifically to make the desired modifications.

.. warning::
    Kernels *cannot and should not* return lists, arrays, or strings they have created, or any objects containing them; in the absence of a heap, the way these values are allocated means they cannot outlive the kernels they are created in. Trying to do so will normally be discovered by lifetime tracking and result in compilation errors, but in certain cases lifetime tracking will fail to detect a problem and experiments will encounter memory corruption at runtime. For example: ::

        def func(a):
            return a

@@ -117,53 +117,49 @@ Kernels can freely modify attributes of objects shared with the host. However, b

            # results in memory corruption
            return func([1, 2, 3])

    will compile, **but corrupts at runtime.** On the other hand, lists, arrays, or strings can and should be used as inputs for RPCs, and this is the preferred method of returning data to the host. In this way the data is inherently read and sent before the kernel completes and there are no allocation issues.

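A hedged sketch of this preferred approach (the method and dataset names are illustrative): ::

    @rpc
    def store_readings(self, readings):
        # Runs on the host, which receives a copy of the kernel's list
        # and may keep it as long as it likes.
        self.set_dataset("readings", readings)

    @kernel
    def run(self):
        readings = [0 for _ in range(8)]
        # ... fill in readings ...
        self.store_readings(readings)  # data is sent out before the kernel ends
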
Available built-in functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ARTIQ makes various useful built-in and mathematical functions from Python, NumPy, and SciPy available in kernel code. They are not guaranteed to be perfectly equivalent to their host namesakes (for example, ``numpy.rint()`` normally rounds-to-even, but in kernel code rounds toward zero) but their behavior should be basically predictable.

.. list-table::
    :header-rows: 1

    * - Reference
      - Functions
    * - `Python built-ins <https://docs.python.org/3/library/functions.html>`_
      - - ``len()``, ``round()``, ``abs()``, ``min()``, ``max()``
        - ``print()`` (with caveats; see below)
        - all basic type conversions (``int()``, ``float()`` etc.)
    * - `NumPy mathematic utilities <https://numpy.org/doc/stable/reference/routines.math.html>`_
      - - ``sqrt()``, ``cbrt()``
        - ``fabs()``, ``fmax()``, ``fmin()``
        - ``floor()``, ``ceil()``, ``trunc()``, ``rint()``
    * - `NumPy exponents and logarithms <https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms>`_
      - - ``exp()``, ``exp2()``, ``expm1()``
        - ``log()``, ``log2()``, ``log10()``
    * - `NumPy trigonometric and hyperbolic functions <https://numpy.org/doc/stable/reference/routines.math.html#trigonometric-functions>`_
      - - ``sin()``, ``cos()``, ``tan()``
        - ``arcsin()``, ``arccos()``, ``arctan()``
        - ``sinh()``, ``cosh()``, ``tanh()``
        - ``arcsinh()``, ``arccosh()``, ``arctanh()``
        - ``hypot()``, ``arctan2()``
    * - `NumPy floating point routines <https://numpy.org/doc/stable/reference/routines.math.html#floating-point-routines>`_
      - - ``copysign()``, ``nextafter()``
    * - `SciPy special functions <https://docs.scipy.org/doc/scipy/reference/special.html>`_
      - - ``erf()``, ``erfc()``
        - ``gamma()``, ``gammaln()``
        - ``j0()``, ``j1()``, ``y0()``, ``y1()``

Basic NumPy array handling (``np.array()``, ``numpy.transpose()``, ``numpy.full()``, ``@``, element-wise operation, etc.) is also available. NumPy functions are implicitly broadcast when applied to arrays.

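A short sketch of the kind of manipulation this permits (assuming ``import numpy as np``; the values are arbitrary): ::

    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = np.transpose(a)
    c = a @ b + 1.0   # matrix product, then element-wise addition broadcast over the result
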
Print and logging functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. warning::
    A kernel ``print`` call normally specifically prints to the host machine, either the terminal of ``artiq_run`` or the dashboard log when using ``artiq_dashboard``. This makes it effectively an RPC, with some of the corresponding limitations. In subkernels and whenever compiled through ``artiq_compile``, where RPCs are not supported, it is instead considered a print to the local log (UART only in the case of satellites, UART and core log for master/standalone devices).

ARTIQ offers two native built-in logging functions: ``rtio_log()``, which prints to the :ref:`RTIO log <rtio-analyzer>`, as retrieved by :mod:`~artiq.frontend.artiq_coreanalyzer`, and ``core_log()``, which prints directly to the core log, regardless of context or network connection status. Both exist for debugging purposes, especially in contexts where a ``print()`` RPC is not suitable, such as in idle/startup kernels or when debugging delicate RTIO slack issues which may be significantly affected by the overhead of ``print()``.

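A hedged sketch of their use inside a kernel (the log name and values are illustrative): ::

    rtio_log("detuning", self.detuning_mu)   # appears in artiq_coreanalyzer output
    core_log("reached calibration step 3")   # goes straight to the core log
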
``print()`` itself is in practice an RPC to the regular host Python ``print()``, i.e. with output either in the terminal of :mod:`~artiq.frontend.artiq_run` or in the client logs when using :mod:`~artiq.frontend.artiq_dashboard` or :mod:`~artiq.frontend.artiq_compile`. This means on one hand that it should not be used in idle, startup, or subkernels, and on the other hand that it suffers from some of the timing limitations of any other RPC, especially if the RPC queue is full. Accordingly, it is important to be aware that the timing of ``print()`` outputs can't reliably be used to debug timing in kernels, and especially not the timing of other RPCs.

.. _compiler-pitfalls:

Pitfalls
--------

@@ -29,13 +29,21 @@ builtins.__in_sphinx__ = True

# we cannot use autodoc_mock_imports (does not help with argparse)
mock_modules = ["artiq.gui.waitingspinnerwidget",
                "artiq.gui.flowlayout",
                "artiq.gui.state",
                "artiq.gui.log",
                "artiq.gui.models",
                "artiq.compiler.module",
                "artiq.compiler.embedding",
                "artiq.dashboard.waveform",
                "artiq.coredevice.jsondesc",
                "qasync", "lmdb", "dateutil.parser", "prettytable", "PyQt6",
                "h5py", "llvmlite", "pythonparser", "tqdm", "jsonschema"]

for module in mock_modules:
    sys.modules[module] = Mock()

@@ -70,8 +78,7 @@ extensions = [
    'sphinx.ext.napoleon',
    'sphinxarg.ext',
    'sphinxcontrib.wavedrom',  # see also below for config
    'sphinxcontrib.jquery',
    'sphinxcontrib.tikz'  # see also below for config
]

mathjax_path = "https://m-labs.hk/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML.js"

@@ -139,19 +146,19 @@ pygments_style = 'sphinx'

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False

# If true, Sphinx will warn about *all* references where the target cannot be found.
nitpicky = True

# (type, target) regex tuples to ignore when generating warnings in 'nitpicky' mode
# i.e. objects that are not documented in this manual and do not need to be
nitpick_ignore_regex = [
    (r'py:.*', r'numpy..*'),
    (r'py:.*', r'sipyco..*'),
    ('py:const', r'.*'),  # no constants are documented anyway
    ('py:attr', r'.*'),  # nor attributes
    (r'py:.*', r'artiq.gateware.*'),
    ('py:mod', r'artiq.frontend.*'),
    ('py:mod', r'artiq.test.*'),
    ('py:mod', r'artiq.applets.*'),
    ('py:mod', 'artiq.experiment'),
    ('py:class', 'dac34H84'),
    ('py:class', 'trf372017'),
    ('py:class', r'list(.*)'),

@@ -327,10 +334,3 @@ texinfo_documents = [

offline_skin_js_path = '_static/default.js'
offline_wavedrom_js_path = '_static/WaveDrom.js'
render_using_wavedrompy = True

# -- Options for sphinxcontrib-tikz ---------------------------------------
# tikz_proc_suite = pdf2svg
# tikz_transparent = True
# these are the defaults

tikz_tikzlibraries = 'positioning, shapes, arrows.meta'

@@ -1,7 +1,7 @@

Networking and configuration
============================

.. _core-device-networking:

Setting up core device networking
---------------------------------

@@ -24,7 +24,7 @@ If ping fails, check that the Ethernet LED is ON; on Kasli, it is the LED next t

Core management tool
^^^^^^^^^^^^^^^^^^^^

The tool used to configure the core device is the command-line utility :mod:`~artiq.frontend.artiq_coremgmt`. In order for it to connect to your core device, it is necessary to supply it somehow with the correct IP address for your core device. This can be done directly through use of the ``-D`` option, for example in: ::

    $ artiq_coremgmt -D <IP_address> log

@@ -33,7 +33,7 @@ The tool used to configure the core device is the command-line utility :mod:`~ar

Normally, however, the core device IP is supplied through the *device database* for your system, which comes in the form of a Python script called ``device_db.py`` (see also :ref:`device-db`). If you purchased a system from M-Labs, the ``device_db.py`` for your system will have been provided for you, either on the USB stick, inside ``~/artiq`` on your NUC, or sent by email.

Make sure the field ``core_addr`` at the top of the file is set to your core device's correct IP address, and always execute :mod:`~artiq.frontend.artiq_coremgmt` from the same directory the device database is placed in.

Once you can reach your core device, the IP can be changed at any time by running: ::

@@ -63,36 +63,34 @@ For Kasli or KC705:

    $ artiq_mkfs flash_storage.img [-s mac xx:xx:xx:xx:xx:xx] [-s ip xx.xx.xx.xx/xx] [-s ipv4_default_route xx.xx.xx.xx] [-s ip6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx] [-s ipv6_default_route xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx]
    $ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start

On Kasli or Kasli-SoC devices, specifying the MAC address is unnecessary, as they can obtain it from their EEPROM. If you only want to access the core device from the same subnet, the default gateway and IPv4 prefix length may also be omitted. On any board, once a device can be reached by :mod:`~artiq.frontend.artiq_coremgmt`, these values can be set and edited at any time, following the procedure for IP above.

Regarding IPv6, note that the device also has a link-local address that corresponds to its EUI-64, which can be used simultaneously with the (potentially unrelated) IPv6 address defined by using the ``ip6`` configuration key.

If problems persist, see the :ref:`network troubleshooting <faq-networking>` section of the FAQ.

.. _core-device-config:

Configuring the core device
---------------------------

.. note::
    The following steps are optional, and you only need to execute them if they are necessary for your specific system. To learn more about how ARTIQ works and how to use it first, you might skip to the first tutorial page, :doc:`rtio`. For all configuration options, the core device generally must be restarted for changes to take effect.

Flash idle and/or startup kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The *idle kernel* is the kernel (that is, a piece of code running on the core device; see :doc:`rtio` for further explanation) which the core device runs in between experiments and whenever not connected to the host. It is saved directly to the core device's flash storage in compiled form. Potential uses include cleanup of the environment between experiments, state maintenance for certain hardware, or anything else that should run continuously whenever the system is not otherwise occupied.

To flash an idle kernel, first write an idle experiment (a minimal sketch is given below). Note that since the idle kernel runs regardless of whether the core device is connected to the host, remote procedure calls or RPCs (functions called by a kernel to run on the host) are forbidden and the ``run()`` method must be a kernel marked with ``@kernel``. Once written, you can compile and flash your idle experiment: ::

    $ artiq_compile idle.py
    $ artiq_coremgmt config write -f idle_kernel idle.elf
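
For reference, an idle experiment might look something like the following sketch, which simply blinks an LED (``led0`` is an assumed TTL output from your device database): ::

    from artiq.experiment import *


    class Idle(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led0")

        @kernel
        def run(self):
            self.core.reset()
            while True:
                self.led0.pulse(250*ms)
                delay(250*ms)
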
The *startup kernel* is a kernel executed once and only once immediately whenever the core device powers on. Uses include initializing DDSes and setting TTL directions. For DRTIO systems, the startup kernel should wait until the desired destinations, including local RTIO, are up, using ``self.core.get_rtio_destination_status`` (see :meth:`~artiq.coredevice.core.Core.get_rtio_destination_status`).
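
A hedged sketch of such a wait, inside a startup kernel's ``run()`` method (destination ``0`` stands in for local RTIO here): ::

    @kernel
    def run(self):
        self.core.reset()
        # wait for the local RTIO destination to come up before using it
        while not self.core.get_rtio_destination_status(0):
            delay(50*ms)
        # ... initialize DDSes, set TTL directions, etc. ...
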
To flash a startup kernel, proceed as with the idle kernel, but using the ``startup_kernel`` key in the :mod:`~artiq.frontend.artiq_coremgmt` command.

.. note::
    Subkernels (see :doc:`using_drtio_subkernels`) are allowed in idle (and startup) experiments without any additional ceremony. :mod:`~artiq.frontend.artiq_compile` will produce a ``.tar`` rather than a ``.elf``; simply substitute ``idle.tar`` for ``idle.elf`` in the ``artiq_coremgmt config write`` command.

Select the RTIO clock source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -106,12 +104,10 @@ The default is to use an internal 125MHz clock. To select a source, use a comman

See :ref:`core-device-clocking` for availability of specific options.

.. _config-rtiomap:

Set up resolving RTIO channels to their names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This feature allows you to print the channels' respective names alongside their numbers in RTIO error messages. To enable it, run the :mod:`~artiq.frontend.artiq_rtiomap` tool and write its result into the device config at the ``device_map`` key: ::

    $ artiq_rtiomap dev_map.bin
    $ artiq_coremgmt config write -f device_map dev_map.bin

@@ -1,16 +1,16 @@

Core device
===========

The core device is an FPGA-based hardware component that contains a softcore or hardcore CPU tightly coupled with the so-called RTIO core, which runs in gateware and provides precision timing. The CPU executes Python code that is statically compiled by the ARTIQ compiler and communicates with peripherals (TTL, DDS, etc.) through the RTIO core, as described in :doc:`rtio`. This architecture provides high timing resolution, low latency, low jitter, high-level programming capabilities, and good integration with the rest of the Python experiment code.

While it is possible to use the other parts of ARTIQ (controllers, master, GUI, dataset management, etc.) without a core device, most use cases will require it.

.. _configuration-storage:

Configuration storage
---------------------

The core device reserves some storage space (either flash or directly on SD card, depending on target board) to store configuration data. The configuration data is organized as a list of key-value records, accessible either through :mod:`~artiq.frontend.artiq_mkfs` and :mod:`~artiq.frontend.artiq_flash` or, preferably in most cases, the ``config`` option of the :mod:`~artiq.frontend.artiq_coremgmt` core management tool (see below). Information can be stored to keys of any name, but the specific keys currently used and referenced by ARTIQ are summarized below:

``idle_kernel``
    Stores (compiled ``.tar`` or ``.elf`` binary of) idle kernel. See :ref:`core-device-config`.

@@ -37,7 +37,7 @@ The core device reserves some storage space (either flash or directly on SD card

``routing_table``
    Sets the routing table in DRTIO systems; see :ref:`drtio-routing`. If not set, a star topology is assumed.
``device_map``
    If set, allows the core log to connect RTIO channels to device names and use device names as well as channel numbers in log output. A correctly formatted table can be automatically generated with :mod:`~artiq.frontend.artiq_rtiomap`, see :ref:`Utilities<rtiomap-tool>`.
``net_trace``
    If set to ``1``, will activate net trace (print all packets sent and received to UART and core log). This will considerably slow down all network response from the core. Not applicable for ARTIQ-Zynq (Kasli-SoC, ZC706).
``panic_reset``

@@ -45,7 +45,7 @@ The core device reserves some storage space (either flash or directly on SD card

``no_flash_boot``
    If set to ``1``, will disable flash boot. Network boot is attempted if possible. Not applicable for ARTIQ-Zynq.
``boot``
    Allows full firmware/gateware (``boot.bin``) to be written with :mod:`~artiq.frontend.artiq_coremgmt`, on ARTIQ-Zynq systems only.

Common configuration commands
-----------------------------

@@ -65,7 +65,7 @@ You do not need to remove a record in order to change its value. Just overwrite

You can write several records at once::

    $ artiq_coremgmt config write -s key1 value1 -f key2 filename -s key3 value3

You can also write entire files in a record using the ``-f`` option. This is useful for instance to write the startup and idle kernels into the flash storage::

@@ -75,36 +75,7 @@ You can also write entire files in a record using the ``-f`` option. This is use

The same option is used to write ``boot.bin`` in ARTIQ-Zynq. Note that the ``boot`` key is write-only.

See also the full reference of :mod:`~artiq.frontend.artiq_coremgmt` in :ref:`Utilities <core-device-management-tool>`.

.. _core-device-clocking:

Clocking
--------

The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If choosing the latter, an external reference must be provided (via front panel SMA input on Kasli boards). Valid configuration options include:

* ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock,
* ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option),
* ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock,
* ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock,
* ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock.

The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
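
For example, to select an external 125MHz reference (values as listed above): ::

    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_125to125

The core device must then be restarted for the change to take effect.
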
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the :ref:`JSON description file <system-description>`. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.

If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.

If not using WRPLL, the PLL can also be bypassed entirely with the options:

* ``ext0_bypass`` (input clock used directly)
* ``ext0_bypass_125`` (explicit alias)
* ``ext0_bypass_100`` (explicit alias)

Bypassing the PLL ensures the skews between input clock, downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching the master.

Board details
-------------

@@ -114,23 +85,23 @@ FPGA board ports

All boards have a serial interface running at 115200bps 8-N-1 that can be used for debugging.

Kasli and Kasli-SoC
^^^^^^^^^^^^^^^^^^^

`Kasli <https://github.com/sinara-hw/Kasli/wiki>`_ and `Kasli-SoC <https://github.com/sinara-hw/Kasli-SOC/wiki>`_ are versatile core devices designed for ARTIQ as part of the open-source `Sinara <https://github.com/sinara-hw/meta/wiki>`_ family of boards. All support interfacing to various EEM daughterboards (TTL, DDS, ADC, DAC...) through twelve onboard EEM ports. Kasli is based on a Xilinx Artix-7 FPGA, and Kasli-SoC, which runs on a separate `Zynq port <https://git.m-labs.hk/M-Labs/artiq-zynq>`_ of the ARTIQ firmware, is based on a Zynq-7000 SoC, notably including an ARM CPU allowing for much heavier software computations at high speeds. They are architecturally very different but supply similar feature sets. Kasli itself exists in two versions, of which the improved Kasli v2.0 is now in more common use, but the original v1.0 remains supported by ARTIQ.

Kasli can be connected to the network using a 1000Base-X SFP module, installed into the SFP0 cage. Kasli-SoC features a built-in Ethernet port to use instead. If configured as a DRTIO satellite, both boards instead reserve SFP0 for the upstream DRTIO connection; remaining SFP cages are available for downstream connections. Equally, if used as a DRTIO master, all free SFP cages are available for downstream connections (i.e. all but SFP0 on Kasli, all four on Kasli-SoC).

The DRTIO line rate depends on the RTIO clock frequency: e.g., at 125MHz the line rate is 2.5Gbps, at 150MHz it is 3.0Gbps, etc. See :ref:`core-device-clocking` above for information on RTIO clocks.

KC705 and ZC706
^^^^^^^^^^^^^^^

Two high-end evaluation kits are also supported as alternative ARTIQ core device target boards, respectively the Kintex7 `KC705 <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and Zynq-SoC `ZC706 <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_, both from Xilinx. ZC706, like Kasli-SoC, runs on the ARTIQ-Zynq port. Both are supported in several set variants, namely NIST CLOCK and QC2 (FMC), either available in master, satellite, or standalone variants. See also :doc:`building_developing` for more on system variants.

Common KC705 problems
"""""""""""""""""""""

* The SW13 switches on the board need to be set to 00001.
* When connected, the CLOCK adapter breaks the JTAG chain due to TDI not being connected to TDO on the FMC mezzanine.
* On some boards, the JTAG USB connector is not correctly soldered.

@ -140,13 +111,11 @@ VADJ
|
|||
|
||||
With the NIST CLOCK and QC2 adapters, for safe operation of the DDS buses (to prevent damage to the IO banks of the FPGA), the FMC VADJ rail of the KC705 should be changed to 3.3V. Plug the Texas Instruments USB-TO-GPIO PMBus adapter into the PMBus connector in the corner of the KC705 and use the Fusion Digital Power Designer software to configure (requires Windows). Write to chip number U55 (address 52), channel 4, which is the VADJ rail, to make it 3.3V instead of 2.5V. Power cycle the KC705 board to check that the startup voltage on the VADJ rail is now 3.3V.
|
||||
|
||||
Variant details
|
||||
---------------
|
||||
|
||||
NIST CLOCK
|
||||
^^^^^^^^^^
|
||||
|
||||
With the KC705 CLOCK hardware, the TTL lines are mapped as follows:
|
||||
With the CLOCK hardware, the TTL lines are mapped as follows:
|
||||
|
||||
+--------------------+-----------------------+--------------+
|
||||
| RTIO channel | TTL line | Capability |
|
||||
|
@ -186,21 +155,11 @@ The board has RTIO SPI buses mapped as follows:
|
|||
|
||||
The DDS bus is on channel 27.
|
||||
|
||||
The ZC706 variant is identical except for the following differences:
|
||||
|
||||
- The SMA GPIO on channel 18 is replaced by an Input+Output capable PMOD1_0 line.
|
||||
- Since there is no SDIO on the programmable logic side, channel 26 is instead occupied by an additional SPI:
|
||||
|
||||
+--------------+------------------+--------------+--------------+--------------+
|
||||
| RTIO channel | CS_N | MOSI | MISO | CLK |
|
||||
+==============+==================+==============+==============+==============+
|
||||
| 26 | PMOD_SPI_CS_N | PMOD_SPI_MOSI| PMOD_SPI_MISO| PMOD_SPI_CLK |
|
||||
+--------------+------------------+--------------+--------------+--------------+
|
||||
|
||||
NIST QC2
|
||||
^^^^^^^^
|
||||
|
||||
With the KC705 QC2 hardware, the TTL lines are mapped as follows:
|
||||
With the QC2 hardware, the TTL lines are mapped as follows:
|
||||
|
||||
+--------------------+-----------------------+--------------+
|
||||
| RTIO channel | TTL line | Capability |
|
||||
|
@ -234,11 +193,38 @@ The board has RTIO SPI buses mapped as follows:
|
|||
|
||||
There are two DDS buses on channels 50 (LPC, DDS0-DDS11) and 51 (HPC, DDS12-DDS23).
|
||||
|
||||
|
||||
The QC2 hardware uses TCA6424A I2C I/O expanders to define the directions of its TTL buffers. There is one such expander per FMC card, and they are selected using the PCA9548 on the KC705.
|
||||
|
||||
To avoid I/O contention, the startup kernel should first program the TCA6424A expanders and then call ``output()`` on all ``TTLInOut`` channels that should be configured as outputs. See :mod:`artiq.coredevice.i2c` for more details.
|
||||
To avoid I/O contention, the startup kernel should first program the TCA6424A expanders and then call ``output()`` on all ``TTLInOut`` channels that should be configured as outputs.
|
||||
|
||||
The ZC706 is identical except for the following differences:
|
||||
See :mod:`artiq.coredevice.i2c` for more details.
|
||||
|
||||
- The SMA GPIO is once again replaced with PMOD1_0.
|
||||
- The first four TTLs also have edge counters, on channels 52, 53, 54, and 55.
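For the QC2 hardware, such a startup kernel might look broadly like the following sketch. The device names ``expander0`` and ``ttl0`` are placeholders for whatever your device database defines, and the value written to the expander is a placeholder as well -- consult the QC2 documentation for the bit pattern corresponding to the buffer directions you want: ::

    from artiq.experiment import *

    class StartupKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("expander0")  # a TCA6424A entry (placeholder name)
            self.setattr_device("ttl0")       # a TTLInOut channel to be used as an output

        @kernel
        def run(self):
            self.core.reset()
            # placeholder value: set the expander outputs that select the
            # desired TTL buffer directions
            self.expander0.set(0x000000)
            self.core.break_realtime()
            self.ttl0.output()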
|
||||
.. _core-device-clocking:
|
||||
|
||||
Clocking
|
||||
--------
|
||||
|
||||
The core device generates the RTIO clock using a PLL locked either to an internal crystal or to an external frequency reference. If the latter is chosen, an external reference must be provided (via the front panel SMA input on Kasli boards). Valid configuration options include:
|
||||
|
||||
* ``int_100`` - internal crystal reference is used to synthesize a 100MHz RTIO clock,
|
||||
* ``int_125`` - internal crystal reference is used to synthesize a 125MHz RTIO clock (default option),
|
||||
* ``int_150`` - internal crystal reference is used to synthesize a 150MHz RTIO clock,
|
||||
* ``ext0_synth0_10to125`` - external 10MHz reference clock used to synthesize a 125MHz RTIO clock,
|
||||
* ``ext0_synth0_80to125`` - external 80MHz reference clock used to synthesize a 125MHz RTIO clock,
|
||||
* ``ext0_synth0_100to125`` - external 100MHz reference clock used to synthesize a 125MHz RTIO clock,
|
||||
* ``ext0_synth0_125to125`` - external 125MHz reference clock used to synthesize a 125MHz RTIO clock.
|
||||
|
||||
The selected option can be observed in the core device boot logs and accessed using ``artiq_coremgmt config`` with key ``rtio_clock``.
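For instance, the option can be read back or changed from the host with commands along these lines (a changed value only takes effect once the device is rebooted): ::

    $ artiq_coremgmt config read rtio_clock
    $ artiq_coremgmt config write -s rtio_clock ext0_synth0_10to125
    $ artiq_coremgmt reboot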
|
||||
|
||||
As of ARTIQ 8, it is now possible for Kasli and Kasli-SoC configurations to enable WRPLL -- a clock recovery method using `DDMTD <http://white-rabbit.web.cern.ch/documents/DDMTD_for_Sub-ns_Synchronization.pdf>`_ and Si549 oscillators -- both to lock the main RTIO clock and (in DRTIO configurations) to lock satellites to master. This is set by the ``enable_wrpll`` option in the JSON description file. Because WRPLL requires slightly different gateware and firmware, it is necessary to re-flash devices to enable or disable it in extant systems. If you would like to obtain the firmware for a different WRPLL setting through AFWS, write to the helpdesk@ email.
|
||||
|
||||
If phase noise performance is the priority, it is recommended to use ``ext0_synth0_125to125`` over other ``ext0`` options, as this bypasses the (noisy) MMCM.
|
||||
|
||||
If not using WRPLL, the PLL can also be bypassed entirely with the options
|
||||
|
||||
* ``ext0_bypass`` (input clock used directly)
|
||||
* ``ext0_bypass_125`` (explicit alias)
|
||||
* ``ext0_bypass_100`` (explicit alias)
|
||||
|
||||
Bypassing the PLL ensures the skews between input clock, downstream clock outputs, and RTIO clock are deterministic across reboots of the system. This is useful when phase determinism is required in situations where the reference clock fans out to other devices before reaching the master.
|
|
@ -1,4 +1,4 @@
|
|||
Core real-time drivers
|
||||
Core drivers reference
|
||||
======================
|
||||
|
||||
These drivers are for the core device and the peripherals closely integrated into it, which do not use the controller mechanism.
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
Core language and environment
|
||||
=============================
|
||||
Core language reference
|
||||
=======================
|
||||
|
||||
The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in ``artiq.experiment`` and can be imported with ``from artiq.experiment import *``.
|
||||
The most commonly used features from the ARTIQ language modules and from the core device modules are bundled together in :mod:`artiq.experiment` and can be imported with ``from artiq.experiment import *``.
|
||||
|
||||
:mod:`artiq.language.core` module
|
||||
---------------------------------
|
||||
|
|
|
@ -10,9 +10,9 @@ Default network ports
|
|||
+---------------------------------+--------------+
|
||||
| Core device (analyzer) | 1382 |
|
||||
+---------------------------------+--------------+
|
||||
| MonInj (core device or proxy) | 1383 |
|
||||
| Moninj (core device or proxy) | 1383 |
|
||||
+---------------------------------+--------------+
|
||||
| MonInj (proxy control) | 1384 |
|
||||
| Moninj (proxy control) | 1384 |
|
||||
+---------------------------------+--------------+
|
||||
| Core analyzer proxy (proxy) | 1385 |
|
||||
+---------------------------------+--------------+
|
||||
|
|
|
@ -0,0 +1,22 @@
|
|||
.. _developing-artiq:
|
||||
|
||||
Developing ARTIQ
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
.. warning::
|
||||
This section is only for software or FPGA developers who want to modify ARTIQ. If you want to modify the firmware for Kasli-SoC or other ARM boards, please refer to the `ARTIQ on Zynq repository <https://git.m-labs.hk/M-Labs/artiq-zynq>`_. The steps described here are not required if you simply want to run experiments with ARTIQ. If you purchased a system from M-Labs or QUARTIQ, we normally provide board binaries for you.
|
||||
|
||||
The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
|
||||
ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier.
|
||||
|
||||
* Download Vivado from Xilinx and install it (by running the official installer in an FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment, which can be entered with the command ``vivado-env``). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
|
||||
* During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
|
||||
* Install the `Nix package manager <http://nixos.org/nix/>`_, version 2.4 or later. Prefer a single-user installation for simplicity.
|
||||
* If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly.
|
||||
* Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``).
|
||||
* Clone the ARTIQ Git repository and run ``nix develop`` at the root (where ``flake.nix`` is).
|
||||
* Make the current source code of ARTIQ available to the Python interpreter by running ``export PYTHONPATH=`pwd`:$PYTHONPATH``.
|
||||
* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli <description>.json``, using a JSON system description file.
|
||||
* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/<your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <installing-configuring-openocd>`. OpenOCD is already part of the flake's development environment.
|
||||
* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
|
||||
* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).
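Putting the steps above together, a typical session might look like the following, with ``my_system.json`` and ``my_variant`` standing in for your own system description and variant names: ::

    $ git clone https://github.com/m-labs/artiq.git
    $ cd artiq
    $ nix develop
    $ export PYTHONPATH=`pwd`:$PYTHONPATH
    $ python -m artiq.gateware.targets.kasli my_system.json
    $ artiq_flash --srcbuild -d artiq_kasli/my_variant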
|
|
@ -19,7 +19,7 @@ Full support for a specific device, called a network device support package or N
|
|||
|
||||
1. The `driver`, which contains the Python API functions to be called over the network and performs the I/O to the device. The top-level module of the driver should be called ``artiq.devices.XXX.driver``.
|
||||
2. The `controller`, which instantiates, initializes and terminates the driver, and sets up the RPC server. The controller is a front-end command-line tool to the user and should be called ``artiq.frontend.aqctl_XXX``. A ``setup.py`` entry must also be created to install it.
|
||||
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data, which would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters, must be transferred over the network API.
|
||||
3. An optional `client`, which connects to the controller and exposes the functions of the driver as a command-line interface. Clients are front-end tools (called ``artiq.frontend.aqcli_XXX``) that have ``setup.py`` entries. In most cases, a custom client is not needed and the generic ``sipyco_rpctool`` utility can be used instead. Custom clients are only required when large amounts of data must be transferred over the network API, that would be unwieldy to pass as ``sipyco_rpctool`` command-line parameters.
|
||||
4. An optional `mediator`, which is code executed on the client that supplements the network API. A mediator may contain kernels that control real-time signals such as TTL lines connected to the device. Simple devices use the network API directly and do not have a mediator. Mediator modules are called ``artiq.devices.XXX.mediator`` and their public classes are exported at the ``artiq.devices.XXX`` level (via ``__init__.py``) for direct import and use by the experiments.
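As a compact sketch of how the first two pieces fit together, using the ``hello`` naming that the worked example on this page also uses (paths and port number are illustrative): ::

    # artiq/devices/hello/driver.py -- the driver: plain Python, no RPC logic
    class Hello:
        def message(self, msg):
            print("message:", msg)

    # artiq/frontend/aqctl_hello.py -- the controller: exposes the driver over RPC
    from sipyco.pc_rpc import simple_server_loop
    from artiq.devices.hello.driver import Hello

    def main():
        simple_server_loop({"hello": Hello()}, "::1", 3249)

    if __name__ == "__main__":
        main()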
|
||||
|
||||
The driver and controller
|
||||
|
@ -120,7 +120,7 @@ To access this driver in an experiment, we can retrieve the ``Client`` instance
|
|||
Integration with ARTIQ experiments
|
||||
----------------------------------
|
||||
|
||||
Generally we will want to add the device to our :ref:`device database <device-db>` so that we can add it to an experiment with ``self.setattr_device`` and so the controller can be started and stopped automatically by a controller manager (the :mod:`~artiq_comtools.artiq_ctlmgr` utility from ``artiq-comtools``). To do so, add an entry to your device database in this format: ::
|
||||
Generally we will want to add the device to our :ref:`device database <device-db>` so that we can add it to an experiment with ``self.setattr_device`` and so the controller can be started and stopped automatically by a controller manager (the ``artiq_ctlmgr`` utility from ``artiq-comtools``). To do so, add an entry to your device database in this format: ::
|
||||
|
||||
device_db.update({
|
||||
"hello": {
|
||||
|
@ -213,7 +213,7 @@ Command line and options
|
|||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
* Controllers should be able to operate in "simulation" mode, specified with ``--simulation``, where they behave properly even if the associated hardware is not connected. For example, they can print the data to the console instead of sending it to the device, or dump it into a file.
|
||||
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter names.
|
||||
* The device identification (e.g. serial number, or entry in ``/dev``) to attach to must be passed as a command-line parameter to the controller. We suggest using ``-d`` and ``--device`` as parameter name.
|
||||
* Keep command line parameters consistent across clients/controllers. When adding new command line options, look for a client/controller that does a similar thing and follow its use of ``argparse``. If the original client/controller could use ``argparse`` in a better way, improve it.
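A hypothetical controller following these conventions might build its argument parser roughly as follows: ::

    import argparse

    def get_argparser():
        parser = argparse.ArgumentParser(description="hello controller")
        parser.add_argument("-d", "--device", default=None,
                            help="device serial number or /dev entry to attach to")
        parser.add_argument("--simulation", action="store_true",
                            help="do not access hardware; log actions instead")
        return parser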
|
||||
|
||||
Style
|
||||
|
|
|
@ -30,12 +30,12 @@ The routing table
|
|||
|
||||
The routing table defines, for each destination, the list of hops ("route") that must be taken from the root in order to reach it.
|
||||
|
||||
It is stored in a binary format that can be generated and manipulated with the utility :mod:`~artiq.frontend.artiq_route`, see :ref:`drtio-routing`. The binary file is programmed into the flash storage of the core device under the ``routing_table`` key. It is automatically distributed to downstream devices when the connections are established. Modifying the routing table requires rebooting the core device for the new table to be taken into account.
|
||||
It is stored in a binary format that can be generated and manipulated with the :ref:`artiq_route utility <routing-table-tool>`, see :ref:`drtio-routing`. The binary file is programmed into the flash storage of the core device under the ``routing_table`` key. It is automatically distributed to downstream devices when the connections are established. Modifying the routing table requires rebooting the core device for the new table to be taken into account.
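For illustration, a table for a master with two satellites chained behind it might be generated and programmed along these lines (destination numbers and file name are examples): ::

    $ artiq_route rt.bin init
    $ artiq_route rt.bin set 0 0
    $ artiq_route rt.bin set 1 1 0
    $ artiq_route rt.bin set 2 1 1
    $ artiq_route rt.bin show
    $ artiq_coremgmt config write -f routing_table rt.bin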
|
||||
|
||||
Internal details
|
||||
----------------
|
||||
|
||||
Bits 16-24 of the RTIO channel number (assigned to a respective device in the initial :ref:`system description JSON <system-description>`, and specified again for use of the ARTIQ front-end in the device database) define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
|
||||
Bits 16-24 of the RTIO channel number (assigned to a respective device in the initial system description JSON, and specified again for use of the ARTIQ front-end in the device database) define the destination. Bits 0-15 of the RTIO channel number select the channel within the destination.
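For example, interpreting a purely illustrative channel number from a device database entry: ::

    channel = 0x010004                 # as listed in the device database
    destination = channel >> 16        # -> 1, the DRTIO destination
    local_channel = channel & 0xffff   # -> 4, the channel within that destination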
|
||||
|
||||
Real-time and auxiliary packets
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
|
@ -29,7 +29,7 @@ This is stored in a Python dictionary whose keys are the device names, which the
|
|||
"arguments": {"channel": 19}
|
||||
},
|
||||
|
||||
Note that the key (the name of the device) is ``led`` and the value is itself a Python dictionary. Names will later be used to gain access to a device through methods such as ``self.setattr_device("led")``. While in this case ``led`` can be replaced with another name, provided it is used consistently, some names (in particular, ``core``) are used internally by ARTIQ and will cause problems if changed. It is often more convenient to use aliases for renaming purposes, see below.
|
||||
Note that the key (the name of the device) is ``led`` and the value is itself a Python dictionary. Names will later be used to gain access to a device through methods such as ``self.setattr_device("led")``. While in this case ``led`` can be replaced with another name, provided it is used consistently, some names (e.g. in particular ``core``) are used internally by ARTIQ and will cause problems if changed. It is often more convenient to use aliases for renaming purposes, see below.
|
||||
|
||||
.. note::
|
||||
The device database is generated and stored in the memory of the master when the master is first started. Changes to the ``device_db.py`` file will not immediately affect a running master. In order to update the device database, right-click in the Explorer window and select 'Scan device database', or run the command ``artiq_client scan-devices``.
|
||||
|
@ -37,13 +37,13 @@ Note that the key (the name of the device) is ``led`` and the value is itself a
|
|||
.. warning::
|
||||
It is important to understand that the device database does not *set* your system configuration, only *describe* it. If you change the devices available to your system, it is usually necessary to edit the device database, but editing the database will not change what devices are available to your system.
|
||||
|
||||
Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, real-time, e.g. your Sinara hardware) must be connected to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.
|
||||
Remote (normally, non-realtime) devices must have accessible, suitable controllers and drivers; see :doc:`developing_a_ndsp` for more information, including how to add entries for new remote devices to your device database. Local devices (normally, realtime, e.g. your Sinara hardware) must be factually attached to your system, and more importantly, your gateware and firmware must have been compiled to account for them, and to expect them at those ports.
|
||||
|
||||
While controllers can be added to and removed from your device database on an *ad hoc* basis, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will be provided with new binaries and necessary assistance.) See :doc:`building_developing`.
|
||||
While controllers can be added and removed to your device database relatively easily, in order to make new real-time hardware accessible, it is generally also necessary to recompile and reflash your gateware and firmware. (If you purchase your hardware from M-Labs, you will normally be provided with new binaries and necessary assistance.)
|
||||
|
||||
Adding or removing new real-time hardware is a difference in *system configuration,* which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON usually called the :ref:`system description file<system-description>`. The device database generally provides ARTIQ with the information that can change from one run or installation to the next, e.g., device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines that information which is *not* permitted to change, e.g., what device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.
|
||||
Adding or removing new real-time hardware is a difference in *system configuration,* which must be specified at compilation time of gateware and firmware. For Kasli and Kasli-SoC, this is managed in the form of a JSON usually called the *system description* file. The device database generally provides that information to ARTIQ which can change from instance to instance ARTIQ is run, e.g., device names and aliases, network addresses, clock frequencies, and so on. The system configuration defines that information which is *not* permitted to change, e.g., what device is associated with which EEM port or RTIO channels. Insofar as data is duplicated between the two, the device database is obliged to agree with the system description, not the other way around.
|
||||
|
||||
If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add controllers, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the :mod:`~artiq.frontend.artiq_ddb_template` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly. See also the corresponding entry in :ref:`Utilities <ddb-template-tool>`.
|
||||
If you obtain your hardware from M-Labs, you will always be provided with a ``device_db.py`` to match your system configuration, which you can edit as necessary to add remote devices, aliases, and so on. In the relatively unlikely case that you are writing a device database from scratch, the ``artiq_ddb_template`` utility can be used to generate a template device database directly from the JSON system description used to compile your gateware and firmware. This is the easiest way to ensure that details such as the allocation of RTIO channel numbers will be represented in the device database correctly. See also the corresponding entry in :ref:`Utilities <ddb-template-tool>`.
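For example, assuming your system description is named ``my_system.json``: ::

    $ artiq_ddb_template my_system.json -o device_db.py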
|
||||
|
||||
Local devices
|
||||
^^^^^^^^^^^^^
|
||||
|
@ -52,16 +52,14 @@ Local device entries are dictionaries which contain a ``type`` field set to ``lo
|
|||
|
||||
The fields ``module`` and ``class`` determine the location of the Python class of the driver. The ``arguments`` field is another (possibly empty) dictionary that contains arguments to pass to the device driver constructor. ``arguments`` is often used to specify the RTIO channel number of a peripheral, which must match the channel number in gateware.
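For instance, a single TTL output channel might be declared like this; the name ``ttl0`` and the channel number are examples, and the channel must match the gateware: ::

    device_db["ttl0"] = {
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 4}
    }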
|
||||
|
||||
On Kasli and Kasli-SoC, the allocation of RTIO channels to EEM ports is done automatically when the gateware is compiled, and while conceptually simple (channels are assigned one after the other, from zero upwards, for each device entry in the system description file) it is not entirely straightforward (different devices require different numbers of RTIO channels). Again, the easiest way to handle this when writing a new device database is automatically, using :mod:`~artiq.frontend.artiq_ddb_template`.
|
||||
|
||||
.. _environment-ctlrs:
|
||||
On Kasli and Kasli-SoC, the allocation of RTIO channels to EEM ports is done automatically when the gateware is compiled, and while conceptually simple (channels are assigned one after the other, from zero upwards, for each device entry in the system description file) it is not entirely straightforward (different devices require different numbers of RTIO channels). Again, the easiest way to handle this when writing a new device database is automatically, using ``artiq_ddb_template``.
|
||||
|
||||
Controllers
|
||||
^^^^^^^^^^^
|
||||
|
||||
Controller entries are dictionaries which contain a ``type`` field set to ``controller``. When an experiment requests such a device, a RPC client (see ``sipyco.pc_rpc``) is created and connected to the appropriate controller. Controller entries are also used by controller managers to determine what controllers to run. For an example, see :ref:`the NDSP development page <ndsp-integration>`.
|
||||
|
||||
The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matching the ``port`` field) and an appropriate bind address for the controller's listening socket.
|
||||
The ``host`` and ``port`` fields configure the TCP connection. The ``target`` field contains the name of the RPC target to use (you may use ``sipyco_rpctool`` on a controller to list its targets). Controller managers run the ``command`` field in a shell to launch the controller, after replacing ``{port}`` and ``{bind}`` by respectively the TCP port the controller should listen to (matches the ``port`` field) and an appropriate bind address for the controller's listening socket.
|
||||
|
||||
An optional ``best_effort`` boolean field determines whether to use ``sipyco.pc_rpc.Client`` or ``sipyco.pc_rpc.BestEffortClient``. ``BestEffortClient`` is very similar to ``Client``, but suppresses network errors and automatically retries connections in the background. If no ``best_effort`` field is present, ``Client`` is used by default.
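A controller entry, using the ``hello`` example from :doc:`developing_a_ndsp` (host, port, and command line are illustrative), might look like: ::

    device_db["hello"] = {
        "type": "controller",
        "host": "::1",
        "port": 3249,
        "target": "hello",
        "command": "aqctl_hello -p {port} --bind {bind}",
        "best_effort": True
    }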
|
||||
|
||||
|
@ -73,20 +71,18 @@ If an entry is a string, that string is used as a key for another lookup in the
|
|||
Arguments
|
||||
---------
|
||||
|
||||
Arguments are values that parameterize the behavior of an experiment. ARTIQ supports both interactive arguments, requested and supplied at some point while an experiment is running, and submission-time arguments, requested in the build phase and set before the experiment is executed. For more on arguments in practice, see the tutorial section :ref:`mgmt-arguments`. For supported argument types, see the reference for :mod:`artiq.language.environment`; for specific methods, see the reference for :class:`~artiq.language.environment.HasEnvironment`.
|
||||
|
||||
.. _environment-datasets:
|
||||
Arguments are values that parameterize the behavior of an experiment. ARTIQ supports both interactive arguments, requested and supplied at some point while an experiment is running, and submission-time arguments, requested in the build phase and set before the experiment is executed. For more on arguments in practice, see the tutorial section :ref:`mgmt-arguments`. For supported argument types and specific reference, see the relevant sections of :doc:`the core language reference <core_language_reference>`, as well as the example experiment ``examples/no_hardware/interactive.py``.
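As a small illustration of submission-time arguments (the argument names are arbitrary): ::

    from artiq.experiment import *

    class ArgumentsDemo(EnvExperiment):
        def build(self):
            # requested in the build phase, set when the experiment is submitted
            self.setattr_argument("count", NumberValue(100, step=1))
            self.setattr_argument("plot", BooleanValue(True))

        def run(self):
            for i in range(int(self.count)):
                pass  # the actual measurement would go here
            if self.plot:
                print("plotting", int(self.count), "points")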
|
||||
|
||||
Datasets
|
||||
--------
|
||||
|
||||
Datasets are values that are read and written by experiments kept in a key-value store. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`data interfaces tutorial <mgmt-datasets>`.
|
||||
Datasets are values that are read and written by experiments kept in a key-value store. They exist to facilitate the exchange and preservation of information between experiments, from experiments to the management system, and from experiments to long-term storage. Datasets may be either scalars (``bool``, ``int``, ``float``, or NumPy scalar) or NumPy arrays. For basic use of datasets, see the :ref:`management system tutorial <getting-started-datasets>`.
|
||||
|
||||
A dataset may be broadcast (``broadcast=True``), that is, distributed to all clients connected to the master. This is useful e.g. for the ARTIQ dashboard to plot results while an experiment is in progress and give rapid feedback to the user. Broadcasted datasets live in a global key-value store owned by the master. Care should be taken that experiments use distinctive real-time result names in order to avoid conflicts. Broadcasted datasets may be used to communicate values across experiments; for instance, a periodic calibration experiment might update a dataset read by payload experiments.
|
||||
|
||||
Broadcasted datasets are replaced when a new dataset with the same key (name) is produced. By default, they are erased when the master halts. Broadcasted datasets may be made persistent (``persist=True``, which also implies ``broadcast=True``), in which case the master stores them in an LMDB database typically called ``dataset_db.mdb``, where they are saved across master restarts.
|
||||
|
||||
By default, datasets are archived in the ``results`` HDF5 output for that run, although this can be opted against (``archive=False``). They can be viewed and analyzed with the ARTIQ browser, or with an HDF5 viewer of your choice.
|
||||
By default, datasets are archived in the HDF5 output for that run, although this can be opted against (``archive=False``).
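A brief sketch of these options in use (the key names are arbitrary): ::

    from artiq.experiment import *

    class Calibration(EnvExperiment):
        def build(self):
            pass

        def run(self):
            # broadcast so dashboards and other experiments see it live,
            # persistent so it survives master restarts
            self.set_dataset("transition_frequency", 401.2e12,
                             broadcast=True, persist=True)
            # local to this run: archived in the HDF5 output only
            self.set_dataset("raw_counts", [10, 12, 9])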
|
||||
|
||||
Datasets and units
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
|
|
@ -1,272 +0,0 @@
|
|||
Extending RTIO
|
||||
==============
|
||||
|
||||
.. warning::
|
||||
|
||||
This page is for users who want to extend or modify ARTIQ RTIO. Broadly speaking, one of the core intentions of ARTIQ is to provide a high-level, easy-to-use interface for experimentation, while the infrastructure handles the technological challenges of the high-resolution, timing-critical operations required. Rather than worrying about the details of timing in hardware, users can outline tasks quickly and efficiently in ARTIQ Python, and trust the system to carry out those tasks in real time. It is not normally, or indeed ever, necessary to make modifications on a gateware level.
|
||||
|
||||
However, ARTIQ is an open-source project, and welcomes interest and agency from its users, as well as from experienced developers. This page is intended to serve firstly as a broad introduction to the internal structure of ARTIQ, and secondly as a tutorial for how RTIO extensions in ARTIQ can be made. Experience with FPGAs or hardware description languages is not strictly necessary, but additional research on the topic will likely be required to make serious modifications of your own.
|
||||
|
||||
For instructions on setting up the ARTIQ development environment and on building gateware and firmware binaries, first see :doc:`building_developing` in the main part of the manual.
|
||||
|
||||
Introduction to the ARTIQ internal stack
|
||||
----------------------------------------
|
||||
|
||||
.. tikz::
|
||||
:align: center
|
||||
:libs: arrows.meta
|
||||
:xscale: 70
|
||||
|
||||
\definecolor{primary}{HTML}{0d3547} % ARTIQ blue
|
||||
|
||||
\pgfdeclarelayer{bg} % declare background layer
|
||||
\pgfsetlayers{bg,main} % set layer order
|
||||
|
||||
\node[draw, dotted, thick] (frontend) at (0, 6) {Host machine: Compiler, master, GUI...};
|
||||
|
||||
\node[draw=primary, fill=white] (software) at (0, 5) {Software: \it{Kernel code}};
|
||||
\node[draw=blue, fill=white, thick] (firmware) at (0, 4) {Firmware: \it{ARTIQ runtime}};
|
||||
\node[draw=blue, fill=white, thick] (gateware) at (0, 3) {Gateware: \it{Migen and Misoc}};
|
||||
\node[draw=primary, fill=white] (hardware) at (0, 2) {Hardware: \it{Sinara ecosystem}};
|
||||
|
||||
\begin{pgfonlayer}{bg}
|
||||
\draw[primary, -Stealth, dotted, thick] (frontend.south) to [out=180, in=180] (firmware.west);
|
||||
\draw[primary, -Stealth, dotted, thick] (frontend) to (software);
|
||||
\end{pgfonlayer}
|
||||
\draw[primary, -Stealth] (firmware) to (software);
|
||||
\draw[primary, -Stealth] (gateware) to (firmware);
|
||||
\draw[primary, -Stealth] (hardware) to (gateware);
|
||||
|
||||
Like any other modern piece of software, kernel code running on an ARTIQ core device rests upon a layered infrastructure, starting with the hardware: the physical carrier board and its peripherals. Generally, though not exclusively, this is the `Sinara device family <https://m-labs.hk/experiment-control/sinara-core/>`_, which is designed to work with ARTIQ. Other carrier boards, such as the Xilinx KC705 and ZC706, are also supported.
|
||||
|
||||
All of the ARTIQ core device carrier boards necessarily center around a physical field-programmable gate array, or FPGA. If you have never worked with FPGAs before, it is easiest to understand them as 'rearrangeable' circuits. Ideally, they are capable of approaching the tremendous speed and timing precision advantages of custom-designed, application-specific hardware, while still being reprogrammable, allowing development and revision to continue after manufacturing.
|
||||
|
||||
The 'configuration' of an FPGA, the circuit design it is programmed with, is its *gateware*. Gateware is not software, and is not written in programming languages. Rather, it is written in a *hardware description language,* of which the most common are VHDL and Verilog. The ARTIQ codebase uses a set of tools called `Migen <https://m-labs.hk/gateware/migen/>`_ to write hardware description in a subset of Python, which is later translated to Verilog behind the scenes. This has the advantage of preserving much of the flexibility and convenience of Python as a programming language, but shouldn't be mistaken for it *being* Python, or functioning like Python. (MiSoC, built on Migen, is used to implement softcore -- i.e. 'programmed', on-FPGA, not hardwired -- CPUs on Kasli and KC705. Zynq devices contain 'hardcore' ARM CPUs already and correspondingly make relatively less intensive use of MiSoC.)
|
||||
|
||||
The low-level software that runs directly on the core device's CPU, softcore or hardcore, is its *firmware.* This is the 'operating system' of the core device. The firmware is tasked, among other things, with handling the low-level communication between the core device and the host machine, as well as between the core devices in a DRTIO setting. It is written in bare-metal `Rust <https://www.rust-lang.org/>`__. There are currently two active versions of the ARTIQ firmware (the version used for ARTIQ-Zynq, NAR3, is more modern than that used on Kasli and KC705, and will likely eventually replace it) but they are functionally equivalent except for internal details.
|
||||
|
||||
Experiment kernels themselves -- ARTIQ Python, processed by the ARTIQ compiler and loaded from the host machine -- rest on top of and are framed and supported by the firmware, in the same way that application software on your PC rests on top of an operating system. All together, software kernels communicate with the firmware to set parameters for the gateware, which passes signals directly to the hardware.
|
||||
|
||||
These frameworks are built to be self-contained and extensible. To make additions to the gateware and software, for example, we do not need to make changes to the firmware; we can interact purely with the interfaces provided on either side.
|
||||
|
||||
Extending gateware logic
|
||||
------------------------
|
||||
|
||||
As briefly explained in :doc:`rtio`, when we talk about RTIO infrastructure, we are primarily speaking of structures implemented in gateware. The FIFO banks which hold scheduled output events or recorded input events, for example, are in gateware. Sequence errors, overflow exceptions, event spreading, and so on, happen in the gateware. In some cases, you may want to make relatively simple, parametric changes to existing RTIO, like changing the sizes of certain queues. In this case, it can be as simple as tracking down the part of the code where this parameter is set, changing it, and :doc:`rebuilding the binaries <building_developing>`.
|
||||
|
||||
.. warning::
|
||||
Note that FPGA resources are finite, and buffer sizes, lane counts, etc., are generally chosen to maximize available resources already, with different values depending on the core device in use. Depending on the peripherals you include (some are more resource-intensive than others) blanket increases will likely quickly outstrip the capacity of your FPGA and fail to build. Increasing the depth of a particular channel you know to be heavily used is more likely to succeed; the easiest way to find out is to attempt the build and observe what results.
|
||||
|
||||
Gateware in ARTIQ is housed in ``artiq/gateware`` on the main ARTIQ repository and (for Zynq-specific additions) in ``artiq-zynq/src/gateware`` on ARTIQ-Zynq. The starting point for figuring out your changes will often be the *target file*, which is core device-specific and which you may recognize as the primary module called when building gateware. Depending on your core device, simply track down the file named after it, as in ``kasli.py``, ``kasli_soc.py``, and so on. Note that the Kasli and Kasli-SoC targets are designed to take JSON description files as input, whereas their KC705 and ZC706 equivalents work with hardcoded variants instead.
|
||||
|
||||
To change parameters related to particular peripherals, see also the files ``eem.py`` and ``eem_7series.py``, which describe the core device's interface with other EEM cards in Migen terms, and contain ``add_std`` methods that in turn reference specific gateware modules and assign RTIO channels.
|
||||
|
||||
Adding a module to gateware
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To demonstrate how RTIO can be *extended,* on the other hand, we will develop a new interface entirely for the control of certain hardware -- in our case, for a simple example, the core device LEDs. If you haven't already, follow the instructions in :doc:`building_developing` to clone the ARTIQ repository and set up a development environment. The first part of our addition will be a module added to ``gateware/rtio/phy`` (PHY, for interaction with the physical layer), written in the Migen Fragmented Hardware Description Language (FHDL).
|
||||
|
||||
.. seealso::
|
||||
To find reference material for FHDL and the Migen constructs we will use, see the Migen manual, in particular the page `The FHDL domain-specific language <https://m-labs.hk/migen/manual/fhdl.html>`_.
|
||||
|
||||
.. warning::
|
||||
If you have never worked with a hardware description language before, it is important to understand that hardware description is fundamentally different to programming in a language like Python or Rust. At its most basic, a program is a set of instructions: a step-by-step guide to a task you want to see performed, where each step is written, and executed, principally in sequence. In contrast, hardware description is *a description*. It specifies the static state of a piece of hardware. There are no 'steps', and no chronological execution, only stated facts about how the system should be built.
|
||||
|
||||
The examples we will handle in this tutorial are simple, and you will likely find Migen much more readable than traditional languages like VHDL and Verilog, but keep in mind that we are describing how a system connects and interlocks its signals, *not* operations it should perform.
|
||||
|
||||
Normally, the PHY module used for LEDs is the ``Output`` of ``ttl_simple.py``. Take a look at its source code. Note that values like ``override`` and ``probes`` exist to support RTIO MonInj -- ``probes`` for monitoring, ``override`` for injection -- and are not involved with normal control of the output. Note also that ``pad``, among FPGA engineers, refers to an input/output pad, i.e. a physical connection through which signals are sent. ``pad_n`` is its negative pair, necessary only for certain kinds of TTLs and not applicable to LEDs.
|
||||
|
||||
Interface and signals
|
||||
"""""""""""""""""""""
|
||||
|
||||
To get started, create a new file in ``gateware/rtio/phy``. Call it ``linked_led.py``. In it, create a class ``Output``, which will inherit from Migen's ``Module``, and give it an ``__init__`` method, which takes two pads as input: ::
|
||||
|
||||
from migen import *
|
||||
|
||||
class Output(Module):
|
||||
|
||||
def __init__(self, pad0, pad1):
|
||||
|
||||
``pad0`` and ``pad1`` will represent output pads, in our case ultimately connecting to the board's user LEDs. On the other side, to receive output events from a RTIO FIFO queue, we will use an ``Interface`` provided by the ``rtlink`` module, also found in ``artiq/gateware``. Both output and input interfaces are available, and both can be combined into one link, but we are only handling output events. We use the ``data_width`` parameter to request an interface that is 2 bits wide: ::
|
||||
|
||||
from migen import *
|
||||
from artiq.gateware.rtio import rtlink
|
||||
|
||||
class Output(Module):
|
||||
|
||||
def __init__(self, pad0, pad1):
|
||||
self.rtlink = rtlink.Interface(rtlink.OInterface(2))
|
||||
|
||||
In our example, rather than controlling both LEDs manually using ``on`` and ``off``, which is the functionality ``ttl_simple.py`` provides, we will control one LED manually and have the gateware determine the value of the other based on the first. This same logic would be easy (in fact, much easier) to implement in ARTIQ Python; the advantage of placing it in gateware is that logic in gateware is *extremely fast,* in effect 'instant', i.e., completed within a single clock cycle. Rather than waiting for a CPU to process and respond to instructions, a response can happen at the speed of a dedicated logic circuit.
|
||||
|
||||
.. note::
|
||||
Naturally, the truth is more complicated, and depends heavily on how complex the logic in question is. An overlong chain of gateware logic will fail to settle within a single RTIO clock cycle, causing a wide array of potential problems that are difficult to diagnose and difficult to fix; the only solutions are to simplify the logic, deliberately split it across multiple clock cycles (correspondingly increasing latency for the operation), or to decrease the speed of the clock (increasing latency for *everything* the device does).
|
||||
|
||||
For now, it's enough to say that you are unlikely to encounter timing failures with the kind of simple logic demonstrated in this tutorial. Indeed, designing gateware logic to run in as few cycles as possible without 'failing timing' is an engineering discipline in itself, and much of what FPGA developers spend their time on.
|
||||
|
||||
In practice, of course, since ARTIQ explicitly allows scheduling simultaneous output events to different channels, there's still no reason to make gateware modifications to accomplish this. After all, leveraging the real-time capabilities of customized gateware without making it necessary to *write* it is much of the point of ARTIQ as a system. Only in more complex cases, such as directly binding inputs to outputs without feeding back through the CPU, might gateware-level additions become necessary.
|
||||
|
||||
For now, add two intermediate signals for our logic, instances of the Migen ``Signal`` construct: ::
|
||||
|
||||
def __init__(self, pad0, pad1):
|
||||
self.rtlink = rtlink.Interface(rtlink.OInterface(2))
|
||||
reg = Signal()
|
||||
pad0_o = Signal()
|
||||
|
||||
.. note::
|
||||
A gateware 'signal' is not a signal in the sense of being a piece of transmitted information. Rather, it represents a channel, which bits of information can be held in. To conceptualize a Migen ``Signal``, take it as a kind of register: a box that holds a certain number of bits, and can update those bits from an input, or broadcast them to an output connection. The number of bits is arbitrary, e.g., a ``Signal(2)`` will be two bits wide, but in our example we handle only single-bit registers.
|
||||
|
||||
These are our inputs, outputs, and intermediate signals. By convention, in Migen, these definitions are all made at the beginning of a module, and separated from the logic that interconnects them with a line containing the three symbols ``###``. See also ``ttl_simple.py`` and other modules.
|
||||
|
||||
Since hardware description is not linear or chronological, nothing conceptually prevents us from making these statements in any other order -- in fact, except for the practicalities of code execution, nothing particularly prevents us from defining the connections between the signals before we define the signals themselves -- but for readable and maintainable code, this format is vastly preferable.
|
||||
|
||||
Combinatorial and synchronous statements
|
||||
""""""""""""""""""""""""""""""""""""""""
|
||||
|
||||
After the ``###`` separator, we will set the connecting logic. A Migen ``Module`` has several special attributes, to which different logical statements can be assigned. We will be using ``self.sync``, for synchronous statements, and ``self.comb``, for combinatorial statements. If a statement is *synchronous*, it is only updated once per clock cycle, i.e. when the clock ticks. If a statement is *combinatorial*, it is updated whenever one of its inputs change, i.e. 'instantly'.
|
||||
|
||||
Add a synchronous block as follows: ::
|
||||
|
||||
self.sync.rio_phy += [
|
||||
If(self.rtlink.o.stb,
|
||||
pad0_o.eq(self.rtlink.o.data[0] ^ pad0_o),
|
||||
reg.eq(self.rtlink.o.data[1])
|
||||
)
|
||||
]
|
||||
|
||||
In other words, at every tick of the ``rio_phy`` clock, if the ``rtlink`` strobe signal (which is set to high when the data is valid, i.e., when an output event has just reached the PHY) is high, the ``pad0_o`` and ``reg`` registers are updated according to the input data on ``rtlink``.
|
||||
|
||||
.. note::
|
||||
Notice that, in a standard synchronous block, it makes no difference how or how many times the inputs to an ``.eq()`` statement change or fluctuate. The output is updated *exactly once* per cycle, at the tick, according to the instantaneous state of the inputs in that moment. In between ticks and during the clock cycle, it remains stable at the last updated level, no matter the state of the inputs. This stability is vital for the broader functioning of synchronous circuits, even though 'waiting for the tick' adds latency to the update.
|
||||
|
||||
``reg`` is simply set equal to the incoming bit. ``pad0_o``, on the other hand, flips its old value if the input is ``1``, and keeps it if the input is ``0``. Note that ``^``, which you may know as the Python notation for a bitwise XOR operation, here simply represents a XOR gate. In summary, we can flip the value of ``pad0`` with the first bit of the interface, and set the value of ``reg`` with the other.
|
||||
|
||||
Add the combinatorial block as follows: ::
|
||||
|
||||
self.comb += [
|
||||
pad0.eq(pad0_o),
|
||||
If(reg,
|
||||
pad1.eq(pad0_o)
|
||||
)
|
||||
]
|
||||
|
||||
The output ``pad0`` is continuously connected to the value of the ``pad0_o`` register. The output of ``pad1`` is set equal to that of ``pad0``, but only if the ``reg`` register is high, or ``1``.
|
||||
|
||||
The module is now capable of accepting RTIO output events and applying them to the hardware outputs. What we can't yet do is generate these output events in an ARTIQ kernel. To do that, we need to add a core device driver.
|
||||
|
||||
Adding a core device driver
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If you have been writing ARTIQ experiments for any length of time, you will already be familiar with the core device drivers. Their reference is kept in this manual on the page :doc:`core_drivers_reference`; their methods are commonly used to manipulate the core device and its close peripherals. Source code for these drivers is kept in the directory ``artiq/coredevice``. Create a new file, again called ``linked_led.py``, in this directory.
|
||||
|
||||
The drivers are software, not gateware, and they are written in regular ARTIQ Python. They use methods given in ``coredevice/rtio.py`` to queue input and output events to RTIO channels. We will start with its ``__init__``, the method ``get_rtio_channels`` (which is formulaic, and exists only to be used by :mod:`~artiq.frontend.artiq_rtiomap`), and an output set method ``set_o``: ::
|
||||
|
||||
from artiq.language.core import *
|
||||
from artiq.language.types import *
|
||||
from artiq.coredevice.rtio import rtio_output
|
||||
|
||||
class LinkedLED:
|
||||
|
||||
def __init__(self, dmgr, channel, core_device="core"):
|
||||
self.core = dmgr.get(core_device)
|
||||
self.channel = channel
|
||||
self.target_o = channel << 8
|
||||
|
||||
@staticmethod
|
||||
def get_rtio_channels(channel, **kwargs):
|
||||
return [(channel, None)]
|
||||
|
||||
@kernel
|
||||
def set_o(self, o):
|
||||
rtio_output(self.target_o, o)
|
||||
|
||||
.. note::
|
||||
|
||||
``rtio_output()`` is one of four methods given in ``coredevice/rtio.py``, which provides an interface with lower layers of the system. You can think of it ultimately as representing the other side of the ``Interface`` we requested in our Migen module. Notably, in between the two, events pass through the SED and its FIFO lanes, where they are held until the exact real-time moment the events were scheduled for, as originally described in :doc:`rtio`.
|
||||
|
||||
Now we can write the kernel API. In the gateware, bit 0 flips the value of the first pad: ::
|
||||
|
||||
@kernel
|
||||
def flip_led(self):
|
||||
self.set_o(0b01)
|
||||
|
||||
and bit 1 connects the second pad to the first: ::
|
||||
|
||||
@kernel
|
||||
def link_up(self):
|
||||
self.set_o(0b10)
|
||||
|
||||
There's no reason we can't do both at the same time: ::
|
||||
|
||||
@kernel
|
||||
def flip_together(self):
|
||||
self.set_o(0b11)
|
||||
|
||||
Target and device database
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Our ``linked_led`` PHY module exists, but in order for it to be generated as part of a set of ARTIQ binaries, we need to add it to one of the target files. Find the target file for your core device, as described above. Each target file is structured differently; track down the part of the file where channels and PHY modules are assigned to the user LEDs. Depending on your core device, there may be two or more LEDs that are available. Look for lines similar to: ::
|
||||
|
||||
for i in (0, 1):
|
||||
user_led = self.platform.request("user_led", i)
|
||||
phy = ttl_simple.Output(user_led)
|
||||
self.submodules += phy
|
||||
self.rtio_channels.append(rtio.Channel.from_phy(phy))
|
||||
|
||||
Edit the code so that, rather than assigning a separate PHY and channel to each LED, two of the LEDs are grouped together in ``linked_led``. You might use something like: ::
|
||||
|
||||
print("Linked LEDs at:", len(self.rtio_channels))
|
||||
phy = linked_led.Output(self.platform.request("user_led", 0), self.platform.request("user_led", 1))
|
||||
self.submodules += phy
|
||||
self.rtio_channels.append(rtio.Channel.from_phy(phy))
|
||||
|
||||
Save the target file, under a different name if you prefer. Follow the instructions in :doc:`building_developing` to build a set of binaries, being sure to use your edited target file for the gateware, and flash your core device, for simplicity preferably in a standalone configuration without peripherals.
|
||||
|
||||
Now, before you can access your new core device driver from a kernel, it must be added to your device database. Find your ``device_db.py``. Delete the entries dedicated to the user LEDs that you have repurposed; if you tried to control those LEDs using the standard TTL interfaces now, the corresponding gateware would be missing anyway. Add an entry with your new driver, as in: ::
|
||||
|
||||
device_db["leds"] = {
|
||||
"type": "local",
|
||||
"module": "artiq.coredevice.linked_led",
|
||||
"class": "LinkedLED",
|
||||
"arguments": {"channel": 0x000008}
|
||||
}
|
||||
|
||||
.. warning::
|
||||
Channel numbers are assigned sequentially each time ``rtio_channels.append()`` is called. Since we assigned the channel for our linked LEDs in the same location as the old user LEDs, the correct channel number is likely simply the one previously used in your device database for the first LED. In any other case, however, the ``print()`` statement we added to the target file should tell us the exact canonical channel. Search through the console logs produced when generating the gateware to find the line starting with ``Linked LEDs at:``.
|
||||
|
||||
Depending on how your device database was written, note that the channel numbers for other peripherals, if they are present, *will have changed*, and :mod:`~artiq.frontend.artiq_ddb_template` will not generate their numbers correctly unless it is edited to match the new assignments of the user LEDs. For a longer-term gateware change, especially the addition of a new EEM card, ``artiq/frontend/artiq_ddb_template.py`` and ``artiq/coredevice/coredevice_generic.schema`` should be edited accordingly, so that system descriptions and device databases can continue to be parsed and generated correctly.
|
||||
|
||||
Test experiments
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
Now the device ``leds`` can be called from your device database, and its corresponding driver accessed, just as with any other device. Try writing some miniature experiments, for instance ``flip.py``: ::
|
||||
|
||||
from artiq.experiment import *
|
||||
|
||||
class flip(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
self.setattr_device("leds")
|
||||
|
||||
@kernel
|
||||
def run(self):
|
||||
self.core.reset()
|
||||
self.leds.flip_led()
|
||||
|
||||
and ``linkup.py``: ::
|
||||
|
||||
from artiq.experiment import *
|
||||
|
||||
class sync(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
self.setattr_device("leds")
|
||||
|
||||
@kernel
|
||||
def run(self):
|
||||
self.core.reset()
|
||||
self.leds.link_up()
|
||||
|
||||
Run these and observe the results. Congratulations! You have successfully constructed an extension to the ARTIQ RTIO.
|
||||
|
||||
.. + 'Adding custom EEMs' and 'Merging support'
|
|
@ -1,179 +1,32 @@
|
|||
FAQ (How do I...)
|
||||
=================
|
||||
.. Copyright (C) 2014, 2015 Robert Jordens <jordens@gmail.com>
|
||||
|
||||
use this documentation?
|
||||
-----------------------
|
||||
FAQ
|
||||
===
|
||||
|
||||
The content of this manual is arranged in rough reading order. If you start at the beginning and make your way through section by section, you should form a pretty good idea of how ARTIQ works and how to use it. Otherwise:
|
||||
|
||||
**If you are just starting out,** and would like to get ARTIQ set up on your computer and your core device, start with :doc:`installing`, :doc:`flashing`, and :doc:`configuring`, in that order.
|
||||
|
||||
**If you have a working ARTIQ setup** (or someone else has set it up for you), start with the tutorials: read :doc:`rtio`, then progress to :doc:`getting_started_core`, :doc:`getting_started_mgmt`, and :doc:`using_data_interfaces`. If your system is in a DRTIO configuration, :doc:`DRTIO and subkernels <using_drtio_subkernels>` will also be helpful.
|
||||
|
||||
Pages like :doc:`management_system` and :doc:`core_device` describe **specific components of the ARTIQ ecosystem** in more detail. If you want to understand more about device and dataset databases, for example, read the :doc:`environment` page; if you want to understand the ARTIQ Python dialect and everything it does or does not support, read the :doc:`compiler` page.
|
||||
|
||||
Reference pages, like :doc:`main_frontend_tools` and :doc:`core_drivers_reference`, contain the detailed documentation of the individual methods and command-line tools ARTIQ provides. They are heavily interlinked throughout the rest of the documentation: whenever a method, tool, or exception is mentioned by name, like :class:`~artiq.frontend.artiq_run`, :meth:`~artiq.language.core.now_mu`, or :exc:`~artiq.coredevice.exceptions.RTIOUnderflow`, it can normally be clicked on to directly access the reference material. Notice also that the online version of this manual is searchable; see the 'Search docs' bar at left.
|
||||
|
||||
.. _build-documentation:
|
||||
|
||||
build this documentation?
|
||||
-------------------------
|
||||
|
||||
To generate this manual from source, you can use ``nix build`` directives, for example: ::
|
||||
|
||||
$ nix build git+https://github.com/m-labs/artiq.git\?ref=release-[number]#artiq-manual-html
|
||||
|
||||
Substitute ``artiq-manual-pdf`` to get the LaTeX PDF version. The results will be in ``result``.
|
||||
|
||||
The manual is written in `reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_; you can find the source files in the ARTIQ repository under ``doc/manual``. If you spot a mistake, a typo, or something that's out of date or missing -- in particular, if you want to add something to this FAQ -- feel free to clone the repository, edit the source RST files, and make a pull request with your version of an improvement. (If you're not a fan of or not familiar with command-line Git, both GitHub and Gitea support making edits and pull requests directly in the web interface; tutorial materials are easy to find online.) The second best thing is to open an issue to make M-Labs aware of the problem.
|
||||
|
||||
.. _faq-networking:
|
||||
|
||||
troubleshoot networking problems?
|
||||
---------------------------------
|
||||
|
||||
Diagnosis aids:
|
||||
|
||||
- Can you ``ping`` the device?
|
||||
- Is the Ethernet LED on?
|
||||
- Is the ERROR LED on?
|
||||
- Is there anything unusual recorded in :ref:`the UART log <connecting-UART>`?
|
||||
|
||||
Some things to consider:
|
||||
|
||||
- Is the ``core_addr`` field of your ``device_db.py`` set correctly?
|
||||
- Did your device flash and boot successfully? Were the binaries generated for the correct board hardware version?
|
||||
- Are your core device's IP address and networking configurations definitely set correctly? Check the UART log to confirm, and talk to your network administrator about what the correct choices are.
|
||||
- Is your core device configured for an external reference clock? If so, it cannot function correctly without one. Is the external reference clock plugged in?
|
||||
- Are Ethernet and (on Kasli only) SFP0 plugged in all the way? Are they working? Try different cables and SFP adapters; M-Labs tests with CAT6 cables, but lower categories should be supported too.
|
||||
- Are your PC and your crate in the same subnet?
|
||||
- Is some other device in your network already using the configured IP address? Turn off the core device and try pinging the configured IP address; if it responds, you have a culprit. One of the two will need a different networking configuration.
|
||||
- Are there restrictions or issues in your router or subnet that are preventing the core device from connecting? It may help to try connecting the core device to your PC directly.
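
For reference on the ``core_addr`` point above, the core device entry of a generated ``device_db.py`` typically looks something like the following sketch (the address shown is purely illustrative and must match your actual network configuration): ::

    core_addr = "192.168.1.75"      # must agree with the address configured on the device

    device_db = {
        "core": {
            "type": "local",
            "module": "artiq.coredevice.core",
            "class": "Core",
            "arguments": {"host": core_addr, "ref_period": 1e-9}
        },
        # ... further peripherals ...
    }
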
fix 'Mismatch between gateware and software versions'?
|
||||
------------------------------------------------------
|
||||
|
||||
Either reflash your core device with a newer version of ARTIQ (see :doc:`flashing`) or update your software (see :ref:`installing-upgrading`), depending on which is out of date.
|
||||
|
||||
.. note::
|
||||
You can check the specific versions you are using at any time by comparing the gateware version given in the core startup log and the output given by adding ``--version`` to any of the standard ARTIQ front-end commands. This is especially useful when e.g. seeking help in the forum or at the helpdesk, where your running ARTIQ version is often crucial information to diagnose a problem.
|
||||
|
||||
Minor version mismatches are common, even in stable ARTIQ versions, but should not cause any issues. The ARTIQ release system ensures breaking changes are strictly limited to new release versions, or to the beta branch (which explicitly makes no promises of stability.) Updates that *are* applied to the stable version are usually bug fixes, documentation improvements, or other quality-of-life changes. As long as gateware and software are using the same stable release version of ARTIQ, even if there is a minor mismatch, no warning will be displayed.
|
||||
|
||||
change configuration settings of satellite devices?
|
||||
---------------------------------------------------
|
||||
|
||||
Currently, it is not possible to reach satellites through ``artiq_coremgmt config``, although this is being worked on. On Kasli, use :class:`~artiq.frontend.artiq_mkfs` and :class:`~artiq.frontend.artiq_flash`; on Kasli-SoC, preload the SD card with a ``config.txt``, formatted as a list of ``key=value`` pairs, one per line.
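
As an illustration of the ``key=value`` format only, a ``config.txt`` might consist of a single line such as ``rtio_clock=ext0_bypass``; which keys your satellite actually needs depends on your system.
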
Don't worry about individually flashing idle or startup kernels. If your idle or startup kernel contains subkernels, it will automatically compile as a ``.tar``, which you only need to flash to the master.
|
||||
|
||||
fix unreliable DRTIO master-satellite links?
|
||||
--------------------------------------------
|
||||
|
||||
Inconsistent DRTIO connections, especially with odd or absent errors in the core logs, are often a symptom of overheating either in the master or satellite boards. Check the core device fans for failure or defects. Improve air circulation around the crate or attach additional fans to see if that improves or resolves the issue. In the long term, fan trays to be rack-mounted together with the crate are a clean solution to these kinds of problems.
|
||||
|
||||
add or remove EEM peripherals or DRTIO satellites?
|
||||
--------------------------------------------------
|
||||
|
||||
Adding new real-time hardware to an ARTIQ system almost always means reflashing the core device; if you are adding new satellite core devices, they will have to be flashed as well. If you have obtained your upgrades from M-Labs or QUARTIQ, updated binaries and reflashing support will normally be offered to you directly. In any other case, track down your JSON system description file(s), bring them up to date with the updated state of your system, and see :doc:`building_developing`.
|
||||
|
||||
Once you have an updated set of binaries, reflash the core device, following the instructions in :doc:`flashing`. Be sure to update your device database before starting experimentation; run :mod:`~artiq.frontend.artiq_ddb_template` on your system description(s) to update the local devices, and copy over any aliases or entries for NDSP controllers you may have been using. Note that the device database is a Python file, and the generated file of local devices can also simply be imported into the final version, allowing for dynamic modifications, especially in complex systems that may have multiple device databases in use.
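
As a sketch with illustrative file names, regenerating the local device entries might look like running ``artiq_ddb_template my_system.json -o device_db.py``, after which your aliases and NDSP controller entries can be merged back in, or the generated file kept separate and imported from your main device database.
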
see command-line help?
|
||||
----------------------
|
||||
|
||||
Like most terminal utilities, ARTIQ commands, tools, and applets print their help messages directly to the terminal and exit when run with the flag ``--help`` or ``-h``: ::
|
||||
|
||||
$ artiq_run -h
|
||||
|
||||
This is the simplest and most direct way of accessing the same usage and reference material that is replicated in this manual on the pages :doc:`main_frontend_tools` and :doc:`utilities`.
|
||||
|
||||
.. _faq-find-examples:
|
||||
How do I ...
|
||||
------------
|
||||
|
||||
find ARTIQ examples?
|
||||
--------------------
|
||||
|
||||
The official examples are stored in the ``examples`` folder of the ARTIQ package. You can find the location of the ARTIQ package on your machine with: ::
|
||||
The examples are installed in the ``examples`` folder of the ARTIQ package. You can find where the ARTIQ package is installed on your machine with: ::
|
||||
|
||||
python3 -c "import artiq; print(artiq.__path__[0])"
|
||||
|
||||
Copy the ``examples`` folder from that path into your home or user directory, and start experimenting! (Note that some examples have dependencies not included with a standard ARTIQ install, like matplotlib and numba. To run those examples properly, make sure those modules are accessible.)
|
||||
Copy the ``examples`` folder from that path into your home/user directory, and start experimenting!
|
||||
|
||||
If you have progressed past this level and would like to see more in-depth code or real-life examples of how other groups have handled running experiments with ARTIQ, see the "Community code" directory on the M-labs `resources page <https://m-labs.hk/experiment-control/resources/>`_.
|
||||
prevent my first RTIO command from causing an underflow?
|
||||
--------------------------------------------------------
|
||||
|
||||
fix ``failed to connect to moninj`` in the dashboard?
|
||||
-----------------------------------------------------
|
||||
The first RTIO event is programmed with a small timestamp above the value of the timecounter when the core device is reset. If the kernel needs more time than this timestamp to produce the event, an underflow will occur. You can prevent it by calling ``break_realtime`` just before programming the first event, or by adding a sufficient delay.
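
A minimal sketch of the ``break_realtime`` approach, assuming a TTL output named ``ttl0``: ::

    @kernel
    def run(self):
        self.core.reset()
        # ... potentially slow preparation, RPCs, computation ...
        self.core.break_realtime()   # move now_mu safely ahead of the wall clock
        self.ttl0.on()               # the first event no longer underflows
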
This and other similar messages almost always indicate that your device database lists controllers (for example, ``aqctl_moninj_proxy``) that either haven't been started or aren't reachable at the given host and port. See :ref:`mgmt-ctlmgr`, or simply run: ::
|
||||
|
||||
    $ artiq_ctlmgr
|
||||
|
||||
to let the controller manager start the necessary controllers automatically.
|
||||
|
||||
diagnose and fix sequence errors?
|
||||
---------------------------------
|
||||
|
||||
Go through your code, keeping manual track of SED lanes. See the following example: ::

    @kernel
    def run(self):
        self.core.reset()
        with parallel:
            self.ttl0.on()               # lane0
            self.ttl_sma.pulse(800*us)   # lane1(rising) lane1(falling)
            with sequential:
                self.ttl1.on()           # lane2
                self.ttl2.on()           # lane3
                self.ttl3.on()           # lane4
                self.ttl4.on()           # lane5
                delay(800*us)
                self.ttl1.off()          # lane5
                self.ttl2.off()          # lane6
                self.ttl3.off()          # lane7
                self.ttl4.off()          # lane0
                self.ttl0.off()          # lane1 -> clashes with the falling edge of ttl_sma,
                                         #          which is already at +800us

In most cases, as in this one, it's relatively easy to rearrange the generation of events so that they will be better spread out across SED lanes without sacrificing actual functionality. One possible solution for the above sequence looks like: ::

    @kernel
    def run(self):
        self.core.reset()
        self.ttl0.on()          # lane0
        self.ttl_sma.on()       # lane1
        self.ttl1.on()          # lane2
        self.ttl2.on()          # lane3
        self.ttl3.on()          # lane4
        self.ttl4.on()          # lane5
        delay(800*us)
        self.ttl1.off()         # lane5
        self.ttl2.off()         # lane6
        self.ttl3.off()         # lane7
        self.ttl4.off()         # lane0 (no clash: new timestamp is higher than last)
        self.ttl_sma.off()      # lane1
        self.ttl0.off()         # lane2

In this case, the :meth:`~artiq.coredevice.ttl.TTLInOut.pulse` is split up into its component :meth:`~artiq.coredevice.ttl.TTLInOut.on` and :meth:`~artiq.coredevice.ttl.TTLInOut.off` so that events can be generated more linearly. It can also be worth keeping in mind that delaying by even a single coarse RTIO cycle between events avoids switching SED lanes at all; in contexts where perfect simultaneity is not a priority, this is an easy way to avoid sequencing issues. See again :ref:`sequence-errors`.
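
As a sketch of the latter approach (the argument to ``delay_mu`` assumes the common 1 ns machine time unit and 125 MHz coarse RTIO clock; check the values for your own system): ::

    self.ttl1.off()
    delay_mu(8)          # one coarse RTIO cycle; the next event stays on the same SED lane
    self.ttl2.off()
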
understand applet commands?
|
||||
---------------------------
|
||||
|
||||
The 'Command' field contains the exact terminal command used to open and operate the applet. The default ``${artiq_applet}`` prefix simply translates to something to the effect of ``python -m artiq.applets.``, intended to be immediately followed by the applet module name. The options appended after the module name are the same as those used on the command line, and a list of them can be shown using the standard ``-h`` help flag: ::
|
||||
|
||||
$ python -m artiq.applets.plot_xy -h
|
||||
|
||||
in any terminal.
|
||||
If you are not resetting the core device, the time cursor stays where the previous experiment left it.
|
||||
|
||||
organize datasets in folders?
|
||||
-----------------------------
|
||||
|
||||
Use the dot (".") in dataset names to separate folders. The GUI will automatically create and delete folders in the dataset tree display.
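
For example, a dataset created as follows appears in the tree nested under ``scans`` and then ``frequency``: ::

    self.set_dataset("scans.frequency.center", 100e6, broadcast=True)
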
organize applets in groups?
|
||||
---------------------------
|
||||
|
||||
Create groups by left-clicking within the applet list and selecting 'New Group'. Move applets in and out of groups by dragging them with the mouse. To unselect an applet or a group, use CTRL+click.
|
||||
|
||||
organize experiment windows in the dashboard?
|
||||
---------------------------------------------
|
||||
|
||||
|
@ -184,38 +37,74 @@ Experiment windows can be organized by using the following hotkeys:
|
|||
|
||||
The windows will be organized in the order they were last interacted with.
|
||||
|
||||
fix errors when restarting the management system after a crash?
|
||||
write a generator feeding a kernel feeding an analyze function?
|
||||
---------------------------------------------------------------
|
||||
|
||||
On Windows in particular, abnormal shutdowns such as power outages or bluescreens can sometimes corrupt the organizational files used by the management system, resulting in errors to the tune of ``ValueError: source code string cannot contain null bytes`` when restarting. The easiest way to handle these problems is to delete the corrupted files and start from scratch.
|
||||
Like this::
|
||||
|
||||
If the master itself fails to start, it may be necessary to delete the dataset database or even ``last_rid.pyon`` to restart properly, but if the dashboard or browser fail, the problem is probably in the GUI configuration files, where the state of the GUI (arrangement of docks, applets, etc.) is kept. These files are backed up once whenever they are successfully loaded. Navigate to the user configuration directory (see :ref:`gui-config-files`) and look for a file suffixed ``.backup``. To restore the GUI, simply delete the corrupted configuration file and rename the backup to replace it.
|
||||
def run(self):
|
||||
self.parse(self.pipe(iter(range(10))))
|
||||
|
||||
create and use variable-length arrays in kernels?
|
||||
-------------------------------------------------
|
||||
def pipe(self, gen):
|
||||
for i in gen:
|
||||
r = self.do(i)
|
||||
yield r
|
||||
|
||||
You can't, in general; see the corresponding notes under :ref:`compiler-types`. ARTIQ kernels do not support heap allocation, meaning in particular that lists, arrays, and strings must be of constant size. One option is to preallocate everything, as mentioned on the Compiler page; another option is to chunk it and e.g. read 100 events per function call, push them upstream and retry until the gate time closes.
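
A sketch of the preallocation approach (``import numpy as np`` assumed at the top of the file; ``ttl0`` is a TTL input whose gate has already been opened, and ``gate_end_mu`` is the timestamp at which that gate closes): ::

    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl0")
        self.timestamps = [np.int64(0)] * 100    # fixed-size buffer; kernels cannot grow it

    @kernel
    def read_timestamps(self, gate_end_mu):
        n = 0
        while n < len(self.timestamps):
            t = self.ttl0.timestamp_mu(gate_end_mu)
            if t < 0:
                break                            # gate time closed, no further events
            self.timestamps[n] = t
            n += 1
        return n                                 # number of valid entries in the buffer
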
def parse(self, gen):
|
||||
for i in gen:
|
||||
pass
|
||||
|
||||
@kernel
|
||||
def do(self, i):
|
||||
return i
|
||||
|
||||
create and use variable lengths arrays in kernels?
|
||||
--------------------------------------------------
|
||||
|
||||
Don't. Preallocate everything. Or chunk it and e.g. read 100 events per
|
||||
function call, push them upstream and retry until the gate time closes.
|
||||
|
||||
execute multiple slow controller RPCs in parallel without losing time?
|
||||
----------------------------------------------------------------------
|
||||
|
||||
Use ``threading.Thread``: portable, fast, simple for one-shot calls.
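
A minimal sketch, assuming two hypothetical slow controller drivers ``laser`` and ``tec`` that were requested with ``setattr_device()`` in ``build()``: ::

    from threading import Thread

    def run(self):
        t1 = Thread(target=self.laser.set_power, args=(0.5,))
        t2 = Thread(target=self.tec.set_temperature, args=(20.0,))
        t1.start()
        t2.start()
        t1.join()    # both RPCs proceed concurrently; wait for both to finish
        t2.join()
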
write part of my experiment as a coroutine/asyncio task/generator?
|
||||
------------------------------------------------------------------
|
||||
|
||||
You cannot change the API that your experiment exposes: :meth:`~artiq.language.environment.HasEnvironment.build`, :meth:`~artiq.language.environment.Experiment.prepare`, :meth:`~artiq.language.environment.Experiment.run` and :meth:`~artiq.language.environment.Experiment.analyze` need to be regular functions, not generators or asyncio coroutines. That would make reusing your own code in sub-experiments difficult and fragile. You can however wrap your own generators/coroutines/tasks in regular functions that you then expose as part of the API.
|
||||
You can not change the API that your experiment exposes: ``build()``,
|
||||
``prepare()``, ``run()`` and ``analyze()`` need to be regular functions, not
|
||||
generators or asyncio coroutines. That would make reusing your own code in
|
||||
sub-experiments difficult and fragile. You can however wrap your own
|
||||
generators/coroutines/tasks in regular functions that you then expose as part
|
||||
of the API.
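
A sketch of such a wrapper, driving a private coroutine from an ordinary ``run()`` (class and method names are illustrative): ::

    import asyncio

    from artiq.experiment import *


    class AsyncWrapped(EnvExperiment):
        def build(self):
            pass

        async def _measure(self):
            await asyncio.sleep(1)       # stand-in for real asynchronous work
            return 42

        def run(self):
            # run() remains a regular function, as the experiment API requires
            result = asyncio.run(self._measure())
            print(result)
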
determine the pyserial URL to connect to a device by its serial number?
|
||||
-----------------------------------------------------------------------
|
||||
determine the pyserial URL to attach to a device by its serial number?
|
||||
----------------------------------------------------------------------
|
||||
|
||||
You can list your system's serial devices and print their vendor/product id and serial number by running::
|
||||
You can list your system's serial devices and print their vendor/product
|
||||
id and serial number by running::
|
||||
|
||||
$ python3 -m serial.tools.list_ports -v
|
||||
|
||||
This will give you the ``/dev/ttyUSBxx`` (or ``COMxx`` for Windows) device names. The ``hwid:`` field gives you the string you can pass via the ``hwgrep://`` feature of pyserial `serial_for_url() <https://pythonhosted.org/pyserial/pyserial_api.html#serial.serial_for_url>`_ in order to open a serial device.
|
||||
It will give you the ``/dev/ttyUSBxx`` (or the ``COMxx`` for Windows) device
|
||||
names.
|
||||
The ``hwid:`` field gives you the string you can pass via the ``hwgrep://``
|
||||
feature of pyserial
|
||||
`serial_for_url() <https://pythonhosted.org/pyserial/pyserial_api.html#serial.serial_for_url>`_
|
||||
in order to open a serial device.
|
||||
|
||||
The preferred way to specify a serial device is to make use of the ``hwgrep://`` URL: it allows for selecting the serial device by its USB vendor ID, product
|
||||
ID and/or serial number. These never change, unlike the device file name.
|
||||
The preferred way to specify a serial device is to make use of the ``hwgrep://``
|
||||
URL: it allows to select the serial device by its USB vendor ID, product
|
||||
ID and/or serial number. Those never change, unlike the device file name.
|
||||
|
||||
For instance, if you want to specify the Vendor/Product ID and the USB Serial Number, you can do: ::
|
||||
For instance, if you want to specify the Vendor/Product ID and the USB Serial Number, you can do:
|
||||
|
||||
``-d "hwgrep://<VID>:<PID> SNR=<serial_number>"``.
|
||||
for example:
|
||||
|
||||
``-d "hwgrep://0403:faf0 SNR=83852734"``
|
||||
|
||||
    $ -d "hwgrep://<VID>:<PID> SNR=<serial_number>"
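
The same URL can also be used from Python if you are writing your own controller, reusing the example values above: ::

    import serial

    port = serial.serial_for_url("hwgrep://0403:faf0 SNR=83852734", baudrate=115200)
    port.write(b"*IDN?\n")
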
run unit tests?
|
||||
---------------
|
||||
|
@ -235,44 +124,9 @@ The core device tests require the following TTL devices and connections:
|
|||
|
||||
If TTL devices are missing, the corresponding tests are skipped.
|
||||
|
||||
.. _gui-config-files:
|
||||
|
||||
find the dashboard and browser configuration files?
|
||||
---------------------------------------------------
|
||||
find where the dashboard and browser configuration files are stored?
---------------------------------------------------------------------
|
||||
|
||||
::
|
||||
|
||||
python -c "from artiq.tools import get_user_config_dir; print(get_user_config_dir())"
|
||||
|
||||
Additional Resources
|
||||
====================
|
||||
|
||||
Other related documentation
|
||||
---------------------------
|
||||
|
||||
- the `Sinara wiki <https://github.com/sinara-hw/meta/wiki>`_
|
||||
- the `SiPyCo manual <https://m-labs.hk/artiq/sipyco-manual/>`_
|
||||
- the `Migen manual <https://m-labs.hk/migen/manual/>`_
|
||||
- in a pinch, the `M-labs internal docs <https://git.m-labs.hk/sinara-hw/assembly>`_
|
||||
|
||||
For more advanced questions, sometimes the `list of publications <https://m-labs.hk/experiment-control/publications/>`_ about experiments performed using ARTIQ may be interesting. See also the official M-Labs `resources <https://m-labs.hk/experiment-control/resources/>`_ page, especially the section on community code.
|
||||
|
||||
"Help, I've done my best and I can't get any further!"
|
||||
------------------------------------------------------
|
||||
|
||||
- If you have an active M-Labs AFWS/support subscription, you can email helpdesk@ at any time for personalized assistance. Please include the following information:
|
||||
|
||||
- Your installed ARTIQ version (add ``--version`` to any of the standard ARTIQ commands)
|
||||
- The variant name of your system (refer to the sticker on the crate if you aren't sure)
|
||||
- The recent output of your core log, either through ``artiq_coremgmt`` (if you're able to contact your device by network), or over UART following :ref:`the guide here <connecting-UART>`
|
||||
- How your problem happened, and what you've already tried to fix it
|
||||
|
||||
- Compare your materials with the examples; see also :ref:`finding ARTIQ examples <faq-find-examples>` above.
|
||||
- Check the list of `active issues <https://github.com/m-labs/artiq/issues>`_ on the ARTIQ GitHub repository for possible known problems with ARTIQ. Search through the closed issues to see if your question or concern has been addressed before.
|
||||
- Search the `M-Labs forum <https://forum.m-labs.hk/>`_ for similar problems, or make a post asking for help yourself.
|
||||
- Look into the `Mattermost live chat <https://chat.m-labs.hk>`_ or the bridged IRC channel.
|
||||
- Read the open source code and its docstrings and figure it out.
|
||||
- If you're reasonably certain you've identified a bug, or if you'd like to suggest a feature that should be included in future ARTIQ releases, `file a GitHub issue <https://github.com/m-labs/artiq/issues/new/choose>`_ yourself, following one of the provided templates.
|
||||
- In some odd cases, you may want to see the `mailing list archive <https://www.mail-archive.com/artiq@lists.m-labs.hk/>`_; the ARTIQ mailing list was shut down at the end of 2020 and was last regularly used during the time of ARTIQ-2 and 3, but for some older ARTIQ features, or to understand a development thought process, you may still find relevant information there.
|
||||
|
||||
In any situation, if you found the manual unclear or unhelpful, you might consider following the :ref:`directions for contribution <build-documentation>` and editing it to be more helpful for future readers.
|
|
@ -9,13 +9,13 @@
|
|||
Obtaining board binaries
|
||||
------------------------
|
||||
|
||||
If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware for your system that corresponds to your currently installed version of ARTIQ using the ARTIQ firmware service (AFWS). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. The client :mod:`~artiq.frontend.afws_client` is included in all ARTIQ installations.
|
||||
If you have an active firmware subscription with M-Labs or QUARTIQ, you can obtain firmware for your system that corresponds to your currently installed version of ARTIQ using the ARTIQ firmware service (AFWS). One year of subscription is included with most hardware purchases. You may purchase or extend firmware subscriptions by writing to the sales@ email. The client ``afws_client`` is included in all ARTIQ installations.
|
||||
|
||||
Run the command::
|
||||
|
||||
    $ afws_client <username> build <afws_directory> <variant>
|
||||
|
||||
Replace ``<username>`` with the login name that was given to you with the subscription, ``<variant>`` with the name of your system variant, and ``<afws_directory>`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email. For more information about :mod:`~artiq.frontend.afws_client` see also the corresponding entry on the :ref:`Utilities <afws-client>` page.
|
||||
Replace ``<username>`` with the login name that was given to you with the subscription, ``<variant>`` with the name of your system variant, and ``<afws_directory>`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email. For more information about ``afws_client`` see also the corresponding entry on the :ref:`Utilities <afws-client>` page.
|
||||
|
||||
For certain configurations (KC705 or ZC706 only) it is also possible to source firmware from `the M-Labs Hydra server <https://nixbld.m-labs.hk/project/artiq>`_ (in ``main`` and ``zynq`` respectively).
|
||||
|
||||
|
@ -25,9 +25,9 @@ Installing and configuring OpenOCD
|
|||
----------------------------------
|
||||
|
||||
.. warning::
|
||||
These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility :mod:`~artiq.frontend.artiq_flash`. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.
|
||||
These instructions are not applicable to Zynq devices (Kasli-SoC or ZC706), which do not use the utility ``artiq_flash`` to reflash. If your core device is a Zynq device, skip straight to :ref:`writing-flash`.
|
||||
|
||||
ARTIQ supplies the utility :mod:`~artiq.frontend.artiq_flash`, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the second is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.
|
||||
ARTIQ supplies the utility ``artiq_flash``, which uses OpenOCD to write the binary images into an FPGA board's flash memory. For both Nix and MSYS2, OpenOCD is included with the installation by default. Note that in the case of Nix this is the package ``artiq.openocd-bscanspi`` and not ``pkgs.openocd``; the second is OpenOCD from the Nix package collection, which does not support ARTIQ/Sinara boards.
|
||||
|
||||
.. note::
|
||||
|
||||
|
@ -78,12 +78,12 @@ On Windows
|
|||
Writing the flash
|
||||
-----------------
|
||||
|
||||
First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you can simply connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses :mod:`~artiq.frontend.artiq_coremgmt` to flash over network, an Ethernet connection and an IP address, supplied either with the ``-D`` option or in your :ref:`device database <device-db>`, are sufficient.
|
||||
First ensure the board is connected to your computer. In the case of Kasli, the JTAG adapter is integrated into the Kasli board; for flashing (and debugging) you can simply connect your computer to the micro-USB connector on the Kasli front panel. For Kasli-SoC, which uses ``artiq_coremgmt`` to flash over network, an Ethernet connection and an IP address, supplied either with the ``-D`` option or in your :ref:`device database <device-db>`, are sufficient.
|
||||
|
||||
For Kasli-SoC or ZC706:
|
||||
::
|
||||
|
||||
$ artiq_coremgmt [-D IP_address] config write -f boot <afws_directory>/boot.bin
|
||||
$ artiq_coremgmt [-D 192.168.1.75] config write -f boot [afws_directory]/boot.bin
|
||||
$ artiq_coremgmt reboot
|
||||
|
||||
If the device is not reachable due to corrupted firmware or networking problems, extract the SD card and copy ``boot.bin`` onto it manually.
|
||||
|
@ -91,16 +91,16 @@ For Kasli-SoC or ZC706:
|
|||
For Kasli:
|
||||
::
|
||||
|
||||
$ artiq_flash -d <afws_directory>
|
||||
$ artiq_flash -d [afws_directory]
|
||||
|
||||
For KC705:
|
||||
::
|
||||
|
||||
$ artiq_flash -t kc705 -d <afws_directory>
|
||||
$ artiq_flash -t kc705 -d [afws_directory]
|
||||
|
||||
The SW13 switches need to be set to 00001.
|
||||
|
||||
Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for :mod:`~artiq.frontend.artiq_flash` in the :ref:`Utilities <flashing-loading-tool>` reference.
|
||||
Flashing over network is also possible for Kasli and KC705, assuming IP networking has already been set up. In this case, the ``-H HOSTNAME`` option is used; see the entry for ``artiq_flash`` in the :ref:`Utilities <flashing-loading-tool>` reference.
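
For example, with an illustrative hostname: ``artiq_flash -H [hostname] -d <afws_directory>``.
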
.. _connecting-uart:
|
||||
|
||||
|
|
|
@ -1,5 +1,10 @@
|
|||
Getting started with the core device
|
||||
====================================
|
||||
Getting started with the core language
|
||||
======================================
|
||||
|
||||
.. _connecting-to-the-core-device:
|
||||
|
||||
Connecting to the core device
|
||||
-----------------------------
|
||||
|
||||
As a very first step, we will turn on a LED on the core device. Create a file ``led.py`` containing the following: ::
|
||||
|
||||
|
@ -9,12 +14,12 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
|
|||
class LED(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
self.setattr_device("led0")
|
||||
self.setattr_device("led")
|
||||
|
||||
@kernel
|
||||
def run(self):
|
||||
self.core.reset()
|
||||
self.led0.on()
|
||||
self.led.on()
|
||||
|
||||
The central part of our code is our ``LED`` class, which derives from :class:`~artiq.language.environment.EnvExperiment`. Almost all experiments should derive from this class, which provides access to the environment as well as including the necessary experiment framework from the base-level :class:`~artiq.language.environment.Experiment`. It will call our :meth:`~artiq.language.environment.HasEnvironment.build` at the right time and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` we use to gain access to our devices ``core`` and ``led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method is a kernel and must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host).
|
||||
|
||||
|
@ -27,7 +32,7 @@ If you don't have a ``device_db.py`` for your system, consult :ref:`device-db` t
|
|||
|
||||
python3 -c "import artiq; print(artiq.__path__[0])"
|
||||
|
||||
Run your code using :mod:`~artiq.frontend.artiq_run`, which is one of the ARTIQ front-end tools: ::
|
||||
Run your code using ``artiq_run``, which is one of the ARTIQ front-end tools: ::
|
||||
|
||||
$ artiq_run led.py
|
||||
|
||||
|
@ -46,7 +51,7 @@ Modify ``led.py`` as follows: ::
|
|||
class LED(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
self.setattr_device("led0")
|
||||
self.setattr_device("led")
|
||||
|
||||
@kernel
|
||||
def run(self):
|
||||
|
@ -54,9 +59,9 @@ Modify ``led.py`` as follows: ::
|
|||
s = input_led_state()
|
||||
self.core.break_realtime()
|
||||
if s:
|
||||
self.led0.on()
|
||||
self.led.on()
|
||||
else:
|
||||
self.led0.off()
|
||||
self.led.off()
|
||||
|
||||
|
||||
You can then turn the LED off and on by entering 0 or 1 at the prompt that appears: ::
|
||||
|
@ -70,7 +75,7 @@ What happens is that the ARTIQ compiler notices that the ``input_led_state`` fun
|
|||
|
||||
The return type of all RPC functions must be known in advance. If the return value is not ``None``, the compiler requires a type annotation, like ``-> TBool`` in the example above. See also :ref:`compiler-types`.
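
For reference, the RPC referenced above is defined along these lines earlier in the tutorial: ::

    def input_led_state() -> TBool:
        return input("Enter desired LED state: ") == "1"
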
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led0.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led0.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :doc:`rtio` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
|
||||
Without the :meth:`~artiq.coredevice.core.Core.break_realtime` call, the RTIO events emitted by :meth:`self.led.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led.off() <artiq.coredevice.ttl.TTLInOut.off>` would be scheduled at a fixed and very short delay after entering :meth:`~artiq.language.environment.Experiment.run()`. These events would fail because the RPC to ``input_led_state()`` can take an arbitrarily long amount of time, and therefore the deadline for the submission of RTIO events would have long passed when :meth:`self.led.on() <artiq.coredevice.ttl.TTLInOut.on>` or :meth:`self.led.off() <artiq.coredevice.ttl.TTLInOut.off>` are called (that is, the ``rtio_counter_mu`` wall clock will have advanced far ahead of the timeline cursor ``now_mu``, and an :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` would result; see :ref:`artiq-real-time-i-o-concepts` for the full explanation of wall clock vs. timeline.) The :meth:`~artiq.coredevice.core.Core.break_realtime` call is necessary to waive the real-time requirements of the LED state change. Rather than delaying by any particular time interval, it reads ``rtio_counter_mu`` and moves up the ``now_mu`` cursor far enough to ensure it's once again safely ahead of the wall clock.
|
||||
|
||||
Real-time Input/Output (RTIO)
|
||||
-----------------------------
|
||||
|
@ -95,11 +100,19 @@ Create a new file ``rtio.py`` containing the following: ::
|
|||
delay(2*us)
|
||||
self.ttl0.pulse(2*us)
|
||||
|
||||
In its :meth:`~artiq.language.environment.HasEnvironment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database. In ARTIQ, TTL is used roughly synonymous with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.
|
||||
In its :meth:`~artiq.language.environment.HasEnvironment.build` method, the experiment obtains the core device and a TTL device called ``ttl0`` as defined in the device database.
|
||||
In ARTIQ, TTL is used roughly synonymous with "a single generic digital signal" and does not refer to a specific signaling standard or voltage/current levels.
|
||||
|
||||
When :meth:`~artiq.language.environment.Experiment.run`, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to. Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode. There are no input-only TTL channels.
|
||||
When :meth:`~artiq.language.environment.Experiment.run`, the experiment first ensures that ``ttl0`` is in output mode and actively driving the device it is connected to.
|
||||
Bidirectional TTL channels (i.e. :class:`~artiq.coredevice.ttl.TTLInOut`) are in input (high impedance) mode by default, output-only TTL channels (:class:`~artiq.coredevice.ttl.TTLOut`) are always in output mode.
|
||||
There are no input-only TTL channels.
|
||||
|
||||
The experiment then drives one million 2 µs long pulses separated by 2 µs each. Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run rtio.py``. Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%. This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled. The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable. Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.
|
||||
The experiment then drives one million 2 µs long pulses separated by 2 µs each.
|
||||
Connect an oscilloscope or logic analyzer to TTL0 and run ``artiq_run rtio.py``.
|
||||
Notice that the generated signal's period is precisely 4 µs, and that it has a duty cycle of precisely 50%.
|
||||
This is not what one would expect if the delay and the pulse were implemented with register-based general purpose input output (GPIO) that is CPU-controlled.
|
||||
The signal's period would depend on CPU speed, and overhead from the loop, memory management, function calls, etc., all of which are hard to predict and variable.
|
||||
Any asymmetry in the overhead would manifest itself in a distorted and variable duty cycle.
|
||||
|
||||
Instead, inside the core device, output timing is generated by the gateware and the CPU only programs switching commands with certain timestamps that the CPU computes.
|
||||
|
||||
|
@ -153,7 +166,11 @@ Try the following code and observe the generated pulses on a 2-channel oscillosc
|
|||
self.ttl1.pulse(4*us)
|
||||
delay(4*us)
|
||||
|
||||
ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches. It simply remembers its position on the timeline (``now_mu``) when entering the ``parallel`` block and resets to that position after each individual statement. At the end of the block, the cursor is advanced to the furthest position it reached during the block. In other words, the statements in a ``parallel`` block are actually executed sequentially. Only the RTIO events generated by the statements are *scheduled* in parallel.
|
||||
ARTIQ can implement ``with parallel`` blocks without having to resort to any of the typical parallel processing approaches.
|
||||
It simply remembers its position on the timeline (``now_mu``) when entering the ``parallel`` block and resets to that position after each individual statement.
|
||||
At the end of the block, the cursor is advanced to the furthest position it reached during the block.
|
||||
In other words, the statements in a ``parallel`` block are actually executed sequentially.
|
||||
Only the RTIO events generated by the statements are *scheduled* in parallel.
|
||||
|
||||
Remember that while ``now_mu`` resets at the beginning of each statement in a ``parallel`` block, the wall clock advances regardless. If a particular statement takes a long time to execute (which is different from -- and unrelated to! -- the events *scheduled* by the statement taking a long time), the wall clock may advance past the reset value, putting any subsequent statements inside the block into a situation of negative slack (i.e., resulting in :exc:`~artiq.coredevice.exceptions.RTIOUnderflow` ). Sometimes underflows may be avoided simply by reordering statements within the parallel block. This especially applies to input methods, which generally necessarily block CPU progress until the wall clock has caught up to or overtaken the cursor.
|
||||
|
||||
|
@ -169,34 +186,32 @@ Within a parallel block, some statements can be scheduled sequentially again usi
|
|||
delay(4*us)
|
||||
|
||||
.. warning::
|
||||
``with parallel`` specifically 'parallelizes' the *top-level* statements inside a block. Consider as an example:
|
||||
|
||||
.. code-block::
|
||||
:linenos:
|
||||
``with parallel`` specifically 'parallelizes' the *top-level* statements inside a block. Consider as an example: ::
|
||||
|
||||
for i in range(1000000):
|
||||
with parallel:
|
||||
self.ttl0.pulse(2*us)
|
||||
if True:
|
||||
self.ttl1.pulse(2*us)
|
||||
self.ttl2.pulse(2*us)
|
||||
self.ttl0.pulse(2*us) # 1
|
||||
if True: # 2
|
||||
self.ttl1.pulse(2*us) # 3
|
||||
self.ttl2.pulse(2*us) # 4
|
||||
delay(4*us)
|
||||
|
||||
This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of line #4; it will not repeat the reset at the deeper indentation level for #5 or #6.
|
||||
This code will not schedule the three pulses to ``ttl0``, ``ttl1``, and ``ttl2`` in parallel. Rather, the pulse to ``ttl1`` is 'parallelized' *with the if statement*. The timeline cursor resets once, at the beginning of statement #2; it will not repeat the reset at the deeper indentation level for #3 or #4.
|
||||
|
||||
In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 us. Internally, lines #5 and #6, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To schedule #5 and #6 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
|
||||
In practice, the pulses to ``ttl0`` and ``ttl1`` will execute simultaneously, and the pulse to ``ttl2`` will execute after the pulse to ``ttl1``, bringing the total duration of the ``parallel`` block to 4 us. Internally, statements #3 and #4, contained within the top-level if statement, are considered an atomic sequence and executed within an implicit ``with sequential``. To execute #3 and #4 in parallel, it is necessary to place them inside a second, nested ``parallel`` block within the if statement.
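
A sketch of that nested arrangement, with which all three pulses are scheduled simultaneously: ::

    for i in range(1000000):
        with parallel:
            self.ttl0.pulse(2*us)
            if True:
                with parallel:           # nested block: ttl1 and ttl2 now also overlap
                    self.ttl1.pulse(2*us)
                    self.ttl2.pulse(2*us)
        delay(4*us)
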
Particular care needs to be taken when working with ``parallel`` blocks which generate large numbers of RTIO events, as it is possible to cause sequencing issues in the gateware; see also :ref:`sequence-errors`.
|
||||
|
||||
.. _rtio-analyzer:
|
||||
.. _rtio-analyzer-example:
|
||||
|
||||
RTIO analyzer
|
||||
-------------
|
||||
|
||||
The core device records all real-time I/O waveforms, as well as the variation of RTIO slack, into a circular buffer, the contents of which can be extracted using :mod:`~artiq.frontend.artiq_coreanalyzer`. Try for example: ::
|
||||
The core device records the real-time I/O waveforms into a circular buffer. It is possible to dump any Python object so that it appears alongside the waveforms using the ``rtio_log`` function, which accepts a channel name (i.e. a log target) as the first argument: ::
|
||||
|
||||
from artiq.experiment import *
|
||||
|
||||
|
||||
class Tutorial(EnvExperiment):
|
||||
def build(self):
|
||||
self.setattr_device("core")
|
||||
|
@ -205,37 +220,19 @@ The core device records all real-time I/O waveforms, as well as the variation of
|
|||
@kernel
|
||||
def run(self):
|
||||
self.core.reset()
|
||||
for i in range(5):
|
||||
self.ttl0.pulse(0.1 * ms)
|
||||
delay(0.1 * ms)
|
||||
for i in range(100):
|
||||
self.ttl0.pulse(...)
|
||||
rtio_log("ttl0", "i", i)
|
||||
delay(...)
|
||||
|
||||
When using :mod:`~artiq.frontend.artiq_run`, the recorded buffer data can be extracted directly into the terminal, using a command in the form of: ::
|
||||
|
||||
$ artiq_coreanalyzer -p
|
||||
|
||||
.. note::
|
||||
The first time this command is run, it will retrieve the entire contents of the analyzer buffer, which may include every experiment you have run so far. For a more manageable introduction, run the analyzer once to clear the buffer, run the experiment, and then run the analyzer a second time, so that only the data from this single experiment is displayed.
|
||||
|
||||
This will produce a list of the exact output events submitted to RTIO, printed in chronological order, along with the state of both ``now_mu`` and ``rtio_counter_mu``. While useful in diagnosing some specific gateware errors (in particular, :ref:`sequencing issues <sequence-errors>`), it isn't the most readable of formats. An alternate is to export to VCD, which can be viewed using third-party tools such as GTKWave. Run the experiment again, and use a command in the form of: ::
|
||||
|
||||
$ artiq_coreanalyzer -w <file_name>.vcd
|
||||
|
||||
The ``<file_name>.vcd`` file should be immediately created and written. Check the directory the command was run in to find it.
|
||||
|
||||
.. tip::
|
||||
|
||||
Tutorials on GTKWave options (or other third-party tools) and how best to view VCD files can be found online. By default, the data in a trace like ``rtio_slack`` will probably be presented in a raw form. To see a stepped wave as in the ARTIQ dashboard, look for options to interpret the data as a real number, then as an analog signal.
|
||||
|
||||
Pay attention to the timescale of the waveform dock in your chosen viewer; if you have set your signals to display but nothing is visible, it is likely zoomed in or out much too far.
|
||||
|
||||
The easiest way to view recorded analyzer data, however, is directly in the ARTIQ dashboard, a feature which will be presented later in :ref:`interactivity-waveform`.
|
||||
When using ``artiq_run``, the recorded data can be extracted using ``artiq_coreanalyzer`` (see :ref:`core-device-rtio-analyzer-tool`). To export it to VCD, which can be viewed using third-party tools such as GtkWave, use the command ``artiq_coreanalyzer -w rtio.vcd``. Recorded data can also be viewed directly with the ARTIQ dashboard, which will be presented later in :doc:`getting_started_mgmt`.
|
||||
|
||||
.. _getting-started-dma:
|
||||
|
||||
Direct Memory Access (DMA)
|
||||
--------------------------
|
||||
|
||||
DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up' to ``now_mu``. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.
|
||||
DMA allows for storing fixed sequences of RTIO events in system memory and having the DMA core in the FPGA play them back at high speed. Provided that the specifications of a desired event sequence are known far enough in advance, and no other RTIO issues (collisions, sequence errors) are provoked, even extremely fast and detailed event sequences can always be generated and executed. RTIO underflows occur when events cannot be generated *as fast as* they need to be executed, resulting in an exception when the wall clock 'catches up'. The solution is to record these sequences to the DMA core. Once recorded, event sequences are fixed and cannot be modified, but can be safely replayed very quickly at any position in the timeline, potentially repeatedly.
|
||||
|
||||
Try this: ::
|
||||
|
||||
|
|
|
@ -1,31 +1,27 @@
|
|||
Using the management system
|
||||
===========================
|
||||
Getting started with the management system
|
||||
==========================================
|
||||
|
||||
In practice, rather than managing experiments by executing :mod:`~artiq.frontend.artiq_run` over and over, most use cases are better served by using the ARTIQ *management system*. This is the high-level application part of ARTIQ, which can be used to schedule experiments, manage devices and parameters, and distribute and store results. It also allows for distributed use of ARTIQ, with a single master coordinating demands on the system issued over the network by multiple clients. Using this system, multiple users on different machines can schedule experiments or analyze results on the same ARTIQ system, potentially simultaneously, without interfering with each other.
|
||||
In practice, rather than managing experiments by executing ``artiq_run`` over and over, most use cases are better served by using the ARTIQ *management system.* This is the high-level part of ARTIQ, which can be used to schedule experiments, distribute and store the results, and manage devices and parameters. It possesses a detailed GUI and can be used on several machines concurrently, allowing them to coordinate with each other and with the specialized hardware over the network. As a result, multiple users on different machines can schedule experiments or retrieve results on the same ARTIQ system, potentially simultaneously.
|
||||
|
||||
The management system consists of at least two parts:
|
||||
|
||||
a. the **ARTIQ master,** which runs on a single machine, facilitates communication with the core device and peripherals, and is responsible for most of the actual duties of the system,
|
||||
b. one or more **ARTIQ clients,** which may be local or remote and which communicate only with the master. Both a GUI (the **dashboard**) and a straightforward **command line client** are provided, with many of the same capabilities.
|
||||
b. one or more **ARTIQ clients,** which may be local or remote and which communicate only with the master. Both a GUI (the **dashboard**) and a straightforward command line client are provided, with many of the same capabilities.
|
||||
|
||||
as well as, optionally,
|
||||
|
||||
c. one or more **controller managers**, which coordinate the operation of certain (generally, non-realtime) classes of device and provide certain services to the clients,
|
||||
d. and one or more instances of the **ARTIQ browser**, a GUI application designed to facilitate the analysis of experiment results and datasets.
|
||||
c. one or more **controller managers**, which help coordinate the operation of certain (generally, non-realtime) classes of device.
|
||||
|
||||
In this tutorial, we will explore the basic operation of the management system. Because the various components of the management system run wholly on the host machine, and not on the core device (in other words, they do not inherently involve any kernel functions), it is not necessary to have a core device or any specialized hardware set up to use it. The examples in this tutorial can all be carried out using only your host computer.
|
||||
In this tutorial, we will explore the basic operation of the management system. Because the various components of the management system run wholly on the host machine, and not on the core device (in other words, they do not inherently involve any kernel functions), it is not necessary to have a core device or any special hardware set up to use it. The examples in this tutorial can all be carried out using only your computer.
|
||||
|
||||
Running your first experiment with the master
|
||||
---------------------------------------------
|
||||
Starting your first experiment with the master
|
||||
----------------------------------------------
|
||||
|
||||
Until now, we have executed experiments using :mod:`~artiq.frontend.artiq_run`, which is a simple standalone tool that bypasses the management system. We will now see how to run an experiment using a master and a client. In this arrangement, the master is responsible for communicating with the core device, scheduling and keeping track of experiments, and carrying out RPCs the core device may call for. Clients submit experiments to the master to be scheduled and, if necessary, query the master about the state of experiments and their results.
|
||||
In the previous tutorial, we used the ``artiq_run`` utility to execute our experiments, which is a simple standalone tool that bypasses the management system. We will now see how to run an experiment using the master and the dashboard.
|
||||
|
||||
First, create a folder called ``~/artiq-master``. Copy into it the ``device_db.py`` for your system (your device database, exactly as in :doc:`getting_started_core`); the master uses the device database in the same way as :mod:`~artiq.frontend.artiq_run` when communicating with the core device.
|
||||
First, create a folder ``~/artiq-master`` and copy into it the ``device_db.py`` for your system (your device database, exactly as in :ref:`connecting-to-the-core-device`.) The master uses the device database in the same way as ``artiq_run`` when communicating with the core device. Since no devices are actually used in these examples, you can also use the ``device_db.py`` found in ``examples/no_hardware``.
|
||||
|
||||
.. tip::
|
||||
Since no devices are actually used in these examples, you can also use a device database in the model of the ``device_db.py`` from ``examples/no_hardware``, which uses resources from ``artiq/sim`` instead of referencing or requiring any real local hardware.
|
||||
|
||||
Secondly, create a subfolder ``~/artiq-master/repository`` to contain experiments. By default, the master scans for a folder of this name to determine what experiments are available; if you'd prefer to use a different name, this can be changed by running ``artiq_master -r [folder name]`` instead of ``artiq_master`` below. Experiments don't have to be in the repository to be submitted to the master, but the repository contains those experiments the master is automatically aware of.
|
||||
Secondly, create a subfolder ``~/artiq-master/repository`` to contain experiments. By default, the master scans for a folder of this name to determine what experiments are available. If you'd prefer to use a different name, this can be changed by running ``artiq_master -r [folder name]`` instead of ``artiq_master`` below.
|
||||
|
||||
Create a very simple experiment in ``~/artiq-master/repository`` and save it as ``mgmt_tutorial.py``: ::
|
||||
|
||||
|
@ -39,117 +35,63 @@ Create a very simple experiment in ``~/artiq-master/repository`` and save it as
|
|||
def run(self):
|
||||
print("Hello World")
|
||||
|
||||
|
||||
Start the master with: ::
|
||||
|
||||
$ cd ~/artiq-master
|
||||
$ artiq_master
|
||||
|
||||

This command should display ``ARTIQ master is now ready`` and not return, as the master keeps running. In another terminal, use the client to request this experiment: ::

    $ artiq_client submit repository/mgmt_tutorial.py

This command should print a message in the format ``RID: 0``, telling you the scheduling ID assigned to the experiment by the master, and exit. Note that it doesn't matter *where* the client is run; the client does not require direct access to ``device_db.py`` or the repository folder, and only communicates directly with the master. Relatedly, the path to an experiment a client submits is given relative to the location of the *master*, not the client.

Return to the terminal where the master is running. You should see an output similar to: ::

    INFO:worker(0,mgmt_tutorial.py):print:Hello World

In other words, a worker created by the master has executed the experiment and carried out the print instruction. Congratulations!

.. tip::
    In order to run the master and the clients on different PCs, start the master with a ``--bind`` flag: ::

        $ artiq_master --bind [hostname or IP to bind to]

    and then use the option ``--server`` or ``-s`` for clients, as in: ::

        $ artiq_client -s [hostname or IP of the master]
        $ artiq_dashboard -s [hostname or IP of the master]

    Both IPv4 and IPv6 are supported. See also the individual references :mod:`~artiq.frontend.artiq_master`, :mod:`~artiq.frontend.artiq_dashboard`, and :mod:`~artiq.frontend.artiq_client` for more details.

You may also notice that the master has created some other organizational files in its home directory, notably a folder ``results``, where an HDF5 record is preserved of every experiment that is submitted and run. The files in ``results`` will be discussed in greater detail in :doc:`using_data_interfaces`.

Running the dashboard and controller manager
--------------------------------------------

Submitting experiments with :mod:`~artiq.frontend.artiq_client` has some interesting qualities: for instance, experiments can be requested simultaneously by different clients and be relied upon to execute cleanly in sequence, which is useful in a distributed context. On the other hand, on a local level, it doesn't necessarily carry many practical advantages over using :mod:`~artiq.frontend.artiq_run`. The real convenience of the management system lies in its GUI, the dashboard. We will now try submitting an experiment using the dashboard.

First, start the controller manager: ::

    $ artiq_ctlmgr

Like the master, this command should not return, as the controller manager keeps running. Note that the controller manager requires access to the device database, but not in the local directory -- it gets that access automatically by connecting to the master.

.. note::
    We will not be using controllers in this part of the tutorial. Nonetheless, the dashboard will expect to be able to contact certain controllers given in the device database, and print error messages if this isn't the case (e.g. ``Is aqctl_moninj_proxy running?``). It is equally possible to check your device database and start the requisite controllers manually, or to temporarily delete their entries from ``device_db.py``, but it's normally quite convenient to let the controller manager handle things. The role and use of controller managers will be covered in more detail in :doc:`using_data_interfaces`.

In a third terminal, start the dashboard: ::

    $ artiq_dashboard

Like :mod:`~artiq.frontend.artiq_client`, the dashboard requires no direct access to the device database or the repository. It communicates with the master to receive information about available experiments and the state of the system.

You should see the list of experiments from the ``repository`` in the dock called 'Explorer'. In our case, this will only be the single experiment we created, listed by the name we gave it in the docstring inside the triple quotes, "Management tutorial". Select it, and in the window that opens, click 'Submit'.

This time you will find the output displayed directly in the dock called 'Log'. The dashboard log combines the master's console output, the dashboard's own logs, and the device logs of the core device itself (if there is one in use); normally, this is the only log it's necessary to check.

Adding a new experiment
-----------------------

Create a new file in your ``repository`` folder, called ``timed_tutorial.py``: ::

    from artiq.experiment import *
    import time


    class TimedTutorial(EnvExperiment):
        """Timed tutorial"""
        def build(self):
            pass  # no devices used

        def run(self):
            print("Hello World")
            time.sleep(10)
            print("Goodnight World")

Save it. You will notice that it does not immediately appear in the 'Explorer' dock. For stability reasons, the master operates with a cached idea of the repository, and changes in the file system will often not be reflected until a *repository rescan* is triggered.

You can ask it to do this through the command-line client: ::

    $ artiq_client scan-repository

or you can right-click in the Explorer and select 'Scan repository HEAD'. Now you should be able to select and submit the new experiment.

If you switch the 'Log' dock to its 'Schedule' tab while the experiment is still running, you will see the experiment appear, displaying its RID, status, priority, and other information. Click 'Submit' again while the first experiment is in progress, and a second iteration of the experiment will appear in the Schedule, queued up to execute next in line.

.. note::
    You may have noted that experiments can be submitted with a due date, a priority level, a pipeline identifier, and other specific settings. Some of these are self-explanatory. Many are scheduling-related. For more information on experiment scheduling, especially relevant when submitting longer experiments or working with multiple users, see :ref:`experiment-scheduling`.

In the meantime, you can try out submitting either of the two experiments with different priority levels and take a look at the queues that ensue. If you are interested, you can try submitting experiments through the command-line client at the same time, or even open a second dashboard in a different terminal. Observe that no matter the source, all submitted experiments will be accounted for and handled by the scheduler in an orderly way.
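
For instance, a quick way to fill and then inspect the queue from the command line might be the following (the ``show`` subcommand is assumed here; check ``artiq_client --help`` for the exact options available in your version): ::

    $ artiq_client submit repository/mgmt_tutorial.py
    $ artiq_client submit repository/timed_tutorial.py
    $ artiq_client show schedule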

.. _mgmt-arguments:

Adding arguments
----------------

Experiments may have arguments, values which can be set in the dashboard on submission and used in the experiment's code. Create a new experiment called ``argument_tutorial.py``, and give it the following :meth:`~artiq.language.environment.HasEnvironment.build` and :meth:`~artiq.language.environment.Experiment.run` functions: ::

    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))

    def run(self):
        for i in range(self.count):
            print("Hello World", i)

The method :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` acts to set the argument and make its value accessible, similar to the effect of :meth:`~artiq.language.environment.HasEnvironment.setattr_device`. The second input sets the type of the argument; here, :class:`~artiq.language.environment.NumberValue` represents a floating point numerical value. To learn what other types are supported, see :mod:`artiq.language.environment` and :mod:`artiq.language.scan`.
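
As a quick illustration, a ``build`` method could combine several of these argument types along the following lines (a sketch; the argument names are arbitrary, and the exact constructor signatures are documented in the references above): ::

    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))
        self.setattr_argument("enable", BooleanValue(True))
        self.setattr_argument("mode", EnumerationValue(["fast", "slow"], "fast"))
        self.setattr_argument("comment", StringValue("no comment"))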

Rescan the repository as before. Open the new experiment in the dashboard. Above the submission options, you should now see a spin box that allows you to set the value of ``count``. Try setting it and submitting it.

Interactive arguments
---------------------

With standard arguments, it is only possible to use :meth:`~artiq.language.environment.HasEnvironment.setattr_argument` in :meth:`~artiq.language.environment.HasEnvironment.build`; these arguments are always requested at submission time. However, it is also possible to use *interactive* arguments, which can be requested and supplied inside :meth:`~artiq.language.environment.Experiment.run`, while the experiment is being executed. Modify the experiment as follows: ::

    def build(self):
        pass

    ...

                interactive.setattr_argument("repeat", BooleanValue(True))
            repeat = interactive.repeat
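
The elided portion above is the body of ``run()``. A complete version consistent with these fragments might look like the following (a sketch assuming the ``interactive()`` context manager documented under :class:`~artiq.language.environment.HasEnvironment`; the prompt title is arbitrary): ::

    def run(self):
        repeat = True
        while repeat:
            print("Hello World")
            with self.interactive(title="Repeat?") as interactive:
                interactive.setattr_argument("repeat", BooleanValue(True))
            repeat = interactive.repeat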

Close and reopen the submission window, or click on the button labeled 'Recompute all arguments', in order to update the submission parameters. Submit again. It should print once, then wait; you may notice in 'Schedule' that the experiment does not exit, but hangs at status 'running'.

Now, in the same dock as 'Explorer', navigate to the tab 'Interactive Args'. You can now choose and submit a value for 'repeat'. Every time an interactive argument is requested, the experiment pauses until an input is supplied.

.. note::
    If you choose to 'Cancel' instead, an :exc:`~artiq.language.environment.CancelledArgsError` will be raised (which an experiment can catch, instead of halting).

While regular arguments are all requested simultaneously at submission, interactive arguments can be requested at any point during execution. In order to request and supply multiple interactive arguments at once, simply place them in the same ``with`` block; see also the example ``interactive.py`` in ``examples/no_hardware``.
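
For example, two values could be requested in a single prompt along these lines (a sketch; the argument names and title are arbitrary): ::

    with self.interactive(title="Next step") as interactive:
        interactive.setattr_argument("repeat", BooleanValue(True))
        interactive.setattr_argument("comment", StringValue("looks good"))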

.. _master-setting-up-git:

Setting up Git integration
--------------------------

So far, we have used the bare filesystem for the experiment repository, without any version control. Using Git to host the experiment repository helps with tracking modifications to experiments and with tracing a result back to the particular version of the experiment that produced it.

.. note::
    The workflow we will describe in this tutorial corresponds to a situation where the computer running the ARTIQ master is also used as a Git server to which multiple users may contribute code. The Git setup can be customized according to your needs; the main point to remember is that when scanning or submitting, the ARTIQ master uses the internal Git data (*not* any working directory that may be present) to fetch the latest *fully completed commit* at the repository's head. See the :ref:`Management system <mgmt-git-integration>` page for notes on alternate workflows.

We will use our current ``repository`` folder as the working directory for making local modifications to the experiments, move it away from the master's data directory, and replace it with a new ``repository`` folder, which will hold only the Git data used by the master. Stop the master with Ctrl+C and enter the following commands: ::

    $ cd ~/artiq-master
    $ mv repository ~/artiq-work
    $ mkdir repository
    $ cd repository
    $ git init --bare

Now initialize a regular (non-bare) Git repository in our working directory, from which we will push data into the bare repository: ::

    $ cd ~/artiq-work
    $ git init

Then add and commit our experiments: ::

    $ git add mgmt_tutorial.py
    $ git add timed_tutorial.py
    $ git commit -m "First version of the tutorial experiments"

and finally, connect the two repositories and push the commit upstream to the master's bare repository: ::

    $ git remote add origin ~/artiq-master/repository
    $ git push -u origin master

.. tip::
    If you are not familiar with command-line Git and would like to understand these commands in more detail, search for a tutorial on basic use of Git; there are many available online.

Start the master again with the ``-g`` flag, which tells it to treat the contents of its ``repository`` folder (not ``artiq-work``) as a bare Git repository: ::

    $ cd ~/artiq-master
    $ artiq_master -g

.. note::
    You need at least one commit in the repository before the master can be started.

Now you should be able to restart the dashboard and see your experiments there, with no errors displayed.

To make things more convenient, we will add a Git hook that tells the master to rescan the repository whenever new data is pushed to it. Create a file ``~/artiq-master/repository/hooks/post-receive`` with the following contents: ::

    #!/bin/sh
    artiq_client scan-repository --async

Then set its execution permissions: ::

    $ chmod 755 ~/artiq-master/repository/hooks/post-receive

.. note::
    Remote client machines may also push to and pull from the master's bare repository, using e.g. Git over SSH.

Let's now make a modification to one of the experiments. In the working directory ``artiq-work``, open ``mgmt_tutorial.py`` again and add an exclamation mark to the end of "Hello World". Before committing it, check that the experiment can still be executed correctly by submitting it directly from the working directory, using the command-line client: ::

    $ artiq_client submit ~/artiq-work/mgmt_tutorial.py

.. note::
    Alternatively, right-click in the Explorer dock and select the 'Open file outside repository' option for the same effect.

Verify the log in the GUI. If you are happy with the result, commit the new version and push it to the master's repository: ::

    $ git commit -a -m "More enthusiasm"
    $ git push

Notice that commands other than ``git commit`` and ``git push`` are no longer necessary. The Git hook should cause a repository rescan automatically, and submitting the experiment in the dashboard should run the new version, with enthusiasm included.

As an exercise, add another experiment to the repository, commit and push the result, and verify that it appears in the dashboard.
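
One possible sequence, assuming you have created a new experiment file ``goodbye_tutorial.py`` in ``~/artiq-work`` (the file name and commit message are arbitrary): ::

    $ cd ~/artiq-work
    $ git add goodbye_tutorial.py
    $ git commit -m "Add goodbye_tutorial"
    $ git push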

.. _getting-started-datasets:

Datasets
--------

ARTIQ uses the concept of *datasets* to manage the data exchanged with experiments, both supplied *to* experiments (generally, from other experiments) and saved *from* experiments (i.e. results or records).

Modify the experiment as follows, once again using a single non-interactive argument: ::

    def build(self):
        self.setattr_argument("count", NumberValue(precision=0, step=1))

    def run(self):
        self.set_dataset("parabola", np.full(self.count, np.nan), broadcast=True)
        for i in range(self.count):
            self.mutate_dataset("parabola", i, i*i)
            time.sleep(0.5)

.. tip::
    You need to import the ``time`` module, and the ``numpy`` module as ``np``.
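
Put together, the complete file might read as follows (a sketch; the class name and docstring are illustrative and can be kept from the earlier experiment): ::

    import time

    import numpy as np

    from artiq.experiment import *


    class ManagementTutorial(EnvExperiment):
        """Management tutorial"""
        def build(self):
            self.setattr_argument("count", NumberValue(precision=0, step=1))

        def run(self):
            self.set_dataset("parabola", np.full(self.count, np.nan), broadcast=True)
            for i in range(self.count):
                self.mutate_dataset("parabola", i, i*i)
                time.sleep(0.5)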

Commit, push and submit the experiment as before. Go to the "Datasets" dock of the GUI and observe that a new dataset has been created. Once the experiment has finished executing, navigate to ``~/artiq-master/`` in a terminal or file manager and see that a new directory called ``results`` has been created. Your dataset should be stored as an HDF5 dump file in ``results`` under ``<date>/<hour>``.

.. note::
    By default, datasets are primarily attributes of the experiments that run them, and are not shared with the master or the dashboard. The ``broadcast=True`` argument specifies that a dataset should be shared in real-time with the master, which is responsible for dispatching it to the clients. A more detailed description of dataset methods and their arguments can be found under :class:`~artiq.language.environment.HasEnvironment`.
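
Conversely, a later experiment (or a later run of the same one) can read the stored values back by name, roughly as follows (a sketch): ::

    def run(self):
        parabola = self.get_dataset("parabola")
        print(parabola)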

Open the file for your first dataset with HDFView, h5dump, or any similar third-party tool, and observe the data we just generated as well as the Git commit ID of the experiment (a hexadecimal hash such as ``947acb1f90ae1b8862efb489a9cc29f7d4e0c645``, which represents a particular state of the Git repository). A list of Git commit IDs can be found by running the ``git log`` command in ``~/artiq-work/``.
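
If you prefer to inspect the file programmatically, something along these lines should work (a sketch assuming the ``h5py`` package is installed and that the experiment's datasets are stored under the ``datasets`` group; pass the path of one of the files under ``results``): ::

    import sys

    import h5py

    with h5py.File(sys.argv[1], "r") as f:
        print(list(f["datasets"]))           # names of the datasets stored by the experiment
        print(f["datasets"]["parabola"][:])  # the values we just generated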

Applets
-------

Often, rather than the HDF5 dump, we would like to see our result datasets in readable graphical form, preferably immediately. In the ARTIQ dashboard, this is achieved by programs called "applets". Applets are independent programs that add simple GUI features and are run as separate processes (to achieve goals of modularity and resilience against poorly written applets). ARTIQ supplies several applets for basic plotting in the ``artiq.applets`` module, and users may write their own using the provided interfaces.

.. seealso::
    For developing your own applets, see the references provided on the :ref:`management system page <applet-references>` of this manual.

For our ``parabola`` dataset, we will create an XY plot using the provided ``artiq.applets.plot_xy``. Applets are configured with simple command-line options; we can find the list of available options using the ``-h`` flag. Try running: ::

    $ python3 -m artiq.applets.plot_xy -h

In our case, we only need to supply our dataset as the y-values to be plotted. Navigate to the "Applet" dock in the dashboard. Right-click in the empty list and select "New applet from template" and "XY". This will generate a version of the applet command that shows all applicable options; edit the command so that it retrieves the ``parabola`` dataset, and erase the unused options. The line should now be: ::

    ${artiq_applet}plot_xy parabola

Run the experiment again, and observe how the points are added one by one to the plot.

RTIO analyzer and the dashboard
-------------------------------

The :ref:`rtio-analyzer-example` is fully integrated into the dashboard. Navigate to the "Waveform" tab in the dashboard. After running the example experiment in that section, or any other experiment producing an analyzer trace, the waveform results will be directly displayed in this tab. It is also possible to save a trace, reopen it, or export it to VCD directly from the GUI.

Non-RTIO devices and the controller manager
-------------------------------------------

As described in :ref:`artiq-real-time-i-o-concepts`, there are two classes of equipment a laboratory typically finds itself needing to operate. So far, we have largely discussed ARTIQ in terms of only one: the kind of specialized hardware that requires the very high-resolution timing control ARTIQ provides. The other class comprises the broad range of regular, "slow" laboratory devices, which do *not* require nanosecond precision and can generally be operated perfectly well from a regular PC over a non-realtime channel such as USB.

To handle these "slow" devices, ARTIQ uses *controllers*, intermediate pieces of software which are responsible for the direct I/O to these devices and offer RPC interfaces to the network. Controllers can be started and run standalone, but are generally handled through the *controller manager*, available through the ``artiq-comtools`` package (normally installed automatically together with ARTIQ). The controller manager in turn communicates with the ARTIQ master, and through it with clients or the GUI.

To start the controller manager (the master must already be running), the only command necessary is: ::

    $ artiq_ctlmgr

Controllers may be run on a different machine from the master, or even on multiple different machines, alleviating cabling issues and OS compatibility problems. In this case, communication with the master happens over the network. If multiple machines are running controllers, they must each run their own controller manager (for which only ``artiq-comtools`` and its few dependencies are necessary, not the full ARTIQ installation). Use the ``-s`` and ``--bind`` flags of ``artiq_ctlmgr`` to set the IP addresses or hostnames to connect and bind to.

Note, however, that the controller for the particular device you are trying to connect to must first exist and be part of a complete Network Device Support Package, or NDSP. :doc:`Some NDSPs are already available <list_of_ndsps>`. If your device is not on this list, the system is designed to make it quite possible to write your own; for this, see the :doc:`developing_a_ndsp` page.
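
As a rough illustration of what such a controller looks like from the master's side, a ``device_db.py`` entry for an NDSP-driven device might take a form along these lines (every name, host and port here is hypothetical; consult the NDSP's documentation for the real values): ::

    device_db["my_slow_device"] = {
        "type": "controller",
        "host": "::1",
        "port": 3261,
        "command": "aqctl_my_slow_device -p {port} --bind {bind}"
    }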

Once a device is correctly listed in ``device_db.py``, it can be added to an experiment using ``self.setattr_device([device_name])`` and the methods its API offers called straightforwardly as ``self.[device_name].[method_name]``. As long as the requisite controllers are running and available, the experiment can then be executed with ``artiq_run`` or through the management system.
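
For instance, continuing the hypothetical entry above, an experiment could drive the device roughly as follows (the method name is made up for illustration; the real API is whatever the NDSP driver exposes): ::

    from artiq.experiment import *


    class SlowDeviceDemo(EnvExperiment):
        """Slow device demo"""
        def build(self):
            self.setattr_device("my_slow_device")

        def run(self):
            # each call here is forwarded to the controller as an RPC
            self.my_slow_device.set_output(1.0)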

The ARTIQ session
-----------------

Often, you will want to run an instance of the controller manager and the dashboard along with the ARTIQ master, whether or not you also intend to allow other clients to connect remotely. For convenience, all three can be started simultaneously with a single command: ::

    $ artiq_session

Arguments to the individual tools (including ``-s`` and ``--bind``) can still be specified using the ``-m``, ``-d`` and ``-c`` options for the master, dashboard and controller manager respectively. Use an equals sign to avoid confusion in parsing, for example: ::

    $ artiq_session -m=-g

to start the session with the master in Git mode. See also :mod:`~artiq.frontend.artiq_session`.