forked from M-Labs/artiq

Compare commits (2 commits)

Author | SHA1 | Date
---|---|---
hartytp | 2ef16331ea |
Tom Harty | 301c734fb6 |
@@ -51,7 +51,7 @@ Closes #XXX

 ### Documentation Changes

-- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`nix build .#artiq-manual-html; nix build .#artiq-manual-pdf`) to ensure no errors.
+- [ ] Check, test, and update the documentation in [doc/](../doc/). Build documentation (`cd doc/manual/; make html`) to ensure no errors.

 ### Git Logistics
@@ -11,7 +11,6 @@ __pycache__/
 .ipynb_checkpoints
 /doc/manual/_build
 /build
-/result
 /dist
 /*.egg-info
 /.coverage

@@ -24,14 +23,12 @@ __pycache__/
 /artiq/test/results
 /artiq/examples/*/results
 /artiq/examples/*/last_rid.pyon
-/artiq/examples/*/dataset_db.mdb
-/artiq/examples/*/dataset_db.mdb-lock
+/artiq/examples/*/dataset_db.pyon

 # when testing ad-hoc experiments at the root:
 /repository/
 /results
 /last_rid.pyon
-/dataset_db.mdb
-/dataset_db.mdb-lock
+/dataset_db.pyon
 /device_db*.py
 /test*
@@ -8,27 +8,28 @@ Reporting Issues/Bugs
 Thanks for `reporting issues to ARTIQ
 <https://github.com/m-labs/artiq/issues/new>`_! You can also discuss issues and
 ask questions on IRC (the #m-labs channel on OFTC), the `Mattermost chat
-<https://chat.m-labs.hk>`_, or in the `forum <https://forum.m-labs.hk>`_.
+<https://chat.m-labs.hk>`_, or on the `forum <https://forum.m-labs.hk>`_.

 The best bug reports are those which contain sufficient information. With
 accurate and comprehensive context, an issue can be resolved quickly and
 efficiently. Please consider adding the following data to your issue
 report if possible:

-* A clear and unique summary that fits into one line. Check that this
-  issue has not yet been reported; if it has, add additional information there.
-* Precise steps to reproduce (a list of actions that leads to the issue)
+* A clear and unique summary that fits into one line. Also check that
+  this issue has not yet been reported. If it has, add additional information there.
+* Precise steps to reproduce (list of actions that leads to the issue)
 * Expected behavior (what should happen)
 * Actual behavior (what happens instead)
-* Logging message, tracebacks, screenshots, where applicable
+* Logging message, trace backs, screen shots where relevant
 * Components involved (omit irrelevant parts):

-  * Operating system used
-  * ARTIQ version (run any command in the form of ``artiq_client --version``)
-  * Gateware and firmware loaded to the core device (in the output of
-    ``artiq_coremgmt [-D ....] log``)
+  * Operating System
+  * ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
+  * Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
+  * If using Conda, output of `conda list`
   * Hardware involved


 For in-depth information on bug reporting, see:

 http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
@@ -38,10 +39,10 @@ https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines
 Contributing Code
 =================

-ARTIQ welcomes contributions. Write bite-size patches that can stand alone,
-clean them up, write proper commit messages, add docstrings and unit tests;
+ARTIQ welcomes contributions. Write bite-sized patches that can stand alone,
+clean them up, write proper commit messages, add docstrings and unittests. Then
 ``git rebase`` them onto the current master or merge the current master. Verify
-that the test suite passes. Then submit a pull request. Expect your contribution
+that the testsuite passes. Then submit a pull request. Expect your contribution
 to be held up to coding standards (e.g. use ``flake8`` to check yourself).

 Checklist for Code Contributions
@@ -51,7 +52,7 @@ Checklist for Code Contributions
 - Use correct spelling and grammar. Use your code editor to help you with
   syntax, spelling, and style
 - Style: PEP-8 (``flake8``)
-- Add or update docstrings and comments
+- Add, check docstrings and comments
 - Split your contribution into logically separate changes (``git rebase
   --interactive``). Merge (squash, fixup) commits that just fix previous commits
   or amend them. Remove unintended changes. Clean up your commits.
@@ -63,37 +64,12 @@ Checklist for Code Contributions
 - Review each of your commits for the above items (``git show``)
 - Update ``RELEASE_NOTES.md`` if there are noteworthy changes, especially if
   there are changes to existing APIs
-- Check, test, and update the documentation in ``doc/``
-- Check, test, and update the unit tests
+- Check, test, and update the documentation in `doc/`
+- Check, test, and update the unittests
 - Close and/or update issues


-Contributing Documentation
-==========================
-
-ARTIQ welcomes documentation contributions. The ARTIQ manual is hosted online in HTML
-form `here <https://m-labs.hk/artiq/manual/>`__ and in PDF form
-`here <https://m-labs.hk/artiq/manual.pdf>`__. It is generated from source files
-in ``doc/manual``, written in a variant of the
-`reStructured Text <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_
-markup language processed by `Sphinx <https://www.sphinx-doc.org/en/master/>`_, with
-some of the additional reference material processed from inline documentation
-in the ARTIQ source itself.
-
-Write bite-size patches that can stand alone, clean them up, write proper commit
-messages. Check that your edits render properly and compile without errors: ::
-
-    $ nix build .#artiq-manual-pdf
-    $ nix build .#artiq-manual-html
-
-Elaborations, improvements, clarifications and corrections to any of the material
-are happily accepted, but special attention is drawn to the manual
-`FAQ <https://m-labs.hk/artiq/manual/faq.html>`_, where tips and solutions
-are especially easy to add. See also the FAQ's own
-`section on the subject <https://m-labs.hk/artiq/manual/faq.html#build-documentation>`_.
-
 Copyright and Sign-Off
-======================
+----------------------

 Authors retain copyright of their contributions to ARTIQ, but whenever possible
 should use the GNU LGPL version 3 license for them to be merged.
@@ -133,7 +109,7 @@ can certify the below:
     maintained indefinitely and may be redistributed consistent with
     this project or the open source license(s) involved.

-then add a line saying
+then you just add a line saying

     Signed-off-by: Random J Developer <random@developer.example.org>
@@ -0,0 +1 @@
+7
@@ -5,4 +5,3 @@ include versioneer.py
 include artiq/_version.py
 include artiq/coredevice/coredevice_generic.schema.json
 include artiq/compiler/kernel.ld
-include artiq/afws.pem
README.rst (22 changes)
@@ -5,27 +5,31 @@
    :target: https://m-labs.hk/artiq

 ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is a leading-edge control and data acquisition system for quantum information experiments.
-It is maintained and developed by `M-Labs <https://m-labs.hk>`_ and the initial development was for and in partnership with the `Ion Storage Group at NIST <https://www.nist.gov/pml/time-and-frequency-division/ion-storage>`_. ARTIQ is free software and offered to the entire research community as a solution equally applicable to other challenging control tasks, including outside the field of ion trapping. Many laboratories around the world have adopted ARTIQ as their control system and some have `contributed <https://m-labs.hk/experiment-control/funding/>`_ to it.
+It is maintained and developed by `M-Labs <https://m-labs.hk>`_ and the initial development was for and in partnership with the `Ion Storage Group at NIST <https://www.nist.gov/pml/time-and-frequency-division/ion-storage>`_. ARTIQ is free software and offered to the entire research community as a solution equally applicable to other challenging control tasks, including outside the field of ion trapping. Many laboratories around the world have adopted ARTIQ as their control system, with over a hundred Sinara hardware crates deployed, and some have `contributed <https://m-labs.hk/experiment-control/funding/>`_ to it.

-The system features a high-level programming language, capable of describing complex experiments, which is compiled and executed on dedicated hardware with nanosecond timing resolution and sub-microsecond latency. It includes graphical user interfaces to parametrize and schedule experiments and to visualize and explore the results.
+The system features a high-level programming language that helps describing complex experiments, which is compiled and executed on dedicated hardware with nanosecond timing resolution and sub-microsecond latency. It includes graphical user interfaces to parametrize and schedule experiments and to visualize and explore the results.

-ARTIQ uses FPGA hardware to perform its time-critical tasks. The `Sinara hardware <https://github.com/sinara-hw>`_, and in particular the Kasli FPGA carrier, are designed to work with ARTIQ. ARTIQ is designed to be portable to hardware platforms from different vendors and FPGA manufacturers. Several different configurations of a `FPGA evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and a `Zynq evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_ are also used and supported. FPGA platforms can be combined with any number of additional peripherals, either already accessible from ARTIQ or made accessible with little effort.
+ARTIQ uses FPGA hardware to perform its time-critical tasks. The `Sinara hardware <https://github.com/sinara-hw>`_, and in particular the Kasli FPGA carrier, is designed to work with ARTIQ.
+ARTIQ is designed to be portable to hardware platforms from different vendors and FPGA manufacturers.
+Several different configurations of a `FPGA evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and of a `Zynq evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_ are also used and supported. FPGA platforms can be combined with any number of additional peripherals, either already accessible from ARTIQ or made accessible with little effort.

-ARTIQ and its dependencies are available in the form of Nix packages (for Linux) and MSYS2 packages (for Windows). See `the manual <https://m-labs.hk/experiment-control/resources/>`_ for installation instructions. Packages containing pre-compiled binary images to be loaded onto the hardware platforms are supplied for each configuration. Like any open-source software ARTIQ can equally be built and installed directly from `source <https://github.com/m-labs/artiq>`_.
+ARTIQ and its dependencies are available in the form of Nix packages (for Linux) and Conda packages (for Windows and Linux). See `the manual <https://m-labs.hk/experiment-control/resources/>`_ for installation instructions.
+Packages containing pre-compiled binary images to be loaded onto the hardware platforms are supplied for each configuration.
+Like any open source software ARTIQ can equally be built and installed directly from `source <https://github.com/m-labs/artiq>`_.

-ARTIQ is supported by M-Labs and developed openly. Components, features, fixes, improvements, and extensions are often `funded <https://m-labs.hk/experiment-control/funding/>`_ by and developed for the partnering research groups.
+ARTIQ is supported by M-Labs and developed openly.
+Components, features, fixes, improvements, and extensions are often `funded <https://m-labs.hk/experiment-control/funding/>`_ by and developed for the partnering research groups.

-Core technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `Migen-AXI <https://github.com/peteut/migen-axi>`_, `Rust <https://www.rust-lang.org/>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`VexRiscv <https://github.com/SpinalHDL/VexRiscv>`_, `LLVM <https://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt6 <https://www.qt.io/>`_.
+Core technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `Migen-AXI <https://github.com/peteut/migen-axi>`_, `Rust <https://www.rust-lang.org/>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`VexRiscv <https://github.com/SpinalHDL/VexRiscv>`_, `LLVM <https://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt5 <https://www.qt.io/>`_.

-| Website: https://m-labs.hk/experiment-control/artiq
-| (US-hosted mirror: https://m-labs-intl.com/experiment-control/artiq)
+Website: https://m-labs.hk/artiq

 `Cite ARTIQ <http://dx.doi.org/10.5281/zenodo.51303>`_ as ``Bourdeauducq, Sébastien et al. (2016). ARTIQ 1.0. Zenodo. 10.5281/zenodo.51303``.

 License
 =======

-Copyright (C) 2014-2024 M-Labs Limited.
+Copyright (C) 2014-2021 M-Labs Limited.

 ARTIQ is free software: you can redistribute it and/or modify
 it under the terms of the GNU Lesser General Public License as published by
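For context, the "high-level programming language ... compiled and executed on dedicated hardware" described in the README can be illustrated with a minimal kernel. This sketch is not part of the diff; it assumes a device database providing a ``core`` device and a TTL output channel named ``led``: ::

    from artiq.experiment import *


    class BlinkLED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led")   # assumed TTL output in the device database

        @kernel
        def run(self):
            self.core.reset()
            for _ in range(10):
                self.led.pulse(100*ms)   # timed pulse placed on the RTIO timeline
                delay(100*ms)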
@@ -3,178 +3,36 @@
 Release notes
 =============

-ARTIQ-9 (Unreleased)
---------------------
-
-* GUI state files are now automatically backed up upon successful loading.
-* Zotino monitoring in the dashboard now displays the values in volts.
-* afws_client now uses the "happy eyeballs" algorithm (RFC 6555) for a faster and more
-  reliable connection to the server.
-* The Zadig driver installer was added to the MSYS2 offline installer.
-* Fastino monitoring with Moninj is now supported.
-* Qt6 support.
-* Python 3.12 support.
-
-ARTIQ-8
--------
-
-Highlights:
-
-* New hardware support:
-  - Support for Shuttler, a 16-channel 125MSPS DAC card intended for ion transport.
-    Waveform generator and user API are similar to the NIST PDQ.
-  - Implemented Phaser-servo. This requires recent gateware on Phaser.
-  - Almazny v1.2 with finer RF switch control.
-  - Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking.
-  - More user LEDs are exposed to RTIO on Kasli.
-  - Implemented Phaser-MIQRO support. This requires the proprietary Phaser MIQRO gateware
-    variant from QUARTIQ.
-  - Sampler: fixed ADC MU to Volt conversion factor for Sampler v2.2+.
-    For earlier hardware versions, specify the hardware version in the device
-    database file (e.g. ``"hw_rev": "v2.1"``) to use the correct conversion factor.
-* Support for distributed DMA, where DMA is run directly on satellites for corresponding
-  RTIO events, increasing bandwidth in scenarios with heavy satellite usage.
-* Support for subkernels, where kernels are run on satellite device CPUs to offload some
-  of the processing and RTIO operations.
-* CPU (on softcore platforms) and AXI bus (on Zynq) are now clocked synchronously with the RTIO
-  clock, to facilitate implementation of local processing on DRTIO satellites, and to slightly
-  reduce RTIO latency.
-* Support for DRTIO-over-EEM, used with Shuttler.
-* Support for WRPLL low-noise clock recovery.
-* Enabled event spreading on DRTIO satellites, using high watermark for lane switching.
-* Added channel names to RTIO error messages.
-* The RTIO analyzer is now proxied by ``aqctl_coreanalyzer_proxy`` typically running on the master
-  machine, similarly to ``aqctl_moninj_proxy``.
-* GUI:
-  - Integrated waveform analyzer, removing the need for external VCD viewers such as GtkWave.
-  - Implemented Applet Request Interfaces which allow applets to modify datasets and set the
-    current values of widgets in the dashboard's experiment windows.
-  - Implemented a new ``EntryArea`` widget which allows argument entry widgets to be used in applets.
-  - The "Close all applets" command (shortcut: Ctrl-Alt-W) now ignores docked applets,
-    making it a convenient way to clean up after exploratory work without destroying a
-    carefully arranged default workspace.
-  - Hotkeys now organize experiment windows in the order they were last interacted with:
-    + CTRL+SHIFT+T tiles experiment windows
-    + CTRL+SHIFT+C cascades experiment windows
-  - By enabling the ``quickstyle`` option, ``EnumerationValue`` entry widgets can now alternatively display
-    its choices as buttons that submit the experiment on click.
-* Datasets can now be associated with units and scale factors, and displayed accordingly in the dashboard
-  including applets, like widgets such as ``NumberValue`` already did in earlier ARTIQ versions.
-* Experiments can now request arguments interactively from the user at any time.
-* Persistent datasets are now stored in a LMDB database for improved performance.
-* Python's built-in types (such as ``float``, or ``List[...]``) can now be used in type annotations on
-  kernel functions.
-* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to
-  support legacy installations, but may be removed in a future release.
-* Experiments can now be submitted with revisions set to a branch / tag name instead of only git hashes.
-* Grabber image input now has an optional timeout.
-* On NAR3-supported devices (Kasli-SoC, ZC706), when a Rust panic occurs, a minimal environment is started
-  where the network and ``artiq_coremgmt`` can be used. This allows the user to inspect logs, change
-  configuration options, update the firmware, and reboot the device.
-* Full Python 3.11 support.
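The ARTIQ-8 entry above about built-in types in kernel type annotations can be illustrated with a short sketch. It is not part of the release notes and assumes only a standard ``EnvExperiment`` with the core device: ::

    from artiq.experiment import *


    class AnnotatedKernel(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def scale(self, x: float, gain: float) -> float:
            # plain Python built-ins used as kernel type annotations (ARTIQ-8 and later)
            return x * gain

        @kernel
        def run(self):
            self.core.reset()
            y = self.scale(2.0, 1.5)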
-
-Breaking changes:
-
-* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control
-  operations, which includes dataset operations. It can be ignored if not needed. For an example usage,
-  refer to the ``big_number.py`` applet.
-* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Derived applets
-  should change the function signature as below:
-
-  ::
-
-    # SimpleApplet
-    def data_changed(self, value, metadata, persist, mods)
-    # SimpleApplet (old version)
-    def data_changed(self, data, mods)
-    # TitleApplet
-    def data_changed(self, value, metadata, persist, mods, title)
-    # TitleApplet (old version)
-    def data_changed(self, data, mods, title)
-
-  Accesses to the data argument should be replaced as below:
-
-  ::
-
-    data[key][0] ==> persist[key]
-    data[key][1] ==> value[key]
-
-* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``.
-  Parameters after and including ``scale`` in both constructors are now keyword-only.
-  Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage.
-* Almazny v1.2 is incompatible with the legacy versions and is the default.
-  To use legacy versions, specify ``almazny_hw_rev`` in the JSON description.
-* kasli_generic.py has been merged into kasli.py, and the demonstration designs without JSON descriptions
-  have been removed. The base classes remain present in kasli.py to support third-party flows without
-  JSON descriptions.
-* Legacy PYON databases should be converted to LMDB with the script below:
-
-  ::
-
-    from sipyco import pyon
-    import lmdb
-
-    old = pyon.load_file("dataset_db.pyon")
-    new = lmdb.open("dataset_db.mdb", subdir=False, map_size=2**30)
-    with new.begin(write=True) as txn:
-        for key, value in old.items():
-            txn.put(key.encode(), pyon.encode((value, {})).encode())
-    new.close()
-
-* ``artiq.wavesynth`` has been removed.
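As a hedged companion to the conversion script quoted above, the converted LMDB file can be read back to spot-check the migration. This sketch assumes the same ``lmdb`` and ``sipyco.pyon`` modules as the script and the ``(value, metadata)`` tuple encoding it writes: ::

    from sipyco import pyon
    import lmdb

    db = lmdb.open("dataset_db.mdb", subdir=False, readonly=True)
    with db.begin() as txn:
        for key, raw in txn.cursor():
            value, metadata = pyon.decode(raw.decode())
            print(key.decode(), value, metadata)
    db.close()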
 ARTIQ-7
 -------

 Highlights:

 * New hardware support:
-  - Kasli-SoC, a new EEM carrier based on a Zynq SoC, enabling much faster kernel execution
-    (see: https://arxiv.org/abs/2111.15290).
-  - DRTIO support on Zynq-based devices (Kasli-SoC and ZC706).
-  - DRTIO support on KC705.
+  - Kasli-SoC, a new EEM carrier based on a Zynq SoC, enabling much faster kernel execution.
   - HVAMP_8CH 8 channel HV amplifier for Fastino / Zotinos
   - Almazny mezzanine board for Mirny
-  - Phaser: improved documentation, exposed the DAC coarse mixer and ``sif_sync``, exposed upconverter calibration
-    and enabling/disabling of upconverter LO & RF outputs, added helpers to align Phaser updates to the
-    RTIO timeline (``get_next_frame_mu()``).
-  - Urukul: ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912.
 * Softcore targets now use the RISC-V architecture (VexRiscv) instead of OR1K (mor1kx).
-* Gateware FPU is supported on KC705 and Kasli 2.0.
 * Faster compilation for large arrays/lists.
-* Faster exception handling.
-* Several exception handling bugs fixed.
-* Support for a simpler shared library system with faster calls into the runtime. This is only used by the NAC3
-  compiler (nac3ld) and improves RTIO output performance (test_pulse_rate) by 9-10%.
-* Moninj improvements:
-  - Urukul monitoring and frequency setting (through dashboard) is now supported.
-  - Core device moninj is now proxied via the ``aqctl_moninj_proxy`` controller.
-* The configuration entry ``rtio_clock`` supports multiple clocking settings, deprecating the usage
-  of compile-time options.
-* Added support for 100MHz RTIO clock in DRTIO.
-* Previously detected RTIO async errors are reported to the host after each kernel terminates and a
-  warning is logged. The warning is additional to the one already printed in the core device log
-  immediately upon detection of the error.
-* Extended Kasli gateware JSON description with configuration for SPI over DIO.
-* TTL outputs can be now configured to work as a clock generator from the JSON.
+* Phaser:
+  - Improved documentation
+  - Expose the DAC coarse mixer and ``sif_sync``
+  - Exposes upconverter calibration and enabling/disabling of upconverter LO & RF outputs.
+  - Add helpers to align Phaser updates to the RTIO timeline (``get_next_frame_mu()``)
+* ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912
 * On Kasli, the number of FIFO lanes in the scalable events dispatcher (SED) can now be configured in
-  the JSON.
+  the JSON hardware description file.
 * ``artiq_ddb_template`` generates edge-counter keys that start with the key of the corresponding
-  TTL device (e.g. ``ttl_0_counter`` for the edge counter on TTL device ``ttl_0``).
+  TTL device (e.g. ``"ttl_0_counter"`` for the edge counter on TTL device``"ttl_0"``)
 * ``artiq_master`` now has an ``--experiment-subdir`` option to scan only a subdirectory of the
   repository when building the list of experiments.
-* Experiments can now be submitted by-content.
-* The master can now optionally log all experiments submitted into a CSV file.
-* Removed worker DB warning for writing a dataset that is also in the archive.
-* Experiments can now call ``scheduler.check_termination()`` to test if the user
-  has requested graceful termination.
-* ARTIQ command-line programs and controllers now exit cleanly on Ctrl-C.
-* ``artiq_coremgmt reboot`` now reloads gateware as well, providing a more thorough and reliable
-  device reset (7-series FPGAs only).
-* Firmware and gateware can now be built on-demand on the M-Labs server using ``afws_client``
-  (subscribers only). Self-compilation remains possible.
-* Easier-to-use packaging via Nix Flakes.
-* Python 3.10 support (experimental).
+* The configuration entry ``rtio_clock`` supports multiple clocking settings, deprecating the usage
+  of compile-time options.
+* DRTIO: added support for 100MHz clock.
+* Previously detected RTIO async errors are reported to the host after each kernel terminates and a
+  warning is logged. The warning is additional to the one already printed in the core device log upon
+  detection of the error.
+* Removed worker DB warning for writing a dataset that is also in the archive

 Breaking changes:
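The ``scheduler.check_termination()`` entry above can be used from host-side ``run()`` code roughly as follows. This is an illustrative sketch only; ``measure_point`` is a hypothetical placeholder: ::

    from artiq.experiment import *


    class LongScan(EnvExperiment):
        def build(self):
            self.setattr_device("scheduler")

        def run(self):
            for i in range(1000):
                if self.scheduler.check_termination():
                    # the user asked for graceful termination; stop cleanly
                    break
                self.measure_point(i)

        def measure_point(self, i):
            pass   # hypothetical per-point measurement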
@@ -187,12 +45,9 @@ Breaking changes:
   generated for some configurations.
 * Phaser: fixed coarse mixer frequency configuration
 * Mirny: Added extra delays in ``ADF5356.sync()``. This avoids the need of an extra delay before
-  calling ``ADF5356.init()``.
+  calling `ADF5356.init()`.
+* DRTIO: Changed message alignment from 32-bits to 64-bits.
 * The deprecated ``set_dataset(..., save=...)`` is no longer supported.
-* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accommodate support for PCA9547,
-  and possibly other switches in future. Readback has been removed, and now only one channel per
-  switch is supported.

 ARTIQ-6
 -------
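Regarding the removed ``set_dataset(..., save=...)`` keyword noted above: in current ARTIQ the ``archive`` flag plays the corresponding role. The keyword name here is my assumption; verify it against the ``set_dataset`` API reference for your version: ::

    from artiq.experiment import *


    class DatasetDemo(EnvExperiment):
        def build(self):
            pass

        def run(self):
            # broadcast to the dashboard but do not write to the HDF5 results file
            self.set_dataset("demo.counts", [1, 2, 3], broadcast=True, archive=False)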
@@ -1,7 +1,13 @@
 import os

-def get_rev():
-    return os.getenv("VERSIONEER_REV", default="unknown")
-
 def get_version():
-    return os.getenv("VERSIONEER_OVERRIDE", default="9.0+unknown.beta")
+    override = os.getenv("VERSIONEER_OVERRIDE")
+    if override:
+        return override
+    srcroot = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir)
+    with open(os.path.join(srcroot, "MAJOR_VERSION"), "r") as f:
+        version = f.read().strip()
+    version += ".unknown"
+    if os.path.exists(os.path.join(srcroot, "BETA")):
+        version += ".beta"
+    return version
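The fork-side ``get_version()`` above resolves its result in this order: an explicit ``VERSIONEER_OVERRIDE`` environment variable, otherwise the ``MAJOR_VERSION`` file (the one-line ``7`` file added earlier in this diff) plus ``.unknown`` and an optional ``.beta`` suffix. A small usage sketch follows; the import path is my assumption, not confirmed by the diff: ::

    import os
    from artiq import _version   # assumed location of the module shown above

    os.environ["VERSIONEER_OVERRIDE"] = "7.test"
    print(_version.get_version())      # -> "7.test"

    del os.environ["VERSIONEER_OVERRIDE"]
    print(_version.get_version())      # -> "7.unknown", or "7.unknown.beta" if a BETA file exists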
@ -0,0 +1,557 @@
|
||||||
|
#!/usr/bin/env python
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
# Copyright (c) 2005-2010 ActiveState Software Inc.
|
||||||
|
# Copyright (c) 2013 Eddy Petrișor
|
||||||
|
|
||||||
|
"""Utilities for determining application-specific dirs.
|
||||||
|
|
||||||
|
See <http://github.com/ActiveState/appdirs> for details and usage.
|
||||||
|
"""
|
||||||
|
# Dev Notes:
|
||||||
|
# - MSDN on where to store app data files:
|
||||||
|
# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
|
||||||
|
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
|
||||||
|
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
|
||||||
|
|
||||||
|
__version_info__ = (1, 4, 1)
|
||||||
|
__version__ = '.'.join(map(str, __version_info__))
|
||||||
|
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
|
||||||
|
PY3 = sys.version_info[0] == 3
|
||||||
|
|
||||||
|
if PY3:
|
||||||
|
unicode = str
|
||||||
|
|
||||||
|
if sys.platform.startswith('java'):
|
||||||
|
import platform
|
||||||
|
os_name = platform.java_ver()[3][0]
|
||||||
|
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
|
||||||
|
system = 'win32'
|
||||||
|
elif os_name.startswith('Mac'): # "Mac OS X", etc.
|
||||||
|
system = 'darwin'
|
||||||
|
else: # "Linux", "SunOS", "FreeBSD", etc.
|
||||||
|
# Setting this to "linux2" is not ideal, but only Windows or Mac
|
||||||
|
# are actually checked for and the rest of the module expects
|
||||||
|
# *sys.platform* style strings.
|
||||||
|
system = 'linux2'
|
||||||
|
else:
|
||||||
|
system = sys.platform
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||||
|
r"""Return full path to the user-specific data dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"roaming" (boolean, default False) can be set True to use the Windows
|
||||||
|
roaming appdata directory. That means that for users on a Windows
|
||||||
|
network setup for roaming profiles, this user data will be
|
||||||
|
sync'd on login. See
|
||||||
|
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||||
|
for a discussion of issues.
|
||||||
|
|
||||||
|
Typical user data directories are:
|
||||||
|
Mac OS X: ~/Library/Application Support/<AppName>
|
||||||
|
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
|
||||||
|
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
|
||||||
|
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
|
||||||
|
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
|
||||||
|
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
|
||||||
|
|
||||||
|
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
|
||||||
|
That means, by default "~/.local/share/<AppName>".
|
||||||
|
"""
|
||||||
|
if system == "win32":
|
||||||
|
if appauthor is None:
|
||||||
|
appauthor = appname
|
||||||
|
const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
|
||||||
|
path = os.path.normpath(_get_win_folder(const))
|
||||||
|
if appname:
|
||||||
|
if appauthor is not False:
|
||||||
|
path = os.path.join(path, appauthor, appname)
|
||||||
|
else:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
elif system == 'darwin':
|
||||||
|
path = os.path.expanduser('~/Library/Application Support/')
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
else:
|
||||||
|
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
def site_data_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||||
|
"""Return full path to the user-shared data dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"multipath" is an optional parameter only applicable to *nix
|
||||||
|
which indicates that the entire list of data dirs should be
|
||||||
|
returned. By default, the first item from XDG_DATA_DIRS is
|
||||||
|
returned, or '/usr/local/share/<AppName>',
|
||||||
|
if XDG_DATA_DIRS is not set
|
||||||
|
|
||||||
|
Typical user data directories are:
|
||||||
|
Mac OS X: /Library/Application Support/<AppName>
|
||||||
|
Unix: /usr/local/share/<AppName> or /usr/share/<AppName>
|
||||||
|
Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
|
||||||
|
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||||
|
Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7.
|
||||||
|
|
||||||
|
For Unix, this is using the $XDG_DATA_DIRS[0] default.
|
||||||
|
|
||||||
|
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||||
|
"""
|
||||||
|
if system == "win32":
|
||||||
|
if appauthor is None:
|
||||||
|
appauthor = appname
|
||||||
|
path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
|
||||||
|
if appname:
|
||||||
|
if appauthor is not False:
|
||||||
|
path = os.path.join(path, appauthor, appname)
|
||||||
|
else:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
elif system == 'darwin':
|
||||||
|
path = os.path.expanduser('/Library/Application Support')
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
else:
|
||||||
|
# XDG default for $XDG_DATA_DIRS
|
||||||
|
# only first, if multipath is False
|
||||||
|
path = os.getenv('XDG_DATA_DIRS',
|
||||||
|
os.pathsep.join(['/usr/local/share', '/usr/share']))
|
||||||
|
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||||
|
if appname:
|
||||||
|
if version:
|
||||||
|
appname = os.path.join(appname, version)
|
||||||
|
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||||
|
|
||||||
|
if multipath:
|
||||||
|
path = os.pathsep.join(pathlist)
|
||||||
|
else:
|
||||||
|
path = pathlist[0]
|
||||||
|
return path
|
||||||
|
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||||
|
r"""Return full path to the user-specific config dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"roaming" (boolean, default False) can be set True to use the Windows
|
||||||
|
roaming appdata directory. That means that for users on a Windows
|
||||||
|
network setup for roaming profiles, this user data will be
|
||||||
|
sync'd on login. See
|
||||||
|
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||||
|
for a discussion of issues.
|
||||||
|
|
||||||
|
Typical user data directories are:
|
||||||
|
Mac OS X: same as user_data_dir
|
||||||
|
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
|
||||||
|
Win *: same as user_data_dir
|
||||||
|
|
||||||
|
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
|
||||||
|
That means, by deafult "~/.config/<AppName>".
|
||||||
|
"""
|
||||||
|
if system in ["win32", "darwin"]:
|
||||||
|
path = user_data_dir(appname, appauthor, None, roaming)
|
||||||
|
else:
|
||||||
|
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||||
|
"""Return full path to the user-shared data dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"multipath" is an optional parameter only applicable to *nix
|
||||||
|
which indicates that the entire list of config dirs should be
|
||||||
|
returned. By default, the first item from XDG_CONFIG_DIRS is
|
||||||
|
returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set
|
||||||
|
|
||||||
|
Typical user data directories are:
|
||||||
|
Mac OS X: same as site_data_dir
|
||||||
|
Unix: /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
|
||||||
|
$XDG_CONFIG_DIRS
|
||||||
|
Win *: same as site_data_dir
|
||||||
|
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||||
|
|
||||||
|
For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False
|
||||||
|
|
||||||
|
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||||
|
"""
|
||||||
|
if system in ["win32", "darwin"]:
|
||||||
|
path = site_data_dir(appname, appauthor)
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
else:
|
||||||
|
# XDG default for $XDG_CONFIG_DIRS
|
||||||
|
# only first, if multipath is False
|
||||||
|
path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
|
||||||
|
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||||
|
if appname:
|
||||||
|
if version:
|
||||||
|
appname = os.path.join(appname, version)
|
||||||
|
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||||
|
|
||||||
|
if multipath:
|
||||||
|
path = os.pathsep.join(pathlist)
|
||||||
|
else:
|
||||||
|
path = pathlist[0]
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||||
|
r"""Return full path to the user-specific cache dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"opinion" (boolean) can be False to disable the appending of
|
||||||
|
"Cache" to the base app data dir for Windows. See
|
||||||
|
discussion below.
|
||||||
|
|
||||||
|
Typical user cache directories are:
|
||||||
|
Mac OS X: ~/Library/Caches/<AppName>
|
||||||
|
Unix: ~/.cache/<AppName> (XDG default)
|
||||||
|
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
|
||||||
|
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache
|
||||||
|
|
||||||
|
On Windows the only suggestion in the MSDN docs is that local settings go in
|
||||||
|
the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming
|
||||||
|
app data dir (the default returned by `user_data_dir` above). Apps typically
|
||||||
|
put cache data somewhere *under* the given dir here. Some examples:
|
||||||
|
...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
|
||||||
|
...\Acme\SuperApp\Cache\1.0
|
||||||
|
OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
|
||||||
|
This can be disabled with the `opinion=False` option.
|
||||||
|
"""
|
||||||
|
if system == "win32":
|
||||||
|
if appauthor is None:
|
||||||
|
appauthor = appname
|
||||||
|
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
|
||||||
|
if appname:
|
||||||
|
if appauthor is not False:
|
||||||
|
path = os.path.join(path, appauthor, appname)
|
||||||
|
else:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
if opinion:
|
||||||
|
path = os.path.join(path, "Cache")
|
||||||
|
elif system == 'darwin':
|
||||||
|
path = os.path.expanduser('~/Library/Caches')
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
else:
|
||||||
|
path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
|
||||||
|
if appname:
|
||||||
|
path = os.path.join(path, appname)
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||||
|
r"""Return full path to the user-specific log dir for this application.
|
||||||
|
|
||||||
|
"appname" is the name of application.
|
||||||
|
If None, just the system directory is returned.
|
||||||
|
"appauthor" (only used on Windows) is the name of the
|
||||||
|
appauthor or distributing body for this application. Typically
|
||||||
|
it is the owning company name. This falls back to appname. You may
|
||||||
|
pass False to disable it.
|
||||||
|
"version" is an optional version path element to append to the
|
||||||
|
path. You might want to use this if you want multiple versions
|
||||||
|
of your app to be able to run independently. If used, this
|
||||||
|
would typically be "<major>.<minor>".
|
||||||
|
Only applied when appname is present.
|
||||||
|
"opinion" (boolean) can be False to disable the appending of
|
||||||
|
"Logs" to the base app data dir for Windows, and "log" to the
|
||||||
|
base cache dir for Unix. See discussion below.
|
||||||
|
|
||||||
|
Typical user cache directories are:
|
||||||
|
Mac OS X: ~/Library/Logs/<AppName>
|
||||||
|
Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if defined
|
||||||
|
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
|
||||||
|
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs
|
||||||
|
|
||||||
|
On Windows the only suggestion in the MSDN docs is that local settings
|
||||||
|
go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
|
||||||
|
examples of what some windows apps use for a logs dir.)
|
||||||
|
|
||||||
|
OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
|
||||||
|
value for Windows and appends "log" to the user cache dir for Unix.
|
||||||
|
This can be disabled with the `opinion=False` option.
|
||||||
|
"""
|
||||||
|
if system == "darwin":
|
||||||
|
path = os.path.join(
|
||||||
|
os.path.expanduser('~/Library/Logs'),
|
||||||
|
appname)
|
||||||
|
elif system == "win32":
|
||||||
|
path = user_data_dir(appname, appauthor, version)
|
||||||
|
version = False
|
||||||
|
if opinion:
|
||||||
|
path = os.path.join(path, "Logs")
|
||||||
|
else:
|
||||||
|
path = user_cache_dir(appname, appauthor, version)
|
||||||
|
version = False
|
||||||
|
if opinion:
|
||||||
|
path = os.path.join(path, "log")
|
||||||
|
if appname and version:
|
||||||
|
path = os.path.join(path, version)
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
class AppDirs(object):
|
||||||
|
"""Convenience wrapper for getting application dirs."""
|
||||||
|
def __init__(self, appname, appauthor=None, version=None, roaming=False,
|
||||||
|
multipath=False):
|
||||||
|
self.appname = appname
|
||||||
|
self.appauthor = appauthor
|
||||||
|
self.version = version
|
||||||
|
self.roaming = roaming
|
||||||
|
self.multipath = multipath
|
||||||
|
|
||||||
|
@property
|
||||||
|
def user_data_dir(self):
|
||||||
|
return user_data_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version, roaming=self.roaming)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def site_data_dir(self):
|
||||||
|
return site_data_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version, multipath=self.multipath)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def user_config_dir(self):
|
||||||
|
return user_config_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version, roaming=self.roaming)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def site_config_dir(self):
|
||||||
|
return site_config_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version, multipath=self.multipath)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def user_cache_dir(self):
|
||||||
|
return user_cache_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def user_log_dir(self):
|
||||||
|
return user_log_dir(self.appname, self.appauthor,
|
||||||
|
version=self.version)
|
||||||
|
|
||||||
|
|
||||||
|
#---- internal support stuff
|
||||||
|
|
||||||
|
def _get_win_folder_from_registry(csidl_name):
|
||||||
|
"""This is a fallback technique at best. I'm not sure if using the
|
||||||
|
registry for this guarantees us the correct answer for all CSIDL_*
|
||||||
|
names.
|
||||||
|
"""
|
||||||
|
if PY3:
|
||||||
|
import winreg as _winreg
|
||||||
|
else:
|
||||||
|
import _winreg
|
||||||
|
|
||||||
|
shell_folder_name = {
|
||||||
|
"CSIDL_APPDATA": "AppData",
|
||||||
|
"CSIDL_COMMON_APPDATA": "Common AppData",
|
||||||
|
"CSIDL_LOCAL_APPDATA": "Local AppData",
|
||||||
|
}[csidl_name]
|
||||||
|
|
||||||
|
key = _winreg.OpenKey(
|
||||||
|
_winreg.HKEY_CURRENT_USER,
|
||||||
|
r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
|
||||||
|
)
|
||||||
|
dir, type = _winreg.QueryValueEx(key, shell_folder_name)
|
||||||
|
return dir
|
||||||
|
|
||||||
|
|
||||||
|
def _get_win_folder_with_pywin32(csidl_name):
|
||||||
|
from win32com.shell import shellcon, shell
|
||||||
|
dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
|
||||||
|
# Try to make this a unicode path because SHGetFolderPath does
|
||||||
|
# not return unicode strings when there is unicode data in the
|
||||||
|
# path.
|
||||||
|
try:
|
||||||
|
dir = unicode(dir)
|
||||||
|
|
||||||
|
# Downgrade to short path name if have highbit chars. See
|
||||||
|
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||||
|
has_high_char = False
|
||||||
|
for c in dir:
|
||||||
|
if ord(c) > 255:
|
||||||
|
has_high_char = True
|
||||||
|
break
|
||||||
|
if has_high_char:
|
||||||
|
try:
|
||||||
|
import win32api
|
||||||
|
dir = win32api.GetShortPathName(dir)
|
||||||
|
except ImportError:
|
||||||
|
pass
|
||||||
|
except UnicodeError:
|
||||||
|
pass
|
||||||
|
return dir
|
||||||
|
|
||||||
|
|
||||||
|
def _get_win_folder_with_ctypes(csidl_name):
|
||||||
|
import ctypes
|
||||||
|
|
||||||
|
csidl_const = {
|
||||||
|
"CSIDL_APPDATA": 26,
|
||||||
|
"CSIDL_COMMON_APPDATA": 35,
|
||||||
|
"CSIDL_LOCAL_APPDATA": 28,
|
||||||
|
}[csidl_name]
|
||||||
|
|
||||||
|
buf = ctypes.create_unicode_buffer(1024)
|
||||||
|
ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
|
||||||
|
|
||||||
|
# Downgrade to short path name if have highbit chars. See
|
||||||
|
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||||
|
has_high_char = False
|
||||||
|
for c in buf:
|
||||||
|
if ord(c) > 255:
|
||||||
|
has_high_char = True
|
||||||
|
break
|
||||||
|
if has_high_char:
|
||||||
|
buf2 = ctypes.create_unicode_buffer(1024)
|
||||||
|
if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
|
||||||
|
buf = buf2
|
||||||
|
|
||||||
|
return buf.value
|
||||||
|
|
||||||
|
def _get_win_folder_with_jna(csidl_name):
|
||||||
|
import array
|
||||||
|
from com.sun import jna
|
||||||
|
from com.sun.jna.platform import win32
|
||||||
|
|
||||||
|
buf_size = win32.WinDef.MAX_PATH * 2
|
||||||
|
buf = array.zeros('c', buf_size)
|
||||||
|
shell = win32.Shell32.INSTANCE
|
||||||
|
shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
|
||||||
|
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||||
|
|
||||||
|
# Downgrade to short path name if have highbit chars. See
|
||||||
|
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||||
|
has_high_char = False
|
||||||
|
for c in dir:
|
||||||
|
if ord(c) > 255:
|
||||||
|
has_high_char = True
|
||||||
|
break
|
||||||
|
if has_high_char:
|
||||||
|
buf = array.zeros('c', buf_size)
|
||||||
|
kernel = win32.Kernel32.INSTANCE
|
||||||
|
if kernel.GetShortPathName(dir, buf, buf_size):
|
||||||
|
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||||
|
|
||||||
|
return dir
|
||||||
|
|
||||||
|
if system == "win32":
|
||||||
|
try:
|
||||||
|
import win32com.shell
|
||||||
|
_get_win_folder = _get_win_folder_with_pywin32
|
||||||
|
except ImportError:
|
||||||
|
try:
|
||||||
|
from ctypes import windll
|
||||||
|
_get_win_folder = _get_win_folder_with_ctypes
|
||||||
|
except ImportError:
|
||||||
|
try:
|
||||||
|
import com.sun.jna
|
||||||
|
_get_win_folder = _get_win_folder_with_jna
|
||||||
|
except ImportError:
|
||||||
|
_get_win_folder = _get_win_folder_from_registry
|
||||||
|
|
||||||
|
|
||||||
|
#---- self test code
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
appname = "MyApp"
|
||||||
|
appauthor = "MyCompany"
|
||||||
|
|
||||||
|
props = ("user_data_dir", "site_data_dir",
|
||||||
|
"user_config_dir", "site_config_dir",
|
||||||
|
"user_cache_dir", "user_log_dir")
|
||||||
|
|
||||||
|
print("-- app dirs %s --" % __version__)
|
||||||
|
|
||||||
|
print("-- app dirs (with optional 'version')")
|
||||||
|
dirs = AppDirs(appname, appauthor, version="1.0")
|
||||||
|
for prop in props:
|
||||||
|
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||||
|
|
||||||
|
print("\n-- app dirs (without optional 'version')")
|
||||||
|
dirs = AppDirs(appname, appauthor)
|
||||||
|
for prop in props:
|
||||||
|
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||||
|
|
||||||
|
print("\n-- app dirs (without optional 'appauthor')")
|
||||||
|
dirs = AppDirs(appname)
|
||||||
|
for prop in props:
|
||||||
|
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||||
|
|
||||||
|
print("\n-- app dirs (with disabled 'appauthor')")
|
||||||
|
dirs = AppDirs(appname, appauthor=False)
|
||||||
|
for prop in props:
|
||||||
|
print("%s: %s" % (prop, getattr(dirs, prop)))
|
|
@@ -1,96 +1,22 @@
 #!/usr/bin/env python3
 
-from PyQt6 import QtWidgets, QtCore, QtGui
+from PyQt5 import QtWidgets
 
 from artiq.applets.simple import SimpleApplet
-from artiq.tools import scale_from_metadata
-from artiq.gui.tools import LayoutWidget
 
 
-class QResponsiveLCDNumber(QtWidgets.QLCDNumber):
-    doubleClicked = QtCore.pyqtSignal()
-
-    def mouseDoubleClickEvent(self, event):
-        self.doubleClicked.emit()
-
-
-class QCancellableLineEdit(QtWidgets.QLineEdit):
-    editCancelled = QtCore.pyqtSignal()
-
-    def keyPressEvent(self, event):
-        if event.key() == QtCore.Qt.Key.Key_Escape:
-            self.editCancelled.emit()
-        else:
-            super().keyPressEvent(event)
-
-
-class NumberWidget(LayoutWidget):
-    def __init__(self, args, req):
-        LayoutWidget.__init__(self)
+class NumberWidget(QtWidgets.QLCDNumber):
+    def __init__(self, args):
+        QtWidgets.QLCDNumber.__init__(self)
+        self.setDigitCount(args.digit_count)
         self.dataset_name = args.dataset
-        self.req = req
-        self.metadata = dict()
-
-        self.number_area = QtWidgets.QStackedWidget()
-        self.addWidget(self.number_area, 0, 0)
-
-        self.unit_area = QtWidgets.QLabel()
-        self.unit_area.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight | QtCore.Qt.AlignmentFlag.AlignTop)
-        self.addWidget(self.unit_area, 0, 1)
-
-        self.lcd_widget = QResponsiveLCDNumber()
-        self.lcd_widget.setDigitCount(args.digit_count)
-        self.lcd_widget.doubleClicked.connect(self.start_edit)
-        self.number_area.addWidget(self.lcd_widget)
-
-        self.edit_widget = QCancellableLineEdit()
-        self.edit_widget.setValidator(QtGui.QDoubleValidator())
-        self.edit_widget.setAlignment(QtCore.Qt.AlignmentFlag.AlignRight | QtCore.Qt.AlignmentFlag.AlignVCenter)
-        self.edit_widget.editCancelled.connect(self.cancel_edit)
-        self.edit_widget.returnPressed.connect(self.confirm_edit)
-        self.number_area.addWidget(self.edit_widget)
-
-        font = QtGui.QFont()
-        font.setPointSize(60)
-        self.edit_widget.setFont(font)
-
-        unit_font = QtGui.QFont()
-        unit_font.setPointSize(20)
-        self.unit_area.setFont(unit_font)
-
-        self.number_area.setCurrentWidget(self.lcd_widget)
-
-    def start_edit(self):
-        # QLCDNumber value property contains the value of zero
-        # if the displayed value is not a number.
-        self.edit_widget.setText(str(self.lcd_widget.value()))
-        self.edit_widget.selectAll()
-        self.edit_widget.setFocus()
-        self.number_area.setCurrentWidget(self.edit_widget)
-
-    def confirm_edit(self):
-        scale = scale_from_metadata(self.metadata)
-        val = float(self.edit_widget.text())
-        val *= scale
-        self.req.set_dataset(self.dataset_name, val, **self.metadata)
-        self.number_area.setCurrentWidget(self.lcd_widget)
-
-    def cancel_edit(self):
-        self.number_area.setCurrentWidget(self.lcd_widget)
-
-    def data_changed(self, value, metadata, persist, mods):
+
+    def data_changed(self, data, mods):
         try:
-            self.metadata = metadata[self.dataset_name]
-            # This applet will degenerate other scalar types to native float on edit
-            # Use the dashboard ChangeEditDialog for consistent type casting
-            val = float(value[self.dataset_name])
-            scale = scale_from_metadata(self.metadata)
-            val /= scale
+            n = float(data[self.dataset_name][1])
         except (KeyError, ValueError, TypeError):
-            val = "---"
-
-        unit = self.metadata.get("unit", "")
-        self.unit_area.setText(unit)
-        self.lcd_widget.display(val)
+            n = "---"
+        self.display(n)
 
 
 def main():
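Note on the hunk above: the newer applet divides the stored value by the dataset's "scale" metadata before display. A minimal, self-contained sketch of that convention, with a simplified stand-in for the real scale_from_metadata helper:

    def scale_from_metadata(metadata):
        # Simplified stand-in for artiq.tools.scale_from_metadata: use an explicit
        # "scale" entry when present, otherwise do not scale.
        return metadata.get("scale", 1.0)

    metadata = {"unit": "MHz", "scale": 1e6}
    stored = 3.5e6                        # dataset value in base units (Hz)
    displayed = stored / scale_from_metadata(metadata)
    print(displayed, metadata["unit"])    # 3.5 MHz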
@@ -1,19 +1,19 @@
 #!/usr/bin/env python3
 
-import PyQt6 # make sure pyqtgraph imports Qt6
+import PyQt5 # make sure pyqtgraph imports Qt5
 import pyqtgraph
 
 from artiq.applets.simple import SimpleApplet
 
 
 class Image(pyqtgraph.ImageView):
-    def __init__(self, args, req):
+    def __init__(self, args):
         pyqtgraph.ImageView.__init__(self)
         self.args = args
 
-    def data_changed(self, value, metadata, persist, mods):
+    def data_changed(self, data, mods):
         try:
-            img = value[self.args.img]
+            img = data[self.args.img][1]
         except KeyError:
             return
         self.setImage(img)
@@ -1,27 +1,27 @@
 #!/usr/bin/env python3
 
-import PyQt6 # make sure pyqtgraph imports Qt6
-from PyQt6.QtCore import QTimer
+import PyQt5 # make sure pyqtgraph imports Qt5
+from PyQt5.QtCore import QTimer
 import pyqtgraph
 
 from artiq.applets.simple import TitleApplet
 
 
 class HistogramPlot(pyqtgraph.PlotWidget):
-    def __init__(self, args, req):
+    def __init__(self, args):
         pyqtgraph.PlotWidget.__init__(self)
         self.args = args
         self.timer = QTimer()
         self.timer.setSingleShot(True)
         self.timer.timeout.connect(self.length_warning)
 
-    def data_changed(self, value, metadata, persist, mods, title):
+    def data_changed(self, data, mods, title):
         try:
-            y = value[self.args.y]
+            y = data[self.args.y][1]
             if self.args.x is None:
                 x = None
             else:
-                x = value[self.args.x]
+                x = data[self.args.x][1]
         except KeyError:
             return
         if x is None:
@@ -1,15 +1,15 @@
 #!/usr/bin/env python3
 
 import numpy as np
-import PyQt6 # make sure pyqtgraph imports Qt6
-from PyQt6.QtCore import QTimer
+import PyQt5 # make sure pyqtgraph imports Qt5
+from PyQt5.QtCore import QTimer
 import pyqtgraph
 
 from artiq.applets.simple import TitleApplet
 
 
 class XYPlot(pyqtgraph.PlotWidget):
-    def __init__(self, args, req):
+    def __init__(self, args):
         pyqtgraph.PlotWidget.__init__(self)
         self.args = args
         self.timer = QTimer()
@@ -19,16 +19,16 @@ class XYPlot(pyqtgraph.PlotWidget):
             'Error bars': False,
             'Fit values': False}
 
-    def data_changed(self, value, metadata, persist, mods, title):
+    def data_changed(self, data, mods, title):
         try:
-            y = value[self.args.y]
+            y = data[self.args.y][1]
         except KeyError:
             return
-        x = value.get(self.args.x)
+        x = data.get(self.args.x, (False, None))[1]
         if x is None:
             x = np.arange(len(y))
-        error = value.get(self.args.error)
-        fit = value.get(self.args.fit)
+        error = data.get(self.args.error, (False, None))[1]
+        fit = data.get(self.args.fit, (False, None))[1]
 
         if not len(y) or len(y) != len(x):
             self.mismatch['X values'] = True
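Note on the hunk above: the two sides read the dataset mapping in different shapes. A minimal sketch using plain dicts only (the key names are made up):

    # Old layout: key -> (persist, value); a sentinel tuple keeps missing keys at None.
    data = {"y": (False, [1, 2, 3])}
    y_old = data.get("y", (False, None))[1]

    # New layout: key -> value; .get() already returns None for missing keys.
    value = {"y": [1, 2, 3]}
    y_new = value.get("y")

    assert y_old == y_new == [1, 2, 3]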
@@ -1,8 +1,8 @@
 #!/usr/bin/env python3
 
 import numpy as np
-from PyQt6 import QtWidgets
-from PyQt6.QtCore import QTimer
+from PyQt5 import QtWidgets
+from PyQt5.QtCore import QTimer
 import pyqtgraph
 
 from artiq.applets.simple import SimpleApplet
@@ -22,7 +22,7 @@ def _compute_ys(histogram_bins, histograms_counts):
 # pyqtgraph.GraphicsWindow fails to behave like a regular Qt widget
 # and breaks embedding. Do not use as top widget.
 class XYHistPlot(QtWidgets.QSplitter):
-    def __init__(self, args, req):
+    def __init__(self, args):
         QtWidgets.QSplitter.__init__(self)
         self.resize(1000, 600)
         self.setWindowTitle("XY/Histogram")
@@ -124,11 +124,11 @@ class XYHistPlot(QtWidgets.QSplitter):
             return False
         return True
 
-    def data_changed(self, value, metadata, persist, mods):
+    def data_changed(self, data, mods):
         try:
-            xs = value[self.args.xs]
-            histogram_bins = value[self.args.histogram_bins]
-            histograms_counts = value[self.args.histograms_counts]
+            xs = data[self.args.xs][1]
+            histogram_bins = data[self.args.histogram_bins][1]
+            histograms_counts = data[self.args.histograms_counts][1]
         except KeyError:
             return
         if len(xs) != histograms_counts.shape[0]:
@@ -1,34 +0,0 @@
-#!/usr/bin/env python3
-
-from PyQt6 import QtWidgets
-
-from artiq.applets.simple import SimpleApplet
-
-
-class ProgressWidget(QtWidgets.QProgressBar):
-    def __init__(self, args, req):
-        QtWidgets.QProgressBar.__init__(self)
-        self.setMinimum(args.min)
-        self.setMaximum(args.max)
-        self.dataset_value = args.value
-
-    def data_changed(self, value, metadata, persist, mods):
-        try:
-            val = round(value[self.dataset_value])
-        except (KeyError, ValueError, TypeError):
-            val = 0
-        self.setValue(val)
-
-
-def main():
-    applet = SimpleApplet(ProgressWidget)
-    applet.add_dataset("value", "counter")
-    applet.argparser.add_argument("--min", type=int, default=0,
-                                  help="minimum (left) value of the bar")
-    applet.argparser.add_argument("--max", type=int, default=100,
-                                  help="maximum (right) value of the bar")
-    applet.run()
-
-
-if __name__ == "__main__":
-    main()
@@ -7,113 +7,13 @@ import string
 from qasync import QEventLoop, QtWidgets, QtCore
 
 from sipyco.sync_struct import Subscriber, process_mod
-from sipyco.pc_rpc import AsyncioClient as RPCClient
 from sipyco import pyon
 from sipyco.pipe_ipc import AsyncioChildComm
 
-from artiq.language.scan import ScanObject
-
 
 logger = logging.getLogger(__name__)
 
 
-class _AppletRequestInterface:
-    def __init__(self):
-        raise NotImplementedError
-
-    def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
-        """
-        Set a dataset.
-        See documentation of :meth:`~artiq.language.environment.HasEnvironment.set_dataset`.
-        """
-        raise NotImplementedError
-
-    def mutate_dataset(self, key, index, value):
-        """
-        Mutate a dataset.
-        See documentation of :meth:`~artiq.language.environment.HasEnvironment.mutate_dataset`.
-        """
-        raise NotImplementedError
-
-    def append_to_dataset(self, key, value):
-        """
-        Append to a dataset.
-        See documentation of :meth:`~artiq.language.environment.HasEnvironment.append_to_dataset`.
-        """
-        raise NotImplementedError
-
-    def set_argument_value(self, expurl, key, value):
-        """
-        Temporarily set the value of an argument in a experiment in the dashboard.
-        The value resets to default value when recomputing the argument.
-
-        :param expurl: Experiment URL identifying the experiment in the dashboard. Example: 'repo:ArgumentsDemo'.
-        :param key: Name of the argument in the experiment.
-        :param value: Object representing the new temporary value of the argument. For :class:`~artiq.language.scan.Scannable` arguments,
-            this parameter should be a :class:`~artiq.language.scan.ScanObject`. The type of the :class:`~artiq.language.scan.ScanObject`
-            will be set as the selected type when this function is called.
-        """
-        raise NotImplementedError
-
-
-class AppletRequestIPC(_AppletRequestInterface):
-    def __init__(self, ipc):
-        self.ipc = ipc
-
-    def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
-        metadata = {}
-        if unit is not None:
-            metadata["unit"] = unit
-        if scale is not None:
-            metadata["scale"] = scale
-        if precision is not None:
-            metadata["precision"] = precision
-        self.ipc.set_dataset(key, value, metadata, persist)
-
-    def mutate_dataset(self, key, index, value):
-        mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
-        self.ipc.update_dataset(mod)
-
-    def append_to_dataset(self, key, value):
-        mod = {"action": "append", "path": [key, 1], "x": value}
-        self.ipc.update_dataset(mod)
-
-    def set_argument_value(self, expurl, key, value):
-        if isinstance(value, ScanObject):
-            value = value.describe()
-        self.ipc.set_argument_value(expurl, key, value)
-
-
-class AppletRequestRPC(_AppletRequestInterface):
-    def __init__(self, loop, dataset_ctl):
-        self.loop = loop
-        self.dataset_ctl = dataset_ctl
-        self.background_tasks = set()
-
-    def _background(self, coro, *args, **kwargs):
-        task = self.loop.create_task(coro(*args, **kwargs))
-        self.background_tasks.add(task)
-        task.add_done_callback(self.background_tasks.discard)
-
-    def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
-        metadata = {}
-        if unit is not None:
-            metadata["unit"] = unit
-        if scale is not None:
-            metadata["scale"] = scale
-        if precision is not None:
-            metadata["precision"] = precision
-        self._background(self.dataset_ctl.set, key, value, metadata=metadata, persist=persist)
-
-    def mutate_dataset(self, key, index, value):
-        mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
-        self._background(self.dataset_ctl.update, mod)
-
-    def append_to_dataset(self, key, value):
-        mod = {"action": "append", "path": [key, 1], "x": value}
-        self._background(self.dataset_ctl.update, mod)
-
-
 class AppletIPCClient(AsyncioChildComm):
     def set_close_cb(self, close_cb):
         self.close_cb = close_cb
@@ -137,8 +37,9 @@ class AppletIPCClient(AsyncioChildComm):
             logger.error("unexpected action reply to embed request: %s",
                          reply["action"])
             self.close_cb()
-        else:
-            return reply["size_w"], reply["size_h"]
+
+    def fix_initial_size(self):
+        self.write_pyon({"action": "fix_initial_size"})
 
     async def listen(self):
         data = None
@@ -163,30 +64,12 @@ class AppletIPCClient(AsyncioChildComm):
                          exc_info=True)
             self.close_cb()
 
-    def subscribe(self, datasets, init_cb, mod_cb, dataset_prefixes=[], *, loop):
+    def subscribe(self, datasets, init_cb, mod_cb):
         self.write_pyon({"action": "subscribe",
-                         "datasets": datasets,
-                         "dataset_prefixes": dataset_prefixes})
+                         "datasets": datasets})
         self.init_cb = init_cb
         self.mod_cb = mod_cb
-        self.listen_task = loop.create_task(self.listen())
-
-    def set_dataset(self, key, value, metadata, persist=None):
-        self.write_pyon({"action": "set_dataset",
-                         "key": key,
-                         "value": value,
-                         "metadata": metadata,
-                         "persist": persist})
-
-    def update_dataset(self, mod):
-        self.write_pyon({"action": "update_dataset",
-                         "mod": mod})
-
-    def set_argument_value(self, expurl, key, value):
-        self.write_pyon({"action": "set_argument_value",
-                         "expurl": expurl,
-                         "key": key,
-                         "value": value})
+        asyncio.ensure_future(self.listen())
 
 
 class SimpleApplet:
@@ -208,11 +91,8 @@ class SimpleApplet:
                            "for dataset notifications "
                            "(ignored in embedded mode)")
         group.add_argument(
-            "--port-notify", default=3250, type=int,
-            help="TCP port to connect to for notifications (ignored in embedded mode)")
-        group.add_argument(
-            "--port-control", default=3251, type=int,
-            help="TCP port to connect to for control (ignored in embedded mode)")
+            "--port", default=3250, type=int,
+            help="TCP port to connect to")
 
         self._arggroup_datasets = self.argparser.add_argument_group("datasets")
 
@@ -233,9 +113,6 @@ class SimpleApplet:
         self.embed = os.getenv("ARTIQ_APPLET_EMBED")
         self.datasets = {getattr(self.args, arg.replace("-", "_"))
                          for arg in self.dataset_args}
-        # Optional prefixes (dataset sub-trees) to match subscriptions against;
-        # currently only used by out-of-tree subclasses (ndscan).
-        self.dataset_prefixes = []
 
     def qasync_init(self):
         app = QtWidgets.QApplication([])
@@ -251,28 +128,15 @@ class SimpleApplet:
         if self.embed is not None:
             self.ipc.close()
 
-    def req_init(self):
-        if self.embed is None:
-            dataset_ctl = RPCClient()
-            self.loop.run_until_complete(dataset_ctl.connect_rpc(
-                self.args.server, self.args.port_control, "dataset_db"))
-            self.req = AppletRequestRPC(self.loop, dataset_ctl)
-        else:
-            self.req = AppletRequestIPC(self.ipc)
-
-    def req_close(self):
-        if self.embed is None:
-            self.req.dataset_ctl.close_rpc()
-
     def create_main_widget(self):
-        self.main_widget = self.main_widget_class(self.args, self.req)
+        self.main_widget = self.main_widget_class(self.args)
         if self.embed is not None:
             self.ipc.set_close_cb(self.main_widget.close)
             if os.name == "nt":
                 # HACK: if the window has a frame, there will be garbage
                 # (usually white) displayed at its right and bottom borders
                 # after it is embedded.
-                self.main_widget.setWindowFlags(QtCore.Qt.WindowType.FramelessWindowHint)
+                self.main_widget.setWindowFlags(QtCore.Qt.FramelessWindowHint)
                 self.main_widget.show()
                 win_id = int(self.main_widget.winId())
                 self.loop.run_until_complete(self.ipc.embed(win_id))
@@ -285,13 +149,12 @@
             # 2. applet creates native window without showing it, and
             #    gets its ID
             # 3. applet sends the ID to host, host embeds the widget
-            #    and returns embedded size
-            # 4. applet is resized to that given size
-            # 5. applet shows the widget
+            # 4. applet shows the widget
+            # 5. parent resizes the widget
             win_id = int(self.main_widget.winId())
-            size_w, size_h = self.loop.run_until_complete(self.ipc.embed(win_id))
-            self.main_widget.resize(size_w, size_h)
+            self.loop.run_until_complete(self.ipc.embed(win_id))
             self.main_widget.show()
+            self.ipc.fix_initial_size()
         else:
             self.main_widget.show()
 
@@ -299,14 +162,6 @@
         self.data = data
         return data
 
-    def is_dataset_subscribed(self, key):
-        if key in self.datasets:
-            return True
-        for prefix in self.dataset_prefixes:
-            if key.startswith(prefix):
-                return True
-        return False
-
     def filter_mod(self, mod):
         if self.embed is not None:
             # the parent already filters for us
@@ -315,19 +170,14 @@
         if mod["action"] == "init":
             return True
         if mod["path"]:
-            return self.is_dataset_subscribed(mod["path"][0])
+            return mod["path"][0] in self.datasets
         elif mod["action"] in {"setitem", "delitem"}:
-            return self.is_dataset_subscribed(mod["key"])
+            return mod["key"] in self.datasets
        else:
             return False
 
     def emit_data_changed(self, data, mod_buffer):
-        persist = dict()
-        value = dict()
-        metadata = dict()
-        for k, d in data.items():
-            persist[k], value[k], metadata[k] = d
-        self.main_widget.data_changed(value, metadata, persist, mod_buffer)
+        self.main_widget.data_changed(data, mod_buffer)
 
     def flush_mod_buffer(self):
         self.emit_data_changed(self.data, self.mod_buffer)
@@ -342,8 +192,8 @@
                 self.mod_buffer.append(mod)
             else:
                 self.mod_buffer = [mod]
-                self.loop.call_later(self.args.update_delay,
-                                     self.flush_mod_buffer)
+                asyncio.get_event_loop().call_later(self.args.update_delay,
+                                                    self.flush_mod_buffer)
         else:
             self.emit_data_changed(self.data, [mod])
 
@@ -352,11 +202,9 @@
             self.subscriber = Subscriber("datasets",
                                          self.sub_init, self.sub_mod)
             self.loop.run_until_complete(self.subscriber.connect(
-                self.args.server, self.args.port_notify))
+                self.args.server, self.args.port))
         else:
-            self.ipc.subscribe(self.datasets, self.sub_init, self.sub_mod,
-                               dataset_prefixes=self.dataset_prefixes,
-                               loop=self.loop)
+            self.ipc.subscribe(self.datasets, self.sub_init, self.sub_mod)
 
     def unsubscribe(self):
         if self.embed is None:
@@ -368,16 +216,12 @@
         try:
             self.ipc_init()
             try:
-                self.req_init()
+                self.create_main_widget()
+                self.subscribe()
                 try:
-                    self.create_main_widget()
-                    self.subscribe()
-                    try:
-                        self.loop.run_forever()
-                    finally:
-                        self.unsubscribe()
+                    self.loop.run_forever()
                 finally:
-                    self.req_close()
+                    self.unsubscribe()
             finally:
                 self.ipc_close()
         finally:
@@ -416,9 +260,4 @@ class TitleApplet(SimpleApplet):
             title = self.args.title
         else:
             title = None
-        persist = dict()
-        value = dict()
-        metadata = dict()
-        for k, d in data.items():
-            persist[k], value[k], metadata[k] = d
-        self.main_widget.data_changed(value, metadata, persist, mod_buffer, title)
+        self.main_widget.data_changed(data, mod_buffer, title)
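Note on the emit_data_changed hunks above: in the newer code each dataset entry is a (persist, value, metadata) tuple that gets split into three per-key dicts before reaching the widget. A minimal, Qt-free sketch (the dataset names are made up):

    data = {
        "counter": (True, 41, {"unit": ""}),
        "freq": (False, 3.5e6, {"unit": "Hz", "scale": 1e6}),
    }

    persist, value, metadata = dict(), dict(), dict()
    for k, d in data.items():
        persist[k], value[k], metadata[k] = d

    print(value["freq"], metadata["freq"]["unit"])   # 3500000.0 Hz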
@@ -1,12 +1,12 @@
 import logging
 import asyncio
 
-from PyQt6 import QtCore, QtGui, QtWidgets
+from PyQt5 import QtCore, QtWidgets
 
 from sipyco.pc_rpc import AsyncioClient as RPCClient
 
 from artiq.tools import short_format
-from artiq.gui.tools import LayoutWidget
+from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
 from artiq.gui.models import DictSyncTreeSepModel
 
 # reduced read-only version of artiq.dashboard.datasets
@@ -20,50 +20,15 @@ class Model(DictSyncTreeSepModel):
         DictSyncTreeSepModel.__init__(self, ".", ["Dataset", "Value"], init)
 
     def convert(self, k, v, column):
-        return short_format(v[1], v[2])
+        return short_format(v[1])
 
 
-class DatasetCtl:
-    def __init__(self, master_host, master_port):
-        self.master_host = master_host
-        self.master_port = master_port
-
-    async def _execute_rpc(self, op_name, key_or_mod, value=None, persist=None, metadata=None):
-        logger.info("Starting %s operation on %s", op_name, key_or_mod)
-        try:
-            remote = RPCClient()
-            await remote.connect_rpc(self.master_host, self.master_port,
-                                     "dataset_db")
-            try:
-                if op_name == "set":
-                    await remote.set(key_or_mod, value, persist, metadata)
-                elif op_name == "update":
-                    await remote.update(key_or_mod)
-                else:
-                    logger.error("Invalid operation: %s", op_name)
-                    return
-            finally:
-                remote.close_rpc()
-        except:
-            logger.error("Failed %s operation on %s", op_name,
-                         key_or_mod, exc_info=True)
-        else:
-            logger.info("Finished %s operation on %s", op_name,
-                        key_or_mod)
-
-    async def set(self, key, value, persist=None, metadata=None):
-        await self._execute_rpc("set", key, value, persist, metadata)
-
-    async def update(self, mod):
-        await self._execute_rpc("update", mod)
-
-
 class DatasetsDock(QtWidgets.QDockWidget):
-    def __init__(self, dataset_sub, dataset_ctl):
+    def __init__(self, datasets_sub, master_host, master_port):
         QtWidgets.QDockWidget.__init__(self, "Datasets")
         self.setObjectName("Datasets")
-        self.setFeatures(self.DockWidgetFeature.DockWidgetMovable |
-                         self.DockWidgetFeature.DockWidgetFloatable)
+        self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
+                         QtWidgets.QDockWidget.DockWidgetFloatable)
 
         grid = LayoutWidget()
         self.setWidget(grid)
@@ -74,9 +39,9 @@ class DatasetsDock(QtWidgets.QDockWidget):
         grid.addWidget(self.search, 0, 0)
 
         self.table = QtWidgets.QTreeView()
-        self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
+        self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
         self.table.setSelectionMode(
-            QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
+            QtWidgets.QAbstractItemView.SingleSelection)
         grid.addWidget(self.table, 1, 0)
 
         metadata_grid = LayoutWidget()
@@ -85,21 +50,22 @@
                           "rid start_time".split()):
             metadata_grid.addWidget(QtWidgets.QLabel(label), i, 0)
             v = QtWidgets.QLabel()
-            v.setTextInteractionFlags(QtCore.Qt.TextInteractionFlag.TextSelectableByMouse)
+            v.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse)
             metadata_grid.addWidget(v, i, 1)
             self.metadata[label] = v
         grid.addWidget(metadata_grid, 2, 0)
 
-        self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
-        upload_action = QtGui.QAction("Upload dataset to master",
-                                      self.table)
+        self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
+        upload_action = QtWidgets.QAction("Upload dataset to master",
+                                          self.table)
         upload_action.triggered.connect(self.upload_clicked)
         self.table.addAction(upload_action)
 
         self.set_model(Model(dict()))
-        dataset_sub.add_setmodel_callback(self.set_model)
+        datasets_sub.add_setmodel_callback(self.set_model)
 
-        self.dataset_ctl = dataset_ctl
+        self.master_host = master_host
+        self.master_port = master_port
 
     def _search_datasets(self):
         if hasattr(self, "table_model_filter"):
@@ -112,19 +78,34 @@ class DatasetsDock(QtWidgets.QDockWidget):
 
     def set_model(self, model):
         self.table_model = model
-        self.table_model_filter = QtCore.QSortFilterProxyModel()
-        self.table_model_filter.setRecursiveFilteringEnabled(True)
+        self.table_model_filter = QRecursiveFilterProxyModel()
         self.table_model_filter.setSourceModel(self.table_model)
         self.table.setModel(self.table_model_filter)
 
+    async def _upload_dataset(self, name, value,):
+        logger.info("Uploading dataset '%s' to master...", name)
+        try:
+            remote = RPCClient()
+            await remote.connect_rpc(self.master_host, self.master_port,
+                                     "master_dataset_db")
+            try:
+                await remote.set(name, value)
+            finally:
+                remote.close_rpc()
+        except:
+            logger.error("Failed uploading dataset '%s'",
+                         name, exc_info=True)
+        else:
+            logger.info("Finished uploading dataset '%s'", name)
+
     def upload_clicked(self):
         idx = self.table.selectedIndexes()
         if idx:
             idx = self.table_model_filter.mapToSource(idx[0])
             key = self.table_model.index_to_key(idx)
             if key is not None:
-                persist, value, metadata = self.table_model.backing_store[key]
-                asyncio.ensure_future(self.dataset_ctl.set(key, value, metadata=metadata))
+                persist, value = self.table_model.backing_store[key]
+                asyncio.ensure_future(self._upload_dataset(key, value))
 
     def save_state(self):
         return bytes(self.table.header().saveState())
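Note on the hunk above: both versions follow the same connect / call / close-with-logging pattern around the master RPC. A minimal, runnable sketch of that pattern with a dummy stand-in for sipyco's AsyncioClient (the class, host and port values below are made up):

    import asyncio
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("upload_sketch")

    class DummyRPCClient:
        # Stand-in for sipyco.pc_rpc.AsyncioClient, only so the sketch runs offline.
        async def connect_rpc(self, host, port, target):
            logger.info("connected to %s:%s (%s)", host, port, target)
        async def set(self, key, value):
            logger.info("set %s = %r", key, value)
        def close_rpc(self):
            logger.info("closed")

    async def upload_dataset(name, value, host="::1", port=3251):
        try:
            remote = DummyRPCClient()
            await remote.connect_rpc(host, port, "dataset_db")
            try:
                await remote.set(name, value)
            finally:
                remote.close_rpc()
        except Exception:
            logger.error("Failed uploading dataset '%s'", name, exc_info=True)
        else:
            logger.info("Finished uploading dataset '%s'", name)

    asyncio.run(upload_dataset("counter", 42))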
@@ -4,42 +4,111 @@ import os
 from functools import partial
 from collections import OrderedDict
 
-from PyQt6 import QtCore, QtGui, QtWidgets
+from PyQt5 import QtCore, QtGui, QtWidgets
 import h5py
 
 from sipyco import pyon
 
 from artiq import __artiq_dir__ as artiq_dir
-from artiq.gui.tools import (LayoutWidget, log_level_to_name, get_open_file_name)
-from artiq.gui.entries import procdesc_to_entry, EntryTreeWidget
+from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name
+from artiq.gui.entries import procdesc_to_entry
 from artiq.master.worker import Worker, log_worker_exception
 
 logger = logging.getLogger(__name__)
 
 
-class _ArgumentEditor(EntryTreeWidget):
+class _WheelFilter(QtCore.QObject):
+    def eventFilter(self, obj, event):
+        if (event.type() == QtCore.QEvent.Wheel and
+                event.modifiers() != QtCore.Qt.NoModifier):
+            event.ignore()
+            return True
+        return False
+
+
+class _ArgumentEditor(QtWidgets.QTreeWidget):
     def __init__(self, dock):
-        EntryTreeWidget.__init__(self)
+        QtWidgets.QTreeWidget.__init__(self)
+        self.setColumnCount(3)
+        self.header().setStretchLastSection(False)
+        try:
+            set_resize_mode = self.header().setSectionResizeMode
+        except AttributeError:
+            set_resize_mode = self.header().setResizeMode
+        set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents)
+        set_resize_mode(1, QtWidgets.QHeaderView.Stretch)
+        set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents)
+        self.header().setVisible(False)
+        self.setSelectionMode(self.NoSelection)
+        self.setHorizontalScrollMode(self.ScrollPerPixel)
+        self.setVerticalScrollMode(self.ScrollPerPixel)
+
+        self.setStyleSheet("QTreeWidget {background: " +
+                           self.palette().midlight().color().name() + " ;}")
+
+        self.viewport().installEventFilter(_WheelFilter(self.viewport()))
+
+        self._groups = dict()
+        self._arg_to_widgets = dict()
         self._dock = dock
 
         if not self._dock.arguments:
-            self.insertTopLevelItem(0, QtWidgets.QTreeWidgetItem(["No arguments"]))
+            self.addTopLevelItem(QtWidgets.QTreeWidgetItem(["No arguments"]))
+        gradient = QtGui.QLinearGradient(
+            0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing()*2.5)
+        gradient.setColorAt(0, self.palette().base().color())
+        gradient.setColorAt(1, self.palette().midlight().color())
 
         for name, argument in self._dock.arguments.items():
-            self.set_argument(name, argument)
+            widgets = dict()
+            self._arg_to_widgets[name] = widgets
 
-        self.quickStyleClicked.connect(self._dock._run_clicked)
+            entry = procdesc_to_entry(argument["desc"])(argument)
+            widget_item = QtWidgets.QTreeWidgetItem([name])
+            if argument["tooltip"]:
+                widget_item.setToolTip(0, argument["tooltip"])
+            widgets["entry"] = entry
+            widgets["widget_item"] = widget_item
+
+            for col in range(3):
+                widget_item.setBackground(col, gradient)
+            font = widget_item.font(0)
+            font.setBold(True)
+            widget_item.setFont(0, font)
+
+            if argument["group"] is None:
+                self.addTopLevelItem(widget_item)
+            else:
+                self._get_group(argument["group"]).addChild(widget_item)
+            fix_layout = LayoutWidget()
+            widgets["fix_layout"] = fix_layout
+            fix_layout.addWidget(entry)
+            self.setItemWidget(widget_item, 1, fix_layout)
+
+            recompute_argument = QtWidgets.QToolButton()
+            recompute_argument.setToolTip("Re-run the experiment's build "
+                                          "method and take the default value")
+            recompute_argument.setIcon(
+                QtWidgets.QApplication.style().standardIcon(
+                    QtWidgets.QStyle.SP_BrowserReload))
+            recompute_argument.clicked.connect(
+                partial(self._recompute_argument_clicked, name))
+            fix_layout = LayoutWidget()
+            fix_layout.addWidget(recompute_argument)
+            self.setItemWidget(widget_item, 2, fix_layout)
 
+        widget_item = QtWidgets.QTreeWidgetItem()
+        self.addTopLevelItem(widget_item)
         recompute_arguments = QtWidgets.QPushButton("Recompute all arguments")
         recompute_arguments.setIcon(
             QtWidgets.QApplication.style().standardIcon(
-                QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
+                QtWidgets.QStyle.SP_BrowserReload))
         recompute_arguments.clicked.connect(self._recompute_arguments_clicked)
 
         load = QtWidgets.QPushButton("Set arguments from HDF5")
         load.setToolTip("Set arguments from currently selected HDF5 file")
         load.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogApplyButton))
+            QtWidgets.QStyle.SP_DialogApplyButton))
         load.clicked.connect(self._load_clicked)
 
         buttons = LayoutWidget()
@@ -47,7 +116,21 @@ class _ArgumentEditor(EntryTreeWidget):
         buttons.addWidget(load, 1, 2)
         for i, s in enumerate((1, 0, 0, 1)):
             buttons.layout.setColumnStretch(i, s)
-        self.setItemWidget(self.bottom_item, 1, buttons)
+        self.setItemWidget(widget_item, 1, buttons)
+
+    def _get_group(self, name):
+        if name in self._groups:
+            return self._groups[name]
+        group = QtWidgets.QTreeWidgetItem([name])
+        for col in range(3):
+            group.setBackground(col, self.palette().mid())
+            group.setForeground(col, self.palette().brightText())
+            font = group.font(col)
+            font.setBold(True)
+            group.setFont(col, font)
+        self.addTopLevelItem(group)
+        self._groups[name] = group
+        return group
 
     def _load_clicked(self):
         asyncio.ensure_future(self._dock.load_hdf5_task())
@@ -55,8 +138,8 @@ class _ArgumentEditor(EntryTreeWidget):
     def _recompute_arguments_clicked(self):
         asyncio.ensure_future(self._dock._recompute_arguments())
 
-    def reset_entry(self, key):
-        asyncio.ensure_future(self._recompute_argument(key))
+    def _recompute_argument_clicked(self, name):
+        asyncio.ensure_future(self._recompute_argument(name))
 
     async def _recompute_argument(self, name):
         try:
@@ -71,7 +154,29 @@ class _ArgumentEditor(EntryTreeWidget):
         state = procdesc_to_entry(procdesc).default_state(procdesc)
         argument["desc"] = procdesc
         argument["state"] = state
-        self.update_argument(name, argument)
+
+        widgets = self._arg_to_widgets[name]
+
+        widgets["entry"].deleteLater()
+        widgets["entry"] = procdesc_to_entry(procdesc)(argument)
+        widgets["fix_layout"] = LayoutWidget()
+        widgets["fix_layout"].addWidget(widgets["entry"])
+        self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
+        self.updateGeometries()
+
+    def save_state(self):
+        expanded = []
+        for k, v in self._groups.items():
+            if v.isExpanded():
+                expanded.append(k)
+        return {"expanded": expanded}
+
+    def restore_state(self, state):
+        for e in state["expanded"]:
+            try:
+                self._groups[e].setExpanded(True)
+            except KeyError:
+                pass
 
 
 log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
@@ -86,7 +191,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
         self.resize(100*qfm.averageCharWidth(), 30*qfm.lineSpacing())
         self.setWindowTitle(expurl)
         self.setWindowIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_FileDialogContentsView))
+            QtWidgets.QStyle.SP_FileDialogContentsView))
         self.setAcceptDrops(True)
 
         self.layout = QtWidgets.QGridLayout()
@@ -126,22 +231,22 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
 
         run = QtWidgets.QPushButton("Analyze")
         run.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
+            QtWidgets.QStyle.SP_DialogOkButton))
         run.setToolTip("Run analysis stage (Ctrl+Return)")
         run.setShortcut("CTRL+RETURN")
-        run.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
-                          QtWidgets.QSizePolicy.Policy.Expanding)
+        run.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
+                          QtWidgets.QSizePolicy.Expanding)
         self.layout.addWidget(run, 2, 4)
         run.clicked.connect(self._run_clicked)
         self._run = run
 
         terminate = QtWidgets.QPushButton("Terminate")
         terminate.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
+            QtWidgets.QStyle.SP_DialogCancelButton))
         terminate.setToolTip("Terminate analysis (Ctrl+Backspace)")
         terminate.setShortcut("CTRL+BACKSPACE")
-        terminate.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
-                                QtWidgets.QSizePolicy.Policy.Expanding)
+        terminate.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
+                                QtWidgets.QSizePolicy.Expanding)
         self.layout.addWidget(terminate, 3, 4)
         terminate.clicked.connect(self._terminate_clicked)
         terminate.setEnabled(False)
@@ -180,8 +285,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
         state = self.argeditor.save_state()
         self.argeditor.deleteLater()
         self.argeditor = _ArgumentEditor(self)
-        self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
         self.argeditor.restore_state(state)
+        self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
 
     async def load_hdf5_task(self, filename=None):
         if filename is None:
@@ -273,9 +378,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
 
 
 class LocalDatasetDB:
-    def __init__(self, dataset_sub):
-        self.dataset_sub = dataset_sub
-        dataset_sub.add_setmodel_callback(self.init)
+    def __init__(self, datasets_sub):
+        self.datasets_sub = datasets_sub
+        datasets_sub.add_setmodel_callback(self.init)
 
     def init(self, data):
         self._data = data
@@ -284,11 +389,11 @@ class LocalDatasetDB:
         return self._data.backing_store[key][1]
 
     def update(self, mod):
-        self.dataset_sub.update(mod)
+        self.datasets_sub.update(mod)
 
 
 class ExperimentsArea(QtWidgets.QMdiArea):
-    def __init__(self, root, dataset_sub):
+    def __init__(self, root, datasets_sub):
         QtWidgets.QMdiArea.__init__(self)
         self.pixmap = QtGui.QPixmap(os.path.join(
             artiq_dir, "gui", "logo_ver.svg"))
@@ -297,11 +402,11 @@ class ExperimentsArea(QtWidgets.QMdiArea):
 
         self.open_experiments = []
 
-        self._ddb = LocalDatasetDB(dataset_sub)
+        self._ddb = LocalDatasetDB(datasets_sub)
 
         self.worker_handlers = {
             "get_device_db": lambda: {},
-            "get_device": lambda key, resolve_alias=False: {"type": "dummy"},
+            "get_device": lambda k: {"type": "dummy"},
             "get_dataset": self._ddb.get,
             "update_dataset": self._ddb.update,
         }
@@ -316,7 +421,7 @@ class ExperimentsArea(QtWidgets.QMdiArea):
         asyncio.ensure_future(sub.load_hdf5_task(path))
 
     def mousePressEvent(self, ev):
-        if ev.button() == QtCore.Qt.MouseButton.LeftButton:
+        if ev.button() == QtCore.Qt.LeftButton:
             self.select_experiment()
 
     def paintEvent(self, event):
@@ -369,8 +474,6 @@ class ExperimentsArea(QtWidgets.QMdiArea):
     def initialize_submission_arguments(self, arginfo):
         arguments = OrderedDict()
         for name, (procdesc, group, tooltip) in arginfo.items():
-            if procdesc["ty"] == "EnumerationValue" and procdesc["quickstyle"]:
-                procdesc["quickstyle"] = False
             state = procdesc_to_entry(procdesc).default_state(procdesc)
             arguments[name] = {
                 "desc": procdesc,
@@ -406,16 +509,12 @@ class ExperimentsArea(QtWidgets.QMdiArea):
                              exc_info=True)
             dock = _ExperimentDock(self, expurl, {})
             asyncio.ensure_future(dock._recompute_arguments())
-        dock.setAttribute(QtCore.Qt.WidgetAttribute.WA_DeleteOnClose)
+        dock.setAttribute(QtCore.Qt.WA_DeleteOnClose)
         self.addSubWindow(dock)
         dock.show()
         dock.sigClosed.connect(partial(self.on_dock_closed, dock))
         self.open_experiments.append(dock)
         return dock
 
-    def set_argument_value(self, expurl, name, value):
-        logger.warning("Unable to set argument '%s', dropping change. "
-                       "'set_argument_value' not supported in browser.", name)
-
     def on_dock_closed(self, dock):
         self.open_experiments.remove(dock)
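Note on the initialize_submission_arguments hunk above: the upstream side downgrades "quickstyle" EnumerationValue arguments before building entries, presumably because quick-submit buttons have no use in the browser. A minimal, dict-only sketch of that normalization (the arginfo content below is made up):

    arginfo = {
        "mode": ({"ty": "EnumerationValue", "choices": ["a", "b"], "quickstyle": True},
                 None, None),
    }

    for name, (procdesc, group, tooltip) in arginfo.items():
        if procdesc["ty"] == "EnumerationValue" and procdesc["quickstyle"]:
            procdesc["quickstyle"] = False

    print(arginfo["mode"][0]["quickstyle"])   # False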
|
@ -3,7 +3,7 @@ import os
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
|
|
||||||
import h5py
|
import h5py
|
||||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||||
|
|
||||||
from sipyco import pyon
|
from sipyco import pyon
|
||||||
|
|
||||||
|
@ -69,52 +69,51 @@ class ZoomIconView(QtWidgets.QListView):
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
QtWidgets.QListView.__init__(self)
|
QtWidgets.QListView.__init__(self)
|
||||||
self._char_width = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
self._char_width = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
||||||
self.setViewMode(self.ViewMode.IconMode)
|
self.setViewMode(self.IconMode)
|
||||||
w = self._char_width*self.default_size
|
w = self._char_width*self.default_size
|
||||||
self.setIconSize(QtCore.QSize(w, int(w*self.aspect)))
|
self.setIconSize(QtCore.QSize(w, w*self.aspect))
|
||||||
self.setFlow(self.Flow.LeftToRight)
|
self.setFlow(self.LeftToRight)
|
||||||
self.setResizeMode(self.ResizeMode.Adjust)
|
self.setResizeMode(self.Adjust)
|
||||||
self.setWrapping(True)
|
self.setWrapping(True)
|
||||||
|
|
||||||
def wheelEvent(self, ev):
|
def wheelEvent(self, ev):
|
||||||
if ev.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
|
if ev.modifiers() & QtCore.Qt.ControlModifier:
|
||||||
a = self._char_width*self.min_size
|
a = self._char_width*self.min_size
|
||||||
b = self._char_width*self.max_size
|
b = self._char_width*self.max_size
|
||||||
w = self.iconSize().width()*self.zoom_step**(
|
w = self.iconSize().width()*self.zoom_step**(
|
||||||
ev.angleDelta().y()/120.)
|
ev.angleDelta().y()/120.)
|
||||||
if a <= w <= b:
|
if a <= w <= b:
|
||||||
self.setIconSize(QtCore.QSize(int(w), int(w*self.aspect)))
|
self.setIconSize(QtCore.QSize(w, w*self.aspect))
|
||||||
else:
|
else:
|
||||||
QtWidgets.QListView.wheelEvent(self, ev)
|
QtWidgets.QListView.wheelEvent(self, ev)
|
||||||
|
|
||||||
|
|
||||||
class Hdf5FileSystemModel(QtGui.QFileSystemModel):
|
class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
QtGui.QFileSystemModel.__init__(self)
|
QtWidgets.QFileSystemModel.__init__(self)
|
||||||
self.setFilter(QtCore.QDir.Filter.Drives | QtCore.QDir.Filter.NoDotAndDotDot |
|
self.setFilter(QtCore.QDir.Drives | QtCore.QDir.NoDotAndDotDot |
|
||||||
QtCore.QDir.Filter.AllDirs | QtCore.QDir.Filter.Files)
|
QtCore.QDir.AllDirs | QtCore.QDir.Files)
|
||||||
self.setNameFilterDisables(False)
|
self.setNameFilterDisables(False)
|
||||||
self.setIconProvider(ThumbnailIconProvider())
|
self.setIconProvider(ThumbnailIconProvider())
|
||||||
|
|
||||||
def data(self, idx, role):
|
def data(self, idx, role):
|
||||||
if role == QtCore.Qt.ItemDataRole.ToolTipRole:
|
if role == QtCore.Qt.ToolTipRole:
|
||||||
info = self.fileInfo(idx)
|
info = self.fileInfo(idx)
|
||||||
h5 = open_h5(info)
|
h5 = open_h5(info)
|
||||||
if h5 is not None:
|
if h5 is not None:
|
||||||
try:
|
try:
|
||||||
expid = pyon.decode(h5["expid"][()]) if "expid" in h5 else dict()
|
expid = pyon.decode(h5["expid"][()])
|
||||||
start_time = datetime.fromtimestamp(h5["start_time"][()]) if "start_time" in h5 else "<none>"
|
start_time = datetime.fromtimestamp(h5["start_time"][()])
|
||||||
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
|
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
|
||||||
"class_name: {}\nrid: {}\nstart_time: {}").format(
|
"class_name: {}\nrid: {}\nstart_time: {}").format(
|
||||||
h5["artiq_version"].asstr()[()] if "artiq_version" in h5 else "<none>",
|
h5["artiq_version"][()], expid["repo_rev"],
|
||||||
expid.get("repo_rev", "<none>"),
|
expid["file"], expid["class_name"],
|
||||||
expid.get("file", "<none>"), expid.get("class_name", "<none>"),
|
h5["rid"][()], start_time)
|
||||||
h5["rid"][()] if "rid" in h5 else "<none>", start_time)
|
|
||||||
return v
|
return v
|
||||||
except:
|
except:
|
||||||
logger.warning("unable to read metadata from %s",
|
logger.warning("unable to read metadata from %s",
|
||||||
info.filePath(), exc_info=True)
|
info.filePath(), exc_info=True)
|
||||||
return QtGui.QFileSystemModel.data(self, idx, role)
|
return QtWidgets.QFileSystemModel.data(self, idx, role)
|
||||||
|
|
||||||
|
|
||||||
 class FilesDock(QtWidgets.QDockWidget):

@@ -125,7 +124,7 @@ class FilesDock(QtWidgets.QDockWidget):
     def __init__(self, datasets, browse_root=""):
         QtWidgets.QDockWidget.__init__(self, "Files")
         self.setObjectName("Files")
-        self.setFeatures(self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)
+        self.setFeatures(self.DockWidgetMovable | self.DockWidgetFloatable)

         self.splitter = QtWidgets.QSplitter()
         self.setWidget(self.splitter)

@@ -147,8 +146,8 @@ class FilesDock(QtWidgets.QDockWidget):
         self.rt.setRootIndex(rt_model.mapFromSource(
             self.model.setRootPath(browse_root)))
         self.rt.setHeaderHidden(True)
-        self.rt.setSelectionBehavior(self.rt.SelectionBehavior.SelectRows)
-        self.rt.setSelectionMode(self.rt.SelectionMode.SingleSelection)
+        self.rt.setSelectionBehavior(self.rt.SelectRows)
+        self.rt.setSelectionMode(self.rt.SingleSelection)
         self.rt.selectionModel().currentChanged.connect(
             self.tree_current_changed)
         self.rt.setRootIsDecorated(False)

@@ -175,45 +174,31 @@ class FilesDock(QtWidgets.QDockWidget):
         logger.debug("loading datasets from %s", info.filePath())
         with f:
             try:
-                expid = pyon.decode(f["expid"][()]) if "expid" in f else dict()
-                start_time = datetime.fromtimestamp(f["start_time"][()]) if "start_time" in f else "<none>"
+                expid = pyon.decode(f["expid"][()])
+                start_time = datetime.fromtimestamp(f["start_time"][()])
                 v = {
-                    "artiq_version": f["artiq_version"].asstr()[()] if "artiq_version" in f else "<none>",
-                    "repo_rev": expid.get("repo_rev", "<none>"),
-                    "file": expid.get("file", "<none>"),
-                    "class_name": expid.get("class_name", "<none>"),
-                    "rid": f["rid"][()] if "rid" in f else "<none>",
+                    "artiq_version": f["artiq_version"][()],
+                    "repo_rev": expid["repo_rev"],
+                    "file": expid["file"],
+                    "class_name": expid["class_name"],
+                    "rid": f["rid"][()],
                     "start_time": start_time,
                 }
                 self.metadata_changed.emit(v)
             except:
                 logger.warning("unable to read metadata from %s",
                                info.filePath(), exc_info=True)

-            rd = {}
+            rd = dict()
             if "archive" in f:
-                def visitor(k, v):
-                    if isinstance(v, h5py.Dataset):
-                        # v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
-                        # See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
-                        rd[k] = (True, v[()], dict(v.attrs))
-
-                f["archive"].visititems(visitor)
+                rd = {k: (True, v[()]) for k, v in f["archive"].items()}

             if "datasets" in f:
-                def visitor(k, v):
-                    if isinstance(v, h5py.Dataset):
-                        if k in rd:
-                            logger.warning("dataset '%s' is both in archive "
-                                           "and outputs", k)
-                        # v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
-                        # See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
-                        rd[k] = (True, v[()], dict(v.attrs))
-
-                f["datasets"].visititems(visitor)
-
-            self.datasets.init(rd)
+                for k, v in f["datasets"].items():
+                    if k in rd:
+                        logger.warning("dataset '%s' is both in archive and "
+                                       "outputs", k)
+                    rd[k] = (True, v[()])
+            if rd:
+                self.datasets.init(rd)

         self.dataset_changed.emit(info.filePath())

     def list_activated(self, idx):

@@ -252,7 +237,7 @@ class FilesDock(QtWidgets.QDockWidget):
                 100,
                 lambda: self.rt.scrollTo(
                     self.rt.model().mapFromSource(self.model.index(path)),
-                    self.rt.ScrollHint.PositionAtCenter)
+                    self.rt.PositionAtCenter)
             )
             self.model.directoryLoaded.connect(scroll_when_loaded)
         idx = self.rt.model().mapFromSource(idx)

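The two sides of the dataset-loading hunk differ in how they walk the `archive` and `datasets` groups: `Group.items()` only yields direct children, while `Group.visititems()` recurses into nested groups and also lets the callback copy each dataset's attributes. A short sketch of the recursive form, assuming h5py and a hypothetical already-open file object:

```python
import h5py

def collect_datasets(f):
    """Map dataset path -> (persist flag, value, attrs), including nested groups."""
    rd = {}

    def visitor(k, v):
        if isinstance(v, h5py.Dataset):
            # v.attrs is an h5py.AttributeManager; copy it into a plain dict
            rd[k] = (True, v[()], dict(v.attrs))

    if "datasets" in f:
        f["datasets"].visititems(visitor)
    return rd
```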
@@ -59,18 +59,19 @@ def build_artiq_soc(soc, argdict):
     builder.software_packages = []
     builder.add_software_package("bootloader", os.path.join(firmware_dir, "bootloader"))
     is_kasli_v1 = isinstance(soc.platform, kasli.Platform) and soc.platform.hw_rev in ("v1.0", "v1.1")
-    kernel_cpu_type = "vexriscv" if is_kasli_v1 else "vexriscv-g"
-    builder.add_software_package("libm", cpu_type=kernel_cpu_type)
-    builder.add_software_package("libprintf", cpu_type=kernel_cpu_type)
-    builder.add_software_package("libunwind", cpu_type=kernel_cpu_type)
-    builder.add_software_package("ksupport", os.path.join(firmware_dir, "ksupport"), cpu_type=kernel_cpu_type)
-    # Generate unwinder for soft float target (ARTIQ runtime)
-    # If the kernel lacks FPU, then the runtime unwinder is already generated
-    if not is_kasli_v1:
-        builder.add_software_package("libunwind")
-    if not soc.config["DRTIO_ROLE"] == "satellite":
+    if isinstance(soc, AMPSoC):
+        kernel_cpu_type = "vexriscv" if is_kasli_v1 else "vexriscv-g"
+        builder.add_software_package("libm", cpu_type=kernel_cpu_type)
+        builder.add_software_package("libprintf", cpu_type=kernel_cpu_type)
+        builder.add_software_package("libunwind", cpu_type=kernel_cpu_type)
+        builder.add_software_package("ksupport", os.path.join(firmware_dir, "ksupport"), cpu_type=kernel_cpu_type)
+        # Generate unwinder for soft float target (ARTIQ runtime)
+        # If the kernel lacks FPU, then the runtime unwinder is already generated
+        if not is_kasli_v1:
+            builder.add_software_package("libunwind")
         builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime"))
     else:
+        # Assume DRTIO satellite.
         builder.add_software_package("satman", os.path.join(firmware_dir, "satman"))
     try:
         builder.build()

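Both versions of `build_artiq_soc` decide the same things: which CPU flavour the kernel packages target and whether to build the `runtime` or the DRTIO `satman` firmware; one keys the choice on the `DRTIO_ROLE` config value, the other on whether the SoC is an `AMPSoC`. A schematic sketch of that selection logic only, with simplified names and not the real builder API:

```python
def firmware_packages(drtio_role, is_kasli_v1):
    """Sketch of the package selection performed by build_artiq_soc (simplified)."""
    kernel_cpu = "vexriscv" if is_kasli_v1 else "vexriscv-g"
    pkgs = [("bootloader", None)]
    pkgs += [(p, kernel_cpu) for p in ("libm", "libprintf", "libunwind", "ksupport")]
    if not is_kasli_v1:
        pkgs.append(("libunwind", None))  # soft-float unwinder for the runtime CPU
    pkgs.append(("satman" if drtio_role == "satellite" else "runtime", None))
    return pkgs

print(firmware_packages("standalone", False))
```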
@@ -21,19 +21,13 @@ class scoped(object):
         set of variables resolved as globals
     """

-class remote(object):
-    """
-    :ivar remote_fn: (bool) whether function is ran on a remote device,
-        meaning arguments are received remotely and return is sent remotely
-    """
-
 # Typed versions of untyped nodes
 class argT(ast.arg, commontyped):
     pass

 class ClassDefT(ast.ClassDef):
     _types = ("constructor_type",)
-class FunctionDefT(ast.FunctionDef, scoped, remote):
+class FunctionDefT(ast.FunctionDef, scoped):
     _types = ("signature_type",)
 class QuotedFunctionDefT(FunctionDefT):
     """

@@ -64,7 +58,7 @@ class BinOpT(ast.BinOp, commontyped):
     pass
 class BoolOpT(ast.BoolOp, commontyped):
     pass
-class CallT(ast.Call, commontyped, remote):
+class CallT(ast.Call, commontyped):
     """
     :ivar iodelay: (:class:`iodelay.Expr`)
     :ivar arg_exprs: (dict of str to :class:`iodelay.Expr`)

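`scoped` and the removed `remote` are plain mixin classes: they add no behaviour, they only document the extra attributes (such as `remote_fn`) that the typed AST nodes carry, while `_types` lists the type slots attached to each node. A toy sketch of this pattern using stand-in names rather than the real compiler classes:

```python
import ast

class commontyped:
    """Mixin documenting a `type` slot on typed AST nodes (illustrative)."""
    _types = ("type",)

class remote:
    """Mixin documenting a `remote_fn` flag on callable nodes (illustrative)."""

class MyCallT(ast.Call, commontyped, remote):
    _types = ("type",)

node = MyCallT(func=ast.Name(id="f", ctx=ast.Load()), args=[], keywords=[])
node.type, node.remote_fn = None, False
print(node._types, node.remote_fn)
```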
@@ -38,9 +38,6 @@ class TInt(types.TMono):
     def one():
         return 1

-def TInt8():
-    return TInt(types.TValue(8))
-
 def TInt32():
     return TInt(types.TValue(32))

@@ -126,23 +123,18 @@ class TException(types.TMono):
     # * File, line and column where it was raised (str, int, int).
     # * Message, which can contain substitutions {0}, {1} and {2} (str).
     # * Three 64-bit integers, parameterizing the message (numpy.int64).
-    # These attributes are prefixed with `#` so that users cannot access them,
-    # and we don't have to do string allocation in the runtime.
-    # #__name__ is now a string key in the host. TStr may not be an actual
-    # CSlice in the runtime, they might be a CSlice with length = i32::MAX and
-    # ptr = string key in the host.

     # Keep this in sync with the function ARTIQIRGenerator.alloc_exn.
     attributes = OrderedDict([
-        ("#__name__", TInt32()),
-        ("#__file__", TStr()),
-        ("#__line__", TInt32()),
-        ("#__col__", TInt32()),
-        ("#__func__", TStr()),
-        ("#__message__", TStr()),
-        ("#__param0__", TInt64()),
-        ("#__param1__", TInt64()),
-        ("#__param2__", TInt64()),
+        ("__name__", TStr()),
+        ("__file__", TStr()),
+        ("__line__", TInt32()),
+        ("__col__", TInt32()),
+        ("__func__", TStr()),
+        ("__message__", TStr()),
+        ("__param0__", TInt64()),
+        ("__param1__", TInt64()),
+        ("__param2__", TInt64()),
     ])

     def __init__(self, name="Exception", id=0):

@@ -177,9 +169,7 @@ def fn_list():
     return types.TConstructor(TList())

 def fn_array():
-    # numpy.array() is actually a "magic" macro that is expanded in-place, but
-    # just as for builtin functions, we do not want to quote it, etc.
-    return types.TBuiltinFunction("array")
+    return types.TConstructor(TArray())

 def fn_Exception():
     return types.TExceptionConstructor(TException("Exception"))

@@ -247,18 +237,6 @@ def fn_at_mu():
 def fn_rtio_log():
     return types.TBuiltinFunction("rtio_log")

-def fn_subkernel_await():
-    return types.TBuiltinFunction("subkernel_await")
-
-def fn_subkernel_preload():
-    return types.TBuiltinFunction("subkernel_preload")
-
-def fn_subkernel_send():
-    return types.TBuiltinFunction("subkernel_send")
-
-def fn_subkernel_recv():
-    return types.TBuiltinFunction("subkernel_recv")
-
 # Accessors

 def is_none(typ):

@@ -341,7 +319,7 @@ def get_iterable_elt(typ):
     # n-dimensional arrays, rather than the n-1 dimensional result of iterating over
     # the first axis, which makes the name a bit misleading.
     if is_str(typ) or is_bytes(typ) or is_bytearray(typ):
-        return TInt8()
+        return TInt(types.TValue(8))
     elif types._is_pointer(typ) or is_iterable(typ):
         return typ.find()["elt"].find()
     else:

@@ -357,5 +335,5 @@ def is_allocated(typ):
             is_float(typ) or is_range(typ) or
             types._is_pointer(typ) or types.is_function(typ) or
             types.is_external_function(typ) or types.is_rpc(typ) or
-            types.is_subkernel(typ) or types.is_method(typ) or
-            types.is_tuple(typ) or types.is_value(typ))
+            types.is_method(typ) or types.is_tuple(typ) or
+            types.is_value(typ))

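The exception attribute table above is the interesting part of this hunk: one side prefixes every attribute with `#` so user kernel code cannot reach them, and stores `#__name__` as an int32 string key resolved on the host instead of an allocated string. The shape of such a table can be sketched on its own (the `TInt32`/`TInt64`/`TStr` names below are illustrative stand-ins, not the compiler's constructors):

```python
from collections import OrderedDict

# Illustrative stand-ins for the compiler's type constructors.
TInt32, TInt64, TStr = "int32", "int64", "str"

exception_attributes = OrderedDict([
    ("#__name__", TInt32),    # string key, resolved on the host
    ("#__file__", TStr),
    ("#__line__", TInt32),
    ("#__col__", TInt32),
    ("#__func__", TStr),
    ("#__message__", TStr),
    ("#__param0__", TInt64),
    ("#__param1__", TInt64),
    ("#__param2__", TInt64),
])
assert all(name.startswith("#") for name in exception_attributes)
```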
@@ -5,7 +5,6 @@ the references to the host objects and translates the functions
 annotated as ``@kernel`` when they are referenced.
 """

-import typing
 import os, re, linecache, inspect, textwrap, types as pytypes, numpy
 from collections import OrderedDict, defaultdict

@@ -19,13 +18,6 @@ from . import types, builtins, asttyped, math_fns, prelude
 from .transforms import ASTTypedRewriter, Inferencer, IntMonomorphizer, TypedtreePrinter
 from .transforms.asttyped_rewriter import LocalExtractor

-try:
-    # From numpy=1.25.0 dispatching for `__array_function__` is done via
-    # a C wrapper: https://github.com/numpy/numpy/pull/23020
-    from numpy.core._multiarray_umath import _ArrayFunctionDispatcher
-except ImportError:
-    _ArrayFunctionDispatcher = None
-

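The removed guarded import exists because numpy 1.25 started dispatching `__array_function__` through a C-level `_ArrayFunctionDispatcher` wrapper, so functions such as `numpy.transpose` no longer pass `inspect.isfunction` and the embedding code has to recognize the wrapper explicitly. A small sketch of how the guarded import is used (the helper name is hypothetical):

```python
import inspect
import numpy

try:
    # numpy >= 1.25 dispatches __array_function__ through a C wrapper.
    from numpy.core._multiarray_umath import _ArrayFunctionDispatcher
except ImportError:
    _ArrayFunctionDispatcher = None

def is_quotable_function(value):
    return (inspect.isfunction(value) or isinstance(value, numpy.ufunc) or
            (_ArrayFunctionDispatcher is not None and
             isinstance(value, _ArrayFunctionDispatcher)))

print(is_quotable_function(numpy.transpose), is_quotable_function(numpy.sin))
```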
 class SpecializedFunction:
     def __init__(self, instance_type, host_function):

@@ -47,93 +39,14 @@ class SpecializedFunction:
         return hash((self.instance_type, self.host_function))


-class SubkernelMessageType:
-    def __init__(self, name, value_type):
-        self.name = name
-        self.value_type = value_type
-        self.send_loc = None
-        self.recv_loc = None
-
 class EmbeddingMap:
-    def __init__(self, old_embedding_map=None):
+    def __init__(self):
         self.object_current_key = 0
         self.object_forward_map = {}
         self.object_reverse_map = {}
         self.module_map = {}

-        # type_map connects the host Python `type` to the pair of associated
-        # `(TInstance, TConstructor)`s. The `used_…_names` sets cache the
-        # respective `.name`s for O(1) collision avoidance.
         self.type_map = {}
-        self.used_instance_type_names = set()
-        self.used_constructor_type_names = set()
-
         self.function_map = {}
-        self.str_forward_map = {}
-        self.str_reverse_map = {}
-
-        # mapping `name` to object ID
-        self.subkernel_message_map = {}
-
-        # subkernels: dict of ID: function, just like object_forward_map
-        # allow the embedding map to be aware of subkernels from other kernels
-        if not old_embedding_map is None:
-            for key, obj_ref in old_embedding_map.subkernels().items():
-                self.object_forward_map[key] = obj_ref
-                obj_id = id(obj_ref)
-                self.object_reverse_map[obj_id] = key
-            for msg_id, msg_type in old_embedding_map.subkernel_messages().items():
-                self.object_forward_map[msg_id] = msg_type
-                obj_id = id(msg_type)
-                self.subkernel_message_map[msg_type.name] = msg_id
-                self.object_reverse_map[obj_id] = msg_id
-
-        # Keep this list of exceptions in sync with `EXCEPTION_ID_LOOKUP` in `artiq::firmware::ksupport::eh_artiq`
-        # The exceptions declared here must be defined in `artiq.coredevice.exceptions`
-        # Verify synchronization by running the test cases in `artiq.test.coredevice.test_exceptions`
-        self.preallocate_runtime_exception_names([
-            "RTIOUnderflow",
-            "RTIOOverflow",
-            "RTIODestinationUnreachable",
-            "DMAError",
-            "I2CError",
-            "CacheError",
-            "SPIError",
-            "SubkernelError",
-
-            "0:AssertionError",
-            "0:AttributeError",
-            "0:IndexError",
-            "0:IOError",
-            "0:KeyError",
-            "0:NotImplementedError",
-            "0:OverflowError",
-            "0:RuntimeError",
-            "0:TimeoutError",
-            "0:TypeError",
-            "0:ValueError",
-            "0:ZeroDivisionError",
-            "0:LinAlgError",
-            "UnwrapNoneError",
-        ])
-
-    def preallocate_runtime_exception_names(self, names):
-        for i, name in enumerate(names):
-            if ":" not in name:
-                name = "0:artiq.coredevice.exceptions." + name
-            exn_id = self.store_str(name)
-            assert exn_id == i
-
-    def store_str(self, s):
-        if s in self.str_forward_map:
-            return self.str_forward_map[s]
-        str_id = len(self.str_forward_map)
-        self.str_forward_map[s] = str_id
-        self.str_reverse_map[str_id] = s
-        return str_id
-
-    def retrieve_str(self, str_id):
-        return self.str_reverse_map[str_id]
-
     # Modules
     def store_module(self, module, module_type):

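`store_str`/`retrieve_str` in the removed block are a simple string interning table: strings get small, stable integer IDs, and `preallocate_runtime_exception_names` pins the runtime exception names to the first IDs so host and firmware agree on them. A standalone sketch of the same mechanism (class name hypothetical):

```python
class StringTable:
    """Minimal sketch of the EmbeddingMap string interning scheme."""
    def __init__(self):
        self.forward, self.reverse = {}, {}

    def store(self, s):
        if s in self.forward:
            return self.forward[s]
        str_id = len(self.forward)
        self.forward[s] = str_id
        self.reverse[str_id] = s
        return str_id

    def retrieve(self, str_id):
        return self.reverse[str_id]

table = StringTable()
for i, name in enumerate(["RTIOUnderflow", "RTIOOverflow"]):
    # preallocation relies on IDs being handed out consecutively
    assert table.store("0:artiq.coredevice.exceptions." + name) == i
```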
@@ -147,6 +60,16 @@ class EmbeddingMap:

     # Types
     def store_type(self, host_type, instance_type, constructor_type):
+        self._rename_type(instance_type)
+        self.type_map[host_type] = (instance_type, constructor_type)
+
+    def retrieve_type(self, host_type):
+        return self.type_map[host_type]
+
+    def has_type(self, host_type):
+        return host_type in self.type_map
+
+    def _rename_type(self, new_instance_type):
         # Generally, user-defined types that have exact same name (which is to say, classes
         # defined inside functions) do not pose a problem to the compiler. The two places which
         # cannot handle this are:

@@ -155,29 +78,12 @@ class EmbeddingMap:
         # Since handling #2 requires renaming on ARTIQ side anyway, it's more straightforward
         # to do it once when embedding (since non-embedded code cannot define classes in
         # functions). Also, easier to debug.
-        suffix = 0
-        new_instance_name = instance_type.name
-        new_constructor_name = constructor_type.name
-        while True:
-            if (new_instance_name not in self.used_instance_type_names
-                    and new_constructor_name not in self.used_constructor_type_names):
-                break
-            suffix += 1
-            new_instance_name = f"{instance_type.name}.{suffix}"
-            new_constructor_name = f"{constructor_type.name}.{suffix}"
-
-        self.used_instance_type_names.add(new_instance_name)
-        instance_type.name = new_instance_name
-        self.used_constructor_type_names.add(new_constructor_name)
-        constructor_type.name = new_constructor_name
-
-        self.type_map[host_type] = (instance_type, constructor_type)
-
-    def retrieve_type(self, host_type):
-        return self.type_map[host_type]
-
-    def has_type(self, host_type):
-        return host_type in self.type_map
-
+        n = 0
+        for host_type in self.type_map:
+            instance_type, constructor_type = self.type_map[host_type]
+            if instance_type.name == new_instance_type.name:
+                n += 1
+                new_instance_type.name = "{}.{}".format(new_instance_type.name, n)
+
     def attribute_count(self):
         count = 0

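Both variants solve the same problem, two embedded classes with the same name must end up with distinct type names, but one keeps `used_*_names` sets so each rename costs O(1) instead of rescanning `type_map` for every stored type. The set-based scheme in isolation (helper name hypothetical):

```python
def unique_name(base, used_names):
    """Return base, or base.N for the first free suffix, and reserve it."""
    candidate, suffix = base, 0
    while candidate in used_names:
        suffix += 1
        candidate = "{}.{}".format(base, suffix)
    used_names.add(candidate)
    return candidate

used = set()
print(unique_name("Experiment", used))  # Experiment
print(unique_name("Experiment", used))  # Experiment.1
```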
@@ -204,11 +110,6 @@ class EmbeddingMap:
             return self.object_reverse_map[obj_id]

         self.object_current_key += 1
-        while self.object_forward_map.get(self.object_current_key):
-            # make sure there's no collisions with previously inserted subkernels
-            # their identifiers must be consistent across all kernels/subkernels
-            self.object_current_key += 1
-
         self.object_forward_map[self.object_current_key] = obj_ref
         self.object_reverse_map[obj_id] = self.object_current_key
         return self.object_current_key

@@ -221,7 +122,7 @@ class EmbeddingMap:
             obj_ref = self.object_forward_map[obj_id]
             if isinstance(obj_ref, (pytypes.FunctionType, pytypes.MethodType,
                                     pytypes.BuiltinFunctionType, pytypes.ModuleType,
-                                    SpecializedFunction, SubkernelMessageType)):
+                                    SpecializedFunction)):
                 continue
             elif isinstance(obj_ref, type):
                 _, obj_typ = self.type_map[obj_ref]

@@ -229,55 +130,14 @@ class EmbeddingMap:
                 obj_typ, _ = self.type_map[type(obj_ref)]
             yield obj_id, obj_ref, obj_typ

-    def subkernels(self):
-        subkernels = {}
-        for k, v in self.object_forward_map.items():
-            if hasattr(v, "artiq_embedded"):
-                if v.artiq_embedded.destination is not None:
-                    subkernels[k] = v
-        return subkernels
-
-    def store_subkernel_message(self, name, value_type, function_type, function_loc):
-        if name in self.subkernel_message_map:
-            msg_id = self.subkernel_message_map[name]
-        else:
-            msg_id = self.store_object(SubkernelMessageType(name, value_type))
-            self.subkernel_message_map[name] = msg_id
-        subkernel_msg = self.retrieve_object(msg_id)
-        if function_type == "send":
-            subkernel_msg.send_loc = function_loc
-        elif function_type == "recv":
-            subkernel_msg.recv_loc = function_loc
-        else:
-            assert False
-        return msg_id, subkernel_msg
-
-    def subkernel_messages(self):
-        messages = {}
-        for msg_id in self.subkernel_message_map.values():
-            messages[msg_id] = self.retrieve_object(msg_id)
-        return messages
-
-    def subkernel_messages_unpaired(self):
-        unpaired = []
-        for msg_id in self.subkernel_message_map.values():
-            msg_obj = self.retrieve_object(msg_id)
-            if msg_obj.send_loc is None or msg_obj.recv_loc is None:
-                unpaired.append(msg_obj)
-        return unpaired
-
     def has_rpc(self):
-        return any(filter(
-            lambda x: (inspect.isfunction(x) or inspect.ismethod(x)) and \
-                (not hasattr(x, "artiq_embedded") or x.artiq_embedded.destination is None),
-            self.object_forward_map.values()
-        ))
+        return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x),
+                          self.object_forward_map.values()))

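`store_object` hands out consecutive integer keys; the removed `while` loop additionally skips keys already occupied by subkernel objects imported from an earlier embedding map, so object identifiers stay consistent across kernels and subkernels. The allocation loop in isolation (function name hypothetical):

```python
def next_free_key(current_key, taken):
    """Advance past keys that are already reserved by earlier embeddings."""
    current_key += 1
    while current_key in taken:
        current_key += 1
    return current_key

taken = {2, 3}
key = next_free_key(1, taken)
assert key == 4
```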
 class ASTSynthesizer:
     def __init__(self, embedding_map, value_map, quote_function=None, expanded_from=None):
         self.source = ""
-        self.source_last_new_line = 0
         self.source_buffer = source.Buffer(self.source, "<synthesized>")
         self.embedding_map = embedding_map
         self.value_map = value_map

@@ -296,14 +156,6 @@ class ASTSynthesizer:
         return source.Range(self.source_buffer, range_from, range_to,
                             expanded_from=self.expanded_from)

-    def _add_iterable(self, fragment):
-        # Since DILocation points on the beginning of the piece of source
-        # we don't care if the fragment's end will overflow LLVM's limit.
-        if len(self.source) - self.source_last_new_line >= 2**16:
-            fragment = "\\\n" + fragment
-            self.source_last_new_line = len(self.source) + 2
-        return self._add(fragment)
-
     def fast_quote_list(self, value):
         elts = [None] * len(value)
         is_T = False

@@ -362,7 +214,7 @@ class ASTSynthesizer:
             for index, elt in enumerate(value):
                 elts[index] = self.quote(elt)
                 if index < len(value) - 1:
-                    self._add_iterable(", ")
+                    self._add(", ")
         return elts

     def quote(self, value):

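`_add_iterable` exists because LLVM debug locations store the column number in a limited field; when the single synthesized source line for a long quoted list grows past 2**16 characters, the helper prepends an escaped newline so later fragments start on a fresh line while the source stays valid Python. A self-contained sketch of that bookkeeping (class name hypothetical):

```python
class SourceBuilder:
    """Sketch of ASTSynthesizer's column-overflow workaround for quoted iterables."""
    COLUMN_LIMIT = 2**16

    def __init__(self):
        self.source = ""
        self.last_newline = 0

    def add(self, fragment):
        if len(self.source) - self.last_newline >= self.COLUMN_LIMIT:
            # break the logical line with a backslash continuation
            fragment = "\\\n" + fragment
            self.last_newline = len(self.source) + 2
        start = len(self.source)
        self.source += fragment
        return start, len(self.source)

b = SourceBuilder()
for _ in range(40000):
    b.add("1e100, ")
assert max(len(line) for line in b.source.split("\n")) <= 2**16 + 8
```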
@@ -413,28 +265,28 @@ class ASTSynthesizer:
                                  loc=self._add(repr(value)))
         elif isinstance(value, str):
             return asttyped.StrT(s=value, ctx=None, type=builtins.TStr(),
-                                 loc=self._add_iterable(repr(value)))
+                                 loc=self._add(repr(value)))
         elif isinstance(value, bytes):
             return asttyped.StrT(s=value, ctx=None, type=builtins.TBytes(),
-                                 loc=self._add_iterable(repr(value)))
+                                 loc=self._add(repr(value)))
         elif isinstance(value, bytearray):
-            quote_loc = self._add_iterable('`')
-            repr_loc = self._add_iterable(repr(value))
-            unquote_loc = self._add_iterable('`')
+            quote_loc = self._add('`')
+            repr_loc = self._add(repr(value))
+            unquote_loc = self._add('`')
             loc = quote_loc.join(unquote_loc)

             return asttyped.QuoteT(value=value, type=builtins.TByteArray(), loc=loc)
         elif isinstance(value, list):
-            begin_loc = self._add_iterable("[")
+            begin_loc = self._add("[")
             elts = self.fast_quote_list(value)
-            end_loc = self._add_iterable("]")
+            end_loc = self._add("]")
             return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(),
                                   begin_loc=begin_loc, end_loc=end_loc,
                                   loc=begin_loc.join(end_loc))
         elif isinstance(value, tuple):
-            begin_loc = self._add_iterable("(")
+            begin_loc = self._add("(")
             elts = self.fast_quote_list(value)
-            end_loc = self._add_iterable(")")
+            end_loc = self._add(")")
             return asttyped.TupleT(elts=elts, ctx=None,
                                    type=types.TTuple([e.type for e in elts]),
                                    begin_loc=begin_loc, end_loc=end_loc,

@@ -444,9 +296,7 @@ class ASTSynthesizer:
         elif inspect.isfunction(value) or inspect.ismethod(value) or \
                 isinstance(value, pytypes.BuiltinFunctionType) or \
                 isinstance(value, SpecializedFunction) or \
-                isinstance(value, numpy.ufunc) or \
-                (isinstance(value, _ArrayFunctionDispatcher) if
-                    _ArrayFunctionDispatcher is not None else False):
+                isinstance(value, numpy.ufunc):
             if inspect.ismethod(value):
                 quoted_self = self.quote(value.__self__)
                 function_type = self.quote_function(value.__func__, self.expanded_from)

@@ -555,7 +405,7 @@ class ASTSynthesizer:
             return asttyped.QuoteT(value=value, type=instance_type,
                                    loc=loc)

-    def call(self, callee, args, kwargs, callback=None, remote_fn=False):
+    def call(self, callee, args, kwargs, callback=None):
         """
         Construct an AST fragment calling a function specified by
         an AST node `function_node`, with given arguments.

@@ -599,7 +449,7 @@ class ASTSynthesizer:
             starargs=None, kwargs=None,
             type=types.TVar(), iodelay=None, arg_exprs={},
             begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None,
-            loc=callee_node.loc.join(end_loc), remote_fn=remote_fn)
+            loc=callee_node.loc.join(end_loc))

         if callback is not None:
             node = asttyped.CallT(

@@ -634,7 +484,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
                               arg=node.arg, annotation=None,
                               arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc)

-    def visit_quoted_function(self, node, function, remote_fn):
+    def visit_quoted_function(self, node, function):
         extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
         extractor.visit(node)

@@ -651,11 +501,11 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
         node = asttyped.QuotedFunctionDefT(
             typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
             signature_type=types.TVar(), return_type=types.TVar(),
-            name=node.name, args=node.args, returns=None,
+            name=node.name, args=node.args, returns=node.returns,
             body=node.body, decorator_list=node.decorator_list,
             keyword_loc=node.keyword_loc, name_loc=node.name_loc,
             arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
-            loc=node.loc, remote_fn=remote_fn)
+            loc=node.loc)

         try:
             self.env_stack.append(node.typing_env)

@@ -763,9 +613,9 @@ class StitchingInferencer(Inferencer):
             if elt.__class__ == float:
                 state |= IS_FLOAT
             elif elt.__class__ == int:
-                if -2**31 <= elt <= 2**31-1:
+                if -2**31 < elt < 2**31-1:
                     state |= IS_INT32
-                elif -2**63 <= elt <= 2**63-1:
+                elif -2**63 < elt < 2**63-1:
                     state |= IS_INT64
                 else:
                     state = -1

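The one-character change here matters: with strict `<`, the boundary values -2**31 and 2**31-1 (and likewise the int64 extremes) fall through to the wider type or the error branch even though they are representable. The inclusive classification in isolation:

```python
def classify_int(value):
    """Classify a host int with inclusive bounds, as the inferencer should."""
    if -2**31 <= value <= 2**31 - 1:
        return "int32"
    elif -2**63 <= value <= 2**63 - 1:
        return "int64"
    else:
        return "unsupported"

assert classify_int(-2**31) == "int32"
assert classify_int(2**31 - 1) == "int32"
assert classify_int(2**63 - 1) == "int64"
```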
@@ -863,7 +713,7 @@ class TypedtreeHasher(algorithm.Visitor):
         return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields))

 class Stitcher:
-    def __init__(self, core, dmgr, engine=None, print_as_rpc=True, destination=0, subkernel_arg_types=[], old_embedding_map=None):
+    def __init__(self, core, dmgr, engine=None, print_as_rpc=True):
         self.core = core
         self.dmgr = dmgr
         if engine is None:

@@ -885,23 +735,15 @@ class Stitcher:

         self.functions = {}

-        self.embedding_map = EmbeddingMap(old_embedding_map)
+        self.embedding_map = EmbeddingMap()
         self.value_map = defaultdict(lambda: [])
         self.definitely_changed = False

-        self.destination = destination
-        self.first_call = True
-        # for non-annotated subkernels:
-        # main kernel inferencer output with types of arguments
-        self.subkernel_arg_types = subkernel_arg_types
-
     def stitch_call(self, function, args, kwargs, callback=None):
         # We synthesize source code for the initial call so that
         # diagnostics would have something meaningful to display to the user.
         synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function))
-        # first call of a subkernel will get its arguments from remote (DRTIO)
-        remote_fn = self.destination != 0
-        call_node = synthesizer.call(function, args, kwargs, callback, remote_fn=remote_fn)
+        call_node = synthesizer.call(function, args, kwargs, callback)
         synthesizer.finalize()
         self.typedtree.append(call_node)

@@ -1013,10 +855,6 @@ class Stitcher:
             return [diagnostic.Diagnostic("note",
                 "in kernel function here", {},
                 call_loc)]
-        elif fn_kind == 'subkernel':
-            return [diagnostic.Diagnostic("note",
-                "in subkernel call here", {},
-                call_loc)]
         else:
             assert False
         else:

@@ -1036,7 +874,7 @@ class Stitcher:
                 self._function_loc(function),
                 notes=self._call_site_note(loc, fn_kind))
             self.engine.process(diag)
-        elif fn_kind == 'rpc' or fn_kind == 'subkernel' and param.default is not inspect.Parameter.empty:
+        elif fn_kind == 'rpc' and param.default is not inspect.Parameter.empty:
             notes = []
             notes.append(diagnostic.Diagnostic("note",
                 "expanded from here while trying to infer a type for an"

@@ -1055,18 +893,11 @@ class Stitcher:
             Inferencer(engine=self.engine).visit(ast)
             IntMonomorphizer(engine=self.engine).visit(ast)
             return ast.type
-        elif fn_kind == 'kernel' and self.first_call and self.destination != 0:
-            # subkernels do not have access to the main kernel code to infer
-            # arg types - so these are cached and passed onto subkernel
-            # compilation, to avoid having to annotate them fully
-            for name, typ in self.subkernel_arg_types:
-                if param.name == name:
-                    return typ
-
-        # Let the rest of the program decide.
-        return types.TVar()
-
-    def _quote_embedded_function(self, function, flags, remote_fn=False):
+        else:
+            # Let the rest of the program decide.
+            return types.TVar()
+
+    def _quote_embedded_function(self, function, flags):
         # we are now parsing new functions... definitely changed the type
         self.definitely_changed = True

@@ -1165,7 +996,7 @@ class Stitcher:
             engine=self.engine, prelude=self.prelude,
             globals=self.globals, host_environment=host_environment,
             quote=self._quote)
-        function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function, remote_fn)
+        function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function)
         function_node.flags = flags

         # Add it into our typedtree so that it gets inferenced and codegen'd.

@@ -1177,6 +1008,9 @@ class Stitcher:
         return function_node

     def _extract_annot(self, function, annot, kind, call_loc, fn_kind):
+        if annot is None:
+            annot = builtins.TNone()
+
         if isinstance(function, SpecializedFunction):
             host_function = function.host_function
         else:

@@ -1190,20 +1024,9 @@ class Stitcher:
         if isinstance(embedded_function, str):
             embedded_function = host_function

-        return self._to_artiq_type(
-            annot,
-            function=function,
-            kind=kind,
-            eval_in_scope=lambda x: eval(x, embedded_function.__globals__),
-            call_loc=call_loc,
-            fn_kind=fn_kind)
-
-    def _to_artiq_type(
-        self, annot, *, function, kind: str, eval_in_scope, call_loc: str, fn_kind: str
-    ) -> types.Type:
         if isinstance(annot, str):
             try:
-                annot = eval_in_scope(annot)
+                annot = eval(annot, embedded_function.__globals__)
             except Exception:
                 diag = diagnostic.Diagnostic(
                     "error",

@@ -1213,72 +1036,23 @@ class Stitcher:
                     notes=self._call_site_note(call_loc, fn_kind))
                 self.engine.process(diag)

-        if isinstance(annot, types.Type):
-            return annot
-
-        # Convert built-in Python types to ARTIQ ones.
-        if annot is None:
-            return builtins.TNone()
-        elif annot is numpy.int64:
-            return builtins.TInt64()
-        elif annot is numpy.int32:
-            return builtins.TInt32()
-        elif annot is float:
-            return builtins.TFloat()
-        elif annot is bool:
-            return builtins.TBool()
-        elif annot is str:
-            return builtins.TStr()
-        elif annot is bytes:
-            return builtins.TBytes()
-        elif annot is bytearray:
-            return builtins.TByteArray()
-
-        # Convert generic Python types to ARTIQ ones.
-        generic_ty = typing.get_origin(annot)
-        if generic_ty is not None:
-            type_args = typing.get_args(annot)
-            artiq_args = [
-                self._to_artiq_type(
-                    x,
-                    function=function,
-                    kind=kind,
-                    eval_in_scope=eval_in_scope,
-                    call_loc=call_loc,
-                    fn_kind=fn_kind)
-                for x in type_args
-            ]
-
-            if generic_ty is list and len(artiq_args) == 1:
-                return builtins.TList(artiq_args[0])
-            elif generic_ty is tuple:
-                return types.TTuple(artiq_args)
-
-        # Otherwise report an unknown type and just use a fresh tyvar.
-
-        if annot is int:
-            message = (
-                "type annotation for {kind}, 'int' cannot be used as an ARTIQ type. "
-                "Use numpy's int32 or int64 instead."
-            )
-            ty = builtins.TInt()
+        if not isinstance(annot, types.Type):
+            diag = diagnostic.Diagnostic("error",
+                "type annotation for {kind}, '{annot}', is not an ARTIQ type",
+                {"kind": kind, "annot": repr(annot)},
+                self._function_loc(function),
+                notes=self._call_site_note(call_loc, fn_kind))
+            self.engine.process(diag)
+
+            return types.TVar()
         else:
-            message = "type annotation for {kind}, '{annot}', is not an ARTIQ type"
-            ty = types.TVar()
-
-        diag = diagnostic.Diagnostic("error",
-            message,
-            {"kind": kind, "annot": repr(annot)},
-            self._function_loc(function),
-            notes=self._call_site_note(call_loc, fn_kind))
-        self.engine.process(diag)
-
-        return ty
+            return annot

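The removed `_to_artiq_type` helper maps Python annotations (including generics) to compiler types via `typing.get_origin`/`typing.get_args`, and rejects plain `int` in favour of `numpy.int32`/`numpy.int64`. A trimmed sketch of the same dispatch, with string type names standing in for the real `builtins.T*` constructors:

```python
import typing
import numpy

def to_artiq_type(annot):
    """Sketch: map a Python annotation to an ARTIQ-style type name."""
    scalars = {None: "TNone", numpy.int32: "TInt32", numpy.int64: "TInt64",
               float: "TFloat", bool: "TBool", str: "TStr",
               bytes: "TBytes", bytearray: "TByteArray"}
    if annot in scalars:
        return scalars[annot]
    origin = typing.get_origin(annot)
    if origin is list:
        (elt,) = typing.get_args(annot)
        return "TList({})".format(to_artiq_type(elt))
    if origin is tuple:
        return "TTuple([{}])".format(", ".join(
            to_artiq_type(a) for a in typing.get_args(annot)))
    if annot is int:
        raise TypeError("use numpy.int32 or numpy.int64 instead of int")
    raise TypeError("not an ARTIQ type: {!r}".format(annot))

print(to_artiq_type(list[numpy.int32]))   # TList(TInt32)
print(to_artiq_type(tuple[float, bool]))  # TTuple([TFloat, TBool])
```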
     def _quote_syscall(self, function, loc):
         signature = inspect.signature(function)

         arg_types = OrderedDict()
+        optarg_types = OrderedDict()
         for param in signature.parameters.values():
             if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
                 diag = diagnostic.Diagnostic("error",

@@ -1316,40 +1090,6 @@ class Stitcher:
         self.functions[function] = function_type
         return function_type

-    def _quote_subkernel(self, function, loc):
-        if isinstance(function, SpecializedFunction):
-            host_function = function.host_function
-        else:
-            host_function = function
-        ret_type = builtins.TNone()
-        signature = inspect.signature(host_function)
-
-        if signature.return_annotation is not inspect.Signature.empty:
-            ret_type = self._extract_annot(host_function, signature.return_annotation,
-                                           "return type", loc, fn_kind='subkernel')
-        arg_types = OrderedDict()
-        optarg_types = OrderedDict()
-        for param in signature.parameters.values():
-            if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
-                diag = diagnostic.Diagnostic("error",
-                    "subkernels must only use positional arguments; '{argument}' isn't",
-                    {"argument": param.name},
-                    self._function_loc(function),
-                    notes=self._call_site_note(loc, fn_kind='subkernel'))
-                self.engine.process(diag)
-
-            arg_type = self._type_of_param(function, loc, param, fn_kind='subkernel')
-            if param.default is inspect.Parameter.empty:
-                arg_types[param.name] = arg_type
-            else:
-                optarg_types[param.name] = arg_type
-
-        function_type = types.TSubkernel(arg_types, optarg_types, ret_type,
-                                         sid=self.embedding_map.store_object(host_function),
-                                         destination=host_function.artiq_embedded.destination)
-        self.functions[function] = function_type
-        return function_type
-
     def _quote_rpc(self, function, loc):
         if isinstance(function, SpecializedFunction):
             host_function = function.host_function

@@ -1409,18 +1149,8 @@ class Stitcher:
                 (host_function.artiq_embedded.core_name is None and
                  host_function.artiq_embedded.portable is False and
                  host_function.artiq_embedded.syscall is None and
-                 host_function.artiq_embedded.destination is None and
                  host_function.artiq_embedded.forbidden is False):
             self._quote_rpc(function, loc)
-        elif host_function.artiq_embedded.destination is not None and \
-             host_function.artiq_embedded.destination != self.destination:
-            # treat subkernels as kernels if running on the same device
-            if not 0 < host_function.artiq_embedded.destination <= 255:
-                diag = diagnostic.Diagnostic("error",
-                    "subkernel destination must be between 1 and 255 (inclusive)", {},
-                    self._function_loc(host_function))
-                self.engine.process(diag)
-            self._quote_subkernel(function, loc)
         elif host_function.artiq_embedded.function is not None:
             if host_function.__name__ == "<lambda>":
                 note = diagnostic.Diagnostic("note",

@@ -1444,13 +1174,8 @@ class Stitcher:
                     notes=[note])
                 self.engine.process(diag)

-            destination = host_function.artiq_embedded.destination
-            # remote_fn only for first call in subkernels
-            remote_fn = destination is not None and self.first_call
             self._quote_embedded_function(function,
-                flags=host_function.artiq_embedded.flags,
-                remote_fn=remote_fn)
-            self.first_call = False
+                flags=host_function.artiq_embedded.flags)
         elif host_function.artiq_embedded.syscall is not None:
             # Insert a storage-less global whose type instructs the compiler
             # to perform a system call instead of a regular call.

@ -135,7 +135,6 @@ class NamedValue(Value):
|
||||||
def __init__(self, typ, name):
|
def __init__(self, typ, name):
|
||||||
super().__init__(typ)
|
super().__init__(typ)
|
||||||
self.name, self.function = name, None
|
self.name, self.function = name, None
|
||||||
self.is_removed = False
|
|
||||||
|
|
||||||
def set_name(self, new_name):
|
def set_name(self, new_name):
|
||||||
if self.function is not None:
|
if self.function is not None:
|
||||||
|
@ -236,7 +235,7 @@ class Instruction(User):
|
||||||
self.drop_references()
|
self.drop_references()
|
||||||
# Check this after drop_references in case this
|
# Check this after drop_references in case this
|
||||||
# is a self-referencing phi.
|
# is a self-referencing phi.
|
||||||
assert all(use.is_removed for use in self.uses)
|
assert not any(self.uses)
|
||||||
|
|
||||||
def replace_with(self, value):
|
def replace_with(self, value):
|
||||||
self.replace_all_uses_with(value)
|
self.replace_all_uses_with(value)
|
||||||
|
@ -371,7 +370,7 @@ class BasicBlock(NamedValue):
|
||||||
self.remove_from_parent()
|
self.remove_from_parent()
|
||||||
# Check this after erasing instructions in case the block
|
# Check this after erasing instructions in case the block
|
||||||
# loops into itself.
|
# loops into itself.
|
||||||
assert all(use.is_removed for use in self.uses)
|
assert not any(self.uses)
|
||||||
|
|
||||||
def prepend(self, insn):
|
def prepend(self, insn):
|
||||||
assert isinstance(insn, Instruction)
|
assert isinstance(insn, Instruction)
|
||||||
|
@ -706,81 +705,6 @@ class SetLocal(Instruction):
|
||||||
def value(self):
|
def value(self):
|
||||||
return self.operands[1]
|
return self.operands[1]
|
||||||
|
|
||||||
class GetArgFromRemote(Instruction):
|
|
||||||
"""
|
|
||||||
An instruction that receives function arguments from remote
|
|
||||||
(ie. subkernel in DRTIO context)
|
|
||||||
|
|
||||||
:ivar arg_name: (string) argument name
|
|
||||||
:ivar arg_type: argument type
|
|
||||||
"""
|
|
||||||
|
|
||||||
"""
|
|
||||||
:param arg_name: (string) argument name
|
|
||||||
:param arg_type: argument type
|
|
||||||
"""
|
|
||||||
def __init__(self, arg_name, arg_type, name=""):
|
|
||||||
assert isinstance(arg_name, str)
|
|
||||||
super().__init__([], arg_type, name)
|
|
||||||
self.arg_name = arg_name
|
|
||||||
self.arg_type = arg_type
|
|
||||||
|
|
||||||
def copy(self, mapper):
|
|
||||||
self_copy = super().copy(mapper)
|
|
||||||
self_copy.arg_name = self.arg_name
|
|
||||||
self_copy.arg_type = self.arg_type
|
|
||||||
return self_copy
|
|
||||||
|
|
||||||
def opcode(self):
|
|
||||||
return "getargfromremote({})".format(repr(self.arg_name))
|
|
||||||
|
|
||||||
class GetOptArgFromRemote(GetArgFromRemote):
|
|
||||||
"""
|
|
||||||
An instruction that may or may not retrieve an optional function argument
|
|
||||||
from remote, depending on number of values received by firmware.
|
|
||||||
|
|
||||||
:ivar rcv_count: number of received values,
|
|
||||||
determined by firmware
|
|
||||||
:ivar index: (integer) index of the current argument,
|
|
||||||
in reference to remote arguments
|
|
||||||
"""
|
|
||||||
|
|
||||||
"""
|
|
||||||
:param rcv_count: number of received valuese
|
|
||||||
:param index: (integer) index of the current argument,
|
|
||||||
in reference to remote arguments
|
|
||||||
"""
|
|
||||||
def __init__(self, arg_name, arg_type, rcv_count, index, name=""):
|
|
||||||
super().__init__(arg_name, arg_type, name)
|
|
||||||
self.rcv_count = rcv_count
|
|
||||||
self.index = index
|
|
||||||
|
|
||||||
def copy(self, mapper):
|
|
||||||
self_copy = super().copy(mapper)
|
|
||||||
self_copy.rcv_count = self.rcv_count
|
|
||||||
self_copy.index = self.index
|
|
||||||
return self_copy
|
|
||||||
|
|
||||||
def opcode(self):
|
|
||||||
return "getoptargfromremote({})".format(repr(self.arg_name))
|
|
||||||
|
|
||||||
class SubkernelAwaitArgs(Instruction):
|
|
||||||
"""
|
|
||||||
A builtin instruction that takes min and max received messages as operands,
|
|
||||||
and a list of received types.
|
|
||||||
|
|
||||||
:ivar arg_types: (list of types) types of passed arguments (including optional)
|
|
||||||
"""
|
|
||||||
|
|
||||||
"""
|
|
||||||
:param arg_types: (list of types) types of passed arguments (including optional)
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, operands, arg_types, name=None):
|
|
||||||
assert isinstance(arg_types, list)
|
|
||||||
self.arg_types = arg_types
|
|
||||||
super().__init__(operands, builtins.TNone(), name)
|
|
||||||
|
|
||||||
class GetAttr(Instruction):
|
class GetAttr(Instruction):
|
||||||
"""
|
"""
|
||||||
An intruction that loads an attribute from an object,
|
An intruction that loads an attribute from an object,
|
||||||
|
@ -803,7 +727,7 @@ class GetAttr(Instruction):
|
||||||
typ = obj.type.attributes[attr]
|
typ = obj.type.attributes[attr]
|
||||||
else:
|
else:
|
||||||
typ = obj.type.constructor.attributes[attr]
|
typ = obj.type.constructor.attributes[attr]
|
||||||
if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
|
if types.is_function(typ) or types.is_rpc(typ):
|
||||||
typ = types.TMethod(obj.type, typ)
|
typ = types.TMethod(obj.type, typ)
|
||||||
super().__init__([obj], typ, name)
|
super().__init__([obj], typ, name)
|
||||||
self.attr = attr
|
self.attr = attr
|
||||||
|
@ -1047,42 +971,6 @@ class Builtin(Instruction):
|
||||||
def opcode(self):
|
def opcode(self):
|
||||||
return "builtin({})".format(self.op)
|
return "builtin({})".format(self.op)
|
||||||
|
|
||||||
class BuiltinInvoke(Terminator):
|
|
||||||
"""
|
|
||||||
A builtin operation which can raise exceptions.
|
|
||||||
|
|
||||||
:ivar op: (string) operation name
|
|
||||||
"""
|
|
||||||
|
|
||||||
"""
|
|
||||||
:param op: (string) operation name
|
|
||||||
:param normal: (:class:`BasicBlock`) normal target
|
|
||||||
:param exn: (:class:`BasicBlock`) exceptional target
|
|
||||||
"""
|
|
||||||
def __init__(self, op, operands, typ, normal, exn, name=None):
|
|
||||||
assert isinstance(op, str)
|
|
||||||
for operand in operands: assert isinstance(operand, Value)
|
|
||||||
assert isinstance(normal, BasicBlock)
|
|
||||||
assert isinstance(exn, BasicBlock)
|
|
||||||
if name is None:
|
|
||||||
name = "BLTINV.{}".format(op)
|
|
||||||
super().__init__(operands + [normal, exn], typ, name)
|
|
||||||
self.op = op
|
|
||||||
|
|
||||||
def copy(self, mapper):
|
|
||||||
self_copy = super().copy(mapper)
|
|
||||||
self_copy.op = self.op
|
|
||||||
return self_copy
|
|
||||||
|
|
||||||
def normal_target(self):
|
|
||||||
return self.operands[-2]
|
|
||||||
|
|
||||||
def exception_target(self):
|
|
||||||
return self.operands[-1]
|
|
||||||
|
|
||||||
def opcode(self):
|
|
||||||
return "builtinInvokable({})".format(self.op)
|
|
||||||
|
|
||||||
class Closure(Instruction):
|
class Closure(Instruction):
|
||||||
"""
|
"""
|
||||||
A closure creation operation.
|
A closure creation operation.
|
||||||
|
@ -1301,18 +1189,14 @@ class IndirectBranch(Terminator):
|
||||||
class Return(Terminator):
|
class Return(Terminator):
|
||||||
"""
|
"""
|
||||||
A return instruction.
|
A return instruction.
|
||||||
:param remote_return: (bool)
|
|
||||||
marks a return in subkernel context,
|
|
||||||
where the return value is sent back through DRTIO
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
"""
|
"""
|
||||||
:param value: (:class:`Value`) return value
|
:param value: (:class:`Value`) return value
|
||||||
"""
|
"""
|
||||||
def __init__(self, value, remote_return=False, name=""):
|
def __init__(self, value, name=""):
|
||||||
assert isinstance(value, Value)
|
assert isinstance(value, Value)
|
||||||
super().__init__([value], builtins.TNone(), name)
|
super().__init__([value], builtins.TNone(), name)
|
||||||
self.remote_return = remote_return
|
|
||||||
|
|
||||||
def opcode(self):
|
def opcode(self):
|
||||||
return "return"
|
return "return"
|
||||||
@@ -1361,9 +1245,9 @@ class Raise(Terminator):
         if len(self.operands) > 1:
             return self.operands[1]

-class Resume(Terminator):
+class Reraise(Terminator):
     """
-    A resume instruction.
+    A reraise instruction.
     """

     """

@@ -1377,7 +1261,7 @@ class Resume(Terminator):
         super().__init__(operands, builtins.TNone(), name)

     def opcode(self):
-        return "resume"
+        return "reraise"

     def exception_target(self):
         if len(self.operands) > 0:
@@ -1476,6 +1360,14 @@ class LandingPad(Terminator):
     def cleanup(self):
         return self.operands[0]

+    def erase(self):
+        self.remove_from_parent()
+        # we should erase all clauses as well
+        for block in set(self.operands):
+            block.uses.remove(self)
+            block.erase()
+        assert not any(self.uses)
+
     def clauses(self):
         return zip(self.operands[1:], self.types)

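The added `LandingPad.erase` keeps the IR's def-use bookkeeping consistent: before the landing pad disappears, every clause block must forget that it was used by it, and the clause blocks themselves are erased. A small self-contained sketch of that use-list discipline (toy classes, not ARTIQ's):

```python
class Block:
    def __init__(self, name):
        self.name = name
        self.uses = set()

class Pad:
    def __init__(self, clauses):
        self.operands = clauses
        for block in clauses:
            block.uses.add(self)

    def erase(self):
        # mirror of the added LandingPad.erase: drop ourselves from every
        # operand's use list before letting go of the operands
        for block in set(self.operands):
            block.uses.discard(self)
        self.operands = []

cleanup, handler = Block("cleanup"), Block("handler.catchall")
pad = Pad([cleanup, handler])
pad.erase()
assert not cleanup.uses and not handler.uses
```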
@@ -33,19 +33,9 @@ SECTIONS
     KEEP(*(.eh_frame_hdr))
   } : text : eh_frame

-  .got :
-  {
-    *(.got)
-  } : text
-
-  .got.plt :
-  {
-    *(.got.plt)
-  } : text
-
   .data :
   {
-    *(.data .data.*)
+    *(.data)
   } : data

   .dynamic :

@@ -61,10 +51,6 @@ SECTIONS
     _end = .;
   }

-  /* Kernel stack grows downward from end of memory, so put guard page after
-   * all the program contents. Note: This requires all loaded sections (at
-   * least those accessed) to be explicitly listed in the above!
-   */
   . = ALIGN(0x1000);
   _sstack_guard = .;
 }
@@ -61,6 +61,22 @@ unary_fp_runtime_calls = [
     ("cbrt", "cbrt"),
 ]

+#: float -> float numpy.* math functions lowered to runtime calls.
+unary_fp_runtime_calls = [
+    ("tan", "tan"),
+    ("arcsin", "asin"),
+    ("arccos", "acos"),
+    ("arctan", "atan"),
+    ("sinh", "sinh"),
+    ("cosh", "cosh"),
+    ("tanh", "tanh"),
+    ("arcsinh", "asinh"),
+    ("arccosh", "acosh"),
+    ("arctanh", "atanh"),
+    ("expm1", "expm1"),
+    ("cbrt", "cbrt"),
+]
+
 scipy_special_unary_runtime_calls = [
     ("erf", "erf"),
     ("erfc", "erfc"),
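The comment introduced with the second list says these are float-to-float `numpy.*` functions lowered to runtime calls; each tuple pairs the numpy name with the libm-style symbol the generated code should call. A hedged sketch of how such a table can be consumed as a lookup (the `runtime_symbol` helper is illustrative, not ARTIQ API):

```python
unary_fp_runtime_calls = [
    ("tan", "tan"), ("arcsin", "asin"), ("arccos", "acos"), ("arctan", "atan"),
    ("sinh", "sinh"), ("cosh", "cosh"), ("tanh", "tanh"),
    ("arcsinh", "asinh"), ("arccosh", "acosh"), ("arctanh", "atanh"),
    ("expm1", "expm1"), ("cbrt", "cbrt"),
]

RUNTIME_SYMBOLS = dict(unary_fp_runtime_calls)

def runtime_symbol(numpy_name):
    """Map e.g. numpy.arcsin to the C symbol a lowering pass would call."""
    try:
        return RUNTIME_SYMBOLS[numpy_name]
    except KeyError:
        raise NotImplementedError("no runtime lowering for numpy." + numpy_name)

assert runtime_symbol("arcsin") == "asin"
```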
@@ -10,7 +10,7 @@ string and infers types for it using a trivial :module:`prelude`.
 import os
 from pythonparser import source, diagnostic, parse_buffer
-from . import prelude, types, transforms, analyses, validators, embedding
+from . import prelude, types, transforms, analyses, validators

 class Source:
     def __init__(self, source_buffer, engine=None):

@@ -18,7 +18,7 @@ class Source:
             self.engine = diagnostic.Engine(all_errors_are_fatal=True)
         else:
             self.engine = engine
-        self.embedding_map = embedding.EmbeddingMap()
+        self.embedding_map = None
         self.name, _ = os.path.splitext(os.path.basename(source_buffer.name))

         asttyped_rewriter = transforms.ASTTypedRewriter(engine=engine,

@@ -57,8 +57,7 @@ class Module:
         constness_validator = validators.ConstnessValidator(engine=self.engine)
         artiq_ir_generator = transforms.ARTIQIRGenerator(engine=self.engine,
                                                          module_name=src.name,
-                                                         ref_period=ref_period,
-                                                         embedding_map=self.embedding_map)
+                                                         ref_period=ref_period)
         dead_code_eliminator = transforms.DeadCodeEliminator(engine=self.engine)
         local_access_validator = validators.LocalAccessValidator(engine=self.engine)
         local_demoter = transforms.LocalDemoter()

@@ -84,8 +83,6 @@ class Module:
         constant_hoister.process(self.artiq_ir)
         if remarks:
             invariant_detection.process(self.artiq_ir)
-        # for subkernels: main kernel inferencer output, to be passed to further compilations
-        self.subkernel_arg_types = inferencer.subkernel_arg_types

     def build_llvm_ir(self, target):
         """Compile the module to LLVM IR for the specified target."""
@@ -37,7 +37,6 @@ def globals():

         # ARTIQ decorators
         "kernel": builtins.fn_kernel(),
-        "subkernel": builtins.fn_kernel(),
         "portable": builtins.fn_kernel(),
         "rpc": builtins.fn_kernel(),

@@ -55,10 +54,4 @@ def globals():
         # ARTIQ utility functions
         "rtio_log": builtins.fn_rtio_log(),
         "core_log": builtins.fn_print(),
-
-        # ARTIQ subkernel utility functions
-        "subkernel_await": builtins.fn_subkernel_await(),
-        "subkernel_preload": builtins.fn_subkernel_preload(),
-        "subkernel_send": builtins.fn_subkernel_send(),
-        "subkernel_recv": builtins.fn_subkernel_recv(),
     }
@@ -74,8 +74,6 @@ class Target:
         LLVM target data layout, e.g. ``"E-m:e-p:32:32-i64:32-f64:32-v64:32-v128:32-a:0:32-n32"``
     :var features: (list of string)
         LLVM target CPU features, e.g. ``["mul", "div", "ffl1"]``
-    :var additional_linker_options: (list of string)
-        Linker options for the target in addition to the target-independent ones, e.g. ``["--target2=rel"]``
     :var print_function: (string)
         Name of a formatted print functions (with the signature of ``printf``)
         provided by the target, e.g. ``"printf"``.

@@ -85,18 +83,16 @@ class Target:
     triple = "unknown"
     data_layout = ""
     features = []
-    additional_linker_options = []
     print_function = "printf"
     now_pinning = True

     tool_ld = "ld.lld"
     tool_strip = "llvm-strip"
-    tool_symbolizer = "llvm-symbolizer"
+    tool_addr2line = "llvm-addr2line"
     tool_cxxfilt = "llvm-cxxfilt"

-    def __init__(self, subkernel_id=None):
+    def __init__(self):
         self.llcontext = ll.Context()
-        self.subkernel_id = subkernel_id

     def target_machine(self):
         lltarget = llvm.Target.from_triple(self.triple)

@@ -149,8 +145,7 @@ class Target:
             ir.BasicBlock._dump_loc = False

             type_printer = types.TypePrinter()
-            suffix = "_subkernel_{}".format(self.subkernel_id) if self.subkernel_id is not None else ""
-            _dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", suffix + ".txt",
+            _dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", ".txt",
                   lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir))

             llmod = module.build_llvm_ir(self)

@@ -162,12 +157,12 @@ class Target:
                 _dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod))
                 raise

-            _dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", suffix + "_unopt.ll",
+            _dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll",
                   lambda: str(llparsedmod))

             self.optimize(llparsedmod)

-            _dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", suffix + ".ll",
+            _dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll",
                   lambda: str(llparsedmod))

             return llparsedmod

@@ -186,7 +181,6 @@ class Target:
     def link(self, objects):
         """Link the relocatable objects into a shared library for this target."""
         with RunTool([self.tool_ld, "-shared", "--eh-frame-hdr"] +
-                     self.additional_linker_options +
                      ["-T" + os.path.join(os.path.dirname(__file__), "kernel.ld")] +
                      ["{{obj{}}}".format(index) for index in range(len(objects))] +
                      ["-x"] +

@@ -218,10 +212,9 @@ class Target:
         # just after the call. Offset them back to get an address somewhere
         # inside the call instruction (or its delay slot), since that's what
         # the backtrace entry should point at.
-        last_inlined = None
         offset_addresses = [hex(addr - 1) for addr in addresses]
-        with RunTool([self.tool_symbolizer, "--addresses", "--functions", "--inlines",
-                      "--demangle", "--output-style=GNU", "--exe={library}"] + offset_addresses,
+        with RunTool([self.tool_addr2line, "--addresses", "--functions", "--inlines",
+                      "--demangle", "--exe={library}"] + offset_addresses,
                      library=library) \
                 as results:
             lines = iter(results["__stdout__"].read().rstrip().split("\n"))

@@ -234,11 +227,9 @@ class Target:
                 if address_or_function[:2] == "0x":
                     address = int(address_or_function[2:], 16) + 1 # remove offset
                     function = next(lines)
-                    inlined = False
                 else:
                     address = backtrace[-1][4] # inlined
                     function = address_or_function
-                    inlined = True
                 location = next(lines)

                 filename, line = location.rsplit(":", 1)

@@ -249,17 +240,10 @@ class Target:
                 else:
                     line = int(line)
                 # can't get column out of addr2line D:
-                if inlined:
-                    last_inlined.append((filename, line, -1, function, address))
-                else:
-                    last_inlined = []
-                    backtrace.append((filename, line, -1, function, address,
-                                      last_inlined))
+                backtrace.append((filename, line, -1, function, address))
         return backtrace

     def demangle(self, names):
-        if not any(names):
-            return names
         with RunTool([self.tool_cxxfilt] + names) as results:
             return results["__stdout__"].read().rstrip().split("\n")

@@ -267,43 +251,40 @@ class NativeTarget(Target):
     def __init__(self):
         super().__init__()
         self.triple = llvm.get_default_triple()
-        self.data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
+        host_data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)

 class RV32IMATarget(Target):
     triple = "riscv32-unknown-linux"
     data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
     features = ["m", "a"]
-    additional_linker_options = ["-m", "elf32lriscv"]
     print_function = "core_log"
     now_pinning = True

     tool_ld = "ld.lld"
     tool_strip = "llvm-strip"
-    tool_symbolizer = "llvm-symbolizer"
+    tool_addr2line = "llvm-addr2line"
     tool_cxxfilt = "llvm-cxxfilt"

 class RV32GTarget(Target):
     triple = "riscv32-unknown-linux"
     data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
     features = ["m", "a", "f", "d"]
-    additional_linker_options = ["-m", "elf32lriscv"]
     print_function = "core_log"
     now_pinning = True

     tool_ld = "ld.lld"
     tool_strip = "llvm-strip"
-    tool_symbolizer = "llvm-symbolizer"
+    tool_addr2line = "llvm-addr2line"
     tool_cxxfilt = "llvm-cxxfilt"

 class CortexA9Target(Target):
     triple = "armv7-unknown-linux-gnueabihf"
     data_layout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
     features = ["dsp", "fp16", "neon", "vfp3"]
-    additional_linker_options = ["-m", "armelf_linux_eabi", "--target2=rel"]
     print_function = "core_log"
     now_pinning = False

     tool_ld = "ld.lld"
     tool_strip = "llvm-strip"
-    tool_symbolizer = "llvm-symbolizer"
+    tool_addr2line = "llvm-addr2line"
     tool_cxxfilt = "llvm-cxxfilt"
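The `symbolize` hunks show the output contract both variants rely on: each frame is an address line (prefixed `0x`), then a function name, then a `file:line` location, with inlined frames omitting the address line; return addresses were offset by one before the call, hence the `+ 1` when reading them back. A simplified, self-contained sketch of that walk (toy input, not the exact ARTIQ parser):

```python
def parse_backtrace(stdout):
    backtrace = []
    lines = iter(stdout.rstrip().split("\n"))
    for address_or_function in lines:
        if address_or_function[:2] == "0x":
            address = int(address_or_function[2:], 16) + 1   # undo the addr - 1 offset
            function = next(lines)
        else:
            address = backtrace[-1][4]                       # inlined frame, same address
            function = address_or_function
        filename, line = next(lines).rsplit(":", 1)
        line = int(line) if line.isdigit() else -1
        backtrace.append((filename, line, -1, function, address))
    return backtrace

sample = "0x3ff\nmain\nkernel.py:12\nhelper\nkernel.py:7"
assert parse_backtrace(sample)[1][3] == "helper"
```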
@@ -30,9 +30,8 @@ def main():
     device_db_path = os.path.join(os.path.dirname(sys.argv[1]), "device_db.py")
     device_mgr = DeviceManager(DeviceDB(device_db_path))

-    dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.mdb")
-    dataset_db = DatasetDB(dataset_db_path)
-    dataset_mgr = DatasetManager()
+    dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.pyon")
+    dataset_mgr = DatasetManager(DatasetDB(dataset_db_path))

     argument_mgr = ProcessArgumentManager({})

@@ -69,7 +68,5 @@ def main():
     benchmark(lambda: target.strip(elf_shlib),
               "Stripping debug information")

-    dataset_db.close_db()
-
 if __name__ == "__main__":
     main()
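In the left-hand variant the dataset database is opened explicitly and must be released with `close_db()` at the end of `main()`. If that pattern is kept, a context manager makes the cleanup unconditional even when a benchmark step raises; `open_dataset_db` below is a hypothetical wrapper, assuming only that the factory returns an object with a `close_db()` method as in the hunk above:

```python
from contextlib import contextmanager

@contextmanager
def open_dataset_db(make_db, path):
    db = make_db(path)          # e.g. DatasetDB(dataset_db_path)
    try:
        yield db
    finally:
        db.close_db()           # mirrors the explicit call removed on the right-hand side
```

Usage would then read `with open_dataset_db(DatasetDB, dataset_db_path) as dataset_db: ...` around the benchmark body.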
@@ -8,7 +8,6 @@ semantics explicitly.

 from collections import OrderedDict, defaultdict
 from functools import reduce
-from itertools import chain
 from pythonparser import algorithm, diagnostic, ast
 from .. import types, builtins, asttyped, ir, iodelay

@@ -62,9 +61,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
         the basic block to which ``return`` will transfer control
     :ivar unwind_target: (:class:`ir.BasicBlock` or None)
         the basic block to which unwinding will transfer control
-    :ivar catch_clauses: (list of (:class:`ir.BasicBlock`, :class:`types.Type` or None))
-        a list of catch clauses that should be appended to inner try block
-        landingpad
     :ivar final_branch: (function (target: :class:`ir.BasicBlock`, block: :class:`ir.BasicBlock)
         or None)
         the function that appends to ``block`` a jump through the ``finally`` statement

@@ -92,9 +88,8 @@ class ARTIQIRGenerator(algorithm.Visitor):

     _size_type = builtins.TInt32()

-    def __init__(self, module_name, engine, ref_period, embedding_map):
+    def __init__(self, module_name, engine, ref_period):
         self.engine = engine
-        self.embedding_map = embedding_map
         self.functions = []
         self.name = [module_name] if module_name != "" else []
         self.ref_period = ir.Constant(ref_period, builtins.TFloat())

@@ -107,13 +102,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
         self.current_private_env = None
         self.current_args = None
         self.current_assign = None
-        self.current_exception = None
-        self.current_remote_fn = False
         self.break_target = None
         self.continue_target = None
         self.return_target = None
         self.unwind_target = None
-        self.catch_clauses = []
         self.final_branch = None
         self.function_map = dict()
         self.variable_map = dict()

@@ -212,8 +204,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             old_priv_env, self.current_private_env = self.current_private_env, priv_env

             self.generic_visit(node)
-            self.terminate(ir.Return(ir.Constant(None, builtins.TNone()),
-                                     remote_return=self.current_remote_fn))
+            self.terminate(ir.Return(ir.Constant(None, builtins.TNone())))

             return self.functions
         finally:

@@ -296,8 +287,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
         old_block, self.current_block = self.current_block, entry

         old_globals, self.current_globals = self.current_globals, node.globals_in_scope
-        old_remote_fn = self.current_remote_fn
-        self.current_remote_fn = getattr(node, "remote_fn", False)

         env_without_globals = \
             {var: node.typing_env[var]

@@ -330,8 +319,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 self.terminate(ir.Return(result))
             elif builtins.is_none(typ.ret):
                 if not self.current_block.is_terminated():
-                    self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone()),
-                                                        remote_return=self.current_remote_fn))
+                    self.current_block.append(ir.Return(ir.Constant(None, builtins.TNone())))
             else:
                 if not self.current_block.is_terminated():
                     if len(self.current_block.predecessors()) != 0:

@@ -350,7 +338,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
             self.current_block = old_block
             self.current_globals = old_globals
             self.current_env = old_env
-            self.current_remote_fn = old_remote_fn
             if not is_lambda:
                 self.current_private_env = old_priv_env

@@ -373,8 +360,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             return_value = self.visit(node.value)

         if self.return_target is None:
-            self.append(ir.Return(return_value,
-                                  remote_return=self.current_remote_fn))
+            self.append(ir.Return(return_value))
         else:
             self.append(ir.SetLocal(self.current_private_env, "$return", return_value))
             self.append(ir.Branch(self.return_target))
@@ -652,10 +638,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
         loc_column = ir.Constant(loc.column(), builtins.TInt32())
         loc_function = ir.Constant(".".join(self.name), builtins.TStr())

-        self.append(ir.SetAttr(exn, "#__file__", loc_file))
-        self.append(ir.SetAttr(exn, "#__line__", loc_line))
-        self.append(ir.SetAttr(exn, "#__col__", loc_column))
-        self.append(ir.SetAttr(exn, "#__func__", loc_function))
+        self.append(ir.SetAttr(exn, "__file__", loc_file))
+        self.append(ir.SetAttr(exn, "__line__", loc_line))
+        self.append(ir.SetAttr(exn, "__col__", loc_column))
+        self.append(ir.SetAttr(exn, "__func__", loc_function))

         if self.unwind_target is not None:
             self.append(ir.Raise(exn, self.unwind_target))

@@ -663,9 +649,9 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 self.append(ir.Raise(exn))
         else:
             if self.unwind_target is not None:
-                self.append(ir.Resume(self.unwind_target))
+                self.append(ir.Reraise(self.unwind_target))
             else:
-                self.append(ir.Resume())
+                self.append(ir.Reraise())

     def visit_Raise(self, node):
         if node.exc is not None and types.is_exn_constructor(node.exc.type):

@@ -675,9 +661,6 @@ class ARTIQIRGenerator(algorithm.Visitor):

     def visit_Try(self, node):
         dispatcher = self.add_block("try.dispatch")
-        cleanup = self.add_block('handler.cleanup')
-        landingpad = ir.LandingPad(cleanup)
-        dispatcher.append(landingpad)

         if any(node.finalbody):
             # k for continuation
@@ -693,6 +676,15 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 final_targets.append(target)
                 final_paths.append(block)

+            final_exn_targets = []
+            final_exn_paths = []
+            # raise has to be treated differently
+            # we cannot follow indirectbr for local access validation, so we
+            # have to construct the control flow explicitly
+            def exception_final_branch(target, block):
+                final_exn_targets.append(target)
+                final_exn_paths.append(block)
+
             if self.break_target is not None:
                 break_proxy = self.add_block("try.break")
                 old_break, self.break_target = self.break_target, break_proxy

@@ -712,51 +704,16 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 value = return_action.append(ir.GetLocal(self.current_private_env, "$return"))
                 return_action.append(ir.Return(value))
                 final_branch(return_action, return_proxy)
-        else:
-            landingpad.has_cleanup = False
-
-        # we should propagate the clauses to nested try catch blocks
-        # so nested try catch will jump to our clause if the inner one does not
-        # match
-        # note that the phi instruction here requires some hack, see
-        # llvm_ir_generator process_function for details
-        clauses = []
-        found_catch_all = False
-        for handler_node in node.handlers:
-            if found_catch_all:
-                self.warn_unreachable(handler_node)
-                continue
-            exn_type = handler_node.name_type.find()
-            if handler_node.filter is not None and \
-                    not builtins.is_exception(exn_type, 'Exception'):
-                handler = self.add_block("handler." + exn_type.name)
-                phi = ir.Phi(builtins.TException(), 'exn')
-                handler.append(phi)
-                clauses.append((handler, exn_type, phi))
-            else:
-                handler = self.add_block("handler.catchall")
-                phi = ir.Phi(builtins.TException(), 'exn')
-                handler.append(phi)
-                clauses.append((handler, None, phi))
-                found_catch_all = True
-
-        all_clauses = clauses[:]
-        for clause in self.catch_clauses:
-            # if the last clause is accept all, do not add further clauses
-            if len(all_clauses) == 0 or all_clauses[-1][1] is not None:
-                all_clauses.append(clause)

         body = self.add_block("try.body")
         self.append(ir.Branch(body))
         self.current_block = body

-        old_unwind, self.unwind_target = self.unwind_target, dispatcher
-        old_clauses, self.catch_clauses = self.catch_clauses, all_clauses
         try:
+            old_unwind, self.unwind_target = self.unwind_target, dispatcher
             self.visit(node.body)
         finally:
             self.unwind_target = old_unwind
-            self.catch_clauses = old_clauses

         if not self.current_block.is_terminated():
             self.visit(node.orelse)
@@ -765,149 +722,95 @@ class ARTIQIRGenerator(algorithm.Visitor):
         body = self.current_block

         if any(node.finalbody):
-            # if we have a final block, we should not append clauses to our
-            # landingpad or we will skip the finally block.
-            # when the finally block calls resume, it will unwind to the outer
-            # try catch block automatically
-            all_clauses = clauses
-            # reset targets
             if self.break_target:
                 self.break_target = old_break
             if self.continue_target:
                 self.continue_target = old_continue
             self.return_target = old_return

-        if any(node.finalbody):
-            # create new unwind target for cleanup
-            final_dispatcher = self.add_block("try.final.dispatch")
-            final_landingpad = ir.LandingPad(cleanup)
-            final_dispatcher.append(final_landingpad)
-
-            # make sure that exception clauses are unwinded to the finally block
-            old_unwind, self.unwind_target = self.unwind_target, final_dispatcher
-
-        if any(node.finalbody):
-            # if we have a while:try/finally continue must execute finally
-            # before continuing the while
-            redirect = final_branch
-        else:
-            redirect = lambda dest, proxy: proxy.append(ir.Branch(dest))
-
-        # we need to set break/continue/return to execute end_catch
-        if self.break_target is not None:
-            break_proxy = self.add_block("try.break")
-            break_proxy.append(ir.Builtin("end_catch", [], builtins.TNone()))
-            old_break, self.break_target = self.break_target, break_proxy
-            redirect(old_break, break_proxy)
-
-        if self.continue_target is not None:
-            continue_proxy = self.add_block("try.continue")
-            continue_proxy.append(ir.Builtin("end_catch", [],
-                                             builtins.TNone()))
-            old_continue, self.continue_target = self.continue_target, continue_proxy
-            redirect(old_continue, continue_proxy)
-
-        return_proxy = self.add_block("try.return")
-        return_proxy.append(ir.Builtin("end_catch", [], builtins.TNone()))
-        old_return, self.return_target = self.return_target, return_proxy
-        old_return_target = old_return
-        if old_return_target is None:
-            old_return_target = self.add_block("try.doreturn")
-            value = old_return_target.append(ir.GetLocal(self.current_private_env, "$return"))
-            old_return_target.append(ir.Return(value))
-        redirect(old_return_target, return_proxy)
+            old_final_branch, self.final_branch = self.final_branch, exception_final_branch
+
+        cleanup = self.add_block('handler.cleanup')
+        landingpad = dispatcher.append(ir.LandingPad(cleanup))
+        if not any(node.finalbody):
+            landingpad.has_cleanup = False

         handlers = []
-        for (handler_node, (handler, exn_type, phi)) in zip(node.handlers, clauses):
+        for handler_node in node.handlers:
+            exn_type = handler_node.name_type.find()
+            if handler_node.filter is not None and \
+                    not builtins.is_exception(exn_type, 'Exception'):
+                handler = self.add_block("handler." + exn_type.name)
+                landingpad.add_clause(handler, exn_type)
+            else:
+                handler = self.add_block("handler.catchall")
+                landingpad.add_clause(handler, None)
+
             self.current_block = handler
             if handler_node.name is not None:
-                exn = self.append(ir.Builtin("exncast", [phi], handler_node.name_type))
+                exn = self.append(ir.Builtin("exncast", [landingpad], handler_node.name_type))
                 self._set_local(handler_node.name, exn)
             self.visit(handler_node.body)
-            # only need to call end_catch if the current block is not terminated
-            # other possible paths: break/continue/return/raise
-            # we will call end_catch in the first 3 cases, and we should not
-            # end_catch in the last case for nested exception
-            if not self.current_block.is_terminated():
-                self.append(ir.Builtin("end_catch", [], builtins.TNone()))
             post_handler = self.current_block
-            handlers.append(post_handler)
-
-        # branch to all possible clauses, including those from outer try catch
-        # block
-        # if we have a finally block, all_clauses will not include those from
-        # the outer block
-        for (handler, clause, phi) in all_clauses:
-            phi.add_incoming(landingpad, dispatcher)
-            landingpad.add_clause(handler, clause)
-
-        if self.break_target:
-            self.break_target = old_break
-        if self.continue_target:
-            self.continue_target = old_continue
-        self.return_target = old_return
+            handlers.append((handler, post_handler))

         if any(node.finalbody):
             # Finalize and continue after try statement.
-            self.unwind_target = old_unwind
-            # Exception path
-            finalizer_reraise = self.add_block("finally.resume")
-            self.current_block = finalizer_reraise
-            self.visit(node.finalbody)
-            self.terminate(ir.Resume(self.unwind_target))
-            cleanup.append(ir.Branch(finalizer_reraise))
-            # Normal path
+            self.final_branch = old_final_branch
+            for (i, (target, block)) in enumerate(zip(final_exn_targets, final_exn_paths)):
+                finalizer = self.add_block(f"finally{i}")
+                self.current_block = block
+                self.terminate(ir.Branch(finalizer))
+                self.current_block = finalizer
+                self.visit(node.finalbody)
+                self.terminate(ir.Branch(target))
+
             finalizer = self.add_block("finally")
             self.current_block = finalizer

             self.visit(node.finalbody)
             post_finalizer = self.current_block
-            self.current_block = tail = self.add_block("try.tail")
+
+            # Finalize and reraise. Separate from previous case to expose flow
+            # to LocalAccessValidator.
+            finalizer_reraise = self.add_block("finally.reraise")
+            self.current_block = finalizer_reraise
+            self.visit(node.finalbody)
+            self.terminate(ir.Reraise(self.unwind_target))
+
+        self.current_block = tail = self.add_block("try.tail")
+        if any(node.finalbody):
             final_targets.append(tail)

-            # if final block is not terminated, branch to tail
+            for block in final_paths:
+                block.append(ir.Branch(finalizer))
+
+            if not body.is_terminated():
+                body.append(ir.SetLocal(final_state, "$cont", tail))
+                body.append(ir.Branch(finalizer))
+
+            cleanup.append(ir.Branch(finalizer_reraise))
+
+            for handler, post_handler in handlers:
+                if not post_handler.is_terminated():
+                    post_handler.append(ir.SetLocal(final_state, "$cont", tail))
+                    post_handler.append(ir.Branch(finalizer))
+
             if not post_finalizer.is_terminated():
                 dest = post_finalizer.append(ir.GetLocal(final_state, "$cont"))
                 post_finalizer.append(ir.IndirectBranch(dest, final_targets))
-            # make sure proxies will branch to finalizer
-            for block in final_paths:
-                if finalizer in block.predecessors():
-                    # avoid producing irreducible graphs
-                    # generate a new finalizer
-                    self.current_block = tmp_finalizer = self.add_block("finally.tmp")
-                    self.visit(node.finalbody)
-                    if not self.current_block.is_terminated():
-                        assert isinstance(block.instructions[-1], ir.SetLocal)
-                        self.current_block.append(ir.Branch(block.instructions[-1].operands[-1]))
-                        block.instructions[-1].erase()
-                    block.append(ir.Branch(tmp_finalizer))
-                    self.current_block = tail
-                else:
-                    block.append(ir.Branch(finalizer))
-            # if no raise in body/handlers, branch to finalizer
-            for block in chain([body], handlers):
-                if not block.is_terminated():
-                    if finalizer in block.predecessors():
-                        # similar to the above case
-                        self.current_block = tmp_finalizer = self.add_block("finally.tmp")
-                        self.visit(node.finalbody)
-                        self.terminate(ir.Branch(tail))
-                        block.append(ir.Branch(tmp_finalizer))
-                        self.current_block = tail
-                    else:
-                        block.append(ir.SetLocal(final_state, "$cont", tail))
-                        block.append(ir.Branch(finalizer))
         else:
-            self.current_block = tail = self.add_block("try.tail")
             if not body.is_terminated():
                 body.append(ir.Branch(tail))

-            cleanup.append(ir.Resume(self.unwind_target))
+            cleanup.append(ir.Reraise(self.unwind_target))

-            for handler in handlers:
-                if not handler.is_terminated():
-                    handler.append(ir.Branch(tail))
+            for handler, post_handler in handlers:
+                if not post_handler.is_terminated():
+                    post_handler.append(ir.Branch(tail))

     def _try_finally(self, body_gen, finally_gen, name):
         dispatcher = self.add_block("{}.dispatch".format(name))
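Both versions of `visit_Try` funnel every normal exit through a single `finally` body: each incoming path first stores its continuation label in the `$cont` slot (`ir.SetLocal(final_state, "$cont", ...)`), and after the finalizer runs, an `ir.IndirectBranch` dispatches on that stored label. A plain-Python model of that scheme, with ordinary values standing in for blocks:

```python
def run_with_finally(paths, finalbody, final_targets):
    """Each path records where to resume, the shared finally body runs once
    per entry, then the stored label selects the continuation block."""
    resumed_at = []
    for label in paths:                         # e.g. "tail", "return", "break"
        cont = label                            # SetLocal(final_state, "$cont", label)
        finalbody()                             # the single "finally" block
        resumed_at.append(final_targets[cont])  # IndirectBranch(dest, final_targets)
    return resumed_at

log = []
targets = {"tail": "try.tail", "return": "fn.return"}
assert run_with_finally(["tail", "return"], lambda: log.append("finally"),
                        targets) == ["try.tail", "fn.return"]
assert log == ["finally", "finally"]
```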
@@ -926,7 +829,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
         self.current_block = self.add_block("{}.cleanup".format(name))
         dispatcher.append(ir.LandingPad(self.current_block))
         finally_gen()
-        self.terminate(ir.Resume(self.unwind_target))
+        self.raise_exn()

         self.current_block = self.post_body

@@ -1205,27 +1108,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             finally:
                 self.current_assign = old_assign

-        if types.is_tuple(node.value.type):
-            assert isinstance(node.slice, ast.Index), \
-                "Internal compiler error: tuple index should be an Index"
-            assert isinstance(node.slice.value, ast.Num), \
-                "Internal compiler error: tuple index should be a constant"
-
-            if self.current_assign is not None:
-                diag = diagnostic.Diagnostic("error",
-                    "cannot assign to a tuple element",
-                    {}, node.loc)
-                self.engine.process(diag)
-
-            index = node.slice.value.n
-            indexed = self.append(
-                ir.GetAttr(value, index, name="{}.e{}".format(value.name, index)),
-                loc=node.loc
-            )
-
-            return indexed
-
-        elif isinstance(node.slice, ast.Index):
+        if isinstance(node.slice, ast.Index):
             try:
                 old_assign, self.current_assign = self.current_assign, None
                 index = self.visit(node.slice.value)
@@ -2219,13 +2102,11 @@ class ARTIQIRGenerator(algorithm.Visitor):
         return phi

     # Keep this function with builtins.TException.attributes.
-    def alloc_exn(self, typ, message=None, param0=None, param1=None,
-                  param2=None, nomsgcheck=False):
+    def alloc_exn(self, typ, message=None, param0=None, param1=None, param2=None):
         typ = typ.find()
         name = "{}:{}".format(typ.id, typ.name)
-        name_id = self.embedding_map.store_str(name)
         attributes = [
-            ir.Constant(name_id, builtins.TInt32()),      # typeinfo
+            ir.Constant(name, builtins.TStr()),           # typeinfo
             ir.Constant("<not thrown>", builtins.TStr()), # file
             ir.Constant(0, builtins.TInt32()),            # line
             ir.Constant(0, builtins.TInt32()),            # column

@@ -2234,16 +2115,8 @@ class ARTIQIRGenerator(algorithm.Visitor):

         if message is None:
             attributes.append(ir.Constant(typ.name, builtins.TStr()))
-        elif isinstance(message, ir.Constant) or nomsgcheck:
-            attributes.append(message) # message
         else:
-            diag = diagnostic.Diagnostic(
-                "error",
-                "only constant exception messages are supported",
-                {},
-                self.current_loc if message.loc is None else message.loc
-            )
-            self.engine.process(diag)
+            attributes.append(message) # message

         param_type = builtins.TInt64()
         for param in [param0, param1, param2]:
@@ -2531,89 +2404,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 or types.is_builtin(typ, "at_mu"):
             return self.append(ir.Builtin(typ.name,
                 [self.visit(arg) for arg in node.args], node.type))
-        elif types.is_builtin(typ, "subkernel_await"):
-            if len(node.args) == 2 and len(node.keywords) == 0:
-                fn = node.args[0].type
-                timeout = self.visit(node.args[1])
-            elif len(node.args) == 1 and len(node.keywords) == 0:
-                fn = node.args[0].type
-                timeout = ir.Constant(-1, builtins.TInt64())
-            else:
-                assert False
-            if types.is_method(fn):
-                fn = types.get_method_function(fn)
-            sid = ir.Constant(fn.sid, builtins.TInt32())
-            if not builtins.is_none(fn.ret):
-                if self.unwind_target is None:
-                    ret = self.append(ir.Builtin("subkernel_retrieve_return", [sid, timeout], fn.ret))
-                else:
-                    after_invoke = self.add_block("invoke")
-                    ret = self.append(ir.BuiltinInvoke("subkernel_retrieve_return", [sid, timeout],
-                                                       fn.ret, after_invoke, self.unwind_target))
-                    self.current_block = after_invoke
-            else:
-                ret = ir.Constant(None, builtins.TNone())
-                if self.unwind_target is None:
-                    self.append(ir.Builtin("subkernel_await_finish", [sid, timeout], builtins.TNone()))
-                else:
-                    after_invoke = self.add_block("invoke")
-                    self.append(ir.BuiltinInvoke("subkernel_await_finish", [sid, timeout],
-                                                 builtins.TNone(), after_invoke, self.unwind_target))
-                    self.current_block = after_invoke
-            return ret
-        elif types.is_builtin(typ, "subkernel_preload"):
-            if len(node.args) == 1 and len(node.keywords) == 0:
-                fn = node.args[0].type
-            else:
-                assert False
-            if types.is_method(fn):
-                fn = types.get_method_function(fn)
-            sid = ir.Constant(fn.sid, builtins.TInt32())
-            dest = ir.Constant(fn.destination, builtins.TInt32())
-            return self.append(ir.Builtin("subkernel_preload", [sid, dest], builtins.TNone()))
-        elif types.is_builtin(typ, "subkernel_send"):
-            if len(node.args) == 3 and len(node.keywords) == 0:
-                dest = self.visit(node.args[0])
-                name = node.args[1].s
-                value = self.visit(node.args[2])
-            else:
-                assert False
-            msg_id, msg = self.embedding_map.store_subkernel_message(name, value.type, "send", node.loc)
-            msg_id = ir.Constant(msg_id, builtins.TInt32())
-            if value.type != msg.value_type:
-                diag = diagnostic.Diagnostic("error",
-                    "type mismatch for subkernel message '{name}', receiver expects {recv} while sending {send}",
-                    {"name": name, "recv": msg.value_type, "send": value.type},
-                    node.loc)
-                self.engine.process(diag)
-            return self.append(ir.Builtin("subkernel_send", [msg_id, dest, value], builtins.TNone()))
-        elif types.is_builtin(typ, "subkernel_recv"):
-            if len(node.args) == 2 and len(node.keywords) == 0:
-                name = node.args[0].s
-                vartype = node.args[1].value
-                timeout = ir.Constant(-1, builtins.TInt64())
-            elif len(node.args) == 3 and len(node.keywords) == 0:
-                name = node.args[0].s
-                vartype = node.args[1].value
-                timeout = self.visit(node.args[2])
-            else:
-                assert False
-            msg_id, msg = self.embedding_map.store_subkernel_message(name, vartype, "recv", node.loc)
-            msg_id = ir.Constant(msg_id, builtins.TInt32())
-            if vartype != msg.value_type:
-                diag = diagnostic.Diagnostic("error",
-                    "type mismatch for subkernel message '{name}', receiver expects {recv} while sending {send}",
-                    {"name": name, "recv": vartype, "send": msg.value_type},
-                    node.loc)
-                self.engine.process(diag)
-            if self.unwind_target is None:
-                ret = self.append(ir.Builtin("subkernel_recv", [msg_id, timeout], vartype))
-            else:
-                after_invoke = self.add_block("invoke")
-                ret = self.append(ir.BuiltinInvoke("subkernel_recv", [msg_id, timeout],
-                                                   vartype, after_invoke, self.unwind_target))
-                self.current_block = after_invoke
-            return ret
         elif types.is_exn_constructor(typ):
             return self.alloc_exn(node.type, *[self.visit(arg_node) for arg_node in node.args])
         elif types.is_constructor(typ):
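The removed subkernel lowering repeats one pattern for every operation that can raise: with no unwind target it emits a plain `Builtin`; otherwise it emits the invoke form with a fresh `invoke` block as the normal successor and the unwind target as the exceptional one, and code generation continues in the new block. A runnable toy model of that choice (simple tuples and lists stand in for IR instructions and blocks):

```python
def emit_raising_op(block, op, unwind_target, new_block):
    if unwind_target is None:
        block.append(("builtin", op))                            # ir.Builtin(...)
        return block
    after_invoke = new_block("invoke")
    block.append(("invoke", op, after_invoke, unwind_target))    # ir.BuiltinInvoke(...)
    return after_invoke                                          # generation resumes here

blocks = {}
def new_block(name):
    blocks[name] = []
    return blocks[name]

entry = []
current = emit_raising_op(entry, "subkernel_recv", "try.dispatch", new_block)
assert current is blocks["invoke"]
assert entry[-1][2] is current and entry[-1][3] == "try.dispatch"
```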
@@ -2625,8 +2415,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
                 node.loc)
             self.engine.process(diag)

-    def _user_call(self, callee, positional, keywords, arg_exprs={}, remote_fn=False):
-        if types.is_function(callee.type) or types.is_rpc(callee.type) or types.is_subkernel(callee.type):
+    def _user_call(self, callee, positional, keywords, arg_exprs={}):
+        if types.is_function(callee.type) or types.is_rpc(callee.type):
             func = callee
             self_arg = None
             fn_typ = callee.type

@@ -2641,51 +2431,16 @@ class ARTIQIRGenerator(algorithm.Visitor):
         else:
             assert False

-        if types.is_rpc(fn_typ) or types.is_subkernel(fn_typ):
-            if self_arg is None or types.is_subkernel(fn_typ):
-                # self is not passed to subkernels by remote
+        if types.is_rpc(fn_typ):
+            if self_arg is None:
                 args = positional
-            elif self_arg is not None:
+            else:
                 args = [self_arg] + positional

             for keyword in keywords:
                 arg = keywords[keyword]
                 args.append(self.append(ir.Alloc([ir.Constant(keyword, builtins.TStr()), arg],
                                                  ir.TKeyword(arg.type))))
-        elif remote_fn:
-            assert self_arg is None
-            assert len(fn_typ.args) >= len(positional)
-            assert len(keywords) == 0 # no keyword support
-            args = [None] * fn_typ.arity()
-            index = 0
-            # fill in first available args
-            for arg in positional:
-                args[index] = arg
-                index += 1
-
-            # remaining args are received through DRTIO
-            if index < len(args):
-                # min/max args received remotely (minus already filled)
-                offset = index
-                min_args = ir.Constant(len(fn_typ.args)-offset, builtins.TInt8())
-                max_args = ir.Constant(fn_typ.arity()-offset, builtins.TInt8())
-
-                arg_types = list(fn_typ.args.items())[offset:]
-                arg_type_list = [a[1] for a in arg_types] + [a[1] for a in fn_typ.optargs.items()]
-                rcvd_count = self.append(ir.SubkernelAwaitArgs([min_args, max_args], arg_type_list))
-                # obligatory arguments
-                for arg_name, arg_type in arg_types:
-                    args[index] = self.append(ir.GetArgFromRemote(arg_name, arg_type,
-                                                                  name="ARG.{}".format(arg_name)))
-                    index += 1
-
-                # optional arguments
-                for optarg_name, optarg_type in fn_typ.optargs.items():
-                    idx = ir.Constant(index-offset, builtins.TInt8())
-                    args[index] = \
-                        self.append(ir.GetOptArgFromRemote(optarg_name, optarg_type, rcvd_count, idx))
-                    index += 1
         else:
             args = [None] * (len(fn_typ.args) + len(fn_typ.optargs))

@@ -2771,8 +2526,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             else:
                 assert False, "Broadcasting for {} arguments not implemented".format(len)
         else:
-            remote_fn = getattr(node, "remote_fn", False)
-            insn = self._user_call(callee, args, keywords, node.arg_exprs, remote_fn)
+            insn = self._user_call(callee, args, keywords, node.arg_exprs)
         if isinstance(node.func, asttyped.AttributeT):
             attr_node = node.func
             self.method_map[(attr_node.value.type.find(),

@@ -2823,12 +2577,11 @@ class ARTIQIRGenerator(algorithm.Visitor):
             old_final_branch, self.final_branch = self.final_branch, None
             old_unwind, self.unwind_target = self.unwind_target, None

-            exn = self.alloc_exn(builtins.TException("AssertionError"),
-                                 message=msg, nomsgcheck=True)
-            self.append(ir.SetAttr(exn, "#__file__", file))
-            self.append(ir.SetAttr(exn, "#__line__", line))
-            self.append(ir.SetAttr(exn, "#__col__", col))
-            self.append(ir.SetAttr(exn, "#__func__", function))
+            exn = self.alloc_exn(builtins.TException("AssertionError"), message=msg)
+            self.append(ir.SetAttr(exn, "__file__", file))
+            self.append(ir.SetAttr(exn, "__line__", line))
+            self.append(ir.SetAttr(exn, "__col__", col))
+            self.append(ir.SetAttr(exn, "__func__", function))
             self.append(ir.Raise(exn))
         finally:
             self.current_function = old_func

@@ -2964,15 +2717,14 @@ class ARTIQIRGenerator(algorithm.Visitor):

             format_string += ")"
         elif builtins.is_exception(value.type):
-            # message may not be an actual string...
-            # so we cannot really print it
-            name = self.append(ir.GetAttr(value, "#__name__"))
-            param1 = self.append(ir.GetAttr(value, "#__param0__"))
-            param2 = self.append(ir.GetAttr(value, "#__param1__"))
-            param3 = self.append(ir.GetAttr(value, "#__param2__"))
+            name = self.append(ir.GetAttr(value, "__name__"))
+            message = self.append(ir.GetAttr(value, "__message__"))
+            param1 = self.append(ir.GetAttr(value, "__param0__"))
+            param2 = self.append(ir.GetAttr(value, "__param1__"))
+            param3 = self.append(ir.GetAttr(value, "__param2__"))

-            format_string += "%ld(%lld, %lld, %lld)"
-            args += [name, param1, param2, param3]
+            format_string += "%.*s(%.*s, %lld, %lld, %lld)"
+            args += [name, message, param1, param2, param3]
         else:
             assert False

@@ -238,7 +238,7 @@ class ASTTypedRewriter(algorithm.Transformer):
                     body=node.body, decorator_list=node.decorator_list,
                     keyword_loc=node.keyword_loc, name_loc=node.name_loc,
                     arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
-                    loc=node.loc, remote_fn=False)
+                    loc=node.loc)

         try:
             self.env_stack.append(node.typing_env)

@@ -439,9 +439,8 @@ class ASTTypedRewriter(algorithm.Transformer):

     def visit_Call(self, node):
         node = self.generic_visit(node)
         node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={},
-                              remote_fn=False, func=node.func,
-                              args=node.args, keywords=node.keywords,
+                              func=node.func, args=node.args, keywords=node.keywords,
                               starargs=node.starargs, kwargs=node.kwargs,
                               star_loc=node.star_loc, dstar_loc=node.dstar_loc,
                               begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc)
@@ -15,26 +15,13 @@ class DeadCodeEliminator:
 self.process_function(func)

 def process_function(self, func):
-# defer removing those blocks, so our use checks will ignore deleted blocks
-preserve = [func.entry()]
-work_list = [func.entry()]
-while any(work_list):
-block = work_list.pop()
-for succ in block.successors():
-if succ not in preserve:
-preserve.append(succ)
-work_list.append(succ)
-
-to_be_removed = []
-for block in func.basic_blocks:
-if block not in preserve:
-block.is_removed = True
-to_be_removed.append(block)
-for insn in block.instructions:
-insn.is_removed = True
-
-for block in to_be_removed:
-self.remove_block(block)
-
+modified = True
+while modified:
+modified = False
+for block in list(func.basic_blocks):
+if not any(block.predecessors()) and block != func.entry():
+self.remove_block(block)
+modified = True
+
 modified = True
 while modified:
@@ -55,8 +42,6 @@ class DeadCodeEliminator:
 def remove_block(self, block):
 # block.uses are updated while iterating
 for use in set(block.uses):
-if use.is_removed:
-continue
 if isinstance(use, ir.Phi):
 use.remove_incoming_block(block)
 if not any(use.operands):
@@ -71,8 +56,6 @@ class DeadCodeEliminator:

 def remove_instruction(self, insn):
 for use in set(insn.uses):
-if use.is_removed:
-continue
 if isinstance(use, ir.Phi):
 use.remove_incoming_value(insn)
 if not any(use.operands):
@@ -46,7 +46,6 @@ class Inferencer(algorithm.Visitor):
 self.function = None # currently visited function, for Return inference
 self.in_loop = False
 self.has_return = False
-self.subkernel_arg_types = dict()

 def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""):
 try:
@@ -179,7 +178,7 @@ class Inferencer(algorithm.Visitor):
 # Convert to a method.
 attr_type = types.TMethod(object_type, attr_type)
 self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc)
-elif types.is_rpc(attr_type) or types.is_subkernel(attr_type):
+elif types.is_rpc(attr_type):
 # Convert to a method. We don't have to bother typechecking
 # the self argument, since for RPCs anything goes.
 attr_type = types.TMethod(object_type, attr_type)
@@ -260,31 +259,7 @@

 def visit_SubscriptT(self, node):
 self.generic_visit(node)
-if types.is_tuple(node.value.type):
-if (not isinstance(node.slice, ast.Index) or
-not isinstance(node.slice.value, ast.Num)):
-diag = diagnostic.Diagnostic(
-"error", "tuples can only be indexed by a constant", {},
-node.slice.loc, []
-)
-self.engine.process(diag)
-return
-
-tuple_type = node.value.type.find()
-index = node.slice.value.n
-if index < 0 or index >= len(tuple_type.elts):
-diag = diagnostic.Diagnostic(
-"error",
-"index {index} is out of range for tuple of size {size}",
-{"index": index, "size": len(tuple_type.elts)},
-node.slice.loc, []
-)
-self.engine.process(diag)
-return
-
-self._unify(node.type, tuple_type.elts[index], node.loc, node.value.loc)
-elif isinstance(node.slice, ast.Index):
+if isinstance(node.slice, ast.Index):
 if types.is_tuple(node.slice.value.type):
 if types.is_var(node.value.type):
 return
@@ -1294,106 +1269,6 @@
 # Ignored.
 self._unify(node.type, builtins.TNone(),
 node.loc, None)
-elif types.is_builtin(typ, "subkernel_await"):
-valid_forms = lambda: [
-valid_form("subkernel_await(f: subkernel) -> f return type"),
-valid_form("subkernel_await(f: subkernel, timeout: numpy.int64) -> f return type")
-]
-if 1 <= len(node.args) <= 2:
-arg0 = node.args[0].type
-if types.is_var(arg0):
-pass # undetermined yet
-else:
-if types.is_method(arg0):
-fn = types.get_method_function(arg0)
-elif types.is_function(arg0) or types.is_subkernel(arg0):
-fn = arg0
-else:
-diagnose(valid_forms())
-self._unify(node.type, fn.ret,
-node.loc, None)
-if len(node.args) == 2:
-arg1 = node.args[1]
-if types.is_var(arg1.type):
-pass
-elif builtins.is_int(arg1.type):
-# promote to TInt64
-self._unify(arg1.type, builtins.TInt64(),
-arg1.loc, None)
-else:
-diagnose(valid_forms())
-else:
-diagnose(valid_forms())
-elif types.is_builtin(typ, "subkernel_preload"):
-valid_forms = lambda: [
-valid_form("subkernel_preload(f: subkernel) -> None")
-]
-if len(node.args) == 1:
-arg0 = node.args[0].type
-if types.is_var(arg0):
-pass # undetermined yet
-else:
-if types.is_method(arg0):
-fn = types.get_method_function(arg0)
-elif types.is_function(arg0) or types.is_subkernel(arg0):
-fn = arg0
-else:
-diagnose(valid_forms())
-self._unify(node.type, fn.ret,
-node.loc, None)
-else:
-diagnose(valid_forms())
-elif types.is_builtin(typ, "subkernel_send"):
-valid_forms = lambda: [
-valid_form("subkernel_send(dest: numpy.int?, name: str, value: V) -> None"),
-]
-self._unify(node.type, builtins.TNone(),
-node.loc, None)
-if len(node.args) == 3:
-arg0 = node.args[0]
-if types.is_var(arg0.type):
-pass # undetermined yet
-else:
-if builtins.is_int(arg0.type):
-self._unify(arg0.type, builtins.TInt8(),
-arg0.loc, None)
-else:
-diagnose(valid_forms())
-arg1 = node.args[1]
-self._unify(arg1.type, builtins.TStr(),
-arg1.loc, None)
-else:
-diagnose(valid_forms())
-elif types.is_builtin(typ, "subkernel_recv"):
-valid_forms = lambda: [
-valid_form("subkernel_recv(name: str, value_type: type) -> value_type"),
-valid_form("subkernel_recv(name: str, value_type: type, timeout: numpy.int64) -> value_type"),
-]
-if 2 <= len(node.args) <= 3:
-arg0 = node.args[0]
-if types.is_var(arg0.type):
-pass
-else:
-self._unify(arg0.type, builtins.TStr(),
-arg0.loc, None)
-arg1 = node.args[1]
-if types.is_var(arg1.type):
-pass
-else:
-self._unify(node.type, arg1.value,
-node.loc, None)
-if len(node.args) == 3:
-arg2 = node.args[2]
-if types.is_var(arg2.type):
-pass
-elif builtins.is_int(arg2.type):
-# promote to TInt64
-self._unify(arg2.type, builtins.TInt64(),
-arg2.loc, None)
-else:
-diagnose(valid_forms())
-else:
-diagnose(valid_forms())
 else:
 assert False

@@ -1432,7 +1307,6 @@
 typ_args = typ.args
 typ_optargs = typ.optargs
 typ_ret = typ.ret
-typ_func = typ
 else:
 typ_self = types.get_method_self(typ)
 typ_func = types.get_method_function(typ)
@@ -1490,23 +1364,12 @@
 other_node=node.args[0])
 self._unify(node.type, ret, node.loc, None)
 return
-if types.is_subkernel(typ_func) and typ_func.sid not in self.subkernel_arg_types:
-self.subkernel_arg_types[typ_func.sid] = []
-
 for actualarg, (formalname, formaltyp) in \
 zip(node.args, list(typ_args.items()) + list(typ_optargs.items())):
 self._unify(actualarg.type, formaltyp,
 actualarg.loc, None)
 passed_args[formalname] = actualarg.loc
-if types.is_subkernel(typ_func):
-if types.is_instance(actualarg.type):
-# objects cannot be passed to subkernels, as rpc code doesn't support them
-diag = diagnostic.Diagnostic("error",
-"argument '{name}' of type: {typ} is not supported in subkernels",
-{"name": formalname, "typ": actualarg.type},
-actualarg.loc, [])
-self.engine.process(diag)
-self.subkernel_arg_types[typ_func.sid].append((formalname, formaltyp))

 for keyword in node.keywords:
 if keyword.arg in passed_args:
@@ -1537,7 +1400,7 @@
 passed_args[keyword.arg] = keyword.arg_loc

 for formalname in typ_args:
-if formalname not in passed_args and not node.remote_fn:
+if formalname not in passed_args:
 note = diagnostic.Diagnostic("note",
 "the called function is of type {type}",
 {"type": types.TypePrinter().name(node.func.type)},
@@ -14,9 +14,9 @@ class IntMonomorphizer(algorithm.Visitor):
 def visit_NumT(self, node):
 if builtins.is_int(node.type):
 if types.is_var(node.type["width"]):
-if -2**31 <= node.n <= 2**31-1:
+if -2**31 < node.n < 2**31-1:
 width = 32
-elif -2**63 <= node.n <= 2**63-1:
+elif -2**63 < node.n < 2**63-1:
 width = 64
 else:
 diag = diagnostic.Diagnostic("error",
@@ -280,7 +280,7 @@ class IODelayEstimator(algorithm.Visitor):
 context="as an argument for delay_mu()")
 call_delay = value
 elif not types.is_builtin(typ):
-if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
+if types.is_function(typ) or types.is_rpc(typ):
 offset = 0
 elif types.is_method(typ):
 offset = 1
@@ -288,7 +288,7 @@
 else:
 assert False

-if types.is_rpc(typ) or types.is_subkernel(typ):
+if types.is_rpc(typ):
 call_delay = iodelay.Const(0)
 else:
 delay = typ.find().delay.find()
@@ -311,20 +311,13 @@
 args[arg_name] = arg_node

 free_vars = delay.duration.free_vars()
-try:
-node.arg_exprs = {
-arg: self.evaluate(args[arg], abort=abort,
-context="in the expression for argument '{}' "
-"that affects I/O delay".format(arg))
-for arg in free_vars
-}
-call_delay = delay.duration.fold(node.arg_exprs)
-except KeyError as e:
-if getattr(node, "remote_fn", False):
-note = diagnostic.Diagnostic("note",
-"function called here", {},
-node.loc)
-self.abort("due to arguments passed remotely", node.loc, note)
+node.arg_exprs = {
+arg: self.evaluate(args[arg], abort=abort,
+context="in the expression for argument '{}' "
+"that affects I/O delay".format(arg))
+for arg in free_vars
+}
+call_delay = delay.duration.fold(node.arg_exprs)
 else:
 assert False
 else:
@@ -171,26 +171,11 @@ class LLVMIRGenerator:
 self.llfunction = None
 self.llmap = {}
 self.llobject_map = {}
-self.llpred_map = {}
 self.phis = []
 self.debug_info_emitter = DebugInfoEmitter(self.llmodule)
 self.empty_metadata = self.llmodule.add_metadata([])
 self.quote_fail_msg = None

-# Maximum alignment required according to the target platform ABI. As this is
-# not directly exposed by LLVM, just take the maximum across all the "big"
-# elementary types we use. (Vector types, should we ever support them, are
-# likely contenders for even larger alignment requirements.)
-self.max_target_alignment = max(map(
-lambda t: self.abi_layout_info.get_size_align(t)[1],
-[lli64, lldouble, llptr]
-))
-
-def add_pred(self, pred, block):
-if block not in self.llpred_map:
-self.llpred_map[block] = set()
-self.llpred_map[block].add(pred)
-
 def needs_sret(self, lltyp, may_be_large=True):
 if isinstance(lltyp, ll.VoidType):
 return False
@@ -215,7 +200,7 @@
 typ = typ.find()
 if types.is_tuple(typ):
 return ll.LiteralStructType([self.llty_of_type(eltty) for eltty in typ.elts])
-elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_subkernel(typ):
+elif types.is_rpc(typ) or types.is_external_function(typ):
 if for_return:
 return llvoid
 else:
@@ -262,10 +247,7 @@
 return ll.LiteralStructType([llbufferty, llshapety])
 elif builtins.is_listish(typ):
 lleltty = self.llty_of_type(builtins.get_iterable_elt(typ))
-lltyp = ll.LiteralStructType([lleltty.as_pointer(), lli32])
-if builtins.is_list(typ):
-lltyp = lltyp.as_pointer()
-return lltyp
+return ll.LiteralStructType([lleltty.as_pointer(), lli32])
 elif builtins.is_range(typ):
 lleltty = self.llty_of_type(builtins.get_iterable_elt(typ))
 return ll.LiteralStructType([lleltty, lleltty, lleltty])
@@ -344,8 +326,8 @@
 else:
 value = const.value

-llptr = self.llstr_of_str(value, linkage="private", unnamed_addr=True)
-lllen = ll.Constant(lli32, len(value))
+llptr = self.llstr_of_str(const.value, linkage="private", unnamed_addr=True)
+lllen = ll.Constant(lli32, len(const.value))
 return ll.Constant(llty, (llptr, lllen))
 else:
 assert False
@@ -385,9 +367,7 @@
 llty = ll.FunctionType(lli32, [], var_arg=True)
 elif name == "__artiq_raise":
 llty = ll.FunctionType(llvoid, [self.llty_of_type(builtins.TException())])
-elif name == "__artiq_resume":
-llty = ll.FunctionType(llvoid, [])
-elif name == "__artiq_end_catch":
+elif name == "__artiq_reraise":
 llty = ll.FunctionType(llvoid, [])
 elif name == "memcmp":
 llty = ll.FunctionType(lli32, [llptr, llptr, lli32])
@@ -398,15 +378,6 @@
 elif name == "rpc_recv":
 llty = ll.FunctionType(lli32, [llptr])

-elif name == "subkernel_send_message":
-llty = ll.FunctionType(llvoid, [lli32, lli1, lli8, lli8, llsliceptr, llptrptr])
-elif name == "subkernel_load_run":
-llty = ll.FunctionType(llvoid, [lli32, lli8, lli1])
-elif name == "subkernel_await_finish":
-llty = ll.FunctionType(llvoid, [lli32, lli64])
-elif name == "subkernel_await_message":
-llty = ll.FunctionType(lli8, [lli32, lli64, llsliceptr, lli8, lli8])
-
 # with now-pinning
 elif name == "now":
 llty = lli64
@@ -424,7 +395,7 @@

 if isinstance(llty, ll.FunctionType):
 llglobal = ll.Function(self.llmodule, llty, name)
-if name in ("__artiq_raise", "__artiq_resume", "llvm.trap"):
+if name in ("__artiq_raise", "__artiq_reraise", "llvm.trap"):
 llglobal.attributes.add("noreturn")
 if name in ("rtio_log", "rpc_send", "rpc_send_async",
 self.target.print_function):
@@ -682,28 +653,6 @@
 self.llbuilder = ll.IRBuilder()
 llblock_map = {}

-# this is the predecessor map, from basic block to the set of its
-# predecessors
-# handling for branch and cbranch is here, and the handling of
-# indirectbr and landingpad are in their respective process_*
-# function
-self.llpred_map = llpred_map = {}
-branch_fn = self.llbuilder.branch
-cbranch_fn = self.llbuilder.cbranch
-def override_branch(block):
-nonlocal self, branch_fn
-self.add_pred(self.llbuilder.basic_block, block)
-return branch_fn(block)
-
-def override_cbranch(pred, bbif, bbelse):
-nonlocal self, cbranch_fn
-self.add_pred(self.llbuilder.basic_block, bbif)
-self.add_pred(self.llbuilder.basic_block, bbelse)
-return cbranch_fn(pred, bbif, bbelse)
-
-self.llbuilder.branch = override_branch
-self.llbuilder.cbranch = override_cbranch
-
 if not func.is_generated:
 lldisubprogram = self.debug_info_emitter.emit_subprogram(func, self.llfunction)
 self.llfunction.set_metadata('dbg', lldisubprogram)
@@ -726,10 +675,6 @@
 # Third, translate all instructions.
 for block in func.basic_blocks:
 self.llbuilder.position_at_end(self.llmap[block])
-old_block = None
-if len(block.instructions) == 1 and \
-isinstance(block.instructions[0], ir.LandingPad):
-old_block = self.llbuilder.basic_block
 for insn in block.instructions:
 if insn.loc is not None and not func.is_generated:
 self.llbuilder.debug_metadata = \
@@ -744,28 +689,12 @@
 # instruction so that the result spans several LLVM basic
 # blocks. This only really matters for phis, which are thus
 # using a different map (the following one).
-if old_block is None:
-llblock_map[block] = self.llbuilder.basic_block
-else:
-llblock_map[block] = old_block
+llblock_map[block] = self.llbuilder.basic_block

 # Fourth, add incoming values to phis.
 for phi, llphi in self.phis:
 for value, block in phi.incoming():
-if isinstance(phi.type, builtins.TException):
-# a hack to patch phi from landingpad
-# because landingpad is a single bb in artiq IR, but
-# generates multiple bb, we need to find out the
-# predecessor to figure out the actual bb
-landingpad = llblock_map[block]
-for pred in llpred_map[llphi.parent]:
-if pred in llpred_map and landingpad in llpred_map[pred]:
-llphi.add_incoming(self.map(value), pred)
-break
-else:
-llphi.add_incoming(self.map(value), landingpad)
-else:
-llphi.add_incoming(self.map(value), llblock_map[block])
+llphi.add_incoming(self.map(value), llblock_map[block])
 finally:
 self.function_flags = None
 self.llfunction = None
@@ -803,20 +732,9 @@
 llalloc = self.llbuilder.alloca(lleltty, size=llsize)
 if types._is_pointer(insn.type):
 return llalloc
-if builtins.is_list(insn.type):
-llvalue = self.llbuilder.alloca(self.llty_of_type(insn.type).pointee, size=1)
-self.llbuilder.store(llalloc, self.llbuilder.gep(llvalue,
-[self.llindex(0),
-self.llindex(0)],
-inbounds=True))
-self.llbuilder.store(llsize, self.llbuilder.gep(llvalue,
-[self.llindex(0),
-self.llindex(1)],
-inbounds=True))
-else:
-llvalue = ll.Constant(self.llty_of_type(insn.type), ll.Undefined)
-llvalue = self.llbuilder.insert_value(llvalue, llalloc, 0)
-llvalue = self.llbuilder.insert_value(llvalue, llsize, 1)
+llvalue = ll.Constant(self.llty_of_type(insn.type), ll.Undefined)
+llvalue = self.llbuilder.insert_value(llvalue, llalloc, 0, name=insn.name)
+llvalue = self.llbuilder.insert_value(llvalue, llsize, 1)
 return llvalue
 elif (not builtins.is_allocated(insn.type) or ir.is_keyword(insn.type)
 or builtins.is_array(insn.type)):
@@ -883,53 +801,6 @@
 llvalue = self.llbuilder.bitcast(llvalue, llptr.type.pointee)
 return self.llbuilder.store(llvalue, llptr)

-def process_GetArgFromRemote(self, insn):
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-llval = self._build_rpc_recv(insn.arg_type, llstackptr)
-return llval
-
-def process_GetOptArgFromRemote(self, insn):
-# optarg = index < rcv_count ? Some(rcv_recv()) : None
-llhead = self.llbuilder.basic_block
-llrcv = self.llbuilder.append_basic_block(name="optarg.get.{}".format(insn.arg_name))
-
-# argument received
-self.llbuilder.position_at_end(llrcv)
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-llval = self._build_rpc_recv(insn.arg_type, llstackptr)
-llrpcretblock = self.llbuilder.basic_block # 'return' from rpc_recv, will be needed later
-
-# create the tail block, needs to be after the rpc recv tail block
-lltail = self.llbuilder.append_basic_block(name="optarg.tail.{}".format(insn.arg_name))
-self.llbuilder.branch(lltail)
-
-# go back to head to add a branch to the tail
-self.llbuilder.position_at_end(llhead)
-llargrcvd = self.llbuilder.icmp_unsigned("<", self.map(insn.index), self.map(insn.rcv_count))
-self.llbuilder.cbranch(llargrcvd, llrcv, lltail)
-
-# argument not received/after arg recvd
-self.llbuilder.position_at_end(lltail)
-
-llargtype = self.llty_of_type(insn.arg_type)
-
-llphi_arg_present = self.llbuilder.phi(lli1, name="optarg.phi.present.{}".format(insn.arg_name))
-llphi_arg = self.llbuilder.phi(llargtype, name="optarg.phi.{}".format(insn.arg_name))
-
-llphi_arg_present.add_incoming(ll.Constant(lli1, 0), llhead)
-llphi_arg.add_incoming(ll.Constant(llargtype, ll.Undefined), llhead)
-
-llphi_arg_present.add_incoming(ll.Constant(lli1, 1), llrpcretblock)
-llphi_arg.add_incoming(llval, llrpcretblock)
-
-lloptarg = ll.Constant(ll.LiteralStructType([lli1, llargtype]), ll.Undefined)
-lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg_present, 0)
-lloptarg = self.llbuilder.insert_value(lloptarg, llphi_arg, 1)
-
-return lloptarg
-
 def attr_index(self, typ, attr):
 return list(typ.attributes.keys()).index(attr)

@@ -954,8 +825,8 @@
 def get_global_closure_ptr(self, typ, attr):
 closure_type = typ.attributes[attr]
 assert types.is_constructor(typ)
-assert types.is_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type)
-if types.is_external_function(closure_type) or types.is_rpc(closure_type) or types.is_subkernel(closure_type):
+assert types.is_function(closure_type) or types.is_rpc(closure_type)
+if types.is_external_function(closure_type) or types.is_rpc(closure_type):
 return None

 llty = self.llty_of_type(typ.attributes[attr])
|
||||||
def process_Offset(self, insn):
|
def process_Offset(self, insn):
|
||||||
base, idx = insn.base(), insn.index()
|
base, idx = insn.base(), insn.index()
|
||||||
llelts, llidx = map(self.map, (base, idx))
|
llelts, llidx = map(self.map, (base, idx))
|
||||||
if builtins.is_listish(base.type):
|
if not types._is_pointer(base.type):
|
||||||
# This is list-ish.
|
# This is list-ish.
|
||||||
if builtins.is_list(base.type):
|
llelts = self.llbuilder.extract_value(llelts, 0)
|
||||||
llelts = self.llbuilder.load(self.llbuilder.gep(llelts,
|
|
||||||
[self.llindex(0),
|
|
||||||
self.llindex(0)],
|
|
||||||
inbounds=True))
|
|
||||||
else:
|
|
||||||
llelts = self.llbuilder.extract_value(llelts, 0)
|
|
||||||
llelt = self.llbuilder.gep(llelts, [llidx], inbounds=True)
|
llelt = self.llbuilder.gep(llelts, [llidx], inbounds=True)
|
||||||
return llelt
|
return llelt
|
||||||
|
|
||||||
|
@@ -1089,15 +954,9 @@
 def process_SetElem(self, insn):
 base, idx = insn.base(), insn.index()
 llelts, llidx = map(self.map, (base, idx))
-if builtins.is_listish(base.type):
+if not types._is_pointer(base.type):
 # This is list-ish.
-if builtins.is_list(base.type):
-llelts = self.llbuilder.load(self.llbuilder.gep(llelts,
-[self.llindex(0),
-self.llindex(0)],
-inbounds=True))
-else:
-llelts = self.llbuilder.extract_value(llelts, 0)
+llelts = self.llbuilder.extract_value(llelts, 0)
 llelt = self.llbuilder.gep(llelts, [llidx], inbounds=True)
 return self.llbuilder.store(self.map(insn.value()), llelt)

@@ -1243,11 +1102,6 @@
 lllhs, llrhs = map(self.map, (insn.lhs(), insn.rhs()))
 assert lllhs.type == llrhs.type

-if isinstance(lllhs.type, ll.PointerType) and \
-isinstance(lllhs.type.pointee, ll.LiteralStructType):
-lllhs = self.llbuilder.load(lllhs)
-llrhs = self.llbuilder.load(llrhs)
-
 if isinstance(lllhs.type, ll.IntType):
 return self.llbuilder.icmp_signed(op, lllhs, llrhs,
 name=insn.name)
@@ -1318,12 +1172,7 @@
 shape = self.llbuilder.extract_value(self.map(collection),
 self.attr_index(collection.type, "shape"))
 return self.llbuilder.extract_value(shape, 0)
-elif builtins.is_list(collection.type):
-return self.llbuilder.load(self.llbuilder.gep(self.map(collection),
-[self.llindex(0),
-self.llindex(1)]))
-else:
-return self.llbuilder.extract_value(self.map(collection), 1)
+return self.llbuilder.extract_value(self.map(collection), 1)
 elif insn.op in ("printf", "rtio_log"):
 # We only get integers, floats, pointers and strings here.
 lloperands = []
@@ -1398,91 +1247,9 @@
 return llstore_lo
 else:
 return self.llbuilder.call(self.llbuiltin("delay_mu"), [llinterval])
-elif insn.op == "end_catch":
-return self.llbuilder.call(self.llbuiltin("__artiq_end_catch"), [])
-elif insn.op == "subkernel_await_finish":
-llsid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-return self.llbuilder.call(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
-name="subkernel.await.finish")
-elif insn.op == "subkernel_retrieve_return":
-llsid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-lltagptr = self._build_subkernel_tags([insn.type])
-self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
-[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
-name="subkernel.await.message")
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-return self._build_rpc_recv(insn.type, llstackptr)
-elif insn.op == "subkernel_preload":
-llsid = self.map(insn.operands[0])
-lldest = ll.Constant(lli8, insn.operands[1].value)
-return self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, lldest, ll.Constant(lli1, 0)],
-name="subkernel.preload")
-elif insn.op == "subkernel_send":
-llmsgid = self.map(insn.operands[0])
-lldest = self.map(insn.operands[1])
-return self._build_subkernel_message(llmsgid, lldest, [insn.operands[2]])
-elif insn.op == "subkernel_recv":
-llmsgid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-lltagptr = self._build_subkernel_tags([insn.type])
-self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
-[llmsgid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
-name="subkernel.await.message")
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-return self._build_rpc_recv(insn.type, llstackptr)
 else:
 assert False

-def process_BuiltinInvoke(self, insn):
-llnormalblock = self.map(insn.normal_target())
-llunwindblock = self.map(insn.exception_target())
-if insn.op == "subkernel_retrieve_return":
-llsid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-lltagptr = self._build_subkernel_tags([insn.type])
-llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
-self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
-[llsid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
-llheadu, llunwindblock,
-name="subkernel.await.message")
-self.llbuilder.position_at_end(llheadu)
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
-elif insn.op == "subkernel_await_finish":
-llsid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-return self.llbuilder.invoke(self.llbuiltin("subkernel_await_finish"), [llsid, lltimeout],
-llnormalblock, llunwindblock,
-name="subkernel.await.finish")
-elif insn.op == "subkernel_recv":
-llmsgid = self.map(insn.operands[0])
-lltimeout = self.map(insn.operands[1])
-lltagptr = self._build_subkernel_tags([insn.type])
-llheadu = self.llbuilder.append_basic_block(name="subkernel.await.unwind")
-self.llbuilder.invoke(self.llbuiltin("subkernel_await_message"),
-[llmsgid, lltimeout, lltagptr, ll.Constant(lli8, 1), ll.Constant(lli8, 1)],
-llheadu, llunwindblock,
-name="subkernel.await.message")
-self.llbuilder.position_at_end(llheadu)
-llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
-name="subkernel.arg.stack")
-return self._build_rpc_recv(insn.type, llstackptr, llnormalblock, llunwindblock)
-else:
-assert False
-
-def process_SubkernelAwaitArgs(self, insn):
-llmin = self.map(insn.operands[0])
-llmax = self.map(insn.operands[1])
-lltagptr = self._build_subkernel_tags(insn.arg_types)
-return self.llbuilder.call(self.llbuiltin("subkernel_await_message"),
-[ll.Constant(lli32, -1), ll.Constant(lli64, 10_000), lltagptr, llmin, llmax],
-name="subkernel.await.args")
-
 def process_Closure(self, insn):
 llenv = self.map(insn.environment())
 llenv = self.llbuilder.bitcast(llenv, llptr)
@@ -1504,24 +1271,11 @@
 else:
 llfun = self.map(insn.static_target_function)
 llenv = self.llbuilder.extract_value(llclosure, 0, name="env.fun")
-return llfun, [llenv] + list(llargs), {}, None
+return llfun, [llenv] + list(llargs), {}

 def _prepare_ffi_call(self, insn):
 llargs = []
 llarg_attrs = {}
-
-stack_save_needed = False
-for i, arg in enumerate(insn.arguments()):
-llarg = self.map(arg)
-if isinstance(llarg.type, (ll.LiteralStructType, ll.IdentifiedStructType)):
-stack_save_needed = True
-break
-
-if stack_save_needed:
-llcallstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [])
-else:
-llcallstackptr = None
-
 for i, arg in enumerate(insn.arguments()):
 llarg = self.map(arg)
 if isinstance(llarg.type, (ll.LiteralStructType, ll.IdentifiedStructType)):
|
||||||
llfun.args[idx].add_attribute(attr)
|
llfun.args[idx].add_attribute(attr)
|
||||||
if 'nounwind' in insn.target_function().type.flags:
|
if 'nounwind' in insn.target_function().type.flags:
|
||||||
llfun.attributes.add('nounwind')
|
llfun.attributes.add('nounwind')
|
||||||
if 'nowrite' in insn.target_function().type.flags and not is_sret:
|
if 'nowrite' in insn.target_function().type.flags:
|
||||||
# Even if "nowrite" is correct from the user's perspective (doesn't
|
|
||||||
# access any other memory observable to ARTIQ Python), this isn't
|
|
||||||
# true on the LLVM IR level for sret return values.
|
|
||||||
llfun.attributes.add('inaccessiblememonly')
|
llfun.attributes.add('inaccessiblememonly')
|
||||||
|
|
||||||
return llfun, list(llargs), llarg_attrs, llcallstackptr
|
return llfun, list(llargs), llarg_attrs
|
||||||
|
|
||||||
def _build_subkernel_tags(self, tag_list):
|
def _build_rpc(self, fun_loc, fun_type, args, llnormalblock, llunwindblock):
|
||||||
def ret_error_handler(typ):
|
llservice = ll.Constant(lli32, fun_type.service)
|
||||||
printer = types.TypePrinter()
|
|
||||||
note = diagnostic.Diagnostic("note",
|
|
||||||
"value of type {type}",
|
|
||||||
{"type": printer.name(typ)},
|
|
||||||
fun_loc)
|
|
||||||
diag = diagnostic.Diagnostic("error",
|
|
||||||
"type {type} is not supported in subkernels",
|
|
||||||
{"type": printer.name(fun_type.ret)},
|
|
||||||
fun_loc, notes=[note])
|
|
||||||
self.engine.process(diag)
|
|
||||||
tag = b"".join([ir.rpc_tag(arg_type, ret_error_handler) for arg_type in tag_list])
|
|
||||||
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
|
|
||||||
lltagptr = self.llbuilder.alloca(lltag.type)
|
|
||||||
self.llbuilder.store(lltag, lltagptr)
|
|
||||||
return lltagptr
|
|
||||||
|
|
||||||
def _build_rpc_recv(self, ret, llstackptr, llnormalblock=None, llunwindblock=None):
|
|
||||||
# T result = {
|
|
||||||
# void *ret_ptr = alloca(sizeof(T));
|
|
||||||
# void *ptr = ret_ptr;
|
|
||||||
# loop: int size = rpc_recv(ptr);
|
|
||||||
# // Non-zero: Provide `size` bytes of extra storage for variable-length data.
|
|
||||||
# if(size) { ptr = alloca(size); goto loop; }
|
|
||||||
# else *(T*)ret_ptr
|
|
||||||
# }
|
|
||||||
llprehead = self.llbuilder.basic_block
|
|
||||||
llhead = self.llbuilder.append_basic_block(name="rpc.head")
|
|
||||||
if llunwindblock:
|
|
||||||
llheadu = self.llbuilder.append_basic_block(name="rpc.head.unwind")
|
|
||||||
llalloc = self.llbuilder.append_basic_block(name="rpc.continue")
|
|
||||||
lltail = self.llbuilder.append_basic_block(name="rpc.tail")
|
|
||||||
|
|
||||||
llretty = self.llty_of_type(ret)
|
|
||||||
llslot = self.llbuilder.alloca(llretty, name="rpc.ret.alloc")
|
|
||||||
llslotgen = self.llbuilder.bitcast(llslot, llptr, name="rpc.ret.ptr")
|
|
||||||
self.llbuilder.branch(llhead)
|
|
||||||
|
|
||||||
self.llbuilder.position_at_end(llhead)
|
|
||||||
llphi = self.llbuilder.phi(llslotgen.type, name="rpc.ptr")
|
|
||||||
llphi.add_incoming(llslotgen, llprehead)
|
|
||||||
if llunwindblock:
|
|
||||||
llsize = self.llbuilder.invoke(self.llbuiltin("rpc_recv"), [llphi],
|
|
||||||
llheadu, llunwindblock,
|
|
||||||
name="rpc.size.next")
|
|
||||||
self.llbuilder.position_at_end(llheadu)
|
|
||||||
else:
|
|
||||||
llsize = self.llbuilder.call(self.llbuiltin("rpc_recv"), [llphi],
|
|
||||||
name="rpc.size.next")
|
|
||||||
lldone = self.llbuilder.icmp_unsigned('==', llsize, ll.Constant(llsize.type, 0),
|
|
||||||
name="rpc.done")
|
|
||||||
self.llbuilder.cbranch(lldone, lltail, llalloc)
|
|
||||||
|
|
||||||
self.llbuilder.position_at_end(llalloc)
|
|
||||||
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
|
|
||||||
llalloca.align = self.max_target_alignment
|
|
||||||
llphi.add_incoming(llalloca, llalloc)
|
|
||||||
self.llbuilder.branch(llhead)
|
|
||||||
|
|
||||||
self.llbuilder.position_at_end(lltail)
|
|
||||||
llret = self.llbuilder.load(llslot, name="rpc.ret")
|
|
||||||
if not ret.fold(False, lambda r, t: r or builtins.is_allocated(t)):
|
|
||||||
# We didn't allocate anything except the slot for the value itself.
|
|
||||||
# Don't waste stack space.
|
|
||||||
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
|
|
||||||
if llnormalblock:
|
|
||||||
self.llbuilder.branch(llnormalblock)
|
|
||||||
return llret
|
|
||||||
|
|
||||||
def _build_arg_tag(self, args, call_type):
|
|
||||||
tag = b""
|
tag = b""
|
||||||
|
|
||||||
for arg in args:
|
for arg in args:
|
||||||
def arg_error_handler(typ):
|
def arg_error_handler(typ):
|
||||||
printer = types.TypePrinter()
|
printer = types.TypePrinter()
|
||||||
|
@@ -1642,18 +1326,12 @@
 {"type": printer.name(typ)},
 arg.loc)
 diag = diagnostic.Diagnostic("error",
-"type {type} is not supported in {call_type} calls",
-{"type": printer.name(arg.type), "call_type": call_type},
+"type {type} is not supported in remote procedure calls",
+{"type": printer.name(arg.type)},
 arg.loc, notes=[note])
 self.engine.process(diag)
 tag += ir.rpc_tag(arg.type, arg_error_handler)
 tag += b":"
-return tag
-
-def _build_rpc(self, fun_loc, fun_type, args, llnormalblock, llunwindblock):
-llservice = ll.Constant(lli32, fun_type.service)
-
-tag = self._build_arg_tag(args, call_type="remote procedure")
-
 def ret_error_handler(typ):
 printer = types.TypePrinter()
|
||||||
|
|
||||||
return ll.Undefined
|
return ll.Undefined
|
||||||
|
|
||||||
llret = self._build_rpc_recv(fun_type.ret, llstackptr, llnormalblock, llunwindblock)
|
# T result = {
|
||||||
|
# void *ret_ptr = alloca(sizeof(T));
|
||||||
|
# void *ptr = ret_ptr;
|
||||||
|
# loop: int size = rpc_recv(ptr);
|
||||||
|
# // Non-zero: Provide `size` bytes of extra storage for variable-length data.
|
||||||
|
# if(size) { ptr = alloca(size); goto loop; }
|
||||||
|
# else *(T*)ret_ptr
|
||||||
|
# }
|
||||||
|
llprehead = self.llbuilder.basic_block
|
||||||
|
llhead = self.llbuilder.append_basic_block(name="rpc.head")
|
||||||
|
if llunwindblock:
|
||||||
|
llheadu = self.llbuilder.append_basic_block(name="rpc.head.unwind")
|
||||||
|
llalloc = self.llbuilder.append_basic_block(name="rpc.continue")
|
||||||
|
lltail = self.llbuilder.append_basic_block(name="rpc.tail")
|
||||||
|
|
||||||
|
llretty = self.llty_of_type(fun_type.ret)
|
||||||
|
llslot = self.llbuilder.alloca(llretty, name="rpc.ret.alloc")
|
||||||
|
llslotgen = self.llbuilder.bitcast(llslot, llptr, name="rpc.ret.ptr")
|
||||||
|
self.llbuilder.branch(llhead)
|
||||||
|
|
||||||
|
self.llbuilder.position_at_end(llhead)
|
||||||
|
llphi = self.llbuilder.phi(llslotgen.type, name="rpc.ptr")
|
||||||
|
llphi.add_incoming(llslotgen, llprehead)
|
||||||
|
if llunwindblock:
|
||||||
|
llsize = self.llbuilder.invoke(self.llbuiltin("rpc_recv"), [llphi],
|
||||||
|
llheadu, llunwindblock,
|
||||||
|
name="rpc.size.next")
|
||||||
|
self.llbuilder.position_at_end(llheadu)
|
||||||
|
else:
|
||||||
|
llsize = self.llbuilder.call(self.llbuiltin("rpc_recv"), [llphi],
|
||||||
|
name="rpc.size.next")
|
||||||
|
lldone = self.llbuilder.icmp_unsigned('==', llsize, ll.Constant(llsize.type, 0),
|
||||||
|
name="rpc.done")
|
||||||
|
self.llbuilder.cbranch(lldone, lltail, llalloc)
|
||||||
|
|
||||||
|
self.llbuilder.position_at_end(llalloc)
|
||||||
|
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
|
||||||
|
llalloca.align = 4 # maximum alignment required by OR1K ABI
|
||||||
|
llphi.add_incoming(llalloca, llalloc)
|
||||||
|
self.llbuilder.branch(llhead)
|
||||||
|
|
||||||
|
self.llbuilder.position_at_end(lltail)
|
||||||
|
llret = self.llbuilder.load(llslot, name="rpc.ret")
|
||||||
|
if not fun_type.ret.fold(False, lambda r, t: r or builtins.is_allocated(t)):
|
||||||
|
# We didn't allocate anything except the slot for the value itself.
|
||||||
|
# Don't waste stack space.
|
||||||
|
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
|
||||||
|
if llnormalblock:
|
||||||
|
self.llbuilder.branch(llnormalblock)
|
||||||
return llret
|
return llret
|
||||||
|
|
||||||
def _build_subkernel_call(self, fun_loc, fun_type, args):
|
|
||||||
llsid = ll.Constant(lli32, fun_type.sid)
|
|
||||||
lldest = ll.Constant(lli8, fun_type.destination)
|
|
||||||
# run the kernel first
|
|
||||||
self.llbuilder.call(self.llbuiltin("subkernel_load_run"), [llsid, lldest, ll.Constant(lli1, 1)])
|
|
||||||
|
|
||||||
if args:
|
|
||||||
# only send args if there's anything to send, 'self' is excluded
|
|
||||||
self._build_subkernel_message(llsid, lldest, args)
|
|
||||||
|
|
||||||
return llsid
|
|
||||||
|
|
||||||
def _build_subkernel_message(self, llid, lldest, args):
|
|
||||||
# args (or messages) are sent in the same vein as RPC
|
|
||||||
tag = self._build_arg_tag(args, call_type="subkernel")
|
|
||||||
|
|
||||||
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [],
|
|
||||||
name="subkernel.stack")
|
|
||||||
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
|
|
||||||
lltagptr = self.llbuilder.alloca(lltag.type)
|
|
||||||
self.llbuilder.store(lltag, lltagptr)
|
|
||||||
|
|
||||||
llargs = self.llbuilder.alloca(llptr, ll.Constant(lli32, len(args)),
|
|
||||||
name="subkernel.args")
|
|
||||||
for index, arg in enumerate(args):
|
|
||||||
if builtins.is_none(arg.type):
|
|
||||||
llargslot = self.llbuilder.alloca(llunit,
|
|
||||||
name="subkernel.arg{}".format(index))
|
|
||||||
else:
|
|
||||||
llarg = self.map(arg)
|
|
||||||
llargslot = self.llbuilder.alloca(llarg.type,
|
|
||||||
name="subkernel.arg{}".format(index))
|
|
||||||
self.llbuilder.store(llarg, llargslot)
|
|
||||||
llargslot = self.llbuilder.bitcast(llargslot, llptr)
|
|
||||||
|
|
||||||
llargptr = self.llbuilder.gep(llargs, [ll.Constant(lli32, index)])
|
|
||||||
self.llbuilder.store(llargslot, llargptr)
|
|
||||||
|
|
||||||
llargcount = ll.Constant(lli8, len(args))
|
|
||||||
|
|
||||||
llisreturn = ll.Constant(lli1, False)
|
|
||||||
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
|
|
||||||
[llid, llisreturn, lldest, llargcount, lltagptr, llargs])
|
|
||||||
return self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])
|
|
||||||
|
|
||||||
def _build_subkernel_return(self, insn):
|
|
||||||
# builds a remote return.
|
|
||||||
# unlike args, return only sends one thing.
|
|
||||||
if builtins.is_none(insn.value().type):
|
|
||||||
# do not waste time and bandwidth on Nones
|
|
||||||
return
|
|
||||||
|
|
||||||
def ret_error_handler(typ):
|
|
||||||
printer = types.TypePrinter()
|
|
||||||
note = diagnostic.Diagnostic("note",
|
|
||||||
"value of type {type}",
|
|
||||||
{"type": printer.name(typ)},
|
|
||||||
fun_loc)
|
|
||||||
diag = diagnostic.Diagnostic("error",
|
|
||||||
"return type {type} is not supported in subkernel returns",
|
|
||||||
{"type": printer.name(fun_type.ret)},
|
|
||||||
fun_loc, notes=[note])
|
|
||||||
self.engine.process(diag)
|
|
||||||
tag = ir.rpc_tag(insn.value().type, ret_error_handler)
|
|
||||||
tag += b":"
|
|
||||||
lltag = self.llconst_of_const(ir.Constant(tag, builtins.TStr()))
|
|
||||||
lltagptr = self.llbuilder.alloca(lltag.type)
|
|
||||||
self.llbuilder.store(lltag, lltagptr)
|
|
||||||
|
|
||||||
llrets = self.llbuilder.alloca(llptr, ll.Constant(lli32, 1),
|
|
||||||
name="subkernel.return")
|
|
||||||
llret = self.map(insn.value())
|
|
||||||
llretslot = self.llbuilder.alloca(llret.type, name="subkernel.retval")
|
|
||||||
self.llbuilder.store(llret, llretslot)
|
|
||||||
llretslot = self.llbuilder.bitcast(llretslot, llptr)
|
|
||||||
self.llbuilder.store(llretslot, llrets)
|
|
||||||
|
|
||||||
llsid = ll.Constant(lli32, 0) # return goes back to the caller, sid is ignored
|
|
||||||
lltagcount = ll.Constant(lli8, 1) # only one thing is returned
|
|
||||||
llisreturn = ll.Constant(lli1, True) # it's a return, so destination is ignored
|
|
||||||
lldest = ll.Constant(lli8, 0)
|
|
||||||
self.llbuilder.call(self.llbuiltin("subkernel_send_message"),
|
|
||||||
[llsid, llisreturn, lldest, lltagcount, lltagptr, llrets])
|
|
||||||
|
|
||||||
def process_Call(self, insn):
|
def process_Call(self, insn):
|
||||||
functiontyp = insn.target_function().type
|
functiontyp = insn.target_function().type
|
||||||
if types.is_rpc(functiontyp):
|
if types.is_rpc(functiontyp):
|
||||||
|
@ -1805,14 +1446,10 @@ class LLVMIRGenerator:
|
||||||
functiontyp,
|
functiontyp,
|
||||||
insn.arguments(),
|
insn.arguments(),
|
||||||
llnormalblock=None, llunwindblock=None)
|
llnormalblock=None, llunwindblock=None)
|
||||||
elif types.is_subkernel(functiontyp):
|
|
||||||
return self._build_subkernel_call(insn.target_function().loc,
|
|
||||||
functiontyp,
|
|
||||||
insn.arguments())
|
|
||||||
elif types.is_external_function(functiontyp):
|
elif types.is_external_function(functiontyp):
|
||||||
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
|
llfun, llargs, llarg_attrs = self._prepare_ffi_call(insn)
|
||||||
else:
|
else:
|
||||||
llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_closure_call(insn)
|
llfun, llargs, llarg_attrs = self._prepare_closure_call(insn)
|
||||||
|
|
||||||
if self.has_sret(functiontyp):
|
if self.has_sret(functiontyp):
|
||||||
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [])
|
llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [])
|
||||||
|
@ -1831,9 +1468,6 @@ class LLVMIRGenerator:
|
||||||
# {} elsewhere.
|
# {} elsewhere.
|
||||||
llresult = ll.Constant(llunit, [])
|
llresult = ll.Constant(llunit, [])
|
||||||
|
|
||||||
if llcallstackptr != None:
|
|
||||||
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llcallstackptr])
|
|
||||||
|
|
||||||
return llresult
|
return llresult
|
||||||
|
|
||||||
def process_Invoke(self, insn):
|
def process_Invoke(self, insn):
|
||||||
|
@@ -1845,15 +1479,10 @@
 functiontyp,
 insn.arguments(),
 llnormalblock, llunwindblock)
-elif types.is_subkernel(functiontyp):
-return self._build_subkernel_call(insn.target_function().loc,
-functiontyp,
-insn.arguments(),
-llnormalblock, llunwindblock)
 elif types.is_external_function(functiontyp):
-llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_ffi_call(insn)
+llfun, llargs, llarg_attrs = self._prepare_ffi_call(insn)
 else:
-llfun, llargs, llarg_attrs, llcallstackptr = self._prepare_closure_call(insn)
+llfun, llargs, llarg_attrs = self._prepare_closure_call(insn)

 if self.has_sret(functiontyp):
 llstackptr = self.llbuilder.call(self.llbuiltin("llvm.stacksave"), [])
|
||||||
# The !tbaa metadata is not legal to use with the invoke instruction,
|
# The !tbaa metadata is not legal to use with the invoke instruction,
|
||||||
# so unlike process_Call, we do not set it here.
|
# so unlike process_Call, we do not set it here.
|
||||||
|
|
||||||
if llcallstackptr != None:
|
|
||||||
self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llcallstackptr])
|
|
||||||
|
|
||||||
return llresult
|
return llresult
|
||||||
|
|
||||||
def _quote_listish_to_llglobal(self, value, elt_type, path, kind_name):
|
def _quote_listish_to_llglobal(self, value, elt_type, path, kind_name):
|
||||||
|
@ -1928,8 +1554,7 @@ class LLVMIRGenerator:
|
||||||
attrvalue = getattr(value, attr)
|
attrvalue = getattr(value, attr)
|
||||||
is_class_function = (types.is_constructor(typ) and
|
is_class_function = (types.is_constructor(typ) and
|
||||||
types.is_function(typ.attributes[attr]) and
|
types.is_function(typ.attributes[attr]) and
|
||||||
not types.is_external_function(typ.attributes[attr]) and
|
not types.is_external_function(typ.attributes[attr]))
|
||||||
not types.is_subkernel(typ.attributes[attr]))
|
|
||||||
if is_class_function:
|
if is_class_function:
|
||||||
attrvalue = self.embedding_map.specialize_function(typ.instance, attrvalue)
|
attrvalue = self.embedding_map.specialize_function(typ.instance, attrvalue)
|
||||||
if not (types.is_instance(typ) and attr in typ.constant_attributes):
|
if not (types.is_instance(typ) and attr in typ.constant_attributes):
|
||||||
|
@ -2000,13 +1625,6 @@ class LLVMIRGenerator:
|
||||||
assert isinstance(value, (list, numpy.ndarray)), fail_msg
|
assert isinstance(value, (list, numpy.ndarray)), fail_msg
|
||||||
elt_type = builtins.get_iterable_elt(typ)
|
elt_type = builtins.get_iterable_elt(typ)
|
||||||
lleltsptr = self._quote_listish_to_llglobal(value, elt_type, path, typ.find().name)
|
lleltsptr = self._quote_listish_to_llglobal(value, elt_type, path, typ.find().name)
|
||||||
if builtins.is_list(typ):
|
|
||||||
llconst = ll.Constant(llty.pointee, [lleltsptr, ll.Constant(lli32, len(value))])
|
|
||||||
name = self.llmodule.scope.deduplicate("quoted.{}".format(typ.find().name))
|
|
||||||
llglobal = ll.GlobalVariable(self.llmodule, llconst.type, name)
|
|
||||||
llglobal.initializer = llconst
|
|
||||||
llglobal.linkage = "private"
|
|
||||||
return llglobal
|
|
||||||
llconst = ll.Constant(llty, [lleltsptr, ll.Constant(lli32, len(value))])
|
llconst = ll.Constant(llty, [lleltsptr, ll.Constant(lli32, len(value))])
|
||||||
return llconst
|
return llconst
|
||||||
elif types.is_tuple(typ):
|
elif types.is_tuple(typ):
|
||||||
|
@ -2014,8 +1632,7 @@ class LLVMIRGenerator:
|
||||||
llelts = [self._quote(v, t, lambda: path() + [str(i)])
|
llelts = [self._quote(v, t, lambda: path() + [str(i)])
|
||||||
for i, (v, t) in enumerate(zip(value, typ.elts))]
|
for i, (v, t) in enumerate(zip(value, typ.elts))]
|
||||||
return ll.Constant(llty, llelts)
|
return ll.Constant(llty, llelts)
|
||||||
elif types.is_rpc(typ) or types.is_external_function(typ) or \
|
elif types.is_rpc(typ) or types.is_external_function(typ) or types.is_builtin_function(typ):
|
||||||
types.is_builtin_function(typ) or types.is_subkernel(typ):
|
|
||||||
# RPC, C and builtin functions have no runtime representation.
|
# RPC, C and builtin functions have no runtime representation.
|
||||||
return ll.Constant(llty, ll.Undefined)
|
return ll.Constant(llty, ll.Undefined)
|
||||||
elif types.is_function(typ):
|
elif types.is_function(typ):
|
||||||
|
@ -2061,17 +1678,10 @@ class LLVMIRGenerator:
|
||||||
def process_IndirectBranch(self, insn):
|
def process_IndirectBranch(self, insn):
|
||||||
llinsn = self.llbuilder.branch_indirect(self.map(insn.target()))
|
llinsn = self.llbuilder.branch_indirect(self.map(insn.target()))
|
||||||
for dest in insn.destinations():
|
for dest in insn.destinations():
|
||||||
dest = self.map(dest)
|
llinsn.add_destination(self.map(dest))
|
||||||
self.add_pred(self.llbuilder.basic_block, dest)
|
|
||||||
if dest not in self.llpred_map:
|
|
||||||
self.llpred_map[dest] = set()
|
|
||||||
self.llpred_map[dest].add(self.llbuilder.basic_block)
|
|
||||||
llinsn.add_destination(dest)
|
|
||||||
return llinsn
|
return llinsn
|
||||||
|
|
||||||
def process_Return(self, insn):
|
def process_Return(self, insn):
|
||||||
if insn.remote_return:
|
|
||||||
self._build_subkernel_return(insn)
|
|
||||||
if builtins.is_none(insn.value().type):
|
if builtins.is_none(insn.value().type):
|
||||||
return self.llbuilder.ret_void()
|
return self.llbuilder.ret_void()
|
||||||
else:
|
else:
|
||||||
|
@ -2106,8 +1716,8 @@ class LLVMIRGenerator:
|
||||||
llexn = self.map(insn.value())
|
llexn = self.map(insn.value())
|
||||||
return self._gen_raise(insn, self.llbuiltin("__artiq_raise"), [llexn])
|
return self._gen_raise(insn, self.llbuiltin("__artiq_raise"), [llexn])
|
||||||
|
|
||||||
def process_Resume(self, insn):
|
def process_Reraise(self, insn):
|
||||||
return self._gen_raise(insn, self.llbuiltin("__artiq_resume"), [])
|
return self._gen_raise(insn, self.llbuiltin("__artiq_reraise"), [])
|
||||||
|
|
||||||
def process_LandingPad(self, insn):
|
def process_LandingPad(self, insn):
|
||||||
# Layout on return from landing pad: {%_Unwind_Exception*, %Exception*}
|
# Layout on return from landing pad: {%_Unwind_Exception*, %Exception*}
|
||||||
|
@ -2116,11 +1726,10 @@ class LLVMIRGenerator:
|
||||||
cleanup=insn.has_cleanup)
|
cleanup=insn.has_cleanup)
|
||||||
llrawexn = self.llbuilder.extract_value(lllandingpad, 1)
|
llrawexn = self.llbuilder.extract_value(lllandingpad, 1)
|
||||||
llexn = self.llbuilder.bitcast(llrawexn, self.llty_of_type(insn.type))
|
llexn = self.llbuilder.bitcast(llrawexn, self.llty_of_type(insn.type))
|
||||||
llexnidptr = self.llbuilder.gep(llexn, [self.llindex(0), self.llindex(0)],
|
llexnnameptr = self.llbuilder.gep(llexn, [self.llindex(0), self.llindex(0)],
|
||||||
inbounds=True)
|
inbounds=True)
|
||||||
llexnid = self.llbuilder.load(llexnidptr)
|
llexnname = self.llbuilder.load(llexnnameptr)
|
||||||
|
|
||||||
landingpadbb = self.llbuilder.basic_block
|
|
||||||
for target, typ in insn.clauses():
|
for target, typ in insn.clauses():
|
||||||
if typ is None:
|
if typ is None:
|
||||||
# we use a null pointer here, similar to how cpp does it
|
# we use a null pointer here, similar to how cpp does it
|
||||||
|
@ -2133,40 +1742,42 @@ class LLVMIRGenerator:
|
||||||
ll.Constant(lli32, 0).inttoptr(llptr)
|
ll.Constant(lli32, 0).inttoptr(llptr)
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
# typ is None means that we match all exceptions, so no need to
|
|
||||||
# compare
|
|
||||||
target = self.map(target)
|
|
||||||
self.add_pred(landingpadbb, target)
|
|
||||||
self.add_pred(landingpadbb, self.llbuilder.basic_block)
|
|
||||||
self.llbuilder.branch(target)
|
|
||||||
else:
|
else:
|
||||||
exnname = "{}:{}".format(typ.id, typ.name)
|
exnname = "{}:{}".format(typ.id, typ.name)
|
||||||
llclauseexnidptr = self.llmodule.globals.get("exn.{}".format(exnname))
|
|
||||||
exnid = ll.Constant(lli32, self.embedding_map.store_str(exnname))
|
llclauseexnname = self.llconst_of_const(
|
||||||
if llclauseexnidptr is None:
|
ir.Constant(exnname, builtins.TStr()))
|
||||||
llclauseexnidptr = ll.GlobalVariable(self.llmodule, lli32,
|
llclauseexnnameptr = self.llmodule.globals.get("exn.{}".format(exnname))
|
||||||
name="exn.{}".format(exnname))
|
if llclauseexnnameptr is None:
|
||||||
llclauseexnidptr.global_constant = True
|
llclauseexnnameptr = ll.GlobalVariable(self.llmodule, llclauseexnname.type,
|
||||||
llclauseexnidptr.initializer = exnid
|
name="exn.{}".format(exnname))
|
||||||
llclauseexnidptr.linkage = "private"
|
llclauseexnnameptr.global_constant = True
|
||||||
llclauseexnidptr.unnamed_addr = True
|
llclauseexnnameptr.initializer = llclauseexnname
|
||||||
lllandingpad.add_clause(ll.CatchClause(llclauseexnidptr))
|
llclauseexnnameptr.linkage = "private"
|
||||||
llmatchingdata = self.llbuilder.icmp_unsigned("==", llexnid,
|
llclauseexnnameptr.unnamed_addr = True
|
||||||
exnid)
|
lllandingpad.add_clause(ll.CatchClause(llclauseexnnameptr))
|
||||||
with self.llbuilder.if_then(llmatchingdata):
|
|
||||||
target = self.map(target)
|
if typ is None:
|
||||||
self.add_pred(landingpadbb, target)
|
# typ is None means that we match all exceptions, so no need to
|
||||||
self.add_pred(landingpadbb, self.llbuilder.basic_block)
|
# compare
|
||||||
self.llbuilder.branch(target)
|
self.llbuilder.branch(self.map(target))
|
||||||
self.add_pred(landingpadbb, self.llbuilder.basic_block)
|
else:
|
||||||
|
llexnlen = self.llbuilder.extract_value(llexnname, 1)
|
||||||
|
llclauseexnlen = self.llbuilder.extract_value(llclauseexnname, 1)
|
||||||
|
llmatchinglen = self.llbuilder.icmp_unsigned('==', llexnlen, llclauseexnlen)
|
||||||
|
with self.llbuilder.if_then(llmatchinglen):
|
||||||
|
llexnptr = self.llbuilder.extract_value(llexnname, 0)
|
||||||
|
llclauseexnptr = self.llbuilder.extract_value(llclauseexnname, 0)
|
||||||
|
llcomparedata = self.llbuilder.call(self.llbuiltin("memcmp"),
|
||||||
|
[llexnptr, llclauseexnptr, llexnlen])
|
||||||
|
llmatchingdata = self.llbuilder.icmp_unsigned('==', llcomparedata,
|
||||||
|
ll.Constant(lli32, 0))
|
||||||
|
with self.llbuilder.if_then(llmatchingdata):
|
||||||
|
self.llbuilder.branch(self.map(target))
|
||||||
|
|
||||||
if self.llbuilder.basic_block.terminator is None:
|
if self.llbuilder.basic_block.terminator is None:
|
||||||
if insn.has_cleanup:
|
if insn.has_cleanup:
|
||||||
target = self.map(insn.cleanup())
|
self.llbuilder.branch(self.map(insn.cleanup()))
|
||||||
self.add_pred(landingpadbb, target)
|
|
||||||
self.add_pred(landingpadbb, self.llbuilder.basic_block)
|
|
||||||
self.llbuilder.branch(target)
|
|
||||||
else:
|
else:
|
||||||
self.llbuilder.resume(lllandingpad)
|
self.llbuilder.resume(lllandingpad)
|
||||||
|
|
||||||
|
|
|
@@ -385,50 +385,6 @@ class TRPC(Type):
 def __hash__(self):
 return hash(self.service)

-class TSubkernel(TFunction):
-"""
-A kernel to be run on a satellite.
-
-:ivar args: (:class:`collections.OrderedDict` of string to :class:`Type`)
-function arguments
-:ivar ret: (:class:`Type`)
-return type
-:ivar sid: (int) subkernel ID number
-:ivar destination: (int) satellite destination number
-"""
-
-attributes = OrderedDict()
-
-def __init__(self, args, optargs, ret, sid, destination):
-assert isinstance(ret, Type)
-super().__init__(args, optargs, ret)
-self.sid, self.destination = sid, destination
-self.delay = TFixedDelay(iodelay.Const(0))
-
-def unify(self, other):
-if other is self:
-return
-if isinstance(other, TSubkernel) and \
-self.sid == other.sid and \
-self.destination == other.destination:
-self.ret.unify(other.ret)
-elif isinstance(other, TVar):
-other.unify(self)
-else:
-raise UnificationError(self, other)
-
-def __repr__(self):
-if getattr(builtins, "__in_sphinx__", False):
-return str(self)
-return "artiq.compiler.types.TSubkernel({})".format(repr(self.ret))
-
-def __eq__(self, other):
-return isinstance(other, TSubkernel) and \
-self.sid == other.sid
-
-def __hash__(self):
-return hash(self.sid)
-
 class TBuiltin(Type):
 """
 An instance of builtin type. Every instance of a builtin
@@ -688,9 +644,6 @@ def is_function(typ):
 def is_rpc(typ):
 return isinstance(typ.find(), TRPC)

-def is_subkernel(typ):
-return isinstance(typ.find(), TSubkernel)
-
 def is_external_function(typ, name=None):
 typ = typ.find()
 if name is None:
@@ -857,10 +810,6 @@ class TypePrinter(object):
 return "[rpc{} #{}](...)->{}".format(typ.service,
 " async" if typ.is_async else "",
 self.name(typ.ret, depth + 1))
-elif isinstance(typ, TSubkernel):
-return "<subkernel{} dest#{}>->{}".format(typ.sid,
-typ.destination,
-self.name(typ.ret, depth + 1))
 elif isinstance(typ, TBuiltinFunction):
 return "<function {}>".format(typ.name)
 elif isinstance(typ, (TConstructor, TExceptionConstructor)):

@@ -102,20 +102,8 @@ class RegionOf(algorithm.Visitor):
 if types.is_external_function(node.func.type, "cache_get"):
 # The cache is borrow checked dynamically
 return Global()
-if (types.is_builtin_function(node.func.type, "array")
-or types.is_builtin_function(node.func.type, "make_array")
-or types.is_builtin_function(node.func.type, "numpy.transpose")):
-# While lifetime tracking across function calls in general is currently
-# broken (see below), these special builtins that allocate an array on
-# the stack of the caller _always_ allocate regardless of the parameters,
-# and we can thus handle them without running into the precision issue
-# mentioned in commit ae999db.
-return self.visit_allocating(node)
-
-# FIXME: Return statement missing here, but see m-labs/artiq#1497 and
-# commit ae999db.
-self.visit_sometimes_allocating(node)
+else:
+self.visit_sometimes_allocating(node)

 # Value lives as long as the object/container, if it's mutable,
 # or else forever

@@ -1,8 +1,8 @@
-"""RTIO driver for the Analog Devices AD53[67][0123] family of multi-channel
+""""RTIO driver for the Analog Devices AD53[67][0123] family of multi-channel
 Digital to Analog Converters.

 Output event replacement is not supported and issuing commands at the same
-time results in a collision error.
+time is an error.
 """

 # Designed from the data sheets and somewhat after the linux kernel
@@ -131,10 +131,10 @@ class AD53xx:
 optimized for speed; datasheet says t22: 25ns min SCLK edge to SDO
 valid, and suggests the SPI speed for reads should be <=20 MHz)
 :param vref: DAC reference voltage (default: 5.)
-:param offset_dacs: Initial register value for the two offset DACs
-(default: 8192). Device dependent and must be set correctly for
-correct voltage-to-mu conversions. Knowledge of this state is
-not transferred between experiments.
+:param offset_dacs: Initial register value for the two offset DACs, device
+dependent and must be set correctly for correct voltage to mu
+conversions. Knowledge of his state is not transferred between
+experiments. (default: 8192)
 :param core_device: Core device name (default: "core")
 """
 kernel_invariants = {"bus", "ldac", "clr", "chip_select", "div_write",
@@ -202,7 +202,7 @@ class AD53xx:
 :param op: Operation to perform, one of :const:`AD53XX_READ_X1A`,
 :const:`AD53XX_READ_X1B`, :const:`AD53XX_READ_OFFSET`,
 :const:`AD53XX_READ_GAIN` etc. (default: :const:`AD53XX_READ_X1A`).
-:return: The 16-bit register value
+:return: The 16 bit register value
 """
 self.bus.write(ad53xx_cmd_read_ch(channel, op) << 8)
 self.bus.set_config_mu(SPI_AD53XX_CONFIG | spi.SPI_INPUT, 24,
@@ -233,7 +233,7 @@ class AD53xx:
 def write_gain_mu(self, channel, gain=0xffff):
 """Program the gain register for a DAC channel.

-The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
+The DAC output is not updated until LDAC is pulsed (see :meth load:).
 This method advances the timeline by the duration of one SPI transfer.

 :param gain: 16-bit gain register value (default: 0xffff)
@@ -245,7 +245,7 @@ class AD53xx:
 def write_offset_mu(self, channel, offset=0x8000):
 """Program the offset register for a DAC channel.

-The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
+The DAC output is not updated until LDAC is pulsed (see :meth load:).
 This method advances the timeline by the duration of one SPI transfer.

 :param offset: 16-bit offset register value (default: 0x8000)
@@ -258,7 +258,7 @@ class AD53xx:
 """Program the DAC offset voltage for a channel.

 An offset of +V can be used to trim out a DAC offset error of -V.
-The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
+The DAC output is not updated until LDAC is pulsed (see :meth load:).
 This method advances the timeline by the duration of one SPI transfer.

 :param voltage: the offset voltage
@@ -270,7 +270,7 @@ class AD53xx:
 def write_dac_mu(self, channel, value):
 """Program the DAC input register for a channel.

-The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
+The DAC output is not updated until LDAC is pulsed (see :meth load:).
 This method advances the timeline by the duration of one SPI transfer.
 """
 self.bus.write(
@@ -280,7 +280,7 @@ class AD53xx:
 def write_dac(self, channel, voltage):
 """Program the DAC output voltage for a channel.

-The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
+The DAC output is not updated until LDAC is pulsed (see :meth load:).
 This method advances the timeline by the duration of one SPI transfer.
 """
 self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs,
@@ -309,11 +309,11 @@ class AD53xx:

 This method does not advance the timeline; write events are scheduled
 in the past. The DACs will synchronously start changing their output
-levels ``now``.
+levels `now`.

 If no LDAC device was defined, the LDAC pulse is skipped.

-See :meth:`load`.
+See :meth load:.

 :param values: list of DAC values to program
 :param channels: list of DAC channels to program. If not specified,
@@ -355,7 +355,7 @@ class AD53xx:
 """ Two-point calibration of a DAC channel.

 Programs the offset and gain register to trim out DAC errors. Does not
-take effect until LDAC is pulsed (see :meth:`load`).
+take effect until LDAC is pulsed (see :meth load:).

 Calibration consists of measuring the DAC output voltage for a channel
 with the DAC set to zero-scale (0x0000) and full-scale (0xffff).
@@ -364,8 +364,8 @@ class AD53xx:
 high) can be calibrated in this fashion.

 :param channel: The number of the calibrated channel
-:param vzs: Measured voltage with the DAC set to zero-scale (0x0000)
-:param vfs: Measured voltage with the DAC set to full-scale (0xffff)
+:params vzs: Measured voltage with the DAC set to zero-scale (0x0000)
+:params vfs: Measured voltage with the DAC set to full-scale (0xffff)
 """
 offset_err = voltage_to_mu(vzs, self.offset_dacs, self.vref)
 gain_err = voltage_to_mu(vfs, self.offset_dacs, self.vref) - (

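For orientation, here is a minimal usage sketch of the AD53xx API touched by the hunks above, assuming a device database entry named "zotino0" for an AD53xx-based card and externally measured calibration voltages. The method names init, calibrate, write_dac and load belong to the driver; the device name, channel and voltage values are illustrative only.

    from artiq.experiment import EnvExperiment, kernel, delay, ms

    class TrimAD53xxChannel(EnvExperiment):
        """Hypothetical two-point calibration pass for one AD53xx channel."""

        def build(self):
            self.setattr_device("core")
            self.setattr_device("zotino0")  # AD53xx-based DAC card, name assumed

        @kernel
        def run(self):
            self.core.reset()
            self.zotino0.init()
            delay(1*ms)
            # vzs/vfs would normally come from an external voltmeter reading the
            # channel output at zero-scale (0x0000) and full-scale (0xffff).
            vzs = -9.999   # placeholder measurement
            vfs = 9.998    # placeholder measurement
            self.zotino0.calibrate(0, vzs, vfs)
            self.zotino0.write_dac(0, 2.5)  # does not take effect until load()
            self.zotino0.load()

The calibrate call uses voltage_to_mu internally, as the last hunk shows, so the offset_dacs value configured in the device database has to match the hardware for the trim to be meaningful.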
(File diff suppressed because it is too large.)
@@ -0,0 +1,23 @@
+from artiq.language.core import kernel
+
+
+class AD9154:
+"""Kernel interface to AD9154 registers, using non-realtime SPI."""
+
+def __init__(self, dmgr, spi_device, chip_select):
+self.core = dmgr.get("core")
+self.bus = dmgr.get(spi_device)
+self.chip_select = chip_select
+
+@kernel
+def setup_bus(self, div=16):
+self.bus.set_config_mu(0, 24, div, self.chip_select)
+
+@kernel
+def write(self, addr, data):
+self.bus.write((addr << 16) | (data<< 8))
+
+@kernel
+def read(self, addr):
+self.write((1 << 15) | addr, 0)
+return self.bus.read()
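The new AD9154 helper added above is just a thin non-realtime SPI register interface. A rough sketch of driving it from a kernel could look as follows; the device name "ad9154_spi0" and the register address/value pairs are placeholders and not taken from the diff.

    from artiq.experiment import EnvExperiment, kernel

    class ProbeAD9154(EnvExperiment):
        """Hypothetical register readback through the AD9154 class above."""

        def build(self):
            self.setattr_device("core")
            self.setattr_device("ad9154_spi0")  # instance of the AD9154 class, name assumed

        @kernel
        def run(self):
            self.core.reset()
            self.ad9154_spi0.setup_bus()            # default SPI clock divider of 16
            self.ad9154_spi0.write(0x000, 0x3c)     # placeholder address/value
            chip_id = self.ad9154_spi0.read(0x004)  # placeholder readback address
            print(chip_id)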
@@ -114,27 +114,27 @@ class AD9910:
 (as configured through CFG_MASK_NU), 4-7 for individual channels.
 :param cpld_device: Name of the Urukul CPLD this device is on.
 :param sw_device: Name of the RF switch device. The RF switch is a
-TTLOut channel available as the ``sw`` attribute of this instance.
+TTLOut channel available as the :attr:`sw` attribute of this instance.
 :param pll_n: DDS PLL multiplier. The DDS sample clock is
-``f_ref / clk_div * pll_n`` where ``f_ref`` is the reference frequency and
-``clk_div`` is the reference clock divider (both set in the parent
+f_ref/clk_div*pll_n where f_ref is the reference frequency and
+clk_div is the reference clock divider (both set in the parent
 Urukul CPLD instance).
 :param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
 Note that when bypassing the PLL the red front panel LED may remain on.
 :param pll_cp: DDS PLL charge pump setting.
 :param pll_vco: DDS PLL VCO range selection.
-:param sync_delay_seed: ``SYNC_IN`` delay tuning starting value.
-To stabilize the ``SYNC_IN`` delay tuning, run :meth:`tune_sync_delay` once
+:param sync_delay_seed: SYNC_IN delay tuning starting value.
+To stabilize the SYNC_IN delay tuning, run :meth:`tune_sync_delay` once
 and set this to the delay tap number returned (default: -1 to signal no
 synchronization and no tuning during :meth:`init`).
-Can be a string of the form ``eeprom_device:byte_offset`` to read the
-value from a I2C EEPROM, in which case ``io_update_delay`` must be set
+Can be a string of the form "eeprom_device:byte_offset" to read the
+value from a I2C EEPROM; in which case, `io_update_delay` must be set
 to the same string value.
-:param io_update_delay: ``IO_UPDATE`` pulse alignment delay.
-To align ``IO_UPDATE`` to ``SYNC_CLK``, run :meth:`tune_io_update_delay` and
+:param io_update_delay: IO_UPDATE pulse alignment delay.
+To align IO_UPDATE to SYNC_CLK, run :meth:`tune_io_update_delay` and
 set this to the delay tap number returned.
-Can be a string of the form ``eeprom_device:byte_offset`` to read the
-value from a I2C EEPROM, in which case ``sync_delay_seed`` must be set
+Can be a string of the form "eeprom_device:byte_offset" to read the
+value from a I2C EEPROM; in which case, `sync_delay_seed` must be set
 to the same string value.
 """

@@ -188,7 +188,9 @@ class AD9910:

 @kernel
 def set_phase_mode(self, phase_mode: TInt32):
-r"""Set the default phase mode for future calls to :meth:`set` and
+r"""Set the default phase mode.
+
+for future calls to :meth:`set` and
 :meth:`set_mu`. Supported phase modes are:

 * :const:`PHASE_MODE_CONTINUOUS`: the phase accumulator is unchanged
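As a quick illustration of the default-phase-mode mechanism documented in the hunk above, a kernel can select the mode once and let later set() calls inherit it. This is a sketch: the channel name "urukul0_ch0" is assumed, only the PHASE_MODE_CONTINUOUS constant named above is used, and the switch call assumes an RF switch device was configured.

    from artiq.experiment import EnvExperiment, kernel, delay, ms, MHz
    from artiq.coredevice.ad9910 import PHASE_MODE_CONTINUOUS

    class PhaseModeSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("urukul0_ch0")  # AD9910 channel, name assumed

        @kernel
        def run(self):
            self.core.reset()
            self.urukul0_ch0.cpld.init()
            self.urukul0_ch0.init()
            delay(10*ms)
            # All later set()/set_mu() calls on this channel now default to
            # continuous phase unless a phase_mode argument overrides it.
            self.urukul0_ch0.set_phase_mode(PHASE_MODE_CONTINUOUS)
            self.urukul0_ch0.set(80*MHz)
            self.urukul0_ch0.sw.on()  # assumes sw_device is defined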
@@ -231,7 +233,7 @@ class AD9910:

 @kernel
 def write16(self, addr: TInt32, data: TInt32):
-"""Write to 16-bit register.
+"""Write to 16 bit register.

 :param addr: Register address
 :param data: Data to be written
@@ -242,7 +244,7 @@ class AD9910:

 @kernel
 def write32(self, addr: TInt32, data: TInt32):
-"""Write to 32-bit register.
+"""Write to 32 bit register.

 :param addr: Register address
 :param data: Data to be written
@@ -256,7 +258,7 @@ class AD9910:

 @kernel
 def read16(self, addr: TInt32) -> TInt32:
-"""Read from 16-bit register.
+"""Read from 16 bit register.

 :param addr: Register address
 """
@@ -271,7 +273,7 @@ class AD9910:

 @kernel
 def read32(self, addr: TInt32) -> TInt32:
-"""Read from 32-bit register.
+"""Read from 32 bit register.

 :param addr: Register address
 """
@@ -286,10 +288,10 @@ class AD9910:

 @kernel
 def read64(self, addr: TInt32) -> TInt64:
-"""Read from 64-bit register.
+"""Read from 64 bit register.

 :param addr: Register address
-:return: 64-bit integer register value
+:return: 64 bit integer register value
 """
 self.bus.set_config_mu(
 urukul.SPI_CONFIG, 8,
@@ -309,10 +311,10 @@ class AD9910:

 @kernel
 def write64(self, addr: TInt32, data_high: TInt32, data_low: TInt32):
-"""Write to 64-bit register.
+"""Write to 64 bit register.

 :param addr: Register address
-:param data_high: High (MSB) 32 data bits
+:param data_high: High (MSB) 32 bits of the data
 :param data_low: Low (LSB) 32 data bits
 """
 self.bus.set_config_mu(urukul.SPI_CONFIG, 8,
@@ -330,9 +332,8 @@ class AD9910:
 """Write data to RAM.

 The profile to write to and the step, start, and end address
-need to be configured in advance and separately using
-:meth:`set_profile_ram` and the parent CPLD
-:meth:`~artiq.coredevice.urukul.CPLD.set_profile`.
+need to be configured before and separately using
+:meth:`set_profile_ram` and the parent CPLD `set_profile`.

 :param data: Data to be written to RAM.
 """
@@ -353,8 +354,7 @@ class AD9910:

 The profile to read from and the step, start, and end address
 need to be configured before and separately using
-:meth:`set_profile_ram` and the parent CPLD
-:meth:`~artiq.coredevice.urukul.CPLD.set_profile`.
+:meth:`set_profile_ram` and the parent CPLD `set_profile`.

 :param data: List to be filled with data read from RAM.
 """
@@ -390,9 +390,9 @@ class AD9910:
 manual_osk_external: TInt32 = 0,
 osk_enable: TInt32 = 0,
 select_auto_osk: TInt32 = 0):
-"""Set CFR1. See the AD9910 datasheet for parameter meanings and sizes.
+"""Set CFR1. See the AD9910 datasheet for parameter meanings.

-This method does not pulse ``IO_UPDATE.``
+This method does not pulse IO_UPDATE.

 :param power_down: Power down bits.
 :param phase_autoclear: Autoclear phase accumulator.
@@ -429,9 +429,9 @@ class AD9910:
 effective_ftw: TInt32 = 1,
 sync_validation_disable: TInt32 = 0,
 matched_latency_enable: TInt32 = 0):
-"""Set CFR2. See the AD9910 datasheet for parameter meanings and sizes.
+"""Set CFR2. See the AD9910 datasheet for parameter meanings.

-This method does not pulse ``IO_UPDATE``.
+This method does not pulse IO_UPDATE.

 :param asf_profile_enable: Enable amplitude scale from single tone profiles.
 :param drg_enable: Digital ramp enable.
@@ -456,14 +456,14 @@ class AD9910:
 """Initialize and configure the DDS.

 Sets up SPI mode, confirms chip presence, powers down unused blocks,
-configures the PLL, waits for PLL lock. Uses the ``IO_UPDATE``
-signal multiple times.
+configures the PLL, waits for PLL lock. Uses the
+IO_UPDATE signal multiple times.

 :param blind: Do not read back DDS identity and do not wait for lock.
 """
 self.sync_data.init()
 if self.sync_data.sync_delay_seed >= 0 and not self.cpld.sync_div:
-raise ValueError("parent CPLD does not drive SYNC")
+raise ValueError("parent cpld does not drive SYNC")
 if self.sync_data.sync_delay_seed >= 0:
 if self.sysclk_per_mu != self.sysclk * self.core.ref_period:
 raise ValueError("incorrect clock ratio for synchronization")
@@ -514,7 +514,7 @@ class AD9910:
 def power_down(self, bits: TInt32 = 0b1111):
 """Power down DDS.

-:param bits: Power-down bits, see datasheet
+:param bits: Power down bits, see datasheet
 """
 self.set_cfr1(power_down=bits)
 self.cpld.io_update.pulse(1 * us)
@@ -534,23 +534,23 @@ class AD9910:
 After the SPI transfer, the shared IO update pin is pulsed to
 activate the data.

-.. seealso: :meth:`AD9910.set_phase_mode` for a definition of the different
+.. seealso: :meth:`set_phase_mode` for a definition of the different
 phase modes.

-:param ftw: Frequency tuning word: 32-bit.
-:param pow_: Phase tuning word: 16-bit unsigned.
-:param asf: Amplitude scale factor: 14-bit unsigned.
+:param ftw: Frequency tuning word: 32 bit.
+:param pow_: Phase tuning word: 16 bit unsigned.
+:param asf: Amplitude scale factor: 14 bit unsigned.
 :param phase_mode: If specified, overrides the default phase mode set
 by :meth:`set_phase_mode` for this call.
 :param ref_time_mu: Fiducial time used to compute absolute or tracking
-phase updates. In machine units as obtained by :meth:`~artiq.language.core.now_mu()`.
+phase updates. In machine units as obtained by `now_mu()`.
 :param profile: Single tone profile number to set (0-7, default: 7).
-Ineffective if ``ram_destination`` is specified.
+Ineffective if `ram_destination` is specified.
 :param ram_destination: RAM destination (:const:`RAM_DEST_FTW`,
 :const:`RAM_DEST_POW`, :const:`RAM_DEST_ASF`,
 :const:`RAM_DEST_POWASF`). If specified, write free DDS parameters
 to the ASF/FTW/POW registers instead of to the single tone profile
-register (default behaviour, see ``profile``).
+register (default behaviour, see `profile`).
 :return: Resulting phase offset word after application of phase
 tracking offset. When using :const:`PHASE_MODE_CONTINUOUS` in
 subsequent calls, use this value as the "current" phase.
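The ftw, pow_ and asf values documented above are the machine-unit encodings of frequency, phase and amplitude. Assuming the usual AD9910 conversion helpers (frequency_to_ftw, turns_to_pow and amplitude_to_asf, which are part of the driver but not shown in this diff), a small helper kernel, meant to be called from another kernel, could program single-tone data directly in machine units:

    from artiq.experiment import kernel, MHz

    @kernel
    def program_tone_mu(dds):
        """Sketch: set single-tone data of an AD9910 channel `dds` in machine units."""
        ftw = dds.frequency_to_ftw(100*MHz)  # 32-bit frequency tuning word
        pow_ = dds.turns_to_pow(0.25)        # 16-bit phase offset word
        asf = dds.amplitude_to_asf(1.0)      # 14-bit amplitude scale factor
        dds.set_mu(ftw, pow_, asf)
        dds.sw.on()  # assumes an RF switch device is configured for this channel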
@@ -598,10 +598,10 @@ class AD9910:
 """Get the frequency tuning word, phase offset word,
 and amplitude scale factor.

-See also :meth:`AD9910.get`.
+.. seealso:: :meth:`get`

 :param profile: Profile number to get (0-7, default: 7)
-:return: A tuple (FTW, POW, ASF)
+:return: A tuple ``(ftw, pow, asf)``
 """

 # Read data
@@ -617,12 +617,12 @@ class AD9910:
 profile: TInt32 = _DEFAULT_PROFILE_RAM,
 nodwell_high: TInt32 = 0, zero_crossing: TInt32 = 0,
 mode: TInt32 = 1):
-"""Set the RAM profile settings. See also AD9910 datasheet.
+"""Set the RAM profile settings.

-:param start: Profile start address in RAM (10-bit).
-:param end: Profile end address in RAM, inclusive (10-bit).
-:param step: Profile time step, counted in DDS sample clock
-cycles, typically 4 ns (16-bit, default: 1)
+:param start: Profile start address in RAM.
+:param end: Profile end address in RAM (last address).
+:param step: Profile time step in units of t_DDS, typically 4 ns
+(default: 1).
 :param profile: Profile index (0 to 7) (default: 0).
 :param nodwell_high: No-dwell high bit (default: 0,
 see AD9910 documentation).
@@ -850,7 +850,7 @@ class AD9910:
 ram_destination: TInt32 = -1) -> TFloat:
 """Set DDS data in SI units.

-See also :meth:`AD9910.set_mu`.
+.. seealso:: :meth:`set_mu`

 :param frequency: Frequency in Hz
 :param phase: Phase tuning word in turns
@@ -871,10 +871,10 @@ class AD9910:
 ) -> TTuple([TFloat, TFloat, TFloat]):
 """Get the frequency, phase, and amplitude.

-See also :meth:`AD9910.get_mu`.
+.. seealso:: :meth:`get_mu`

 :param profile: Profile number to get (0-7, default: 7)
-:return: A tuple (frequency, phase, amplitude)
+:return: A tuple ``(frequency, phase, amplitude)``
 """

 # Get values
@@ -887,10 +887,11 @@ class AD9910:
 def set_att_mu(self, att: TInt32):
 """Set digital step attenuator in machine units.

-This method will write the attenuator settings of all four channels. See also
-:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.set_att_mu>`.
+This method will write the attenuator settings of all four channels.

-:param att: Attenuation setting, 8-bit digital.
+.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`
+
+:param att: Attenuation setting, 8 bit digital.
 """
 self.cpld.set_att_mu(self.chip_select - 4, att)

@@ -898,8 +899,9 @@ class AD9910:
 def set_att(self, att: TFloat):
 """Set digital step attenuator in SI units.

-This method will write the attenuator settings of all four channels. See also
-:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.set_att>`.
+This method will write the attenuator settings of all four channels.
+
+.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`

 :param att: Attenuation in dB.
 """
@@ -907,17 +909,19 @@ class AD9910:

 @kernel
 def get_att_mu(self) -> TInt32:
-"""Get digital step attenuator value in machine units. See also
-:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.get_channel_att_mu>`.
+"""Get digital step attenuator value in machine units.

-:return: Attenuation setting, 8-bit digital.
+.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`
+
+:return: Attenuation setting, 8 bit digital.
 """
 return self.cpld.get_channel_att_mu(self.chip_select - 4)

 @kernel
 def get_att(self) -> TFloat:
-"""Get digital step attenuator value in SI units. See also
-:meth:`CPLD.get_channel_att <artiq.coredevice.urukul.CPLD.get_channel_att>`.
+"""Get digital step attenuator value in SI units.
+
+.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`

 :return: Attenuation in dB.
 """
@@ -939,16 +943,16 @@ class AD9910:
 window: TInt32,
 en_sync_gen: TInt32 = 0):
 """Set the relevant parameters in the multi device synchronization
-register. See the AD9910 datasheet for details. The ``SYNC`` clock
-generator preset value is set to zero, and the ``SYNC_OUT`` generator is
+register. See the AD9910 datasheet for details. The SYNC clock
+generator preset value is set to zero, and the SYNC_OUT generator is
 disabled by default.

-:param in_delay: ``SYNC_IN`` delay tap (0-31) in steps of ~75ps
-:param window: Symmetric ``SYNC_IN`` validation window (0-15) in
+:param in_delay: SYNC_IN delay tap (0-31) in steps of ~75ps
+:param window: Symmetric SYNC_IN validation window (0-15) in
 steps of ~75ps for both hold and setup margin.
 :param en_sync_gen: Whether to enable the DDS-internal sync generator
-(``SYNC_OUT``, cf. ``sync_sel == 1``). Should be left off for the normal
-use case, where the ``SYNC`` clock is supplied by the core device.
+(SYNC_OUT, cf. sync_sel == 1). Should be left off for the normal
+use case, where the SYNC clock is supplied by the core device.
 """
 self.write32(_AD9910_REG_SYNC,
 (window << 28) | # SYNC S/H validation delay
@@ -961,9 +965,9 @@ class AD9910:

 @kernel
 def clear_smp_err(self):
-"""Clear the ``SMP_ERR`` flag and enables ``SMP_ERR`` validity monitoring.
+"""Clear the SMP_ERR flag and enables SMP_ERR validity monitoring.

-Violations of the ``SYNC_IN`` sample and hold margins will result in
+Violations of the SYNC_IN sample and hold margins will result in
 SMP_ERR being asserted. This then also activates the red LED on
 the respective Urukul channel.

@@ -978,9 +982,9 @@ class AD9910:
 @kernel
 def tune_sync_delay(self,
 search_seed: TInt32 = 15) -> TTuple([TInt32, TInt32]):
-"""Find a stable ``SYNC_IN`` delay.
+"""Find a stable SYNC_IN delay.

-This method first locates a valid ``SYNC_IN`` delay at zero validation
+This method first locates a valid SYNC_IN delay at zero validation
 window size (setup/hold margin) by scanning around `search_seed`. It
 then looks for similar valid delays at successively larger validation
 window sizes until none can be found. It then decreases the validation
@@ -989,13 +993,13 @@ class AD9910:

 This method and :meth:`tune_io_update_delay` can be run in any order.

-:param search_seed: Start value for valid ``SYNC_IN`` delay search.
+:param search_seed: Start value for valid SYNC_IN delay search.
 Defaults to 15 (half range).
 :return: Tuple of optimal delay and window size.
 """
 if not self.cpld.sync_div:
 raise ValueError("parent cpld does not drive SYNC")
-search_span = 13
+search_span = 31
 # FIXME https://github.com/sinara-hw/Urukul/issues/16
 # should both be 2-4 once kasli sync_in jitter is identified
 min_window = 0
@@ -1036,16 +1040,16 @@ class AD9910:
 def measure_io_update_alignment(self, delay_start: TInt64,
 delay_stop: TInt64) -> TInt32:
 """Use the digital ramp generator to locate the alignment between
-``IO_UPDATE`` and ``SYNC_CLK``.
+IO_UPDATE and SYNC_CLK.

 The ramp generator is set up to a linear frequency ramp
-``(dFTW/t_SYNC_CLK=1)`` and started at a coarse RTIO time stamp plus
-``delay_start`` and stopped at a coarse RTIO time stamp plus
-``delay_stop``.
+(dFTW/t_SYNC_CLK=1) and started at a coarse RTIO time stamp plus
+`delay_start` and stopped at a coarse RTIO time stamp plus
+`delay_stop`.

-:param delay_start: Start ``IO_UPDATE`` delay in machine units.
-:param delay_stop: Stop ``IO_UPDATE`` delay in machine units.
-:return: Odd/even ``SYNC_CLK`` cycle indicator.
+:param delay_start: Start IO_UPDATE delay in machine units.
+:param delay_stop: Stop IO_UPDATE delay in machine units.
+:return: Odd/even SYNC_CLK cycle indicator.
 """
 # set up DRG
 self.set_cfr1(drg_load_lrr=1, drg_autoclear=1)
@@ -1077,19 +1081,19 @@ class AD9910:

 @kernel
 def tune_io_update_delay(self) -> TInt32:
-"""Find a stable ``IO_UPDATE`` delay alignment.
+"""Find a stable IO_UPDATE delay alignment.

-Scan through increasing ``IO_UPDATE`` delays until a delay is found that
-lets ``IO_UPDATE`` be registered in the next ``SYNC_CLK`` cycle. Return a
-``IO_UPDATE`` delay that is as far away from that ``SYNC_CLK`` edge
+Scan through increasing IO_UPDATE delays until a delay is found that
+lets IO_UPDATE be registered in the next SYNC_CLK cycle. Return a
+IO_UPDATE delay that is as far away from that SYNC_CLK edge
 as possible.

-This method assumes that the ``IO_UPDATE`` TTLOut device has one machine
+This method assumes that the IO_UPDATE TTLOut device has one machine
 unit resolution (SERDES).

 This method and :meth:`tune_sync_delay` can be run in any order.

-:return: Stable ``IO_UPDATE`` delay to be passed to the constructor
+:return: Stable IO_UPDATE delay to be passed to the constructor
 :class:`AD9910` via the device database.
 """
 period = self.sysclk_per_mu * 4 # SYNC_CLK period

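The sync-tuning docstrings above imply a one-off calibration workflow: run the tuners once, then record the returned taps as sync_delay_seed and io_update_delay in the device database. A sketch of that workflow, with an assumed channel name and assuming the Urukul CPLD drives SYNC (nonzero sync_div):

    from artiq.experiment import EnvExperiment, kernel, delay, ms

    class TuneUrukulSync(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("urukul0_ch0")  # AD9910 channel, name assumed

        @kernel
        def run(self):
            self.core.reset()
            self.urukul0_ch0.cpld.init()
            self.urukul0_ch0.init()
            delay(10*ms)
            delay_tap, window = self.urukul0_ch0.tune_sync_delay()
            io_delay = self.urukul0_ch0.tune_io_update_delay()
            # Record the results; they would then be copied into the device
            # database as sync_delay_seed and io_update_delay for this channel.
            self.set_dataset("sync_delay_seed", delay_tap, broadcast=True)
            self.set_dataset("sync_window", window, broadcast=True)
            self.set_dataset("io_update_delay", io_delay, broadcast=True)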
@ -11,7 +11,7 @@ from artiq.coredevice import urukul
|
||||||
|
|
||||||
class AD9912:
|
class AD9912:
|
||||||
"""
|
"""
|
||||||
AD9912 DDS channel on Urukul.
|
AD9912 DDS channel on Urukul
|
||||||
|
|
||||||
This class supports a single DDS channel and exposes the DDS,
|
This class supports a single DDS channel and exposes the DDS,
|
||||||
the digital step attenuator, and the RF switch.
|
the digital step attenuator, and the RF switch.
|
||||||
|
@ -22,17 +22,15 @@ class AD9912:
|
||||||
:param sw_device: Name of the RF switch device. The RF switch is a
|
:param sw_device: Name of the RF switch device. The RF switch is a
|
||||||
TTLOut channel available as the :attr:`sw` attribute of this instance.
|
TTLOut channel available as the :attr:`sw` attribute of this instance.
|
||||||
:param pll_n: DDS PLL multiplier. The DDS sample clock is
|
:param pll_n: DDS PLL multiplier. The DDS sample clock is
|
||||||
``f_ref / clk_div * pll_n`` where ``f_ref`` is the reference frequency and
|
f_ref/clk_div*pll_n where f_ref is the reference frequency and clk_div
|
||||||
``clk_div`` is the reference clock divider (both set in the parent
|
is the reference clock divider (both set in the parent Urukul CPLD
|
||||||
Urukul CPLD instance).
|
instance).
|
||||||
:param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
|
|
||||||
Note that when bypassing the PLL the red front panel LED may remain on.
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
|
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
|
||||||
pll_n=10, pll_en=1):
|
pll_n=10):
|
||||||
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
|
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
|
||||||
"pll_n", "pll_en", "ftw_per_hz"}
|
"pll_n", "ftw_per_hz"}
|
||||||
self.cpld = dmgr.get(cpld_device)
|
self.cpld = dmgr.get(cpld_device)
|
||||||
self.core = self.cpld.core
|
self.core = self.cpld.core
|
||||||
self.bus = self.cpld.bus
|
self.bus = self.cpld.bus
|
||||||
|
@ -41,16 +39,8 @@ class AD9912:
|
||||||
if sw_device:
|
if sw_device:
|
||||||
self.sw = dmgr.get(sw_device)
|
self.sw = dmgr.get(sw_device)
|
||||||
self.kernel_invariants.add("sw")
|
self.kernel_invariants.add("sw")
|
||||||
self.pll_en = pll_en
|
|
||||||
self.pll_n = pll_n
|
self.pll_n = pll_n
|
||||||
if pll_en:
|
sysclk = self.cpld.refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
|
||||||
refclk = self.cpld.refclk
|
|
||||||
if refclk < 11e6:
|
|
||||||
# use SYSCLK PLL Doubler
|
|
||||||
refclk = refclk * 2
|
|
||||||
sysclk = refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
|
|
||||||
else:
|
|
||||||
sysclk = self.cpld.refclk
|
|
||||||
assert sysclk <= 1e9
|
assert sysclk <= 1e9
|
||||||
self.ftw_per_hz = 1 / sysclk * (int64(1) << 48)
|
self.ftw_per_hz = 1 / sysclk * (int64(1) << 48)
|
||||||
|
|
||||||
|
@ -101,7 +91,7 @@ class AD9912:
|
||||||
|
|
||||||
Sets up SPI mode, confirms chip presence, powers down unused blocks,
|
Sets up SPI mode, confirms chip presence, powers down unused blocks,
|
||||||
and configures the PLL. Does not wait for PLL lock. Uses the
|
and configures the PLL. Does not wait for PLL lock. Uses the
|
||||||
``IO_UPDATE`` signal multiple times.
|
IO_UPDATE signal multiple times.
|
||||||
"""
|
"""
|
||||||
# SPI mode
|
# SPI mode
|
||||||
self.write(AD9912_SER_CONF, 0x99, length=1)
|
self.write(AD9912_SER_CONF, 0x99, length=1)
|
||||||
|
@ -112,19 +102,13 @@ class AD9912:
|
||||||
raise ValueError("Urukul AD9912 product id mismatch")
|
raise ValueError("Urukul AD9912 product id mismatch")
|
||||||
delay(50 * us)
|
delay(50 * us)
|
||||||
# HSTL power down, CMOS power down
|
# HSTL power down, CMOS power down
|
||||||
pwrcntrl1 = 0x80 | ((~self.pll_en & 1) << 4)
|
self.write(AD9912_PWRCNTRL1, 0x80, length=1)
|
||||||
self.write(AD9912_PWRCNTRL1, pwrcntrl1, length=1)
|
self.cpld.io_update.pulse(2 * us)
|
||||||
|
self.write(AD9912_N_DIV, self.pll_n // 2 - 2, length=1)
|
||||||
|
self.cpld.io_update.pulse(2 * us)
|
||||||
|
# I_cp = 375 µA, VCO high range
|
||||||
|
self.write(AD9912_PLLCFG, 0b00000101, length=1)
|
||||||
self.cpld.io_update.pulse(2 * us)
|
self.cpld.io_update.pulse(2 * us)
|
||||||
if self.pll_en:
|
|
||||||
self.write(AD9912_N_DIV, self.pll_n // 2 - 2, length=1)
|
|
||||||
self.cpld.io_update.pulse(2 * us)
|
|
||||||
# I_cp = 375 µA, VCO high range
|
|
||||||
if self.cpld.refclk < 11e6:
|
|
||||||
-                # enable SYSCLK PLL Doubler
-                self.write(AD9912_PLLCFG, 0b00001101, length=1)
-            else:
-                self.write(AD9912_PLLCFG, 0b00000101, length=1)
-            self.cpld.io_update.pulse(2 * us)
             delay(1 * ms)

     @kernel

@@ -133,9 +117,9 @@ class AD9912:

         This method will write the attenuator settings of all four channels.

-        See also :meth:`~artiq.coredevice.urukul.CPLD.set_att_mu`.
+        .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`

-        :param att: Attenuation setting, 8-bit digital.
+        :param att: Attenuation setting, 8 bit digital.
         """
         self.cpld.set_att_mu(self.chip_select - 4, att)

@@ -145,7 +129,7 @@ class AD9912:

         This method will write the attenuator settings of all four channels.

-        See also :meth:`~artiq.coredevice.urukul.CPLD.set_att`.
+        .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`

         :param att: Attenuation in dB. Higher values mean more attenuation.
         """

@@ -155,9 +139,9 @@ class AD9912:
     def get_att_mu(self) -> TInt32:
         """Get digital step attenuator value in machine units.

-        See also :meth:`~artiq.coredevice.urukul.CPLD.get_channel_att_mu`.
+        .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`

-        :return: Attenuation setting, 8-bit digital.
+        :return: Attenuation setting, 8 bit digital.
         """
         return self.cpld.get_channel_att_mu(self.chip_select - 4)

@@ -165,7 +149,7 @@ class AD9912:
     def get_att(self) -> TFloat:
         """Get digital step attenuator value in SI units.

-        See also :meth:`~artiq.coredevice.urukul.CPLD.get_channel_att`.
+        .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`

         :return: Attenuation in dB.
         """

@@ -178,8 +162,8 @@ class AD9912:
         After the SPI transfer, the shared IO update pin is pulsed to
         activate the data.

-        :param ftw: Frequency tuning word: 48-bit unsigned.
-        :param pow_: Phase tuning word: 16-bit unsigned.
+        :param ftw: Frequency tuning word: 48 bit unsigned.
+        :param pow_: Phase tuning word: 16 bit unsigned.
         """
         # streaming transfer of FTW and POW
         self.bus.set_config_mu(urukul.SPI_CONFIG, 16,

@@ -197,9 +181,9 @@ class AD9912:
     def get_mu(self) -> TTuple([TInt64, TInt32]):
         """Get the frequency tuning word and phase offset word.

-        See also :meth:`AD9912.get`.
+        .. seealso:: :meth:`get`

-        :return: A tuple (FTW, POW).
+        :return: A tuple ``(ftw, pow)``.
         """

         # Read data

@@ -247,7 +231,7 @@ class AD9912:
     def set(self, frequency: TFloat, phase: TFloat = 0.0):
         """Set profile 0 data in SI units.

-        See also :meth:`AD9912.set_mu`.
+        .. seealso:: :meth:`set_mu`

         :param frequency: Frequency in Hz
         :param phase: Phase tuning word in turns

@@ -259,9 +243,9 @@ class AD9912:
     def get(self) -> TTuple([TFloat, TFloat]):
         """Get the frequency and phase.

-        See also :meth:`AD9912.get_mu`.
+        .. seealso:: :meth:`get_mu`

-        :return: A tuple (frequency, phase).
+        :return: A tuple ``(frequency, phase)``.
         """

         # Get values
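Editorial note, not part of the diff: the hunks above only touch docstrings, but the 48-bit frequency tuning word they mention maps to an output frequency in a simple way. The sketch below illustrates this; the helper names and the 1 GHz sysclk are assumptions for illustration only.

# Editorial sketch (not part of the diff): relation between a 48-bit FTW and
# the output frequency of a DDS such as the AD9912. sysclk=1e9 is an example.
def frequency_to_ftw(frequency, sysclk=1e9):
    ftw = round(frequency * (1 << 48) / sysclk)
    assert 0 <= ftw < (1 << 48)      # 48-bit unsigned, as the docstring says
    return ftw

def ftw_to_frequency(ftw, sysclk=1e9):
    return ftw * sysclk / (1 << 48)

assert abs(ftw_to_frequency(frequency_to_ftw(100e6)) - 100e6) < 1e-3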
@@ -49,7 +49,7 @@ class AD9914:
     The time cursor is not modified by any function in this class.

     Output event replacement is not supported and issuing commands at the same
-    time results in collision errors.
+    time is an error.

     :param sysclk: DDS system frequency. The DDS system clock must be a
         phase-locked multiple of the RTIO clock.

@@ -80,13 +80,6 @@ class AD9914:
         self.set_x_duration_mu = 7 * self.write_duration_mu
         self.exit_x_duration_mu = 3 * self.write_duration_mu

-    @staticmethod
-    def get_rtio_channels(bus_channel, channel, **kwargs):
-        # return only first entry, as there are several devices with the same RTIO channel
-        if channel == 0:
-            return [(bus_channel, None)]
-        return []
-
     @kernel
     def write(self, addr, data):
         rtio_output((self.bus_channel << 8) | addr, data)

@@ -134,7 +127,7 @@ class AD9914:
         timing margin.

         :param sync_delay: integer from 0 to 0x3f that sets the value of
-            ``SYNC_OUT`` (bits 3-5) and ``SYNC_IN`` (bits 0-2) delay ADJ bits.
+            SYNC_OUT (bits 3-5) and SYNC_IN (bits 0-2) delay ADJ bits.
         """
         delay_mu(-self.init_sync_duration_mu)
         self.write(AD9914_GPIO, (1 << self.channel) << 1)
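Editorial note, not part of the diff: the sync_delay docstring above packs two 3-bit fields into one 6-bit value. A minimal sketch of that packing, with a hypothetical helper name, is shown below.

# Editorial sketch (not part of the diff): packing SYNC_IN (bits 0-2) and
# SYNC_OUT (bits 3-5) delay ADJ fields into the 6-bit sync_delay value.
def pack_sync_delay(sync_in, sync_out):
    assert 0 <= sync_in <= 7 and 0 <= sync_out <= 7
    return (sync_out << 3) | sync_in      # result is in the range 0..0x3f

assert pack_sync_delay(0b010, 0b101) == 0b101010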
@@ -73,10 +73,6 @@ class ADF5356:

         self._init_registers()

-    @staticmethod
-    def get_rtio_channels(**kwargs):
-        return []
-
     @kernel
     def init(self, blind=False):
         """

@@ -84,11 +80,10 @@ class ADF5356:

         :param blind: Do not attempt to verify presence.
         """
-        self.sync()
         if not blind:
             # MUXOUT = VDD
             self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 1)
-            self.write(self.regs[4])
+            self.sync()
             delay(1000 * us)
             if not self.read_muxout():
                 raise ValueError("MUXOUT not high")

@@ -96,7 +91,7 @@ class ADF5356:

             # MUXOUT = DGND
             self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 2)
-            self.write(self.regs[4])
+            self.sync()
             delay(1000 * us)
             if self.read_muxout():
                 raise ValueError("MUXOUT not low")

@@ -104,25 +99,14 @@ class ADF5356:

             # MUXOUT = digital lock-detect
             self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 6)
-            self.write(self.regs[4])
-
-    @kernel
-    def set_att(self, att):
-        """Set digital step attenuator in SI units.
-
-        This method will write the attenuator settings of the channel.
-
-        See also :meth:`Mirny.set_att<artiq.coredevice.mirny.Mirny.set_att>`.
-
-        :param att: Attenuation in dB.
-        """
-        self.cpld.set_att(self.channel, att)
+        else:
+            self.sync()

     @kernel
     def set_att_mu(self, att):
         """Set digital step attenuator in machine units.

-        :param att: Attenuation setting, 8-bit digital.
+        :param att: Attenuation setting, 8 bit digital.
         """
         self.cpld.set_att_mu(self.channel, att)

@@ -531,14 +515,14 @@ class ADF5356:
     @portable
     def _compute_pfd_frequency(self, r, d, t) -> TInt64:
         """
-        Calculate the PFD frequency from the given reference path parameters.
+        Calculate the PFD frequency from the given reference path parameters
         """
         return int64(self.sysclk * ((1 + d) / (r * (1 + t))))

     @portable
     def _compute_reference_counter(self) -> TInt32:
         """
-        Determine the reference counter R that maximizes the PFD frequency.
+        Determine the reference counter R that maximizes the PFD frequency
         """
         d = ADF5356_REG4_R_DOUBLER_GET(self.regs[4])
         t = ADF5356_REG4_R_DIVIDER_GET(self.regs[4])

@@ -565,15 +549,14 @@ def calculate_pll(f_vco: TInt64, f_pfd: TInt64):
     """
     Calculate fractional-N PLL parameters such that

-    ``f_vco = f_pfd * (n + (frac1 + frac2/mod2) / mod1)``
+    ``f_vco`` = ``f_pfd`` * (``n`` + (``frac1`` + ``frac2``/``mod2``) / ``mod1``)

     where
-
     ``mod1 = 2**24`` and ``mod2 <= 2**28``

     :param f_vco: target VCO frequency
     :param f_pfd: PFD frequency
-    :return: (``n``, ``frac1``, ``(frac2_msb, frac2_lsb)``, ``(mod2_msb, mod2_lsb)``)
+    :return: ``(n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb))``
     """
     f_pfd = int64(f_pfd)
     f_vco = int64(f_vco)
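Editorial note, not part of the diff: the fractional-N relation documented for calculate_pll() above can be evaluated directly. The sketch below does so with exact rational arithmetic; the parameter values are made up for illustration.

# Editorial sketch (not part of the diff): f_vco = f_pfd * (n + (frac1 +
# frac2/mod2) / mod1) with mod1 = 2**24.
from fractions import Fraction

MOD1 = 1 << 24

def synthesized_f_vco(f_pfd, n, frac1, frac2, mod2):
    return Fraction(f_pfd) * (n + (frac1 + Fraction(frac2, mod2)) / MOD1)

# Pure integer-N case: frac1 = frac2 = 0 gives exactly n * f_pfd.
assert synthesized_f_vco(100_000_000, 34, 0, 0, 2) == 34 * 100_000_000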
@@ -1,182 +0,0 @@
-from artiq.language.core import kernel, portable
-from artiq.language.units import us
-
-from numpy import int32
-
-
-# almazny-specific data
-ALMAZNY_LEGACY_REG_BASE = 0x0C
-ALMAZNY_LEGACY_OE_SHIFT = 12
-
-# higher SPI write divider to match almazny shift register timing
-# min SER time before SRCLK rise = 125ns
-# -> div=32 gives 125ns for data before clock rise
-# works at faster dividers too but could be less reliable
-ALMAZNY_LEGACY_SPIT_WR = 32
-
-
-class AlmaznyLegacy:
-    """
-    Almazny (High-frequency mezzanine board for Mirny)
-
-    This applies to Almazny hardware v1.1 and earlier.
-    Use :class:`~artiq.coredevice.almazny.AlmaznyChannel` for Almazny v1.2 and later.
-
-    :param host_mirny: :class:`~artiq.coredevice.mirny.Mirny` device Almazny is connected to
-    """
-
-    def __init__(self, dmgr, host_mirny):
-        self.mirny_cpld = dmgr.get(host_mirny)
-        self.att_mu = [0x3f] * 4
-        self.channel_sw = [0] * 4
-        self.output_enable = False
-
-    @kernel
-    def init(self):
-        self.output_toggle(self.output_enable)
-
-    @kernel
-    def att_to_mu(self, att):
-        """
-        Convert an attenuator setting in dB to machine units.
-
-        :param att: attenuator setting in dB [0-31.5]
-        :return: attenuator setting in machine units
-        """
-        mu = round(att * 2.0)
-        if mu > 63 or mu < 0:
-            raise ValueError("Invalid Almazny attenuator settings!")
-        return mu
-
-    @kernel
-    def mu_to_att(self, att_mu):
-        """
-        Convert a digital attenuator setting to dB.
-
-        :param att_mu: attenuator setting in machine units
-        :return: attenuator setting in dB
-        """
-        return att_mu / 2
-
-    @kernel
-    def set_att(self, channel, att, rf_switch=True):
-        """
-        Sets attenuators on chosen shift register (channel).
-
-        :param channel: index of the register [0-3]
-        :param att: attenuation setting in dBm [0-31.5]
-        :param rf_switch: rf switch (bool)
-        """
-        self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
-
-    @kernel
-    def set_att_mu(self, channel, att_mu, rf_switch=True):
-        """
-        Sets attenuators on chosen shift register (channel).
-
-        :param channel: index of the register [0-3]
-        :param att_mu: attenuation setting in machine units [0-63]
-        :param rf_switch: rf switch (bool)
-        """
-        self.channel_sw[channel] = 1 if rf_switch else 0
-        self.att_mu[channel] = att_mu
-        self._update_register(channel)
-
-    @kernel
-    def output_toggle(self, oe):
-        """
-        Toggles output on all shift registers on or off.
-
-        :param oe: toggle output enable (bool)
-        """
-        self.output_enable = oe
-        cfg_reg = self.mirny_cpld.read_reg(1)
-        en = 1 if self.output_enable else 0
-        delay(100 * us)
-        new_reg = (en << ALMAZNY_LEGACY_OE_SHIFT) | (cfg_reg & 0x3FF)
-        self.mirny_cpld.write_reg(1, new_reg)
-        delay(100 * us)
-
-    @kernel
-    def _flip_mu_bits(self, mu):
-        # in this form MSB is actually 0.5dB attenuator
-        # unnatural for users, so we flip the six bits
-        return (((mu & 0x01) << 5)
-                | ((mu & 0x02) << 3)
-                | ((mu & 0x04) << 1)
-                | ((mu & 0x08) >> 1)
-                | ((mu & 0x10) >> 3)
-                | ((mu & 0x20) >> 5))
-
-    @kernel
-    def _update_register(self, ch):
-        self.mirny_cpld.write_ext(
-            ALMAZNY_LEGACY_REG_BASE + ch,
-            8,
-            self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
-            ALMAZNY_LEGACY_SPIT_WR
-        )
-        delay(100 * us)
-
-
-class AlmaznyChannel:
-    """
-    Driver for one Almazny channel.
-
-    Almazny is a mezzanine for the Quad PLL RF source Mirny that exposes and
-    controls the frequency-doubled outputs.
-    This driver requires Almazny hardware revision v1.2 or later
-    and Mirny CPLD gateware v0.3 or later.
-    Use :class:`~artiq.coredevice.almazny.AlmaznyLegacy` for Almazny hardware v1.1 and earlier.
-
-    :param host_mirny: Mirny CPLD device name
-    :param channel: channel index (0-3)
-    """
-
-    def __init__(self, dmgr, host_mirny, channel):
-        self.channel = channel
-        self.mirny_cpld = dmgr.get(host_mirny)
-
-    @portable
-    def to_mu(self, att, enable, led):
-        """
-        Convert an attenuation in dB, RF switch state and LED state to machine
-        units.
-
-        :param att: attenuator setting in dB (0-31.5)
-        :param enable: RF switch state (bool)
-        :param led: LED state (bool)
-        :return: channel setting in machine units
-        """
-        mu = int32(round(att * 2.))
-        if mu >= 64 or mu < 0:
-            raise ValueError("Attenuation out of range")
-        # unfortunate hardware design: bit reverse
-        mu = ((mu & 0x15) << 1) | ((mu >> 1) & 0x15)
-        mu = ((mu & 0x03) << 4) | (mu & 0x0c) | ((mu >> 4) & 0x03)
-        if enable:
-            mu |= 1 << 6
-        if led:
-            mu |= 1 << 7
-        return mu
-
-    @kernel
-    def set_mu(self, mu):
-        """
-        Set channel state (machine units).
-
-        :param mu: channel state in machine units.
-        """
-        self.mirny_cpld.write_ext(
-            addr=0xc + self.channel, length=8, data=mu, ext_div=32)
-
-    @kernel
-    def set(self, att, enable, led=False):
-        """
-        Set attenuation, RF switch, and LED state (SI units).
-
-        :param att: attenuator setting in dB (0-31.5)
-        :param enable: RF switch state (bool)
-        :param led: LED state (bool)
-        """
-        self.set_mu(self.to_mu(att, enable, led))
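Editorial note, not part of the diff: the mask-and-shift expression in AlmaznyLegacy._flip_mu_bits above is a plain 6-bit reversal. The check below compares it against a string-based reversal for every attenuator code.

# Editorial check (not part of the diff): _flip_mu_bits reverses six bits.
def flip_mu_bits(mu):
    return (((mu & 0x01) << 5) | ((mu & 0x02) << 3) | ((mu & 0x04) << 1)
            | ((mu & 0x08) >> 1) | ((mu & 0x10) >> 3) | ((mu & 0x20) >> 5))

def reverse6(mu):
    return int("{:06b}".format(mu)[::-1], 2)

assert all(flip_mu_bits(mu) == reverse6(mu) for mu in range(64))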
@@ -0,0 +1,79 @@
+from artiq.language.core import kernel, portable, delay
+from artiq.language.units import us, ms
+from artiq.coredevice.shiftreg import ShiftReg
+
+
+@portable
+def to_mu(att):
+    return round(att*2.0) ^ 0x3f
+
+@portable
+def from_mu(att_mu):
+    return 0.5*(att_mu ^ 0x3f)
+
+
+class BaseModAtt:
+    def __init__(self, dmgr, rst_n, clk, le, mosi, miso):
+        self.rst_n = dmgr.get(rst_n)
+        self.shift_reg = ShiftReg(dmgr,
+            clk=clk, ser=mosi, latch=le, ser_in=miso, n=8*4)
+
+    @kernel
+    def reset(self):
+        # HMC's incompetence in digital design and interfaces means that
+        # the HMC542 needs a level low on RST_N and then a rising edge
+        # on Latch Enable. Their "latch" isn't a latch but a DFF.
+        # Of course, it also powers up with a random attenuation, and
+        # that cannot be fixed with simple pull-ups/pull-downs.
+        self.rst_n.off()
+        self.shift_reg.latch.off()
+        delay(1*us)
+        self.shift_reg.latch.on()
+        delay(1*us)
+        self.shift_reg.latch.off()
+        self.rst_n.on()
+        delay(1*us)
+
+    @kernel
+    def set_mu(self, att0, att1, att2, att3):
+        """
+        Sets the four attenuators on BaseMod.
+        The values are in half decibels, between 0 (no attenuation)
+        and 63 (31.5dB attenuation).
+        """
+        word = (
+            (att0 << 2) |
+            (att1 << 10) |
+            (att2 << 18) |
+            (att3 << 26)
+        )
+        self.shift_reg.set(word)
+
+    @kernel
+    def get_mu(self):
+        """
+        Retrieves the current settings of the four attenuators on BaseMod.
+        """
+        word = self.shift_reg.get()
+        att0 = (word >> 2) & 0x3f
+        att1 = (word >> 10) & 0x3f
+        att2 = (word >> 18) & 0x3f
+        att3 = (word >> 26) & 0x3f
+        return att0, att1, att2, att3
+
+    @kernel
+    def set(self, att0, att1, att2, att3):
+        """
+        Sets the four attenuators on BaseMod.
+        The values are in decibels.
+        """
+        self.set_mu(to_mu(att0), to_mu(att1), to_mu(att2), to_mu(att3))
+
+    @kernel
+    def get(self):
+        """
+        Retrieves the current settings of the four attenuators on BaseMod.
+        The values are in decibels.
+        """
+        att0, att1, att2, att3 = self.get_mu()
+        return from_mu(att0), from_mu(att1), from_mu(att2), from_mu(att3)
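Editorial note, not part of the diff: the module-level to_mu()/from_mu() helpers added above encode attenuation as inverted half-dB steps, so the XOR with 0x3f is its own inverse and the conversion round-trips exactly, as the check below shows.

# Editorial check (not part of the diff): to_mu()/from_mu() round-trip.
def to_mu(att):
    return round(att * 2.0) ^ 0x3f

def from_mu(att_mu):
    return 0.5 * (att_mu ^ 0x3f)

for att in (0.0, 0.5, 10.0, 31.5):
    assert from_mu(to_mu(att)) == att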
@@ -21,9 +21,9 @@ class CoreCache:
         """Extract a value from the core device cache.
         After a value is extracted, it cannot be replaced with another value using
         :meth:`put` until all kernel functions finish executing; attempting
-        to replace it will result in a :class:`~artiq.coredevice.exceptions.CacheError`.
+        to replace it will result in a :class:`artiq.coredevice.exceptions.CacheError`.

-        If the cache does not contain any value associated with `key`, an empty list
+        If the cache does not contain any value associated with ``key``, an empty list
         is returned.

         The value is not copied, so mutating it will change what's stored in the cache.
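Editorial note, not part of the diff: a hedged sketch of how an experiment might use the core device cache documented above. The device names and the put() signature are assumptions based on the docstring, not taken from this diff.

# Editorial sketch (not part of the diff), assuming the usual ARTIQ experiment API.
from artiq.experiment import EnvExperiment, kernel

class CacheDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_cache")

    @kernel
    def run(self):
        values = self.core_cache.get("calibration")   # empty list if the key is absent
        if len(values) == 0:
            self.core_cache.put("calibration", [42])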
@@ -0,0 +1,28 @@
+import sys
+import socket
+import logging
+
+logger = logging.getLogger(__name__)
+
+
+def set_keepalive(sock, after_idle, interval, max_fails):
+    if sys.platform.startswith("linux"):
+        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
+        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle)
+        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
+        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)
+    elif sys.platform.startswith("win") or sys.platform.startswith("cygwin"):
+        # setting max_fails is not supported, typically ends up being 5 or 10
+        # depending on Windows version
+        sock.ioctl(socket.SIO_KEEPALIVE_VALS,
+                   (1, after_idle * 1000, interval * 1000))
+    else:
+        logger.warning("TCP keepalive not supported on platform '%s', ignored",
+                       sys.platform)
+
+
+def initialize_connection(host, port):
+    sock = socket.create_connection((host, port))
+    set_keepalive(sock, 10, 10, 3)
+    logger.debug("connected to %s:%d", host, port)
+    return sock
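Editorial note, not part of the diff: a minimal usage sketch for the initialize_connection() helper added above. The address and port are placeholders for a core device; the handshake string is the one used later in this diff by CommKernel.open().

# Editorial sketch (not part of the diff); host/port are example placeholders.
from artiq.coredevice.comm import initialize_connection

sock = initialize_connection("192.168.1.75", 1381)
try:
    sock.sendall(b"ARTIQ coredev\n")
finally:
    sock.close()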
@@ -2,22 +2,15 @@ from operator import itemgetter
 from collections import namedtuple
 from itertools import count
 from contextlib import contextmanager
-from sipyco import keepalive
-import asyncio
 from enum import Enum
 import struct
 import logging
 import socket
-import math


 logger = logging.getLogger(__name__)


-DEFAULT_REF_PERIOD = 1e-9
-ANALYZER_MAGIC = b"ARTIQ Analyzer Proxy\n"
-
-
 class MessageType(Enum):
     output = 0b00
     input = 0b01

@@ -41,13 +34,6 @@ class ExceptionType(Enum):
     i_overflow = 0b100001


-class WaveformType(Enum):
-    ANALOG = 0
-    BIT = 1
-    VECTOR = 2
-    LOG = 3
-
-
 def get_analyzer_dump(host, port=1382):
     sock = socket.create_connection((host, port))
     try:

@@ -116,98 +102,28 @@ def decode_dump(data):
     # messages are big endian
     parts = struct.unpack(endian + "IQbbb", data[:15])
     (sent_bytes, total_byte_count,
-     error_occurred, log_channel, dds_onehot_sel) = parts
-
-    logger.debug("analyzer dump has length %d", sent_bytes)
+     error_occured, log_channel, dds_onehot_sel) = parts

     expected_len = sent_bytes + 15
     if expected_len != len(data):
         raise ValueError("analyzer dump has incorrect length "
                          "(got {}, expected {})".format(
                             len(data), expected_len))
-    if error_occurred:
-        logger.warning("error occurred within the analyzer, "
+    if error_occured:
+        logger.warning("error occured within the analyzer, "
                        "data may be corrupted")
     if total_byte_count > sent_bytes:
         logger.info("analyzer ring buffer has wrapped %d times",
                     total_byte_count//sent_bytes)
-    if sent_bytes == 0:
-        logger.warning("analyzer dump is empty")

     position = 15
     messages = []
     for _ in range(sent_bytes//32):
         messages.append(decode_message(data[position:position+32]))
         position += 32

-    if len(messages) == 1 and isinstance(messages[0], StoppedMessage):
-        logger.warning("analyzer dump is empty aside from stop message")
-
     return DecodedDump(log_channel, bool(dds_onehot_sel), messages)


-# simplified from sipyco broadcast Receiver
-class AnalyzerProxyReceiver:
-    def __init__(self, receive_cb, disconnect_cb=None):
-        self.receive_cb = receive_cb
-        self.disconnect_cb = disconnect_cb
-
-    async def connect(self, host, port):
-        self.reader, self.writer = \
-            await keepalive.async_open_connection(host, port)
-        try:
-            line = await self.reader.readline()
-            assert line == ANALYZER_MAGIC
-            self.receive_task = asyncio.create_task(self._receive_cr())
-        except:
-            self.writer.close()
-            del self.reader
-            del self.writer
-            raise
-
-    async def close(self):
-        self.disconnect_cb = None
-        try:
-            self.receive_task.cancel()
-            try:
-                await self.receive_task
-            except asyncio.CancelledError:
-                pass
-        finally:
-            self.writer.close()
-            del self.reader
-            del self.writer
-
-    async def _receive_cr(self):
-        try:
-            while True:
-                data = bytearray()
-                data.extend(await self.reader.read(1))
-                if len(data) == 0:
-                    # EOF reached, connection lost
-                    return
-                if data[0] == ord("E"):
-                    endian = '>'
-                elif data[0] == ord("e"):
-                    endian = '<'
-                else:
-                    raise ValueError
-                data.extend(await self.reader.readexactly(4))
-                payload_length = struct.unpack(endian + "I", data[1:5])[0]
-                if payload_length > 10 * 512 * 1024:
-                    # 10x buffer size of firmware
-                    raise ValueError
-
-                # The remaining header length is 11 bytes.
-                data.extend(await self.reader.readexactly(payload_length + 11))
-                self.receive_cb(data)
-        except Exception:
-            logger.error("analyzer receiver connection terminating with exception", exc_info=True)
-        finally:
-            if self.disconnect_cb is not None:
-                self.disconnect_cb()
-
-
 def vcd_codes():
     codechars = [chr(i) for i in range(33, 127)]
     for n in count():

@@ -234,129 +150,38 @@ class VCDChannel:
         integer_cast = struct.unpack(">Q", struct.pack(">d", x))[0]
         self.set_value("{:064b}".format(integer_cast))

-    def set_log(self, log_message):
-        value = ""
-        for c in log_message:
-            value += "{:08b}".format(ord(c))
-        self.set_value(value)
-

 class VCDManager:
     def __init__(self, fileobj):
         self.out = fileobj
         self.codes = vcd_codes()
         self.current_time = None
-        self.start_time = 0

     def set_timescale_ps(self, timescale):
         self.out.write("$timescale {}ps $end\n".format(round(timescale)))

-    def get_channel(self, name, width, ty, precision=0, unit=""):
+    def get_channel(self, name, width):
         code = next(self.codes)
         self.out.write("$var wire {width} {code} {name} $end\n"
                        .format(name=name, code=code, width=width))
         return VCDChannel(self.out, code)

     @contextmanager
-    def scope(self, scope, name):
-        self.out.write("$scope module {}/{} $end\n".format(scope, name))
+    def scope(self, name):
+        self.out.write("$scope module {} $end\n".format(name))
         yield
         self.out.write("$upscope $end\n")

     def set_time(self, time):
-        time -= self.start_time
         if time != self.current_time:
             self.out.write("#{}\n".format(time))
             self.current_time = time

-    def set_start_time(self, time):
-        self.start_time = time
-
-    def set_end_time(self, time):
-        pass
-
-
-class WaveformManager:
-    def __init__(self):
-        self.current_time = 0
-        self.start_time = 0
-        self.end_time = 0
-        self.channels = list()
-        self.current_scope = ""
-        self.trace = {"timescale": 1, "stopped_x": None, "logs": dict(), "data": dict()}
-
-    def set_timescale_ps(self, timescale):
-        self.trace["timescale"] = int(timescale)
-
-    def get_channel(self, name, width, ty, precision=0, unit=""):
-        if ty == WaveformType.LOG:
-            self.trace["logs"][self.current_scope + name] = (ty, width, precision, unit)
-        data = self.trace["data"][self.current_scope + name] = list()
-        channel = WaveformChannel(data, self.current_time)
-        self.channels.append(channel)
-        return channel
-
-    @contextmanager
-    def scope(self, scope, name):
-        old_scope = self.current_scope
-        self.current_scope = scope + "/"
-        yield
-        self.current_scope = old_scope
-
-    def set_time(self, time):
-        time -= self.start_time
-        for channel in self.channels:
-            channel.set_time(time)
-
-    def set_start_time(self, time):
-        self.start_time = time
-        if self.trace["stopped_x"] is not None:
-            self.trace["stopped_x"] = self.end_time - self.start_time
-
-    def set_end_time(self, time):
-        self.end_time = time
-        self.trace["stopped_x"] = self.end_time - self.start_time
-
-
-class WaveformChannel:
-    def __init__(self, data, current_time):
-        self.data = data
-        self.current_time = current_time
-
-    def set_value(self, value):
-        self.data.append((self.current_time, value))
-
-    def set_value_double(self, x):
-        self.data.append((self.current_time, x))
-
-    def set_time(self, time):
-        self.current_time = time
-
-    def set_log(self, log_message):
-        self.data.append((self.current_time, log_message))
-
-
-class ChannelSignatureManager:
-    def __init__(self):
-        self.current_scope = ""
-        self.channels = dict()
-
-    def get_channel(self, name, width, ty, precision=0, unit=""):
-        self.channels[self.current_scope + name] = (ty, width, precision, unit)
-        return None
-
-    @contextmanager
-    def scope(self, scope, name):
-        old_scope = self.current_scope
-        self.current_scope = scope + "/"
-        yield
-        self.current_scope = old_scope
-
-
 class TTLHandler:
-    def __init__(self, manager, name):
+    def __init__(self, vcd_manager, name):
         self.name = name
-        self.channel_value = manager.get_channel("ttl/" + name, 1, ty=WaveformType.BIT)
+        self.channel_value = vcd_manager.get_channel("ttl/" + name, 1)
         self.last_value = "X"
         self.oe = True
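Editorial note, not part of the diff: VCDChannel.set_value_double() in the hunk above reinterprets a float as its 64-bit IEEE 754 pattern before emitting it. The standalone sketch below reproduces that cast and checks the resulting bit string for 1.5.

# Editorial sketch (not part of the diff): float-to-bits cast used above.
import struct

x = 1.5
integer_cast = struct.unpack(">Q", struct.pack(">d", x))[0]
assert "{:064b}".format(integer_cast) == "0011111111111000" + "0" * 48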
@@ -381,12 +206,11 @@ class TTLHandler:


 class TTLClockGenHandler:
-    def __init__(self, manager, name, ref_period):
+    def __init__(self, vcd_manager, name, ref_period):
         self.name = name
         self.ref_period = ref_period
-        precision = max(0, math.ceil(math.log10(2**24 * ref_period) + 6))
-        self.channel_frequency = manager.get_channel(
-            "ttl_clkgen/" + name, 64, ty=WaveformType.ANALOG, precision=precision, unit="MHz")
+        self.channel_frequency = vcd_manager.get_channel(
+            "ttl_clkgen/" + name, 64)

     def process_message(self, message):
         if isinstance(message, OutputMessage):

@@ -397,8 +221,8 @@ class TTLClockGenHandler:


 class DDSHandler:
-    def __init__(self, manager, onehot_sel, sysclk):
-        self.manager = manager
+    def __init__(self, vcd_manager, onehot_sel, sysclk):
+        self.vcd_manager = vcd_manager
         self.onehot_sel = onehot_sel
         self.sysclk = sysclk

@@ -407,18 +231,11 @@ class DDSHandler:

     def add_dds_channel(self, name, dds_channel_nr):
         dds_channel = dict()
-        frequency_precision = max(0, math.ceil(math.log10(2**32 / self.sysclk) + 6))
-        phase_precision = max(0, math.ceil(math.log10(2**16)))
-        with self.manager.scope("dds", name):
+        with self.vcd_manager.scope("dds/{}".format(name)):
             dds_channel["vcd_frequency"] = \
-                self.manager.get_channel(name + "/frequency", 64,
-                                         ty=WaveformType.ANALOG,
-                                         precision=frequency_precision,
-                                         unit="MHz")
+                self.vcd_manager.get_channel(name + "/frequency", 64)
             dds_channel["vcd_phase"] = \
-                self.manager.get_channel(name + "/phase", 64,
-                                         ty=WaveformType.ANALOG,
-                                         precision=phase_precision)
+                self.vcd_manager.get_channel(name + "/phase", 64)
         dds_channel["ftw"] = [None, None]
         dds_channel["pow"] = None
         self.dds_channels[dds_channel_nr] = dds_channel

@@ -468,10 +285,10 @@ class DDSHandler:


 class WishboneHandler:
-    def __init__(self, manager, name, read_bit):
+    def __init__(self, vcd_manager, name, read_bit):
         self._reads = []
         self._read_bit = read_bit
-        self.stb = manager.get_channel(name + "/stb", 1, ty=WaveformType.BIT)
+        self.stb = vcd_manager.get_channel("{}/{}".format(name, "stb"), 1)

     def process_message(self, message):
         self.stb.set_value("1")

@@ -501,17 +318,16 @@ class WishboneHandler:


 class SPIMasterHandler(WishboneHandler):
-    def __init__(self, manager, name):
+    def __init__(self, vcd_manager, name):
         self.channels = {}
-        self.scope = "spi"
-        with manager.scope("spi", name):
-            super().__init__(manager, name, read_bit=0b100)
+        with vcd_manager.scope("spi/{}".format(name)):
+            super().__init__(vcd_manager, name, read_bit=0b100)
             for reg_name, reg_width in [
                     ("config", 32), ("chip_select", 16),
                     ("write_length", 8), ("read_length", 8),
                     ("write", 32), ("read", 32)]:
-                self.channels[reg_name] = manager.get_channel(
-                    "{}/{}".format(name, reg_name), reg_width, ty=WaveformType.VECTOR)
+                self.channels[reg_name] = vcd_manager.get_channel(
+                    "{}/{}".format(name, reg_name), reg_width)

     def process_write(self, address, data):
         if address == 0:

@@ -536,12 +352,11 @@ class SPIMasterHandler(WishboneHandler):


 class SPIMaster2Handler(WishboneHandler):
-    def __init__(self, manager, name):
+    def __init__(self, vcd_manager, name):
         self._reads = []
         self.channels = {}
-        self.scope = "spi2"
-        with manager.scope("spi2", name):
-            self.stb = manager.get_channel(name + "/stb", 1, ty=WaveformType.BIT)
+        with vcd_manager.scope("spi2/{}".format(name)):
+            self.stb = vcd_manager.get_channel("{}/{}".format(name, "stb"), 1)
             for reg_name, reg_width in [
                     ("flags", 8),
                     ("length", 5),

@@ -549,8 +364,8 @@ class SPIMaster2Handler(WishboneHandler):
                     ("chip_select", 8),
                     ("write", 32),
                     ("read", 32)]:
-                self.channels[reg_name] = manager.get_channel(
-                    "{}/{}".format(name, reg_name), reg_width, ty=WaveformType.VECTOR)
+                self.channels[reg_name] = vcd_manager.get_channel(
+                    "{}/{}".format(name, reg_name), reg_width)

     def process_message(self, message):
         self.stb.set_value("1")

@@ -598,12 +413,11 @@ def _extract_log_chars(data):


 class LogHandler:
-    def __init__(self, manager, log_channels):
-        self.channels = dict()
-        for name, maxlength in log_channels.items():
-            self.channels[name] = manager.get_channel("logs/" + name,
-                                                      maxlength * 8,
-                                                      ty=WaveformType.LOG)
+    def __init__(self, vcd_manager, vcd_log_channels):
+        self.vcd_channels = dict()
+        for name, maxlength in vcd_log_channels.items():
+            self.vcd_channels[name] = vcd_manager.get_channel("log/" + name,
+                                                              maxlength*8)
         self.current_entry = ""

     def process_message(self, message):

@@ -611,12 +425,15 @@ class LogHandler:
             self.current_entry += _extract_log_chars(message.data)
             if len(self.current_entry) > 1 and self.current_entry[-1] == "\x1D":
                 channel_name, log_message = self.current_entry[:-1].split("\x1E", maxsplit=1)
-                self.channels[channel_name].set_log(log_message)
+                vcd_value = ""
+                for c in log_message:
+                    vcd_value += "{:08b}".format(ord(c))
+                self.vcd_channels[channel_name].set_value(vcd_value)
                 self.current_entry = ""


-def get_log_channels(log_channel, messages):
-    log_channels = dict()
+def get_vcd_log_channels(log_channel, messages):
+    vcd_log_channels = dict()
     log_entry = ""
     for message in messages:
         if (isinstance(message, OutputMessage)

@@ -625,13 +442,13 @@ def get_log_channels(log_channel, messages):
             if len(log_entry) > 1 and log_entry[-1] == "\x1D":
                 channel_name, log_message = log_entry[:-1].split("\x1E", maxsplit=1)
                 l = len(log_message)
-                if channel_name in log_channels:
-                    if log_channels[channel_name] < l:
-                        log_channels[channel_name] = l
+                if channel_name in vcd_log_channels:
+                    if vcd_log_channels[channel_name] < l:
+                        vcd_log_channels[channel_name] = l
                 else:
-                    log_channels[channel_name] = l
+                    vcd_log_channels[channel_name] = l
                 log_entry = ""
-    return log_channels
+    return vcd_log_channels


 def get_single_device_argument(devices, module, cls, argument):

@@ -658,7 +475,7 @@ def get_dds_sysclk(devices):
                                       ("AD9914",), "sysclk")


-def create_channel_handlers(manager, devices, ref_period,
+def create_channel_handlers(vcd_manager, devices, ref_period,
                             dds_sysclk, dds_onehot_sel):
     channel_handlers = dict()
     for name, desc in sorted(devices.items(), key=itemgetter(0)):

@@ -666,11 +483,11 @@ def create_channel_handlers(manager, devices, ref_period,
         if (desc["module"] == "artiq.coredevice.ttl"
                 and desc["class"] in {"TTLOut", "TTLInOut"}):
             channel = desc["arguments"]["channel"]
-            channel_handlers[channel] = TTLHandler(manager, name)
+            channel_handlers[channel] = TTLHandler(vcd_manager, name)
         if (desc["module"] == "artiq.coredevice.ttl"
                 and desc["class"] == "TTLClockGen"):
             channel = desc["arguments"]["channel"]
-            channel_handlers[channel] = TTLClockGenHandler(manager, name, ref_period)
+            channel_handlers[channel] = TTLClockGenHandler(vcd_manager, name, ref_period)
         if (desc["module"] == "artiq.coredevice.ad9914"
                 and desc["class"] == "AD9914"):
             dds_bus_channel = desc["arguments"]["bus_channel"]

@@ -678,60 +495,37 @@ def create_channel_handlers(manager, devices, ref_period,
             if dds_bus_channel in channel_handlers:
                 dds_handler = channel_handlers[dds_bus_channel]
             else:
-                dds_handler = DDSHandler(manager, dds_onehot_sel, dds_sysclk)
+                dds_handler = DDSHandler(vcd_manager, dds_onehot_sel, dds_sysclk)
                 channel_handlers[dds_bus_channel] = dds_handler
             dds_handler.add_dds_channel(name, dds_channel)
         if (desc["module"] == "artiq.coredevice.spi2" and
                 desc["class"] == "SPIMaster"):
             channel = desc["arguments"]["channel"]
             channel_handlers[channel] = SPIMaster2Handler(
-                manager, name)
+                vcd_manager, name)
     return channel_handlers


-def get_channel_list(devices):
-    manager = ChannelSignatureManager()
-    create_channel_handlers(manager, devices, 1e-9, 3e9, False)
-    ref_period = get_ref_period(devices)
-    if ref_period is None:
-        ref_period = DEFAULT_REF_PERIOD
-    precision = max(0, math.ceil(math.log10(1 / ref_period) - 6))
-    manager.get_channel("rtio_slack", 64, ty=WaveformType.ANALOG, precision=precision, unit="us")
-    return manager.channels
-
-
 def get_message_time(message):
     return getattr(message, "timestamp", message.rtio_counter)


 def decoded_dump_to_vcd(fileobj, devices, dump, uniform_interval=False):
     vcd_manager = VCDManager(fileobj)
-    decoded_dump_to_target(vcd_manager, devices, dump, uniform_interval)
-
-
-def decoded_dump_to_waveform_data(devices, dump, uniform_interval=False):
-    manager = WaveformManager()
-    decoded_dump_to_target(manager, devices, dump, uniform_interval)
-    return manager.trace
-
-
-def decoded_dump_to_target(manager, devices, dump, uniform_interval):
     ref_period = get_ref_period(devices)

-    if ref_period is None:
-        logger.warning("unable to determine core device ref_period")
-        ref_period = DEFAULT_REF_PERIOD
-    if not uniform_interval:
-        manager.set_timescale_ps(ref_period*1e12)
+    if ref_period is not None:
+        if not uniform_interval:
+            vcd_manager.set_timescale_ps(ref_period*1e12)
+    else:
+        logger.warning("unable to determine core device ref_period")
+        ref_period = 1e-9  # guess
     dds_sysclk = get_dds_sysclk(devices)
     if dds_sysclk is None:
         logger.warning("unable to determine DDS sysclk")
         dds_sysclk = 3e9  # guess

     if isinstance(dump.messages[-1], StoppedMessage):
-        m = dump.messages[-1]
-        end_time = get_message_time(m)
-        manager.set_end_time(end_time)
         messages = dump.messages[:-1]
     else:
         logger.warning("StoppedMessage missing")

@@ -739,39 +533,38 @@ def decoded_dump_to_target(manager, devices, dump, uniform_interval):
     messages = sorted(messages, key=get_message_time)

     channel_handlers = create_channel_handlers(
-        manager, devices, ref_period,
+        vcd_manager, devices, ref_period,
         dds_sysclk, dump.dds_onehot_sel)
-    log_channels = get_log_channels(dump.log_channel, messages)
+    vcd_log_channels = get_vcd_log_channels(dump.log_channel, messages)
     channel_handlers[dump.log_channel] = LogHandler(
-        manager, log_channels)
+        vcd_manager, vcd_log_channels)
     if uniform_interval:
         # RTIO event timestamp in machine units
-        timestamp = manager.get_channel("timestamp", 64, ty=WaveformType.VECTOR)
+        timestamp = vcd_manager.get_channel("timestamp", 64)
         # RTIO time interval between this and the next timed event
         # in SI seconds
-        interval = manager.get_channel("interval", 64, ty=WaveformType.ANALOG)
-    slack = manager.get_channel("rtio_slack", 64, ty=WaveformType.ANALOG)
+        interval = vcd_manager.get_channel("interval", 64)
+    slack = vcd_manager.get_channel("rtio_slack", 64)

-    manager.set_time(0)
+    vcd_manager.set_time(0)
     start_time = 0
     for m in messages:
         start_time = get_message_time(m)
         if start_time:
             break
-    if not uniform_interval:
-        manager.set_start_time(start_time)
-    t0 = start_time
+    t0 = 0
     for i, message in enumerate(messages):
         if message.channel in channel_handlers:
-            t = get_message_time(message)
+            t = get_message_time(message) - start_time
             if t >= 0:
                 if uniform_interval:
                     interval.set_value_double((t - t0)*ref_period)
-                    manager.set_time(i)
+                    vcd_manager.set_time(i)
                     timestamp.set_value("{:064b}".format(t))
                     t0 = t
                 else:
-                    manager.set_time(t)
+                    vcd_manager.set_time(t)
                 channel_handlers[message.channel].process_message(message)
                 if isinstance(message, OutputMessage):
                     slack.set_value_double(
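Editorial note, not part of the diff: decode_dump() above reads a fixed 15-byte header before the message stream. The standalone sketch below unpacks such a header on a fabricated little-endian buffer (sent_bytes = 0, so no messages follow).

# Editorial sketch (not part of the diff); header field values are made up.
import struct

endian = "<"
header = struct.pack(endian + "IQbbb", 0, 0, 0, 62, 1)
assert len(header) == 15
sent_bytes, total_byte_count, error_flag, log_channel, dds_onehot_sel = \
    struct.unpack(endian + "IQbbb", header[:15])
assert sent_bytes == 0 and log_channel == 62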
@@ -3,14 +3,14 @@ import logging
 import traceback
 import numpy
 import socket
-import builtins
 from enum import Enum
 from fractions import Fraction
 from collections import namedtuple

 from artiq.coredevice import exceptions
+from artiq.coredevice.comm import initialize_connection
 from artiq import __version__ as software_version
-from sipyco.keepalive import create_connection

 logger = logging.getLogger(__name__)

@@ -24,8 +24,6 @@ class Request(Enum):
     RPCReply = 7
     RPCException = 8

-    SubkernelUpload = 9
-

 class Reply(Enum):
     SystemInfo = 2

@@ -173,16 +171,6 @@ class CommKernelDummy:
         pass


-def incompatible_versions(v1, v2):
-    if v1.endswith(".beta") or v2.endswith(".beta"):
-        # Beta branches may introduce breaking changes. Check version strictly.
-        return v1 != v2
-    else:
-        # On stable branches, runtime/software protocol backward compatibility is kept.
-        # Runtime and software with the same major version number are compatible.
-        return v1.split(".", maxsplit=1)[0] != v2.split(".", maxsplit=1)[0]
-
-
 class CommKernel:
     warned_of_mismatch = False

@@ -197,7 +185,7 @@ class CommKernel:
     def open(self):
         if hasattr(self, "socket"):
             return
-        self.socket = create_connection(self.host, self.port)
+        self.socket = initialize_connection(self.host, self.port)
         self.socket.sendall(b"ARTIQ coredev\n")
         endian = self._read(1)
         if endian == b"e":

@@ -211,7 +199,6 @@ class CommKernel:
         self.unpack_float64 = struct.Struct(self.endian + "d").unpack

         self.pack_header = struct.Struct(self.endian + "lB").pack
-        self.pack_int8 = struct.Struct(self.endian + "B").pack
         self.pack_int32 = struct.Struct(self.endian + "l").pack
         self.pack_int64 = struct.Struct(self.endian + "q").pack
         self.pack_float64 = struct.Struct(self.endian + "d").pack

@@ -326,7 +313,7 @@ class CommKernel:
             self._write(chunk)

     def _write_int8(self, value):
-        self._write(self.pack_int8(value))
+        self._write(value)

     def _write_int32(self, value):
         self._write(self.pack_int32(value))

@@ -360,7 +347,7 @@ class CommKernel:
         runtime_id = self._read(4)
         if runtime_id == b"AROR":
             gateware_version = self._read_string().split(";")[0]
-            if not self.warned_of_mismatch and incompatible_versions(gateware_version, software_version):
+            if gateware_version != software_version and not self.warned_of_mismatch:
                 logger.warning("Mismatch between gateware (%s) "
                                "and software (%s) versions",
                                gateware_version, software_version)

@@ -386,19 +373,6 @@ class CommKernel:
         else:
             self._read_expect(Reply.LoadCompleted)

-    def upload_subkernel(self, kernel_library, id, destination):
-        self._write_header(Request.SubkernelUpload)
-        self._write_int32(id)
-        self._write_int8(destination)
-        self._write_bytes(kernel_library)
-        self._flush()
-
-        self._read_header()
-        if self._read_type == Reply.LoadFailed:
-            raise LoadError(self._read_string())
-        else:
-            self._read_expect(Reply.LoadCompleted)
-
     def run(self):
         self._write_empty(Request.RunKernel)
         self._flush()

@@ -435,9 +409,6 @@ class CommKernel:
             self._skip_rpc_value(tags)
         elif tag == "r":
             self._skip_rpc_value(tags)
-        elif tag == "a":
-            _ndims = tags.pop(0)
-            self._skip_rpc_value(tags)
         else:
             pass

@@ -466,12 +437,12 @@ class CommKernel:
             self._write_bool(value)
         elif tag == "i":
             check(isinstance(value, (int, numpy.int32)) and
-                  (-2**31 <= value <= 2**31-1),
+                  (-2**31 <= value < 2**31),
                   lambda: "32-bit int")
             self._write_int32(value)
         elif tag == "I":
             check(isinstance(value, (int, numpy.int32, numpy.int64)) and
-                  (-2**63 <= value <= 2**63-1),
+                  (-2**63 <= value < 2**63),
                   lambda: "64-bit int")
             self._write_int64(value)
         elif tag == "f":

@@ -480,8 +451,8 @@ class CommKernel:
             self._write_float64(value)
         elif tag == "F":
             check(isinstance(value, Fraction) and
-                  (-2**63 <= value.numerator <= 2**63-1) and
-                  (-2**63 <= value.denominator <= 2**63-1),
+                  (-2**63 <= value.numerator < 2**63) and
+                  (-2**63 <= value.denominator < 2**63),
                   lambda: "64-bit Fraction")
             self._write_int64(value.numerator)
             self._write_int64(value.denominator)
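Editorial note, not part of the diff: the range checks rewritten in the hunks above are equivalent for integers, so only the phrasing changes. The check below confirms this for the 32-bit bound (the 64-bit case is analogous).

# Editorial check (not part of the diff): both bounds accept the same integers.
for v in (-2**31 - 1, -2**31, 0, 2**31 - 1, 2**31):
    assert (-2**31 <= v <= 2**31 - 1) == (-2**31 <= v < 2**31)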
@ -600,34 +571,29 @@ class CommKernel:
|
||||||
|
|
||||||
self._write_header(Request.RPCException)
|
self._write_header(Request.RPCException)
|
||||||
|
|
||||||
# Note: instead of sending strings, we send object ID
|
|
||||||
# This is to avoid the need of allocatio on the device side
|
|
||||||
# This is a special case: this only applies to exceptions
|
|
||||||
if hasattr(exn, "artiq_core_exception"):
|
if hasattr(exn, "artiq_core_exception"):
|
||||||
exn = exn.artiq_core_exception
|
exn = exn.artiq_core_exception
|
||||||
self._write_int32(embedding_map.store_str(exn.name))
|
self._write_string(exn.name)
|
||||||
self._write_int32(embedding_map.store_str(self._truncate_message(exn.message)))
|
self._write_string(self._truncate_message(exn.message))
|
||||||
for index in range(3):
|
for index in range(3):
|
||||||
self._write_int64(exn.param[index])
|
self._write_int64(exn.param[index])
|
||||||
|
|
||||||
filename, line, column, function = exn.traceback[-1]
|
filename, line, column, function = exn.traceback[-1]
|
||||||
self._write_int32(embedding_map.store_str(filename))
|
self._write_string(filename)
|
||||||
self._write_int32(line)
|
self._write_int32(line)
|
||||||
self._write_int32(column)
|
self._write_int32(column)
|
||||||
self._write_int32(embedding_map.store_str(function))
|
self._write_string(function)
|
||||||
else:
|
else:
|
||||||
exn_type = type(exn)
|
exn_type = type(exn)
|
||||||
if exn_type in builtins.__dict__.values():
|
if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
|
||||||
name = "0:{}".format(exn_type.__qualname__)
|
hasattr(exn, "artiq_builtin"):
|
||||||
elif hasattr(exn, "artiq_builtin"):
|
self._write_string("0:{}".format(exn_type.__name__))
|
||||||
name = "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
|
|
||||||
else:
|
else:
|
||||||
exn_id = embedding_map.store_object(exn_type)
|
exn_id = embedding_map.store_object(exn_type)
|
||||||
name = "{}:{}.{}".format(exn_id,
|
self._write_string("{}:{}.{}".format(exn_id,
|
||||||
exn_type.__module__,
|
exn_type.__module__,
|
||||||
exn_type.__qualname__)
|
exn_type.__qualname__))
|
||||||
self._write_int32(embedding_map.store_str(name))
|
self._write_string(self._truncate_message(str(exn)))
|
||||||
self._write_int32(embedding_map.store_str(self._truncate_message(str(exn))))
|
|
||||||
for index in range(3):
|
for index in range(3):
|
||||||
self._write_int64(0)
|
self._write_int64(0)
|
||||||
|
|
||||||
|
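The left-hand column replaces raw strings with integer keys obtained from ``embedding_map.store_str()``, so the device only ever handles IDs. A hedged sketch of the interning idea; the real ``EmbeddingMap`` in ``artiq.compiler.embedding`` differs in detail: ::

    class StringTable:
        """Toy string-interning table (illustration only)."""
        def __init__(self):
            self._strings = []
            self._index = {}

        def store_str(self, s):
            # Return a stable integer ID so the receiver never has to
            # allocate or copy the string itself.
            if s not in self._index:
                self._index[s] = len(self._strings)
                self._strings.append(s)
            return self._index[s]

        def retrieve_str(self, key):
            return self._strings[key]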
@ -638,10 +604,10 @@ class CommKernel:
|
||||||
((filename, line, function, _), ) = tb
|
((filename, line, function, _), ) = tb
|
||||||
else:
|
else:
|
||||||
assert False
|
assert False
|
||||||
self._write_int32(embedding_map.store_str(filename))
|
self._write_string(filename)
|
||||||
self._write_int32(line)
|
self._write_int32(line)
|
||||||
self._write_int32(-1) # column not known
|
self._write_int32(-1) # column not known
|
||||||
self._write_int32(embedding_map.store_str(function))
|
self._write_string(function)
|
||||||
self._flush()
|
self._flush()
|
||||||
else:
|
else:
|
||||||
logger.debug("rpc service: %d %r %r = %r",
|
logger.debug("rpc service: %d %r %r = %r",
|
||||||
|
@ -653,65 +619,28 @@ class CommKernel:
|
||||||
self._flush()
|
self._flush()
|
||||||
|
|
||||||
def _serve_exception(self, embedding_map, symbolizer, demangler):
|
def _serve_exception(self, embedding_map, symbolizer, demangler):
|
||||||
exception_count = self._read_int32()
|
name = self._read_string()
|
||||||
nested_exceptions = []
|
message = self._read_string()
|
||||||
|
params = [self._read_int64() for _ in range(3)]
|
||||||
|
|
||||||
def read_exception_string():
|
filename = self._read_string()
|
||||||
# note: if length == -1, the following int32 is the object key
|
line = self._read_int32()
|
||||||
length = self._read_int32()
|
column = self._read_int32()
|
||||||
if length == -1:
|
function = self._read_string()
|
||||||
return embedding_map.retrieve_str(self._read_int32())
|
|
||||||
else:
|
|
||||||
return self._read(length).decode("utf-8")
|
|
||||||
|
|
||||||
for _ in range(exception_count):
|
|
||||||
name = embedding_map.retrieve_str(self._read_int32())
|
|
||||||
message = read_exception_string()
|
|
||||||
params = [self._read_int64() for _ in range(3)]
|
|
||||||
|
|
||||||
filename = read_exception_string()
|
|
||||||
line = self._read_int32()
|
|
||||||
column = self._read_int32()
|
|
||||||
function = read_exception_string()
|
|
||||||
nested_exceptions.append([name, message, params,
|
|
||||||
filename, line, column, function])
|
|
||||||
|
|
||||||
demangled_names = demangler([ex[6] for ex in nested_exceptions])
|
|
||||||
for i in range(exception_count):
|
|
||||||
nested_exceptions[i][6] = demangled_names[i]
|
|
||||||
|
|
||||||
exception_info = []
|
|
||||||
for _ in range(exception_count):
|
|
||||||
sp = self._read_int32()
|
|
||||||
initial_backtrace = self._read_int32()
|
|
||||||
current_backtrace = self._read_int32()
|
|
||||||
exception_info.append((sp, initial_backtrace, current_backtrace))
|
|
||||||
|
|
||||||
backtrace = []
|
|
||||||
stack_pointers = []
|
|
||||||
for _ in range(self._read_int32()):
|
|
||||||
backtrace.append(self._read_int32())
|
|
||||||
stack_pointers.append(self._read_int32())
|
|
||||||
|
|
||||||
|
backtrace = [self._read_int32() for _ in range(self._read_int32())]
|
||||||
self._process_async_error()
|
self._process_async_error()
|
||||||
|
|
||||||
traceback = list(symbolizer(backtrace))
|
traceback = list(reversed(symbolizer(backtrace))) + \
|
||||||
core_exn = exceptions.CoreException(nested_exceptions, exception_info,
|
[(filename, line, column, *demangler([function]), None)]
|
||||||
traceback, stack_pointers)
|
core_exn = exceptions.CoreException(name, message, params, traceback)
|
||||||
|
|
||||||
if core_exn.id == 0:
|
if core_exn.id == 0:
|
||||||
python_exn_type = getattr(exceptions, core_exn.name.split('.')[-1])
|
python_exn_type = getattr(exceptions, core_exn.name.split('.')[-1])
|
||||||
else:
|
else:
|
||||||
python_exn_type = embedding_map.retrieve_object(core_exn.id)
|
python_exn_type = embedding_map.retrieve_object(core_exn.id)
|
||||||
|
|
||||||
try:
|
python_exn = python_exn_type(message.format(*params))
|
||||||
python_exn = python_exn_type(
|
|
||||||
nested_exceptions[-1][1].format(*nested_exceptions[0][2]))
|
|
||||||
except Exception as ex:
|
|
||||||
python_exn = RuntimeError(
|
|
||||||
f"Exception type={python_exn_type}, which couldn't be "
|
|
||||||
f"reconstructed ({ex})"
|
|
||||||
)
|
|
||||||
python_exn.artiq_core_exception = core_exn
|
python_exn.artiq_core_exception = core_exn
|
||||||
raise python_exn
|
raise python_exn
|
||||||
|
|
||||||
|
|
|
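Either way, the exception that finally reaches host code carries the device-side details in its ``artiq_core_exception`` attribute. A hedged usage sketch; the surrounding experiment code is hypothetical: ::

    try:
        core.run(my_kernel, (), {})   # the run() method shown later in this diff
    except Exception as exn:
        if hasattr(exn, "artiq_core_exception"):
            print(exn.artiq_core_exception)   # formatted core device traceback
        raise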
@ -2,7 +2,8 @@ from enum import Enum
|
||||||
import logging
|
import logging
|
||||||
import struct
|
import struct
|
||||||
|
|
||||||
from sipyco.keepalive import create_connection
|
from artiq.coredevice.comm import initialize_connection
|
||||||
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
@ -53,7 +54,7 @@ class CommMgmt:
|
||||||
def open(self):
|
def open(self):
|
||||||
if hasattr(self, "socket"):
|
if hasattr(self, "socket"):
|
||||||
return
|
return
|
||||||
self.socket = create_connection(self.host, self.port)
|
self.socket = initialize_connection(self.host, self.port)
|
||||||
self.socket.sendall(b"ARTIQ management\n")
|
self.socket.sendall(b"ARTIQ management\n")
|
||||||
endian = self._read(1)
|
endian = self._read(1)
|
||||||
if endian == b"e":
|
if endian == b"e":
|
||||||
|
@ -110,10 +111,9 @@ class CommMgmt:
|
||||||
return ty
|
return ty
|
||||||
|
|
||||||
def _read_expect(self, ty):
|
def _read_expect(self, ty):
|
||||||
header = self._read_header()
|
if self._read_header() != ty:
|
||||||
if header != ty:
|
|
||||||
raise IOError("Incorrect reply from device: {} (expected {})".
|
raise IOError("Incorrect reply from device: {} (expected {})".
|
||||||
format(header, ty))
|
format(self._read_type, ty))
|
||||||
|
|
||||||
def _read_int32(self):
|
def _read_int32(self):
|
||||||
(value, ) = struct.unpack(self.endian + "l", self._read(4))
|
(value, ) = struct.unpack(self.endian + "l", self._read(4))
|
||||||
|
@ -160,12 +160,7 @@ class CommMgmt:
|
||||||
def config_read(self, key):
|
def config_read(self, key):
|
||||||
self._write_header(Request.ConfigRead)
|
self._write_header(Request.ConfigRead)
|
||||||
self._write_string(key)
|
self._write_string(key)
|
||||||
ty = self._read_header()
|
self._read_expect(Reply.ConfigData)
|
||||||
if ty == Reply.Error:
|
|
||||||
raise IOError("Device failed to read config. The key may not exist.")
|
|
||||||
elif ty != Reply.ConfigData:
|
|
||||||
raise IOError("Incorrect reply from device: {} (expected {})".
|
|
||||||
format(ty, Reply.ConfigData))
|
|
||||||
return self._read_string()
|
return self._read_string()
|
||||||
|
|
||||||
def config_write(self, key, value):
|
def config_write(self, key, value):
|
||||||
|
@ -174,7 +169,7 @@ class CommMgmt:
|
||||||
self._write_bytes(value)
|
self._write_bytes(value)
|
||||||
ty = self._read_header()
|
ty = self._read_header()
|
||||||
if ty == Reply.Error:
|
if ty == Reply.Error:
|
||||||
raise IOError("Device failed to write config. More information may be available in the log.")
|
raise IOError("Flash storage is full")
|
||||||
elif ty != Reply.Success:
|
elif ty != Reply.Success:
|
||||||
raise IOError("Incorrect reply from device: {} (expected {})".
|
raise IOError("Incorrect reply from device: {} (expected {})".
|
||||||
format(ty, Reply.Success))
|
format(ty, Reply.Success))
|
||||||
|
|
|
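A hedged usage sketch of the management interface touched above; the host address and config key/value are placeholders: ::

    from artiq.coredevice.comm_mgmt import CommMgmt

    mgmt = CommMgmt("192.168.1.75")   # assumed constructor: CommMgmt(host[, port])
    mgmt.open()
    try:
        idle = mgmt.config_read("idle_kernel")
    except IOError:
        # With the left-hand variant, a missing key raises a dedicated error
        # instead of a generic "incorrect reply".
        idle = None
    mgmt.config_write("my_key", b"my_value")   # values are written as raw bytes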
@ -3,7 +3,6 @@ import logging
|
||||||
import struct
|
import struct
|
||||||
from enum import Enum
|
from enum import Enum
|
||||||
|
|
||||||
from sipyco.keepalive import async_open_connection
|
|
||||||
|
|
||||||
__all__ = ["TTLProbe", "TTLOverride", "CommMonInj"]
|
__all__ = ["TTLProbe", "TTLOverride", "CommMonInj"]
|
||||||
|
|
||||||
|
@ -29,16 +28,17 @@ class CommMonInj:
|
||||||
self.disconnect_cb = disconnect_cb
|
self.disconnect_cb = disconnect_cb
|
||||||
|
|
||||||
async def connect(self, host, port=1383):
|
async def connect(self, host, port=1383):
|
||||||
self._reader, self._writer = await async_open_connection(
|
self._reader, self._writer = await asyncio.open_connection(host, port)
|
||||||
host,
|
|
||||||
port,
|
|
||||||
after_idle=1,
|
|
||||||
interval=1,
|
|
||||||
max_fails=3,
|
|
||||||
)
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self._writer.write(b"ARTIQ moninj\n")
|
self._writer.write(b"ARTIQ moninj\n")
|
||||||
|
# get device endian
|
||||||
|
endian = await self._reader.read(1)
|
||||||
|
if endian == b"e":
|
||||||
|
self.endian = "<"
|
||||||
|
elif endian == b"E":
|
||||||
|
self.endian = ">"
|
||||||
|
else:
|
||||||
|
raise IOError("Incorrect reply from device: expected e/E.")
|
||||||
self._receive_task = asyncio.ensure_future(self._receive_cr())
|
self._receive_task = asyncio.ensure_future(self._receive_cr())
|
||||||
except:
|
except:
|
||||||
self._writer.close()
|
self._writer.close()
|
||||||
|
@ -46,9 +46,6 @@ class CommMonInj:
|
||||||
del self._writer
|
del self._writer
|
||||||
raise
|
raise
|
||||||
|
|
||||||
def wait_terminate(self):
|
|
||||||
return self._receive_task
|
|
||||||
|
|
||||||
async def close(self):
|
async def close(self):
|
||||||
self.disconnect_cb = None
|
self.disconnect_cb = None
|
||||||
try:
|
try:
|
||||||
|
@ -63,19 +60,19 @@ class CommMonInj:
|
||||||
del self._writer
|
del self._writer
|
||||||
|
|
||||||
def monitor_probe(self, enable, channel, probe):
|
def monitor_probe(self, enable, channel, probe):
|
||||||
packet = struct.pack("<bblb", 0, enable, channel, probe)
|
packet = struct.pack(self.endian + "bblb", 0, enable, channel, probe)
|
||||||
self._writer.write(packet)
|
self._writer.write(packet)
|
||||||
|
|
||||||
def monitor_injection(self, enable, channel, overrd):
|
def monitor_injection(self, enable, channel, overrd):
|
||||||
packet = struct.pack("<bblb", 3, enable, channel, overrd)
|
packet = struct.pack(self.endian + "bblb", 3, enable, channel, overrd)
|
||||||
self._writer.write(packet)
|
self._writer.write(packet)
|
||||||
|
|
||||||
def inject(self, channel, override, value):
|
def inject(self, channel, override, value):
|
||||||
packet = struct.pack("<blbb", 1, channel, override, value)
|
packet = struct.pack(self.endian + "blbb", 1, channel, override, value)
|
||||||
self._writer.write(packet)
|
self._writer.write(packet)
|
||||||
|
|
||||||
def get_injection_status(self, channel, override):
|
def get_injection_status(self, channel, override):
|
||||||
packet = struct.pack("<blb", 2, channel, override)
|
packet = struct.pack(self.endian + "blb", 2, channel, override)
|
||||||
self._writer.write(packet)
|
self._writer.write(packet)
|
||||||
|
|
||||||
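Both columns build the same moninj packets with ``struct``; the right-hand one merely parameterizes the byte order negotiated in ``connect()``, while the left fixes little-endian. A small self-contained illustration: ::

    import struct

    endian = "<"                          # b"e" reply -> little-endian; b"E" -> ">"
    # monitor_probe frame: type, enable, channel, probe
    packet = struct.pack(endian + "bblb", 0, 1, 42, 3)
    assert len(packet) == 7               # 1 + 1 + 4 + 1 bytes with standard sizes
    ty, enable, channel, probe = struct.unpack(endian + "bblb", packet)
    assert (ty, enable, channel, probe) == (0, 1, 42, 3)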
async def _receive_cr(self):
|
async def _receive_cr(self):
|
||||||
|
@ -85,17 +82,17 @@ class CommMonInj:
|
||||||
if not ty:
|
if not ty:
|
||||||
return
|
return
|
||||||
if ty == b"\x00":
|
if ty == b"\x00":
|
||||||
payload = await self._reader.readexactly(13)
|
payload = await self._reader.readexactly(9)
|
||||||
channel, probe, value = struct.unpack("<lbq", payload)
|
channel, probe, value = struct.unpack(
|
||||||
|
self.endian + "lbl", payload)
|
||||||
self.monitor_cb(channel, probe, value)
|
self.monitor_cb(channel, probe, value)
|
||||||
elif ty == b"\x01":
|
elif ty == b"\x01":
|
||||||
payload = await self._reader.readexactly(6)
|
payload = await self._reader.readexactly(6)
|
||||||
channel, override, value = struct.unpack("<lbb", payload)
|
channel, override, value = struct.unpack(
|
||||||
|
self.endian + "lbb", payload)
|
||||||
self.injection_status_cb(channel, override, value)
|
self.injection_status_cb(channel, override, value)
|
||||||
else:
|
else:
|
||||||
raise ValueError("Unknown packet type", ty)
|
raise ValueError("Unknown packet type", ty)
|
||||||
except Exception:
|
|
||||||
logger.error("Moninj connection terminating with exception", exc_info=True)
|
|
||||||
finally:
|
finally:
|
||||||
if self.disconnect_cb is not None:
|
if self.disconnect_cb is not None:
|
||||||
self.disconnect_cb()
|
self.disconnect_cb()
|
||||||
|
|
|
@ -1,7 +1,5 @@
|
||||||
import os, sys
|
import os, sys
|
||||||
import numpy
|
import numpy
|
||||||
from inspect import getfullargspec
|
|
||||||
from functools import wraps
|
|
||||||
|
|
||||||
from pythonparser import diagnostic
|
from pythonparser import diagnostic
|
||||||
|
|
||||||
|
@ -53,20 +51,6 @@ def rtio_get_destination_status(linkno: TInt32) -> TBool:
|
||||||
def rtio_get_counter() -> TInt64:
|
def rtio_get_counter() -> TInt64:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
@syscall
|
|
||||||
def test_exception_id_sync(id: TInt32) -> TNone:
|
|
||||||
raise NotImplementedError("syscall not simulated")
|
|
||||||
|
|
||||||
def get_target_cls(target):
|
|
||||||
if target == "rv32g":
|
|
||||||
return RV32GTarget
|
|
||||||
elif target == "rv32ima":
|
|
||||||
return RV32IMATarget
|
|
||||||
elif target == "cortexa9":
|
|
||||||
return CortexA9Target
|
|
||||||
else:
|
|
||||||
raise ValueError("Unsupported target")
|
|
||||||
|
|
||||||
|
|
||||||
class Core:
|
class Core:
|
||||||
"""Core device driver.
|
"""Core device driver.
|
||||||
|
@ -76,209 +60,90 @@ class Core:
|
||||||
On platforms that use clock multiplication and SERDES-based PHYs,
|
On platforms that use clock multiplication and SERDES-based PHYs,
|
||||||
this is the period after multiplication. For example, with a RTIO core
|
this is the period after multiplication. For example, with a RTIO core
|
||||||
clocked at 125MHz and a SERDES multiplication factor of 8, the
|
clocked at 125MHz and a SERDES multiplication factor of 8, the
|
||||||
reference period is ``1 ns``.
|
reference period is 1ns.
|
||||||
The machine time unit (``mu``) is equal to this period.
|
The time machine unit is equal to this period.
|
||||||
:param ref_multiplier: ratio between the RTIO fine timestamp frequency
|
:param ref_multiplier: ratio between the RTIO fine timestamp frequency
|
||||||
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
|
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
|
||||||
factor).
|
factor).
|
||||||
:param analyzer_proxy: name of the core device analyzer proxy to trigger
|
|
||||||
(optional).
|
|
||||||
:param analyze_at_run_end: automatically trigger the core device analyzer
|
|
||||||
proxy after the Experiment's run stage finishes.
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
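A worked example of the reference period described in the docstring, using the numbers it quotes (125 MHz RTIO clock, SERDES multiplication factor 8): ::

    rtio_clock = 125e6                               # Hz, coarse RTIO clock
    ref_multiplier = 8
    ref_period = 1 / (rtio_clock * ref_multiplier)   # fine timestamp period: 1 ns
    coarse_ref_period = ref_period * ref_multiplier  # 8 ns, as computed in __init__
    assert abs(ref_period - 1e-9) < 1e-15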
kernel_invariants = {
|
kernel_invariants = {
|
||||||
"core", "ref_period", "coarse_ref_period", "ref_multiplier",
|
"core", "ref_period", "coarse_ref_period", "ref_multiplier",
|
||||||
}
|
}
|
||||||
|
|
||||||
def __init__(self, dmgr,
|
def __init__(self, dmgr, host, ref_period, ref_multiplier=8, target="rv32g"):
|
||||||
host, ref_period,
|
|
||||||
analyzer_proxy=None, analyze_at_run_end=False,
|
|
||||||
ref_multiplier=8,
|
|
||||||
target="rv32g", satellite_cpu_targets={}):
|
|
||||||
self.ref_period = ref_period
|
self.ref_period = ref_period
|
||||||
self.ref_multiplier = ref_multiplier
|
self.ref_multiplier = ref_multiplier
|
||||||
self.satellite_cpu_targets = satellite_cpu_targets
|
if target == "rv32g":
|
||||||
self.target_cls = get_target_cls(target)
|
self.target_cls = RV32GTarget
|
||||||
|
elif target == "rv32ima":
|
||||||
|
self.target_cls = RV32IMATarget
|
||||||
|
elif target == "cortexa9":
|
||||||
|
self.target_cls = CortexA9Target
|
||||||
|
else:
|
||||||
|
raise ValueError("Unsupported target")
|
||||||
self.coarse_ref_period = ref_period*ref_multiplier
|
self.coarse_ref_period = ref_period*ref_multiplier
|
||||||
if host is None:
|
if host is None:
|
||||||
self.comm = CommKernelDummy()
|
self.comm = CommKernelDummy()
|
||||||
else:
|
else:
|
||||||
self.comm = CommKernel(host)
|
self.comm = CommKernel(host)
|
||||||
self.analyzer_proxy_name = analyzer_proxy
|
|
||||||
self.analyze_at_run_end = analyze_at_run_end
|
|
||||||
|
|
||||||
self.first_run = True
|
self.first_run = True
|
||||||
self.dmgr = dmgr
|
self.dmgr = dmgr
|
||||||
self.core = self
|
self.core = self
|
||||||
self.comm.core = self
|
self.comm.core = self
|
||||||
self.analyzer_proxy = None
|
|
||||||
|
|
||||||
def notify_run_end(self):
|
|
||||||
if self.analyze_at_run_end:
|
|
||||||
self.trigger_analyzer_proxy()
|
|
||||||
|
|
||||||
def close(self):
|
def close(self):
|
||||||
"""Disconnect core device and close sockets.
|
|
||||||
"""
|
|
||||||
self.comm.close()
|
self.comm.close()
|
||||||
|
|
||||||
def compile(self, function, args, kwargs, set_result=None,
|
def compile(self, function, args, kwargs, set_result=None,
|
||||||
attribute_writeback=True, print_as_rpc=True,
|
attribute_writeback=True, print_as_rpc=True):
|
||||||
target=None, destination=0, subkernel_arg_types=[],
|
|
||||||
old_embedding_map=None):
|
|
||||||
try:
|
try:
|
||||||
engine = _DiagnosticEngine(all_errors_are_fatal=True)
|
engine = _DiagnosticEngine(all_errors_are_fatal=True)
|
||||||
|
|
||||||
stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr,
|
stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr,
|
||||||
print_as_rpc=print_as_rpc,
|
print_as_rpc=print_as_rpc)
|
||||||
destination=destination, subkernel_arg_types=subkernel_arg_types,
|
|
||||||
old_embedding_map=old_embedding_map)
|
|
||||||
stitcher.stitch_call(function, args, kwargs, set_result)
|
stitcher.stitch_call(function, args, kwargs, set_result)
|
||||||
stitcher.finalize()
|
stitcher.finalize()
|
||||||
|
|
||||||
module = Module(stitcher,
|
module = Module(stitcher,
|
||||||
ref_period=self.ref_period,
|
ref_period=self.ref_period,
|
||||||
attribute_writeback=attribute_writeback)
|
attribute_writeback=attribute_writeback)
|
||||||
target = target if target is not None else self.target_cls()
|
target = self.target_cls()
|
||||||
|
|
||||||
library = target.compile_and_link([module])
|
library = target.compile_and_link([module])
|
||||||
stripped_library = target.strip(library)
|
stripped_library = target.strip(library)
|
||||||
|
|
||||||
return stitcher.embedding_map, stripped_library, \
|
return stitcher.embedding_map, stripped_library, \
|
||||||
lambda addresses: target.symbolize(library, addresses), \
|
lambda addresses: target.symbolize(library, addresses), \
|
||||||
lambda symbols: target.demangle(symbols), \
|
lambda symbols: target.demangle(symbols)
|
||||||
module.subkernel_arg_types
|
|
||||||
except diagnostic.Error as error:
|
except diagnostic.Error as error:
|
||||||
raise CompileError(error.diagnostic) from error
|
raise CompileError(error.diagnostic) from error
|
||||||
|
|
||||||
def _run_compiled(self, kernel_library, embedding_map, symbolizer, demangler):
|
|
||||||
if self.first_run:
|
|
||||||
self.comm.check_system_info()
|
|
||||||
self.first_run = False
|
|
||||||
self.comm.load(kernel_library)
|
|
||||||
self.comm.run()
|
|
||||||
self.comm.serve(embedding_map, symbolizer, demangler)
|
|
||||||
|
|
||||||
def run(self, function, args, kwargs):
|
def run(self, function, args, kwargs):
|
||||||
result = None
|
result = None
|
||||||
@rpc(flags={"async"})
|
@rpc(flags={"async"})
|
||||||
def set_result(new_result):
|
def set_result(new_result):
|
||||||
nonlocal result
|
nonlocal result
|
||||||
result = new_result
|
result = new_result
|
||||||
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
|
|
||||||
|
embedding_map, kernel_library, symbolizer, demangler = \
|
||||||
self.compile(function, args, kwargs, set_result)
|
self.compile(function, args, kwargs, set_result)
|
||||||
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
|
|
||||||
self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
|
if self.first_run:
|
||||||
|
self.comm.check_system_info()
|
||||||
|
self.first_run = False
|
||||||
|
|
||||||
|
self.comm.load(kernel_library)
|
||||||
|
self.comm.run()
|
||||||
|
self.comm.serve(embedding_map, symbolizer, demangler)
|
||||||
|
|
||||||
return result
|
return result
|
||||||
|
|
||||||
def compile_subkernel(self, sid, subkernel_fn, embedding_map, args, subkernel_arg_types, subkernels):
|
|
||||||
# pass self to subkernels (if applicable)
|
|
||||||
# assuming the first argument is self
|
|
||||||
subkernel_args = getfullargspec(subkernel_fn.artiq_embedded.function)
|
|
||||||
self_arg = []
|
|
||||||
if len(subkernel_args[0]) > 0:
|
|
||||||
if subkernel_args[0][0] == 'self':
|
|
||||||
self_arg = args[:1]
|
|
||||||
destination = subkernel_fn.artiq_embedded.destination
|
|
||||||
destination_tgt = self.satellite_cpu_targets[destination]
|
|
||||||
target = get_target_cls(destination_tgt)(subkernel_id=sid)
|
|
||||||
object_map, kernel_library, _, _, _ = \
|
|
||||||
self.compile(subkernel_fn, self_arg, {}, attribute_writeback=False,
|
|
||||||
print_as_rpc=False, target=target, destination=destination,
|
|
||||||
subkernel_arg_types=subkernel_arg_types.get(sid, []),
|
|
||||||
old_embedding_map=embedding_map)
|
|
||||||
if object_map.has_rpc():
|
|
||||||
raise ValueError("Subkernel must not use RPC")
|
|
||||||
return destination, kernel_library, object_map
|
|
||||||
|
|
||||||
def compile_and_upload_subkernels(self, embedding_map, args, subkernel_arg_types):
|
|
||||||
subkernels = embedding_map.subkernels()
|
|
||||||
subkernels_compiled = []
|
|
||||||
while True:
|
|
||||||
new_subkernels = {}
|
|
||||||
for sid, subkernel_fn in subkernels.items():
|
|
||||||
if sid in subkernels_compiled:
|
|
||||||
continue
|
|
||||||
destination, kernel_library, embedding_map = \
|
|
||||||
self.compile_subkernel(sid, subkernel_fn, embedding_map,
|
|
||||||
args, subkernel_arg_types, subkernels)
|
|
||||||
self.comm.upload_subkernel(kernel_library, sid, destination)
|
|
||||||
new_subkernels.update(embedding_map.subkernels())
|
|
||||||
subkernels_compiled.append(sid)
|
|
||||||
if new_subkernels == subkernels:
|
|
||||||
break
|
|
||||||
subkernels.update(new_subkernels)
|
|
||||||
# check for messages without a send/recv pair
|
|
||||||
unpaired_messages = embedding_map.subkernel_messages_unpaired()
|
|
||||||
if unpaired_messages:
|
|
||||||
for unpaired_message in unpaired_messages:
|
|
||||||
engine = _DiagnosticEngine(all_errors_are_fatal=False)
|
|
||||||
# errors are non-fatal in order to display
|
|
||||||
# all unpaired message errors before raising an exception
|
|
||||||
if unpaired_message.send_loc is None:
|
|
||||||
diag = diagnostic.Diagnostic("error",
|
|
||||||
"subkernel message '{name}' only has a receiver but no sender",
|
|
||||||
{"name": unpaired_message.name},
|
|
||||||
unpaired_message.recv_loc)
|
|
||||||
else:
|
|
||||||
diag = diagnostic.Diagnostic("error",
|
|
||||||
"subkernel message '{name}' only has a sender but no receiver",
|
|
||||||
{"name": unpaired_message.name},
|
|
||||||
unpaired_message.send_loc)
|
|
||||||
engine.process(diag)
|
|
||||||
raise ValueError("Found subkernel message(s) without a full send/recv pair")
|
|
||||||
|
|
||||||
|
|
||||||
def precompile(self, function, *args, **kwargs):
|
|
||||||
"""Precompile a kernel and return a callable that executes it on the core device
|
|
||||||
at a later time.
|
|
||||||
|
|
||||||
Arguments to the kernel are set at compilation time and passed to this function,
|
|
||||||
as additional positional and keyword arguments.
|
|
||||||
The returned callable accepts no arguments.
|
|
||||||
|
|
||||||
Precompiled kernels may use RPCs and subkernels.
|
|
||||||
|
|
||||||
Object attributes at the beginning of a precompiled kernel execution have the
|
|
||||||
values they had at precompilation time. If up-to-date values are required,
|
|
||||||
use RPC to read them.
|
|
||||||
Similarly, modified values are not written back, and explicit RPC should be used
|
|
||||||
to modify host objects.
|
|
||||||
Carefully review the source code of driver calls used in precompiled kernels, as
|
|
||||||
they may rely on host object attributes being transferred between kernel calls.
|
|
||||||
Examples include code used to control DDS phase and Urukul RF switch control
|
|
||||||
via the CPLD register.
|
|
||||||
|
|
||||||
The return value of the callable is the return value of the kernel, if any.
|
|
||||||
|
|
||||||
The callable may be called several times.
|
|
||||||
"""
|
|
||||||
if not hasattr(function, "artiq_embedded"):
|
|
||||||
raise ValueError("Argument is not a kernel")
|
|
||||||
|
|
||||||
result = None
|
|
||||||
@rpc(flags={"async"})
|
|
||||||
def set_result(new_result):
|
|
||||||
nonlocal result
|
|
||||||
result = new_result
|
|
||||||
|
|
||||||
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
|
|
||||||
self.compile(function, args, kwargs, set_result, attribute_writeback=False)
|
|
||||||
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
|
|
||||||
|
|
||||||
@wraps(function)
|
|
||||||
def run_precompiled():
|
|
||||||
nonlocal result
|
|
||||||
self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
|
|
||||||
return result
|
|
||||||
|
|
||||||
return run_precompiled
|
|
||||||
|
|
||||||
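A hedged usage sketch of ``precompile()`` as described in the docstring above; the experiment, kernel and argument are hypothetical: ::

    from artiq.experiment import *

    class Flasher(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")

        @kernel
        def flash(self, n):
            for _ in range(n):
                self.ttl0.pulse(1*us)
                delay(1*us)

        def run(self):
            # Arguments are fixed at precompilation time; the returned callable
            # takes none and may be invoked repeatedly.
            flash_10 = self.core.precompile(self.flash, 10)
            for _ in range(5):
                flash_10()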
@portable
|
@portable
|
||||||
def seconds_to_mu(self, seconds):
|
def seconds_to_mu(self, seconds):
|
||||||
"""Convert seconds to the corresponding number of machine units
|
"""Convert seconds to the corresponding number of machine units
|
||||||
(fine RTIO cycles).
|
(RTIO cycles).
|
||||||
|
|
||||||
:param seconds: time (in seconds) to convert.
|
:param seconds: time (in seconds) to convert.
|
||||||
"""
|
"""
|
||||||
|
@ -286,7 +151,7 @@ class Core:
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def mu_to_seconds(self, mu):
|
def mu_to_seconds(self, mu):
|
||||||
"""Convert machine units (fine RTIO cycles) to seconds.
|
"""Convert machine units (RTIO cycles) to seconds.
|
||||||
|
|
||||||
:param mu: cycle count to convert.
|
:param mu: cycle count to convert.
|
||||||
"""
|
"""
|
||||||
|
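A worked round-trip for the two conversions above, assuming the 1 ns reference period from the class docstring (illustration only; the actual methods divide or multiply by ``self.ref_period``): ::

    ref_period = 1e-9

    def seconds_to_mu(seconds):
        return round(seconds / ref_period)

    def mu_to_seconds(mu):
        return mu * ref_period

    assert seconds_to_mu(1.5e-6) == 1500   # 1.5 us -> 1500 machine units
    assert mu_to_seconds(8) == 8e-9        # one coarse cycle at ref_multiplier=8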
@ -301,7 +166,7 @@ class Core:
|
||||||
for the actual value of the hardware register at the instant when
|
for the actual value of the hardware register at the instant when
|
||||||
execution resumes in the caller.
|
execution resumes in the caller.
|
||||||
|
|
||||||
For a more detailed description of these concepts, see :doc:`rtio`.
|
For a more detailed description of these concepts, see :doc:`/rtio`.
|
||||||
"""
|
"""
|
||||||
return rtio_get_counter()
|
return rtio_get_counter()
|
||||||
|
|
||||||
|
@ -320,7 +185,7 @@ class Core:
|
||||||
def get_rtio_destination_status(self, destination):
|
def get_rtio_destination_status(self, destination):
|
||||||
"""Returns whether the specified RTIO destination is up.
|
"""Returns whether the specified RTIO destination is up.
|
||||||
This is particularly useful in startup kernels to delay
|
This is particularly useful in startup kernels to delay
|
||||||
startup until certain DRTIO destinations are available."""
|
startup until certain DRTIO destinations are up."""
|
||||||
return rtio_get_destination_status(destination)
|
return rtio_get_destination_status(destination)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
|
@ -341,21 +206,3 @@ class Core:
|
||||||
min_now = rtio_get_counter() + 125000
|
min_now = rtio_get_counter() + 125000
|
||||||
if now_mu() < min_now:
|
if now_mu() < min_now:
|
||||||
at_mu(min_now)
|
at_mu(min_now)
|
||||||
|
|
||||||
def trigger_analyzer_proxy(self):
|
|
||||||
"""Causes the core analyzer proxy to retrieve a dump from the device,
|
|
||||||
and distribute it to all connected clients (typically dashboards).
|
|
||||||
|
|
||||||
Returns only after the dump has been retrieved from the device.
|
|
||||||
|
|
||||||
Raises :exc:`IOError` if no analyzer proxy has been configured, or if the
|
|
||||||
analyzer proxy fails. In the latter case, more details would be
|
|
||||||
available in the proxy log.
|
|
||||||
"""
|
|
||||||
if self.analyzer_proxy is None:
|
|
||||||
if self.analyzer_proxy_name is not None:
|
|
||||||
self.analyzer_proxy = self.dmgr.get(self.analyzer_proxy_name)
|
|
||||||
if self.analyzer_proxy is None:
|
|
||||||
raise IOError("No analyzer proxy configured")
|
|
||||||
else:
|
|
||||||
self.analyzer_proxy.trigger()
|
|
||||||
|
|
|
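A hedged sketch of how the ``analyzer_proxy``/``analyze_at_run_end`` arguments documented above might be wired into a device database; the proxy name and address are assumptions: ::

    device_db = {
        "core": {
            "type": "local",
            "module": "artiq.coredevice.core",
            "class": "Core",
            "arguments": {
                "host": "192.168.1.75",
                "ref_period": 1e-9,
                "analyzer_proxy": "core_analyzer",   # hypothetical proxy device name
                "analyze_at_run_end": True,
            },
        },
    }

With such an entry, ``self.core.trigger_analyzer_proxy()`` can also be called manually from an experiment.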
@ -19,24 +19,16 @@
|
||||||
},
|
},
|
||||||
"min_artiq_version": {
|
"min_artiq_version": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"description": "Minimum required ARTIQ version",
|
"description": "Minimum required ARTIQ version"
|
||||||
"default": "0"
|
|
||||||
},
|
},
|
||||||
"hw_rev": {
|
"hw_rev": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"description": "Hardware revision"
|
"description": "Hardware revision"
|
||||||
},
|
},
|
||||||
"base": {
|
"base": {
|
||||||
"type": "string",
|
|
||||||
"enum": ["use_drtio_role", "standalone", "master", "satellite"],
|
|
||||||
"description": "Deprecated, use drtio_role instead",
|
|
||||||
"default": "use_drtio_role"
|
|
||||||
},
|
|
||||||
"drtio_role": {
|
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"enum": ["standalone", "master", "satellite"],
|
"enum": ["standalone", "master", "satellite"],
|
||||||
"description": "Role that this device takes in a DRTIO network; 'standalone' means no DRTIO",
|
"description": "SoC base; value depends on intended system topology"
|
||||||
"default": "standalone"
|
|
||||||
},
|
},
|
||||||
"ext_ref_frequency": {
|
"ext_ref_frequency": {
|
||||||
"type": "number",
|
"type": "number",
|
||||||
|
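A hedged fragment of a system description using the ``drtio_role`` field defined above, shown as a Python dict for brevity; the other values are placeholders: ::

    import json

    description = {
        "min_artiq_version": "7.0",     # assumed version string
        "hw_rev": "v2.0",               # assumed hardware revision
        "base": "use_drtio_role",       # deprecated alias, kept for compatibility
        "drtio_role": "satellite",      # "standalone", "master" or "satellite"
    }
    print(json.dumps(description, indent=2))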
@ -49,10 +41,6 @@
|
||||||
"default": 125e6,
|
"default": 125e6,
|
||||||
"description": "RTIO frequency"
|
"description": "RTIO frequency"
|
||||||
},
|
},
|
||||||
"enable_wrpll": {
|
|
||||||
"type": "boolean",
|
|
||||||
"default": false
|
|
||||||
},
|
|
||||||
"core_addr": {
|
"core_addr": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"format": "ipv4",
|
"format": "ipv4",
|
||||||
|
@ -134,7 +122,7 @@
|
||||||
},
|
},
|
||||||
"hw_rev": {
|
"hw_rev": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"enum": ["v1.0", "v1.1"]
|
"enum": ["v1.0"]
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -146,7 +134,7 @@
|
||||||
"properties": {
|
"properties": {
|
||||||
"type": {
|
"type": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"enum": ["dio", "dio_spi", "urukul", "novogorny", "sampler", "suservo", "zotino", "grabber", "mirny", "fastino", "phaser", "hvamp", "shuttler"]
|
"enum": ["dio", "urukul", "novogorny", "sampler", "suservo", "zotino", "grabber", "mirny", "fastino", "phaser", "hvamp"]
|
||||||
},
|
},
|
||||||
"board": {
|
"board": {
|
||||||
"type": "string"
|
"type": "string"
|
||||||
|
@ -182,101 +170,15 @@
|
||||||
},
|
},
|
||||||
"bank_direction_low": {
|
"bank_direction_low": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"enum": ["input", "output", "clkgen"]
|
"enum": ["input", "output"]
|
||||||
},
|
},
|
||||||
"bank_direction_high": {
|
"bank_direction_high": {
|
||||||
"type": "string",
|
"type": "string",
|
||||||
"enum": ["input", "output", "clkgen"]
|
"enum": ["input", "output"]
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"required": ["ports", "bank_direction_low", "bank_direction_high"]
|
"required": ["ports", "bank_direction_low", "bank_direction_high"]
|
||||||
}
|
}
|
||||||
}, {
|
|
||||||
"title": "DIO_SPI",
|
|
||||||
"if": {
|
|
||||||
"properties": {
|
|
||||||
"type": {
|
|
||||||
"const": "dio_spi"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"then": {
|
|
||||||
"properties": {
|
|
||||||
"ports": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": "integer"
|
|
||||||
},
|
|
||||||
"minItems": 1,
|
|
||||||
"maxItems": 1
|
|
||||||
},
|
|
||||||
"spi": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"name": {
|
|
||||||
"type": "string",
|
|
||||||
"default": "dio_spi"
|
|
||||||
},
|
|
||||||
"clk": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 7
|
|
||||||
},
|
|
||||||
"mosi": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 7
|
|
||||||
},
|
|
||||||
"miso": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 7
|
|
||||||
},
|
|
||||||
"cs": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 7
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["clk"]
|
|
||||||
},
|
|
||||||
"minItems": 1
|
|
||||||
},
|
|
||||||
"ttl": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"name": {
|
|
||||||
"type": "string",
|
|
||||||
"default": "ttl"
|
|
||||||
},
|
|
||||||
"pin": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 7
|
|
||||||
},
|
|
||||||
"direction": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["input", "output"]
|
|
||||||
},
|
|
||||||
"edge_counter": {
|
|
||||||
"type": "boolean",
|
|
||||||
"default": false
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["pin", "direction"]
|
|
||||||
},
|
|
||||||
"default": []
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["ports", "spi"]
|
|
||||||
}
|
|
||||||
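A hedged example of a peripheral entry matching the DIO_SPI schema above, shown as a Python dict; the EEM port number and pin assignments are assumptions: ::

    dio_spi_peripheral = {
        "type": "dio_spi",
        "ports": [2],
        "spi": [
            {"name": "dio_spi0", "clk": 0, "mosi": 1, "miso": 2, "cs": [3]},
        ],
        "ttl": [
            {"name": "ttl4", "pin": 4, "direction": "output"},
            {"name": "ttl5", "pin": 5, "direction": "input", "edge_counter": True},
        ],
    }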
}, {
|
}, {
|
||||||
"title": "Urukul",
|
"title": "Urukul",
|
||||||
"if": {
|
"if": {
|
||||||
|
@ -317,12 +219,6 @@
|
||||||
"pll_n": {
|
"pll_n": {
|
||||||
"type": "integer"
|
"type": "integer"
|
||||||
},
|
},
|
||||||
"pll_en": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
"default": 1
|
|
||||||
},
|
|
||||||
"pll_vco": {
|
"pll_vco": {
|
||||||
"type": "integer"
|
"type": "integer"
|
||||||
},
|
},
|
||||||
|
@ -397,11 +293,6 @@
|
||||||
"minItems": 2,
|
"minItems": 2,
|
||||||
"maxItems": 2
|
"maxItems": 2
|
||||||
},
|
},
|
||||||
"sampler_hw_rev": {
|
|
||||||
"type": "string",
|
|
||||||
"pattern": "^v[0-9]+\\.[0-9]+",
|
|
||||||
"default": "v2.2"
|
|
||||||
},
|
|
||||||
"urukul0_ports": {
|
"urukul0_ports": {
|
||||||
"type": "array",
|
"type": "array",
|
||||||
"items": {
|
"items": {
|
||||||
|
@ -431,12 +322,6 @@
|
||||||
"type": "integer",
|
"type": "integer",
|
||||||
"default": 32
|
"default": 32
|
||||||
},
|
},
|
||||||
"pll_en": {
|
|
||||||
"type": "integer",
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
"default": 1
|
|
||||||
},
|
|
||||||
"pll_vco": {
|
"pll_vco": {
|
||||||
"type": "integer"
|
"type": "integer"
|
||||||
}
|
}
|
||||||
|
@ -528,11 +413,6 @@
|
||||||
"almazny": {
|
"almazny": {
|
||||||
"type": "boolean",
|
"type": "boolean",
|
||||||
"default": false
|
"default": false
|
||||||
},
|
|
||||||
"almazny_hw_rev": {
|
|
||||||
"type": "string",
|
|
||||||
"pattern": "^v[0-9]+\\.[0-9]+",
|
|
||||||
"default": "v1.2"
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"required": ["ports"]
|
"required": ["ports"]
|
||||||
|
@ -582,11 +462,6 @@
|
||||||
},
|
},
|
||||||
"minItems": 1,
|
"minItems": 1,
|
||||||
"maxItems": 1
|
"maxItems": 1
|
||||||
},
|
|
||||||
"mode": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["base", "miqro"],
|
|
||||||
"default": "base"
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"required": ["ports"]
|
"required": ["ports"]
|
||||||
|
@ -613,35 +488,6 @@
|
||||||
},
|
},
|
||||||
"required": ["ports"]
|
"required": ["ports"]
|
||||||
}
|
}
|
||||||
},{
|
|
||||||
"title": "Shuttler",
|
|
||||||
"if": {
|
|
||||||
"properties": {
|
|
||||||
"type": {
|
|
||||||
"const": "shuttler"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"then": {
|
|
||||||
"properties": {
|
|
||||||
"ports": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": "integer"
|
|
||||||
},
|
|
||||||
"minItems": 1,
|
|
||||||
"maxItems": 2
|
|
||||||
},
|
|
||||||
"drtio_destination": {
|
|
||||||
"type": "integer"
|
|
||||||
},
|
|
||||||
"hw_rev": {
|
|
||||||
"type": "string",
|
|
||||||
"enum": ["v1.0", "v1.1"]
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"required": ["ports"]
|
|
||||||
}
|
|
||||||
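A hedged example entry for the Shuttler schema above; the port numbers and DRTIO destination are assumptions: ::

    shuttler_peripheral = {
        "type": "shuttler",
        "ports": [0, 1],
        "drtio_destination": 3,
        "hw_rev": "v1.1",
    }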
}]
|
}]
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@ -6,7 +6,7 @@ alone could achieve.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
from artiq.language.core import syscall, kernel
|
from artiq.language.core import syscall, kernel
|
||||||
from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple, TBool
|
from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple
|
||||||
from artiq.coredevice.exceptions import DMAError
|
from artiq.coredevice.exceptions import DMAError
|
||||||
|
|
||||||
from numpy import int64
|
from numpy import int64
|
||||||
|
@ -17,7 +17,7 @@ def dma_record_start(name: TStr) -> TNone:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
@syscall
|
@syscall
|
||||||
def dma_record_stop(duration: TInt64, enable_ddma: TBool) -> TNone:
|
def dma_record_stop(duration: TInt64) -> TNone:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
@syscall
|
@syscall
|
||||||
|
@ -25,11 +25,11 @@ def dma_erase(name: TStr) -> TNone:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
@syscall
|
@syscall
|
||||||
def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32, TBool]):
|
def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32]):
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
@syscall
|
@syscall
|
||||||
def dma_playback(timestamp: TInt64, ptr: TInt32, enable_ddma: TBool) -> TNone:
|
def dma_playback(timestamp: TInt64, ptr: TInt32) -> TNone:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
|
|
||||||
|
@ -47,7 +47,6 @@ class DMARecordContextManager:
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
self.name = ""
|
self.name = ""
|
||||||
self.saved_now_mu = int64(0)
|
self.saved_now_mu = int64(0)
|
||||||
self.enable_ddma = False
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def __enter__(self):
|
def __enter__(self):
|
||||||
|
@ -57,7 +56,7 @@ class DMARecordContextManager:
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def __exit__(self, type, value, traceback):
|
def __exit__(self, type, value, traceback):
|
||||||
dma_record_stop(now_mu(), self.enable_ddma) # see above
|
dma_record_stop(now_mu()) # see above
|
||||||
at_mu(self.saved_now_mu)
|
at_mu(self.saved_now_mu)
|
||||||
|
|
||||||
|
|
||||||
|
@ -75,20 +74,12 @@ class CoreDMA:
|
||||||
self.epoch = 0
|
self.epoch = 0
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def record(self, name, enable_ddma=False):
|
def record(self, name):
|
||||||
"""Returns a context manager that will record a DMA trace called `name`.
|
"""Returns a context manager that will record a DMA trace called ``name``.
|
||||||
Any previously recorded trace with the same name is overwritten.
|
Any previously recorded trace with the same name is overwritten.
|
||||||
The trace will persist across kernel switches.
|
The trace will persist across kernel switches."""
|
||||||
|
|
||||||
In DRTIO context, distributed DMA can be toggled with `enable_ddma`.
|
|
||||||
Enabling it allows running DMA on satellites, rather than sending all
|
|
||||||
events from the master.
|
|
||||||
|
|
||||||
Keeping it disabled may improve performance in some scenarios,
|
|
||||||
e.g. when there are many small satellite buffers."""
|
|
||||||
self.epoch += 1
|
self.epoch += 1
|
||||||
self.recorder.name = name
|
self.recorder.name = name
|
||||||
self.recorder.enable_ddma = enable_ddma
|
|
||||||
return self.recorder
|
return self.recorder
|
||||||
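A hedged usage sketch of the ``record()``/``playback()`` API above; the experiment and device names are hypothetical: ::

    from artiq.experiment import *

    class DMAPulses(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("core_dma")
            self.setattr_device("ttl0")

        @kernel
        def run(self):
            self.core.reset()
            # Record once; with enable_ddma=True the trace may run directly on
            # DRTIO satellites instead of being streamed from the master.
            with self.core_dma.record("pulses", enable_ddma=True):
                for _ in range(50):
                    self.ttl0.pulse(100*ns)
                    delay(100*ns)
            self.core_dma.playback("pulses")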
|
|
||||||
@kernel
|
@kernel
|
||||||
|
@ -101,24 +92,24 @@ class CoreDMA:
|
||||||
def playback(self, name):
|
def playback(self, name):
|
||||||
"""Replays a previously recorded DMA trace. This function blocks until
|
"""Replays a previously recorded DMA trace. This function blocks until
|
||||||
the entire trace is submitted to the RTIO FIFOs."""
|
the entire trace is submitted to the RTIO FIFOs."""
|
||||||
(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
|
(advance_mu, ptr) = dma_retrieve(name)
|
||||||
dma_playback(now_mu(), ptr, uses_ddma)
|
dma_playback(now_mu(), ptr)
|
||||||
delay_mu(advance_mu)
|
delay_mu(advance_mu)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def get_handle(self, name):
|
def get_handle(self, name):
|
||||||
"""Returns a handle to a previously recorded DMA trace. The returned handle
|
"""Returns a handle to a previously recorded DMA trace. The returned handle
|
||||||
is only valid until the next call to :meth:`record` or :meth:`erase`."""
|
is only valid until the next call to :meth:`record` or :meth:`erase`."""
|
||||||
(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
|
(advance_mu, ptr) = dma_retrieve(name)
|
||||||
return (self.epoch, advance_mu, ptr, uses_ddma)
|
return (self.epoch, advance_mu, ptr)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def playback_handle(self, handle):
|
def playback_handle(self, handle):
|
||||||
"""Replays a handle obtained with :meth:`get_handle`. Using this function
|
"""Replays a handle obtained with :meth:`get_handle`. Using this function
|
||||||
is much faster than :meth:`playback` for replaying a set of traces repeatedly,
|
is much faster than :meth:`playback` for replaying a set of traces repeatedly,
|
||||||
but offloads the overhead of managing the handles onto the programmer."""
|
but incurs the overhead of managing the handles onto the programmer."""
|
||||||
(epoch, advance_mu, ptr, uses_ddma) = handle
|
(epoch, advance_mu, ptr) = handle
|
||||||
if self.epoch != epoch:
|
if self.epoch != epoch:
|
||||||
raise DMAError("Invalid handle")
|
raise DMAError("Invalid handle")
|
||||||
dma_playback(now_mu(), ptr, uses_ddma)
|
dma_playback(now_mu(), ptr)
|
||||||
delay_mu(advance_mu)
|
delay_mu(advance_mu)
|
||||||
|
|
|
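A hedged sketch of the faster handle-based replay described above, continuing the hypothetical experiment from the previous example: ::

    @kernel
    def replay_many(self):
        # The handle stays valid until the next record() or erase() call.
        handle = self.core_dma.get_handle("pulses")
        for _ in range(1000):
            self.core_dma.playback_handle(handle)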
@ -1,9 +1,9 @@
|
||||||
"""Driver for RTIO-enabled TTL edge counter.
|
"""Driver for RTIO-enabled TTL edge counter.
|
||||||
|
|
||||||
As for the TTL input PHYs, sensitivity can be configured over RTIO
|
Like for the TTL input PHYs, sensitivity can be configured over RTIO
|
||||||
(:meth:`gate_rising<EdgeCounter.gate_rising>`, etc.). In contrast to the former, however, the count is
|
(``gate_rising()``, etc.). In contrast to the former, however, the count is
|
||||||
accumulated in gateware, and only a single input event is generated at the end
|
accumulated in gateware, and only a single input event is generated at the end
|
||||||
of each gate period: ::
|
of each gate period::
|
||||||
|
|
||||||
with parallel:
|
with parallel:
|
||||||
doppler_cool()
|
doppler_cool()
|
||||||
|
@ -17,12 +17,12 @@ of each gate period: ::
|
||||||
print("Readout counts:", self.pmt_counter.fetch_count())
|
print("Readout counts:", self.pmt_counter.fetch_count())
|
||||||
|
|
||||||
For applications where the timestamps of the individual input events are not
|
For applications where the timestamps of the individual input events are not
|
||||||
required, this has two advantages over :meth:`TTLInOut.count<artiq.coredevice.ttl.TTLInOut.count>`
|
required, this has two advantages over ``TTLInOut.count()`` beyond raw
|
||||||
beyond raw throughput. First, it is easy to count events during multiple separate
|
throughput. First, it is easy to count events during multiple separate periods
|
||||||
periods without blocking to read back counts in between, as illustrated in the
|
without blocking to read back counts in between, as illustrated in the above
|
||||||
above example. Secondly, as each count total only takes up a single input event,
|
example. Secondly, as each count total only takes up a single input event, it
|
||||||
it is much easier to acquire counts on several channels in parallel without
|
is much easier to acquire counts on several channels in parallel without
|
||||||
risking input RTIO overflows: ::
|
risking input FIFO overflows::
|
||||||
|
|
||||||
# Using the TTLInOut driver, pmt_1 input events are only processed
|
# Using the TTLInOut driver, pmt_1 input events are only processed
|
||||||
# after pmt_0 is done counting. To avoid RTIOOverflows, a round-robin
|
# after pmt_0 is done counting. To avoid RTIOOverflows, a round-robin
|
||||||
|
@ -35,6 +35,8 @@ risking input RTIO overflows: ::
|
||||||
counts_0 = self.pmt_0.count(now_mu()) # blocks
|
counts_0 = self.pmt_0.count(now_mu()) # blocks
|
||||||
counts_1 = self.pmt_1.count(now_mu())
|
counts_1 = self.pmt_1.count(now_mu())
|
||||||
|
|
||||||
|
#
|
||||||
|
|
||||||
# Using gateware counters, only a single input event each is
|
# Using gateware counters, only a single input event each is
|
||||||
# generated, greatly reducing the load on the input FIFOs:
|
# generated, greatly reducing the load on the input FIFOs:
|
||||||
|
|
||||||
|
@ -45,7 +47,7 @@ risking input RTIO overflows: ::
|
||||||
counts_0 = self.pmt_0_counter.fetch_count() # blocks
|
counts_0 = self.pmt_0_counter.fetch_count() # blocks
|
||||||
counts_1 = self.pmt_1_counter.fetch_count()
|
counts_1 = self.pmt_1_counter.fetch_count()
|
||||||
|
|
||||||
See the sources of :mod:`artiq.gateware.rtio.phy.edge_counter` and
|
See :mod:`artiq.gateware.rtio.phy.edge_counter` and
|
||||||
:meth:`artiq.gateware.eem.DIO.add_std` for the gateware components.
|
:meth:`artiq.gateware.eem.DIO.add_std` for the gateware components.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -89,10 +91,6 @@ class EdgeCounter:
|
||||||
self.channel = channel
|
self.channel = channel
|
||||||
self.counter_max = (1 << (gateware_width - 1)) - 1
|
self.counter_max = (1 << (gateware_width - 1)) - 1
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_rtio_channels(channel, **kwargs):
|
|
||||||
return [(channel, None)]
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def gate_rising(self, duration):
|
def gate_rising(self, duration):
|
||||||
"""Count rising edges for the given duration and request the total at
|
"""Count rising edges for the given duration and request the total at
|
||||||
|
@ -174,13 +172,13 @@ class EdgeCounter:
|
||||||
"""Emit an RTIO event at the current timeline position to set the
|
"""Emit an RTIO event at the current timeline position to set the
|
||||||
gateware configuration.
|
gateware configuration.
|
||||||
|
|
||||||
For most use cases, the ``gate_*`` wrappers will be more convenient.
|
For most use cases, the `gate_*` wrappers will be more convenient.
|
||||||
|
|
||||||
:param count_rising: Whether to count rising signal edges.
|
:param count_rising: Whether to count rising signal edges.
|
||||||
:param count_falling: Whether to count falling signal edges.
|
:param count_falling: Whether to count falling signal edges.
|
||||||
:param send_count_event: If ``True``, an input event with the current
|
:param send_count_event: If `True`, an input event with the current
|
||||||
counter value is generated on the next clock cycle (once).
|
counter value is generated on the next clock cycle (once).
|
||||||
:param reset_to_zero: If ``True``, the counter value is reset to zero on
|
:param reset_to_zero: If `True`, the counter value is reset to zero on
|
||||||
the next clock cycle (once).
|
the next clock cycle (once).
|
||||||
"""
|
"""
|
||||||
config = int32(0)
|
config = int32(0)
|
||||||
|
|
|
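A hedged sketch of a manual gate built from ``set_config()`` using the flags documented above, as a method of a hypothetical experiment with an EdgeCounter device named ``pmt_counter``; the ``gate_*`` wrappers encapsulate the same sequence: ::

    @kernel
    def count_rising_for(self, duration):
        self.pmt_counter.set_config(count_rising=True, count_falling=False,
                                    send_count_event=False, reset_to_zero=True)
        delay(duration)
        self.pmt_counter.set_config(count_rising=False, count_falling=False,
                                    send_count_event=True, reset_to_zero=False)
        return self.pmt_counter.fetch_count()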
@ -2,123 +2,64 @@ import builtins
|
||||||
import linecache
|
import linecache
|
||||||
import re
|
import re
|
||||||
import os
|
import os
|
||||||
from numpy.linalg import LinAlgError
|
|
||||||
|
|
||||||
from artiq import __artiq_dir__ as artiq_dir
|
from artiq import __artiq_dir__ as artiq_dir
|
||||||
from artiq.coredevice.runtime import source_loader
|
from artiq.coredevice.runtime import source_loader
|
||||||
|
|
||||||
"""
|
|
||||||
This file provides class definitions for all the exceptions declared in `EmbeddingMap` in `artiq.compiler.embedding`
|
|
||||||
|
|
||||||
For Python builtin exceptions, use the `builtins` module
|
|
||||||
For ARTIQ-specific exceptions, inherit from the `Exception` class
|
|
||||||
"""
|
|
||||||
|
|
||||||
AssertionError = builtins.AssertionError
|
|
||||||
AttributeError = builtins.AttributeError
|
|
||||||
IndexError = builtins.IndexError
|
|
||||||
IOError = builtins.IOError
|
|
||||||
KeyError = builtins.KeyError
|
|
||||||
NotImplementedError = builtins.NotImplementedError
|
|
||||||
OverflowError = builtins.OverflowError
|
|
||||||
RuntimeError = builtins.RuntimeError
|
|
||||||
TimeoutError = builtins.TimeoutError
|
|
||||||
TypeError = builtins.TypeError
|
|
||||||
ValueError = builtins.ValueError
|
|
||||||
ZeroDivisionError = builtins.ZeroDivisionError
|
ZeroDivisionError = builtins.ZeroDivisionError
|
||||||
OSError = builtins.OSError
|
ValueError = builtins.ValueError
|
||||||
|
IndexError = builtins.IndexError
|
||||||
|
RuntimeError = builtins.RuntimeError
|
||||||
|
AssertionError = builtins.AssertionError
|
||||||
|
|
||||||
|
|
||||||
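The ``name`` strings parsed below follow the ``"<id>:<qualified name>"`` convention used by the RPC exception sender earlier in this diff: id 0 marks an exception resolvable by name in ``artiq.coredevice.exceptions``, any other id is an embedding-map object key. A tiny illustration: ::

    name = "0:ValueError"
    exn_id, qualname = name.split(':', 1)
    assert int(exn_id) == 0 and qualname == "ValueError"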
class CoreException:
|
class CoreException:
|
||||||
"""Information about an exception raised or passed through the core device."""
|
"""Information about an exception raised or passed through the core device."""
|
||||||
def __init__(self, exceptions, exception_info, traceback, stack_pointers):
|
|
||||||
self.exceptions = exceptions
|
|
||||||
self.exception_info = exception_info
|
|
||||||
self.traceback = list(traceback)
|
|
||||||
self.stack_pointers = stack_pointers
|
|
||||||
|
|
||||||
first_exception = exceptions[0]
|
def __init__(self, name, message, params, traceback):
|
||||||
name = first_exception[0]
|
|
||||||
if ':' in name:
|
if ':' in name:
|
||||||
exn_id, self.name = name.split(':', 2)
|
exn_id, self.name = name.split(':', 2)
|
||||||
self.id = int(exn_id)
|
self.id = int(exn_id)
|
||||||
else:
|
else:
|
||||||
self.id, self.name = 0, name
|
self.id, self.name = 0, name
|
||||||
self.message = first_exception[1]
|
self.message, self.params = message, params
|
||||||
self.params = first_exception[2]
|
self.traceback = list(traceback)
|
||||||
|
|
||||||
def append_backtrace(self, record, inlined=False):
|
|
||||||
filename, line, column, function, address = record
|
|
||||||
stub_globals = {"__name__": filename, "__loader__": source_loader}
|
|
||||||
source_line = linecache.getline(filename, line, stub_globals)
|
|
||||||
indentation = re.search(r"^\s*", source_line).end()
|
|
||||||
|
|
||||||
if address is None:
|
|
||||||
formatted_address = ""
|
|
||||||
elif inlined:
|
|
||||||
formatted_address = " (inlined)"
|
|
||||||
else:
|
|
||||||
formatted_address = " (RA=+0x{:x})".format(address)
|
|
||||||
|
|
||||||
filename = filename.replace(artiq_dir, "<artiq>")
|
|
||||||
lines = []
|
|
||||||
if column == -1:
|
|
||||||
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
|
|
||||||
lines.append(" File \"{file}\", line {line}, in {function}{address}".
|
|
||||||
format(file=filename, line=line, function=function,
|
|
||||||
address=formatted_address))
|
|
||||||
else:
|
|
||||||
lines.append(" {}^".format(" " * (column - indentation)))
|
|
||||||
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
|
|
||||||
lines.append(" File \"{file}\", line {line}, column {column},"
|
|
||||||
" in {function}{address}".
|
|
||||||
format(file=filename, line=line, column=column + 1,
|
|
||||||
function=function, address=formatted_address))
|
|
||||||
return lines
|
|
||||||
|
|
||||||
def single_traceback(self, exception_index):
|
|
||||||
# note that we insert in reversed order
|
|
||||||
lines = []
|
|
||||||
last_sp = 0
|
|
||||||
start_backtrace_index = self.exception_info[exception_index][1]
|
|
||||||
zipped = list(zip(self.traceback[start_backtrace_index:],
|
|
||||||
self.stack_pointers[start_backtrace_index:]))
|
|
||||||
exception = self.exceptions[exception_index]
|
|
||||||
name = exception[0]
|
|
||||||
message = exception[1]
|
|
||||||
params = exception[2]
|
|
||||||
if ':' in name:
|
|
||||||
exn_id, name = name.split(':', 2)
|
|
||||||
exn_id = int(exn_id)
|
|
||||||
else:
|
|
||||||
exn_id = 0
|
|
||||||
lines.append("{}({}): {}".format(name, exn_id, message.format(*params)))
|
|
||||||
zipped.append(((exception[3], exception[4], exception[5], exception[6],
|
|
||||||
None, []), None))
|
|
||||||
|
|
||||||
for ((filename, line, column, function, address, inlined), sp) in zipped:
|
|
||||||
# backtrace of nested exceptions may be discontinuous
|
|
||||||
# but the stack pointer must increase monotonically
|
|
||||||
if sp is not None and sp <= last_sp:
|
|
||||||
continue
|
|
||||||
last_sp = sp
|
|
||||||
|
|
||||||
for record in reversed(inlined):
|
|
||||||
lines += self.append_backtrace(record, True)
|
|
||||||
lines += self.append_backtrace((filename, line, column, function,
|
|
||||||
address))
|
|
||||||
|
|
||||||
lines.append("Traceback (most recent call first):")
|
|
||||||
|
|
||||||
return "\n".join(reversed(lines))
|
|
||||||
|
|
||||||
def __str__(self):
|
def __str__(self):
|
||||||
tracebacks = [self.single_traceback(i) for i in range(len(self.exceptions))]
|
lines = []
|
||||||
traceback_str = ('\n\nDuring handling of the above exception, ' +
|
lines.append("Core Device Traceback (most recent call last):")
|
||||||
'another exception occurred:\n\n').join(tracebacks)
|
last_address = 0
|
||||||
return 'Core Device Traceback:\n' +\
|
for (filename, line, column, function, address) in self.traceback:
|
||||||
traceback_str +\
|
stub_globals = {"__name__": filename, "__loader__": source_loader}
|
||||||
'\n\nEnd of Core Device Traceback\n'
|
source_line = linecache.getline(filename, line, stub_globals)
|
||||||
|
indentation = re.search(r"^\s*", source_line).end()
|
||||||
|
|
||||||
|
if address is None:
|
||||||
|
formatted_address = ""
|
||||||
|
elif address == last_address:
|
||||||
|
formatted_address = " (inlined)"
|
||||||
|
else:
|
||||||
|
formatted_address = " (RA=+0x{:x})".format(address)
|
||||||
|
last_address = address
|
||||||
|
|
||||||
|
filename = filename.replace(artiq_dir, "<artiq>")
|
||||||
|
if column == -1:
|
||||||
|
lines.append(" File \"{file}\", line {line}, in {function}{address}".
|
||||||
|
format(file=filename, line=line, function=function,
|
||||||
|
address=formatted_address))
|
||||||
|
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
|
||||||
|
else:
|
||||||
|
lines.append(" File \"{file}\", line {line}, column {column},"
|
||||||
|
" in {function}{address}".
|
||||||
|
format(file=filename, line=line, column=column + 1,
|
||||||
|
function=function, address=formatted_address))
|
||||||
|
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
|
||||||
|
lines.append(" {}^".format(" " * (column - indentation)))
|
||||||
|
|
||||||
|
lines.append("{}({}): {}".format(self.name, self.id,
|
||||||
|
self.message.format(*self.params)))
|
||||||
|
return "\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
class InternalError(Exception):

@ -152,7 +93,7 @@ class RTIOOverflow(Exception):

class RTIODestinationUnreachable(Exception):
    """Raised when a RTIO operation could not be completed due to a DRTIO link
    """Raised with a RTIO operation could not be completed due to a DRTIO link
    being down.
    """
    artiq_builtin = True

@ -163,27 +104,15 @@ class DMAError(Exception):
    artiq_builtin = True


class SubkernelError(Exception):
    """Raised when an operation regarding a subkernel is invalid
    or cannot be completed.
    """
    artiq_builtin = True


class ClockFailure(Exception):
    """Raised when RTIO PLL has lost lock."""
    artiq_builtin = True


class I2CError(Exception):
    """Raised when a I2C transaction fails."""
    artiq_builtin = True
    pass


class SPIError(Exception):
    """Raised when a SPI transaction fails."""
    artiq_builtin = True
    pass


class UnwrapNoneError(Exception):
    """Raised when unwrapping a none Option."""
    artiq_builtin = True
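These exceptions are raised by the core device at run time and can be caught inside kernels. A minimal, illustrative sketch of such handling, assuming a standard ARTIQ install and a device_db that provides "core" and a TTL output named "led" (both names hypothetical here):

from artiq.experiment import *
from artiq.coredevice.exceptions import RTIOUnderflow

class UnderflowRecovery(EnvExperiment):
    """Illustrative only; device names are placeholders."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("led")

    @kernel
    def run(self):
        self.core.reset()
        try:
            self.led.pulse(1*us)        # raises RTIOUnderflow if the timeline is behind
        except RTIOUnderflow:
            self.core.break_realtime()  # re-synchronise with the wall clock
            self.led.pulse(1*us)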
|
|
||||||
|
|
|
@ -1,12 +1,12 @@
"""RTIO driver for the Fastino 32-channel, 16-bit, 2.5 MS/s per channel
"""RTIO driver for the Fastino 32channel, 16 bit, 2.5 MS/s per channel,
streaming DAC.
"""
from numpy import int32, int64
from numpy import int32

from artiq.language.core import kernel, portable, delay, delay_mu
from artiq.coredevice.rtio import (rtio_output, rtio_output_wide,
                                   rtio_input_data)
from artiq.language.units import ns
from artiq.language.units import us
from artiq.language.types import TInt32, TList
@ -17,22 +17,22 @@ class Fastino:
    to the DAC RTIO addresses, if a channel is not "held" by setting its bit
    using :meth:`set_hold`, the next frame will contain the update. For the
    DACs held, the update is triggered explicitly by setting the corresponding
    bit using :meth:`update`. Update is self-clearing. This enables atomic
    bit using :meth:`set_update`. Update is self-clearing. This enables atomic
    DAC updates synchronized to a frame edge.

    The ``log2_width=0`` RTIO layout uses one DAC channel per RTIO address and a
    The `log2_width=0` RTIO layout uses one DAC channel per RTIO address and a
    dense RTIO address space. The RTIO words are narrow (32-bit) and
    dense RTIO address space. The RTIO words are narrow. (32 bit) and
    few-channel updates are efficient. There is the least amount of DAC state
    tracking in kernels, at the cost of more DMA and RTIO data.
    The setting here and in the RTIO PHY (gateware) must match.

    Other ``log2_width`` (up to ``log2_width=5``) settings pack multiple
    Other `log2_width` (up to `log2_width=5`) settings pack multiple
    (in powers of two) DAC channels into one group and into one RTIO write.
    The RTIO data width increases accordingly. The ``log2_width``
    The RTIO data width increases accordingly. The `log2_width`
    LSBs of the RTIO address for a DAC channel write must be zero and the
    address space is sparse. For ``log2_width=5`` the RTIO data is 512-bit wide.
    address space is sparse. For `log2_width=5` the RTIO data is 512 bit wide.

    If ``log2_width`` is zero, the :meth:`set_dac`/:meth:`set_dac_mu` interface
    If `log2_width` is zero, the :meth:`set_dac`/:meth:`set_dac_mu` interface
    must be used. If non-zero, the :meth:`set_group`/:meth:`set_group_mu`
    interface must be used.

@ -41,53 +41,24 @@ class Fastino:
    :param log2_width: Width of DAC channel group (logarithm base 2).
        Value must match the corresponding value in the RTIO PHY (gateware).
    """
|
||||||
kernel_invariants = {"core", "channel", "width", "t_frame"}
|
kernel_invariants = {"core", "channel", "width"}
|
||||||
|
|
||||||
def __init__(self, dmgr, channel, core_device="core", log2_width=0):
|
def __init__(self, dmgr, channel, core_device="core", log2_width=0):
|
||||||
self.channel = channel << 8
|
self.channel = channel << 8
|
||||||
self.core = dmgr.get(core_device)
|
self.core = dmgr.get(core_device)
|
||||||
self.width = 1 << log2_width
|
self.width = 1 << log2_width
|
||||||
# frame duration in mu (14 words each 7 clock cycles each 4 ns)
|
|
||||||
# self.core.seconds_to_mu(14*7*4*ns) # unfortunately this may round wrong
|
|
||||||
assert self.core.ref_period == 1*ns
|
|
||||||
self.t_frame = int64(14*7*4)
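For reference, the hard-coded frame duration above works out to 14 words times 7 clock cycles times 4 ns = 392 ns, i.e. 392 machine units at the 1 ns reference period asserted just before it. A quick standalone check:

ref_period_ns = 1            # asserted above: self.core.ref_period == 1*ns
t_frame_mu = 14 * 7 * 4      # words * clock cycles per word * ns per cycle
print(t_frame_mu * ref_period_ns)   # 392 ns per Fastino frame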
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_rtio_channels(channel, **kwargs):
|
|
||||||
return [(channel, None)]
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def init(self):
|
def init(self):
|
||||||
"""Initialize the device.
|
"""Initialize the device.
|
||||||
|
|
||||||
* disables RESET, DAC_CLR, enables AFE_PWR
|
This clears reset, unsets DAC_CLR, enables AFE_PWR,
|
||||||
* clears error counters, enables error counting
|
clears error counters, then enables error counting
|
||||||
* turns LEDs off
|
|
||||||
* clears ``hold`` and ``continuous`` on all channels
|
|
||||||
* clear and resets interpolators to unit rate change on all
|
|
||||||
channels
|
|
||||||
|
|
||||||
It does not change set channel voltages and does not reset the PLLs or clock
|
|
||||||
domains.
|
|
||||||
|
|
||||||
.. warning::
|
|
||||||
On Fastino gateware before v0.2 this may lead to 0 voltage being emitted
|
|
||||||
transiently.
|
|
||||||
"""
|
"""
|
||||||
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=1)
|
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=1)
|
||||||
delay_mu(self.t_frame)
|
delay(1*us)
|
||||||
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=0)
|
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=0)
|
||||||
delay_mu(self.t_frame)
|
delay(1*us)
|
||||||
self.set_continuous(0)
|
|
||||||
delay_mu(self.t_frame)
|
|
||||||
self.stage_cic(1)
|
|
||||||
delay_mu(self.t_frame)
|
|
||||||
self.apply_cic(0xffffffff)
|
|
||||||
delay_mu(self.t_frame)
|
|
||||||
self.set_leds(0)
|
|
||||||
delay_mu(self.t_frame)
|
|
||||||
self.set_hold(0)
|
|
||||||
delay_mu(self.t_frame)
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def write(self, addr, data):
|
def write(self, addr, data):
|
||||||
|
@ -107,16 +78,15 @@ class Fastino:
|
||||||
:param addr: Address to read from.
|
:param addr: Address to read from.
|
||||||
:return: The data read.
|
:return: The data read.
|
||||||
"""
|
"""
|
||||||
raise NotImplementedError
|
rtio_output(self.channel | addr | 0x80)
|
||||||
# rtio_output(self.channel | addr | 0x80)
|
return rtio_input_data(self.channel >> 8)
|
||||||
# return rtio_input_data(self.channel >> 8)
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def set_dac_mu(self, dac, data):
|
def set_dac_mu(self, dac, data):
|
||||||
"""Write DAC data in machine units.
|
"""Write DAC data in machine units.
|
||||||
|
|
||||||
:param dac: DAC channel to write to (0-31).
|
:param dac: DAC channel to write to (0-31).
|
||||||
:param data: DAC word to write, 16-bit unsigned integer, in machine
|
:param data: DAC word to write, 16 bit unsigned integer, in machine
|
||||||
units.
|
units.
|
||||||
"""
|
"""
|
||||||
self.write(dac, data)
|
self.write(dac, data)
|
||||||
|
@ -125,9 +95,9 @@ class Fastino:
|
||||||
def set_group_mu(self, dac: TInt32, data: TList(TInt32)):
|
def set_group_mu(self, dac: TInt32, data: TList(TInt32)):
|
||||||
"""Write a group of DAC channels in machine units.
|
"""Write a group of DAC channels in machine units.
|
||||||
|
|
||||||
:param dac: First channel in DAC channel group (0-31). The ``log2_width``
|
:param dac: First channel in DAC channel group (0-31). The `log2_width`
|
||||||
LSBs must be zero.
|
LSBs must be zero.
|
||||||
:param data: List of DAC data pairs (2x16-bit unsigned) to write,
|
:param data: List of DAC data pairs (2x16 bit unsigned) to write,
|
||||||
in machine units. Data exceeding group size is ignored.
|
in machine units. Data exceeding group size is ignored.
|
||||||
If the list length is less than group size, the remaining
|
If the list length is less than group size, the remaining
|
||||||
DAC channels within the group are cleared to 0 (machine units).
|
DAC channels within the group are cleared to 0 (machine units).
|
||||||
|
@ -138,10 +108,10 @@ class Fastino:
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def voltage_to_mu(self, voltage):
|
def voltage_to_mu(self, voltage):
|
||||||
"""Convert SI volts to DAC machine units.
|
"""Convert SI Volts to DAC machine units.
|
||||||
|
|
||||||
:param voltage: Voltage in SI volts.
|
:param voltage: Voltage in SI Volts.
|
||||||
:return: DAC data word in machine units, 16-bit integer.
|
:return: DAC data word in machine units, 16 bit integer.
|
||||||
"""
|
"""
|
||||||
data = int32(round((0x8000/10.)*voltage)) + int32(0x8000)
|
data = int32(round((0x8000/10.)*voltage)) + int32(0x8000)
|
||||||
if data < 0 or data > 0xffff:
|
if data < 0 or data > 0xffff:
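A few spot values make the conversion formula above concrete. This standalone sketch mirrors the code and assumes the plus/minus 10 V output range implied by the 0x8000/10 scale factor; the out-of-range branch elided by the diff is assumed to raise.

def fastino_voltage_to_mu(voltage):
    # mirrors the formula above: 0x8000/10 counts per volt, offset binary
    data = int(round((0x8000 / 10.) * voltage)) + 0x8000
    if data < 0 or data > 0xffff:
        raise ValueError("DAC voltage out of bounds")
    return data

print(hex(fastino_voltage_to_mu(-10.0)))  # 0x0
print(hex(fastino_voltage_to_mu(0.0)))    # 0x8000
print(hex(fastino_voltage_to_mu(5.0)))    # 0xc000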
|
||||||
|
@ -150,9 +120,9 @@ class Fastino:
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def voltage_group_to_mu(self, voltage, data):
|
def voltage_group_to_mu(self, voltage, data):
|
||||||
"""Convert SI volts to packed DAC channel group machine units.
|
"""Convert SI Volts to packed DAC channel group machine units.
|
||||||
|
|
||||||
:param voltage: List of SI volt voltages.
|
:param voltage: List of SI Volt voltages.
|
||||||
:param data: List of DAC channel data pairs to write to.
|
:param data: List of DAC channel data pairs to write to.
|
||||||
Half the length of `voltage`.
|
Half the length of `voltage`.
|
||||||
"""
|
"""
|
||||||
|
@ -186,7 +156,7 @@ class Fastino:
|
||||||
def update(self, update):
|
def update(self, update):
|
||||||
"""Schedule channels for update.
|
"""Schedule channels for update.
|
||||||
|
|
||||||
:param update: Bit mask of channels to update (32-bit).
|
:param update: Bit mask of channels to update (32 bit).
|
||||||
"""
|
"""
|
||||||
self.write(0x20, update)
|
self.write(0x20, update)
|
||||||
|
|
||||||
|
@ -194,7 +164,7 @@ class Fastino:
|
||||||
def set_hold(self, hold):
|
def set_hold(self, hold):
|
||||||
"""Set channels to manual update.
|
"""Set channels to manual update.
|
||||||
|
|
||||||
:param hold: Bit mask of channels to hold (32-bit).
|
:param hold: Bit mask of channels to hold (32 bit).
|
||||||
"""
|
"""
|
||||||
self.write(0x21, hold)
|
self.write(0x21, hold)
|
||||||
|
|
||||||
|
@ -215,9 +185,9 @@ class Fastino:
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def set_leds(self, leds):
|
def set_leds(self, leds):
|
||||||
"""Set the green user-defined LEDs.
|
"""Set the green user-defined LEDs
|
||||||
|
|
||||||
:param leds: LED status, 8-bit integer each bit corresponding to one
|
:param leds: LED status, 8 bit integer each bit corresponding to one
|
||||||
green LED.
|
green LED.
|
||||||
"""
|
"""
|
||||||
self.write(0x23, leds)
|
self.write(0x23, leds)
|
||||||
|
@ -246,16 +216,16 @@ class Fastino:
|
||||||
def stage_cic(self, rate) -> TInt32:
|
def stage_cic(self, rate) -> TInt32:
|
||||||
"""Compute and stage interpolator configuration.
|
"""Compute and stage interpolator configuration.
|
||||||
|
|
||||||
This method approximates the desired interpolation rate using a 10-bit
|
This method approximates the desired interpolation rate using a 10 bit
|
||||||
floating point representation (6-bit mantissa, 4-bit exponent) and
|
floating point representation (6 bit mantissa, 4 bit exponent) and
|
||||||
then determines an optimal interpolation gain compensation exponent
|
then determines an optimal interpolation gain compensation exponent
|
||||||
to avoid clipping. Gains for rates that are powers of two are accurately
|
to avoid clipping. Gains for rates that are powers of two are accurately
|
||||||
compensated. Other rates lead to overall less than unity gain (but more
|
compensated. Other rates lead to overall less than unity gain (but more
|
||||||
than 0.5 gain).
|
than 0.5 gain).
|
||||||
|
|
||||||
The overall gain including gain compensation is ``actual_rate ** order /
|
The overall gain including gain compensation is
|
||||||
2 ** ceil(log2(actual_rate ** order))``
|
`actual_rate**order/2**ceil(log2(actual_rate**order))`
|
||||||
where ``order = 3``.
|
where `order = 3`.
|
||||||
|
|
||||||
Returns the actual interpolation rate.
|
Returns the actual interpolation rate.
|
||||||
"""
|
"""
|
||||||
|
@ -282,19 +252,16 @@ class Fastino:
|
||||||
def apply_cic(self, channel_mask):
|
def apply_cic(self, channel_mask):
|
||||||
"""Apply the staged interpolator configuration on the specified channels.
|
"""Apply the staged interpolator configuration on the specified channels.
|
||||||
|
|
||||||
Each Fastino channel starting with gateware v0.2 includes a fourth order
|
Each Fastino channel includes a fourth order (cubic) CIC interpolator with
|
||||||
(cubic) CIC interpolator with variable rate change and variable output
|
variable rate change and variable output gain compensation (see
|
||||||
gain compensation (see :meth:`stage_cic`).
|
:meth:`stage_cic`).
|
||||||
|
|
||||||
Fastino gateware before v0.2 does not include the interpolators and the
|
|
||||||
methods affecting the CICs should not be used.
|
|
||||||
|
|
||||||
Channels using non-unity interpolation rate should have
|
Channels using non-unity interpolation rate should have
|
||||||
        continuous DAC updates enabled (see :meth:`set_continuous`) unless
|
||||||
their output is supposed to be constant.
|
their output is supposed to be constant.
|
||||||
|
|
||||||
This method resets and settles the affected interpolators. There will be
|
This method resets and settles the affected interpolators. There will be
|
||||||
no output updates for the next ``order = 3`` input samples.
|
no output updates for the next `order = 3` input samples.
|
||||||
Affected channels will only accept one input sample per input sample
|
Affected channels will only accept one input sample per input sample
|
||||||
period. This method synchronizes the input sample period to the current
|
period. This method synchronizes the input sample period to the current
|
||||||
frame on the affected channels.
|
frame on the affected channels.
|
||||||
|
|
|
@ -2,7 +2,7 @@ from numpy import int32, int64
|
||||||
|
|
||||||
from artiq.language.core import *
|
from artiq.language.core import *
|
||||||
from artiq.language.types import *
|
from artiq.language.types import *
|
||||||
from artiq.coredevice.rtio import rtio_output, rtio_input_timestamped_data
|
from artiq.coredevice.rtio import rtio_output, rtio_input_data
|
||||||
|
|
||||||
|
|
||||||
class OutOfSyncException(Exception):
|
class OutOfSyncException(Exception):
|
||||||
|
@ -11,11 +11,6 @@ class OutOfSyncException(Exception):
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
|
||||||
class GrabberTimeoutException(Exception):
|
|
||||||
"""Raised when a timeout occurs while attempting to read Grabber RTIO input events."""
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class Grabber:
|
class Grabber:
|
||||||
"""Driver for the Grabber camera interface."""
|
"""Driver for the Grabber camera interface."""
|
||||||
kernel_invariants = {"core", "channel_base", "sentinel"}
|
kernel_invariants = {"core", "channel_base", "sentinel"}
|
||||||
|
@ -30,10 +25,6 @@ class Grabber:
|
||||||
# ROI engine outputs for one video frame.
|
# ROI engine outputs for one video frame.
|
||||||
self.sentinel = int32(int64(2**count_width))
|
self.sentinel = int32(int64(2**count_width))
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_rtio_channels(channel_base, **kwargs):
|
|
||||||
return [(channel_base, "ROI coordinates"), (channel_base + 1, "ROI mask")]
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def setup_roi(self, n, x0, y0, x1, y1):
|
def setup_roi(self, n, x0, y0, x1, y1):
|
||||||
"""
|
"""
|
||||||
|
@ -87,10 +78,10 @@ class Grabber:
|
||||||
self.gate_roi(0)
|
self.gate_roi(0)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
    def input_mu(self, data, timeout_mu=-1):
    def input_mu(self, data):
        """
        Retrieves the accumulated values for one frame from the ROI engines.
        Blocks until values are available or timeout is reached.
        Blocks until values are available.

        The input list must be a list of integers of the same length as there
        are enabled ROI engines. This method replaces the elements of the

@ -100,26 +91,15 @@ class Grabber:
        If the number of elements in the list does not match the number of
        ROI engines that produced output, an exception will be raised during
        this call or the next.

        If the timeout is reached before data is available, the exception
        :exc:`GrabberTimeoutException` is raised.

        :param timeout_mu: Timestamp at which a timeout will occur. Set to -1
            (default) to disable timeout.
        """
        channel = self.channel_base + 1

        timestamp, sentinel = rtio_input_timestamped_data(timeout_mu, channel)
        sentinel = rtio_input_data(channel)
        if timestamp == -1:
            raise GrabberTimeoutException("Timeout before Grabber frame available")
        if sentinel != self.sentinel:
            raise OutOfSyncException

        for i in range(len(data)):
            timestamp, roi_output = rtio_input_timestamped_data(timeout_mu, channel)
            roi_output = rtio_input_data(channel)
            if roi_output == self.sentinel:
                raise OutOfSyncException
            if timestamp == -1:
                raise GrabberTimeoutException(
                    "Timeout retrieving ROIs (attempting to read more ROIs than enabled?)")
            data[i] = roi_output
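For context, a minimal kernel sketch of how the timeout variant above might be used. Device name, ROI box and timings are placeholders, not taken from this diff.

from artiq.experiment import *

class GrabberFrameExample(EnvExperiment):
    """Illustrative only: "grabber0", the ROI box and the timings are placeholders."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("grabber0")

    @kernel
    def run(self):
        self.core.reset()
        self.grabber0.setup_roi(0, 0, 0, 64, 64)    # ROI engine 0, 64x64 box
        self.grabber0.gate_roi(1)                   # enable ROI engine 0
        delay(50*ms)                                # placeholder exposure window
        self.grabber0.gate_roi(0)
        counts = [0]                                # one enabled ROI engine
        # give up after roughly 100 ms instead of blocking forever
        self.grabber0.input_mu(counts,
                               self.core.get_rtio_counter_mu() + 100000000)
        print(counts[0])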
|
||||||
|
|
|
@ -33,17 +33,12 @@ def i2c_read(busno: TInt32, ack: TBool) -> TInt32:
|
||||||
raise NotImplementedError("syscall not simulated")
|
raise NotImplementedError("syscall not simulated")
|
||||||
|
|
||||||
|
|
||||||
@syscall(flags={"nounwind", "nowrite"})
|
|
||||||
def i2c_switch_select(busno: TInt32, address: TInt32, mask: TInt32) -> TNone:
|
|
||||||
raise NotImplementedError("syscall not simulated")
|
|
||||||
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def i2c_poll(busno, busaddr):
|
def i2c_poll(busno, busaddr):
|
||||||
"""Poll I2C device at address.
|
"""Poll I2C device at address.
|
||||||
|
|
||||||
:param busno: I2C bus number
|
:param busno: I2C bus number
|
||||||
:param busaddr: 8-bit I2C device address (LSB=0)
|
:param busaddr: 8 bit I2C device address (LSB=0)
|
||||||
:returns: True if the poll was ACKed
|
:returns: True if the poll was ACKed
|
||||||
"""
|
"""
|
||||||
i2c_start(busno)
|
i2c_start(busno)
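The remainder of i2c_poll is elided by this hunk, but the docstring above is enough to sketch a typical use: probing a bus for responding devices. The helper below is hypothetical and follows the module's convention of module-level kernel functions and 8-bit addresses with LSB=0.

from artiq.language.core import kernel
from artiq.coredevice.i2c import i2c_poll

@kernel
def count_i2c_devices(busno):
    # probe the conventional 7-bit address range; addresses are passed in
    # the 8-bit write form (LSB=0) that i2c_poll expects
    found = 0
    for addr7 in range(0x08, 0x78):
        if i2c_poll(busno, addr7 << 1):
            found += 1
    return found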
|
||||||
|
@ -57,7 +52,7 @@ def i2c_write_byte(busno, busaddr, data, ack=True):
|
||||||
"""Write one byte to a device.
|
"""Write one byte to a device.
|
||||||
|
|
||||||
:param busno: I2C bus number
|
:param busno: I2C bus number
|
||||||
:param busaddr: 8-bit I2C device address (LSB=0)
|
:param busaddr: 8 bit I2C device address (LSB=0)
|
||||||
:param data: Data byte to be written
|
:param data: Data byte to be written
|
||||||
:param nack: Allow NACK
|
:param nack: Allow NACK
|
||||||
"""
|
"""
|
||||||
|
@ -76,7 +71,7 @@ def i2c_read_byte(busno, busaddr):
|
||||||
"""Read one byte from a device.
|
"""Read one byte from a device.
|
||||||
|
|
||||||
:param busno: I2C bus number
|
:param busno: I2C bus number
|
||||||
:param busaddr: 8-bit I2C device address (LSB=0)
|
:param busaddr: 8 bit I2C device address (LSB=0)
|
||||||
:returns: Byte read
|
:returns: Byte read
|
||||||
"""
|
"""
|
||||||
i2c_start(busno)
|
i2c_start(busno)
|
||||||
|
@ -95,10 +90,10 @@ def i2c_write_many(busno, busaddr, addr, data, ack_last=True):
|
||||||
"""Transfer multiple bytes to a device.
|
"""Transfer multiple bytes to a device.
|
||||||
|
|
||||||
:param busno: I2c bus number
|
:param busno: I2c bus number
|
||||||
:param busaddr: 8-bit I2C device address (LSB=0)
|
:param busaddr: 8 bit I2C device address (LSB=0)
|
||||||
:param addr: 8-bit data address
|
:param addr: 8 bit data address
|
||||||
:param data: Data bytes to be written
|
:param data: Data bytes to be written
|
||||||
:param ack_last: Expect I2C ACK of the last byte written. If ``False``,
|
:param ack_last: Expect I2C ACK of the last byte written. If `False`,
|
||||||
the last byte may be NACKed (e.g. EEPROM full page writes).
|
the last byte may be NACKed (e.g. EEPROM full page writes).
|
||||||
"""
|
"""
|
||||||
n = len(data)
|
n = len(data)
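As a concrete illustration of the ack_last parameter documented above, an EEPROM page write might be wrapped as follows. This is a hypothetical helper, not part of the driver; bus number, device address and data address are whatever the caller's hardware requires.

from artiq.language.core import kernel
from artiq.coredevice.i2c import i2c_write_many

@kernel
def write_eeprom_page(busno, busaddr, mem_addr, page):
    # Some EEPROMs NACK the last byte of a full page write while the
    # internal write cycle starts, hence ack_last=False.
    i2c_write_many(busno, busaddr, mem_addr, page, ack_last=False)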
|
||||||
|
@ -121,8 +116,8 @@ def i2c_read_many(busno, busaddr, addr, data):
|
||||||
"""Transfer multiple bytes from a device.
|
"""Transfer multiple bytes from a device.
|
||||||
|
|
||||||
:param busno: I2c bus number
|
:param busno: I2c bus number
|
||||||
:param busaddr: 8-bit I2C device address (LSB=0)
|
:param busaddr: 8 bit I2C device address (LSB=0)
|
||||||
:param addr: 8-bit data address
|
:param addr: 8 bit data address
|
||||||
:param data: List of integers to be filled with the data read.
|
:param data: List of integers to be filled with the data read.
|
||||||
        One entry per byte.
|
||||||
"""
|
"""
|
||||||
|
@ -142,12 +137,10 @@ def i2c_read_many(busno, busaddr, addr, data):
|
||||||
i2c_stop(busno)
|
i2c_stop(busno)
|
||||||
|
|
||||||
|
|
||||||
class I2CSwitch:
|
class PCA9548:
|
||||||
"""Driver for the I2C bus switch.
|
"""Driver for the PCA9548 I2C bus switch.
|
||||||
|
|
||||||
PCA954X (or other) type detection is done by the CPU during I2C init.
|
I2C transactions not real-time, and are performed by the CPU without
|
||||||
|
|
||||||
I2C transactions are not real-time, and are performed by the CPU without
|
|
||||||
involving RTIO.
|
involving RTIO.
|
||||||
|
|
||||||
On the KC705, this chip is used for selecting the I2C buses on the two FMC
|
On the KC705, this chip is used for selecting the I2C buses on the two FMC
|
||||||
|
@ -158,25 +151,31 @@ class I2CSwitch:
|
||||||
self.busno = busno
|
self.busno = busno
|
||||||
self.address = address
|
self.address = address
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def select(self, mask):
|
||||||
|
"""Enable/disable channels.
|
||||||
|
|
||||||
|
:param mask: Bit mask of enabled channels
|
||||||
|
"""
|
||||||
|
i2c_write_byte(self.busno, self.address, mask)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def set(self, channel):
|
def set(self, channel):
|
||||||
"""Enable one channel.
|
"""Enable one channel.
|
||||||
|
|
||||||
:param channel: channel number (0-7)
|
:param channel: channel number (0-7)
|
||||||
"""
|
"""
|
||||||
i2c_switch_select(self.busno, self.address >> 1, 1 << channel)
|
self.select(1 << channel)
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def unset(self):
|
def readback(self):
|
||||||
"""Disable output of the I2C switch.
|
return i2c_read_byte(self.busno, self.address)
|
||||||
"""
|
|
||||||
i2c_switch_select(self.busno, self.address >> 1, 0)
|
|
||||||
|
|
||||||
|
|
||||||
class TCA6424A:
|
class TCA6424A:
|
||||||
"""Driver for the TCA6424A I2C I/O expander.
|
"""Driver for the TCA6424A I2C I/O expander.
|
||||||
|
|
||||||
I2C transactions are not real-time, and are performed by the CPU without
|
I2C transactions not real-time, and are performed by the CPU without
|
||||||
involving RTIO.
|
involving RTIO.
|
||||||
|
|
||||||
On the NIST QC2 hardware, this chip is used for switching the directions
|
On the NIST QC2 hardware, this chip is used for switching the directions
|
||||||
|
@ -208,46 +207,3 @@ class TCA6424A:
|
||||||
|
|
||||||
self._write24(0x8c, 0) # set all directions to output
|
self._write24(0x8c, 0) # set all directions to output
|
||||||
self._write24(0x84, outputs_le) # set levels
|
self._write24(0x84, outputs_le) # set levels
|
||||||
|
|
||||||
class PCF8574A:
|
|
||||||
"""Driver for the PCF8574 I2C remote 8-bit I/O expander.
|
|
||||||
|
|
||||||
I2C transactions are not real-time, and are performed by the CPU without
|
|
||||||
involving RTIO.
|
|
||||||
"""
|
|
||||||
def __init__(self, dmgr, busno=0, address=0x7c, core_device="core"):
|
|
||||||
self.core = dmgr.get(core_device)
|
|
||||||
self.busno = busno
|
|
||||||
self.address = address
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def set(self, data):
|
|
||||||
"""Drive data on the quasi-bidirectional pins.
|
|
||||||
|
|
||||||
:param data: Pin data. High bits are weakly driven high
|
|
||||||
(and thus inputs), low bits are strongly driven low.
|
|
||||||
"""
|
|
||||||
i2c_start(self.busno)
|
|
||||||
try:
|
|
||||||
if not i2c_write(self.busno, self.address):
|
|
||||||
raise I2CError("PCF8574A failed to ack address")
|
|
||||||
if not i2c_write(self.busno, data):
|
|
||||||
raise I2CError("PCF8574A failed to ack data")
|
|
||||||
finally:
|
|
||||||
i2c_stop(self.busno)
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def get(self):
|
|
||||||
"""Retrieve quasi-bidirectional pin input data.
|
|
||||||
|
|
||||||
:return: Pin data
|
|
||||||
"""
|
|
||||||
i2c_start(self.busno)
|
|
||||||
ret = 0
|
|
||||||
try:
|
|
||||||
if not i2c_write(self.busno, self.address | 1):
|
|
||||||
raise I2CError("PCF8574A failed to ack address")
|
|
||||||
ret = i2c_read(self.busno, False)
|
|
||||||
finally:
|
|
||||||
i2c_stop(self.busno)
|
|
||||||
return ret
|
|
||||||
|
|
|
@ -32,7 +32,4 @@ def load(description_path):
    global validator
    validator.validate(result)

    if result["base"] != "use_drtio_role":
        result["drtio_role"] = result["base"]

    return result
|
@ -25,29 +25,25 @@ port_mapping = {

class KasliEEPROM:
    def __init__(self, dmgr, port, address=0xa0, busno=0,
    def __init__(self, dmgr, port, busno=0,
                 core_device="core", sw0_device="i2c_switch0", sw1_device="i2c_switch1"):
        self.core = dmgr.get(core_device)
        self.sw0 = dmgr.get(sw0_device)
        self.sw1 = dmgr.get(sw1_device)
        self.busno = busno
        self.port = port_mapping[port]
        self.address = address  # i2c 8 bit
        self.address = 0xa0  # i2c 8 bit

    @kernel
    def select(self):
        mask = 1 << self.port
        if self.port < 8:
            self.sw0.set(self.port)
            self.sw1.unset()
        else:
            self.sw0.unset()
            self.sw1.set(self.port - 8)

    @kernel
    def select(self):
        mask = 1 << self.port
        self.sw0.select(mask)
        self.sw1.select(mask >> 8)

    @kernel
    def deselect(self):
        self.sw0.unset()
        self.sw1.unset()

    @kernel
    def deselect(self):
        self.sw0.select(0)
        self.sw1.select(0)

    @kernel
    def write_i32(self, addr, value):
|
||||||
|
|
|
@ -1,7 +1,7 @@
|
||||||
"""RTIO driver for Mirny (4-channel GHz PLLs)
|
"""RTIO driver for Mirny (4 channel GHz PLLs)
|
||||||
"""
|
"""
|
||||||
|
|
||||||
from artiq.language.core import kernel, delay, portable
|
from artiq.language.core import kernel, delay
|
||||||
from artiq.language.units import us
|
from artiq.language.units import us
|
||||||
|
|
||||||
from numpy import int32
|
from numpy import int32
|
||||||
|
@ -31,6 +31,16 @@ WE = 1 << 24
|
||||||
# supported CPLD code version
|
# supported CPLD code version
|
||||||
PROTO_REV_MATCH = 0x0
|
PROTO_REV_MATCH = 0x0
|
||||||
|
|
||||||
|
# almazny-specific data
|
||||||
|
ALMAZNY_REG_BASE = 0x0C
|
||||||
|
ALMAZNY_OE_SHIFT = 12
|
||||||
|
|
||||||
|
# higher SPI write divider to match almazny shift register timing
|
||||||
|
# min SER time before SRCLK rise = 125ns
|
||||||
|
# -> div=32 gives 125ns for data before clock rise
|
||||||
|
# works at faster dividers too but could be less reliable
|
||||||
|
ALMAZNY_SPIT_WR = 32
|
||||||
|
|
||||||
|
|
||||||
class Mirny:
|
class Mirny:
|
||||||
"""
|
"""
|
||||||
|
@ -82,7 +92,7 @@ class Mirny:
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def read_reg(self, addr):
|
def read_reg(self, addr):
|
||||||
"""Read a register."""
|
"""Read a register"""
|
||||||
self.bus.set_config_mu(
|
self.bus.set_config_mu(
|
||||||
SPI_CONFIG | spi.SPI_INPUT | spi.SPI_END, 24, SPIT_RD, SPI_CS
|
SPI_CONFIG | spi.SPI_INPUT | spi.SPI_END, 24, SPIT_RD, SPI_CS
|
||||||
)
|
)
|
||||||
|
@ -91,7 +101,7 @@ class Mirny:
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def write_reg(self, addr, data):
|
def write_reg(self, addr, data):
|
||||||
"""Write a register."""
|
"""Write a register"""
|
||||||
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24, SPIT_WR, SPI_CS)
|
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24, SPIT_WR, SPI_CS)
|
||||||
self.bus.write((addr << 25) | WE | ((data & 0xFFFF) << 8))
|
self.bus.write((addr << 25) | WE | ((data & 0xFFFF) << 8))
|
||||||
|
|
||||||
|
@ -101,9 +111,9 @@ class Mirny:
|
||||||
Initialize and detect Mirny.
|
Initialize and detect Mirny.
|
||||||
|
|
||||||
Select the clock source based the board's hardware revision.
|
Select the clock source based the board's hardware revision.
|
||||||
Raise :exc:`ValueError` if the board's hardware revision is not supported.
|
Raise ValueError if the board's hardware revision is not supported.
|
||||||
|
|
||||||
:param blind: Verify presence and protocol compatibility. Raise :exc:`ValueError` on failure.
|
:param blind: Verify presence and protocol compatibility. Raise ValueError on failure.
|
||||||
"""
|
"""
|
||||||
reg0 = self.read_reg(0)
|
reg0 = self.read_reg(0)
|
||||||
self.hw_rev = reg0 & 0x3
|
self.hw_rev = reg0 & 0x3
|
||||||
|
@ -122,48 +132,124 @@ class Mirny:
|
||||||
self.write_reg(1, (self.clk_sel << 4))
|
self.write_reg(1, (self.clk_sel << 4))
|
||||||
delay(1000 * us)
|
delay(1000 * us)
|
||||||
|
|
||||||
@portable(flags={"fast-math"})
|
|
||||||
def att_to_mu(self, att):
|
|
||||||
"""Convert an attenuation setting in dB to machine units.
|
|
||||||
|
|
||||||
:param att: Attenuation setting in dB.
|
|
||||||
:return: Digital attenuation setting.
|
|
||||||
"""
|
|
||||||
code = int32(255) - int32(round(att * 8))
|
|
||||||
if code < 0 or code > 255:
|
|
||||||
raise ValueError("Invalid Mirny attenuation!")
|
|
||||||
return code
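For orientation, the encoding above gives 0.125 dB per LSB with 0 dB mapping to code 255. A worked check, standalone and not part of the driver:

def mirny_att_to_mu(att):
    code = 255 - round(att * 8)     # 8 steps per dB, inverted scale
    if code < 0 or code > 255:
        raise ValueError("Invalid Mirny attenuation!")
    return code

print(mirny_att_to_mu(0.0))      # 255 (no attenuation)
print(mirny_att_to_mu(10.0))     # 175
print(mirny_att_to_mu(31.875))   # 0 (maximum attenuation)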
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def set_att_mu(self, channel, att):
|
def set_att_mu(self, channel, att):
|
||||||
"""Set digital step attenuator in machine units.
|
"""Set digital step attenuator in machine units.
|
||||||
|
|
||||||
:param att: Attenuation setting, 8-bit digital.
|
:param att: Attenuation setting, 8 bit digital.
|
||||||
"""
|
"""
|
||||||
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 16, SPIT_WR, SPI_CS)
|
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 16, SPIT_WR, SPI_CS)
|
||||||
self.bus.write(((channel | 8) << 25) | (att << 16))
|
self.bus.write(((channel | 8) << 25) | (att << 16))
|
||||||
|
|
||||||
@kernel
|
|
||||||
def set_att(self, channel, att):
|
|
||||||
"""Set digital step attenuator in SI units.
|
|
||||||
|
|
||||||
This method will write the attenuator settings of the selected channel.
|
|
||||||
|
|
||||||
See also :meth:`Mirny.set_att_mu`.
|
|
||||||
|
|
||||||
:param channel: Attenuator channel (0-3).
|
|
||||||
:param att: Attenuation setting in dB. Higher value is more
|
|
||||||
attenuation. Minimum attenuation is 0*dB, maximum attenuation is
|
|
||||||
31.5*dB.
|
|
||||||
"""
|
|
||||||
self.set_att_mu(channel, self.att_to_mu(att))
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def write_ext(self, addr, length, data, ext_div=SPIT_WR):
|
def write_ext(self, addr, length, data, ext_div=SPIT_WR):
|
||||||
"""Perform SPI write to a prefixed address."""
|
"""Perform SPI write to a prefixed address"""
|
||||||
self.bus.set_config_mu(SPI_CONFIG, 8, SPIT_WR, SPI_CS)
|
self.bus.set_config_mu(SPI_CONFIG, 8, SPIT_WR, SPI_CS)
|
||||||
self.bus.write(addr << 25)
|
self.bus.write(addr << 25)
|
||||||
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, length, ext_div, SPI_CS)
|
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, length, ext_div, SPI_CS)
|
||||||
if length < 32:
|
if length < 32:
|
||||||
data <<= 32 - length
|
data <<= 32 - length
|
||||||
self.bus.write(data)
|
self.bus.write(data)
|
||||||
|
|
||||||
|
|
||||||
|
class Almazny:
|
||||||
|
"""
|
||||||
|
Almazny (High frequency mezzanine board for Mirny)
|
||||||
|
|
||||||
|
:param host_mirny - Mirny device Almazny is connected to
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, dmgr, host_mirny):
|
||||||
|
self.mirny_cpld = dmgr.get(host_mirny)
|
||||||
|
self.att_mu = [0x3f] * 4
|
||||||
|
self.channel_sw = [0] * 4
|
||||||
|
self.output_enable = False
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def init(self):
|
||||||
|
self.output_toggle(self.output_enable)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def att_to_mu(self, att):
|
||||||
|
"""
|
||||||
|
Convert an attenuator setting in dB to machine units.
|
||||||
|
|
||||||
|
:param att: attenuator setting in dB [0-31.5]
|
||||||
|
:return: attenuator setting in machine units
|
||||||
|
"""
|
||||||
|
mu = round(att * 2.0)
|
||||||
|
if mu > 63 or mu < 0:
|
||||||
|
raise ValueError("Invalid Almazny attenuator settings!")
|
||||||
|
return mu
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def mu_to_att(self, att_mu):
|
||||||
|
"""
|
||||||
|
Convert a digital attenuator setting to dB.
|
||||||
|
|
||||||
|
:param att_mu: attenuator setting in machine units
|
||||||
|
:return: attenuator setting in dB
|
||||||
|
"""
|
||||||
|
return att_mu / 2
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def set_att(self, channel, att, rf_switch=True):
|
||||||
|
"""
|
||||||
|
Sets attenuators on chosen shift register (channel).
|
||||||
|
:param channel - index of the register [0-3]
|
||||||
|
:param att_mu - attenuation setting in dBm [0-31.5]
|
||||||
|
:param rf_switch - rf switch (bool)
|
||||||
|
"""
|
||||||
|
self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def set_att_mu(self, channel, att_mu, rf_switch=True):
|
||||||
|
"""
|
||||||
|
Sets attenuators on chosen shift register (channel).
|
||||||
|
:param channel - index of the register [0-3]
|
||||||
|
:param att_mu - attenuation setting in machine units [0-63]
|
||||||
|
:param rf_switch - rf switch (bool)
|
||||||
|
"""
|
||||||
|
self.channel_sw[channel] = 1 if rf_switch else 0
|
||||||
|
self.att_mu[channel] = att_mu
|
||||||
|
self._update_register(channel)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def output_toggle(self, oe):
|
||||||
|
"""
|
||||||
|
Toggles output on all shift registers on or off.
|
||||||
|
:param oe - toggle output enable (bool)
|
||||||
|
"""
|
||||||
|
self.output_enable = oe
|
||||||
|
cfg_reg = self.mirny_cpld.read_reg(1)
|
||||||
|
en = 1 if self.output_enable else 0
|
||||||
|
delay(100 * us)
|
||||||
|
new_reg = (en << ALMAZNY_OE_SHIFT) | (cfg_reg & 0x3FF)
|
||||||
|
self.mirny_cpld.write_reg(1, new_reg)
|
||||||
|
delay(100 * us)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def _flip_mu_bits(self, mu):
|
||||||
|
# in this form MSB is actually 0.5dB attenuator
|
||||||
|
# unnatural for users, so we flip the six bits
|
||||||
|
return (((mu & 0x01) << 5)
|
||||||
|
| ((mu & 0x02) << 3)
|
||||||
|
| ((mu & 0x04) << 1)
|
||||||
|
| ((mu & 0x08) >> 1)
|
||||||
|
| ((mu & 0x10) >> 3)
|
||||||
|
| ((mu & 0x20) >> 5))
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def _update_register(self, ch):
|
||||||
|
self.mirny_cpld.write_ext(
|
||||||
|
ALMAZNY_REG_BASE + ch,
|
||||||
|
8,
|
||||||
|
self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
|
||||||
|
ALMAZNY_SPIT_WR
|
||||||
|
)
|
||||||
|
delay(100 * us)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def _update_all_registers(self):
|
||||||
|
for i in range(4):
|
||||||
|
self._update_register(i)
|
|
@ -16,31 +16,31 @@ SPI_CS_SR = 2
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_ctrl(channel=1, softspan=0b111, valid=1):
|
def adc_ctrl(channel=1, softspan=0b111, valid=1):
|
||||||
"""Build a LTC2335-16 control word."""
|
"""Build a LTC2335-16 control word"""
|
||||||
return (valid << 7) | (channel << 3) | softspan
|
return (valid << 7) | (channel << 3) | softspan
|
||||||
|
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_softspan(data):
|
def adc_softspan(data):
|
||||||
"""Return the softspan configuration index from a result packet."""
|
"""Return the softspan configuration index from a result packet"""
|
||||||
return data & 0x7
|
return data & 0x7
|
||||||
|
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_channel(data):
|
def adc_channel(data):
|
||||||
"""Return the channel index from a result packet."""
|
"""Return the channel index from a result packet"""
|
||||||
return (data >> 3) & 0x7
|
return (data >> 3) & 0x7
|
||||||
|
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_data(data):
|
def adc_data(data):
|
||||||
"""Return the ADC value from a result packet."""
|
"""Return the ADC value from a result packet"""
|
||||||
return (data >> 8) & 0xffff
|
return (data >> 8) & 0xffff
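Putting the three helpers above together, decoding one ADC result packet is just bit slicing. A quick standalone illustration with a made-up packet value:

def decode_packet(data):
    # field layout per the helpers above: [23:8] sample, [5:3] channel, [2:0] softspan
    return {
        "value": (data >> 8) & 0xffff,
        "channel": (data >> 3) & 0x7,
        "softspan": data & 0x7,
    }

print(decode_packet((0x1234 << 8) | (5 << 3) | 0b111))
# {'value': 4660, 'channel': 5, 'softspan': 7}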
|
||||||
|
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_value(data, v_ref=5.):
|
def adc_value(data, v_ref=5.):
|
||||||
"""Convert a ADC result packet to SI units (volts)."""
|
"""Convert a ADC result packet to SI units (Volt)"""
|
||||||
softspan = adc_softspan(data)
|
softspan = adc_softspan(data)
|
||||||
data = adc_data(data)
|
data = adc_data(data)
|
||||||
g = 625
|
g = 625
|
||||||
|
@ -107,7 +107,7 @@ class Novogorny:
|
||||||
def configure(self, data):
|
def configure(self, data):
|
||||||
"""Set up the ADC sequencer.
|
"""Set up the ADC sequencer.
|
||||||
|
|
||||||
:param data: List of 8-bit control words to write into the sequencer
|
:param data: List of 8 bit control words to write into the sequencer
|
||||||
table.
|
table.
|
||||||
"""
|
"""
|
||||||
if len(data) > 1:
|
if len(data) > 1:
|
||||||
|
@ -137,10 +137,12 @@ class Novogorny:
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def sample(self, next_ctrl=0):
|
def sample(self, next_ctrl=0):
|
||||||
"""Acquire a sample. See also :meth:`Novogorny.sample_mu`.
|
"""Acquire a sample
|
||||||
|
|
||||||
|
.. seealso:: :meth:`sample_mu`
|
||||||
|
|
||||||
:param next_ctrl: ADC control word for the next sample
|
:param next_ctrl: ADC control word for the next sample
|
||||||
:return: The ADC result packet (volts)
|
:return: The ADC result packet (Volt)
|
||||||
"""
|
"""
|
||||||
return adc_value(self.sample_mu(), self.v_ref)
|
return adc_value(self.sample_mu(), self.v_ref)
|
||||||
|
|
||||||
|
@ -149,7 +151,7 @@ class Novogorny:
|
||||||
"""Acquire a burst of samples.
|
"""Acquire a burst of samples.
|
||||||
|
|
||||||
If the burst is too long and the sample rate too high, there will be
|
If the burst is too long and the sample rate too high, there will be
|
||||||
:exc:RTIOOverflow exceptions.
|
RTIO input overflows.
|
||||||
|
|
||||||
High sample rates lead to gain errors since the impedance between the
|
High sample rates lead to gain errors since the impedance between the
|
||||||
instrumentation amplifier and the ADC is high.
|
instrumentation amplifier and the ADC is high.
|
||||||
|
|
|
@ -0,0 +1,47 @@
from artiq.experiment import kernel
from artiq.coredevice.i2c import (
    i2c_start, i2c_write, i2c_read, i2c_stop, I2CError)


class PCF8574A:
    """Driver for the PCF8574 I2C remote 8-bit I/O expander.

    I2C transactions not real-time, and are performed by the CPU without
    involving RTIO.
    """
    def __init__(self, dmgr, busno=0, address=0x7c, core_device="core"):
        self.core = dmgr.get(core_device)
        self.busno = busno
        self.address = address

    @kernel
    def set(self, data):
        """Drive data on the quasi-bidirectional pins.

        :param data: Pin data. High bits are weakly driven high
            (and thus inputs), low bits are strongly driven low.
        """
        i2c_start(self.busno)
        try:
            if not i2c_write(self.busno, self.address):
                raise I2CError("PCF8574A failed to ack address")
            if not i2c_write(self.busno, data):
                raise I2CError("PCF8574A failed to ack data")
        finally:
            i2c_stop(self.busno)

    @kernel
    def get(self):
        """Retrieve quasi-bidirectional pin input data.

        :return: Pin data
        """
        i2c_start(self.busno)
        ret = 0
        try:
            if not i2c_write(self.busno, self.address | 1):
                raise I2CError("PCF8574A failed to ack address")
            ret = i2c_read(self.busno, False)
        finally:
            i2c_stop(self.busno)
        return ret
File diff suppressed because it is too large
|
@ -25,7 +25,7 @@ def rtio_input_data(channel: TInt32) -> TInt32:
@syscall(flags={"nowrite"})
def rtio_input_timestamped_data(timeout_mu: TInt64,
                                channel: TInt32) -> TTuple([TInt64, TInt32]):
    """Wait for an input event up to ``timeout_mu`` on the given channel, and
    """Wait for an input event up to timeout_mu on the given channel, and
    return a tuple of timestamp and attached data, or (-1, 0) if the timeout is
    reached."""
    raise NotImplementedError("syscall not simulated")
@ -15,32 +15,30 @@ SPI_CS_PGIA = 1 # separate SPI bus, CS used as RCLK

@portable
def adc_mu_to_volt(data, gain=0, corrected_fs=True):
def adc_mu_to_volt(data, gain=0):
    """Convert ADC data in machine units to volts.
    """Convert ADC data in machine units to Volts.

    :param data: 16-bit signed ADC word
    :param data: 16 bit signed ADC word
    :param gain: PGIA gain setting (0: 1, ..., 3: 1000)
    :param corrected_fs: use corrected ADC FS reference.
    :return: Voltage in Volts
        Should be ``True`` for Sampler revisions after v2.1. ``False`` for v2.1 and earlier.
    :return: Voltage in volts
    """
    if gain == 0:
        volt_per_lsb = 20.48 / (1 << 16) if corrected_fs else 20. / (1 << 16)
        volt_per_lsb = 20./(1 << 16)
    elif gain == 1:
        volt_per_lsb = 2.048 / (1 << 16) if corrected_fs else 2. / (1 << 16)
        volt_per_lsb = 2./(1 << 16)
    elif gain == 2:
        volt_per_lsb = .2048 / (1 << 16) if corrected_fs else .2 / (1 << 16)
        volt_per_lsb = .2/(1 << 16)
    elif gain == 3:
        volt_per_lsb = 0.02048 / (1 << 16) if corrected_fs else .02 / (1 << 16)
        volt_per_lsb = .02/(1 << 16)
    else:
        raise ValueError("invalid gain")
    return data * volt_per_lsb
    return data*volt_per_lsb
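A spot check of the full-scale factors above. This standalone sketch mirrors the function rather than importing it, so it runs anywhere:

def sampler_mu_to_volt(data, gain=0, corrected_fs=True):
    fs = (20.48, 2.048, 0.2048, 0.02048)[gain] if corrected_fs \
        else (20., 2., 0.2, 0.02)[gain]
    return data * fs / (1 << 16)

print(sampler_mu_to_volt(0x4000, gain=0))                       # 5.12 V
print(sampler_mu_to_volt(0x4000, gain=0, corrected_fs=False))   # 5.0 V
print(sampler_mu_to_volt(-32768, gain=3))                       # -0.01024 V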
|
||||||
|
|
||||||
|
|
||||||
class Sampler:
|
class Sampler:
|
||||||
"""Sampler ADC.
|
"""Sampler ADC.
|
||||||
|
|
||||||
Controls the LTC2320-16 8-channel 16-bit ADC with SPI interface and
|
Controls the LTC2320-16 8 channel 16 bit ADC with SPI interface and
|
||||||
the switchable gain instrumentation amplifiers.
|
the switchable gain instrumentation amplifiers.
|
||||||
|
|
||||||
:param spi_adc_device: ADC SPI bus device name
|
:param spi_adc_device: ADC SPI bus device name
|
||||||
|
@ -50,13 +48,12 @@ class Sampler:
|
||||||
:param gains: Initial value for PGIA gains shift register
|
:param gains: Initial value for PGIA gains shift register
|
||||||
(default: 0x0000). Knowledge of this state is not transferred
|
(default: 0x0000). Knowledge of this state is not transferred
|
||||||
between experiments.
|
between experiments.
|
||||||
:param hw_rev: Sampler's hardware revision string (default 'v2.2')
|
|
||||||
:param core_device: Core device name
|
:param core_device: Core device name
|
||||||
"""
|
"""
|
||||||
kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div", "corrected_fs"}
|
kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div"}
|
||||||
|
|
||||||
def __init__(self, dmgr, spi_adc_device, spi_pgia_device, cnv_device,
|
def __init__(self, dmgr, spi_adc_device, spi_pgia_device, cnv_device,
|
||||||
div=8, gains=0x0000, hw_rev="v2.2", core_device="core"):
|
div=8, gains=0x0000, core_device="core"):
|
||||||
self.bus_adc = dmgr.get(spi_adc_device)
|
self.bus_adc = dmgr.get(spi_adc_device)
|
||||||
self.bus_adc.update_xfer_duration_mu(div, 32)
|
self.bus_adc.update_xfer_duration_mu(div, 32)
|
||||||
self.bus_pgia = dmgr.get(spi_pgia_device)
|
self.bus_pgia = dmgr.get(spi_pgia_device)
|
||||||
|
@ -65,11 +62,6 @@ class Sampler:
|
||||||
self.cnv = dmgr.get(cnv_device)
|
self.cnv = dmgr.get(cnv_device)
|
||||||
self.div = div
|
self.div = div
|
||||||
self.gains = gains
|
self.gains = gains
|
||||||
self.corrected_fs = self.use_corrected_fs(hw_rev)
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def use_corrected_fs(hw_rev):
|
|
||||||
return hw_rev != "v2.1"
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def init(self):
|
def init(self):
|
||||||
|
@ -119,12 +111,12 @@ class Sampler:
|
||||||
Perform a conversion and transfer the samples.
|
Perform a conversion and transfer the samples.
|
||||||
|
|
||||||
This assumes that the input FIFO of the ADC SPI RTIO channel is deep
|
This assumes that the input FIFO of the ADC SPI RTIO channel is deep
|
||||||
enough to buffer the samples (half the length of ``data`` deep).
|
enough to buffer the samples (half the length of `data` deep).
|
||||||
If it is not, there will be RTIO input overflows.
|
If it is not, there will be RTIO input overflows.
|
||||||
|
|
||||||
:param data: List of data samples to fill. Must have even length.
|
:param data: List of data samples to fill. Must have even length.
|
||||||
Samples are always read from the last channel (channel 7) down.
|
Samples are always read from the last channel (channel 7) down.
|
||||||
The ``data`` list will always be filled with the last item
|
The `data` list will always be filled with the last item
|
||||||
holding to the sample from channel 7.
|
holding to the sample from channel 7.
|
||||||
"""
|
"""
|
||||||
self.cnv.pulse(30*ns) # t_CNVH
|
self.cnv.pulse(30*ns) # t_CNVH
|
||||||
|
@ -142,7 +134,7 @@ class Sampler:
|
||||||
def sample(self, data):
|
def sample(self, data):
|
||||||
"""Acquire a set of samples.
|
"""Acquire a set of samples.
|
||||||
|
|
||||||
See also :meth:`Sampler.sample_mu`.
|
.. seealso:: :meth:`sample_mu`
|
||||||
|
|
||||||
:param data: List of floating point data samples to fill.
|
:param data: List of floating point data samples to fill.
|
||||||
"""
|
"""
|
||||||
|
@ -152,4 +144,4 @@ class Sampler:
|
||||||
for i in range(n):
|
for i in range(n):
|
||||||
channel = i + 8 - len(data)
|
channel = i + 8 - len(data)
|
||||||
gain = (self.gains >> (channel*2)) & 0b11
|
gain = (self.gains >> (channel*2)) & 0b11
|
||||||
data[i] = adc_mu_to_volt(adc_data[i], gain, self.corrected_fs)
|
data[i] = adc_mu_to_volt(adc_data[i], gain)
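The gain lookup in the loop above unpacks the 16-bit gains shift register, two bits per channel. An illustration with made-up register contents:

gains = 0b11_00_00_00_00_00_01_10   # ch0 -> 0b10, ch1 -> 0b01, ch7 -> 0b11
for channel in (0, 1, 7):
    print(channel, (gains >> (channel * 2)) & 0b11)
# 0 2   (PGIA gain x100)
# 1 1   (x10)
# 7 3   (x1000)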
|
||||||
|
|
|
@ -0,0 +1,372 @@
|
||||||
|
"""
|
||||||
|
Driver for the Smart Arbitrary Waveform Generator (SAWG) on RTIO.
|
||||||
|
|
||||||
|
The SAWG is an "improved DDS" built in gateware and interfacing to
|
||||||
|
high-speed DACs.
|
||||||
|
|
||||||
|
Output event replacement is supported except on the configuration channel.
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
from artiq.language.types import TInt32, TFloat
|
||||||
|
from numpy import int32, int64
|
||||||
|
from artiq.language.core import kernel
|
||||||
|
from artiq.coredevice.spline import Spline
|
||||||
|
from artiq.coredevice.rtio import rtio_output
|
||||||
|
|
||||||
|
|
||||||
|
# sawg.Config addresses
|
||||||
|
_SAWG_DIV = 0
|
||||||
|
_SAWG_CLR = 1
|
||||||
|
_SAWG_IQ_EN = 2
|
||||||
|
# _SAWF_PAD = 3 # reserved
|
||||||
|
_SAWG_OUT_MIN = 4
|
||||||
|
_SAWG_OUT_MAX = 5
|
||||||
|
_SAWG_DUC_MIN = 6
|
||||||
|
_SAWG_DUC_MAX = 7
|
||||||
|
|
||||||
|
|
||||||
|
class Config:
|
||||||
|
"""SAWG configuration.
|
||||||
|
|
||||||
|
Exposes the configurable quantities of a single SAWG channel.
|
||||||
|
|
||||||
|
Access to the configuration registers for a SAWG channel can not
|
||||||
|
be concurrent. There must be at least :attr:`_rtio_interval` machine
|
||||||
|
units of delay between accesses. Replacement is not supported and will be
|
||||||
|
lead to an ``RTIOCollision`` as this is likely a programming error.
|
||||||
|
All methods therefore advance the timeline by the duration of one
|
||||||
|
configuration register transfer.
|
||||||
|
|
||||||
|
:param channel: RTIO channel number of the channel.
|
||||||
|
:param core: Core device.
|
||||||
|
"""
|
||||||
|
kernel_invariants = {"channel", "core", "_out_scale", "_duc_scale",
|
||||||
|
"_rtio_interval"}
|
||||||
|
|
||||||
|
def __init__(self, channel, core, cordic_gain=1.):
|
||||||
|
self.channel = channel
|
||||||
|
self.core = core
|
||||||
|
# normalized DAC output
|
||||||
|
self._out_scale = (1 << 15) - 1.
|
||||||
|
# normalized DAC output including DUC cordic gain
|
||||||
|
self._duc_scale = self._out_scale/cordic_gain
|
||||||
|
# configuration channel access interval
|
||||||
|
self._rtio_interval = int64(3*self.core.ref_multiplier)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def set_div(self, div: TInt32, n: TInt32=0):
|
||||||
|
"""Set the spline evolution divider and current counter value.
|
||||||
|
|
||||||
|
The divider and the spline evolution are synchronized across all
|
||||||
|
spline channels within a SAWG channel. The DDS/DUC phase accumulators
|
||||||
|
always evolves at full speed.
|
||||||
|
|
||||||
|
.. note:: The spline evolution divider has not been tested extensively
|
||||||
|
and is currently considered a technological preview only.
|
||||||
|
|
||||||
|
:param div: Spline evolution divider, such that
|
||||||
|
``t_sawg_spline/t_rtio_coarse = div + 1``. Default: ``0``.
|
||||||
|
:param n: Current value of the counter. Default: ``0``.
|
||||||
|
"""
|
||||||
|
rtio_output((self.channel << 8) | _SAWG_DIV, div | (n << 16))
|
||||||
|
delay_mu(self._rtio_interval)
|
||||||
|
|
||||||
|
@kernel
|
||||||
|
def set_clr(self, clr0: TInt32, clr1: TInt32, clr2: TInt32):
|
||||||
|
"""Set the accumulator clear mode for the three phase accumulators.
|
||||||
|
|
||||||
|
When the ``clr`` bit for a given DDS/DUC phase accumulator is
|
||||||
|
set, that phase accumulator will be cleared with every phase offset
|
||||||
|
RTIO command and the output phase of the DDS/DUC will be
|
||||||
|
exactly the phase RTIO value ("absolute phase update mode").
|
||||||
|
|
||||||
|
.. math::
|
||||||
|
q^\prime(t) = p^\prime + (t - t^\prime) f^\prime
|
||||||
|
|
||||||
|
In turn, when the bit is cleared, the phase RTIO channels
|
||||||
|
determine a phase offset to the current (carrier-) value of the
|
||||||
|
DDS/DUC phase accumulator. This "relative phase update mode" is
|
||||||
|
sometimes also called “continuous phase mode”.
|
||||||
|
|
||||||
|
.. math::
|
||||||
|
q^\prime(t) = q(t^\prime) + (p^\prime - p) +
|
||||||
|
(t - t^\prime) f^\prime
|
||||||
|
|
||||||
|
Where:
|
||||||
|
|
||||||
|
* :math:`q`, :math:`q^\prime`: old/new phase accumulator
|
||||||
|
* :math:`p`, :math:`p^\prime`: old/new phase offset
|
||||||
|
* :math:`f^\prime`: new frequency
|
||||||
|
* :math:`t^\prime`: timestamp of setting new :math:`p`, :math:`f`
|
||||||
|
* :math:`t`: running time
|
||||||
|
|
||||||
|
:param clr0: Auto-clear phase accumulator of the ``phase0``/
|
||||||
|
``frequency0`` DUC. Default: ``True``
|
||||||
|
:param clr1: Auto-clear phase accumulator of the ``phase1``/
|
||||||
|
``frequency1`` DDS. Default: ``True``
|
||||||
|
:param clr2: Auto-clear phase accumulator of the ``phase2``/
|
||||||
|
            ``frequency2`` DDS. Default: ``True``
        """
        rtio_output((self.channel << 8) | _SAWG_CLR, clr0 |
                    (clr1 << 1) | (clr2 << 2))
        delay_mu(self._rtio_interval)

    @kernel
    def set_iq_en(self, i_enable: TInt32, q_enable: TInt32):
        """Enable I/Q data on this DAC channel.

        Every pair of SAWG channels forms a buddy pair.
        The ``iq_en`` configuration controls which DDS data is emitted to the
        DACs.

        Refer to the documentation of :class:`SAWG` for a mathematical
        description of ``i_enable`` and ``q_enable``.

        .. note:: Quadrature data from the buddy channel is currently
            a technological preview only. The data is ignored in the SAWG
            gateware and not added to the DAC output.
            This is equivalent to the ``q_enable`` switch always being ``0``.

        :param i_enable: Controls adding the in-phase
            DUC-DDS data of *this* SAWG channel to *this* DAC channel.
            Default: ``1``.
        :param q_enable: controls adding the quadrature
            DUC-DDS data of this SAWG's *buddy* channel to *this* DAC
            channel. Default: ``0``.
        """
        rtio_output((self.channel << 8) | _SAWG_IQ_EN, i_enable |
                    (q_enable << 1))
        delay_mu(self._rtio_interval)

    @kernel
    def set_duc_max_mu(self, limit: TInt32):
        """Set the digital up-converter (DUC) I and Q data summing junctions
        upper limit. In machine units.

        The default limits are chosen to reach maximum and minimum DAC output
        amplitude.

        For a description of the limiter functions in normalized units see:

        .. seealso:: :meth:`set_duc_max`
        """
        rtio_output((self.channel << 8) | _SAWG_DUC_MAX, limit)
        delay_mu(self._rtio_interval)

    @kernel
    def set_duc_min_mu(self, limit: TInt32):
        """.. seealso:: :meth:`set_duc_max_mu`"""
        rtio_output((self.channel << 8) | _SAWG_DUC_MIN, limit)
        delay_mu(self._rtio_interval)

    @kernel
    def set_out_max_mu(self, limit: TInt32):
        """.. seealso:: :meth:`set_duc_max_mu`"""
        rtio_output((self.channel << 8) | _SAWG_OUT_MAX, limit)
        delay_mu(self._rtio_interval)

    @kernel
    def set_out_min_mu(self, limit: TInt32):
        """.. seealso:: :meth:`set_duc_max_mu`"""
        rtio_output((self.channel << 8) | _SAWG_OUT_MIN, limit)
        delay_mu(self._rtio_interval)

    @kernel
    def set_duc_max(self, limit: TFloat):
        """Set the digital up-converter (DUC) I and Q data summing junctions
        upper limit.

        Each of the three summing junctions has a saturating adder with
        configurable upper and lower limits. The three summing junctions are:

        * At the in-phase input to the ``phase0``/``frequency0`` fast DUC,
          after the anti-aliasing FIR filter.
        * At the quadrature input to the ``phase0``/``frequency0``
          fast DUC, after the anti-aliasing FIR filter. The in-phase and
          quadrature data paths both use the same limits.
        * Before the DAC, where the following three data streams
          are added together:

          * the output of the ``offset`` spline,
          * (optionally, depending on ``i_enable``) the in-phase output
            of the ``phase0``/``frequency0`` fast DUC, and
          * (optionally, depending on ``q_enable``) the quadrature
            output of the ``phase0``/``frequency0`` fast DUC of the
            buddy channel.

        Refer to the documentation of :class:`SAWG` for a mathematical
        description of the summing junctions.

        :param limit: Limit value ``[-1, 1]``. The output of the limiter will
            never exceed this limit. The default limits are the full range
            ``[-1, 1]``.

        .. seealso::
            * :meth:`set_duc_max`: Upper limit of the in-phase and quadrature
              inputs to the DUC.
            * :meth:`set_duc_min`: Lower limit of the in-phase and quadrature
              inputs to the DUC.
            * :meth:`set_out_max`: Upper limit of the DAC output.
            * :meth:`set_out_min`: Lower limit of the DAC output.
        """
        self.set_duc_max_mu(int32(round(limit*self._duc_scale)))

    @kernel
    def set_duc_min(self, limit: TFloat):
        """.. seealso:: :meth:`set_duc_max`"""
        self.set_duc_min_mu(int32(round(limit*self._duc_scale)))

    @kernel
    def set_out_max(self, limit: TFloat):
        """.. seealso:: :meth:`set_duc_max`"""
        self.set_out_max_mu(int32(round(limit*self._out_scale)))

    @kernel
    def set_out_min(self, limit: TFloat):
        """.. seealso:: :meth:`set_duc_max`"""
        self.set_out_min_mu(int32(round(limit*self._out_scale)))


class SAWG:
    """Smart arbitrary waveform generator channel.
    The channel is parametrized as: ::

        oscillators = exp(2j*pi*(frequency0*t + phase0))*(
            amplitude1*exp(2j*pi*(frequency1*t + phase1)) +
            amplitude2*exp(2j*pi*(frequency2*t + phase2)))

        output = (offset +
            i_enable*Re(oscillators) +
            q_enable*Im(buddy_oscillators))

    This parametrization can be viewed as two complex (quadrature) oscillators
    (``frequency1``/``phase1`` and ``frequency2``/``phase2``) that are
    executing and sampling at the coarse RTIO frequency. They can represent
    frequencies within the first Nyquist zone from ``-f_rtio_coarse/2`` to
    ``f_rtio_coarse/2``.

    .. note:: The coarse RTIO frequency ``f_rtio_coarse`` is the inverse of
        ``ref_period*multiplier``. Both are arguments of the ``Core`` device,
        specified in the device database ``device_db.py``.

    The sum of their outputs is then interpolated by a factor of
    :attr:`parallelism` (2, 4, 8 depending on the bitstream) using a
    finite-impulse-response (FIR) anti-aliasing filter (more accurately
    a half-band filter).

    The filter is followed by a configurable saturating limiter.

    After the limiter, the data is shifted in frequency using a complex
    digital up-converter (DUC, ``frequency0``/``phase0``) running at
    :attr:`parallelism` times the coarse RTIO frequency. The first Nyquist
    zone of the DUC extends from ``-f_rtio_coarse*parallelism/2`` to
    ``f_rtio_coarse*parallelism/2``. Other Nyquist zones are usable depending
    on the interpolation/modulation options configured in the DAC.

    The real/in-phase data after digital up-conversion can be offset using
    another spline interpolator ``offset``.

    The ``i_enable``/``q_enable`` switches enable emission of quadrature
    signals for later analog quadrature mixing distinguishing upper and lower
    sidebands and thus doubling the bandwidth. They can also be used to emit
    four-tone signals.

    .. note:: Quadrature data from the buddy channel is currently
        ignored in the SAWG gateware and not added to the DAC output.
        This is equivalent to the ``q_enable`` switch always being ``0``.

    The configuration channel and the nine
    :class:`artiq.coredevice.spline.Spline` interpolators are accessible as
    attributes:

    * :attr:`config`: :class:`Config`
    * :attr:`offset`, :attr:`amplitude1`, :attr:`amplitude2`: in units
      of full scale
    * :attr:`phase0`, :attr:`phase1`, :attr:`phase2`: in units of turns
    * :attr:`frequency0`, :attr:`frequency1`, :attr:`frequency2`: in units
      of Hz

    .. note:: The latencies (pipeline depths) of the nine data channels (i.e.
        all except :attr:`config`) are matched. Equivalent channels (e.g.
        :attr:`phase1` and :attr:`phase2`) are exactly matched. Channels of
        different type or functionality (e.g. :attr:`offset` vs
        :attr:`amplitude1`, DDS vs DUC, :attr:`phase0` vs :attr:`phase1`) are
        only matched to within one coarse RTIO cycle.

    :param channel_base: RTIO channel number of the first channel (amplitude).
        The configuration channel and frequency/phase/amplitude channels are
        then assumed to be successive channels.
    :param parallelism: Number of output samples per coarse RTIO clock cycle.
    :param core_device: Name of the core device that this SAWG is on.
    """
    kernel_invariants = {"channel_base", "core", "parallelism",
                         "amplitude1", "frequency1", "phase1",
                         "amplitude2", "frequency2", "phase2",
                         "frequency0", "phase0", "offset"}

    def __init__(self, dmgr, channel_base, parallelism, core_device="core"):
        self.core = dmgr.get(core_device)
        self.channel_base = channel_base
        self.parallelism = parallelism
        width = 16
        time_width = 16
        cordic_gain = 1.646760258057163  # Cordic(width=16, guard=None).gain
        head_room = 1.001
        self.config = Config(channel_base, self.core, cordic_gain)
        self.offset = Spline(width, time_width, channel_base + 1,
                             self.core, 2.*head_room)
        self.amplitude1 = Spline(width, time_width, channel_base + 2,
                                 self.core, 2*head_room*cordic_gain**2)
        self.frequency1 = Spline(3*width, time_width, channel_base + 3,
                                 self.core, 1/self.core.coarse_ref_period)
        self.phase1 = Spline(width, time_width, channel_base + 4,
                             self.core, 1.)
        self.amplitude2 = Spline(width, time_width, channel_base + 5,
                                 self.core, 2*head_room*cordic_gain**2)
        self.frequency2 = Spline(3*width, time_width, channel_base + 6,
                                 self.core, 1/self.core.coarse_ref_period)
        self.phase2 = Spline(width, time_width, channel_base + 7,
                             self.core, 1.)
        self.frequency0 = Spline(2*width, time_width, channel_base + 8,
                                 self.core,
                                 parallelism/self.core.coarse_ref_period)
        self.phase0 = Spline(width, time_width, channel_base + 9,
                             self.core, 1.)

    @kernel
    def reset(self):
        """Re-establish initial conditions.

        This clears all spline interpolators, accumulators and configuration
        settings.

        This method advances the timeline by the time required to perform all
        7 writes to the configuration channel, plus 9 coarse RTIO cycles.
        """
        self.config.set_div(0, 0)
        self.config.set_clr(1, 1, 1)
        self.config.set_iq_en(1, 0)
        self.config.set_duc_min(-1.)
        self.config.set_duc_max(1.)
        self.config.set_out_min(-1.)
        self.config.set_out_max(1.)
        self.frequency0.set_mu(0)
        coarse_cycle = int64(self.core.ref_multiplier)
        delay_mu(coarse_cycle)
        self.frequency1.set_mu(0)
        delay_mu(coarse_cycle)
        self.frequency2.set_mu(0)
        delay_mu(coarse_cycle)
        self.phase0.set_mu(0)
        delay_mu(coarse_cycle)
        self.phase1.set_mu(0)
        delay_mu(coarse_cycle)
        self.phase2.set_mu(0)
        delay_mu(coarse_cycle)
        self.amplitude1.set_mu(0)
        delay_mu(coarse_cycle)
        self.amplitude2.set_mu(0)
        delay_mu(coarse_cycle)
        self.offset.set_mu(0)
        delay_mu(coarse_cycle)
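To make the ``SAWG`` parametrization above concrete, here is a host-side sketch that evaluates the ``oscillators``/``output`` expressions from the class docstring for a few coarse RTIO samples. It is illustrative only: the 125 MHz coarse RTIO frequency and all frequencies, phases and amplitudes are arbitrary assumptions, and ``buddy_oscillators`` is stubbed with this channel's own oscillator sum since only one channel is modelled here.

```python
import numpy as np

f_rtio_coarse = 125e6              # assumed coarse RTIO frequency
t = np.arange(16) / f_rtio_coarse  # a few coarse RTIO samples

frequency0, phase0 = 10e6, 0.0                    # DUC
frequency1, phase1, amplitude1 = 1e6, 0.25, 0.4   # first DDS
frequency2, phase2, amplitude2 = -2e6, 0.0, 0.1   # second DDS
offset, i_enable, q_enable = 0.05, 1, 0

oscillators = np.exp(2j*np.pi*(frequency0*t + phase0))*(
    amplitude1*np.exp(2j*np.pi*(frequency1*t + phase1)) +
    amplitude2*np.exp(2j*np.pi*(frequency2*t + phase2)))
buddy_oscillators = oscillators   # stand-in for the buddy channel
output = offset + i_enable*oscillators.real + q_enable*buddy_oscillators.imag
print(output[:4])
```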
@@ -0,0 +1,54 @@
from artiq.language.core import kernel, delay
from artiq.language.units import us


class ShiftReg:
    """Driver for shift registers/latch combos connected to TTLs"""
    kernel_invariants = {"dt", "n"}

    def __init__(self, dmgr, clk, ser, latch, n=32, dt=10*us, ser_in=None):
        self.core = dmgr.get("core")
        self.clk = dmgr.get(clk)
        self.ser = dmgr.get(ser)
        self.latch = dmgr.get(latch)
        self.n = n
        self.dt = dt
        if ser_in is not None:
            self.ser_in = dmgr.get(ser_in)

    @kernel
    def set(self, data):
        """Sets the values of the latch outputs. This does not
        advance the timeline and the waveform is generated before
        `now`."""
        delay(-2*(self.n + 1)*self.dt)
        for i in range(self.n):
            if (data >> (self.n-i-1)) & 1 == 0:
                self.ser.off()
            else:
                self.ser.on()
            self.clk.off()
            delay(self.dt)
            self.clk.on()
            delay(self.dt)
        self.clk.off()
        self.latch.on()
        delay(self.dt)
        self.latch.off()
        delay(self.dt)

    @kernel
    def get(self):
        delay(-2*(self.n + 1)*self.dt)
        data = 0
        for i in range(self.n):
            data <<= 1
            self.ser_in.sample_input()
            if self.ser_in.sample_get():
                data |= 1
            delay(self.dt)
            self.clk.on()
            delay(self.dt)
            self.clk.off()
            delay(self.dt)
        return data
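A minimal usage sketch for the ``ShiftReg`` driver added above. The ``shift_reg`` device name and the amount of slack are assumptions; an actual setup needs a matching device database entry pointing ``clk``/``ser``/``latch`` at real TTL outputs.

```python
from artiq.experiment import *


class SetLatches(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("shift_reg")  # hypothetical ShiftReg entry, n=32

    @kernel
    def run(self):
        self.core.reset()
        # ShiftReg.set() generates its waveform *before* now, so leave slack.
        delay(1*ms)
        self.shift_reg.set(0x12345678)  # shifted out MSB first, then latched
```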
@@ -1,613 +0,0 @@
from artiq.language.core import *
from artiq.language.types import *
from artiq.coredevice.rtio import rtio_output, rtio_input_data
from artiq.coredevice import spi2 as spi
from artiq.language.units import us


@portable
def shuttler_volt_to_mu(volt):
    """Return the equivalent DAC code. Valid input range is from -10 to
    10 - LSB.
    """
    return round((1 << 16) * (volt / 20.0)) & 0xffff
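The conversion above maps the ±10 V range onto a 16-bit two's-complement DAC code (LSB = 20 V / 65536 ≈ 305 µV). A host-side round trip, for illustration; ``shuttler_mu_to_volt`` is an ad-hoc helper written here, not part of the driver:

```python
def shuttler_volt_to_mu(volt):
    return round((1 << 16) * (volt / 20.0)) & 0xffff

def shuttler_mu_to_volt(code):
    # undo the two's-complement wrap of a 16 bit signed code
    if code >= 0x8000:
        code -= 0x10000
    return 20.0 * code / (1 << 16)

print(hex(shuttler_volt_to_mu(5.0)))   # 0x4000
print(hex(shuttler_volt_to_mu(-5.0)))  # 0xc000
print(shuttler_mu_to_volt(0x4000))     # 5.0
```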
class Config:
    """Shuttler configuration registers interface.

    The configuration registers control waveform phase auto-clear, pre-DAC
    gain and offset values for calibration with ADC on the Shuttler AFE card.

    To find the calibrated DAC code, the Shuttler Core first multiplies the
    output data with pre-DAC gain, then adds the offset.

    .. note::
        The DAC code is capped at 0x7fff and 0x8000.

    :param channel: RTIO channel number of this interface.
    :param core_device: Core device name.
    """
    kernel_invariants = {
        "core", "channel", "target_base", "target_read",
        "target_gain", "target_offset", "target_clr"
    }

    def __init__(self, dmgr, channel, core_device="core"):
        self.core = dmgr.get(core_device)
        self.channel = channel
        self.target_base = channel << 8
        self.target_read = 1 << 6
        self.target_gain = 0 * (1 << 4)
        self.target_offset = 1 * (1 << 4)
        self.target_clr = 1 * (1 << 5)

    @kernel
    def set_clr(self, clr):
        """Set/Unset waveform phase clear bits.

        Each bit corresponds to a Shuttler waveform generator core. Setting a
        clear bit forces the Shuttler Core to clear the phase accumulator on
        waveform trigger (See :class:`Trigger` for the trigger method).
        Otherwise, the phase accumulator increments from its original value.

        :param clr: Waveform phase clear bits. The MSB corresponds to Channel
            15, LSB corresponds to Channel 0.
        """
        rtio_output(self.target_base | self.target_clr, clr)

    @kernel
    def set_gain(self, channel, gain):
        """Set the 16-bits pre-DAC gain register of a Shuttler Core channel.

        The `gain` parameter represents the decimal portion of the gain
        factor. The MSB represents 0.5 and the sign bit. Hence, the valid
        total gain value (1 +/- 0.gain) ranges from 0.5 to 1.5 - LSB.

        :param channel: Shuttler Core channel to be configured.
        :param gain: Shuttler Core channel gain.
        """
        rtio_output(self.target_base | self.target_gain | channel, gain)

    @kernel
    def get_gain(self, channel):
        """Return the pre-DAC gain value of a Shuttler Core channel.

        :param channel: The Shuttler Core channel.
        :return: Pre-DAC gain value. See :meth:`set_gain`.
        """
        rtio_output(self.target_base | self.target_gain |
                    self.target_read | channel, 0)
        return rtio_input_data(self.channel)

    @kernel
    def set_offset(self, channel, offset):
        """Set the 16-bits pre-DAC offset register of a Shuttler Core channel.

        See also :meth:`shuttler_volt_to_mu`.

        :param channel: Shuttler Core channel to be configured.
        :param offset: Shuttler Core channel offset.
        """
        rtio_output(self.target_base | self.target_offset | channel, offset)

    @kernel
    def get_offset(self, channel):
        """Return the pre-DAC offset value of a Shuttler Core channel.

        :param channel: The Shuttler Core channel.
        :return: Pre-DAC offset value. See :meth:`set_offset`.
        """
        rtio_output(self.target_base | self.target_offset |
                    self.target_read | channel, 0)
        return rtio_input_data(self.channel)
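As a worked example of the pre-DAC gain encoding described in ``set_gain`` above: the 16-bit register holds the fractional part of the total gain (1 ± 0.gain), with the MSB carrying weight 0.5 and the sign. The helper below is one plausible host-side encoding inferred from that description, shown purely for illustration; it is not taken from the driver.

```python
# Illustrative only: encode a desired total pre-DAC gain G in [0.5, 1.5)
# as the signed 16 bit fraction described in Config.set_gain().
def gain_to_mu(total_gain):
    assert 0.5 <= total_gain < 1.5
    return round((total_gain - 1.0) * (1 << 16)) & 0xffff

print(hex(gain_to_mu(1.0)))    # 0x0    -> unity gain
print(hex(gain_to_mu(0.95)))   # 0xf333 -> 5 % attenuation
print(hex(gain_to_mu(1.25)))   # 0x4000 -> 25 % boost
```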
class DCBias:
    """Shuttler Core cubic DC-bias spline.

    A Shuttler channel can generate a waveform `w(t)` that is the sum of a
    cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
    spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where

    .. math::
        w(t) = a(t) + b(t) * cos(c(t))

    and `t` corresponds to time in seconds.
    This class controls the cubic spline `a(t)`, in which

    .. math::
        a(t) = p_0 + p_1t + \\frac{p_2t^2}{2} + \\frac{p_3t^3}{6}

    and `a(t)` is measured in volts.

    :param channel: RTIO channel number of this DC-bias spline interface.
    :param core_device: Core device name.
    """
    kernel_invariants = {"core", "channel", "target_o"}

    def __init__(self, dmgr, channel, core_device="core"):
        self.core = dmgr.get(core_device)
        self.channel = channel
        self.target_o = channel << 8

    @kernel
    def set_waveform(self, a0: TInt32, a1: TInt32, a2: TInt64, a3: TInt64):
        """Set the DC-bias spline waveform.

        Given `a(t)` as defined in :class:`DCBias`, the coefficients should be
        configured by the following formulae:

        .. math::
            T &= 8*10^{-9}

            a_0 &= p_0

            a_1 &= p_1T + \\frac{p_2T^2}{2} + \\frac{p_3T^3}{6}

            a_2 &= p_2T^2 + p_3T^3

            a_3 &= p_3T^3

        :math:`a_0`, :math:`a_1`, :math:`a_2` and :math:`a_3` are 16, 32, 48
        and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
        machine unit conversion.

        .. note::
            The waveform is not updated to the Shuttler Core until
            triggered. See :class:`Trigger` for the update triggering
            mechanism.

        :param a0: The :math:`a_0` coefficient in machine unit.
        :param a1: The :math:`a_1` coefficient in machine unit.
        :param a2: The :math:`a_2` coefficient in machine unit.
        :param a3: The :math:`a_3` coefficient in machine unit.
        """
        coef_words = [
            a0,
            a1,
            a1 >> 16,
            a2 & 0xFFFF,
            (a2 >> 16) & 0xFFFF,
            (a2 >> 32) & 0xFFFF,
            a3 & 0xFFFF,
            (a3 >> 16) & 0xFFFF,
            (a3 >> 32) & 0xFFFF,
        ]

        for i in range(len(coef_words)):
            rtio_output(self.target_o | i, coef_words[i])
            delay_mu(int64(self.core.ref_multiplier))
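The machine-unit coefficients in ``set_waveform`` above follow directly from the cubic `a(t)` and the T = 8 ns update period quoted in the docstring. The sketch below only evaluates those formulae in the volt domain; how the wider `a1`/`a2`/`a3` words distribute integer and fractional bits is gateware-specific and not restated here, and `a0` converts with ``shuttler_volt_to_mu``.

```python
# Sketch only: coefficient formulae from DCBias.set_waveform(), volt domain.
T = 8e-9

def dc_bias_update_coeffs(p0, p1, p2, p3):
    # a(t) = p0 + p1*t + p2*t^2/2 + p3*t^3/6
    a0 = p0
    a1 = p1*T + p2*T**2/2 + p3*T**3/6
    a2 = p2*T**2 + p3*T**3
    a3 = p3*T**3
    return a0, a1, a2, a3

# a 1 V/ms linear ramp starting at 0 V advances by 8 uV per T step:
print(dc_bias_update_coeffs(0.0, 1e3, 0.0, 0.0))  # (0.0, 8e-06, 0.0, 0.0)
```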
class DDS:
    """Shuttler Core DDS spline.

    A Shuttler channel can generate a waveform `w(t)` that is the sum of a
    cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
    spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where

    .. math::
        w(t) = a(t) + b(t) * cos(c(t))

    and `t` corresponds to time in seconds.
    This class controls the cubic spline `b(t)` and quadratic spline `c(t)`,
    in which

    .. math::
        b(t) &= g * (q_0 + q_1t + \\frac{q_2t^2}{2} + \\frac{q_3t^3}{6})

        c(t) &= r_0 + r_1t + \\frac{r_2t^2}{2}

    `b(t)` is in volts, `c(t)` is in number of turns. Note that `b(t)`
    contributes to a constant gain of :math:`g=1.64676`.

    :param channel: RTIO channel number of this DC-bias spline interface.
    :param core_device: Core device name.
    """
    kernel_invariants = {"core", "channel", "target_o"}

    def __init__(self, dmgr, channel, core_device="core"):
        self.core = dmgr.get(core_device)
        self.channel = channel
        self.target_o = channel << 8

    @kernel
    def set_waveform(self, b0: TInt32, b1: TInt32, b2: TInt64, b3: TInt64,
                     c0: TInt32, c1: TInt32, c2: TInt32):
        """Set the DDS spline waveform.

        Given `b(t)` and `c(t)` as defined in :class:`DDS`, the coefficients
        should be configured by the following formulae.

        .. math::
            T &= 8*10^{-9}

            b_0 &= q_0

            b_1 &= q_1T + \\frac{q_2T^2}{2} + \\frac{q_3T^3}{6}

            b_2 &= q_2T^2 + q_3T^3

            b_3 &= q_3T^3

            c_0 &= r_0

            c_1 &= r_1T + \\frac{r_2T^2}{2}

            c_2 &= r_2T^2

        :math:`b_0`, :math:`b_1`, :math:`b_2` and :math:`b_3` are 16, 32, 48
        and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
        machine unit conversion. :math:`c_0`, :math:`c_1` and :math:`c_2` are
        16, 32 and 32 bits in width respectively.

        Note: The waveform is not updated to the Shuttler Core until
        triggered. See :class:`Trigger` for the update triggering mechanism.

        :param b0: The :math:`b_0` coefficient in machine units.
        :param b1: The :math:`b_1` coefficient in machine units.
        :param b2: The :math:`b_2` coefficient in machine units.
        :param b3: The :math:`b_3` coefficient in machine units.
        :param c0: The :math:`c_0` coefficient in machine units.
        :param c1: The :math:`c_1` coefficient in machine units.
        :param c2: The :math:`c_2` coefficient in machine units.
        """
        coef_words = [
            b0,
            b1,
            b1 >> 16,
            b2 & 0xFFFF,
            (b2 >> 16) & 0xFFFF,
            (b2 >> 32) & 0xFFFF,
            b3 & 0xFFFF,
            (b3 >> 16) & 0xFFFF,
            (b3 >> 32) & 0xFFFF,
            c0,
            c1,
            c1 >> 16,
            c2,
            c2 >> 16,
        ]

        for i in range(len(coef_words)):
            rtio_output(self.target_o | i, coef_words[i])
            delay_mu(int64(self.core.ref_multiplier))
class Trigger:
    """Shuttler Core spline coefficients update trigger.

    :param channel: RTIO channel number of the trigger interface.
    :param core_device: Core device name.
    """
    kernel_invariants = {"core", "channel", "target_o"}

    def __init__(self, dmgr, channel, core_device="core"):
        self.core = dmgr.get(core_device)
        self.channel = channel
        self.target_o = channel << 8

    @kernel
    def trigger(self, trig_out):
        """Triggers coefficient update of (a) Shuttler Core channel(s).

        Each bit corresponds to a Shuttler waveform generator core. Setting
        ``trig_out`` bits commits the pending coefficient update (from
        ``set_waveform`` in :class:`DCBias` and :class:`DDS`) to the Shuttler Core
        synchronously.

        :param trig_out: Coefficient update trigger bits. The MSB corresponds
            to Channel 15, LSB corresponds to Channel 0.
        """
        rtio_output(self.target_o, trig_out)


RELAY_SPI_CONFIG = (0*spi.SPI_OFFLINE | 1*spi.SPI_END |
                    0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
                    0*spi.SPI_CLK_POLARITY | 0*spi.SPI_CLK_PHASE |
                    0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)

ADC_SPI_CONFIG = (0*spi.SPI_OFFLINE | 0*spi.SPI_END |
                  0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
                  1*spi.SPI_CLK_POLARITY | 1*spi.SPI_CLK_PHASE |
                  0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)

# SPI clock write and read dividers
# CS should assert at least 9.5 ns after clk pulse
SPIT_RELAY_WR = 4
# 25 ns high/low pulse hold (limiting for write)
SPIT_ADC_WR = 4
SPIT_ADC_RD = 16

# SPI CS line
CS_RELAY = 1 << 0
CS_LED = 1 << 1
CS_ADC = 1 << 0

# Referenced AD4115 registers
_AD4115_REG_STATUS = 0x00
_AD4115_REG_ADCMODE = 0x01
_AD4115_REG_DATA = 0x04
_AD4115_REG_ID = 0x07
_AD4115_REG_CH0 = 0x10
_AD4115_REG_SETUPCON0 = 0x20
class Relay:
    """Shuttler AFE relay switches.

    This class controls the AFE relay switches and the LEDs. Switch the relay
    on to enable AFE output; off to disable the output. The LEDs indicate the
    relay status.

    .. note::
        The relay does not disable ADC measurements. Voltage of any channels
        can still be read by the ADC even after switching off the relays.

    :param spi_device: SPI bus device name.
    :param core_device: Core device name.
    """
    kernel_invariant = {"core", "bus"}

    def __init__(self, dmgr, spi_device, core_device="core"):
        self.core = dmgr.get(core_device)
        self.bus = dmgr.get(spi_device)

    @kernel
    def init(self):
        """Initialize SPI device.

        Configures the SPI bus to 16 bits, write-only, simultaneous relay
        switches and LED control.
        """
        self.bus.set_config_mu(
            RELAY_SPI_CONFIG, 16, SPIT_RELAY_WR, CS_RELAY | CS_LED)

    @kernel
    def enable(self, en: TInt32):
        """Enable/disable relay switches of corresponding channels.

        Each bit corresponds to the relay switch of a channel. Asserting a bit
        turns on the corresponding relay switch; deasserting the same bit
        turns off the switch instead.

        :param en: Switch enable bits. The MSB corresponds to Channel 15, LSB
            corresponds to Channel 0.
        """
        self.bus.write(en << 16)
class ADC:
    """Shuttler AFE ADC (AD4115) driver.

    :param spi_device: SPI bus device name.
    :param core_device: Core device name.
    """
    kernel_invariant = {"core", "bus"}

    def __init__(self, dmgr, spi_device, core_device="core"):
        self.core = dmgr.get(core_device)
        self.bus = dmgr.get(spi_device)

    @kernel
    def read_id(self) -> TInt32:
        """Read the product ID of the ADC.

        The expected return value is 0x38DX, the 4 LSbs are don't cares.

        :return: The read-back product ID.
        """
        return self.read16(_AD4115_REG_ID)

    @kernel
    def reset(self):
        """AD4115 reset procedure.

        Performs a write operation of 96 serial clock cycles with DIN
        held at high. This resets the entire device, including the register
        contents.

        .. note::
            The datasheet only requires 64 cycles, but reasserting ``CS_n``
            right after the transfer appears to interrupt the start-up
            sequence.
        """
        self.bus.set_config_mu(ADC_SPI_CONFIG, 32, SPIT_ADC_WR, CS_ADC)
        self.bus.write(0xffffffff)
        self.bus.write(0xffffffff)
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
        self.bus.write(0xffffffff)

    @kernel
    def read8(self, addr: TInt32) -> TInt32:
        """Read from 8-bit register.

        :param addr: Register address.
        :return: Read-back register content.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
            16, SPIT_ADC_RD, CS_ADC)
        self.bus.write((addr | 0x40) << 24)
        return self.bus.read() & 0xff

    @kernel
    def read16(self, addr: TInt32) -> TInt32:
        """Read from 16-bit register.

        :param addr: Register address.
        :return: Read-back register content.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
            24, SPIT_ADC_RD, CS_ADC)
        self.bus.write((addr | 0x40) << 24)
        return self.bus.read() & 0xffff

    @kernel
    def read24(self, addr: TInt32) -> TInt32:
        """Read from 24-bit register.

        :param addr: Register address.
        :return: Read-back register content.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
            32, SPIT_ADC_RD, CS_ADC)
        self.bus.write((addr | 0x40) << 24)
        return self.bus.read() & 0xffffff

    @kernel
    def write8(self, addr: TInt32, data: TInt32):
        """Write to 8-bit register.

        :param addr: Register address.
        :param data: Data to be written.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END, 16, SPIT_ADC_WR, CS_ADC)
        self.bus.write(addr << 24 | (data & 0xff) << 16)

    @kernel
    def write16(self, addr: TInt32, data: TInt32):
        """Write to 16-bit register.

        :param addr: Register address.
        :param data: Data to be written.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END, 24, SPIT_ADC_WR, CS_ADC)
        self.bus.write(addr << 24 | (data & 0xffff) << 8)

    @kernel
    def write24(self, addr: TInt32, data: TInt32):
        """Write to 24-bit register.

        :param addr: Register address.
        :param data: Data to be written.
        """
        self.bus.set_config_mu(
            ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
        self.bus.write(addr << 24 | (data & 0xffffff))

    @kernel
    def read_ch(self, channel: TInt32) -> TFloat:
        """Sample a Shuttler channel on the AFE.

        Performs a single conversion using profile 0 and setup 0 on the
        selected channel. The sample is then recovered and converted to volts.

        :param channel: Shuttler channel to be sampled.
        :return: Voltage sample in volts.
        """
        # Always configure Profile 0 for single conversion
        self.write16(_AD4115_REG_CH0, 0x8000 | ((channel * 2 + 1) << 4))
        self.write16(_AD4115_REG_SETUPCON0, 0x1300)
        self.single_conversion()

        delay(100*us)
        adc_code = self.read24(_AD4115_REG_DATA)
        return ((adc_code / (1 << 23)) - 1) * 2.5 / 0.1

    @kernel
    def single_conversion(self):
        """Place the ADC in single conversion mode.

        The ADC returns to standby mode after the conversion is complete.
        """
        self.write16(_AD4115_REG_ADCMODE, 0x8010)

    @kernel
    def standby(self):
        """Place the ADC in standby mode and disable power down the clock.

        The ADC can be returned to single conversion mode by calling
        :meth:`single_conversion`.
        """
        # Selecting internal XO (0b00) also disables clock during standby
        self.write16(_AD4115_REG_ADCMODE, 0x8020)

    @kernel
    def power_down(self):
        """Place the ADC in power-down mode.

        The ADC must be reset before returning to other modes.

        .. note::
            The AD4115 datasheet suggests placing the ADC in standby mode
            before power-down. This is to prevent accidental entry into the
            power-down mode. See also :meth:`standby` and :meth:`power_up`.
        """
        self.write16(_AD4115_REG_ADCMODE, 0x8030)

    @kernel
    def power_up(self):
        """Exit the ADC power-down mode.

        The ADC should be in power-down mode before calling this method.

        See also :meth:`power_down`.
        """
        self.reset()
        # Although the datasheet claims 500 us reset wait time, only waiting
        # for ~500 us can result in DOUT pin stuck in high
        delay(2500*us)

    @kernel
    def calibrate(self, volts, trigger, config, samples=[-5.0, 0.0, 5.0]):
        """Calibrate the Shuttler waveform generator using the ADC on the AFE.

        Finds the average slope rate and average offset by samples, and
        compensates by writing the pre-DAC gain and offset registers in the
        configuration registers.

        .. note::
            If the pre-calibration slope rate is less than 1, the calibration
            procedure will introduce a pre-DAC gain compensation. However,
            this may saturate the pre-DAC voltage code (see :class:`Config`
            notes). Shuttler cannot cover the entire +/- 10 V range in this
            case. See also :meth:`Config.set_gain` and
            :meth:`Config.set_offset`.

        :param volts: A list of all 16 cubic DC-bias splines.
            (See :class:`DCBias`)
        :param trigger: The Shuttler spline coefficient update trigger.
        :param config: The Shuttler Core configuration registers.
        :param samples: A list of sample voltages for calibration. There must
            be at least 2 samples to perform slope rate calculation.
        """
        assert len(volts) == 16
        assert len(samples) > 1

        measurements = [0.0] * len(samples)

        for ch in range(16):
            # Find the average slope rate and offset
            for i in range(len(samples)):
                self.core.break_realtime()
                volts[ch].set_waveform(
                    shuttler_volt_to_mu(samples[i]), 0, 0, 0)
                trigger.trigger(1 << ch)
                measurements[i] = self.read_ch(ch)

            # Find the average output slope
            slope_sum = 0.0
            for i in range(len(samples) - 1):
                slope_sum += (measurements[i+1] - measurements[i])/(samples[i+1] - samples[i])
            slope_avg = slope_sum / (len(samples) - 1)

            gain_code = int32(1 / slope_avg * (2 ** 16)) & 0xffff

            # Scale the measurements by 1/slope, find average offset
            offset_sum = 0.0
            for i in range(len(samples)):
                offset_sum += (measurements[i] / slope_avg) - samples[i]
            offset_avg = offset_sum / len(samples)

            offset_code = shuttler_volt_to_mu(-offset_avg)

            self.core.break_realtime()
            config.set_gain(ch, gain_code)

            delay_mu(int64(self.core.ref_multiplier))
            config.set_offset(ch, offset_code)
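The gain/offset fit inside ``calibrate`` above reduces to simple averaging. The host-side sketch below rehearses just that arithmetic on invented ADC readings for one channel, to show how the pre-DAC gain and offset codes come out; the measurement numbers are made up and the helpers mirror, rather than replace, the kernel code.

```python
from numpy import int32

def shuttler_volt_to_mu(volt):
    return round((1 << 16) * (volt / 20.0)) & 0xffff

samples = [-5.0, 0.0, 5.0]
measurements = [-4.94, 0.03, 5.01]     # made-up AFE readings

slope_avg = sum((measurements[i+1] - measurements[i]) /
                (samples[i+1] - samples[i])
                for i in range(len(samples) - 1)) / (len(samples) - 1)
gain_code = int32(1 / slope_avg * (2 ** 16)) & 0xffff

offset_avg = sum(measurements[i] / slope_avg - samples[i]
                 for i in range(len(samples))) / len(samples)
offset_code = shuttler_volt_to_mu(-offset_avg)

print(slope_avg, hex(gain_code), offset_avg, hex(offset_code))
```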
@@ -4,7 +4,7 @@ Driver for generic SPI on RTIO.
 This ARTIQ coredevice driver corresponds to the "new" MiSoC SPI core (v2).
 
 Output event replacement is not supported and issuing commands at the same
-time results in collision errors.
+time is an error.
 """
 
 from artiq.language.core import syscall, kernel, portable, delay_mu
@@ -51,7 +51,7 @@ class SPIMaster:
       event (``SPI_INPUT`` set), then :meth:`read` the ``data``.
     * If ``SPI_END`` was not set, repeat the transfer sequence.
 
-    A *transaction* consists of one or more *transfers*. The chip select
+    A **transaction** consists of one or more **transfers**. The chip select
     pattern is asserted for the entire length of the transaction. All but the
     last transfer are submitted with ``SPI_END`` cleared in the configuration
     register.
@@ -72,10 +72,6 @@ class SPIMaster:
         self.channel = channel
         self.update_xfer_duration_mu(div, length)
 
-    @staticmethod
-    def get_rtio_channels(channel, **kwargs):
-        return [(channel, None)]
-
     @portable
     def frequency_to_div(self, f):
         """Convert a SPI clock frequency to the closest SPI clock divider."""
@@ -138,10 +134,10 @@ class SPIMaster:
         * :const:`SPI_LSB_FIRST`: LSB is the first bit on the wire (reset=0)
         * :const:`SPI_HALF_DUPLEX`: 3-wire SPI, in/out on ``mosi`` (reset=0)
 
-        :param flags: A bit map of :const:`SPI_*` flags.
+        :param flags: A bit map of `SPI_*` flags.
         :param length: Number of bits to write during the next transfer.
             (reset=1)
-        :param freq: Desired SPI clock frequency. (reset= ``f_rtio/2``)
+        :param freq: Desired SPI clock frequency. (reset=f_rtio/2)
         :param cs: Bit pattern of chip selects to assert.
             Or number of the chip select to assert if ``cs`` is decoded
             downstream. (reset=0)
@@ -152,15 +148,16 @@ class SPIMaster:
     def set_config_mu(self, flags, length, div, cs):
         """Set the ``config`` register (in SPI bus machine units).
 
-        See also :meth:`set_config`.
+        .. seealso:: :meth:`set_config`
 
         :param flags: A bit map of `SPI_*` flags.
         :param length: Number of bits to write during the next transfer.
            (reset=1)
         :param div: Counter load value to divide the RTIO
-            clock by to generate the SPI clock; ``f_rtio_clk/f_spi == div``.
-            If ``div`` is odd, the setup phase of the SPI clock is one
-            coarse RTIO clock cycle longer than the hold phase. (minimum=2, reset=2)
+            clock by to generate the SPI clock. (minimum=2, reset=2)
+            ``f_rtio_clk/f_spi == div``. If ``div`` is odd,
+            the setup phase of the SPI clock is one coarse RTIO clock cycle
+            longer than the hold phase.
         :param cs: Bit pattern of chip selects to assert.
             Or number of the chip select to assert if ``cs`` is decoded
             downstream. (reset=0)
@@ -187,7 +184,7 @@ class SPIMaster:
         experiments and are known.
 
         This method is portable and can also be called from e.g.
-        ``__init__``.
+        :meth:`__init__`.
 
         .. warning:: If this method is called while recording a DMA
             sequence, the playback of the sequence will not update the
@@ -207,7 +204,7 @@ class SPIMaster:
         * The ``data`` register and the shift register are 32 bits wide.
         * Data writes take one ``ref_period`` cycle.
         * A transaction consisting of a single transfer (``SPI_END``) takes
-          :attr:`xfer_duration_mu` `` = (n + 1) * div`` cycles RTIO time, where
+          :attr:`xfer_duration_mu` ``=(n + 1)*div`` cycles RTIO time where
           ``n`` is the number of bits and ``div`` is the SPI clock divider.
         * Transfers in a multi-transfer transaction take up to one SPI clock
           cycle less time depending on multiple parameters. Advanced users may
@@ -276,8 +273,9 @@ class NRTSPIMaster:
     def set_config_mu(self, flags=0, length=8, div=6, cs=1):
         """Set the ``config`` register.
 
-        In many cases, the SPI configuration is already set by the firmware
-        and you do not need to call this method.
+        Note that the non-realtime SPI cores are usually clocked by the system
+        clock and not the RTIO clock. In many cases, the SPI configuration is
+        already set by the firmware and you do not need to call this method.
         """
         spi_set_config(self.busno, flags, length, div, cs)
 
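The ``div`` and transfer-duration relations touched by the hunks above can be checked with a little arithmetic. The sketch below assumes a 125 MHz RTIO clock purely for illustration and mirrors the documented relations ``f_rtio_clk/f_spi == div`` and ``xfer_duration_mu = (n + 1)*div`` coarse cycles; it is a stand-in, not the actual ``SPIMaster.frequency_to_div`` implementation.

```python
# Back-of-the-envelope check of the SPI timing relations quoted above.
f_rtio_clk = 125e6                       # assumed coarse RTIO clock

def frequency_to_div(f_spi):
    # closest divider, clamped to the documented minimum of 2
    return max(2, int(round(f_rtio_clk / f_spi)))

div = frequency_to_div(5e6)              # 5 MHz SPI clock -> div == 25
n = 24                                   # bits in the next transfer
xfer_duration_cycles = (n + 1) * div     # coarse RTIO cycles per transfer
print(div, xfer_duration_cycles, xfer_duration_cycles / f_rtio_clk)  # 5 us
```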
@@ -0,0 +1,228 @@
from numpy import int32, int64
from artiq.language.core import kernel, portable, delay
from artiq.coredevice.rtio import rtio_output, rtio_output_wide
from artiq.language.types import TInt32, TInt64, TFloat


class Spline:
    r"""Spline interpolating RTIO channel.

    One knot of a polynomial basis spline (B-spline) :math:`u(t)`
    is defined by the coefficients :math:`u_n` up to order :math:`n = k`.
    If the coefficients are evaluated starting at time :math:`t_0`,
    the output :math:`u(t)` for :math:`t > t_0, t_0` is:

    .. math::
        u(t) &= \sum_{n=0}^k \frac{u_n}{n!} (t - t_0)^n \\
             &= u_0 + u_1 (t - t_0) + \frac{u_2}{2} (t - t_0)^2 + \dots

    This class contains multiple methods to convert spline knot data from SI
    to machine units and multiple methods that set the current spline
    coefficient data. None of these advance the timeline. The :meth:`smooth`
    method is the only method that advances the timeline.

    :param width: Width in bits of the quantity that this spline controls
    :param time_width: Width in bits of the time counter of this spline
    :param channel: RTIO channel number
    :param core_device: Core device that this spline is attached to
    :param scale: Scale for conversion between machine units and physical
        units; to be given as the "full scale physical value".
    """

    kernel_invariants = {"channel", "core", "scale", "width",
                         "time_width", "time_scale"}

    def __init__(self, width, time_width, channel, core_device, scale=1.):
        self.core = core_device
        self.channel = channel
        self.width = width
        self.scale = float((int64(1) << width) / scale)
        self.time_width = time_width
        self.time_scale = float((1 << time_width) *
                                core_device.coarse_ref_period)

    @portable(flags={"fast-math"})
    def to_mu(self, value: TFloat) -> TInt32:
        """Convert floating point ``value`` from physical units to 32 bit
        integer machine units."""
        return int32(round(value*self.scale))

    @portable(flags={"fast-math"})
    def from_mu(self, value: TInt32) -> TFloat:
        """Convert 32 bit integer ``value`` from machine units to floating
        point physical units."""
        return value/self.scale

    @portable(flags={"fast-math"})
    def to_mu64(self, value: TFloat) -> TInt64:
        """Convert floating point ``value`` from physical units to 64 bit
        integer machine units."""
        return int64(round(value*self.scale))

    @kernel
    def set_mu(self, value: TInt32):
        """Set spline value (machine units).

        :param value: Spline value in integer machine units.
        """
        rtio_output(self.channel << 8, value)

    @kernel(flags={"fast-math"})
    def set(self, value: TFloat):
        """Set spline value.

        :param value: Spline value relative to full-scale.
        """
        if self.width > 32:
            l = [int32(0)] * 2
            self.pack_coeff_mu([self.to_mu64(value)], l)
            rtio_output_wide(self.channel << 8, l)
        else:
            rtio_output(self.channel << 8, self.to_mu(value))

    @kernel
    def set_coeff_mu(self, value):  # TList(TInt32)
        """Set spline raw values.

        :param value: Spline packed raw values.
        """
        rtio_output_wide(self.channel << 8, value)

    @portable(flags={"fast-math"})
    def pack_coeff_mu(self, coeff, packed):  # TList(TInt64), TList(TInt32)
        """Pack coefficients into RTIO data

        :param coeff: TList(TInt64) list of machine units spline coefficients.
            Lowest (zeroth) order first. The coefficient list is zero-extended
            by the RTIO gateware.
        :param packed: TList(TInt32) list for packed RTIO data. Must be
            pre-allocated. Length in bits is
            ``n*width + (n - 1)*n//2*time_width``
        """
        pos = 0
        for i in range(len(coeff)):
            wi = self.width + i*self.time_width
            ci = coeff[i]
            while wi != 0:
                j = pos//32
                used = pos - 32*j
                avail = 32 - used
                if avail > wi:
                    avail = wi
                cij = int32(ci)
                if avail != 32:
                    cij &= (1 << avail) - 1
                packed[j] |= cij << used
                ci >>= avail
                wi -= avail
                pos += avail

    @portable(flags={"fast-math"})
    def coeff_to_mu(self, coeff, coeff64):  # TList(TFloat), TList(TInt64)
        """Convert a floating point list of coefficients into a 64 bit
        integer (preallocated).

        :param coeff: TList(TFloat) list of coefficients in physical units.
        :param coeff64: TList(TInt64) preallocated list of coefficients in
            machine units.
        """
        for i in range(len(coeff)):
            vi = coeff[i] * self.scale
            for j in range(i):
                vi *= self.time_scale
            ci = int64(round(vi))
            coeff64[i] = ci
            # artiq.wavesynth.coefficients.discrete_compensate:
            if i == 2:
                coeff64[1] += ci >> self.time_width + 1
            elif i == 3:
                coeff64[2] += ci >> self.time_width
                coeff64[1] += ci // 6 >> 2*self.time_width

    def coeff_as_packed_mu(self, coeff64):
        """Pack 64 bit integer machine units coefficients into 32 bit integer
        RTIO data list.

        This is a host-only method that can be used to generate packed
        spline coefficient data to be frozen into kernels at compile time.
        """
        n = len(coeff64)
        width = n*self.width + (n - 1)*n//2*self.time_width
        packed = [int32(0)] * ((width + 31)//32)
        self.pack_coeff_mu(coeff64, packed)
        return packed

    def coeff_as_packed(self, coeff):
        """Convert floating point spline coefficients into 32 bit integer
        packed data.

        This is a host-only method that can be used to generate packed
        spline coefficient data to be frozen into kernels at compile time.
        """
        coeff64 = [int64(0)] * len(coeff)
        self.coeff_to_mu(coeff, coeff64)
        return self.coeff_as_packed_mu(coeff64)

    @kernel(flags={"fast-math"})
    def set_coeff(self, coeff):  # TList(TFloat)
        """Set spline coefficients.

        Missing coefficients (high order) are zero-extended byt the RTIO
        gateware.

        If more coefficients are supplied than the gateware supports the extra
        coefficients are ignored.

        :param value: List of floating point spline coefficients,
            lowest order (constant) coefficient first. Units are the
            unit of this spline's value times increasing powers of 1/s.
        """
        n = len(coeff)
        coeff64 = [int64(0)] * n
        self.coeff_to_mu(coeff, coeff64)
        width = n*self.width + (n - 1)*n//2*self.time_width
        packed = [int32(0)] * ((width + 31)//32)
        self.pack_coeff_mu(coeff64, packed)
        self.set_coeff_mu(packed)

    @kernel(flags={"fast-math"})
    def smooth(self, start: TFloat, stop: TFloat, duration: TFloat,
               order: TInt32):
        """Initiate an interpolated value change.

        For zeroth order (step) interpolation, the step is at
        ``start + duration/2``.

        First order interpolation corresponds to a linear value ramp from
        ``start`` to ``stop`` over ``duration``.

        The third order interpolation is constrained to have zero first
        order derivative at both `start` and `stop`.

        For first order and third order interpolation (linear and cubic)
        the interpolator needs to be stopped explicitly at the stop time
        (e.g. by setting spline coefficient data or starting a new
        :meth:`smooth` interpolation).

        This method advances the timeline by ``duration``.

        :param start: Initial value of the change. In physical units.
        :param stop: Final value of the change. In physical units.
        :param duration: Duration of the interpolation. In physical units.
        :param order: Order of the interpolation. Only 0, 1,
            and 3 are valid: step, linear, cubic.
        """
        if order == 0:
            delay(duration/2.)
            self.set_coeff([stop])
            delay(duration/2.)
        elif order == 1:
            self.set_coeff([start, (stop - start)/duration])
            delay(duration)
        elif order == 3:
            v2 = 6.*(stop - start)/(duration*duration)
            self.set_coeff([start, 0., v2, -2.*v2/duration])
            delay(duration)
        else:
            raise ValueError("Invalid interpolation order. "
                             "Supported orders are: 0, 1, 3.")
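The cubic branch of ``smooth`` above deserves a quick sanity check: with coefficients ``[start, 0, v2, -2*v2/duration]`` and u(t) = u0 + u1·t + u2·t²/2 + u3·t³/6 as in the class docstring, the ramp starts at ``start`` with zero slope and lands on ``stop`` with zero slope after ``duration``. A short host-side verification (values are arbitrary):

```python
# Verify the boundary conditions of the order-3 branch of Spline.smooth().
start, stop, duration = 0.2, 0.8, 1e-3
v2 = 6.*(stop - start)/(duration*duration)
u0, u1, u2, u3 = start, 0., v2, -2.*v2/duration

def u(t):
    return u0 + u1*t + u2/2*t**2 + u3/6*t**3

def du(t):
    return u1 + u2*t + u3/2*t**2

assert abs(u(0.) - start) < 1e-9
assert abs(u(duration) - stop) < 1e-9
assert abs(du(0.)) < 1e-9
assert abs(du(duration)) < 1e-9
print("cubic smooth() hits both endpoints with zero slope")
```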
@ -23,12 +23,12 @@ def y_mu_to_full_scale(y):
|
||||||
|
|
||||||
|
|
||||||
@portable
|
@portable
|
||||||
def adc_mu_to_volts(x, gain, corrected_fs=True):
|
def adc_mu_to_volts(x, gain):
|
||||||
"""Convert servo ADC data from machine units to volts."""
|
"""Convert servo ADC data from machine units to Volt."""
|
||||||
val = (x >> 1) & 0xffff
|
val = (x >> 1) & 0xffff
|
||||||
mask = 1 << 15
|
mask = 1 << 15
|
||||||
val = -(val & mask) + (val & ~mask)
|
val = -(val & mask) + (val & ~mask)
|
||||||
return sampler.adc_mu_to_volt(val, gain, corrected_fs)
|
return sampler.adc_mu_to_volt(val, gain)
|
||||||
|
|
||||||
|
|
||||||
class SUServo:
|
class SUServo:
|
||||||
|
@ -62,15 +62,14 @@ class SUServo:
|
||||||
:param gains: Initial value for PGIA gains shift register
|
:param gains: Initial value for PGIA gains shift register
|
||||||
(default: 0x0000). Knowledge of this state is not transferred
|
(default: 0x0000). Knowledge of this state is not transferred
|
||||||
between experiments.
|
between experiments.
|
||||||
:param sampler_hw_rev: Sampler's revision string
|
|
||||||
:param core_device: Core device name
|
:param core_device: Core device name
|
||||||
"""
|
"""
|
||||||
kernel_invariants = {"channel", "core", "pgia", "cplds", "ddses",
|
kernel_invariants = {"channel", "core", "pgia", "cplds", "ddses",
|
||||||
"ref_period_mu", "corrected_fs"}
|
"ref_period_mu"}
|
||||||
|
|
||||||
def __init__(self, dmgr, channel, pgia_device,
|
def __init__(self, dmgr, channel, pgia_device,
|
||||||
cpld_devices, dds_devices,
|
cpld_devices, dds_devices,
|
||||||
gains=0x0000, sampler_hw_rev="v2.2", core_device="core"):
|
gains=0x0000, core_device="core"):
|
||||||
|
|
||||||
self.core = dmgr.get(core_device)
|
self.core = dmgr.get(core_device)
|
||||||
self.pgia = dmgr.get(pgia_device)
|
self.pgia = dmgr.get(pgia_device)
|
||||||
|
@ -82,13 +81,8 @@ class SUServo:
|
||||||
self.gains = gains
|
self.gains = gains
|
||||||
self.ref_period_mu = self.core.seconds_to_mu(
|
self.ref_period_mu = self.core.seconds_to_mu(
|
||||||
self.core.coarse_ref_period)
|
self.core.coarse_ref_period)
|
||||||
self.corrected_fs = sampler.Sampler.use_corrected_fs(sampler_hw_rev)
|
|
||||||
assert self.ref_period_mu == self.core.ref_multiplier
|
assert self.ref_period_mu == self.core.ref_multiplier
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_rtio_channels(channel, **kwargs):
|
|
||||||
return [(channel, None)]
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def init(self):
|
def init(self):
|
||||||
"""Initialize the servo, Sampler and both Urukuls.
|
"""Initialize the servo, Sampler and both Urukuls.
|
||||||
|
@ -155,7 +149,7 @@ class SUServo:
|
||||||
This method advances the timeline by one servo memory access.
|
This method advances the timeline by one servo memory access.
|
||||||
It does not support RTIO event replacement.
|
It does not support RTIO event replacement.
|
||||||
|
|
||||||
:param int enable: Enable servo operation. Enabling starts servo
|
:param enable (int): Enable servo operation. Enabling starts servo
|
||||||
iterations beginning with the ADC sampling stage. The first DDS
|
iterations beginning with the ADC sampling stage. The first DDS
|
||||||
update will happen about two servo cycles (~2.3 µs) after enabling
|
update will happen about two servo cycles (~2.3 µs) after enabling
|
||||||
the servo. The delay is deterministic.
|
the servo. The delay is deterministic.
|
||||||
|
@ -198,7 +192,7 @@ class SUServo:
|
||||||
consistent and valid data, stop the servo before using this method.
|
consistent and valid data, stop the servo before using this method.
|
||||||
|
|
||||||
:param adc: ADC channel number (0-7)
|
:param adc: ADC channel number (0-7)
|
||||||
:return: 17-bit signed X0
|
:return: 17 bit signed X0
|
||||||
"""
|
"""
|
||||||
# State memory entries are 25 bits. Due to the pre-adder dynamic
|
# State memory entries are 25 bits. Due to the pre-adder dynamic
|
||||||
# range, X0/X1/OFFSET are only 24 bits. Finally, the RTIO interface
|
# range, X0/X1/OFFSET are only 24 bits. Finally, the RTIO interface
|
||||||
|
@ -240,7 +234,7 @@ class SUServo:
|
||||||
"""
|
"""
|
||||||
val = self.get_adc_mu(channel)
|
val = self.get_adc_mu(channel)
|
||||||
gain = (self.gains >> (channel*2)) & 0b11
|
gain = (self.gains >> (channel*2)) & 0b11
|
||||||
return adc_mu_to_volts(val, gain, self.corrected_fs)
|
return adc_mu_to_volts(val, gain)
|
||||||
|
|
||||||
|
|
||||||
class Channel:
|
class Channel:
|
||||||
|
@ -261,10 +255,6 @@ class Channel:
|
||||||
self.servo.channel)
|
self.servo.channel)
|
||||||
self.dds = self.servo.ddses[self.servo_channel // 4]
|
self.dds = self.servo.ddses[self.servo_channel // 4]
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_rtio_channels(channel, **kwargs):
|
|
||||||
return [(channel, None)]
|
|
||||||
|
|
||||||
@kernel
|
@kernel
|
||||||
def set(self, en_out, en_iir=0, profile=0):
|
def set(self, en_out, en_iir=0, profile=0):
|
||||||
"""Operate channel.
|
"""Operate channel.
|
||||||
|
@ -288,12 +278,12 @@ class Channel:
|
||||||
def set_dds_mu(self, profile, ftw, offs, pow_=0):
|
def set_dds_mu(self, profile, ftw, offs, pow_=0):
|
||||||
"""Set profile DDS coefficients in machine units.
|
"""Set profile DDS coefficients in machine units.
|
||||||
|
|
||||||
See also :meth:`Channel.set_dds`.
|
.. seealso:: :meth:`set_amplitude`
|
||||||
|
|
||||||
:param profile: Profile number (0-31)
|
:param profile: Profile number (0-31)
|
||||||
:param ftw: Frequency tuning word (32-bit unsigned)
|
:param ftw: Frequency tuning word (32 bit unsigned)
|
||||||
:param offs: IIR offset (17-bit signed)
|
:param offs: IIR offset (17 bit signed)
|
||||||
:param pow_: Phase offset word (16-bit)
|
:param pow_: Phase offset word (16 bit)
|
||||||
"""
|
"""
|
||||||
base = (self.servo_channel << 8) | (profile << 3)
|
base = (self.servo_channel << 8) | (profile << 3)
|
||||||
self.servo.write(base + 0, ftw >> 16)
|
self.servo.write(base + 0, ftw >> 16)
|
||||||
|
@ -327,7 +317,7 @@ class Channel:
|
||||||
See :meth:`set_dds_mu` for setting the complete DDS profile.
|
See :meth:`set_dds_mu` for setting the complete DDS profile.
|
||||||
|
|
||||||
:param profile: Profile number (0-31)
|
:param profile: Profile number (0-31)
|
||||||
:param offs: IIR offset (17-bit signed)
|
:param offs: IIR offset (17 bit signed)
|
||||||
"""
|
"""
|
||||||
base = (self.servo_channel << 8) | (profile << 3)
|
base = (self.servo_channel << 8) | (profile << 3)
|
||||||
self.servo.write(base + 4, offs)
|
self.servo.write(base + 4, offs)
|
||||||
|
@ -375,15 +365,15 @@ class Channel:
|
||||||
* :math:`b_0` and :math:`b_1` are the feedforward gains for the two
|
* :math:`b_0` and :math:`b_1` are the feedforward gains for the two
|
||||||
delays
|
delays
|
||||||
|
|
||||||
See also :meth:`Channel.set_iir`.
|
.. seealso:: :meth:`set_iir`
|
||||||
|
|
||||||
:param profile: Profile number (0-31)
|
:param profile: Profile number (0-31)
|
||||||
:param adc: ADC channel to take IIR input from (0-7)
|
:param adc: ADC channel to take IIR input from (0-7)
|
||||||
:param a1: 18-bit signed A1 coefficient (Y1 coefficient,
|
:param a1: 18 bit signed A1 coefficient (Y1 coefficient,
|
||||||
feedback, integrator gain)
|
feedback, integrator gain)
|
||||||
:param b0: 18-bit signed B0 coefficient (recent,
|
:param b0: 18 bit signed B0 coefficient (recent,
|
||||||
X0 coefficient, feed forward, proportional gain)
|
X0 coefficient, feed forward, proportional gain)
|
||||||
:param b1: 18-bit signed B1 coefficient (old,
|
:param b1: 18 bit signed B1 coefficient (old,
|
||||||
X1 coefficient, feed forward, proportional gain)
|
X1 coefficient, feed forward, proportional gain)
|
||||||
:param dly: IIR update suppression time. In units of IIR cycles
|
:param dly: IIR update suppression time. In units of IIR cycles
|
||||||
(~1.2 µs, 0-255).
|
(~1.2 µs, 0-255).
|
||||||
|
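The coefficient hunk above describes the IIR filter parameters. A hedged sketch of configuring and enabling the loop through the SI-unit API; gain values and the channel name are illustrative assumptions.

```python
@kernel
def setup_iir(self):
    ch = self.suservo0_ch0
    # SI-unit counterpart of set_iir_mu(): proportional gain kp and an
    # integrator gain ki (1/s), matching the a1/b0/b1 roles described above.
    ch.set_iir(profile=0, adc=0, kp=-0.1, ki=-300.0, g=0.0, delay=0.0)
    ch.set(en_out=1, en_iir=1, profile=0)
```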
@@ -499,7 +489,7 @@ class Channel:
consistent and valid data, stop the servo before using this method.

:param profile: Profile number (0-31)
- :return: 17-bit unsigned Y0
+ :return: 17 bit unsigned Y0
"""
return self.servo.read(STATE_SEL | (self.servo_channel << 5) | profile)


@@ -535,7 +525,7 @@ class Channel:
This method advances the timeline by one servo memory access.

:param profile: Profile number (0-31)
- :param y: 17-bit unsigned Y0
+ :param y: 17 bit unsigned Y0
"""
# State memory is 25 bits wide and signed.
# Reads interact with the 18 MSBs (coefficient memory width)
artiq/coredevice/ttl.py

@@ -27,7 +27,7 @@ class TTLOut:

This should be used with output-only channels.

- :param channel: Channel number
+ :param channel: channel number
"""
kernel_invariants = {"core", "channel", "target_o"}

@@ -36,10 +36,6 @@ class TTLOut:
self.channel = channel
self.target_o = channel << 8

- @staticmethod
- def get_rtio_channels(channel, **kwargs):
- return [(channel, None)]
-
@kernel
def output(self):
pass

@@ -109,7 +105,7 @@ class TTLInOut:
API is active (e.g. the gate is open, or the input events have not been
fully read out), another API must not be used simultaneously.

- :param channel: Channel number
+ :param channel: channel number
"""
kernel_invariants = {"core", "channel", "gate_latency_mu",
"target_o", "target_oe", "target_sens", "target_sample"}

@@ -132,10 +128,6 @@ class TTLInOut:
self.target_sens = (channel << 8) + 2
self.target_sample = (channel << 8) + 3

- @staticmethod
- def get_rtio_channels(channel, **kwargs):
- return [(channel, None)]
-
@kernel
def set_oe(self, oe):
rtio_output(self.target_oe, 1 if oe else 0)

@@ -145,7 +137,7 @@ class TTLInOut:
"""Set the direction to output at the current position of the time
cursor.

- A delay of at least one RTIO clock cycle is necessary before any
+ There must be a delay of at least one RTIO clock cycle before any
other command can be issued.

This method only configures the direction at the FPGA. When using

@@ -158,7 +150,7 @@ class TTLInOut:
"""Set the direction to input at the current position of the time
cursor.

- A delay of at least one RTIO clock cycle is necessary before any
+ There must be a delay of at least one RTIO clock cycle before any
other command can be issued.

This method only configures the direction at the FPGA. When using

@@ -326,18 +318,17 @@ class TTLInOut:
:return: The number of events before the timeout elapsed (0 if none
observed).

- **Examples:**
+ Examples:

To count events on channel ``ttl_input``, up to the current timeline
- position: ::
+ position::

ttl_input.count(now_mu())

If other events are scheduled between the end of the input gate
- period and when the number of events is counted, using
- :meth:`~artiq.language.core.now_mu()` as timeout consumes an
- unnecessary amount of timeline slack. In such cases, it can be
- beneficial to pass a more precise timestamp, for example: ::
+ period and when the number of events is counted, using ``now_mu()``
+ as timeout consumes an unnecessary amount of timeline slack. In
+ such cases, it can be beneficial to pass a more precise timestamp,
+ for example::

gate_end_mu = ttl_input.gate_rising(100 * us)

@@ -351,7 +342,7 @@ class TTLInOut:
num_rising_edges = ttl_input.count(gate_end_mu)

The ``gate_*()`` family of methods return the cursor at the end
- of the window, allowing this to be expressed in a compact fashion: ::
+ of the window, allowing this to be expressed in a compact fashion::

ttl_input.count(ttl_input.gate_rising(100 * us))
"""
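The docstring hunks above describe the gate-and-count pattern. A self-contained sketch of the same pattern inside an experiment, assuming a `TTLInOut` device named `ttl_input` in the device database:

```python
from artiq.experiment import EnvExperiment, kernel, us


class CountRisingEdges(EnvExperiment):
    """Minimal sketch of the gate-and-count pattern from the docstring above."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl_input")  # assumed TTLInOut device name

    @kernel
    def run(self):
        self.core.reset()
        self.ttl_input.input()
        # gate_rising() returns the cursor at the end of the window,
        # so it can be passed straight to count():
        n = self.ttl_input.count(self.ttl_input.gate_rising(100*us))
        print(n)
```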
@@ -442,7 +433,7 @@ class TTLInOut:
was being watched.

The time cursor is not modified by this function. This function
- always results in negative slack.
+ always makes the slack negative.
"""
rtio_output(self.target_sens, 0)
success = True

@@ -474,10 +465,6 @@ class TTLClockGen:

self.acc_width = numpy.int64(acc_width)

- @staticmethod
- def get_rtio_channels(channel, **kwargs):
- return [(channel, None)]
-
@portable
def frequency_to_ftw(self, frequency):
"""Returns the frequency tuning word corresponding to the given
artiq/coredevice/urukul.py

@@ -130,7 +130,7 @@ class CPLD:
:param spi_device: SPI bus device name
:param io_update_device: IO update RTIO TTLOut channel name
:param dds_reset_device: DDS reset RTIO TTLOut channel name
- :param sync_device: AD9910 ``SYNC_IN`` RTIO TTLClockGen channel name
+ :param sync_device: AD9910 SYNC_IN RTIO TTLClockGen channel name
:param refclk: Reference clock (SMA, MMCX or on-board 100 MHz oscillator)
frequency in Hz
:param clk_sel: Reference clock selection. For hardware revision >= 1.3

@@ -143,9 +143,9 @@ class CPLD:
1: divide-by-1; 2: divide-by-2; 3: divide-by-4.
On Urukul boards with CPLD gateware before v1.3.1 only the default
(0, i.e. variant dependent divider) is valid.
- :param sync_sel: ``SYNC`` (multi-chip synchronisation) signal source selection.
- 0 corresponds to ``SYNC_IN`` being supplied by the FPGA via the EEM
- connector. 1 corresponds to ``SYNC_OUT`` from DDS0 being distributed to the
+ :param sync_sel: SYNC (multi-chip synchronisation) signal source selection.
+ 0 corresponds to SYNC_IN being supplied by the FPGA via the EEM
+ connector. 1 corresponds to SYNC_OUT from DDS0 being distributed to the
other chips.
:param rf_sw: Initial CPLD RF switch register setting (default: 0x0).
Knowledge of this state is not transferred between experiments.

@@ -153,8 +153,8 @@ class CPLD:
0x00000000). See also :meth:`get_att_mu` which retrieves the hardware
state without side effects. Knowledge of this state is not transferred
between experiments.
- :param sync_div: ``SYNC_IN`` generator divider. The ratio between the coarse
- RTIO frequency and the ``SYNC_IN`` generator frequency (default: 2 if
+ :param sync_div: SYNC_IN generator divider. The ratio between the coarse
+ RTIO frequency and the SYNC_IN generator frequency (default: 2 if
`sync_device` was specified).
:param core_device: Core device name

@@ -204,7 +204,7 @@ class CPLD:

See :func:`urukul_cfg` for possible flags.

- :param cfg: 24-bit data to be written. Will be stored at
+ :param cfg: 24 bit data to be written. Will be stored at
:attr:`cfg_reg`.
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24,

@@ -237,7 +237,7 @@ class CPLD:

Resets the DDS I/O interface and verifies correct CPLD gateware
version.
- Does not pulse the DDS ``MASTER_RESET`` as that confuses the AD9910.
+ Does not pulse the DDS MASTER_RESET as that confuses the AD9910.

:param blind: Do not attempt to verify presence and compatibility.
"""

@@ -283,7 +283,7 @@ class CPLD:
def cfg_switches(self, state: TInt32):
"""Configure all four RF switches through the configuration register.

- :param state: RF switch state as a 4-bit integer.
+ :param state: RF switch state as a 4 bit integer.
"""
self.cfg_write((self.cfg_reg & ~0xf) | state)

@@ -326,10 +326,11 @@ class CPLD:

@kernel
def set_all_att_mu(self, att_reg: TInt32):
"""Set all four digital step attenuators (in machine units).
- See also :meth:`set_att_mu`.
+
+ .. seealso:: :meth:`set_att_mu`

- :param att_reg: Attenuator setting string (32-bit)
+ :param att_reg: Attenuator setting string (32 bit)
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 32,
SPIT_ATT_WR, CS_ATT)

@@ -341,7 +342,8 @@ class CPLD:
"""Set digital step attenuator in SI units.

This method will write the attenuator settings of all four channels.
- See also :meth:`set_att_mu`.
+
+ .. seealso:: :meth:`set_att_mu`

:param channel: Attenuator channel (0-3).
:param att: Attenuation setting in dB. Higher value is more

@@ -357,9 +359,9 @@ class CPLD:
The result is stored and will be used in future calls of
:meth:`set_att_mu` and :meth:`set_att`.

- See also :meth:`get_channel_att_mu`.
+ .. seealso:: :meth:`get_channel_att_mu`

- :return: 32-bit attenuator settings
+ :return: 32 bit attenuator settings
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_INPUT, 32,
SPIT_ATT_RD, CS_ATT)

@@ -378,7 +380,7 @@ class CPLD:
The result is stored and will be used in future calls of
:meth:`set_att_mu` and :meth:`set_att`.

- See also :meth:`get_att_mu`.
+ .. seealso:: :meth:`get_att_mu`

:param channel: Attenuator channel (0-3).
:return: 8-bit digital attenuation setting:

@@ -390,7 +392,7 @@ class CPLD:
def get_channel_att(self, channel: TInt32) -> TFloat:
"""Get digital step attenuator value for a channel in SI units.

- See also :meth:`get_channel_att_mu`.
+ .. seealso:: :meth:`get_channel_att_mu`

:param channel: Attenuator channel (0-3).
:return: Attenuation setting in dB. Higher value is more
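The attenuator hunks above document the per-channel setters and the readback path. A hedged fragment showing how they might be combined in a kernel; the device name `urukul0_cpld` and the 10 dB value are illustrative assumptions, the methods are those named in the diff.

```python
@kernel
def set_attenuation(self):
    cpld = self.urukul0_cpld             # assumed CPLD device name
    cpld.init()
    cpld.get_att_mu()                    # fetch hardware state so partial updates are correct
    cpld.set_att(0, 10.0)                # channel 0: 10 dB of attenuation
    att_mu = cpld.get_channel_att_mu(0)  # read back the stored 8-bit machine-unit value
```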
@@ -401,14 +403,14 @@ class CPLD:

@kernel
def set_sync_div(self, div: TInt32):
- """Set the ``SYNC_IN`` AD9910 pulse generator frequency
+ """Set the SYNC_IN AD9910 pulse generator frequency
and align it to the current RTIO timestamp.

- The ``SYNC_IN`` signal is derived from the coarse RTIO clock
+ The SYNC_IN signal is derived from the coarse RTIO clock
and the divider must be a power of two.
Configure ``sync_sel == 0``.

- :param div: ``SYNC_IN`` frequency divider. Must be a power of two.
+ :param div: SYNC_IN frequency divider. Must be a power of two.
Minimum division ratio is 2. Maximum division ratio is 16.
"""
ftw_max = 1 << 4
artiq/coredevice/zotino.py

@@ -1,7 +1,7 @@
"""RTIO driver for the Zotino 32-channel, 16-bit 1MSPS DAC.

Output event replacement is not supported and issuing commands at the same
- time results in a collision error.
+ time is an error.
"""

from artiq.language.core import kernel
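Since the module docstring above warns that simultaneous commands are not replaced but error out, here is a hedged fragment spacing DAC updates on the timeline. The device name `zotino0`, the delays and the voltages are assumptions; `set_dac()` is the usual channel-list interface of this driver family.

```python
@kernel
def write_dac(self):
    self.zotino0.init()                  # assumed device name
    delay(200*us)
    # Space the updates apart: per the docstring above, two commands at the
    # same timestamp collide instead of being replaced.
    self.zotino0.set_dac([1.0, -1.0], [0, 1])
    delay(100*us)
    self.zotino0.set_dac([0.0], [2])
```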
artiq/dashboard/applets_ccb.py

@@ -1,7 +1,7 @@
import asyncio
import logging

- from PyQt6 import QtCore, QtGui, QtWidgets
+ from PyQt5 import QtCore, QtWidgets

from artiq.gui import applets

@@ -13,58 +13,58 @@ class AppletsCCBDock(applets.AppletsDock):
def __init__(self, *args, **kwargs):
applets.AppletsDock.__init__(self, *args, **kwargs)

- sep = QtGui.QAction(self.table)
+ sep = QtWidgets.QAction(self.table)
sep.setSeparator(True)
self.table.addAction(sep)

- ccbp_group_menu = QtWidgets.QMenu(self.table)
- actiongroup = QtGui.QActionGroup(self.table)
+ ccbp_group_menu = QtWidgets.QMenu()
+ actiongroup = QtWidgets.QActionGroup(self.table)
actiongroup.setExclusive(True)
- self.ccbp_group_none = QtGui.QAction("No policy", self.table)
+ self.ccbp_group_none = QtWidgets.QAction("No policy", self.table)
self.ccbp_group_none.setCheckable(True)
self.ccbp_group_none.triggered.connect(lambda: self.set_ccbp(""))
ccbp_group_menu.addAction(self.ccbp_group_none)
actiongroup.addAction(self.ccbp_group_none)
- self.ccbp_group_ignore = QtGui.QAction("Ignore requests", self.table)
+ self.ccbp_group_ignore = QtWidgets.QAction("Ignore requests", self.table)
self.ccbp_group_ignore.setCheckable(True)
self.ccbp_group_ignore.triggered.connect(lambda: self.set_ccbp("ignore"))
ccbp_group_menu.addAction(self.ccbp_group_ignore)
actiongroup.addAction(self.ccbp_group_ignore)
- self.ccbp_group_create = QtGui.QAction("Create applets", self.table)
+ self.ccbp_group_create = QtWidgets.QAction("Create applets", self.table)
self.ccbp_group_create.setCheckable(True)
self.ccbp_group_create.triggered.connect(lambda: self.set_ccbp("create"))
ccbp_group_menu.addAction(self.ccbp_group_create)
actiongroup.addAction(self.ccbp_group_create)
- self.ccbp_group_enable = QtGui.QAction("Create and enable/disable applets",
+ self.ccbp_group_enable = QtWidgets.QAction("Create and enable/disable applets",
self.table)
self.ccbp_group_enable.setCheckable(True)
self.ccbp_group_enable.triggered.connect(lambda: self.set_ccbp("enable"))
ccbp_group_menu.addAction(self.ccbp_group_enable)
actiongroup.addAction(self.ccbp_group_enable)
- self.ccbp_group_action = QtGui.QAction("Group CCB policy", self.table)
+ self.ccbp_group_action = QtWidgets.QAction("Group CCB policy", self.table)
self.ccbp_group_action.setMenu(ccbp_group_menu)
self.table.addAction(self.ccbp_group_action)
self.table.itemSelectionChanged.connect(self.update_group_ccbp_menu)
self.update_group_ccbp_menu()

- ccbp_global_menu = QtWidgets.QMenu(self.table)
- actiongroup = QtGui.QActionGroup(self.table)
+ ccbp_global_menu = QtWidgets.QMenu()
+ actiongroup = QtWidgets.QActionGroup(self.table)
actiongroup.setExclusive(True)
- self.ccbp_global_ignore = QtGui.QAction("Ignore requests", self.table)
+ self.ccbp_global_ignore = QtWidgets.QAction("Ignore requests", self.table)
self.ccbp_global_ignore.setCheckable(True)
ccbp_global_menu.addAction(self.ccbp_global_ignore)
actiongroup.addAction(self.ccbp_global_ignore)
- self.ccbp_global_create = QtGui.QAction("Create applets", self.table)
+ self.ccbp_global_create = QtWidgets.QAction("Create applets", self.table)
self.ccbp_global_create.setCheckable(True)
self.ccbp_global_create.setChecked(True)
ccbp_global_menu.addAction(self.ccbp_global_create)
actiongroup.addAction(self.ccbp_global_create)
- self.ccbp_global_enable = QtGui.QAction("Create and enable/disable applets",
+ self.ccbp_global_enable = QtWidgets.QAction("Create and enable/disable applets",
self.table)
self.ccbp_global_enable.setCheckable(True)
ccbp_global_menu.addAction(self.ccbp_global_enable)
actiongroup.addAction(self.ccbp_global_enable)
- ccbp_global_action = QtGui.QAction("Global CCB policy", self.table)
+ ccbp_global_action = QtWidgets.QAction("Global CCB policy", self.table)
ccbp_global_action.setMenu(ccbp_global_menu)
self.table.addAction(ccbp_global_action)

@@ -196,7 +196,7 @@ class AppletsCCBDock(applets.AppletsDock):
logger.debug("Applet %s already exists and no update required", name)

if ccbp == "enable":
- applet.setCheckState(0, QtCore.Qt.CheckState.Checked)
+ applet.setCheckState(0, QtCore.Qt.Checked)

def ccb_disable_applet(self, name, group=None):
"""Disables an applet.

@@ -216,7 +216,7 @@ class AppletsCCBDock(applets.AppletsDock):
return
parent, applet = self.locate_applet(name, group, False)
if applet is not None:
- applet.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
+ applet.setCheckState(0, QtCore.Qt.Unchecked)

def ccb_disable_applet_group(self, group):
"""Disables all the applets in a group.

@@ -246,7 +246,7 @@ class AppletsCCBDock(applets.AppletsDock):
return
else:
wi = nwi
- wi.setCheckState(0, QtCore.Qt.CheckState.Unchecked)
+ wi.setCheckState(0, QtCore.Qt.Unchecked)

def ccb_notify(self, message):
try:
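Most of the churn in this file tracks the Qt 5 to Qt 6 transition: `QAction` and `QActionGroup` live in `QtWidgets` under PyQt5 but moved to `QtGui` under PyQt6, and enum members became scoped (`QtCore.Qt.CheckState.Checked`). A small compatibility sketch, not part of the diff itself, illustrating the same rename:

```python
# Compatibility sketch: QAction/QActionGroup moved from QtWidgets (Qt 5)
# to QtGui (Qt 6), which is exactly what the +/- lines above track.
try:
    from PyQt6.QtGui import QAction, QActionGroup      # Qt 6 location
except ImportError:
    from PyQt5.QtWidgets import QAction, QActionGroup  # Qt 5 location


def make_policy_action(parent, label):
    # Build a checkable menu action the way AppletsCCBDock does above.
    action = QAction(label, parent)
    action.setCheckable(True)
    return action
```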
artiq/dashboard/datasets.py

@@ -2,29 +2,93 @@ import asyncio
import logging

import numpy as np
- from PyQt6 import QtCore, QtGui, QtWidgets
+ from PyQt5 import QtCore, QtWidgets
from sipyco import pyon

- from artiq.tools import scale_from_metadata, short_format, exc_to_warning
- from artiq.gui.tools import LayoutWidget
+ from artiq.tools import short_format, exc_to_warning
+ from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
from artiq.gui.models import DictSyncTreeSepModel
+ from artiq.gui.scientific_spinbox import ScientificSpinBox


logger = logging.getLogger(__name__)


- async def rename(key, new_key, value, metadata, persist, dataset_ctl):
- if key != new_key:
- await dataset_ctl.delete(key)
- await dataset_ctl.set(new_key, value, metadata=metadata, persist=persist)
-
-
- class CreateEditDialog(QtWidgets.QDialog):
- def __init__(self, parent, dataset_ctl, key=None, value=None, metadata=None, persist=False):
+ class Editor(QtWidgets.QDialog):
+ def __init__(self, parent, dataset_ctl, key, value):
+ QtWidgets.QDialog.__init__(self, parent=parent)
+ self.dataset_ctl = dataset_ctl
+ self.key = key
+ self.initial_type = type(value)
+
+ self.setWindowTitle("Edit dataset")
+ grid = QtWidgets.QGridLayout()
+ self.setLayout(grid)
+
+ grid.addWidget(QtWidgets.QLabel("Name:"), 0, 0)
+ grid.addWidget(QtWidgets.QLabel(key), 0, 1)
+
+ grid.addWidget(QtWidgets.QLabel("Value:"), 1, 0)
+ grid.addWidget(self.get_edit_widget(value), 1, 1)
+
+ buttons = QtWidgets.QDialogButtonBox(
+ QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel)
+ grid.setRowStretch(2, 1)
+ grid.addWidget(buttons, 3, 0, 1, 2)
+ buttons.accepted.connect(self.accept)
+ buttons.rejected.connect(self.reject)
+
+ def accept(self):
+ value = self.initial_type(self.get_edit_widget_value())
+ asyncio.ensure_future(self.dataset_ctl.set(self.key, value))
+ QtWidgets.QDialog.accept(self)
+
+ def get_edit_widget(self, initial_value):
+ raise NotImplementedError
+
+ def get_edit_widget_value(self):
+ raise NotImplementedError
+
+
+ class NumberEditor(Editor):
+ def get_edit_widget(self, initial_value):
+ self.edit_widget = ScientificSpinBox()
+ self.edit_widget.setDecimals(13)
+ self.edit_widget.setPrecision()
+ self.edit_widget.setRelativeStep()
+ self.edit_widget.setValue(float(initial_value))
+ return self.edit_widget
+
+ def get_edit_widget_value(self):
+ return self.edit_widget.value()
+
+
+ class BoolEditor(Editor):
+ def get_edit_widget(self, initial_value):
+ self.edit_widget = QtWidgets.QCheckBox()
+ self.edit_widget.setChecked(bool(initial_value))
+ return self.edit_widget
+
+ def get_edit_widget_value(self):
+ return self.edit_widget.isChecked()
+
+
+ class StringEditor(Editor):
+ def get_edit_widget(self, initial_value):
+ self.edit_widget = QtWidgets.QLineEdit()
+ self.edit_widget.setText(initial_value)
+ return self.edit_widget
+
+ def get_edit_widget_value(self):
+ return self.edit_widget.text()
+
+
+ class Creator(QtWidgets.QDialog):
+ def __init__(self, parent, dataset_ctl):
QtWidgets.QDialog.__init__(self, parent=parent)
self.dataset_ctl = dataset_ctl

- self.setWindowTitle("Create dataset" if key is None else "Edit dataset")
+ self.setWindowTitle("Create dataset")
grid = QtWidgets.QGridLayout()
grid.setRowMinimumHeight(1, 40)
grid.setColumnMinimumWidth(2, 60)

@@ -42,128 +106,47 @@ class CreateEditDialog(QtWidgets.QDialog):
grid.addWidget(self.data_type, 1, 2)
self.value_widget.textChanged.connect(self.dtype)

- grid.addWidget(QtWidgets.QLabel("Unit:"), 2, 0)
- self.unit_widget = QtWidgets.QLineEdit()
- grid.addWidget(self.unit_widget, 2, 1)
-
- grid.addWidget(QtWidgets.QLabel("Scale:"), 3, 0)
- self.scale_widget = QtWidgets.QLineEdit()
- grid.addWidget(self.scale_widget, 3, 1)
-
- grid.addWidget(QtWidgets.QLabel("Precision:"), 4, 0)
- self.precision_widget = QtWidgets.QLineEdit()
- grid.addWidget(self.precision_widget, 4, 1)
-
- grid.addWidget(QtWidgets.QLabel("Persist:"), 5, 0)
+ grid.addWidget(QtWidgets.QLabel("Persist:"), 2, 0)
self.box_widget = QtWidgets.QCheckBox()
- grid.addWidget(self.box_widget, 5, 1)
+ grid.addWidget(self.box_widget, 2, 1)

self.ok = QtWidgets.QPushButton('&Ok')
self.ok.setEnabled(False)
self.cancel = QtWidgets.QPushButton('&Cancel')
self.buttons = QtWidgets.QDialogButtonBox(self)
self.buttons.addButton(
- self.ok, QtWidgets.QDialogButtonBox.ButtonRole.AcceptRole)
+ self.ok, QtWidgets.QDialogButtonBox.AcceptRole)
self.buttons.addButton(
- self.cancel, QtWidgets.QDialogButtonBox.ButtonRole.RejectRole)
+ self.cancel, QtWidgets.QDialogButtonBox.RejectRole)
- grid.setRowStretch(6, 1)
+ grid.setRowStretch(3, 1)
- grid.addWidget(self.buttons, 7, 0, 1, 3, alignment=QtCore.Qt.AlignmentFlag.AlignHCenter)
+ grid.addWidget(self.buttons, 4, 0, 1, 3)
self.buttons.accepted.connect(self.accept)
self.buttons.rejected.connect(self.reject)

- self.key = key
- self.name_widget.setText(key)
-
- value_edit_string = self.value_to_edit_string(value)
- if metadata is not None:
- scale = scale_from_metadata(metadata)
- t = value.dtype if value is np.ndarray else type(value)
- if scale != 1 and np.issubdtype(t, np.number):
- # degenerates to float type
- value_edit_string = self.value_to_edit_string(value / scale)
- self.unit_widget.setText(metadata.get('unit', ''))
- self.scale_widget.setText(str(metadata.get('scale', '')))
- self.precision_widget.setText(str(metadata.get('precision', '')))
-
- self.value_widget.setText(value_edit_string)
- self.box_widget.setChecked(persist)
-
def accept(self):
key = self.name_widget.text()
value = self.value_widget.text()
persist = self.box_widget.isChecked()
- unit = self.unit_widget.text()
- scale = self.scale_widget.text()
- precision = self.precision_widget.text()
- metadata = {}
- if unit != "":
- metadata['unit'] = unit
- if scale != "":
- metadata['scale'] = float(scale)
- if precision != "":
- metadata['precision'] = int(precision)
- scale = scale_from_metadata(metadata)
- value = self.parse_edit_string(value)
- t = value.dtype if value is np.ndarray else type(value)
- if scale != 1 and np.issubdtype(t, np.number):
- # degenerates to float type
- value = float(value * scale)
- if self.key and self.key != key:
- asyncio.ensure_future(exc_to_warning(rename(self.key, key, value, metadata, persist,
- self.dataset_ctl)))
- else:
- asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(key, value, metadata=metadata,
- persist=persist)))
- self.key = key
+ asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(
+ key, pyon.decode(value), persist)))
QtWidgets.QDialog.accept(self)

def dtype(self):
txt = self.value_widget.text()
try:
- result = self.parse_edit_string(txt)
- # ensure only pyon compatible types are permissable
- pyon.encode(result)
+ result = pyon.decode(txt)
except:
pixmap = self.style().standardPixmap(
- QtWidgets.QStyle.StandardPixmap.SP_MessageBoxWarning)
+ QtWidgets.QStyle.SP_MessageBoxWarning)
self.data_type.setPixmap(pixmap)
self.ok.setEnabled(False)
else:
self.data_type.setText(type(result).__name__)
self.ok.setEnabled(True)

- @staticmethod
- def parse_edit_string(s):
- if s == "":
- raise TypeError
- _eval_dict = {
- "__builtins__": {},
- "array": np.array,
- "null": np.nan,
- "inf": np.inf
- }
- for t_ in pyon._numpy_scalar:
- _eval_dict[t_] = eval("np.{}".format(t_), {"np": np})
- return eval(s, _eval_dict, {})
-
- @staticmethod
- def value_to_edit_string(v):
- t = type(v)
- r = ""
- if isinstance(v, np.generic):
- r += t.__name__
- r += "("
- r += repr(v)
- r += ")"
- elif v is None:
- return r
- else:
- r += repr(v)
- return r


class Model(DictSyncTreeSepModel):
def __init__(self, init):
DictSyncTreeSepModel.__init__(self, ".",
["Dataset", "Persistent", "Value"],
init)

@@ -172,17 +155,17 @@ class Model(DictSyncTreeSepModel):
if column == 1:
return "Y" if v[0] else "N"
elif column == 2:
- return short_format(v[1], v[2])
+ return short_format(v[1])
else:
raise ValueError


class DatasetsDock(QtWidgets.QDockWidget):
- def __init__(self, dataset_sub, dataset_ctl):
+ def __init__(self, datasets_sub, dataset_ctl):
QtWidgets.QDockWidget.__init__(self, "Datasets")
self.setObjectName("Datasets")
- self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
- QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
+ self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
+ QtWidgets.QDockWidget.DockWidgetFloatable)
self.dataset_ctl = dataset_ctl

grid = LayoutWidget()

@@ -194,31 +177,31 @@ class DatasetsDock(QtWidgets.QDockWidget):
grid.addWidget(self.search, 0, 0)

self.table = QtWidgets.QTreeView()
- self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
+ self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
self.table.setSelectionMode(
- QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
+ QtWidgets.QAbstractItemView.SingleSelection)
grid.addWidget(self.table, 1, 0)

- self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
+ self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
- create_action = QtGui.QAction("New dataset", self.table)
+ create_action = QtWidgets.QAction("New dataset", self.table)
create_action.triggered.connect(self.create_clicked)
create_action.setShortcut("CTRL+N")
- create_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+ create_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
self.table.addAction(create_action)
- edit_action = QtGui.QAction("Edit dataset", self.table)
+ edit_action = QtWidgets.QAction("Edit dataset", self.table)
edit_action.triggered.connect(self.edit_clicked)
edit_action.setShortcut("RETURN")
- edit_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+ edit_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
self.table.doubleClicked.connect(self.edit_clicked)
self.table.addAction(edit_action)
- delete_action = QtGui.QAction("Delete dataset", self.table)
+ delete_action = QtWidgets.QAction("Delete dataset", self.table)
delete_action.triggered.connect(self.delete_clicked)
delete_action.setShortcut("DELETE")
- delete_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+ delete_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
self.table.addAction(delete_action)

self.table_model = Model(dict())
- dataset_sub.add_setmodel_callback(self.set_model)
+ datasets_sub.add_setmodel_callback(self.set_model)

def _search_datasets(self):
if hasattr(self, "table_model_filter"):

@@ -227,13 +210,12 @@ class DatasetsDock(QtWidgets.QDockWidget):

def set_model(self, model):
self.table_model = model
- self.table_model_filter = QtCore.QSortFilterProxyModel()
- self.table_model_filter.setRecursiveFilteringEnabled(True)
+ self.table_model_filter = QRecursiveFilterProxyModel()
self.table_model_filter.setSourceModel(self.table_model)
self.table.setModel(self.table_model_filter)

def create_clicked(self):
- CreateEditDialog(self, self.dataset_ctl).open()
+ Creator(self, self.dataset_ctl).open()

def edit_clicked(self):
idx = self.table.selectedIndexes()

@@ -241,8 +223,19 @@ class DatasetsDock(QtWidgets.QDockWidget):
idx = self.table_model_filter.mapToSource(idx[0])
key = self.table_model.index_to_key(idx)
if key is not None:
- persist, value, metadata = self.table_model.backing_store[key]
- CreateEditDialog(self, self.dataset_ctl, key, value, metadata, persist).open()
+ persist, value = self.table_model.backing_store[key]
+ t = type(value)
+ if np.issubdtype(t, np.number):
+ dialog_cls = NumberEditor
+ elif np.issubdtype(t, np.bool_):
+ dialog_cls = BoolEditor
+ elif np.issubdtype(t, np.unicode_):
+ dialog_cls = StringEditor
+ else:
+ logger.error("Cannot edit dataset %s: "
+ "type %s is not supported", key, t)
+ return
+ dialog_cls(self, self.dataset_ctl, key, value).open()

def delete_clicked(self):
idx = self.table.selectedIndexes()
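One side of this diff carries dataset metadata (unit, scale, precision) and persistence through `CreateEditDialog`. A hedged sketch of how an experiment could supply the same metadata when creating a dataset; whether `set_dataset()` accepts these keywords depends on the ARTIQ version in use, so treat the signature below as an assumption.

```python
from artiq.experiment import EnvExperiment


class RecordDetuning(EnvExperiment):
    def build(self):
        pass

    def run(self):
        # Keyword names mirror the unit/scale/precision fields handled by
        # CreateEditDialog in the hunks above (assumed to be supported here).
        self.set_dataset("detuning", 1.5e6, unit="Hz", precision=3,
                         broadcast=True, persist=True)
```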
artiq/dashboard/experiments.py

@@ -4,15 +4,14 @@ import os
from functools import partial
from collections import OrderedDict

- from PyQt6 import QtCore, QtGui, QtWidgets
+ from PyQt5 import QtCore, QtGui, QtWidgets
import h5py

from sipyco import pyon

- from artiq.gui.entries import procdesc_to_entry, EntryTreeWidget
+ from artiq.gui.entries import procdesc_to_entry, ScanEntry
from artiq.gui.fuzzy_select import FuzzySelectWidget
- from artiq.gui.tools import (LayoutWidget, log_level_to_name, get_open_file_name)
+ from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name
- from artiq.tools import parse_devarg_override, unparse_devarg_override


logger = logging.getLogger(__name__)

@@ -24,32 +23,117 @@ logger = logging.getLogger(__name__)
# 2. file:<class name>@<file name>


- class _ArgumentEditor(EntryTreeWidget):
+ class _WheelFilter(QtCore.QObject):
+ def eventFilter(self, obj, event):
+ if (event.type() == QtCore.QEvent.Wheel and
+ event.modifiers() != QtCore.Qt.NoModifier):
+ event.ignore()
+ return True
+ return False
+
+
+ class _ArgumentEditor(QtWidgets.QTreeWidget):
def __init__(self, manager, dock, expurl):
self.manager = manager
self.expurl = expurl

- EntryTreeWidget.__init__(self)
+ QtWidgets.QTreeWidget.__init__(self)
+ self.setColumnCount(3)
+ self.header().setStretchLastSection(False)
+ if hasattr(self.header(), "setSectionResizeMode"):
+ set_resize_mode = self.header().setSectionResizeMode
+ else:
+ set_resize_mode = self.header().setResizeMode
+ set_resize_mode(0, QtWidgets.QHeaderView.ResizeToContents)
+ set_resize_mode(1, QtWidgets.QHeaderView.Stretch)
+ set_resize_mode(2, QtWidgets.QHeaderView.ResizeToContents)
+ self.header().setVisible(False)
+ self.setSelectionMode(self.NoSelection)
+ self.setHorizontalScrollMode(self.ScrollPerPixel)
+ self.setVerticalScrollMode(self.ScrollPerPixel)
+
+ self.setStyleSheet("QTreeWidget {background: " +
+ self.palette().midlight().color().name() + " ;}")
+
+ self.viewport().installEventFilter(_WheelFilter(self.viewport()))
+
+ self._groups = dict()
+ self._arg_to_widgets = dict()
+
arguments = self.manager.get_submission_arguments(self.expurl)

if not arguments:
- self.insertTopLevelItem(0, QtWidgets.QTreeWidgetItem(["No arguments"]))
+ self.addTopLevelItem(QtWidgets.QTreeWidgetItem(["No arguments"]))
+
+ gradient = QtGui.QLinearGradient(
+ 0, 0, 0, QtGui.QFontMetrics(self.font()).lineSpacing()*2.5)
+ gradient.setColorAt(0, self.palette().base().color())
+ gradient.setColorAt(1, self.palette().midlight().color())
for name, argument in arguments.items():
- self.set_argument(name, argument)
-
- self.quickStyleClicked.connect(dock.submit_clicked)
+ widgets = dict()
+ self._arg_to_widgets[name] = widgets
+
+ entry = procdesc_to_entry(argument["desc"])(argument)
+ widget_item = QtWidgets.QTreeWidgetItem([name])
+ if argument["tooltip"]:
+ widget_item.setToolTip(0, argument["tooltip"])
+ widgets["entry"] = entry
+ widgets["widget_item"] = widget_item
+
+ for col in range(3):
+ widget_item.setBackground(col, gradient)
+ font = widget_item.font(0)
+ font.setBold(True)
+ widget_item.setFont(0, font)
+
+ if argument["group"] is None:
+ self.addTopLevelItem(widget_item)
+ else:
+ self._get_group(argument["group"]).addChild(widget_item)
+ fix_layout = LayoutWidget()
+ widgets["fix_layout"] = fix_layout
+ fix_layout.addWidget(entry)
+ self.setItemWidget(widget_item, 1, fix_layout)
+ recompute_argument = QtWidgets.QToolButton()
+ recompute_argument.setToolTip("Re-run the experiment's build "
+ "method and take the default value")
+ recompute_argument.setIcon(
+ QtWidgets.QApplication.style().standardIcon(
+ QtWidgets.QStyle.SP_BrowserReload))
+ recompute_argument.clicked.connect(
+ partial(self._recompute_argument_clicked, name))
+
+ tool_buttons = LayoutWidget()
+ tool_buttons.addWidget(recompute_argument, 1)
+
+ disable_other_scans = QtWidgets.QToolButton()
+ widgets["disable_other_scans"] = disable_other_scans
+ disable_other_scans.setIcon(
+ QtWidgets.QApplication.style().standardIcon(
+ QtWidgets.QStyle.SP_DialogResetButton))
+ disable_other_scans.setToolTip("Disable all other scans in "
+ "this experiment")
+ disable_other_scans.clicked.connect(
+ partial(self._disable_other_scans, name))
+ tool_buttons.layout.setRowStretch(0, 1)
+ tool_buttons.layout.setRowStretch(3, 1)
+ tool_buttons.addWidget(disable_other_scans, 2)
+ if not isinstance(entry, ScanEntry):
+ disable_other_scans.setVisible(False)
+
+ self.setItemWidget(widget_item, 2, tool_buttons)
+
+ widget_item = QtWidgets.QTreeWidgetItem()
+ self.addTopLevelItem(widget_item)
recompute_arguments = QtWidgets.QPushButton("Recompute all arguments")
recompute_arguments.setIcon(
QtWidgets.QApplication.style().standardIcon(
- QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
+ QtWidgets.QStyle.SP_BrowserReload))
recompute_arguments.clicked.connect(dock._recompute_arguments_clicked)

load_hdf5 = QtWidgets.QPushButton("Load HDF5")
load_hdf5.setIcon(QtWidgets.QApplication.style().standardIcon(
- QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
+ QtWidgets.QStyle.SP_DialogOpenButton))
load_hdf5.clicked.connect(dock._load_hdf5_clicked)

buttons = LayoutWidget()
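The `_WheelFilter` added on one side of the hunk above swallows wheel events that carry a keyboard modifier, so Ctrl+wheel reaches the spin boxes instead of scrolling the tree. A standalone hedged sketch of the same pattern (PyQt5 names, matching that side of the diff), usable on any scrollable widget:

```python
from PyQt5 import QtCore, QtWidgets


class WheelFilter(QtCore.QObject):
    """Ignore wheel events with a keyboard modifier, as _WheelFilter does above."""
    def eventFilter(self, obj, event):
        if (event.type() == QtCore.QEvent.Wheel and
                event.modifiers() != QtCore.Qt.NoModifier):
            event.ignore()
            return True
        return False


# Usage sketch: keep modified wheel events from scrolling a tree widget.
# tree = QtWidgets.QTreeWidget()
# tree.viewport().installEventFilter(WheelFilter(tree.viewport()))
```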
@@ -59,14 +143,28 @@ class _ArgumentEditor(EntryTreeWidget):
buttons.layout.setColumnStretch(1, 0)
buttons.layout.setColumnStretch(2, 0)
buttons.layout.setColumnStretch(3, 1)
- self.setItemWidget(self.bottom_item, 1, buttons)
+ self.setItemWidget(widget_item, 1, buttons)

- def reset_entry(self, key):
- asyncio.ensure_future(self._recompute_argument(key))
+ def _get_group(self, name):
+ if name in self._groups:
+ return self._groups[name]
+ group = QtWidgets.QTreeWidgetItem([name])
+ for col in range(3):
+ group.setBackground(col, self.palette().mid())
+ group.setForeground(col, self.palette().brightText())
+ font = group.font(col)
+ font.setBold(True)
+ group.setFont(col, font)
+ self.addTopLevelItem(group)
+ self._groups[name] = group
+ return group
+
+ def _recompute_argument_clicked(self, name):
+ asyncio.ensure_future(self._recompute_argument(name))

async def _recompute_argument(self, name):
try:
- expdesc, _ = await self.manager.compute_expdesc(self.expurl)
+ expdesc = await self.manager.compute_expdesc(self.expurl)
except:
logger.error("Could not recompute argument '%s' of '%s'",
name, self.expurl, exc_info=True)

@@ -77,16 +175,46 @@ class _ArgumentEditor(EntryTreeWidget):
state = procdesc_to_entry(procdesc).default_state(procdesc)
argument["desc"] = procdesc
argument["state"] = state
- self.update_argument(name, argument)
-
- # Hooks that allow user-supplied argument editors to react to imminent user
- # actions. Here, we always keep the manager-stored submission arguments
- # up-to-date, so no further action is required.
- def about_to_submit(self):
- pass
-
- def about_to_close(self):
- pass
+
+ # Qt needs a setItemWidget() to handle layout correctly,
+ # simply replacing the entry inside the LayoutWidget
+ # results in a bug.
+
+ widgets = self._arg_to_widgets[name]
+ widgets["entry"].deleteLater()
+ widgets["entry"] = procdesc_to_entry(procdesc)(argument)
+ widgets["disable_other_scans"].setVisible(
+ isinstance(widgets["entry"], ScanEntry))
+ widgets["fix_layout"].deleteLater()
+ widgets["fix_layout"] = LayoutWidget()
+ widgets["fix_layout"].addWidget(widgets["entry"])
+ self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
+ self.updateGeometries()
+
+ def _disable_other_scans(self, current_name):
+ for name, widgets in self._arg_to_widgets.items():
+ if (name != current_name
+ and isinstance(widgets["entry"], ScanEntry)):
+ widgets["entry"].disable()
+
+ def save_state(self):
+ expanded = []
+ for k, v in self._groups.items():
+ if v.isExpanded():
+ expanded.append(k)
+ return {
+ "expanded": expanded,
+ "scroll": self.verticalScrollBar().value()
+ }
+
+ def restore_state(self, state):
+ for e in state["expanded"]:
+ try:
+ self._groups[e].setExpanded(True)
+ except KeyError:
+ pass
+ self.verticalScrollBar().setValue(state["scroll"])


log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]

@@ -98,10 +226,10 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
def __init__(self, manager, expurl):
QtWidgets.QMdiSubWindow.__init__(self)
qfm = QtGui.QFontMetrics(self.font())
- self.resize(100 * qfm.averageCharWidth(), 30 * qfm.lineSpacing())
+ self.resize(100*qfm.averageCharWidth(), 30*qfm.lineSpacing())
self.setWindowTitle(expurl)
self.setWindowIcon(QtWidgets.QApplication.style().standardIcon(
- QtWidgets.QStyle.StandardPixmap.SP_FileDialogContentsView))
+ QtWidgets.QStyle.SP_FileDialogContentsView))

self.layout = QtWidgets.QGridLayout()
top_widget = QtWidgets.QWidget()

@@ -113,8 +241,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
self.manager = manager
self.expurl = expurl

- editor_class = self.manager.get_argument_editor_class(expurl)
- self.argeditor = editor_class(self.manager, self, self.expurl)
+ self.argeditor = _ArgumentEditor(self.manager, self, self.expurl)
self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
self.layout.setRowStretch(0, 1)

@@ -131,17 +258,17 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
datetime.setDate(QtCore.QDate.currentDate())
else:
datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch(
- int(scheduling["due_date"] * 1000)))
+ scheduling["due_date"]*1000))
datetime_en.setChecked(scheduling["due_date"] is not None)

def update_datetime(dt):
- scheduling["due_date"] = dt.toMSecsSinceEpoch() / 1000
+ scheduling["due_date"] = dt.toMSecsSinceEpoch()/1000
datetime_en.setChecked(True)
datetime.dateTimeChanged.connect(update_datetime)

def update_datetime_en(checked):
if checked:
- due_date = datetime.dateTime().toMSecsSinceEpoch() / 1000
+ due_date = datetime.dateTime().toMSecsSinceEpoch()/1000
else:
due_date = None
scheduling["due_date"] = due_date

@@ -174,7 +301,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
flush = self.flush
flush.setToolTip("Flush the pipeline (of current- and higher-priority "
"experiments) before starting the experiment")
- self.layout.addWidget(flush, 2, 2)
+ self.layout.addWidget(flush, 2, 2, 1, 2)

flush.setChecked(scheduling["flush"])

@@ -182,20 +309,6 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
scheduling["flush"] = bool(checked)
flush.stateChanged.connect(update_flush)

- devarg_override = QtWidgets.QComboBox()
- devarg_override.setEditable(True)
- devarg_override.lineEdit().setPlaceholderText("Override device arguments")
- devarg_override.lineEdit().setClearButtonEnabled(True)
- devarg_override.insertItem(0, "core:analyze_at_run_end=True")
- self.layout.addWidget(devarg_override, 2, 3)
-
- devarg_override.setCurrentText(options["devarg_override"])
-
- def update_devarg_override(text):
- options["devarg_override"] = text
- devarg_override.editTextChanged.connect(update_devarg_override)
- self.devarg_override = devarg_override
-
log_level = QtWidgets.QComboBox()
log_level.addItems(log_levels)
log_level.setCurrentIndex(1)

@@ -216,11 +329,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
if "repo_rev" in options:
repo_rev = QtWidgets.QLineEdit()
repo_rev.setPlaceholderText("current")
- repo_rev.setClearButtonEnabled(True)
- repo_rev_label = QtWidgets.QLabel("Rev / ref:")
+ repo_rev_label = QtWidgets.QLabel("Revision:")
repo_rev_label.setToolTip("Experiment repository revision "
- "(commit ID) or reference (branch "
- "or tag) to use")
+ "(commit ID) to use")
self.layout.addWidget(repo_rev_label, 3, 2)
self.layout.addWidget(repo_rev, 3, 3)

@@ -237,28 +348,27 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):

submit = QtWidgets.QPushButton("Submit")
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
QtWidgets.QStyle.SP_DialogOkButton))
|
||||||
submit.setToolTip("Schedule the experiment (Ctrl+Return)")
|
submit.setToolTip("Schedule the experiment (Ctrl+Return)")
|
||||||
submit.setShortcut("CTRL+RETURN")
|
submit.setShortcut("CTRL+RETURN")
|
||||||
submit.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
submit.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
QtWidgets.QSizePolicy.Expanding)
|
||||||
self.layout.addWidget(submit, 1, 4, 2, 1)
|
self.layout.addWidget(submit, 1, 4, 2, 1)
|
||||||
submit.clicked.connect(self.submit_clicked)
|
submit.clicked.connect(self.submit_clicked)
|
||||||
|
|
||||||
reqterm = QtWidgets.QPushButton("Terminate instances")
|
reqterm = QtWidgets.QPushButton("Terminate instances")
|
||||||
reqterm.setIcon(QtWidgets.QApplication.style().standardIcon(
|
reqterm.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||||
QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
|
QtWidgets.QStyle.SP_DialogCancelButton))
|
||||||
reqterm.setToolTip("Request termination of instances (Ctrl+Backspace)")
|
reqterm.setToolTip("Request termination of instances (Ctrl+Backspace)")
|
||||||
reqterm.setShortcut("CTRL+BACKSPACE")
|
reqterm.setShortcut("CTRL+BACKSPACE")
|
||||||
reqterm.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding,
|
reqterm.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||||
QtWidgets.QSizePolicy.Policy.Expanding)
|
QtWidgets.QSizePolicy.Expanding)
|
||||||
self.layout.addWidget(reqterm, 3, 4)
|
self.layout.addWidget(reqterm, 3, 4)
|
||||||
reqterm.clicked.connect(self.reqterm_clicked)
|
reqterm.clicked.connect(self.reqterm_clicked)
|
||||||
|
|
||||||
self.hdf5_load_directory = os.path.expanduser("~")
|
self.hdf5_load_directory = os.path.expanduser("~")
|
||||||
|
|
||||||
def submit_clicked(self):
|
def submit_clicked(self):
|
||||||
self.argeditor.about_to_submit()
|
|
||||||
try:
|
try:
|
||||||
self.manager.submit(self.expurl)
|
self.manager.submit(self.expurl)
|
||||||
except:
|
except:
|
||||||
|
@ -281,7 +391,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
||||||
|
|
||||||
async def _recompute_arguments_task(self, overrides=dict()):
|
async def _recompute_arguments_task(self, overrides=dict()):
|
||||||
try:
|
try:
|
||||||
expdesc, ui_name = await self.manager.compute_expdesc(self.expurl)
|
expdesc = await self.manager.compute_expdesc(self.expurl)
|
||||||
except:
|
except:
|
||||||
logger.error("Could not recompute experiment description of '%s'",
|
logger.error("Could not recompute experiment description of '%s'",
|
||||||
self.expurl, exc_info=True)
|
self.expurl, exc_info=True)
|
||||||
|
@ -289,30 +399,30 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
||||||
arginfo = expdesc["arginfo"]
|
arginfo = expdesc["arginfo"]
|
||||||
for k, v in overrides.items():
|
for k, v in overrides.items():
|
||||||
# Some values (e.g. scans) may have multiple defaults in a list
|
# Some values (e.g. scans) may have multiple defaults in a list
|
||||||
if ("default" in arginfo[k][0] and isinstance(arginfo[k][0]["default"], list)):
|
if ("default" in arginfo[k][0]
|
||||||
|
and isinstance(arginfo[k][0]["default"], list)):
|
||||||
arginfo[k][0]["default"].insert(0, v)
|
arginfo[k][0]["default"].insert(0, v)
|
||||||
else:
|
else:
|
||||||
arginfo[k][0]["default"] = v
|
arginfo[k][0]["default"] = v
|
||||||
self.manager.initialize_submission_arguments(self.expurl, arginfo, ui_name)
|
self.manager.initialize_submission_arguments(self.expurl, arginfo)
|
||||||
|
|
||||||
argeditor_state = self.argeditor.save_state()
|
argeditor_state = self.argeditor.save_state()
|
||||||
self.argeditor.deleteLater()
|
self.argeditor.deleteLater()
|
||||||
|
|
||||||
editor_class = self.manager.get_argument_editor_class(self.expurl)
|
self.argeditor = _ArgumentEditor(self.manager, self, self.expurl)
|
||||||
self.argeditor = editor_class(self.manager, self, self.expurl)
|
|
||||||
self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
|
|
||||||
self.argeditor.restore_state(argeditor_state)
|
self.argeditor.restore_state(argeditor_state)
|
||||||
|
self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
|
||||||
|
|
||||||
def contextMenuEvent(self, event):
|
def contextMenuEvent(self, event):
|
||||||
menu = QtWidgets.QMenu(self)
|
menu = QtWidgets.QMenu(self)
|
||||||
reset_sched = menu.addAction("Reset scheduler settings")
|
reset_sched = menu.addAction("Reset scheduler settings")
|
||||||
action = menu.exec(self.mapToGlobal(event.pos()))
|
action = menu.exec_(self.mapToGlobal(event.pos()))
|
||||||
if action == reset_sched:
|
if action == reset_sched:
|
||||||
asyncio.ensure_future(self._recompute_sched_options_task())
|
asyncio.ensure_future(self._recompute_sched_options_task())
|
||||||
|
|
||||||
async def _recompute_sched_options_task(self):
|
async def _recompute_sched_options_task(self):
|
||||||
try:
|
try:
|
||||||
expdesc, _ = await self.manager.compute_expdesc(self.expurl)
|
expdesc = await self.manager.compute_expdesc(self.expurl)
|
||||||
except:
|
except:
|
||||||
logger.error("Could not recompute experiment description of '%s'",
|
logger.error("Could not recompute experiment description of '%s'",
|
||||||
self.expurl, exc_info=True)
|
self.expurl, exc_info=True)
|
||||||
|
@ -349,14 +459,11 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
||||||
return
|
return
|
||||||
|
|
||||||
try:
|
try:
|
||||||
if "devarg_override" in expid:
|
|
||||||
self.devarg_override.setCurrentText(
|
|
||||||
unparse_devarg_override(expid["devarg_override"]))
|
|
||||||
self.log_level.setCurrentIndex(log_levels.index(
|
self.log_level.setCurrentIndex(log_levels.index(
|
||||||
log_level_to_name(expid["log_level"])))
|
log_level_to_name(expid["log_level"])))
|
||||||
if "repo_rev" in expid and \
|
if ("repo_rev" in expid and
|
||||||
expid["repo_rev"] != "N/A" and \
|
expid["repo_rev"] != "N/A" and
|
||||||
hasattr(self, "repo_rev"):
|
hasattr(self, "repo_rev")):
|
||||||
self.repo_rev.setText(expid["repo_rev"])
|
self.repo_rev.setText(expid["repo_rev"])
|
||||||
except:
|
except:
|
||||||
logger.error("Could not set submission options from HDF5 expid",
|
logger.error("Could not set submission options from HDF5 expid",
|
||||||
|
@ -366,7 +473,6 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
|
||||||
await self._recompute_arguments_task(arguments)
|
await self._recompute_arguments_task(arguments)
|
||||||
|
|
||||||
def closeEvent(self, event):
|
def closeEvent(self, event):
|
||||||
self.argeditor.about_to_close()
|
|
||||||
self.sigClosed.emit()
|
self.sigClosed.emit()
|
||||||
QtWidgets.QMdiSubWindow.closeEvent(self, event)
|
QtWidgets.QMdiSubWindow.closeEvent(self, event)
|
||||||
|
|
||||||
|
@ -423,7 +529,7 @@ class _QuickOpenDialog(QtWidgets.QDialog):
|
||||||
QtWidgets.QDialog.done(self, r)
|
QtWidgets.QDialog.done(self, r)
|
||||||
|
|
||||||
def _open_experiment(self, exp_name, modifiers):
|
def _open_experiment(self, exp_name, modifiers):
|
||||||
if modifiers & QtCore.Qt.KeyboardModifier.ControlModifier:
|
if modifiers & QtCore.Qt.ControlModifier:
|
||||||
try:
|
try:
|
||||||
self.manager.submit(exp_name)
|
self.manager.submit(exp_name)
|
||||||
except:
|
except:
|
||||||
|
@ -438,13 +544,7 @@ class _QuickOpenDialog(QtWidgets.QDialog):
|
||||||
|
|
||||||
|
|
||||||
class ExperimentManager:
|
class ExperimentManager:
|
||||||
#: Global registry for custom argument editor classes, indexed by the experiment
|
def __init__(self, main_window,
|
||||||
#: `argument_ui` string; can be populated by dashboard plugins such as ndscan.
|
|
||||||
#: If no handler for a requested UI name is found, the default built-in argument
|
|
||||||
#: editor will be used.
|
|
||||||
argument_ui_classes = dict()
|
|
||||||
|
|
||||||
def __init__(self, main_window, dataset_sub,
|
|
||||||
explist_sub, schedule_sub,
|
explist_sub, schedule_sub,
|
||||||
schedule_ctl, experiment_db_ctl):
|
schedule_ctl, experiment_db_ctl):
|
||||||
self.main_window = main_window
|
self.main_window = main_window
|
||||||
|
@ -455,10 +555,7 @@ class ExperimentManager:
|
||||||
self.submission_scheduling = dict()
|
self.submission_scheduling = dict()
|
||||||
self.submission_options = dict()
|
self.submission_options = dict()
|
||||||
self.submission_arguments = dict()
|
self.submission_arguments = dict()
|
||||||
self.argument_ui_names = dict()
|
|
||||||
|
|
||||||
self.datasets = dict()
|
|
||||||
dataset_sub.add_setmodel_callback(self.set_dataset_model)
|
|
||||||
self.explist = dict()
|
self.explist = dict()
|
||||||
explist_sub.add_setmodel_callback(self.set_explist_model)
|
explist_sub.add_setmodel_callback(self.set_explist_model)
|
||||||
self.schedule = dict()
|
self.schedule = dict()
|
||||||
|
@ -467,15 +564,12 @@ class ExperimentManager:
|
||||||
self.open_experiments = dict()
|
self.open_experiments = dict()
|
||||||
|
|
||||||
self.is_quick_open_shown = False
|
self.is_quick_open_shown = False
|
||||||
quick_open_shortcut = QtGui.QShortcut(
|
quick_open_shortcut = QtWidgets.QShortcut(
|
||||||
QtGui.QKeySequence("Ctrl+P"),
|
QtCore.Qt.CTRL + QtCore.Qt.Key_P,
|
||||||
main_window)
|
main_window)
|
||||||
quick_open_shortcut.setContext(QtCore.Qt.ShortcutContext.ApplicationShortcut)
|
quick_open_shortcut.setContext(QtCore.Qt.ApplicationShortcut)
|
||||||
quick_open_shortcut.activated.connect(self.show_quick_open)
|
quick_open_shortcut.activated.connect(self.show_quick_open)
|
||||||
|
|
||||||
def set_dataset_model(self, model):
|
|
||||||
self.datasets = model
|
|
||||||
|
|
||||||
def set_explist_model(self, model):
|
def set_explist_model(self, model):
|
||||||
self.explist = model.backing_store
|
self.explist = model.backing_store
|
||||||
|
|
||||||
|
@ -492,17 +586,6 @@ class ExperimentManager:
|
||||||
else:
|
else:
|
||||||
raise ValueError("Malformed experiment URL")
|
raise ValueError("Malformed experiment URL")
|
||||||
|
|
||||||
def get_argument_editor_class(self, expurl):
|
|
||||||
ui_name = self.argument_ui_names.get(expurl, None)
|
|
||||||
if not ui_name and expurl[:5] == "repo:":
|
|
||||||
ui_name = self.explist.get(expurl[5:], {}).get("argument_ui", None)
|
|
||||||
if ui_name:
|
|
||||||
result = self.argument_ui_classes.get(ui_name, None)
|
|
||||||
if result:
|
|
||||||
return result
|
|
||||||
logger.warning("Ignoring unknown argument UI '%s'", ui_name)
|
|
||||||
return _ArgumentEditor
|
|
||||||
|
|
||||||
def get_submission_scheduling(self, expurl):
|
def get_submission_scheduling(self, expurl):
|
||||||
if expurl in self.submission_scheduling:
|
if expurl in self.submission_scheduling:
|
||||||
return self.submission_scheduling[expurl]
|
return self.submission_scheduling[expurl]
|
||||||
|
@ -525,15 +608,14 @@ class ExperimentManager:
|
||||||
else:
|
else:
|
||||||
# mutated by _ExperimentDock
|
# mutated by _ExperimentDock
|
||||||
options = {
|
options = {
|
||||||
"log_level": logging.WARNING,
|
"log_level": logging.WARNING
|
||||||
"devarg_override": ""
|
|
||||||
}
|
}
|
||||||
if expurl[:5] == "repo:":
|
if expurl[:5] == "repo:":
|
||||||
options["repo_rev"] = None
|
options["repo_rev"] = None
|
||||||
self.submission_options[expurl] = options
|
self.submission_options[expurl] = options
|
||||||
return options
|
return options
|
||||||
|
|
||||||
def initialize_submission_arguments(self, expurl, arginfo, ui_name):
|
def initialize_submission_arguments(self, expurl, arginfo):
|
||||||
arguments = OrderedDict()
|
arguments = OrderedDict()
|
||||||
for name, (procdesc, group, tooltip) in arginfo.items():
|
for name, (procdesc, group, tooltip) in arginfo.items():
|
||||||
state = procdesc_to_entry(procdesc).default_state(procdesc)
|
state = procdesc_to_entry(procdesc).default_state(procdesc)
|
||||||
|
@ -544,24 +626,8 @@ class ExperimentManager:
|
||||||
"state": state, # mutated by entries
|
"state": state, # mutated by entries
|
||||||
}
|
}
|
||||||
self.submission_arguments[expurl] = arguments
|
self.submission_arguments[expurl] = arguments
|
||||||
self.argument_ui_names[expurl] = ui_name
|
|
||||||
return arguments
|
return arguments
|
||||||
|
|
||||||
def set_argument_value(self, expurl, name, value):
|
|
||||||
try:
|
|
||||||
argument = self.submission_arguments[expurl][name]
|
|
||||||
if argument["desc"]["ty"] == "Scannable":
|
|
||||||
ty = value["ty"]
|
|
||||||
argument["state"]["selected"] = ty
|
|
||||||
argument["state"][ty] = value
|
|
||||||
else:
|
|
||||||
argument["state"] = value
|
|
||||||
if expurl in self.open_experiments.keys():
|
|
||||||
self.open_experiments[expurl].argeditor.update_argument(name, argument)
|
|
||||||
except:
|
|
||||||
logger.warn("Failed to set value for argument \"{}\" in experiment: {}."
|
|
||||||
.format(name, expurl), exc_info=1)
|
|
||||||
|
|
||||||
def get_submission_arguments(self, expurl):
|
def get_submission_arguments(self, expurl):
|
||||||
if expurl in self.submission_arguments:
|
if expurl in self.submission_arguments:
|
||||||
return self.submission_arguments[expurl]
|
return self.submission_arguments[expurl]
|
||||||
|
@ -569,9 +635,9 @@ class ExperimentManager:
|
||||||
if expurl[:5] != "repo:":
|
if expurl[:5] != "repo:":
|
||||||
raise ValueError("Submission arguments must be preinitialized "
|
raise ValueError("Submission arguments must be preinitialized "
|
||||||
"when not using repository")
|
"when not using repository")
|
||||||
class_desc = self.explist[expurl[5:]]
|
arginfo = self.explist[expurl[5:]]["arginfo"]
|
||||||
return self.initialize_submission_arguments(expurl, class_desc["arginfo"],
|
arguments = self.initialize_submission_arguments(expurl, arginfo)
|
||||||
class_desc.get("argument_ui", None))
|
return arguments
|
||||||
|
|
||||||
def open_experiment(self, expurl):
|
def open_experiment(self, expurl):
|
||||||
if expurl in self.open_experiments:
|
if expurl in self.open_experiments:
|
||||||
|
@ -589,7 +655,7 @@ class ExperimentManager:
|
||||||
del self.submission_arguments[expurl]
|
del self.submission_arguments[expurl]
|
||||||
dock = _ExperimentDock(self, expurl)
|
dock = _ExperimentDock(self, expurl)
|
||||||
self.open_experiments[expurl] = dock
|
self.open_experiments[expurl] = dock
|
||||||
dock.setAttribute(QtCore.Qt.WidgetAttribute.WA_DeleteOnClose)
|
dock.setAttribute(QtCore.Qt.WA_DeleteOnClose)
|
||||||
self.main_window.centralWidget().addSubWindow(dock)
|
self.main_window.centralWidget().addSubWindow(dock)
|
||||||
dock.show()
|
dock.show()
|
||||||
dock.sigClosed.connect(partial(self.on_dock_closed, expurl))
|
dock.sigClosed.connect(partial(self.on_dock_closed, expurl))
|
||||||
|
@ -608,13 +674,8 @@ class ExperimentManager:
|
||||||
del self.open_experiments[expurl]
|
del self.open_experiments[expurl]
|
||||||
|
|
||||||
async def _submit_task(self, expurl, *args):
|
async def _submit_task(self, expurl, *args):
|
||||||
try:
|
rid = await self.schedule_ctl.submit(*args)
|
||||||
rid = await self.schedule_ctl.submit(*args)
|
logger.info("Submitted '%s', RID is %d", expurl, rid)
|
||||||
except KeyError:
|
|
||||||
expid = args[1]
|
|
||||||
logger.error("Submission failed - revision \"%s\" was not found", expid["repo_rev"])
|
|
||||||
else:
|
|
||||||
logger.info("Submitted '%s', RID is %d", expurl, rid)
|
|
||||||
|
|
||||||
def submit(self, expurl):
|
def submit(self, expurl):
|
||||||
file, class_name, _ = self.resolve_expurl(expurl)
|
file, class_name, _ = self.resolve_expurl(expurl)
|
||||||
|
@ -627,14 +688,7 @@ class ExperimentManager:
|
||||||
entry_cls = procdesc_to_entry(argument["desc"])
|
entry_cls = procdesc_to_entry(argument["desc"])
|
||||||
argument_values[name] = entry_cls.state_to_value(argument["state"])
|
argument_values[name] = entry_cls.state_to_value(argument["state"])
|
||||||
|
|
||||||
try:
|
|
||||||
devarg_override = parse_devarg_override(options["devarg_override"])
|
|
||||||
except:
|
|
||||||
logger.error("Failed to parse device argument overrides for %s", expurl)
|
|
||||||
return
|
|
||||||
|
|
||||||
expid = {
|
expid = {
|
||||||
"devarg_override": devarg_override,
|
|
||||||
"log_level": options["log_level"],
|
"log_level": options["log_level"],
|
||||||
"file": file,
|
"file": file,
|
||||||
"class_name": class_name,
|
"class_name": class_name,
|
||||||
|
@ -671,9 +725,9 @@ class ExperimentManager:
|
||||||
repo_match = "repo_rev" in expid
|
repo_match = "repo_rev" in expid
|
||||||
else:
|
else:
|
||||||
repo_match = "repo_rev" not in expid
|
repo_match = "repo_rev" not in expid
|
||||||
if repo_match and \
|
if (repo_match and
|
||||||
("file" in expid and expid["file"] == file) and \
|
expid["file"] == file and
|
||||||
expid["class_name"] == class_name:
|
expid["class_name"] == class_name):
|
||||||
rids.append(rid)
|
rids.append(rid)
|
||||||
asyncio.ensure_future(self._request_term_multiple(rids))
|
asyncio.ensure_future(self._request_term_multiple(rids))
|
||||||
|
|
||||||
|
@ -685,15 +739,13 @@ class ExperimentManager:
|
||||||
revision = None
|
revision = None
|
||||||
description = await self.experiment_db_ctl.examine(
|
description = await self.experiment_db_ctl.examine(
|
||||||
file, use_repository, revision)
|
file, use_repository, revision)
|
||||||
class_desc = description[class_name]
|
return description[class_name]
|
||||||
return class_desc, class_desc.get("argument_ui", None)
|
|
||||||
|
|
||||||
async def open_file(self, file):
|
async def open_file(self, file):
|
||||||
description = await self.experiment_db_ctl.examine(file, False)
|
description = await self.experiment_db_ctl.examine(file, False)
|
||||||
for class_name, class_desc in description.items():
|
for class_name, class_desc in description.items():
|
||||||
expurl = "file:{}@{}".format(class_name, file)
|
expurl = "file:{}@{}".format(class_name, file)
|
||||||
self.initialize_submission_arguments(expurl, class_desc["arginfo"],
|
self.initialize_submission_arguments(expurl, class_desc["arginfo"])
|
||||||
class_desc.get("argument_ui", None))
|
|
||||||
if expurl in self.open_experiments:
|
if expurl in self.open_experiments:
|
||||||
self.open_experiments[expurl].close()
|
self.open_experiments[expurl].close()
|
||||||
self.open_experiment(expurl)
|
self.open_experiment(expurl)
|
||||||
|
@ -706,7 +758,6 @@ class ExperimentManager:
|
||||||
"options": self.submission_options,
|
"options": self.submission_options,
|
||||||
"arguments": self.submission_arguments,
|
"arguments": self.submission_arguments,
|
||||||
"docks": self.dock_states,
|
"docks": self.dock_states,
|
||||||
"argument_uis": self.argument_ui_names,
|
|
||||||
"open_docks": set(self.open_experiments.keys())
|
"open_docks": set(self.open_experiments.keys())
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -717,7 +768,6 @@ class ExperimentManager:
|
||||||
self.submission_scheduling = state["scheduling"]
|
self.submission_scheduling = state["scheduling"]
|
||||||
self.submission_options = state["options"]
|
self.submission_options = state["options"]
|
||||||
self.submission_arguments = state["arguments"]
|
self.submission_arguments = state["arguments"]
|
||||||
self.argument_ui_names = state.get("argument_uis", {})
|
|
||||||
for expurl in state["open_docks"]:
|
for expurl in state["open_docks"]:
|
||||||
self.open_experiment(expurl)
|
self.open_experiment(expurl)
|
||||||
|
|
||||||
|
@ -727,7 +777,6 @@ class ExperimentManager:
|
||||||
|
|
||||||
self.is_quick_open_shown = True
|
self.is_quick_open_shown = True
|
||||||
dialog = _QuickOpenDialog(self)
|
dialog = _QuickOpenDialog(self)
|
||||||
|
|
||||||
def closed():
|
def closed():
|
||||||
self.is_quick_open_shown = False
|
self.is_quick_open_shown = False
|
||||||
dialog.closed.connect(closed)
|
dialog.closed.connect(closed)
|
||||||
|
|
|
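The `argument_ui_classes` registry removed in the hunks above is a plain class-level dict keyed by an experiment's `argument_ui` string; `get_argument_editor_class` falls back to the built-in `_ArgumentEditor` when no handler is registered. As a rough, hedged illustration of how a dashboard plugin could use it (the UI name and editor class below are hypothetical and not taken from this diff), registration amounts to one assignment, with the custom class expected to provide the same interface as the built-in editor:

# Hypothetical plugin-side sketch; "my_ui" and MyArgumentEditor are illustrative names.
from artiq.dashboard import experiments

class MyArgumentEditor(experiments._ArgumentEditor):
    # Reuse the built-in editor and customize behaviour as needed.
    pass

# Experiments that declare argument_ui = "my_ui" would then be shown with
# MyArgumentEditor; unknown names fall back to the default editor.
experiments.ExperimentManager.argument_ui_classes["my_ui"] = MyArgumentEditor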
@@ -3,7 +3,7 @@ import logging
 import re
 from functools import partial

-from PyQt6 import QtCore, QtGui, QtWidgets
+from PyQt5 import QtCore, QtWidgets

 from artiq.gui.tools import LayoutWidget
 from artiq.gui.models import DictSyncTreeSepModel
@@ -37,8 +36,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
         self.file_list.doubleClicked.connect(self.accept)

         buttons = QtWidgets.QDialogButtonBox(
-            QtWidgets.QDialogButtonBox.StandardButton.Ok |
-            QtWidgets.QDialogButtonBox.StandardButton.Cancel)
+            QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel)
         grid.addWidget(buttons, 2, 0, 1, 2)
         buttons.accepted.connect(self.accept)
         buttons.rejected.connect(self.reject)
@@ -53,7 +52,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
         item = QtWidgets.QListWidgetItem()
         item.setText("..")
         item.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_FileDialogToParent))
+            QtWidgets.QStyle.SP_FileDialogToParent))
         self.file_list.addItem(item)

         try:
@@ -65,9 +64,9 @@ class _OpenFileDialog(QtWidgets.QDialog):
             return
         for name in sorted(contents, key=lambda x: (x[-1] not in "\\/", x)):
             if name[-1] in "\\/":
-                icon = QtWidgets.QStyle.StandardPixmap.SP_DirIcon
+                icon = QtWidgets.QStyle.SP_DirIcon
             else:
-                icon = QtWidgets.QStyle.StandardPixmap.SP_FileIcon
+                icon = QtWidgets.QStyle.SP_FileIcon
                 if name[-3:] != ".py":
                     continue
             item = QtWidgets.QListWidgetItem()
@@ -95,7 +94,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
                    else:
                        break
                self.explorer.current_directory = \
-                    self.explorer.current_directory[:idx + 1]
+                    self.explorer.current_directory[:idx+1]
                if self.explorer.current_directory == "/":
                    self.explorer.current_directory = ""
                asyncio.ensure_future(self.refresh_view())
@@ -104,7 +103,6 @@ class _OpenFileDialog(QtWidgets.QDialog):
                asyncio.ensure_future(self.refresh_view())
            else:
                file = self.explorer.current_directory + selected

                async def open_task():
                    try:
                        await self.exp_manager.open_file(file)
@@ -161,11 +159,11 @@ class WaitingPanel(LayoutWidget):
 class ExplorerDock(QtWidgets.QDockWidget):
     def __init__(self, exp_manager, d_shortcuts,
                  explist_sub, explist_status_sub,
-                 schedule_ctl, experiment_db_ctl, device_db_ctl):
+                 schedule_ctl, experiment_db_ctl):
         QtWidgets.QDockWidget.__init__(self, "Explorer")
         self.setObjectName("Explorer")
-        self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
-                         QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
+        self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
+                         QtWidgets.QDockWidget.DockWidgetFloatable)

         top_widget = LayoutWidget()
         self.setWidget(top_widget)
@@ -176,7 +174,7 @@ class ExplorerDock(QtWidgets.QDockWidget):

         top_widget.addWidget(QtWidgets.QLabel("Revision:"), 0, 0)
         self.revision = QtWidgets.QLabel()
-        self.revision.setTextInteractionFlags(QtCore.Qt.TextInteractionFlag.TextSelectableByMouse)
+        self.revision.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse)
         top_widget.addWidget(self.revision, 0, 1)

         self.stack = QtWidgets.QStackedWidget()
@@ -188,14 +186,14 @@ class ExplorerDock(QtWidgets.QDockWidget):

         self.el = QtWidgets.QTreeView()
         self.el.setHeaderHidden(True)
-        self.el.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectItems)
+        self.el.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectItems)
         self.el.doubleClicked.connect(
             partial(self.expname_action, "open_experiment"))
         self.el_buttons.addWidget(self.el, 0, 0, colspan=2)

         open = QtWidgets.QPushButton("Open")
         open.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
+            QtWidgets.QStyle.SP_DialogOpenButton))
         open.setToolTip("Open the selected experiment (Return)")
         self.el_buttons.addWidget(open, 1, 0)
         open.clicked.connect(
@@ -203,7 +201,7 @@ class ExplorerDock(QtWidgets.QDockWidget):

         submit = QtWidgets.QPushButton("Submit")
         submit.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
+            QtWidgets.QStyle.SP_DialogOkButton))
         submit.setToolTip("Schedule the selected experiment (Ctrl+Return)")
         self.el_buttons.addWidget(submit, 1, 1)
         submit.clicked.connect(
@@ -212,56 +210,49 @@ class ExplorerDock(QtWidgets.QDockWidget):
         self.explist_model = Model(dict())
         explist_sub.add_setmodel_callback(self.set_model)

-        self.el.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
-        open_action = QtGui.QAction("Open", self.el)
+        self.el.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
+        open_action = QtWidgets.QAction("Open", self.el)
         open_action.triggered.connect(
             partial(self.expname_action, "open_experiment"))
         open_action.setShortcut("RETURN")
-        open_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+        open_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
         self.el.addAction(open_action)
-        submit_action = QtGui.QAction("Submit", self.el)
+        submit_action = QtWidgets.QAction("Submit", self.el)
         submit_action.triggered.connect(
             partial(self.expname_action, "submit"))
         submit_action.setShortcut("CTRL+RETURN")
-        submit_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+        submit_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
         self.el.addAction(submit_action)
-        reqterm_action = QtGui.QAction("Request termination of instances", self.el)
+        reqterm_action = QtWidgets.QAction("Request termination of instances", self.el)
         reqterm_action.triggered.connect(
             partial(self.expname_action, "request_inst_term"))
         reqterm_action.setShortcut("CTRL+BACKSPACE")
-        reqterm_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
+        reqterm_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
         self.el.addAction(reqterm_action)

-        set_shortcut_menu = QtWidgets.QMenu(self.el)
+        set_shortcut_menu = QtWidgets.QMenu()
         for i in range(12):
-            action = QtGui.QAction("F" + str(i+1), self.el)
+            action = QtWidgets.QAction("F" + str(i+1), self.el)
             action.triggered.connect(partial(self.set_shortcut, i))
             set_shortcut_menu.addAction(action)

-        set_shortcut_action = QtGui.QAction("Set shortcut", self.el)
+        set_shortcut_action = QtWidgets.QAction("Set shortcut", self.el)
         set_shortcut_action.setMenu(set_shortcut_menu)
         self.el.addAction(set_shortcut_action)

-        sep = QtGui.QAction(self.el)
+        sep = QtWidgets.QAction(self.el)
         sep.setSeparator(True)
         self.el.addAction(sep)

-        scan_repository_action = QtGui.QAction("Scan repository HEAD",
+        scan_repository_action = QtWidgets.QAction("Scan repository HEAD",
                                                    self.el)

         def scan_repository():
             asyncio.ensure_future(experiment_db_ctl.scan_repository_async())
         scan_repository_action.triggered.connect(scan_repository)
         self.el.addAction(scan_repository_action)

-        scan_ddb_action = QtGui.QAction("Scan device database", self.el)
-
-        def scan_ddb():
-            asyncio.ensure_future(device_db_ctl.scan())
-        scan_ddb_action.triggered.connect(scan_ddb)
-        self.el.addAction(scan_ddb_action)
-
         self.current_directory = ""
-        open_file_action = QtGui.QAction("Open file outside repository",
+        open_file_action = QtWidgets.QAction("Open file outside repository",
                                              self.el)
         open_file_action.triggered.connect(
             lambda: _OpenFileDialog(self, self.exp_manager,
@@ -295,7 +286,7 @@ class ExplorerDock(QtWidgets.QDockWidget):
         if expname is not None:
             expurl = "repo:" + expname
             self.d_shortcuts.set_shortcut(nr, expurl)
-            logger.info("Set shortcut F%d to '%s'", nr + 1, expurl)
+            logger.info("Set shortcut F%d to '%s'", nr+1, expurl)

     def update_scanning(self, scanning):
         if scanning:

@@ -1,158 +0,0 @@
-import logging
-import asyncio
-
-from PyQt6 import QtCore, QtWidgets, QtGui
-
-from artiq.gui.models import DictSyncModel
-from artiq.gui.entries import EntryTreeWidget, procdesc_to_entry
-from artiq.gui.tools import LayoutWidget
-
-
-logger = logging.getLogger(__name__)
-
-
-class Model(DictSyncModel):
-    def __init__(self, init):
-        DictSyncModel.__init__(self, ["RID", "Title", "Args"], init)
-
-    def convert(self, k, v, column):
-        if column == 0:
-            return k
-        elif column == 1:
-            txt = ": " + v["title"] if v["title"] != "" else ""
-            return str(k) + txt
-        elif column == 2:
-            return v["arglist_desc"]
-        else:
-            raise ValueError
-
-    def sort_key(self, k, v):
-        return k
-
-
-class _InteractiveArgsRequest(EntryTreeWidget):
-    supplied = QtCore.pyqtSignal(int, dict)
-    cancelled = QtCore.pyqtSignal(int)
-
-    def __init__(self, rid, arglist_desc):
-        EntryTreeWidget.__init__(self)
-        self.rid = rid
-        self.arguments = dict()
-        for key, procdesc, group, tooltip in arglist_desc:
-            self.arguments[key] = {"desc": procdesc, "group": group, "tooltip": tooltip}
-            self.set_argument(key, self.arguments[key])
-        self.quickStyleClicked.connect(self.supply)
-        cancel_btn = QtWidgets.QPushButton("Cancel")
-        cancel_btn.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogCancelButton))
-        cancel_btn.clicked.connect(self.cancel)
-        supply_btn = QtWidgets.QPushButton("Supply")
-        supply_btn.setIcon(QtWidgets.QApplication.style().standardIcon(
-            QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
-        supply_btn.clicked.connect(self.supply)
-        buttons = LayoutWidget()
-        buttons.addWidget(cancel_btn, 1, 1)
-        buttons.addWidget(supply_btn, 1, 2)
-        buttons.layout.setColumnStretch(0, 1)
-        buttons.layout.setColumnStretch(1, 0)
-        buttons.layout.setColumnStretch(2, 0)
-        buttons.layout.setColumnStretch(3, 1)
-        self.setItemWidget(self.bottom_item, 1, buttons)
-
-    def supply(self):
-        argument_values = dict()
-        for key, argument in self.arguments.items():
-            entry_cls = procdesc_to_entry(argument["desc"])
-            argument_values[key] = entry_cls.state_to_value(argument["state"])
-        self.supplied.emit(self.rid, argument_values)
-
-    def cancel(self):
-        self.cancelled.emit(self.rid)
-
-
-class _InteractiveArgsView(QtWidgets.QStackedWidget):
-    supplied = QtCore.pyqtSignal(int, dict)
-    cancelled = QtCore.pyqtSignal(int)
-
-    def __init__(self):
-        QtWidgets.QStackedWidget.__init__(self)
-        self.tabs = QtWidgets.QTabWidget()
-        self.default_label = QtWidgets.QLabel("No pending interactive arguments requests.")
-        self.default_label.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
-        font = QtGui.QFont(self.default_label.font())
-        font.setItalic(True)
-        self.default_label.setFont(font)
-        self.addWidget(self.tabs)
-        self.addWidget(self.default_label)
-        self.model = Model({})
-
-    def setModel(self, model):
-        self.setCurrentIndex(1)
-        for i in range(self.tabs.count()):
-            widget = self.tabs.widget(i)
-            self.tabs.removeTab(i)
-            widget.deleteLater()
-        self.model = model
-        self.model.rowsInserted.connect(self.rowsInserted)
-        self.model.rowsRemoved.connect(self.rowsRemoved)
-        for i in range(self.model.rowCount(QtCore.QModelIndex())):
-            self._insert_widget(i)
-
-    def _insert_widget(self, row):
-        rid = self.model.data(self.model.index(row, 0),
-                              QtCore.Qt.ItemDataRole.DisplayRole)
-        title = self.model.data(self.model.index(row, 1),
-                                QtCore.Qt.ItemDataRole.DisplayRole)
-        arglist_desc = self.model.data(self.model.index(row, 2),
-                                       QtCore.Qt.ItemDataRole.DisplayRole)
-        inter_args_request = _InteractiveArgsRequest(rid, arglist_desc)
-        inter_args_request.supplied.connect(self.supplied)
-        inter_args_request.cancelled.connect(self.cancelled)
-        self.tabs.insertTab(row, inter_args_request, title)
-
-    def rowsInserted(self, parent, first, last):
-        assert first == last
-        self.setCurrentIndex(0)
-        self._insert_widget(first)
-
-    def rowsRemoved(self, parent, first, last):
-        assert first == last
-        widget = self.tabs.widget(first)
-        self.tabs.removeTab(first)
-        widget.deleteLater()
-        if self.tabs.count() == 0:
-            self.setCurrentIndex(1)
-
-
-class InteractiveArgsDock(QtWidgets.QDockWidget):
-    def __init__(self, interactive_args_sub, interactive_args_rpc):
-        QtWidgets.QDockWidget.__init__(self, "Interactive Args")
-        self.setObjectName("Interactive Args")
-        self.setFeatures(
-            self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)
-        self.interactive_args_rpc = interactive_args_rpc
-        self.request_view = _InteractiveArgsView()
-        self.request_view.supplied.connect(self.supply)
-        self.request_view.cancelled.connect(self.cancel)
-        self.setWidget(self.request_view)
-        interactive_args_sub.add_setmodel_callback(self.request_view.setModel)
-
-    def supply(self, rid, values):
-        asyncio.ensure_future(self._supply_task(rid, values))
-
-    async def _supply_task(self, rid, values):
-        try:
-            await self.interactive_args_rpc.supply(rid, values)
-        except Exception:
-            logger.error("failed to supply interactive arguments for experiment: %d",
-                         rid, exc_info=True)
-
-    def cancel(self, rid):
-        asyncio.ensure_future(self._cancel_task(rid))
-
-    async def _cancel_task(self, rid):
-        try:
-            await self.interactive_args_rpc.cancel(rid)
-        except Exception:
-            logger.error("failed to cancel interactive args request for experiment: %d",
-                         rid, exc_info=True)
File diff suppressed because it is too large
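Most of the churn in the dashboard hunks above and below follows one mechanical pattern: PyQt6 exposes Qt enums as scoped Python enums (members must be qualified with their enum type), and classes such as QAction and QShortcut moved from QtWidgets to QtGui. A minimal standalone sketch of the enum half of that difference (the `style` parameter stands for any QStyle instance; this is illustrative code, not part of the diff):

def ok_icon_qt5(style):
    from PyQt5 import QtWidgets
    # PyQt5: enum members are attributes of QStyle itself.
    return style.standardIcon(QtWidgets.QStyle.SP_DialogOkButton)

def ok_icon_qt6(style):
    from PyQt6 import QtWidgets
    # PyQt6: enum members must be qualified with the enum type (StandardPixmap).
    return style.standardIcon(QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton)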
@ -3,7 +3,7 @@ import time
|
||||||
from functools import partial
|
from functools import partial
|
||||||
import logging
|
import logging
|
||||||
|
|
||||||
from PyQt6 import QtCore, QtWidgets, QtGui
|
from PyQt5 import QtCore, QtWidgets, QtGui
|
||||||
|
|
||||||
from artiq.gui.models import DictSyncModel
|
from artiq.gui.models import DictSyncModel
|
||||||
from artiq.tools import elide
|
from artiq.tools import elide
|
||||||
|
@ -15,8 +15,9 @@ logger = logging.getLogger(__name__)
|
||||||
class Model(DictSyncModel):
|
class Model(DictSyncModel):
|
||||||
def __init__(self, init):
|
def __init__(self, init):
|
||||||
DictSyncModel.__init__(self,
|
DictSyncModel.__init__(self,
|
||||||
["RID", "Pipeline", "Status", "Prio", "Due date",
|
["RID", "Pipeline", "Status", "Prio", "Due date",
|
||||||
"Revision", "File", "Class name"], init)
|
"Revision", "File", "Class name"],
|
||||||
|
init)
|
||||||
|
|
||||||
def sort_key(self, k, v):
|
def sort_key(self, k, v):
|
||||||
# order by priority, and then by due date and RID
|
# order by priority, and then by due date and RID
|
||||||
|
@ -47,7 +48,7 @@ class Model(DictSyncModel):
|
||||||
else:
|
else:
|
||||||
return "Outside repo."
|
return "Outside repo."
|
||||||
elif column == 6:
|
elif column == 6:
|
||||||
return v["expid"].get("file", "<none>")
|
return v["expid"]["file"]
|
||||||
elif column == 7:
|
elif column == 7:
|
||||||
if v["expid"]["class_name"] is None:
|
if v["expid"]["class_name"] is None:
|
||||||
return ""
|
return ""
|
||||||
|
@ -61,31 +62,31 @@ class ScheduleDock(QtWidgets.QDockWidget):
|
||||||
def __init__(self, schedule_ctl, schedule_sub):
|
def __init__(self, schedule_ctl, schedule_sub):
|
||||||
QtWidgets.QDockWidget.__init__(self, "Schedule")
|
QtWidgets.QDockWidget.__init__(self, "Schedule")
|
||||||
self.setObjectName("Schedule")
|
self.setObjectName("Schedule")
|
||||||
self.setFeatures(QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetMovable |
|
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||||
QtWidgets.QDockWidget.DockWidgetFeature.DockWidgetFloatable)
|
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||||
|
|
||||||
self.schedule_ctl = schedule_ctl
|
self.schedule_ctl = schedule_ctl
|
||||||
|
|
||||||
self.table = QtWidgets.QTableView()
|
self.table = QtWidgets.QTableView()
|
||||||
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectionBehavior.SelectRows)
|
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
|
||||||
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SelectionMode.SingleSelection)
|
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection)
|
||||||
self.table.verticalHeader().setSectionResizeMode(
|
self.table.verticalHeader().setSectionResizeMode(
|
||||||
QtWidgets.QHeaderView.ResizeMode.ResizeToContents)
|
QtWidgets.QHeaderView.ResizeToContents)
|
||||||
self.table.verticalHeader().hide()
|
self.table.verticalHeader().hide()
|
||||||
self.setWidget(self.table)
|
self.setWidget(self.table)
|
||||||
|
|
||||||
self.table.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
|
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
|
||||||
request_termination_action = QtGui.QAction("Request termination", self.table)
|
request_termination_action = QtWidgets.QAction("Request termination", self.table)
|
||||||
request_termination_action.triggered.connect(partial(self.delete_clicked, True))
|
request_termination_action.triggered.connect(partial(self.delete_clicked, True))
|
||||||
request_termination_action.setShortcut("DELETE")
|
request_termination_action.setShortcut("DELETE")
|
||||||
request_termination_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
request_termination_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||||
self.table.addAction(request_termination_action)
|
self.table.addAction(request_termination_action)
|
||||||
delete_action = QtGui.QAction("Delete", self.table)
|
delete_action = QtWidgets.QAction("Delete", self.table)
|
||||||
delete_action.triggered.connect(partial(self.delete_clicked, False))
|
delete_action.triggered.connect(partial(self.delete_clicked, False))
|
||||||
delete_action.setShortcut("SHIFT+DELETE")
|
delete_action.setShortcut("SHIFT+DELETE")
|
||||||
delete_action.setShortcutContext(QtCore.Qt.ShortcutContext.WidgetShortcut)
|
delete_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
|
||||||
self.table.addAction(delete_action)
|
self.table.addAction(delete_action)
|
||||||
terminate_pipeline = QtGui.QAction(
|
terminate_pipeline = QtWidgets.QAction(
|
||||||
"Gracefully terminate all in pipeline", self.table)
|
"Gracefully terminate all in pipeline", self.table)
|
||||||
terminate_pipeline.triggered.connect(self.terminate_pipeline_clicked)
|
terminate_pipeline.triggered.connect(self.terminate_pipeline_clicked)
|
||||||
self.table.addAction(terminate_pipeline)
|
self.table.addAction(terminate_pipeline)
|
||||||
|
@ -95,14 +96,14 @@ class ScheduleDock(QtWidgets.QDockWidget):
|
||||||
|
|
||||||
cw = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
cw = QtGui.QFontMetrics(self.font()).averageCharWidth()
|
||||||
h = self.table.horizontalHeader()
|
h = self.table.horizontalHeader()
|
||||||
h.resizeSection(0, 7 * cw)
|
h.resizeSection(0, 7*cw)
|
||||||
h.resizeSection(1, 12 * cw)
|
h.resizeSection(1, 12*cw)
|
||||||
h.resizeSection(2, 16 * cw)
|
h.resizeSection(2, 16*cw)
|
||||||
h.resizeSection(3, 6 * cw)
|
h.resizeSection(3, 6*cw)
|
||||||
h.resizeSection(4, 16 * cw)
|
h.resizeSection(4, 16*cw)
|
||||||
h.resizeSection(5, 30 * cw)
|
h.resizeSection(5, 30*cw)
|
||||||
h.resizeSection(6, 20 * cw)
|
h.resizeSection(6, 20*cw)
|
||||||
h.resizeSection(7, 20 * cw)
|
h.resizeSection(7, 20*cw)
|
||||||
|
|
||||||
def set_model(self, model):
|
def set_model(self, model):
|
||||||
self.table_model = model
|
self.table_model = model
|
||||||
|
@ -142,7 +143,7 @@ class ScheduleDock(QtWidgets.QDockWidget):
|
||||||
selected_rid = self.table_model.row_to_key[row]
|
selected_rid = self.table_model.row_to_key[row]
|
||||||
pipeline = self.table_model.backing_store[selected_rid]["pipeline"]
|
pipeline = self.table_model.backing_store[selected_rid]["pipeline"]
|
||||||
logger.info("Requesting termination of all "
|
logger.info("Requesting termination of all "
|
||||||
"experiments in pipeline '%s'", pipeline)
|
"experiments in pipeline '%s'", pipeline)
|
||||||
|
|
||||||
rids = set()
|
rids = set()
|
||||||
for rid, info in self.table_model.backing_store.items():
|
for rid, info in self.table_model.backing_store.items():
|
||||||
|
@ -150,6 +151,7 @@ class ScheduleDock(QtWidgets.QDockWidget):
|
||||||
rids.add(rid)
|
rids.add(rid)
|
||||||
asyncio.ensure_future(self.request_term_multiple(rids))
|
asyncio.ensure_future(self.request_term_multiple(rids))
|
||||||
|
|
||||||
|
|
||||||
def save_state(self):
|
def save_state(self):
|
||||||
return bytes(self.table.horizontalHeader().saveState())
|
return bytes(self.table.horizontalHeader().saveState())
|
||||||
|
|
||||||
|
|
|
@ -1,7 +1,9 @@
|
||||||
import logging
|
import logging
|
||||||
from functools import partial
|
from functools import partial
|
||||||
|
|
||||||
from PyQt6 import QtCore, QtGui, QtWidgets
|
from PyQt5 import QtCore, QtWidgets
|
||||||
|
|
||||||
|
from artiq.gui.tools import LayoutWidget
|
||||||
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
@ -11,8 +13,8 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
||||||
def __init__(self, main_window, exp_manager):
|
def __init__(self, main_window, exp_manager):
|
||||||
QtWidgets.QDockWidget.__init__(self, "Shortcuts")
|
QtWidgets.QDockWidget.__init__(self, "Shortcuts")
|
||||||
self.setObjectName("Shortcuts")
|
self.setObjectName("Shortcuts")
|
||||||
self.setFeatures(self.DockWidgetFeature.DockWidgetMovable |
|
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
|
||||||
self.DockWidgetFeature.DockWidgetFloatable)
|
QtWidgets.QDockWidget.DockWidgetFloatable)
|
||||||
|
|
||||||
layout = QtWidgets.QGridLayout()
|
layout = QtWidgets.QGridLayout()
|
||||||
top_widget = QtWidgets.QWidget()
|
top_widget = QtWidgets.QWidget()
|
||||||
|
@ -33,28 +35,28 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
||||||
for i in range(12):
|
for i in range(12):
|
||||||
row = i + 1
|
row = i + 1
|
||||||
|
|
||||||
layout.addWidget(QtWidgets.QLabel("F" + str(i + 1)), row, 0)
|
layout.addWidget(QtWidgets.QLabel("F" + str(i+1)), row, 0)
|
||||||
|
|
||||||
label = QtWidgets.QLabel()
|
label = QtWidgets.QLabel()
|
||||||
label.setSizePolicy(QtWidgets.QSizePolicy.Policy.Ignored,
|
label.setSizePolicy(QtWidgets.QSizePolicy.Ignored,
|
||||||
QtWidgets.QSizePolicy.Policy.Ignored)
|
QtWidgets.QSizePolicy.Ignored)
|
||||||
layout.addWidget(label, row, 1)
|
layout.addWidget(label, row, 1)
|
||||||
|
|
||||||
clear = QtWidgets.QToolButton()
|
clear = QtWidgets.QToolButton()
|
||||||
clear.setIcon(QtWidgets.QApplication.style().standardIcon(
|
clear.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||||
QtWidgets.QStyle.StandardPixmap.SP_DialogDiscardButton))
|
QtWidgets.QStyle.SP_DialogDiscardButton))
|
||||||
layout.addWidget(clear, row, 2)
|
layout.addWidget(clear, row, 2)
|
||||||
clear.clicked.connect(partial(self.set_shortcut, i, ""))
|
clear.clicked.connect(partial(self.set_shortcut, i, ""))
|
||||||
|
|
||||||
open = QtWidgets.QToolButton()
|
open = QtWidgets.QToolButton()
|
||||||
open.setIcon(QtWidgets.QApplication.style().standardIcon(
|
open.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOpenButton))
|
QtWidgets.QStyle.SP_DialogOpenButton))
|
||||||
layout.addWidget(open, row, 3)
|
layout.addWidget(open, row, 3)
|
||||||
open.clicked.connect(partial(self._open_experiment, i))
|
open.clicked.connect(partial(self._open_experiment, i))
|
||||||
|
|
||||||
submit = QtWidgets.QPushButton("Submit")
|
submit = QtWidgets.QPushButton("Submit")
|
||||||
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
submit.setIcon(QtWidgets.QApplication.style().standardIcon(
|
||||||
QtWidgets.QStyle.StandardPixmap.SP_DialogOkButton))
|
QtWidgets.QStyle.SP_DialogOkButton))
|
||||||
layout.addWidget(submit, row, 4)
|
layout.addWidget(submit, row, 4)
|
||||||
submit.clicked.connect(partial(self._activated, i))
|
submit.clicked.connect(partial(self._activated, i))
|
||||||
|
|
||||||
|
@ -68,8 +70,8 @@ class ShortcutsDock(QtWidgets.QDockWidget):
|
||||||
"open": open,
|
"open": open,
|
||||||
"submit": submit
|
"submit": submit
|
||||||
}
|
}
|
||||||
shortcut = QtGui.QShortcut("F" + str(i+1), main_window)
|
shortcut = QtWidgets.QShortcut("F" + str(i+1), main_window)
|
||||||
shortcut.setContext(QtCore.Qt.ShortcutContext.ApplicationShortcut)
|
shortcut.setContext(QtCore.Qt.ApplicationShortcut)
|
||||||
shortcut.activated.connect(partial(self._activated, i))
|
shortcut.activated.connect(partial(self._activated, i))
|
||||||
|
|
||||||
def _activated(self, nr):
|
def _activated(self, nr):
|
||||||
|
|
|
@@ -1,926 +0,0 @@
import os
import asyncio
import logging
import bisect
import itertools
import math

from PyQt6 import QtCore, QtWidgets, QtGui

import pyqtgraph as pg
import numpy as np

from sipyco.pc_rpc import AsyncioClient
from sipyco import pyon

from artiq.tools import exc_to_warning, short_format
from artiq.coredevice import comm_analyzer
from artiq.coredevice.comm_analyzer import WaveformType
from artiq.gui.tools import LayoutWidget, get_open_file_name, get_save_file_name
from artiq.gui.models import DictSyncTreeSepModel
from artiq.gui.dndwidgets import VDragScrollArea, VDragDropSplitter


logger = logging.getLogger(__name__)

WAVEFORM_MIN_HEIGHT = 50
WAVEFORM_MAX_HEIGHT = 200


class ProxyClient():
    def __init__(self, receive_cb, timeout=5, timer=5, timer_backoff=1.1):
        self.receive_cb = receive_cb
        self.receiver = None
        self.addr = None
        self.port_proxy = None
        self.port = None
        self._reconnect_event = asyncio.Event()
        self.timeout = timeout
        self.timer = timer
        self.timer_cur = timer
        self.timer_backoff = timer_backoff
        self._reconnect_task = asyncio.ensure_future(self._reconnect())

    def update_address(self, addr, port, port_proxy):
        self.addr = addr
        self.port = port
        self.port_proxy = port_proxy
        self._reconnect_event.set()

    async def trigger_proxy_task(self):
        remote = AsyncioClient()
        try:
            try:
                if self.addr is None:
                    logger.error("missing core_analyzer host in device db")
                    return
                await remote.connect_rpc(self.addr, self.port, "coreanalyzer_proxy_control")
            except:
                logger.error("error connecting to analyzer proxy control", exc_info=True)
                return
            await remote.trigger()
        except:
            logger.error("analyzer proxy reported failure", exc_info=True)
        finally:
            remote.close_rpc()

    async def _reconnect(self):
        while True:
            await self._reconnect_event.wait()
            self._reconnect_event.clear()
            if self.receiver is not None:
                await self.receiver.close()
                self.receiver = None
            new_receiver = comm_analyzer.AnalyzerProxyReceiver(
                self.receive_cb, self.disconnect_cb)
            try:
                if self.addr is not None:
                    await asyncio.wait_for(new_receiver.connect(self.addr, self.port_proxy),
                                           self.timeout)
                    logger.info("ARTIQ dashboard connected to analyzer proxy (%s)", self.addr)
                    self.timer_cur = self.timer
                    self.receiver = new_receiver
                continue
            except Exception:
                logger.error("error connecting to analyzer proxy", exc_info=True)
            try:
                await asyncio.wait_for(self._reconnect_event.wait(), self.timer_cur)
            except asyncio.TimeoutError:
                self.timer_cur *= self.timer_backoff
                self._reconnect_event.set()
            else:
                self.timer_cur = self.timer

    async def close(self):
        self._reconnect_task.cancel()
        try:
            await asyncio.wait_for(self._reconnect_task, None)
        except asyncio.CancelledError:
            pass
        if self.receiver is not None:
            await self.receiver.close()

    def disconnect_cb(self):
        logger.error("lost connection to analyzer proxy")
        self._reconnect_event.set()


class _BackgroundItem(pg.GraphicsWidgetAnchor, pg.GraphicsWidget):
    def __init__(self, parent, rect):
        pg.GraphicsWidget.__init__(self, parent)
        pg.GraphicsWidgetAnchor.__init__(self)
        self.item = QtWidgets.QGraphicsRectItem(rect, self)
        brush = QtGui.QBrush(QtGui.QColor(10, 10, 10, 140))
        self.item.setBrush(brush)


class _BaseWaveform(pg.PlotWidget):
    cursorMove = QtCore.pyqtSignal(float)

    def __init__(self, name, width, precision, unit,
                 parent=None, pen="r", stepMode="right", connect="finite"):
        pg.PlotWidget.__init__(self,
                               parent=parent,
                               x=None,
                               y=None,
                               pen=pen,
                               stepMode=stepMode,
                               connect=connect)

        self.setMinimumHeight(WAVEFORM_MIN_HEIGHT)
        self.setMaximumHeight(WAVEFORM_MAX_HEIGHT)
        self.setMenuEnabled(False)
        self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)

        self.name = name
        self.width = width
        self.precision = precision
        self.unit = unit

        self.x_data = []
        self.y_data = []

        self.plot_item = self.getPlotItem()
        self.plot_item.hideButtons()
        self.plot_item.hideAxis("top")
        self.plot_item.getAxis("bottom").setStyle(showValues=False, tickLength=0)
        self.plot_item.getAxis("left").setStyle(showValues=False, tickLength=0)
        self.plot_item.setRange(yRange=(0, 1), padding=0.1)
        self.plot_item.showGrid(x=True, y=True)

        self.plot_data_item = self.plot_item.listDataItems()[0]
        self.plot_data_item.setClipToView(True)

        self.view_box = self.plot_item.getViewBox()
        self.view_box.setMouseEnabled(x=True, y=False)
        self.view_box.disableAutoRange(axis=pg.ViewBox.YAxis)
        self.view_box.setLimits(xMin=0, minXRange=20)

        self.title_label = pg.LabelItem(self.name, parent=self.plot_item)
        self.title_label.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 0))
        self.title_label.setAttr('justify', 'left')
        self.title_label.setZValue(10)

        rect = self.title_label.boundingRect()
        rect.setHeight(rect.height() * 2)
        rect.setWidth(225)
        self.label_bg = _BackgroundItem(parent=self.plot_item, rect=rect)
        self.label_bg.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 0))

        self.cursor = pg.InfiniteLine()
        self.cursor_y = None
        self.addItem(self.cursor)

        self.cursor_label = pg.LabelItem('', parent=self.plot_item)
        self.cursor_label.anchor(itemPos=(0, 0), parentPos=(0, 0), offset=(0, 20))
        self.cursor_label.setAttr('justify', 'left')
        self.cursor_label.setZValue(10)

    def setStoppedX(self, stopped_x):
        self.stopped_x = stopped_x
        self.view_box.setLimits(xMax=stopped_x)

    def setData(self, data):
        if len(data) == 0:
            self.x_data, self.y_data = [], []
        else:
            self.x_data, self.y_data = zip(*data)

    def onDataChange(self, data):
        raise NotImplementedError

    def onCursorMove(self, x):
        self.cursor.setValue(x)
        if len(self.x_data) < 1:
            return
        ind = bisect.bisect_left(self.x_data, x) - 1
        dr = self.plot_data_item.dataRect()
        self.cursor_y = None
        if dr is not None and 0 <= ind < len(self.y_data):
            self.cursor_y = self.y_data[ind]

    def mouseMoveEvent(self, e):
        if e.buttons() == QtCore.Qt.MouseButton.LeftButton \
                and e.modifiers() == QtCore.Qt.KeyboardModifier.ShiftModifier:
            drag = QtGui.QDrag(self)
            mime = QtCore.QMimeData()
            drag.setMimeData(mime)
            pixmapi = QtWidgets.QApplication.style().standardIcon(
                QtWidgets.QStyle.StandardPixmap.SP_FileIcon)
            drag.setPixmap(pixmapi.pixmap(32))
            drag.exec(QtCore.Qt.DropAction.MoveAction)
        else:
            super().mouseMoveEvent(e)

    def wheelEvent(self, e):
        if e.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
            super().wheelEvent(e)
        else:
            e.ignore()

    def mouseDoubleClickEvent(self, e):
        pos = self.view_box.mapSceneToView(e.position())
        self.cursorMove.emit(pos.x())


class BitWaveform(_BaseWaveform):
    def __init__(self, name, width, precision, unit, parent=None):
        _BaseWaveform.__init__(self, name, width, precision, unit, parent)
        self.plot_item.showGrid(x=True, y=False)
        self._arrows = []

    def onDataChange(self, data):
        try:
            self.setData(data)
            for arw in self._arrows:
                self.removeItem(arw)
            self._arrows = []
            l = len(data)
            display_y = np.empty(l)
            display_x = np.empty(l)
            display_map = {
                "X": 0.5,
                "1": 1,
                "0": 0
            }
            previous_y = None
            for i, coord in enumerate(data):
                x, y = coord
                dis_y = display_map[y]
                if previous_y == y:
                    arw = pg.ArrowItem(pxMode=True, angle=90)
                    self.addItem(arw)
                    self._arrows.append(arw)
                    arw.setPos(x, dis_y)
                display_y[i] = dis_y
                display_x[i] = x
                previous_y = y
            self.plot_data_item.setData(x=display_x, y=display_y)
        except:
            logger.error("Error when displaying waveform: %s", self.name, exc_info=True)
            for arw in self._arrows:
                self.removeItem(arw)
            self.plot_data_item.setData(x=[], y=[])

    def onCursorMove(self, x):
        _BaseWaveform.onCursorMove(self, x)
        if self.cursor_y is not None:
            self.cursor_label.setText(self.cursor_y)
        else:
            self.cursor_label.setText("")


class AnalogWaveform(_BaseWaveform):
    def __init__(self, name, width, precision, unit, parent=None):
        _BaseWaveform.__init__(self, name, width, precision, unit, parent)

    def onDataChange(self, data):
        try:
            self.setData(data)
            self.plot_data_item.setData(x=self.x_data, y=self.y_data)
            if len(data) > 0:
                max_y = max(self.y_data)
                min_y = min(self.y_data)
                self.plot_item.setRange(yRange=(min_y, max_y), padding=0.1)
        except:
            logger.error("Error when displaying waveform: %s", self.name, exc_info=True)
            self.plot_data_item.setData(x=[], y=[])

    def onCursorMove(self, x):
        _BaseWaveform.onCursorMove(self, x)
        if self.cursor_y is not None:
            t = short_format(self.cursor_y, {"precision": self.precision, "unit": self.unit})
        else:
            t = ""
        self.cursor_label.setText(t)


class BitVectorWaveform(_BaseWaveform):
    def __init__(self, name, width, precision, unit, parent=None):
        _BaseWaveform.__init__(self, name, width, precision, parent)
        self._labels = []
        self._format_string = "{:0=" + str(math.ceil(width / 4)) + "X}"
        self.view_box.sigTransformChanged.connect(self._update_labels)
        self.plot_item.showGrid(x=True, y=False)

    def _update_labels(self):
        for label in self._labels:
            self.removeItem(label)
        xmin, xmax = self.view_box.viewRange()[0]
        left_label_i = bisect.bisect_left(self.x_data, xmin)
        right_label_i = bisect.bisect_right(self.x_data, xmax) + 1
        for i, j in itertools.pairwise(range(left_label_i, right_label_i)):
            x1 = self.x_data[i]
            x2 = self.x_data[j] if j < len(self.x_data) else self.stopped_x
            lbl = self._labels[i]
            bounds = lbl.boundingRect()
            bounds_view = self.view_box.mapSceneToView(bounds)
            if bounds_view.boundingRect().width() < x2 - x1:
                self.addItem(lbl)

    def onDataChange(self, data):
        try:
            self.setData(data)
            for lbl in self._labels:
                self.plot_item.removeItem(lbl)
            self._labels = []
            l = len(data)
            display_x = np.empty(l * 2)
            display_y = np.empty(l * 2)
            for i, coord in enumerate(data):
                x, y = coord
                display_x[i * 2] = x
                display_x[i * 2 + 1] = x
                display_y[i * 2] = 0
                display_y[i * 2 + 1] = int(int(y) != 0)
                lbl = pg.TextItem(
                    self._format_string.format(int(y, 2)), anchor=(0, 0.5))
                lbl.setPos(x, 0.5)
                lbl.setTextWidth(100)
                self._labels.append(lbl)
            self.plot_data_item.setData(x=display_x, y=display_y)
        except:
            logger.error("Error when displaying waveform: %s", self.name, exc_info=True)
            for lbl in self._labels:
                self.plot_item.removeItem(lbl)
            self.plot_data_item.setData(x=[], y=[])

    def onCursorMove(self, x):
        _BaseWaveform.onCursorMove(self, x)
        if self.cursor_y is not None:
            t = self._format_string.format(int(self.cursor_y, 2))
        else:
            t = ""
        self.cursor_label.setText(t)


class LogWaveform(_BaseWaveform):
    def __init__(self, name, width, precision, unit, parent=None):
        _BaseWaveform.__init__(self, name, width, precision, parent)
        self.plot_data_item.opts['pen'] = None
        self.plot_data_item.opts['symbol'] = 'x'
        self._labels = []
        self.plot_item.showGrid(x=True, y=False)

    def onDataChange(self, data):
        try:
            self.setData(data)
            for lbl in self._labels:
                self.plot_item.removeItem(lbl)
            self._labels = []
            self.plot_data_item.setData(
                x=self.x_data, y=np.ones(len(self.x_data)))
            if len(data) == 0:
                return
            old_x = data[0][0]
            old_msg = data[0][1]
            for x, msg in data[1:]:
                if x == old_x:
                    old_msg += "\n" + msg
                else:
                    lbl = pg.TextItem(old_msg)
                    self.addItem(lbl)
                    self._labels.append(lbl)
                    lbl.setPos(old_x, 1)
                    old_msg = msg
                    old_x = x
            lbl = pg.TextItem(old_msg)
            self.addItem(lbl)
            self._labels.append(lbl)
            lbl.setPos(old_x, 1)
        except:
            logger.error("Error when displaying waveform: %s", self.name, exc_info=True)
            for lbl in self._labels:
                self.plot_item.removeItem(lbl)
            self.plot_data_item.setData(x=[], y=[])


# pg.GraphicsView ignores dragEnterEvent but not dragLeaveEvent
# https://github.com/pyqtgraph/pyqtgraph/blob/1e98704eac6b85de9c35371079f561042e88ad68/pyqtgraph/widgets/GraphicsView.py#L388
class _RefAxis(pg.PlotWidget):
    def dragLeaveEvent(self, ev):
        ev.ignore()


class _WaveformView(QtWidgets.QWidget):
    cursorMove = QtCore.pyqtSignal(float)

    def __init__(self, parent):
        QtWidgets.QWidget.__init__(self, parent=parent)

        self._stopped_x = None
        self._timescale = 1
        self._cursor_x = 0

        layout = QtWidgets.QVBoxLayout()
        layout.setContentsMargins(0, 0, 0, 0)
        layout.setSpacing(0)
        self.setLayout(layout)

        self._ref_axis = _RefAxis()
        self._ref_axis.hideAxis("bottom")
        self._ref_axis.hideAxis("left")
        self._ref_axis.hideButtons()
        self._ref_axis.setFixedHeight(45)
        self._ref_axis.setMenuEnabled(False)
        self._top = pg.AxisItem("top")
        self._top.setScale(1e-12)
        self._top.setLabel(units="s")
        self._ref_axis.setAxisItems({"top": self._top})
        layout.addWidget(self._ref_axis)

        self._ref_vb = self._ref_axis.getPlotItem().getViewBox()
        self._ref_vb.setFixedHeight(0)
        self._ref_vb.setMouseEnabled(x=True, y=False)
        self._ref_vb.setLimits(xMin=0)

        scroll_area = VDragScrollArea(self)
        scroll_area.setWidgetResizable(True)
        scroll_area.setContentsMargins(0, 0, 0, 0)
        scroll_area.setFrameShape(QtWidgets.QFrame.Shape.NoFrame)
        scroll_area.setVerticalScrollBarPolicy(
            QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
        layout.addWidget(scroll_area)

        self._splitter = VDragDropSplitter(parent=scroll_area)
        self._splitter.setHandleWidth(1)
        scroll_area.setWidget(self._splitter)

        self.cursorMove.connect(self.onCursorMove)

        self.confirm_delete_dialog = QtWidgets.QMessageBox(self)
        self.confirm_delete_dialog.setIcon(
            QtWidgets.QMessageBox.Icon.Warning
        )
        self.confirm_delete_dialog.setText("Delete all waveforms?")
        self.confirm_delete_dialog.setStandardButtons(
            QtWidgets.QMessageBox.StandardButton.Ok |
            QtWidgets.QMessageBox.StandardButton.Cancel
        )
        self.confirm_delete_dialog.setDefaultButton(
            QtWidgets.QMessageBox.StandardButton.Ok
        )

    def setModel(self, model):
        self._model = model
        self._model.dataChanged.connect(self.onDataChange)
        self._model.rowsInserted.connect(self.onInsert)
        self._model.rowsRemoved.connect(self.onRemove)
        self._model.rowsMoved.connect(self.onMove)
        self._splitter.dropped.connect(self._model.move)
        self.confirm_delete_dialog.accepted.connect(self._model.clear)

    def setTimescale(self, timescale):
        self._timescale = timescale
        self._top.setScale(1e-12 * timescale)

    def setStoppedX(self, stopped_x):
        self._stopped_x = stopped_x
        self._ref_vb.setLimits(xMax=stopped_x)
        self._ref_vb.setRange(xRange=(0, stopped_x))
        for i in range(self._model.rowCount()):
            self._splitter.widget(i).setStoppedX(stopped_x)

    def resetZoom(self):
        if self._stopped_x is not None:
            self._ref_vb.setRange(xRange=(0, self._stopped_x))

    def onDataChange(self, top, bottom, roles):
        self.cursorMove.emit(0)
        first = top.row()
        last = bottom.row()
        data_row = self._model.headers.index("data")
        for i in range(first, last + 1):
            data = self._model.data(self._model.index(i, data_row))
            self._splitter.widget(i).onDataChange(data)

    def onInsert(self, parent, first, last):
        for i in range(first, last + 1):
            w = self._create_waveform(i)
            self._splitter.insertWidget(i, w)
        self._resize()

    def onRemove(self, parent, first, last):
        for i in reversed(range(first, last + 1)):
            w = self._splitter.widget(i)
            w.deleteLater()
        self._splitter.refresh()
        self._resize()

    def onMove(self, src_parent, src_start, src_end, dest_parent, dest_row):
        w = self._splitter.widget(src_start)
        self._splitter.insertWidget(dest_row, w)

    def onCursorMove(self, x):
        self._cursor_x = x
        for i in range(self._model.rowCount()):
            self._splitter.widget(i).onCursorMove(x)

    def _create_waveform(self, row):
        name, ty, width, precision, unit = (
            self._model.data(self._model.index(row, i)) for i in range(5))
        waveform_cls = {
            WaveformType.BIT: BitWaveform,
            WaveformType.VECTOR: BitVectorWaveform,
            WaveformType.ANALOG: AnalogWaveform,
            WaveformType.LOG: LogWaveform
        }[ty]
        w = waveform_cls(name, width, precision, unit, parent=self._splitter)
        w.setXLink(self._ref_vb)
        w.setStoppedX(self._stopped_x)
        w.cursorMove.connect(self.cursorMove)
        w.onCursorMove(self._cursor_x)
        action = QtGui.QAction("Delete waveform", w)
        action.triggered.connect(lambda: self._delete_waveform(w))
        w.addAction(action)
        action = QtGui.QAction("Delete all waveforms", w)
        action.triggered.connect(self.confirm_delete_dialog.open)
        w.addAction(action)
        return w

    def _delete_waveform(self, waveform):
        row = self._splitter.indexOf(waveform)
        self._model.pop(row)

    def _resize(self):
        self._splitter.setFixedHeight(
            int((WAVEFORM_MIN_HEIGHT + WAVEFORM_MAX_HEIGHT) * self._model.rowCount() / 2))


class _WaveformModel(QtCore.QAbstractTableModel):
    def __init__(self):
        self.backing_struct = []
        self.headers = ["name", "type", "width", "precision", "unit", "data"]
        QtCore.QAbstractTableModel.__init__(self)

    def rowCount(self, parent=QtCore.QModelIndex()):
        return len(self.backing_struct)

    def columnCount(self, parent=QtCore.QModelIndex()):
        return len(self.headers)

    def data(self, index, role=QtCore.Qt.ItemDataRole.DisplayRole):
        if index.isValid():
            return self.backing_struct[index.row()][index.column()]
        return None

    def extend(self, data):
        length = len(self.backing_struct)
        len_data = len(data)
        self.beginInsertRows(QtCore.QModelIndex(), length, length + len_data - 1)
        self.backing_struct.extend(data)
        self.endInsertRows()

    def pop(self, row):
        self.beginRemoveRows(QtCore.QModelIndex(), row, row)
        self.backing_struct.pop(row)
        self.endRemoveRows()

    def move(self, src, dest):
        if src == dest:
            return
        if src < dest:
            dest, src = src, dest
        self.beginMoveRows(QtCore.QModelIndex(), src, src, QtCore.QModelIndex(), dest)
        self.backing_struct.insert(dest, self.backing_struct.pop(src))
        self.endMoveRows()

    def clear(self):
        self.beginRemoveRows(QtCore.QModelIndex(), 0, len(self.backing_struct) - 1)
        self.backing_struct.clear()
        self.endRemoveRows()

    def export_list(self):
        return [[row[0], row[1].value, *row[2:5]] for row in self.backing_struct]

    def import_list(self, channel_list):
        self.clear()
        data = [[row[0], WaveformType(row[1]), *row[2:5], []] for row in channel_list]
        self.extend(data)

    def update_data(self, waveform_data, top, bottom):
        name_col = self.headers.index("name")
        data_col = self.headers.index("data")
        for i in range(top, bottom):
            name = self.data(self.index(i, name_col))
            self.backing_struct[i][data_col] = waveform_data.get(name, [])
            self.dataChanged.emit(self.index(i, data_col),
                                  self.index(i, data_col))

    def update_all(self, waveform_data):
        self.update_data(waveform_data, 0, self.rowCount())


class _CursorTimeControl(QtWidgets.QLineEdit):
    submit = QtCore.pyqtSignal(float)

    def __init__(self, parent):
        QtWidgets.QLineEdit.__init__(self, parent=parent)
        self._text = ""
        self._value = 0
        self._timescale = 1
        self.setDisplayValue(0)
        self.textChanged.connect(self._onTextChange)
        self.returnPressed.connect(self._onReturnPress)

    def setTimescale(self, timescale):
        self._timescale = timescale

    def _onTextChange(self, text):
        self._text = text

    def setDisplayValue(self, value):
        self._value = value
        self._text = pg.siFormat(value * 1e-12 * self._timescale,
                                 suffix="s",
                                 allowUnicode=False,
                                 precision=15)
        self.setText(self._text)

    def _setValueFromText(self, text):
        try:
            self._value = pg.siEval(text) * (1e12 / self._timescale)
        except:
            logger.error("Error when parsing cursor time input", exc_info=True)

    def _onReturnPress(self):
        self._setValueFromText(self._text)
        self.setDisplayValue(self._value)
        self.submit.emit(self._value)
        self.clearFocus()


class Model(DictSyncTreeSepModel):
    def __init__(self, init):
        DictSyncTreeSepModel.__init__(self, "/", ["Channels"], init)

    def clear(self):
        for k in self.backing_store:
            self._del_item(self, k.split(self.separator))
        self.backing_store.clear()

    def update(self, d):
        for k, v in d.items():
            self[k] = v


class _AddChannelDialog(QtWidgets.QDialog):
    def __init__(self, parent, model):
        QtWidgets.QDialog.__init__(self, parent=parent)
        self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.ActionsContextMenu)
        self.setWindowTitle("Add channels")

        layout = QtWidgets.QVBoxLayout()
        self.setLayout(layout)

        self._model = model
        self._tree_view = QtWidgets.QTreeView()
        self._tree_view.setHeaderHidden(True)
        self._tree_view.setSelectionBehavior(
            QtWidgets.QAbstractItemView.SelectionBehavior.SelectItems)
        self._tree_view.setSelectionMode(
            QtWidgets.QAbstractItemView.SelectionMode.ExtendedSelection)
        self._tree_view.setModel(self._model)
        layout.addWidget(self._tree_view)

        self._button_box = QtWidgets.QDialogButtonBox(
            QtWidgets.QDialogButtonBox.StandardButton.Ok | QtWidgets.QDialogButtonBox.StandardButton.Cancel
        )
        self._button_box.setCenterButtons(True)
        self._button_box.accepted.connect(self.add_channels)
        self._button_box.rejected.connect(self.reject)
        layout.addWidget(self._button_box)

    def add_channels(self):
        selection = self._tree_view.selectedIndexes()
        channels = []
        for select in selection:
            key = self._model.index_to_key(select)
            if key is not None:
                channels.append([key, *self._model[key].ref, []])
        self.channels = channels
        self.accept()


class WaveformDock(QtWidgets.QDockWidget):
    def __init__(self, timeout, timer, timer_backoff):
        QtWidgets.QDockWidget.__init__(self, "Waveform")
        self.setObjectName("Waveform")
        self.setFeatures(
            self.DockWidgetFeature.DockWidgetMovable | self.DockWidgetFeature.DockWidgetFloatable)

        self._channel_model = Model({})
        self._waveform_model = _WaveformModel()

        self._ddb = None
        self._dump = None

        self._waveform_data = {
            "timescale": 1,
            "stopped_x": None,
            "logs": dict(),
            "data": dict(),
        }

        self._current_dir = os.getcwd()

        self.proxy_client = ProxyClient(self.on_dump_receive,
                                        timeout,
                                        timer,
                                        timer_backoff)

        grid = LayoutWidget()
        self.setWidget(grid)

        self._menu_btn = QtWidgets.QPushButton()
        self._menu_btn.setIcon(
            QtWidgets.QApplication.style().standardIcon(
                QtWidgets.QStyle.StandardPixmap.SP_FileDialogStart))
        grid.addWidget(self._menu_btn, 0, 0)

        self._request_dump_btn = QtWidgets.QToolButton()
        self._request_dump_btn.setToolTip("Fetch analyzer data from device")
        self._request_dump_btn.setIcon(
            QtWidgets.QApplication.style().standardIcon(
                QtWidgets.QStyle.StandardPixmap.SP_BrowserReload))
        self._request_dump_btn.clicked.connect(
            lambda: asyncio.ensure_future(exc_to_warning(self.proxy_client.trigger_proxy_task())))
        grid.addWidget(self._request_dump_btn, 0, 1)

        self._add_channel_dialog = _AddChannelDialog(self, self._channel_model)
        self._add_channel_dialog.accepted.connect(self._add_channels)

        self._add_btn = QtWidgets.QToolButton()
        self._add_btn.setToolTip("Add channels...")
        self._add_btn.setIcon(
            QtWidgets.QApplication.style().standardIcon(
                QtWidgets.QStyle.StandardPixmap.SP_FileDialogListView))
        self._add_btn.clicked.connect(self._add_channel_dialog.open)
        grid.addWidget(self._add_btn, 0, 2)

        self._file_menu = QtWidgets.QMenu()
        self._add_async_action("Open trace...", self.load_trace)
        self._add_async_action("Save trace...", self.save_trace)
        self._add_async_action("Save trace as VCD...", self.save_vcd)
        self._add_async_action("Open channel list...", self.load_channels)
        self._add_async_action("Save channel list...", self.save_channels)
        self._menu_btn.setMenu(self._file_menu)

        self._waveform_view = _WaveformView(self)
        self._waveform_view.setModel(self._waveform_model)
        grid.addWidget(self._waveform_view, 1, 0, colspan=12)

        self._reset_zoom_btn = QtWidgets.QToolButton()
        self._reset_zoom_btn.setToolTip("Reset zoom")
        self._reset_zoom_btn.setIcon(
            QtWidgets.QApplication.style().standardIcon(
                QtWidgets.QStyle.StandardPixmap.SP_TitleBarMaxButton))
        self._reset_zoom_btn.clicked.connect(self._waveform_view.resetZoom)
        grid.addWidget(self._reset_zoom_btn, 0, 3)

        self._cursor_control = _CursorTimeControl(self)
        self._waveform_view.cursorMove.connect(self._cursor_control.setDisplayValue)
        self._cursor_control.submit.connect(self._waveform_view.onCursorMove)
        grid.addWidget(self._cursor_control, 0, 4, colspan=6)

    def _add_async_action(self, label, coro):
        action = QtGui.QAction(label, self)
        action.triggered.connect(
            lambda: asyncio.ensure_future(exc_to_warning(coro())))
        self._file_menu.addAction(action)

    def _add_channels(self):
        channels = self._add_channel_dialog.channels
        count = self._waveform_model.rowCount()
        self._waveform_model.extend(channels)
        self._waveform_model.update_data(self._waveform_data['data'],
                                         count,
                                         count + len(channels))

    def on_dump_receive(self, dump):
        self._dump = dump
        decoded_dump = comm_analyzer.decode_dump(dump)
        waveform_data = comm_analyzer.decoded_dump_to_waveform_data(self._ddb, decoded_dump)
        self._waveform_data.update(waveform_data)
        self._channel_model.update(self._waveform_data['logs'])
        self._waveform_model.update_all(self._waveform_data['data'])
        self._waveform_view.setStoppedX(self._waveform_data['stopped_x'])
        self._waveform_view.setTimescale(self._waveform_data['timescale'])
        self._cursor_control.setTimescale(self._waveform_data['timescale'])

    async def load_trace(self):
        try:
            filename = await get_open_file_name(
                self,
                "Load Analyzer Trace",
                self._current_dir,
                "All files (*.*)")
        except asyncio.CancelledError:
            return
        self._current_dir = os.path.dirname(filename)
        try:
            with open(filename, 'rb') as f:
                dump = f.read()
            self.on_dump_receive(dump)
        except:
            logger.error("Failed to open analyzer trace", exc_info=True)

    async def save_trace(self):
        if self._dump is None:
            logger.error("No analyzer trace stored in dashboard, "
                         "try loading from file or fetching from device")
            return
        try:
            filename = await get_save_file_name(
                self,
                "Save Analyzer Trace",
                self._current_dir,
                "All files (*.*)")
        except asyncio.CancelledError:
            return
        self._current_dir = os.path.dirname(filename)
        try:
            with open(filename, 'wb') as f:
                f.write(self._dump)
        except:
            logger.error("Failed to save analyzer trace", exc_info=True)

    async def save_vcd(self):
        if self._dump is None:
            logger.error("No analyzer trace stored in dashboard, "
                         "try loading from file or fetching from device")
            return
        try:
            filename = await get_save_file_name(
                self,
                "Save VCD",
                self._current_dir,
                "All files (*.*)")
        except asyncio.CancelledError:
            return
        self._current_dir = os.path.dirname(filename)
        try:
            decoded_dump = comm_analyzer.decode_dump(self._dump)
            with open(filename, 'w') as f:
                comm_analyzer.decoded_dump_to_vcd(f, self._ddb, decoded_dump)
        except:
            logger.error("Failed to save trace as VCD", exc_info=True)

    async def load_channels(self):
        try:
            filename = await get_open_file_name(
                self,
                "Open channel list",
                self._current_dir,
                "PYON files (*.pyon);;All files (*.*)")
        except asyncio.CancelledError:
            return
        self._current_dir = os.path.dirname(filename)
        try:
            channel_list = pyon.load_file(filename)
            self._waveform_model.import_list(channel_list)
            self._waveform_model.update_all(self._waveform_data['data'])
        except:
            logger.error("Failed to open channel list", exc_info=True)

    async def save_channels(self):
        try:
            filename = await get_save_file_name(
                self,
                "Save channel list",
                self._current_dir,
                "PYON files (*.pyon);;All files (*.*)")
        except asyncio.CancelledError:
            return
        self._current_dir = os.path.dirname(filename)
        try:
            channel_list = self._waveform_model.export_list()
            pyon.store_file(filename, channel_list)
        except:
            logger.error("Failed to save channel list", exc_info=True)

    def _process_ddb(self):
        channel_list = comm_analyzer.get_channel_list(self._ddb)
        self._channel_model.clear()
        self._channel_model.update(channel_list)
        desc = self._ddb.get("core_analyzer")
        if desc is not None:
            addr = desc["host"]
            port_proxy = desc.get("port_proxy", 1385)
            port = desc.get("port", 1386)
            self.proxy_client.update_address(addr, port, port_proxy)
        else:
            self.proxy_client.update_address(None, None, None)

    def init_ddb(self, ddb):
        self._ddb = ddb
        self._process_ddb()
        return ddb

    def notify_ddb(self, mod):
        self._process_ddb()

    async def stop(self):
        if self.proxy_client is not None:
            await self.proxy_client.close()
@@ -127,7 +127,7 @@
     "# let's connect to the master\n",
     "\n",
     "schedule, exps, datasets = [\n",
-    "    Client(\"::1\", 3251, i) for i in\n",
+    "    Client(\"::1\", 3251, \"master_\" + i) for i in\n",
     "    \"schedule experiment_db dataset_db\".split()]\n",
     "\n",
     "print(\"current schedule\")\n",
@@ -7,11 +7,7 @@ device_db = {
         "type": "local",
         "module": "artiq.coredevice.core",
         "class": "Core",
-        "arguments": {
-            "host": core_addr,
-            "ref_period": 1e-9,
-            "analyzer_proxy": "core_analyzer"
-        }
+        "arguments": {"host": core_addr, "ref_period": 1e-9}
     },
     "core_log": {
         "type": "controller",
@@ -19,20 +15,6 @@ device_db = {
         "port": 1068,
         "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
     },
-    "core_moninj": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1383,
-        "port": 1384,
-        "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
-    "core_analyzer": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1385,
-        "port": 1386,
-        "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
     "core_cache": {
         "type": "local",
         "module": "artiq.coredevice.cache",
@@ -47,13 +29,13 @@ device_db = {
     "i2c_switch0": {
         "type": "local",
         "module": "artiq.coredevice.i2c",
-        "class": "I2CSwitch",
+        "class": "PCA9548",
         "arguments": {"address": 0xe0}
     },
     "i2c_switch1": {
         "type": "local",
         "module": "artiq.coredevice.i2c",
-        "class": "I2CSwitch",
+        "class": "PCA9548",
         "arguments": {"address": 0xe2}
     },
 }
@@ -5,11 +5,7 @@ device_db = {
         "type": "local",
         "module": "artiq.coredevice.core",
         "class": "Core",
-        "arguments": {
-            "host": core_addr,
-            "ref_period": 1e-9,
-            "analyzer_proxy": "core_analyzer"
-        }
+        "arguments": {"host": core_addr, "ref_period": 1/(8*150e6)}
     },
     "core_log": {
         "type": "controller",
@@ -17,20 +13,6 @@ device_db = {
         "port": 1068,
         "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
     },
-    "core_moninj": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1383,
-        "port": 1384,
-        "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
-    "core_analyzer": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1385,
-        "port": 1386,
-        "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
     "core_cache": {
         "type": "local",
         "module": "artiq.coredevice.cache",
@@ -0,0 +1,177 @@
core_addr = "192.168.1.70"

device_db = {
    "core": {
        "type": "local",
        "module": "artiq.coredevice.core",
        "class": "Core",
        "arguments": {"host": core_addr, "ref_period": 1/(8*150e6)}
    },
    "core_log": {
        "type": "controller",
        "host": "::1",
        "port": 1068,
        "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
    },
    "core_cache": {
        "type": "local",
        "module": "artiq.coredevice.cache",
        "class": "CoreCache"
    },
    "core_dma": {
        "type": "local",
        "module": "artiq.coredevice.dma",
        "class": "CoreDMA"
    },
}

device_db.update(
    spi_urukul0={
        "type": "local",
        "module": "artiq.coredevice.spi2",
        "class": "SPIMaster",
        "arguments": {"channel": 0}
    },
    ttl_urukul0_io_update={
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 1}
    },
    ttl_urukul0_sw0={
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 2}
    },
    ttl_urukul0_sw1={
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 3}
    },
    ttl_urukul0_sw2={
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 4}
    },
    ttl_urukul0_sw3={
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLOut",
        "arguments": {"channel": 5}
    },
    urukul0_cpld={
        "type": "local",
        "module": "artiq.coredevice.urukul",
        "class": "CPLD",
        "arguments": {
            "spi_device": "spi_urukul0",
            "io_update_device": "ttl_urukul0_io_update",
            "refclk": 150e6,
            "clk_sel": 2
        }
    }
)

for i in range(4):
    device_db["urukul0_ch" + str(i)] = {
        "type": "local",
        "module": "artiq.coredevice.ad9910",
        "class": "AD9910",
        "arguments": {
            "pll_n": 16,  # 600MHz sample rate
            "pll_vco": 2,
            "chip_select": 4 + i,
            "cpld_device": "urukul0_cpld",
            "sw_device": "ttl_urukul0_sw" + str(i)
        }
    }

"""
artiq_route routing.bin init
artiq_route routing.bin set 0 0
artiq_route routing.bin set 1 1 0
artiq_route routing.bin set 2 1 1 0
artiq_route routing.bin set 3 2 0
artiq_route routing.bin set 4 2 1 0
artiq_coremgmt -D kasli config write -f routing_table routing.bin
"""

for sayma in range(2):
    amc_base = 0x010000 + sayma*0x020000
    rtm_base = 0x020000 + sayma*0x020000
    for i in range(4):
        device_db["led" + str(4*sayma+i)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": amc_base + i}
        }
    for i in range(2):
        device_db["ttl_mcx" + str(2*sayma+i)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLInOut",
            "arguments": {"channel": amc_base + 4 + i}
        }
    for i in range(8):
        device_db["sawg" + str(8*sayma+i)] = {
            "type": "local",
            "module": "artiq.coredevice.sawg",
            "class": "SAWG",
            "arguments": {"channel_base": amc_base + 6 + i*10, "parallelism": 4}
        }
    for basemod in range(2):
        for i in range(4):
            device_db["sawg_sw" + str(8*sayma+4*basemod+i)] = {
                "type": "local",
                "module": "artiq.coredevice.ttl",
                "class": "TTLOut",
                "arguments": {"channel": rtm_base + basemod*9 + i}
            }
        att_idx = 2*sayma + basemod
        device_db["basemod_att_rst_n"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": rtm_base + basemod*9 + 4}
        }
        device_db["basemod_att_clk"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": rtm_base + basemod*9 + 5}
        }
        device_db["basemod_att_le"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": rtm_base + basemod*9 + 6}
        }
        device_db["basemod_att_mosi"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLOut",
            "arguments": {"channel": rtm_base + basemod*9 + 7}
        }
        device_db["basemod_att_miso"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.ttl",
            "class": "TTLInOut",
            "arguments": {"channel": rtm_base + basemod*9 + 8}
        }
        device_db["basemod_att"+str(att_idx)] = {
            "type": "local",
            "module": "artiq.coredevice.basemod_att",
            "class": "BaseModAtt",
            "arguments": {
                "rst_n": "basemod_att_rst_n"+str(att_idx),
                "clk": "basemod_att_clk"+str(att_idx),
                "le": "basemod_att_le"+str(att_idx),
                "mosi": "basemod_att_mosi"+str(att_idx),
                "miso": "basemod_att_miso"+str(att_idx),
            }
        }
@@ -0,0 +1,25 @@
from artiq.experiment import *


class BaseMod(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.basemods = [self.get_device("basemod_att0"), self.get_device("basemod_att1")]
        self.rfsws = [self.get_device("sawg_sw"+str(i)) for i in range(8)]

    @kernel
    def run(self):
        self.core.reset()
        for basemod in self.basemods:
            self.core.break_realtime()
            delay(10*ms)
            basemod.reset()
            delay(10*ms)
            basemod.set(0.0, 0.0, 0.0, 0.0)
            delay(10*ms)
            print(basemod.get_mu())

        self.core.break_realtime()
        for rfsw in self.rfsws:
            rfsw.on()
            delay(1*ms)
@@ -0,0 +1,37 @@
from artiq.experiment import *


class Sines2Sayma(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.sawgs = [self.get_device("sawg"+str(i)) for i in range(16)]

    @kernel
    def drtio_is_up(self):
        for i in range(5):
            if not self.core.get_rtio_destination_status(i):
                return False
        return True

    @kernel
    def run(self):
        while True:
            print("waiting for DRTIO ready...")
            while not self.drtio_is_up():
                pass
            print("OK")

            self.core.reset()

            for sawg in self.sawgs:
                delay(1*ms)
                sawg.reset()

            for sawg in self.sawgs:
                delay(1*ms)
                sawg.amplitude1.set(.4)
                # Do not use a sub-multiple of oscilloscope sample rates.
                sawg.frequency0.set(9*MHz)

            while self.drtio_is_up():
                pass
@@ -0,0 +1,89 @@
from artiq.experiment import *


class SinesUrukulSayma(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("urukul0_cpld")

        # Urukul clock output syntonized to the RTIO clock.
        # Can be used as HMC830 reference on Sayma RTM.
        # When using this reference, Sayma must be recalibrated every time Urukul
        # is rebooted, as Urukul is not synchronized to the Kasli.
        self.urukul_hmc_ref = self.get_device("urukul0_ch3")

        # Urukul measurement channels - compare with SAWG outputs.
        # When testing sync, do not reboot Urukul, as it is not
        # synchronized to the Kasli.
        self.urukul_meas = [self.get_device("urukul0_ch" + str(i)) for i in range(3)]
        # The same waveform is output on all first 4 SAWG channels (first DAC).
        self.sawgs = [self.get_device("sawg"+str(i)) for i in range(4)]
        self.basemod = self.get_device("basemod_att0")
        self.rfsws = [self.get_device("sawg_sw"+str(i)) for i in range(4)]


    # DRTIO destinations:
    # 0: local
    # 1: Sayma AMC
    # 2: Sayma RTM
    @kernel
    def drtio_is_up(self):
        for i in range(3):
            if not self.core.get_rtio_destination_status(i):
                return False
        return True

    @kernel
    def run(self):
        f = 9*MHz
        dds_ftw = self.urukul_meas[0].frequency_to_ftw(f)
        sawg_ftw = self.sawgs[0].frequency0.to_mu(f)
        if dds_ftw != sawg_ftw:
            print("DDS and SAWG FTWs do not match:", dds_ftw, sawg_ftw)
            return

        self.core.reset()
        self.urukul0_cpld.init()

        delay(1*ms)
        self.urukul_hmc_ref.init()
        self.urukul_hmc_ref.set_mu(0x40000000, asf=self.urukul_hmc_ref.amplitude_to_asf(0.6))
        self.urukul_hmc_ref.set_att(6.)
        self.urukul_hmc_ref.sw.on()

        for urukul_ch in self.urukul_meas:
            delay(1*ms)
            urukul_ch.init()
            urukul_ch.set_mu(dds_ftw, asf=urukul_ch.amplitude_to_asf(0.5))
            urukul_ch.set_att(6.)
            urukul_ch.sw.on()

        while True:
            print("waiting for DRTIO ready...")
            while not self.drtio_is_up():
                pass
            print("OK")

            self.core.reset()

            delay(10*ms)
            self.basemod.reset()
            delay(10*ms)
            self.basemod.set(3.0, 3.0, 3.0, 3.0)
            delay(10*ms)
            for rfsw in self.rfsws:
                delay(1*ms)
                rfsw.on()

            for sawg in self.sawgs:
                delay(1*ms)
                sawg.reset()

            for sawg in self.sawgs:
                delay(1*ms)
                sawg.amplitude1.set(.4)
                sawg.frequency0.set_mu(sawg_ftw)
                sawg.phase0.set_mu(sawg_ftw*now_mu() >> 17)

            while self.drtio_is_up():
                pass
@@ -1,19 +0,0 @@
{
    "target": "kasli",
    "variant": "shuttlerdemo",
    "hw_rev": "v2.0",
    "drtio_role": "master",
    "peripherals": [
        {
            "type": "shuttler",
            "hw_rev": "v1.1",
            "ports": [0]
        },
        {
            "type": "dio",
            "ports": [1],
            "bank_direction_low": "input",
            "bank_direction_high": "output"
        }
    ]
}
@ -1,330 +0,0 @@
|
||||||
from artiq.experiment import *
|
|
||||||
from artiq.coredevice.shuttler import shuttler_volt_to_mu
|
|
||||||
|
|
||||||
DAC_Fs_MHZ = 125
|
|
||||||
CORDIC_GAIN = 1.64676
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_phase_offset(offset_degree):
|
|
||||||
return round(offset_degree / 360 * (2 ** 16))
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_freq_mu(freq_mhz):
|
|
||||||
return round(float(2) ** 32 / DAC_Fs_MHZ * freq_mhz)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_chirp_rate_mu(freq_mhz_per_us):
|
|
||||||
return round(float(2) ** 32 * freq_mhz_per_us / (DAC_Fs_MHZ ** 2))
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_freq_sweep(start_f_MHz, end_f_MHz, time_us):
|
|
||||||
return shuttler_chirp_rate_mu((end_f_MHz - start_f_MHz)/(time_us))
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_volt_amp_mu(volt):
|
|
||||||
return shuttler_volt_to_mu(volt)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_volt_damp_mu(volt_per_us):
|
|
||||||
return round(float(2) ** 32 * (volt_per_us / 20) / DAC_Fs_MHZ)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_volt_ddamp_mu(volt_per_us_square):
|
|
||||||
return round(float(2) ** 48 * (volt_per_us_square / 20) * 2 / (DAC_Fs_MHZ ** 2))
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_volt_dddamp_mu(volt_per_us_cube):
|
|
||||||
return round(float(2) ** 48 * (volt_per_us_cube / 20) * 6 / (DAC_Fs_MHZ ** 3))
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_dds_amp_mu(volt):
|
|
||||||
return shuttler_volt_amp_mu(volt / CORDIC_GAIN)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_dds_damp_mu(volt_per_us):
|
|
||||||
return shuttler_volt_damp_mu(volt_per_us / CORDIC_GAIN)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_dds_ddamp_mu(volt_per_us_square):
|
|
||||||
return shuttler_volt_ddamp_mu(volt_per_us_square / CORDIC_GAIN)
|
|
||||||
|
|
||||||
@portable
|
|
||||||
def shuttler_dds_dddamp_mu(volt_per_us_cube):
|
|
||||||
return shuttler_volt_dddamp_mu(volt_per_us_cube / CORDIC_GAIN)
|
|
||||||
|
|
||||||
class Shuttler(EnvExperiment):
|
|
||||||
def build(self):
|
|
||||||
self.setattr_device("core")
|
|
||||||
self.setattr_device("core_dma")
|
|
||||||
self.setattr_device("scheduler")
|
|
||||||
self.shuttler0_leds = [ self.get_device("shuttler0_led{}".format(i)) for i in range(2) ]
|
|
||||||
self.setattr_device("shuttler0_config")
|
|
||||||
self.setattr_device("shuttler0_trigger")
|
|
||||||
self.shuttler0_dcbias = [ self.get_device("shuttler0_dcbias{}".format(i)) for i in range(16) ]
|
|
||||||
self.shuttler0_dds = [ self.get_device("shuttler0_dds{}".format(i)) for i in range(16) ]
|
|
||||||
self.setattr_device("shuttler0_relay")
|
|
||||||
self.setattr_device("shuttler0_adc")
|
|
||||||
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def record(self):
|
|
||||||
with self.core_dma.record("example_waveform"):
|
|
||||||
self.example_waveform()
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def init(self):
|
|
||||||
self.led()
|
|
||||||
self.relay_init()
|
|
||||||
self.adc_init()
|
|
||||||
self.shuttler_reset()
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def run(self):
|
|
||||||
self.core.reset()
|
|
||||||
self.core.break_realtime()
|
|
||||||
self.init()
|
|
||||||
|
|
||||||
self.record()
|
|
||||||
example_waveform_handle = self.core_dma.get_handle("example_waveform")
|
|
||||||
|
|
||||||
print("Example Waveforms are on OUT0 and OUT1")
|
|
||||||
self.core.break_realtime()
|
|
||||||
while not(self.scheduler.check_termination()):
|
|
||||||
delay(1*s)
|
|
||||||
self.core_dma.playback_handle(example_waveform_handle)
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def shuttler_reset(self):
|
|
||||||
for i in range(16):
|
|
||||||
self.shuttler_channel_reset(i)
|
|
||||||
# To avoid RTIO Underflow
|
|
||||||
delay(50*us)
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def shuttler_channel_reset(self, ch):
|
|
||||||
self.shuttler0_dcbias[ch].set_waveform(
|
|
||||||
a0=0,
|
|
||||||
a1=0,
|
|
||||||
a2=0,
|
|
||||||
a3=0,
|
|
||||||
)
|
|
||||||
self.shuttler0_dds[ch].set_waveform(
|
|
||||||
b0=0,
|
|
||||||
b1=0,
|
|
||||||
b2=0,
|
|
||||||
b3=0,
|
|
||||||
c0=0,
|
|
||||||
c1=0,
|
|
||||||
c2=0,
|
|
||||||
)
|
|
||||||
self.shuttler0_trigger.trigger(1 << ch)
|
|
||||||
|
|
||||||
@kernel
|
|
||||||
def example_waveform(self):
|
|
||||||
# Equation of Output Waveform
|
|
||||||
# w(t_us) = a(t_us) + b(t_us) * cos(c(t_us))
|
|
||||||
# Step 1:
|
|
||||||
# Enable the Output Relay of OUT0 and OUT1
|
|
||||||
# Step 2: Cosine Wave Frequency Sweep from 10kHz to 50kHz in 500us
|
|
||||||
# OUT0: b(t_us) = 1
|
|
||||||
# c(t_us) = 2 * pi * (0.08 * t_us ^ 2 + 0.01 * t_us)
|
|
||||||
# OUT1: b(t_us) = 1
|
|
||||||
# c(t_us) = 2 * pi * (0.05 * t_us)
|
|
||||||
# Step 3(after 500us): Cosine Wave with 180 Degree Phase Offset
|
|
||||||
# OUT0: b(t_us) = 1
|
|
||||||
# c(t_us) = 2 * pi * (0.05 * t_us) + pi
|
|
||||||
# OUT1: b(t_us) = 1
|
|
||||||
# c(t_us) = 2 * pi * (0.05 * t_us)
|
|
||||||
# Step 4(after 500us): Cosine Wave with Amplitude Envelop
|
|
||||||
# OUT0: b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
|
|
||||||
# c(t_us) = 2 * pi * (0.05 * t_us)
|
|
||||||
# OUT1: b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
|
|
||||||
# c(t_us) = 0
|
|
||||||
# Step 5(after 500us): Sawtooth Wave Modulated with 50kHz Cosine Wave
|
|
||||||
# OUT0: a(t_us) = 0.01 * t_us - 5
|
|
||||||
# b(t_us) = 1
|
|
||||||
# c(t_us) = 2 * pi * (0.05 * t_us)
|
|
||||||
# OUT1: a(t_us) = 0.01 * t_us - 5
|
|
||||||
# Step 6(after 1000us): A Combination of Previous Waveforms
|
|
||||||
# OUT0: a(t_us) = 0.01 * t_us - 5
|
|
||||||
# b(t_us) = -0.0001367187 * t_us ^ 2 + 0.06835937 * t_us
|
|
||||||
# c(t_us) = 2 * pi * (0.08 * t_us ^ 2 + 0.01 * t_us)
|
|
||||||
# Step 7(after 500us): Mirrored Waveform in Step 6
|
|
||||||
# OUT0: a(t_us) = 2.5 + -0.01 * (1000 ^ 2) * t_us
|
|
||||||
# b(t_us) = 0.0001367187 * t_us ^ 2 - 0.06835937 * t_us
|
|
||||||
# c(t_us) = 2 * pi * (-0.08 * t_us ^ 2 + 0.05 * t_us) + pi
|
|
||||||
# Step 8(after 500us):
|
|
||||||
# Disable Output Relay of OUT0 and OUT1
|
|
||||||
# Reset OUT0 and OUT1
|
|
||||||
|
|
||||||
## Step 1 ##
|
|
||||||
self.shuttler0_relay.enable(0b11)
|
|
||||||
|
|
||||||
## Step 2 ##
|
|
||||||
start_f_MHz = 0.01
|
|
||||||
end_f_MHz = 0.05
|
|
||||||
duration_us = 500
|
|
||||||
# OUT0 and OUT1 have their frequency and phase aligned at 500us
|
|
||||||
self.shuttler0_dds[0].set_waveform(
|
|
||||||
b0=shuttler_dds_amp_mu(1.0),
|
|
||||||
b1=0,
|
|
||||||
b2=0,
|
|
||||||
b3=0,
|
|
||||||
c0=0,
|
|
||||||
c1=shuttler_freq_mu(start_f_MHz),
|
|
||||||
c2=shuttler_freq_sweep(start_f_MHz, end_f_MHz, duration_us),
|
|
||||||
)
|
|
||||||
self.shuttler0_dds[1].set_waveform(
|
|
||||||
b0=shuttler_dds_amp_mu(1.0),
|
|
||||||
b1=0,
|
|
||||||
b2=0,
|
|
||||||
b3=0,
|
|
||||||
c0=0,
|
|
||||||
c1=shuttler_freq_mu(end_f_MHz),
|
|
||||||
c2=0,
|
|
||||||
)
|
|
||||||
self.shuttler0_trigger.trigger(0b11)
|
|
||||||
delay(500*us)
|
|
||||||
|
|
        ## Step 3 ##
        # OUT0 and OUT1 have a 180 degree phase difference
        self.shuttler0_dds[0].set_waveform(
            b0=shuttler_dds_amp_mu(1.0),
            b1=0,
            b2=0,
            b3=0,
            c0=shuttler_phase_offset(180.0),
            c1=shuttler_freq_mu(end_f_MHz),
            c2=0,
        )
        # The phase and output settings of OUT1 are retained
        # if the channel is not triggered or its config is not cleared
        self.shuttler0_trigger.trigger(0b1)
        delay(500*us)

        ## Step 4 ##
        # b(0) = 0, b(250) = 8.545, b(500) = 0
        self.shuttler0_dds[0].set_waveform(
            b0=0,
            b1=shuttler_dds_damp_mu(0.06835937),
            b2=shuttler_dds_ddamp_mu(-0.0001367187),
            b3=0,
            c0=0,
            c1=shuttler_freq_mu(end_f_MHz),
            c2=0,
        )
        self.shuttler0_dds[1].set_waveform(
            b0=0,
            b1=shuttler_dds_damp_mu(0.06835937),
            b2=shuttler_dds_ddamp_mu(-0.0001367187),
            b3=0,
            c0=0,
            c1=0,
            c2=0,
        )
        self.shuttler0_trigger.trigger(0b11)
        delay(500*us)

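        # Quick arithmetic check of the envelope above:
        #   b(t_us) = 0.06835937*t_us - 0.0001367187*t_us^2
        #   b(0) = 0,  b(250) = 17.09 - 8.545 = 8.545,  b(500) = 34.18 - 34.18 = 0
        # so the amplitude rises from zero, peaks halfway through the 500us
        # step and returns to zero, as stated in the b(0)/b(250)/b(500) comment.
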
        ## Step 5 ##
        self.shuttler0_dcbias[0].set_waveform(
            a0=shuttler_volt_amp_mu(-5.0),
            a1=int32(shuttler_volt_damp_mu(0.01)),
            a2=0,
            a3=0,
        )
        self.shuttler0_dds[0].set_waveform(
            b0=shuttler_dds_amp_mu(1.0),
            b1=0,
            b2=0,
            b3=0,
            c0=0,
            c1=shuttler_freq_mu(end_f_MHz),
            c2=0,
        )
        self.shuttler0_dcbias[1].set_waveform(
            a0=shuttler_volt_amp_mu(-5.0),
            a1=int32(shuttler_volt_damp_mu(0.01)),
            a2=0,
            a3=0,
        )
        self.shuttler0_dds[1].set_waveform(
            b0=0,
            b1=0,
            b2=0,
            b3=0,
            c0=0,
            c1=0,
            c2=0,
        )
        self.shuttler0_trigger.trigger(0b11)
        delay(1000*us)

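        # Quick arithmetic check of the sawtooth above: with
        # a(t_us) = 0.01*t_us - 5 and a 1000us step, the DC bias ramps
        # linearly from a(0) = -5 to a(1000) = +5 before the next update,
        # assuming t_us counts from the trigger.
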
        ## Step 6 ##
        self.shuttler0_dcbias[0].set_waveform(
            a0=shuttler_volt_amp_mu(-2.5),
            a1=int32(shuttler_volt_damp_mu(0.01)),
            a2=0,
            a3=0,
        )
        self.shuttler0_dds[0].set_waveform(
            b0=0,
            b1=shuttler_dds_damp_mu(0.06835937),
            b2=shuttler_dds_ddamp_mu(-0.0001367187),
            b3=0,
            c0=0,
            c1=shuttler_freq_mu(start_f_MHz),
            c2=shuttler_freq_sweep(start_f_MHz, end_f_MHz, duration_us),
        )
        self.shuttler0_trigger.trigger(0b1)
        self.shuttler_channel_reset(1)
        delay(500*us)

        ## Step 7 ##
        self.shuttler0_dcbias[0].set_waveform(
            a0=shuttler_volt_amp_mu(2.5),
            a1=int32(shuttler_volt_damp_mu(-0.01)),
            a2=0,
            a3=0,
        )
        self.shuttler0_dds[0].set_waveform(
            b0=0,
            b1=shuttler_dds_damp_mu(-0.06835937),
            b2=shuttler_dds_ddamp_mu(0.0001367187),
            b3=0,
            c0=shuttler_phase_offset(180.0),
            c1=shuttler_freq_mu(end_f_MHz),
            c2=shuttler_freq_sweep(end_f_MHz, start_f_MHz, duration_us),
        )
        self.shuttler0_trigger.trigger(0b1)
        delay(500*us)

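        # Note: the Step 7 coefficients are the Step 6 ones with their signs
        # flipped (a0, a1, b1, b2), the frequency sweep reversed and a
        # 180 degree phase offset added, which is what mirrors the Step 6
        # waveform as described in the header comment.
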
        ## Step 8 ##
        self.shuttler0_relay.enable(0)
        self.shuttler_channel_reset(0)
        self.shuttler_channel_reset(1)

    @kernel
    def led(self):
        for i in range(2):
            for j in range(3):
                self.shuttler0_leds[i].pulse(.1*s)
                delay(.1*s)

    @kernel
    def relay_init(self):
        self.shuttler0_relay.init()
        self.shuttler0_relay.enable(0x0000)

    @kernel
    def adc_init(self):
        delay_mu(int64(self.core.ref_multiplier))
        self.shuttler0_adc.power_up()

        delay_mu(int64(self.core.ref_multiplier))
        assert self.shuttler0_adc.read_id() >> 4 == 0x038d

        delay_mu(int64(self.core.ref_multiplier))
        # The actual output voltage is limited by the hardware and by the
        # calibration gain and offset computed here. For example, with a
        # calibration gain of 1.06 the maximum output voltage is
        # 10 / 1.06 = 9.43V; setting anything larger results in overflow.
        self.shuttler0_adc.calibrate(self.shuttler0_dcbias, self.shuttler0_trigger, self.shuttler0_config)
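
The calibration comment in adc_init() boils down to a one-line calculation. Here is a minimal host-side sketch of that arithmetic; the names max_output, full_scale_v and calibration_gain are illustrative only and not part of the ARTIQ API:

    def max_output(full_scale_v, calibration_gain):
        # After gain calibration, the usable output range shrinks by the
        # gain factor; requesting more than this overflows the DAC range.
        return full_scale_v / calibration_gain

    print(max_output(10.0, 1.06))  # ~9.43 V, matching the adc_init() comment
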
@@ -5,11 +5,7 @@ device_db = {
         "type": "local",
         "module": "artiq.coredevice.core",
         "class": "Core",
-        "arguments": {
-            "host": core_addr,
-            "ref_period": 1e-9,
-            "analyzer_proxy": "core_analyzer"
-        }
+        "arguments": {"host": core_addr, "ref_period": 1e-9}
     },
     "core_log": {
         "type": "controller",
@@ -17,20 +13,6 @@ device_db = {
         "port": 1068,
         "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
     },
-    "core_moninj": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1383,
-        "port": 1384,
-        "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
-    "core_analyzer": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1385,
-        "port": 1386,
-        "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
     "core_cache": {
         "type": "local",
         "module": "artiq.coredevice.cache",
@@ -45,13 +27,13 @@ device_db = {
     "i2c_switch0": {
         "type": "local",
         "module": "artiq.coredevice.i2c",
-        "class": "I2CSwitch",
+        "class": "PCA9548",
         "arguments": {"address": 0xe0}
     },
     "i2c_switch1": {
         "type": "local",
         "module": "artiq.coredevice.i2c",
-        "class": "I2CSwitch",
+        "class": "PCA9548",
         "arguments": {"address": 0xe2}
     },
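
For context, a minimal sketch of how the "core" entry in these device_db hunks is consumed by an experiment; the class name DeviceDbDemo is arbitrary and the sketch is not part of this diff:

    from artiq.experiment import *

    class DeviceDbDemo(EnvExperiment):
        def build(self):
            # "core" is the device_db key; ARTIQ instantiates the entry's
            # "class" with the "arguments" shown above (host, ref_period).
            self.setattr_device("core")

        @kernel
        def run(self):
            self.core.reset()
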
@@ -9,11 +9,7 @@ device_db = {
         "type": "local",
         "module": "artiq.coredevice.core",
         "class": "Core",
-        "arguments": {
-            "host": core_addr,
-            "ref_period": 1e-9,
-            "analyzer_proxy": "core_analyzer"
-        }
+        "arguments": {"host": core_addr, "ref_period": 1e-9}
     },
     "core_log": {
         "type": "controller",
@@ -21,20 +17,6 @@ device_db = {
         "port": 1068,
         "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
     },
-    "core_moninj": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1383,
-        "port": 1384,
-        "command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
-    "core_analyzer": {
-        "type": "controller",
-        "host": "::1",
-        "port_proxy": 1385,
-        "port": 1386,
-        "command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
-    },
     "core_cache": {
         "type": "local",
         "module": "artiq.coredevice.cache",
@@ -49,7 +31,7 @@ device_db = {
     "i2c_switch": {
         "type": "local",
         "module": "artiq.coredevice.i2c",
-        "class": "I2CSwitch"
+        "class": "PCA9548"
     },
 
     # Generic TTL
Some files were not shown because too many files have changed in this diff.