forked from M-Labs/artiq
1567 Commits

Author SHA1 Message Date
Simon Renblad 76fba538b1 artiq_ddb_template: fixed missing separator 2023-12-18 13:23:39 +08:00
Sebastien Bourdeauducq 8dd8cfa6b0 master: implement devarg_override 2023-12-18 12:11:40 +08:00
Sebastien Bourdeauducq 5df0721811 dashboard,client: add device argument overrides to expid 2023-12-17 19:43:41 +08:00
Sebastien Bourdeauducq 6326051052 flake: forward cmdline arguments in devshell wrappers 2023-12-17 19:42:56 +08:00
Sebastien Bourdeauducq 44a95b5dda dashboard: add repository revision clear button 2023-12-17 16:37:02 +08:00
Sebastien Bourdeauducq 645b9b8c5f flake: add executable wrappers for frontends to devshell 2023-12-17 13:41:49 +08:00
Sebastien Bourdeauducq 858f0479ba aqctl_coreanalyzer_proxy: permissions and shebang 2023-12-17 13:27:38 +08:00
Sebastien Bourdeauducq 133b26b6ce flake: add ARTIQ sources to PYTHONPATH in devshell 2023-12-17 13:05:16 +08:00
Sebastien Bourdeauducq d96213dbbc flake: update dependencies 2023-12-17 12:55:36 +08:00
Sebastien Bourdeauducq 413d33c3d1 core: document analyzer proxy options 2023-12-13 14:29:33 +08:00
Sebastien Bourdeauducq c2b53ecb43 core: add option to trigger analyzer proxy at run end 2023-12-13 14:27:48 +08:00
Sebastien Bourdeauducq ede0b37c6e devices: introduce notify_run_end API 2023-12-13 14:27:04 +08:00
Sebastien Bourdeauducq 795c4372fa DeviceManager: fix close exception error message 2023-12-13 14:06:53 +08:00
Sebastien Bourdeauducq 402a5d3376 core: connect lazily to analyzer proxy
Otherwise artiq_compile and other uses of Core that do not access hardware/network may fail.
2023-12-13 13:46:47 +08:00
Sebastien Bourdeauducq 85850ad9e8 wavesynth: remove 2023-12-13 13:36:21 +08:00
Sebastien Bourdeauducq 7a863b4f5e core: add trigger_analyzer_proxy API 2023-12-13 13:08:54 +08:00
Sebastien Bourdeauducq a26cee6ca7 coreanalyzer_proxy: cleanups/renames 2023-12-13 13:07:35 +08:00
Sebastien Bourdeauducq be08862606 logo: text to path 2023-12-08 19:34:47 +08:00
Sebastien Bourdeauducq 05a9422e67 aqctl_coreanalyzer_proxy: cleanup 2023-12-08 18:56:10 +08:00
Simon Renblad b09a39c82e
add aqctl_coreanalyzer_proxy 2023-12-08 18:55:07 +08:00
mwojcik 49267671f9 core: fix precompile 2023-12-04 12:10:11 +08:00
Sebastien Bourdeauducq 8ca75a3fb9 firmware: deal with rust nonsense
Fixes
"error: edition 2021 is unstable and only available with -Z unstable-options.
error: could not compile `alloc`"
2023-12-03 11:20:18 +08:00
Florian Agbuya 8381b34a79 flake: add new booktabs dependency for artiq-manual-pdf 2023-12-03 11:18:59 +08:00
Sebastien Bourdeauducq d458fc27bf switch to new nixpkgs release 2023-12-03 11:18:25 +08:00
mwojcik 9f4b8db2de repeater: fix setting tsc 2023-12-01 16:43:48 +08:00
Florian Agbuya 1108cebd75 flake: fix ncurses on vivado
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-28 17:36:36 +08:00
Florian Agbuya cf7cbd0c3b flake: update nixpkgs
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-28 17:36:36 +08:00
mwojcik 1a28069aa2 support for pre-compiling subkernels 2023-11-23 16:49:02 +08:00
Sebastien Bourdeauducq 56418e342e take into account VERSIONEER_REV in artiq._version.get_rev 2023-11-22 20:51:02 +08:00
Sebastien Bourdeauducq 77c6553725 always provide artiq._version.get_rev 2023-11-14 14:14:47 +08:00
Sebastien Bourdeauducq e81e8f28cf gateware: merge kasli_generic into kasli. Closes #2279 2023-11-14 14:01:17 +08:00
mwojcik de10e584f6 support .tar flashed idle/startup kernels 2023-11-13 18:14:35 +08:00
Florian Agbuya 875666f3ec doc: add section on new nix flakes config (closes #2232)
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-11-10 16:47:56 +08:00
Sebastien Bourdeauducq 3ad3fac828 update ARTIQ-8 release notes 2023-11-08 11:17:17 +08:00
Simon Renblad 49afa116b3 RELEASE_NOTES: artiq_ddb_template needs gateware 2023-11-08 10:51:39 +08:00
Simon Renblad 363afb5fc9 artiq_ddb_template: add support for user LEDs
Add support for additional user LEDs.
2023-11-08 10:51:39 +08:00
Simon Renblad e7af219505 kasli_generic: add support for user LEDs
Add additional LED RTIO devices.
2023-11-08 10:51:39 +08:00
linuswck ec2b86b08d kc705: fix gtx clock path during init 2023-11-07 18:36:48 +08:00
linuswck 8f7d138dbd gtx: Always enable IBUFDS_GTE2, add clk_path_ready
- Set clk_path_ready to High to start Initialization of GTP TX and RX
2023-11-07 18:36:48 +08:00
Sebastien Bourdeauducq bbe6ff8cac flake: update dependencies 2023-11-07 18:36:11 +08:00
Sebastien Bourdeauducq c0a6252e77 afws_client: improve compatibility with older versions of prettytable. Closes #2264 2023-11-07 14:06:31 +08:00
mwojcik 6640bf0e82 drtioaux/subkernel/ddma: introduce proper errors, more robust 2023-11-07 13:42:04 +08:00
mwojcik b3c0d084d4 drtio: better control state of bigger payloads 2023-11-07 13:42:04 +08:00
linuswck bb0b8a6c00 kasli: Correct the GTP TX clock path during init
- TXOUT must be fed back into TXUSRCLK during initialization
- Now, MMCM Clock Input is switched before GTP TX Init is started instead of after GTP TX Init is done
- Reset in Sys Clock domain is kept asserted when clock is switched and GTP TX Init is NOT done
2023-11-07 13:40:32 +08:00
Sebastien Bourdeauducq ce80bf5717 flake: update dependencies 2023-11-07 13:40:17 +08:00
Florian Agbuya 378dd0e5ca flake: fix and upgrade wavedrom (closes #2266)
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2023-10-30 08:09:00 +01:00
jfniedermeyer 9c68451cae Add hotkeys to organize experiments in dashboard
Signed-off-by: jfniedermeyer <justin.niedermeyer@colorado.edu>
2023-10-27 21:47:30 +02:00
linuswck 93c9d8bcdf artiq_ddb_template:set default Shuttler drtio_dest
- remove default Shuttler "drtio_destination" value in jsonschema
- set the default Shuttler "drtio_destination" value according to
    board "target" and "hw_rev"
2023-10-27 21:46:02 +02:00
mwojcik e480bbe8d8 artiq_ddb_template: move satellite_cpu_target to core 2023-10-27 21:45:12 +02:00
mwojcik b168f0bb4b subkernel: separate tags and data 2023-10-17 12:18:03 +02:00
Sebastien Bourdeauducq 6705c9fbfb flake: update dependencies 2023-10-17 15:37:06 +08:00
mwojcik 5f445f6b92 ad53xx: fix `load()` references in documentation 2023-10-16 13:54:38 +08:00
occheung 363f7327f1 io_expander: initialize before service 2023-10-15 07:45:20 +08:00
Sebastien Bourdeauducq f7abc156cb flake: update dependencies 2023-10-11 16:41:34 +08:00
linuswck de41bd6655 eem_7series: pass through kwargs for shuttler 2023-10-11 12:15:06 +08:00
Simon Renblad 96941d7c04 big_number: fix metadata scaling, add unit label 2023-10-09 15:35:14 +08:00
mwojcik f3c79e71e1 firmware: merge runtime and satman linker scripts 2023-10-09 15:33:29 +08:00
Simon Renblad 333b81f789 set_argument_value warning in browser 2023-10-09 10:38:17 +08:00
Sebastien Bourdeauducq d070826911 flake: update dependencies 2023-10-09 10:13:58 +08:00
Sebastien Bourdeauducq 9c90f923d2 test: check return value of subprocesses in test_compile 2023-10-09 10:07:04 +08:00
Sebastien Bourdeauducq e23e4d39d7 artiq_compile: ignore subkernel_arg_types 2023-10-09 10:03:43 +08:00
David Nadlinger 08eea09d44 compiler: Catch escaping numpy.{array, full, transpose}() results
Function calls in general can still be used to hide escaping
allocations from the compiler (issue #1497), but these calls in
particular always allocate, so we can easily and accurately handle
them.
2023-10-09 09:00:26 +08:00
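A minimal sketch of the pattern this check now rejects (not from the commit; the class and method names are illustrative):

```
from artiq.experiment import *
import numpy as np

class EscapeSketch(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def make_buffer(self):
        # np.full() always allocates; handing the fresh array to the caller
        # would let the allocation escape this function's stack frame.
        return np.full(10, 0.0)

    @kernel
    def run(self):
        buf = self.make_buffer()  # now reported as an error instead of silently escaping
```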
mwojcik 7ab52af603 docs: subkernel support 2023-10-08 17:12:06 +08:00
mwojcik 973fd88b27 core: compile and upload subkernels 2023-10-08 17:11:51 +08:00
mwojcik 8d7194941e tests: add lit tests for subkernels 2023-10-08 17:11:51 +08:00
mwojcik 0a750c77e8 compiler: support subkernels 2023-10-08 17:11:51 +08:00
mwojcik 1a0fc317df satman: support subkernels 2023-10-08 17:11:32 +08:00
mwojcik e05be2f8e4 runtime: support subkernels 2023-10-08 17:11:32 +08:00
mwojcik 6f4b8c641e drtioaux_proto: use better payload names 2023-10-08 17:11:32 +08:00
mwojcik b42816582e ksupport: support subkernels 2023-10-08 17:11:32 +08:00
Hartmann Michael (IFAG PSS SIS SCE QSE) 76f1318bc0 doc: Extend documentation
Extend the "Pitfalls" paragraph in the "Compiler" documentation with the
problems caused by returning values from the stack.
2023-10-07 07:20:33 +08:00
Sebastien Bourdeauducq 0131a8bef2 shuttler: cleanup 2023-10-06 14:55:51 +08:00
mwojcik e63e2a2897 artiq_ddb_template: better satellite formatting 2023-10-06 13:01:57 +08:00
Simon Renblad 47fc640f75 applets: rename 'ctl' attribute to 'req' 2023-10-05 12:32:01 +08:00
Simon Renblad bb7caacb5f RELEASE_NOTES: applet API extensions 2023-10-05 12:32:01 +08:00
Simon Renblad da9f7cb58a applet extensions documentation 2023-10-05 12:32:01 +08:00
occheung 43926574da shuttler: remove sdm constants 2023-10-05 07:40:00 +08:00
Simon Renblad 4f3e58db52 gui.applets: add EntryArea 2023-10-04 15:35:52 +08:00
Simon Renblad 13271cea64
gui: remove copies of _WheelFilter and refactor with parameter 2023-10-04 13:35:01 +08:00
occheung 0e8fa8933f shuttler: init sigma-delta modulator 2023-09-30 11:51:43 +08:00
David Nadlinger 2eb89cb168 dashboard: Fix occasional "unexpected action" applet errors on startup
This turned out to be a race between the dashboard's dataset db
subscriber being initialised and the applet "embed" request, with
artiq.applet.simple not being able to handle the unexpected "mod"
message. We were only handling the other ordering outcome of this
race before.
2023-09-30 00:27:25 +01:00
occheung a772dee1cc shuttler: change 0th order accumulator width
It now truncates the LSBs instead of the MSBs.
2023-09-29 10:09:39 +08:00
Simon Renblad bafb85a274 custom_applet: change constructor, data_changed signatures 2023-09-28 10:35:14 +01:00
mwojcik 0e8aa33979 core: separate master target from compilation 2023-09-28 10:41:55 +08:00
mwojcik fcf6c90ba2 ddb_template: support different satellite targets 2023-09-28 10:41:55 +08:00
linuswck 0c1b572872 Shuttler: Correct spelling and grammar in docs 2023-09-27 17:29:16 +08:00
linuswck ab0d4c41c3 Shuttler: pdq, efc->shuttler pdq_words->coef_words 2023-09-27 17:29:16 +08:00
Jonathan Coates 6eb81494c5 Allow using Python types in type annotations
This maps basic Python types (float, str, bool, np.int32, np.int64) as well as
some generics (list, tuple) to ARTIQ's own type instances.

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-09-26 23:46:43 +01:00
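A minimal sketch of what this enables (illustrative only; the experiment below is not part of the commit):

```
from artiq.experiment import *

class PlainAnnotations(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def above_threshold(self, x: float, limit: float) -> bool:
        # float and bool are plain Python types here; previously these
        # annotations had to use ARTIQ's TFloat/TBool type instances.
        return x > limit
```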
Jonathan Coates 586d97c6cb Fix type annotations with mixed tuples
The type checker/inferer visits every node in an AST tree, including
function return annotations. This means for a function definition like

    def f() -> TTuple([TInt32, TBool]):
      ...

We attempt to type check the list [TInt32, TBool], which generates the
unification constraint builtins.TBool ~ builtins.TInt. This causes an
internal error due to compiler weirdness.

We can avoid this by just nulling-out the return annotation in the
embedding stage. The return type isn't actually used anywhere (it's
extracted via the inspect module instead), so this is entirely safe.

Arguments aren't affected by this, as we already nulled out the
annotation (see visit_arg in embedding.py).

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-09-26 23:43:01 +01:00
David Nadlinger 892b0eaca2 compiler: Fix crash on multiple types with the same name
The original fix in 21574bdfa9
was incomplete, as it only addressed the TInstance types, but
not their linked (typ.constructor) TConstructor instances.

This would (potentially among other issues) cause assertion
errors in llvm_ir_generator due to the wrong associated globals
being referenced; see added test case for an example that
previously caused such a crash.

Also modified the name collision detection from O(len(type_map))
(so quadratic overall in the number of custom types) to cache
names in sets for O(1) lookup.
2023-09-26 23:31:21 +01:00
linuswck eedac7cf71 Shuttler: Patch ddb entries in the example code 2023-09-26 12:20:26 +08:00
linuswck a61bbf5618 Shuttler: Replace ddb with json for the example 2023-09-26 12:20:26 +08:00
occheung b7b8f0efa2
Generate coredevice entries for Shuttler (#2216)
* ddb: generate shuttler coredevice entries

* ddb: split-off all DRTIO-over-EEM peripherals

Only EFC uses DRTIO-over-EEM at this moment. It will be relevant to phaser-DRTIO in the future.

* ddb: generalize efc processing into drtio-over-eem peripherals

* ddb: check DRTIO role validity before processing
2023-09-26 09:44:21 +08:00
occheung b52f253dbd
Simplify OOB reset by clock division (#2217)
* oob: simplify logic by dividing into clk100

* replace clk100 clk ctrl with clk200 async reset

* fix comment (singular/plural)

* oob reset: invoke platform commands locally

* cleanup

* oob reset: add async reset import

* fix duplicated comment
2023-09-26 08:02:49 +08:00
occheung 73ab71f443
shuttler: add documentation 2023-09-25 17:47:47 +08:00
linuswck ab8247b3d7 Shuttler: Add coredevice example code for Shuttler
This example code:
    - Demonstrates the init flow for Shuttler
    - Blinks LED L0, L1
    - Demonstrates the real-time control of relay
    - Includes example fns for configuring the PDQ Output Channel in mu
2023-09-25 14:56:47 +08:00
mwojcik 36b3678853 satman: fix ddma reporting wrong destination 2023-09-22 10:25:37 +08:00
mwojcik af77885dfc rtio_mgt: fix drtio reset on standalone 2023-09-22 09:46:40 +08:00
mwojcik eb57b3b393 drtio: async messages become synchronous
They are now a reply for DestinationStatusRequest.
This prevents gateware errors and lost packets if the receiver is busy.
2023-09-21 16:30:00 +08:00
Simon Renblad 40ac2e03ab set_argument_value in applets 2023-09-21 16:26:11 +08:00
occheung a2fbcb8bfd pre-dac gain/offsets: detect overflow & underflow
And output the maximum/minimum DAC code on overflow/underflow.
2023-09-19 18:49:20 +08:00
occheung 5c64eac8d2 relay: fix naming 2023-09-19 18:49:20 +08:00
occheung 477a7b693c remove debug for converter 2023-09-19 18:49:20 +08:00
occheung f2694f25eb re-impl ADC using general access methods 2023-09-19 18:49:20 +08:00
occheung 9e1447d104 adc: implement standby & power-down/up 2023-09-19 18:49:20 +08:00
occheung 870020bc9f adc: use a generous upper bound 2023-09-19 18:49:20 +08:00
occheung c2d136f669 shuttler: reorg SPI constants 2023-09-19 18:49:20 +08:00
occheung 06426e0ed9 shuttler: impl general reg access 2023-09-19 18:49:20 +08:00
occheung e443e06e62 shuttler: remove adc calibrate debug lines 2023-09-19 18:49:20 +08:00
occheung 55150ebdbb shuttler: fix calibration channel target 2023-09-19 18:49:20 +08:00
occheung eb08c55abe shuttler: add AFE drivers 2023-09-19 18:49:20 +08:00
occheung 67b6588d95 shuttler: implement gain & offset register access 2023-09-19 18:49:20 +08:00
occheung 1bb7e9ceef shuttler: support pre-DAC gain & offset 2023-09-19 18:49:20 +08:00
Florian Agbuya c02a14ba37 compiler: fix lit tests numpy.transpose error (#2190) 2023-09-18 22:11:46 +08:00
Simon Renblad 1f3b2ef645 dashboard.datasets: fix numpy objects in CreateEditDialog 2023-09-18 14:07:26 +08:00
linuswck 372008cb66 Firmware: AD9117 Add check presence of clk comment 2023-09-18 13:04:51 +08:00
linuswck 85abb1da2c Firmware: Set DACs RETIMER-CLK to Phase 1 Shuttler
- Intended to maintain the same pipeline latency across all DACs on Shuttler
- Force the RETIMER-CLK to be PHASE 1 on all DACs
- See Issue #2200 for details
2023-09-18 12:52:21 +08:00
David Nadlinger 9e5b62a6b1 gateware/targets/kasli: Only set DRTIO_ROLE in *Base classes [nfc]
kasli_generic uses the drtio_role setting to select the particular
*Generic class to use anyway.
2023-09-17 10:24:51 +08:00
David Nadlinger 22ab62324c gateware/targets/kasli: Set DRTIO_ROLE in {Master, Satellite}Base
These were introduced in 82bd913f63, and for Kasli only set from
the JSON description in the *Generic subclasses. Not all firmware
is built through that API, however, e.g. the CI system at the
University of Oxford. The missing attribute breaks artiq.build_soc.
2023-09-17 00:48:42 +01:00
David Nadlinger fc74b78a45 dashboard: Make Ctrl-Alt-W close non-docked applets only
I had introduced this in f11aef74b as a means of quickly cleaning up
after e.g. an exploratory session where a lot of transient applets were
opened from ndscan, or for a dashboard that has been running for a while
with CCBs enabled but without anybody actually working there.

It turns out that one usually wants the few docked applets to stay open,
as they were necessarily arranged manually at some prior point. And as a
corollary to the latter, if one did want to close them as well, doing so
manually would not be too onerous either.
2023-09-16 23:47:23 +01:00
Simon Renblad f01e654b9c gui.entries: fix RangeScan SpinBox size layouts 2023-09-16 16:06:45 +08:00
David Nadlinger e45dc948e9 setup.py: Add lmdb dependency
This has actually been a required dependency since
e710d4badd.
2023-09-15 17:25:45 +01:00
David Mak 460cbf4499 docs: Add section on untrusted substituters in Nix
Signed-off-by: David Mak <david.18.19.21@gmail.com>
2023-09-14 11:55:45 +08:00
Florian Agbuya 6df85478e4 scan: fix deprecated shuffle parameter in python 3.11 2023-09-13 12:24:44 +08:00
Jonathan Coates 5c85cef0c2
Allow indexing tuples in kernel code
This only allows for indexing with a constant value (e.g. x[0]).

While slices would be possible to implement, it's not clear how to
preserve type inference here. The current typing rule is:

  Γ ⊢ x : τ  Γ ⊢ a : Int  Γ ⊢ b : Int
  ------------------------------------
             Γ ⊢ x[a:b] : τ

However, tuples would require a different typing rule, and so we'd need
to defer type inference if τ is a tyvar. I'm not confident that this
won't change behaviour, so we leave as-is for now.

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-09-12 14:43:38 +01:00
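A brief sketch of the supported usage (illustrative, not from the commit):

```
from artiq.experiment import *

class TupleIndexing(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def run(self):
        x = (10, 2.5, True)
        n = x[0]              # constant index: allowed
        f = x[1] * 2.0        # constant index: allowed
        # Indexing with a runtime value (x[i]) or slicing (x[0:2]) remains
        # unsupported for tuples, per the commit message above.
```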
linuswck ccb140a929 Firmware: Add AD9117 DAC Startup Seq for shuttler 2023-09-11 15:07:47 +08:00
linuswck 7c8073c1ce Shuttler: Add DAC Data Interface Gateware
- Add Parallel DDR Data Interface for DAC
- Add MMCM to generate phase shifted DDR Clk(45 degree phase shift by default)
- Connect dac_interface to Shuttler Module
2023-09-11 11:37:13 +08:00
Florian Agbuya 2f3329181c flake: fix deprecated 'U' mode in outputcheck for python 3.11 2023-09-06 19:02:41 +08:00
Sebastien Bourdeauducq 1ec1ab0502 flake: update dependencies 2023-09-06 18:28:08 +08:00
linuswck b49fb841ce Firmware: EFC enables error led when going panic 2023-09-06 15:54:35 +08:00
Florian Agbuya a619c9f3c2 almazny: fix minor doc formatting 2023-09-06 14:12:09 +08:00
Florian Agbuya 0188f31f3a i2c: fix doc formatting 2023-09-05 17:00:27 +08:00
Florian Agbuya 4e770509db almazny: fix doc formatting 2023-09-05 17:00:27 +08:00
occheung 7f63bb322d disable DRTIO-over-EEM OSERDES until clock is stable
This asserts OOB reset on EFC.
2023-09-05 16:59:01 +08:00
occheung 5e5d671f4c kasli: add invoke order comments 2023-09-04 12:05:45 +08:00
occheung 98904ef4c3 kasli: construct DRTIO-EEM modules before adding RTIO 2023-09-04 12:05:45 +08:00
Sebastien Bourdeauducq 73ac414912 flake: update dependencies 2023-09-03 10:59:52 +08:00
occheung 838cc80922
EFC: Implement OOB reset 2023-09-03 10:25:08 +08:00
Simon Renblad 904afe1632 tools: remove trim param 2023-09-01 20:06:19 +08:00
Simon Renblad 01d777c977
dashboard/datasets: fix CreateEditDialog datatype cast (#2176) 2023-09-01 13:59:17 +08:00
Sebastien Bourdeauducq 9556ca53de flake: update dependencies 2023-08-31 17:43:28 +08:00
occheung df99450faa
shuttler: add pdq-based waveform generator 2023-08-30 23:38:39 +08:00
Sebastien Bourdeauducq 1f58cd505c flake: update dependencies 2023-08-30 15:39:46 +08:00
linuswck ddb2b5e3a1 efc: add shuttler DAC parallel data interface pads 2023-08-30 10:25:39 +08:00
linuswck b56f7e429a
drtio: rename drtio_transceiver to gt_drtio 2023-08-28 04:50:46 +00:00
Sebastien Bourdeauducq 3452d0c423 efc: use variant (expected everywhere else) 2023-08-25 15:52:40 +08:00
Sebastien Bourdeauducq 2139456f80 firmware: skip clock switch for efc 2023-08-25 15:06:42 +08:00
Sebastien Bourdeauducq a2a780a3f2 firmware: fix compilation warning 2023-08-25 15:06:02 +08:00
Sebastien Bourdeauducq 3620358f12 flake: build efc firmware 2023-08-25 13:34:56 +08:00
Sebastien Bourdeauducq 72b0a17542 flake: register firmware outputs as hydra build products 2023-08-25 13:25:22 +08:00
Sebastien Bourdeauducq f5cbca9c29 kasli: implement DRTIO-over-EEM 2023-08-25 12:47:33 +08:00
linuswck 737ff79ae7 eem: add efc 2023-08-25 12:01:17 +08:00
linuswck dc97d3aee6 drtio-eem: CONFIG_EEM_TRANSCEIVERS -> CONFIG_EEM_DRTIO_COUNT 2023-08-25 11:49:39 +08:00
Sebastien Bourdeauducq 5d38db19d0 drtio-eem: remove unnecessary rtio_rx clock domain 2023-08-25 11:32:28 +08:00
Sebastien Bourdeauducq 9bee4b9697 flake: update dependencies 2023-08-25 11:13:33 +08:00
linuswck cd22e42cb4
efc: add DRTIO virtual LEDs
- EFC Gateware: Add virtual_leds to rtio
- EFC Firmware: the io_expander keeps being serviced after init to
  update the virtual_leds
2023-08-23 06:21:14 +00:00
linuswck b7bac8c9d8 EFC: Add SPI Gateware for Shuttler DAC
- Verified by a functional test reading back the rev register
2023-08-23 09:04:16 +08:00
mwojcik e8818c812c satman: fix non-eem satellites failing to build 2023-08-22 16:32:59 +08:00
occheung 68dd0e029f targets: add efc target 2023-08-10 00:02:01 +00:00
occheung 64d3f867a0
add DRTIO-over-EEM PHY
for EFC and perhaps Phaser
2023-08-09 23:59:40 +00:00
Sebastien Bourdeauducq df662c4262 flake: update llvmlite 2023-08-07 23:02:23 +08:00
Sebastien Bourdeauducq d2ac6aceb3 flake: update to Clang 14 2023-08-07 18:45:13 +08:00
Sebastien Bourdeauducq 9b94a09477 flake: update to LLVM 14 2023-08-07 18:28:44 +08:00
David Nadlinger efbae51f9d runtime: Validate ksupport ELF against hard-coded address ranges
This would have caught the reduction in header padding with LLD 14.
In theory, we could just get rid of the hard-coded kernel CPU address
ranges altogether and use ksupport.elf as the one source of truth; the
code already exists in dyld. The actual base address of the file would
still need to be forwarded to the kernel-side libunwind glue, though,
as there doesn't seem to be a clean way to get the equivalent of
KSUPPORT_HEADER_SIZE through the linker script. I have left this as-is
with the hard-coded KERNELCPU_… constants for now.
2023-08-07 10:10:38 +00:00
David Nadlinger 8acfa82586 ksupport: Remove unused sections from linker script [nfc]
We no longer build ksupport.ld in a position-independent fashion, and
the reference to the ld.bfd _GLOBAL_OFFSET_TABLE issue was just a
distraction.
2023-08-07 10:10:38 +00:00
David Nadlinger 4d636ea593 Upgrade to LLD 14
Previous linker versions had inserted some zero padding bytes
between the ELF headers and the first section, but LLD 14 does
not anymore.

Hard-coding the offset of the first section in ksupport.elf
manually isn't ideal; we should probably parse the ELF program
headers instead when first setting up the kernel CPU.
2023-08-07 10:10:38 +00:00
Sebastien Bourdeauducq 3ed7e0ed06 flake: update dependencies 2023-08-07 17:52:42 +08:00
Simon Renblad c4259dab18
applets.simple: add kwargs to AppletControlRPC (#2155)
Co-authored-by: Simon Renblad <srenblad@m-labs.hk>
2023-08-05 11:38:07 +08:00
mwojcik c46ac6f87d spi2: update set_config_mu doc 2023-08-04 09:22:57 +00:00
linuswck 758b97426a Bootloader: SDRAM patch for EFC
- Modifying the CFG flag ensures that EFC initializes the DDRPHY correctly
Note that Kasli and EFC share the same model of SDRAM.
2023-08-02 02:18:45 +00:00
linuswck c206e92f29 Bootloader: Remove kusddrphy support for SDRAM
- Delete all the kusddrphy cfg flags and related code
2023-08-02 02:18:20 +00:00
linuswck cb547c8a46
efc: turn on power of FMC peripheral
- Add efc's io expander method
- Enable VADJ, P3V3_FMC in satman main during startup
2023-08-01 00:29:45 +00:00
linuswck 72a5231493
artiq_flash: add EEM FMC Carrier Board Support
- The code is derived from PR #2134 936f24f6bdf47e577d3e7d73a330797542596ba8
2023-07-25 11:14:19 +08:00
Denis Ovchinnikov 07714be8a7
jsonschema: add kasli_soc HW revision v1.1 2023-07-24 16:32:13 +08:00
Simon Renblad 361088ae72 tools: add trim argument to format funcs 2023-07-21 08:38:49 +00:00
Simon Renblad a384df17a4 docs: add unit and precision explainer 2023-07-21 08:15:39 +00:00
Simon Renblad 6592b6ea1d artiq_client: change set_dataset with units 2023-07-21 08:15:39 +00:00
Simon Renblad 2fb085f1a2 datasets: change dataset value entry with units 2023-07-21 08:15:39 +00:00
Simon Renblad a7569a0b2d tools: add scale_from_metadata helper func 2023-07-21 08:15:39 +00:00
Simon Renblad 4fbff1648c scientific_spinbox: rename precision to sig_figs 2023-07-19 07:01:24 +00:00
Simon Renblad 8f4c8387f9 entries: rename setPrecision to setSigFigs 2023-07-19 07:01:24 +00:00
Simon Renblad a2d62e6006 RELEASE_NOTES: deprecated ndecimals 2023-07-18 08:02:42 +00:00
Simon Renblad 3d0feef614 docs: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
Simon Renblad 59ad873831 examples: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
Simon Renblad 8589da0723 test_arguments: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
Simon Renblad 94e076e976 scan: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
Simon Renblad a0094aafbb entries: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
Simon Renblad 0befadee96 environment: rename ndecimals to precision 2023-07-18 08:02:42 +00:00
sven-oxionics b3dc199e6a Fix panic when receiving empty strings in rpc calls
Receiving an empty string in an RPC call currently panics.

When `length` is zero, a call to the `alloc` function (as implemented in `artiq/firmware/runtime/session.rs`) returns a null pointer. Constructing a `CMutSlice` from a null pointer panics.
A `CMutSlice` consists of a pointer and the length. Rust's documentation of the `core::ptr` module states: "The canonical way to obtain a pointer that is valid for zero-sized accesses is `NonNull::dangling`."
This commit adds a check on the length of a string received in an RPC call. A memory allocation is only performed for lengths greater than zero; for zero-length strings, a dangling pointer is used.

Test plan:
Invoke the following experiment, which returns an empty string over RPC:
```
import artiq.experiment                    # imports added; not in the original snippet
from artiq.experiment import *             # provides kernel, rpc, TStr
from artiq.coredevice.core import Core


class ReturnEmptyString(artiq.experiment.EnvExperiment):
    def build(self):
        self.core: Core = self.get_device("core")

    @kernel
    def run(self):
        x = self.do_rpc()
        print(x)

    @rpc
    def do_rpc(self) -> TStr:
        return ""
```

Signed-off-by: Sven Over (Oxford Ionics) <sven.over@oxionics.com>
2023-07-18 04:00:32 +00:00
Florian Agbuya d73889fb27 gui/experiments: cast Qt timestamp to int preventing float type error 2023-07-14 08:33:27 +00:00
Simon Renblad 9f8bb6445f RELEASE_NOTES: add breaking change data_changed signature 2023-07-12 08:28:28 +00:00
Simon Renblad 068a2d1663 progress_bar: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad 6c588b83d7 plot_xy_hist: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad c17f69a51b plot_xy: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad ac504069d2 plot_hist: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad b6a83904b5 image: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad 25959d0cd6 big_number: refactor data_changed 2023-07-12 08:28:28 +00:00
Simon Renblad 5695e9f77e simple: refactor TitleApplet data_changed signature 2023-07-12 08:28:28 +00:00
Simon Renblad fe0f6d8a2c simple: refactor SimpleApplet data_changed signature 2023-07-12 08:28:28 +00:00
Simon Renblad d1f2727126 simple: refactor RPC client set_dataset 2023-07-12 08:28:28 +00:00
Simon Renblad 16a3ce274f applets: add metadata param to set_dataset 2023-07-12 08:28:28 +00:00
Simon Renblad af7622d7ab simple: refactor IPC set_dataset 2023-07-12 08:28:28 +00:00
Jonathan Coates 9a84575649
eem_7series: fix typo in 77293d5
Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-07-11 23:09:15 +00:00
Simon Renblad faf85e815a datasets: add metadata to CreateEditDialog 2023-07-10 06:50:41 +00:00
Simon Renblad 3663a6b8e8 artiq_client: refactor set_dataset, show_datasets 2023-07-10 04:50:54 +00:00
Simon Renblad 91442e2914 browser: refactor upload_clicked for dataset metadata 2023-07-10 04:26:08 +00:00
Simon Renblad 50a6dac178 files: read dataset metadata from HDF5 2023-07-10 04:26:08 +00:00
Simon Renblad 5292a8de82 browser: add metadata param to short_format 2023-07-10 04:26:08 +00:00
Sebastien Bourdeauducq 7791f85a1a flake: update dependencies 2023-07-10 11:29:59 +08:00
Sebastien Bourdeauducq 48bc8a2ecc gtx_7series_init: GTH -> GTX (NFC) 2023-07-10 11:26:07 +08:00
Denis Ovchinnikov 93882eb3ce kasli-soc: fix of SYS CLK switch failure
Change initialization behaviour of GTX transceivers
--
Modify the CPLL config params of the GTX transceiver so that the PLL locks correctly
Modify the enabling requirement of GTX input clock buffer IBUFDS_GTE2 so
    that it depends on GTX PLL locked signal instead of TX Init Done
Modify the GTX Init FSM so that BruteForceClock Aligner can reset GTX
    transceiver without resetting the GTX transceiver PLL

kasli-soc: fix of SYS CLK switch failure
Changed initialization of GTX transceivers.
Successful SYS CLK switching requires IBUFDS_GTE2 to be properly enabled and not disabled during GTX transceiver initialization.
For this reason, CPLL is not reset during GTX initialization and clock alignment.

kasli-soc: refactor fix of SYS CLK switch failure
Remove gtXxreset & cpllreset assertion and deassertion
The removed code does not affect the fix.
2023-07-10 03:24:28 +00:00
Simon Renblad 7ca02a119d RELEASE_NOTES: update lmdb migrate script 2023-07-10 02:33:59 +00:00
Simon Renblad 373fe3dbe7 test_datasets: add metadata tests 2023-07-10 02:33:59 +00:00
Simon Renblad 1af98727b7 test_scheduler: refactor dataset metadata support 2023-07-10 02:33:59 +00:00
Simon Renblad 376f36c965 datasets: add metadata format param 2023-07-10 02:33:59 +00:00
Simon Renblad e710d4badd databases: read and save metadata in lmdb 2023-07-10 02:33:59 +00:00
Simon Renblad bfbe13e51b worker_db: write hdf5 dataset metadata 2023-07-10 02:33:59 +00:00
Simon Renblad bf38fc8b0f tools: refactor short_format with metadata 2023-07-10 02:33:59 +00:00
Simon Renblad 337273acb6 environment: add get_dataset_metadata 2023-07-10 02:33:59 +00:00
Simon Renblad 748707e157 environment: add unit feature 2023-07-10 02:33:59 +00:00
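A hedged example of the dataset unit/precision metadata these commits introduce (the keyword names below are assumed from the commit subjects above, not verified against the final API):

```
from artiq.experiment import *

class UnitDemo(EnvExperiment):
    def build(self):
        pass

    def run(self):
        # Store metadata alongside the value; tools and GUIs can use it
        # when formatting the dataset for display.
        self.set_dataset("pmt_voltage", 1.234, unit="V", precision=3, broadcast=True)
        print(self.get_dataset_metadata("pmt_voltage"))
```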
Leon Riesebos 833fd8760e artiq_ddb_template: use the clk_div field
This field exists in the JSON schema but was not used.

Signed-off-by: Leon Riesebos <28567817+lriesebos@users.noreply.github.com>
2023-06-29 03:29:18 +00:00
Florian Agbuya 454597915a RELEASE_NOTES: update 2023-06-17 05:01:02 +00:00
Sebastien Bourdeauducq 77293d53e3 json: use schema defaults when applicable 2023-06-16 16:59:08 +08:00
Sebastien Bourdeauducq a792bc5456 json: factor handling of deprecated 'base' 2023-06-16 16:32:42 +08:00
Sebastien Bourdeauducq 20d4712815 json: base -> drtio_role 2023-06-16 16:17:31 +08:00
Spaqin 82bd913f63
satellites: add kernel cpu 2023-06-16 15:44:31 +08:00
Sebastien Bourdeauducq 115415d120 Revert "flake: update to LLVM 14 and llvmlite 0.40.0+master"
This reverts commit c25c0bd55a.
2023-06-14 18:54:33 +08:00
Florian Agbuya d140c960bb
applets: implement dataset modification feature in big number applet 2023-06-12 17:52:46 +08:00
Egor Savkin c25c0bd55a
flake: update to LLVM 14 and llvmlite 0.40.0+master 2023-06-09 13:25:08 +08:00
Egor Savkin 30ef8d8cb4
compiler: skip demangling list of empty names 2023-06-09 13:24:10 +08:00
Florian Agbuya 7ad32d903a browser: add update method to dataset controller 2023-06-06 11:07:08 +00:00
Florian Agbuya bf46ce4a92 applets.simple: add mutate_dataset feature 2023-06-05 12:30:14 +00:00
den512is 1f306a2859
flake: add packaging dependency
Needed for building Kasli firmware
2023-06-05 13:17:47 +08:00
Florian Agbuya 150d325fc1 applets.simple: add append_to_dataset feature 2023-06-02 14:56:00 +00:00
Florian Agbuya c298ec4c2e applets: add update_dataset for dataset mods 2023-06-02 14:56:00 +00:00
Sebastien Bourdeauducq 69bf2dfb81 flake: sleep longer before running HITL tests to allow for clock switch and reboot 2023-06-02 17:41:15 +08:00
mwojcik 29cb7e785d fix missing DIFF_TERM for Sampler and Mirny inputs 2023-06-02 17:21:00 +08:00
Sebastien Bourdeauducq b97f6a9e44 bootloader: fix compilation warning without Ethernet 2023-06-02 10:48:55 +08:00
Sebastien Bourdeauducq e0ebc1b21d applets: fix some asyncio problems 2023-05-31 22:56:48 +08:00
Sebastien Bourdeauducq c6ddd3af17 applets: add controller and set_dataset API 2023-05-31 22:51:48 +08:00
Florian Agbuya e12219e803
gui: add handler for applet set_dataset 2023-05-31 14:08:14 +00:00
Sebastien Bourdeauducq ff11b5df71 flake: add qtsvg 2023-05-31 22:07:05 +08:00
Sebastien Bourdeauducq c8dc2cbf09 browser: decouple dataset controller from dataset dock 2023-05-31 21:57:54 +08:00
Sebastien Bourdeauducq c6b29b30fb Revert "flake: update to LLVM 14 and llvmlite 40"
This reverts commit 748969c21e.
2023-05-31 19:36:43 +08:00
Sebastien Bourdeauducq b20d09aad5 Revert "flake: export llvmlite-new"
This reverts commit fabe88065b.
2023-05-31 19:36:41 +08:00
Sebastien Bourdeauducq 6276182c96 Revert "flake: fix clang version in boards shell"
This reverts commit 9a6bc6dc7b.
2023-05-31 19:36:40 +08:00
Sebastien Bourdeauducq d103cbea31 libboard_misoc: fix clang STB_WEAK warning 2023-05-31 18:59:51 +08:00
Sebastien Bourdeauducq 9a6bc6dc7b flake: fix clang version in boards shell 2023-05-31 18:59:39 +08:00
Sebastien Bourdeauducq fabe88065b flake: export llvmlite-new 2023-05-30 16:54:59 +08:00
Egor Savkin 748969c21e
flake: update to LLVM 14 and llvmlite 40
Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-05-30 16:47:59 +08:00
Sebastien Bourdeauducq 75f6bdb6a1 flake: add boards dev shell 2023-05-30 16:21:06 +08:00
Sebastien Bourdeauducq 41caec797e flake: do not install ARTIQ itself in dev shell, only its dependencies
Otherwise, test runs take a long time when entering the shell, and failing tests prevent entering the shell, which is not what we want.
Also make jsonschema a regular dependency of ARTIQ, since users can now retrieve JSONs via AFWS.
2023-05-30 16:20:57 +08:00
Sebastien Bourdeauducq 953a8a9555 master: merge master_config and master_terminate 2023-05-30 15:55:19 +08:00
Sebastien Bourdeauducq 444bab2186 gui: datasets_sub -> dataset_sub (nfc) 2023-05-30 15:44:30 +08:00
Sebastien Bourdeauducq 0941d3a29a flake: update dependencies 2023-05-30 11:50:30 +08:00
Denis Ovchinnikov 22e2514ce6 update configuration of IBUFDS_GTE2
Input clock is terminated internally with 50 Ohm on each leg and to 4/5 MGTAVCC.
2023-05-30 11:42:51 +08:00
mwojcik a4895b591a analyzer: fix satellite behavior 2023-05-29 13:13:24 +08:00
Sebastien Bourdeauducq ef2cc2cc12 flake: buildFHSUserEnv -> buildFHSEnv 2023-05-27 18:03:18 +08:00
Sebastien Bourdeauducq 779810163f flake: fix rustPlatform deprecation warnings 2023-05-27 17:40:36 +08:00
Sebastien Bourdeauducq b9c7905b20 nixpkgs 23.05 2023-05-27 17:17:36 +08:00
Charles Baynham c2b0c97640 worker: Wait until datasets are written before quitting
Avoids a race condition in worker_impl.py where HDF5 dataset saving was
cut off before it finished for large datasets.
2023-05-23 21:48:56 +01:00
Sebastien Bourdeauducq 58cc3b8d0a kasli_generic: fix LooseVersion deprecation warning 2023-05-23 19:36:06 +08:00
Sebastien Bourdeauducq 598c7b1d25 flake: update qasync 2023-05-23 11:26:30 +08:00
Jonathan Coates ea9fe9b4e1
dma: fix off-by-one error in RawSlicer (#2090)
Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-05-23 11:15:39 +08:00
mwojcik c1d6fd4bbe satman analyzer: remove forgotten comment 2023-05-19 11:39:14 +08:00
mwojcik ab52748cac analyzer sat: disarm on drop 2023-05-19 11:39:14 +08:00
mwojcik ddfe51e7ac analyzer: use transactions for data transmission 2023-05-19 11:39:14 +08:00
mwojcik 6c96033d41 analyzer: implement querying up satellites for data 2023-05-19 11:39:14 +08:00
mwojcik 0b03126038 satman: support analyzer packets 2023-05-19 11:39:14 +08:00
mwojcik fdca1ab7fc drtioaux: add analyzer related messages 2023-05-19 11:39:14 +08:00
mwojcik c36b6b3b65 master: only local rtio events in analyzer 2023-05-19 11:39:14 +08:00
mwojcik c0ca27e6cf satellite: add rtio_analyzer, only for local rtio 2023-05-19 11:39:14 +08:00
Jonathan Coates 3ca47537b8 Fix mismatched signatures for the wide interface
Lists are passed by-reference from python code, and so should be
&CSlice<_> not CSlice<_>.

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-05-19 10:18:06 +08:00
Hartmann Michael (IFAG PSS SIS SCE QSE) df15f53ee9 doc: conda installation notes 2023-05-12 17:44:38 +08:00
Sebastien Bourdeauducq e015483e48 RELEASE_NOTES: add LMDB migration script (#1743) 2023-05-09 14:56:43 +08:00
Sebastien Bourdeauducq c53d333d46 almazny: fix parameter 2023-05-09 14:27:37 +08:00
Sebastien Bourdeauducq 5b94ce82e4 artiq_ddb_template: fix almazny 2023-05-09 14:27:15 +08:00
Sebastien Bourdeauducq 45cd438fb8 Almazny v1.2 support
Based on PR #2060 by Robert Jördens.
2023-05-09 12:54:48 +08:00
Sebastien Bourdeauducq 0e7e30d46e test: fix hardware testbench trying to write to ARTIQ_ROOT 2023-04-30 17:16:36 +08:00
Sebastien Bourdeauducq d5a7755584 test: improve tmpdir names 2023-04-30 17:15:34 +08:00
Sebastien Bourdeauducq 3ff0be6540 PEP440 compliant version numbers 2023-04-30 16:55:49 +08:00
Sebastien Bourdeauducq 8409a6bb94 update gitignore 2023-04-30 16:53:49 +08:00
Sebastien Bourdeauducq 2c1438c4b9 coredevice: add missing pattern to sampler_hw_rev 2023-04-30 16:07:56 +08:00
Egor Savkin 5199bea353
master: emit warning if datasets will not be stored 2023-04-30 15:22:21 +08:00
mwojcik a533f2a0cd rtio: SED, InputCollector use rio clock domain 2023-04-28 17:49:12 +08:00
Jonathan Coates 0bf57f4ebd Fix ADF5356 having RTIO channel names
The channel in this device refers to a channel on the mirny, not an RTIO
channel.
2023-04-24 20:05:14 +08:00
Sebastien Bourdeauducq 4417acd13b flake: update dependencies 2023-04-24 17:36:13 +08:00
Sebastien Bourdeauducq 4056168875 master: store datasets in LMDB (#1743) 2023-04-24 17:34:30 +08:00
Egor Savkin 9331911139
add tests for client submit functionality 2023-04-24 11:43:24 +08:00
Spaqin 2f35869eb1
satman: fix PMP and L2 flush 2023-04-20 15:45:15 +08:00
Egor Savkin aed47d79ff
master: add terminate API 2023-04-18 15:03:06 +08:00
mwojcik 918d30b900 dma: pass "uses_ddma" for non-remote recordings 2023-04-18 12:35:37 +08:00
Egor Savkin b5d9062ba9 Fix AD9914 channel map
Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-04-17 09:23:30 +08:00
Egor Savkin 8984f5104a Move RTIO errors formatting to the session_proto
This would be closer to the artiq-zynq implementation

Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-04-17 09:23:30 +08:00
Egor Savkin d0b8818688 Add 125 MHz from 80 MHz reference option to rtio clocking
Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-04-13 14:57:24 +08:00
Sebastien Bourdeauducq 757c00b0fe afws_client: improve UX of common build errors 2023-04-08 16:50:15 +08:00
Sebastien Bourdeauducq c1474c134a remove obsolete AFWS certificate 2023-04-07 16:09:47 +08:00
Sebastien Bourdeauducq dc3db8bb66 afws_client: WebSocket, system certificates 2023-04-07 16:03:33 +08:00
Sebastien Bourdeauducq 97161a3df2 firmware: improve RTIO map error reporting 2023-04-04 11:27:31 +08:00
Ikko Eltociear Ashimine 7ba06bfe61 fix typo in comm_analyzer.py
error_occured -> error_occurred
occured -> occurred
2023-04-02 09:17:37 +08:00
Spaqin b225717ddb
DDMA: documentation 2023-03-29 13:46:33 +08:00
mwojcik 696bda5c03 handle playback status in aux_transact 2023-03-28 14:18:29 +08:00
mwojcik 9150230ea7 dma: gate ddma features behind cfg(has_drtio) 2023-03-28 14:18:29 +08:00
Spaqin e9a153b985
runtime: implement distributed DMA 2023-03-22 11:16:25 +08:00
David Nadlinger 8b1f38b015 worker_impl: Remove misleading update() from ExamineDatasetMgr [nfc]
`update(mod)` would be on the DatasetDB, not the manager. Rather,
modifications currently just fail due to e.g. `set(…)` not being
defined.
2023-03-20 13:20:40 +08:00
Egor Savkin bbf80875fb
firmware: assume empty config records as removed (#2064)
This will return `KeyNotFound` for empty values, which are produced by the `remove` operation

Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-03-13 18:18:26 +08:00
Egor Savkin 1ca09b9484
Add support for default route (IPv4 and IPv6) (#2059)
Based on code by Michael Birtwell <michael.birtwell@oxionics.com>
2023-03-13 17:29:10 +08:00
Spaqin 84e7515721
satman: distributed DMA support 2023-03-11 18:36:36 +08:00
Ikko Eltociear Ashimine 15c18bdc81 fix typo in developing_a_ndsp.rst
occurence -> occurrence
2023-03-11 18:32:14 +08:00
Sebastien Bourdeauducq a9360823b1 libproto: remove obsolete Jdac packets 2023-03-02 20:29:09 +08:00
Egor Savkin 1ec0abbfcf Add Urukul PLL bypass option to the JSON
Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-03-01 19:05:16 +08:00
mwojcik 90a6fe1c35 satellite: add dma to gateware 2023-02-23 17:33:23 +08:00
mwojcik d0437f5672 rtio core: fix minimum_coarse_timestamp 2023-02-22 10:44:25 +08:00
Michael Hartmann 07d684a35d doc: Add jsonschema to nix package list
Add jsonschema to the nix package list as it is required by
artiq_ddb_template.
2023-01-31 18:24:15 +08:00
Michael Hartmann 2371c825f5 doc: Add remark about FTDI drivers
Add a remark that on Windows you might need to install the FTDI drivers
first before you can connect to the serial port.
2023-01-26 21:05:47 +08:00
Egor Savkin 394138f00f
firmware: block session on startup kernel to be finished (#2046) 2023-01-19 16:46:53 +08:00
Sebastien Bourdeauducq 3f5cc4aa10 RELEASE_NOTES: fix formatting 2023-01-19 16:42:52 +08:00
Sebastien Bourdeauducq e9c65abebe manual: fix Nix flakes installation. Closes #2036 2023-01-15 13:03:15 +08:00
Sebastien Bourdeauducq 20e8f17b3d artiq_ddb_template: fix mistake in 18524911 2023-01-15 12:27:13 +08:00
Sebastien Bourdeauducq 57e87c9717 sampler: fix mistake in c591e7e3 2023-01-15 12:27:10 +08:00
Sebastien Bourdeauducq 248cd69673 flake: use nixpkgs cargo-xbuild 2023-01-12 18:03:46 +08:00
Sebastien Bourdeauducq b8968262d7 Merge branch 'syncrtio' 2023-01-12 16:44:54 +08:00
Sebastien Bourdeauducq babbbfadb3 update release notes 2023-01-12 13:12:05 +08:00
Sebastien Bourdeauducq 514ac953ce remove obsolete SI5324_AS_SYNTHESIZER config option 2023-01-12 13:01:08 +08:00
Sebastien Bourdeauducq 0a37a1a4c1 Merge branch 'syncrtio' 2023-01-12 12:58:19 +08:00
Sebastien Bourdeauducq 6d37d9d52c gui/state: fix asyncio loop management 2023-01-12 12:41:08 +08:00
Sebastien Bourdeauducq 5f77d4f5fa applets: fix asyncio loop management 2023-01-12 12:35:02 +08:00
Sebastien Bourdeauducq 2f289c552f remove unused import 2023-01-12 12:18:17 +08:00
Sebastien Bourdeauducq 9e8bb3c701 browser,dashboard: fix asyncio loop management 2023-01-12 12:17:16 +08:00
Sebastien Bourdeauducq d872c3ab4d aqctl_moninj_proxy: fix asyncio loop management 2023-01-12 12:16:53 +08:00
Sebastien Bourdeauducq f8d93813e9 aqctl_corelog: fix asyncio loop management 2023-01-12 10:52:26 +08:00
Sebastien Bourdeauducq 628b671433 update copyright year 2023-01-12 10:41:10 +08:00
Sebastien Bourdeauducq daad3d263a master: commit missing part of 7fd6dead8 2023-01-12 10:39:53 +08:00
Sebastien Bourdeauducq 80f261437a flake: update dependencies 2023-01-11 18:47:30 +08:00
Sebastien Bourdeauducq 7fd6dead8f master: fix asyncio loop management 2023-01-11 18:46:54 +08:00
Sebastien Bourdeauducq 73a4ef89ec scheduler: make asyncio loop a keyword-only argument, like in other asyncio APIs 2023-01-11 18:45:35 +08:00
mwojcik 70edc9c5c6 test_write_underflow: decrease underflow delay 2023-01-11 12:02:51 +08:00
mwojcik 9042426872 echo test: add two more yields 2023-01-11 12:02:51 +08:00
mwojcik cd860beda2 test_full_stack: restore missing check_ttls 2023-01-11 12:02:51 +08:00
mwojcik 627504b60e test_dma: remove redundant clock 2023-01-11 12:02:51 +08:00
Sebastien Bourdeauducq c8ab6c1b2b test_worker: fix asyncio event loop management 2023-01-10 12:36:33 +08:00
Sebastien Bourdeauducq a96bbd8508 test_scheduler: fix asyncio event loop management 2023-01-10 12:30:08 +08:00
Sebastien Bourdeauducq 6cfd1480a7 scheduler: support passing event loop 2023-01-10 12:26:24 +08:00
Sebastien Bourdeauducq c401559ed5 flake: update dependencies 2023-01-10 12:26:00 +08:00
Sebastien Bourdeauducq ea21f474a7 gateware: remove SAWG simulations 2023-01-09 18:37:19 +08:00
Sebastien Bourdeauducq cee9f3f44e flake: run gateware simulations 2023-01-09 18:36:55 +08:00
Sebastien Bourdeauducq b9bfe090f4 flake: cleanup 2023-01-09 18:23:36 +08:00
mwojcik eb3742fb08 kc705: do not reset si5324 during clock switch 2023-01-09 18:18:21 +08:00
Egor Savkin 070fed755b
firmware: unify RTIO error message format 2023-01-09 16:13:05 +08:00
Sebastien Bourdeauducq 63f1a6d197 drtio: partially fix tests 2023-01-06 18:33:13 +08:00
Sebastien Bourdeauducq 7dafdfe2f7 artiq_flash: fix bit2bin 2023-01-06 18:24:00 +08:00
Sebastien Bourdeauducq ec893222a4 rtio: remove support for async mode 2023-01-06 18:22:05 +08:00
Sebastien Bourdeauducq 573a895c1e remove RTIOClockMultiplier 2023-01-06 17:59:18 +08:00
Sebastien Bourdeauducq cf2a4972f7 remove WRPLL 2023-01-06 17:53:11 +08:00
Sebastien Bourdeauducq 668997a451 flake: update dependencies 2023-01-06 17:49:13 +08:00
Sebastien Bourdeauducq 5da9794895 remove Sayma and Metlino support 2023-01-06 17:41:12 +08:00
Spaqin 3838dfc1d1
DRTIO: RTIO/SYS clock merge, KC705 2023-01-06 07:13:38 +08:00
Sebastien Bourdeauducq 1be7e2a2e1 doc: duplicate nixConfig 2023-01-04 15:13:55 +08:00
Sebastien Bourdeauducq 1bf7188dec gui: update version in logo 2023-01-04 15:07:56 +08:00
mwojcik bdae594c79 suservo experimentals: fix rtio ch name changes 2023-01-04 14:56:18 +08:00
mwojcik 8dc6902c23 AD9912: Add PLL bypass option (pll_en) like AD9910 2022-12-21 13:34:31 +08:00
Norman Krackow dbb77b5356
artiq_sinara_tester: change mirny frequencies 2022-12-21 09:47:47 +08:00
Sebastien Bourdeauducq 1fc127c770 fix default version 2022-12-20 12:56:43 +08:00
David Nadlinger 88684dbd2a test_embedding: Fix up spelling in FIXME comment [nfc] 2022-12-19 01:02:51 +00:00
David Nadlinger b9f13d48aa firmware: Fix object references in tuples
Since 8740ec3dd, the alignment() information from
"run-time type information" (i.e. the Tag type) is also
used when sending tuples to the host.
2022-12-19 00:57:46 +00:00
David Nadlinger 4bb2a3b9e0 RELEASE_NOTES: Two typo/formatting fixes 2022-12-18 17:26:58 +00:00
David Nadlinger f5c408d8d9 RELEASE_NOTES: Fix up punctuation 2022-12-18 17:11:36 +00:00
Sebastien Bourdeauducq 4be7f302e4 flake: vivado 2022.2 2022-12-18 16:51:02 +08:00
Spaqin 17efc28dbe
DRTIO: RTIO/SYS clock merge 2022-12-17 15:39:54 +08:00
David Nadlinger 1e0102379b firmware: Rename si5324 crystal_{ref -> as_ckin2} [nfc]
This would have made the issue in the pre-740543d4e code
much more obvious (the config option by itself does not
have any effect on the choice of active reference input).
2022-12-17 02:17:12 +00:00
David Nadlinger ceabeb8d84 firmware: Fix Si5324 initialisation for satellites
Commit 740543d4e2 had unintentionally broken DRTIO
satellites, as si5324::setup is also used there. This
imports setup_si5324_as_synthesizer() from artiq-zynq,
where the input selection was already explicitly done.

GitHub: Fixes #2028.
2022-12-17 02:17:06 +00:00
SingularitySurfer 8e476dd502 implement pca9539 and runtime io-expander chip selection
better comments and address translation

fix spurious };

unwrap init in runtime and return err instead of panic

propagate error

del unnecessary use

Signed-off-by: SingularitySurfer <Norman_Krackow@gmx.de>
2022-12-14 22:46:38 +08:00
David Nadlinger 874d298ceb master/scheduler: Unbreak submitting from repository
This is a fix-up to commit 2a58981822.
2022-12-13 14:58:23 +00:00
Egor Savkin d75ade7be6 Fix rtiomap failure on device aliases
Signed-off-by: Egor Savkin <es@m-labs.hk>
2022-12-13 17:21:10 +08:00
Egor Savkin 2a58981822 Scheduler: replace relative path to absolute
Signed-off-by: Egor Savkin <es@m-labs.hk>
2022-12-09 21:43:36 +08:00
Egor Savkin e80442811e
worker_impl: do not write results without rid (#2020) 2022-12-09 16:18:28 +08:00
Egor Savkin 12649720f1 browser: read artiq_version from HDF5 as string
Signed-off-by: Egor Savkin <es@m-labs.hk>
2022-12-07 16:39:19 +08:00
Egor Savkin 454ae39c5d
firmware: fix crash on exception with host message (#2017) 2022-12-07 10:41:43 +08:00
David Nadlinger 3c7a394eff runtime/rtio_clocking: Deduplicate/document input selection [nfc] 2022-12-04 04:21:44 +00:00
David Nadlinger 740543d4e2 firmware: Fix Kasli v2 runtime rtio_clock selection
SI5324_EXT_REF now only controls the (deprecated) fallbacks
for when the rtio_clock option is not set.
2022-12-04 02:23:38 +00:00
Egor Savkin b2b559e73b
browser: tolerate missing HDF5 metadata 2022-12-02 16:30:58 +08:00
Egor Savkin 1852491102
add channel names to RTIO errors 2022-12-02 16:27:03 +08:00
Egor Savkin c591e7e305
sampler: fix reference voltage of recent hardware 2022-12-02 10:45:40 +08:00
David Nadlinger 261dc6b933 firmware/runtime: Fix Ext0_Synth0_*to125 log messages 2022-12-02 01:37:56 +00:00
David Nadlinger 1abedba6dc coredevice/fastino: Fix stray punctuation [nfc] 2022-12-01 12:11:35 +00:00
Egor Savkin aa2febca53
browser: fix dummy device creation failure on analyze 2022-12-01 17:45:02 +08:00
Egor Savkin d60a96a715 Fix deprecated warnings on nix develop
Signed-off-by: Egor Savkin <es@m-labs.hk>
2022-12-01 17:33:18 +08:00
wlph17 3f93f16955
manual: add msys2 openocd instructions (#2014) 2022-12-01 17:23:51 +08:00
Sebastien Bourdeauducq 3735b7ea9d Revert "flake: update cargo-xbuild"
This reverts commit 195d2aea6a.
2022-11-30 22:19:27 +08:00
Sebastien Bourdeauducq 195d2aea6a flake: update cargo-xbuild 2022-11-30 21:48:25 +08:00
Sebastien Bourdeauducq 6d179b2bf5 flake: nixos 22.11 2022-11-30 21:36:36 +08:00
Sebastien Bourdeauducq 275b00bfc2 flake: fix libcrypt.so.1 not found by vivado 2022-11-30 11:26:23 +08:00
Jonathan Coates b8b6ce14cc Update smoltcp to 0.8.2
This fixes an issue where TCP issues are not retransmitted when only
some packets in a burst are acknowledged. This causes smoltcp to never
make progress and hang.

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2022-11-28 22:10:23 +08:00
Nico Pulido 88c5109627 language: check_unprocessed_arguments after constructing experiment
Signed-off-by: Nico Pulido-Mateo <pulido@iqo.uni-hannover.de>
2022-11-27 02:29:57 +00:00
David Nadlinger dee154b35b compiler: Add missing sections to kernel linker script
This caused sporadic LoadFaults with LLD 14 and above, as they
happened to lay out the (not otherwise mentioned) GOT/PLT such
that they would overlap with the stack guard page.

LLD does support the --orphan-handling=error option, which
would be useful to avoid similar problems in the future, but
then we'd need to mention all the other misc sections
(symbol table, comments) in the linker script as well.

GitHub: Fixes #1975.
2022-11-24 16:57:31 +00:00
David Nadlinger 950b9ac4d6 firmware: More explicit panic message if stack guard is tripped
This should give even only mildly technical users a
chance to figure out what's going on, which empirically
is not the case for a plain Exception(LoadFault) without
further context.
2022-11-24 16:54:49 +00:00
Egor Savkin 6c47aac760
dashboard: merge create dataset and edit dataset features 2022-11-23 18:22:53 +08:00
mwojcik f2c1e663a7 regenerate suservo_coherent patch with var_urukul base 2022-11-23 17:22:26 +08:00
Egor Savkin f7f027001e
compiler: insert new lines into long synthesized code (#1986) 2022-11-23 12:10:32 +08:00
David Nadlinger 0b3c232819 language: Clarify error message for unprocessed arguments
"Unexpected argument(s)" would be another less ambiguous,
shorter phrasing.
2022-11-22 11:26:07 +00:00
Etienne Wodey d45f9b6950 ddb_template: propagate fastino log2_width setting
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2022-11-17 10:54:37 +08:00
Sebastien Bourdeauducq 2fe02cee6f doc: MSYS2 packages 2022-11-15 19:32:06 +08:00
Sebastien Bourdeauducq 404f24af6b compiler: set lld emulation explicitly 2022-11-15 11:20:06 +08:00
David Nadlinger 3d25092cbd firmware/rpc_proto: Remove unnecessary cast [nfc] 2022-11-14 22:50:38 +00:00
David Nadlinger dbbe8e8ed4 firmware/rpc_proto: Fix typo breaking receiving of arrays
This was introduced in 8740ec3dd5.
2022-11-14 22:49:45 +00:00
David Nadlinger 8740ec3dd5 firmware/rpc_proto: Fix size/alignment calculation for structs with tail padding
Also factors out duplicate code for (de)serializing
elements of lists and ndarrays, and replaces the rounding
calculations by the well-known, much faster power-of-two-only
bit-twiddling version.

GitHub: Fixes #1934.
2022-11-14 11:37:45 +08:00
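For reference, the power-of-two-only round-up idiom the message refers to looks like this (shown in Python for illustration; the firmware's version is in Rust):

```
def align_up(size, align):
    # Valid only when align is a power of two; equivalent to
    # ceil(size / align) * align without the division.
    return (size + align - 1) & ~(align - 1)

assert align_up(13, 8) == 16
assert align_up(16, 8) == 16
assert align_up(0, 4) == 0
```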
David Nadlinger 6caa779c74 firmware/ksupport: Include .gcc_except_table (LSDA)
For whatever reason, no language-specific unwind data
was generated for ksupport code so far, but rustc does
emit it for an upcoming refactoring.
2022-11-14 11:37:45 +08:00
David Nadlinger 4819016a3c firmware/ksupport: Document rpc_recv alignment requirements [nfc] 2022-11-14 11:37:45 +08:00
David Nadlinger 00a27b105a compiler: Extract maximum alignment from target data layout
In particular, i64/double are actually supposed to be aligned
to their size on RISC-V (at least according to the ELF psABI),
though it is unclear to me whether this actually caused any
issues.
2022-11-14 11:37:45 +08:00
David Nadlinger beff15de5e compiler/targets: Fix refactoring leftover for native (host) target
It's unclear whether this actually caused any issues, or why this
wasn't done before (instead just setting the now-removed endianness
flag).
2022-11-14 11:37:45 +08:00
火焚 富良 defc69d9c3
compiler: fix const str/bytes handling (#1990) 2022-11-11 13:15:50 +08:00
火焚 富良 e2178f6c86
Fix GUI log issues introduced by #1950 2022-11-09 16:55:17 +08:00
Sebastien Bourdeauducq f3f068036a use maintained fork of python-Levenshtein 2022-11-03 21:24:49 +08:00
mwojcik ad000609ce simplify tsc with no rtio/sys clk distinction 2022-11-01 08:12:54 +08:00
mwojcik af0b94bb34 rtio_clock: remove 150MHz support 2022-11-01 08:12:54 +08:00
mwojcik 5cd57e8688 rtio_clocking: switch clocks and reboot 2022-11-01 08:12:54 +08:00
mwojcik f8eb695c0f dma test: no more rsys or rtio domains 2022-11-01 08:12:54 +08:00
mwojcik 458bd8a927 kasli_generic: remove rtio clockdomain reference 2022-11-01 08:12:54 +08:00
mwojcik a6856a5e4a rtio: remove rtio clock, use sys instead 2022-11-01 08:12:54 +08:00
mwojcik 1eb87164be kasli: remove rtiocrg, use rtio/sys merge 2022-11-01 08:12:54 +08:00
Sebastien Bourdeauducq f75ddf78b0 dashboard: restore connection/version message 2022-10-21 19:17:00 +08:00
Sebastien Bourdeauducq e0b1098bc0 dashboard: remove incorrect moninj proxy message 2022-10-21 19:13:47 +08:00
Robert Jördens e5c621751f
Merge pull request #1962 from quartiq/miqro
Support MIQRO mode for Phaser
2022-10-19 16:56:02 +02:00
Robert Jördens 07db770423 phaser: fix tester 2022-10-19 16:54:00 +02:00
Robert Jördens eb7a0714b3 literal copy paste error 2022-10-19 16:44:44 +02:00
Robert Jördens e15b5b50d8 phaser: tweak docs, relax slack 2022-10-19 16:42:03 +02:00
Robert Jördens 1820e1f715 phaser: cleanup 2022-10-19 16:25:33 +02:00
Robert Jördens 118b7aca1d
Merge pull request #1980 from FabianSchwartau/fix_phaser_init_delays
Fixed two too low delay values in Phaser init
2022-10-19 15:55:11 +02:00
Fabian Schwartau d5e267fadf Fixed two too low delay values in Phaser init
Signed-off-by: Fabian Schwartau <fabian@opencode.eu>
2022-10-19 15:45:45 +02:00
Sebastien Bourdeauducq 286f151d9a flake: switch to upstream llvmlite 2022-10-19 13:05:51 +08:00
Sebastien Bourdeauducq 19b8d28a2e flake: update dependencies 2022-10-10 17:58:20 +08:00
Sebastien Bourdeauducq 3ffbc5681e flake: update dependencies, enable misoc tests 2022-10-08 13:31:52 +08:00
Sebastien Bourdeauducq 192cab887f afws_client: update 2022-10-07 11:39:36 +08:00
wlph17 9846ee653c
flake: set Nix Qt environment variables in development shell
allows applets to run standalone via ``python -m ...`` without requiring the Nix Qt wrapper
2022-10-07 11:31:43 +08:00
fanmingyu212 56e6b1428c llvm: change addr2line to symbolizer
`llvm-addr2line` is not included as part of the LLVM binary package for Windows. This causes ARTIQ Python compilation issues when conda is not used (so the `llvm-tools` conda package, which currently provides `llvm-addr2line`, is not installed).
2022-10-04 09:35:56 +08:00
Michael Birtwell b895846322 Improve exception reports when exception can't be reconstructed
ARTIQ assumes that all exceptions raised by the kernel can be constructed with
a single string argument. This isn't always the case, especially for
exceptions that originated in Python and were propagated to the kernel over
RPC.

Without this change, a mosek solver failure looks like:
```
ERROR    root:logging_tools.py:41 Terminating with exception (TypeError: __init__() missing 1 required positional argument: 'msg')
Traceback (most recent call last):
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/master/worker_impl.py", line 540, in main
    exp_inst.run()
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/test_tools/experiment.py", line 82, in wrapper
    meth()
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/language/core.py", line 54, in run_on_core
    return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/core.py", line 152, in run
    self.comm.serve(embedding_map, symbolizer, demangler)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 720, in serve
    self._serve_exception(embedding_map, symbolizer, demangler)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 699, in _serve_exception
    python_exn = python_exn_type(
TypeError: __init__() missing 1 required positional argument: 'msg'
```

With this change we get:
```
ERROR    root:logging_tools.py:41 Terminating with exception (RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg'))
Core Device Traceback:
Traceback (most recent call first):
  File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 280, in get_transport
    return self.seq.solve()
  File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 288, in artiq_worker_test_end_to_end.TransportTestScan.run(..., ...) (RA=+0x2e4)
    self.seq.record(self.get_transport(1e-6 + 1e-7 * x))
mosek.Error(27): rescode.err_license_expired(1001): The license has expired.

End of Core Device Traceback
Traceback (most recent call last):
  File "/home/mb/oxionics/artiq/artiq/master/worker_impl.py", line 540, in main
    exp_inst.run()
  File "/home/mb/oxionics/artiq/artiq/test_tools/experiment.py", line 82, in wrapper
    meth()
  File "/home/mb/oxionics/artiq/artiq/language/core.py", line 54, in run_on_core
    return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
  File "/home/mb/oxionics/artiq/artiq/coredevice/core.py", line 152, in run
    self.comm.serve(embedding_map, symbolizer, demangler)
  File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 732, in serve
    self._serve_exception(embedding_map, symbolizer, demangler)
  File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 714, in _serve_exception
    raise python_exn
RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg')
```

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-09-26 20:25:13 +08:00
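The new behaviour described above can be illustrated with a short, hedged sketch (not the actual comm_kernel code): attempt to rebuild the host exception from a single message string, and fall back to a RuntimeError carrying the original type and the reconstruction error, matching the report shown in the second traceback.

```python
# Hedged sketch of the fallback; names are illustrative, not ARTIQ internals.
def reconstruct_exception(python_exn_type, message):
    try:
        # The common case: the exception type accepts a single string argument.
        return python_exn_type(message)
    except Exception as constructor_error:
        # Degrade gracefully instead of crashing the worker.
        return RuntimeError(
            "Exception type={}, which couldn't be reconstructed ({})".format(
                python_exn_type, constructor_error))
```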
Robert Jördens a1a4545ed4 docs: fix syntax 2022-09-23 16:22:21 +02:00
Robert Jördens a0053f7a2b add release note 2022-09-23 15:57:43 +02:00
Robert Jördens 740f3d220b refine/fixes 2022-09-23 13:39:49 +00:00
Robert Jördens 513f9f00f3 miqro: document coredevice driver 2022-09-23 12:59:21 +00:00
Robert Jördens 5cfa8d9a42 add tester support, refactor gateware mode 2022-09-23 11:54:40 +00:00
Robert Jördens 0e4a87826c return pulse support 2022-09-20 14:35:06 +00:00
Sebastien Bourdeauducq 1709cf9717 afws_client: update 2022-09-19 16:58:41 +08:00
Sebastien Bourdeauducq 4266beeb9c experimental-features: rename patches to be compatible with AFWS server sanitize() 2022-09-19 16:57:53 +08:00
mwojcik c955ac15ed dashboard moninj: add tooltip for off button 2022-09-19 10:19:54 +08:00
mwojcik 81ef484864 dashboard moninj: check if ad9910 was init 2022-09-19 10:19:54 +08:00
mwojcik f2c3f95040 moninj: fix ad9914 behavior, comment cleanup 2022-09-19 10:19:54 +08:00
mwojcik 616ed3dcc2 moninj: dds inj: extract shared code
detect urukul already init in more than one way
detect ad9912 channel already init
2022-09-19 10:19:54 +08:00
Robert Jördens aedcf205c7 miqro: docs 2022-09-16 12:15:13 +00:00
Robert Jördens 14ab1d4bbc miqro format change: encode len, not end 2022-09-15 11:02:59 +00:00
Sebastien Bourdeauducq a028b5c9f7 afws_client: update 2022-09-15 09:15:38 +08:00
Sebastien Bourdeauducq 6085fe3319 experimental-features: add SU Servo coherent phase tracking mode (PR #1467) 2022-09-13 09:37:26 +08:00
Robert Jördens af28bf3550 simplify dt reset 2022-09-08 08:39:48 +02:00
Robert Jördens 4df880faf6 clean up docs 2022-09-08 08:38:26 +02:00
Robert Jördens 857fb4ecec spelling 2022-09-06 20:44:47 +00:00
Robert Jördens a91836e5fe easier fix for dt 2022-09-06 20:26:50 +00:00
Robert Jördens c5c5c30617 add set_window(), clean up api 2022-09-06 16:05:10 +00:00
Robert Jördens 27e3c044ed fix dt computation 2022-09-06 14:32:57 +00:00
Robert Jördens c26fa5eb90 err out on tune_fifo_offset 2022-09-05 20:48:15 +00:00
Sebastien Bourdeauducq 411afbdc23 experimental-features: add SU Servo extension for variable number of Urukuls (PR #1782) 2022-09-05 11:53:09 +08:00
Sebastien Bourdeauducq b4287ac9f4 flake: add experimental feature support 2022-09-05 11:48:43 +08:00
Robert Jördens 1cc57e2345 fix len 2022-09-04 21:00:24 +00:00
Robert Jördens 263c2751b3 add profile_mu 2022-09-04 20:43:28 +00:00
Robert Jördens 876f26ee30 add some docs 2022-09-04 19:56:52 +00:00
Robert Jördens fa3678f8a3 mem auto increment 2022-09-04 12:03:44 +00:00
Robert Jördens f4d325112c reset and elaborate, si functions 2022-09-04 11:19:38 +00:00
Robert Jördens b6586cd7e4 add window data delay 2022-09-02 20:45:13 +00:00
Robert Jördens 3809ac5470 fix type, clean clear 2022-09-02 19:47:06 +00:00
Robert Jördens b9727fdfce refactor for 32 bit mem access 2022-09-02 16:38:53 +00:00
Robert Jördens d6d0c2c866 miqro: name register constants 2022-09-02 15:55:28 +00:00
Robert Jördens 0df2cadcd3 fixes 2022-09-02 15:29:36 +00:00
Robert Jördens 25c0dc4688 whitespace 2022-09-02 14:54:18 +00:00
Robert Jördens cf48232a90 fixes 2022-09-02 14:38:38 +00:00
Robert Jördens a20087848d differentiate phaser modes 2022-09-02 11:03:23 +00:00
Robert Jördens 31663556b8 phaser: add miqro mode 2022-09-02 09:32:06 +00:00
Robert Jördens 47f90a58cc add miqro phy 2022-09-02 09:32:06 +00:00
Mikołaj Sowiński 3c7ab498d1 Added DDS selection for Kasli tester variant
Signed-off-by: Mikołaj Sowiński <msowinski@technosystem.com.pl>
2022-09-02 17:14:23 +08:00
Deepskyhunter 7c306d5609
GUI log: Apply level and text filter to existing log messages (#1950) 2022-08-29 15:20:44 +08:00
mwojcik b705862ecd afws_client: fix argument order 2022-08-25 13:17:41 +08:00
fanmingyu212 20cb99061e doc: updates artiq_flash syntax in developing.rst 2022-08-25 07:03:26 +08:00
Sebastien Bourdeauducq 5ef94d30dd versioneer: fix default 2022-08-18 14:35:58 +08:00
kk1050 3c72b8d646
dashboard: use break_realtime instead of reset for Urukul set freq (#1940) 2022-08-16 14:02:01 +08:00
Sebastien Bourdeauducq 27397625ba dashboard: improve moninj logging 2022-08-12 13:41:05 +08:00
cc78078 3535d0f1ae
kasli: relocate the SatelliteBase Error LED code (#1955) 2022-08-12 12:41:50 +08:00
cc78078 185c91f522
kasli: add Error LED to MasterBase and SatelliteBase 2022-08-11 15:06:58 +08:00
Deepskyhunter f31279411e
dashboard/moninj: make arguments a dict for DDS setters 2022-08-02 17:09:56 +08:00
Alex Wong Tat Hang a3ae82502c
gtx_7series: fix IBUFGS_GTE2 buffer parameters
Co-authored-by: topquark12 <aw@m-labs.hk>
2022-08-01 10:21:28 +08:00
Deepskyhunter 0cdb06fdf5
language/environment: support no argument manager
unbreak tests
2022-07-28 17:55:25 +08:00
Deepskyhunter 2a7a72b27a
language.environment: error out if unknown arguments are passed (#1927)
Closes #1641
2022-07-26 10:42:03 +08:00
kk1050 748e28be38
artiq_flash: bail out if scan chain is wrong
Due to OpenOCD limitations, there currently doesn't seem to be a better way of doing it. Upstream patch may be coming.
2022-07-26 09:49:48 +08:00
Sebastien Bourdeauducq 4b1715c80b typo 2022-07-21 11:58:25 +08:00
Robert Jördens 5985595845
Merge pull request #1933 from quartiq/nk/phaser-servo
Nk/phaser servo
2022-07-11 14:36:25 +02:00
Robert Jördens a8f498b478
Merge branch 'master' into nk/phaser-servo 2022-07-11 14:35:25 +02:00
Sebastien Bourdeauducq db4bccda7e flake: bump major version 2022-07-08 18:49:40 +08:00
Sebastien Bourdeauducq 5c461443e4 flake: update dependencies 2022-07-08 17:52:58 +08:00
Sebastien Bourdeauducq cb711e0ee3 edit ARTIQ-7 release notes 2022-07-08 17:51:02 +08:00
Sebastien Bourdeauducq 9ba239b8b2 flake: add aarch64 openocd package 2022-07-08 11:35:17 +08:00
Robert Jördens 4ea11f4609 RELEASE_NOTES: update servo note 2022-07-07 16:03:35 +02:00
SingularitySurfer 57ac6ec003 add release note 2022-07-07 15:57:08 +02:00
Robert Jördens d2dacc6433 Merge branch 'master' into nk/phaser-servo-clean
* master: (25 commits)
  flake: update rpi-1 host key
  aqctl_moninj_proxy: clear listeners on disconnect
  Add method to check if termination is requested (#811, #1932)
  moninj: fix underflows by order of operation fix channel toggle
  moninj: fix underflows for urukul freq set
  Urukul monitoring (#1142, #1921)
  moninj: make receive_task private again
  moninj,corelog: fix/cleanup exception handling (#1897)
  aqctl_corelog: enable keepalive, terminate on connection failure
  Modify log for matching the style
  Add log message when dashboard connected to proxy
  Public receive_task for the use in proxy
  applets.simple: Actually forward dataset_prefixes when using IPC
  master: Fixup 32db6ff978 (argument_ui support)
  Revert "add pull.yml (#1918)"
  add pull.yml (#1918)
  Allow experiments to specify a custom argument editor UI (#1916)
  dashboard: Add submit/close hooks for custom argument editors
  dashboard: Plumb through datasets client to ExperimentManager
  dashboard: Add cmdline option to load plugins on startup
  ...
2022-07-07 15:56:30 +02:00
Sebastien Bourdeauducq 734b2a6747 flake: update rpi-1 host key 2022-07-07 18:03:17 +08:00
Deepskyhunter c7394802bd
aqctl_moninj_proxy: clear listeners on disconnect 2022-07-07 17:20:08 +08:00
kk1050 7aa6104872
Add method to check if termination is requested (#811, #1932)
Co-authored-by: kk105 <kkl@m-kabs.hk>
2022-07-07 17:01:34 +08:00
mwojcik 46f2842d38 moninj: fix underflows by order of operation
fix channel toggle
2022-07-07 12:37:10 +08:00
mwojcik c9fb7b410f moninj: fix underflows for urukul freq set 2022-07-07 12:37:10 +08:00
Spaqin 8be945d5c7
Urukul monitoring (#1142, #1921) 2022-07-07 10:52:53 +08:00
SingularitySurfer 9c8ffa54b2 reverse to servo enable. hopefully adapted all comments etc. 2022-07-06 14:33:46 +00:00
Sebastien Bourdeauducq d17675e9b5 moninj: make receive_task private again 2022-07-02 17:58:24 +08:00
Sebastien Bourdeauducq 388b81af19 moninj,corelog: fix/cleanup exception handling (#1897) 2022-07-02 17:48:18 +08:00
Deepskyhunter 02b086c9e5
aqctl_corelog: enable keepalive, terminate on connection failure 2022-07-02 17:33:58 +08:00
SingularitySurfer 953dd899fd refine docu 2022-06-23 15:46:15 +00:00
SingularitySurfer 689a2ef8ba refine note 2022-06-23 15:23:00 +00:00
SingularitySurfer d8cfe22501 add note about setpoint resolution 2022-06-23 15:18:55 +00:00
Deepskyhunter b4f24dd326 Modify log for matching the style 2022-06-23 19:16:36 +08:00
Deepskyhunter da6d35e7c6 Add log message when dashboard connected to proxy 2022-06-23 19:16:36 +08:00
Deepskyhunter 745f440597 Public receive_task for the use in proxy
Notify proxy and terminate after receive_task end
2022-06-23 19:16:36 +08:00
SingularitySurfer 2e834cf406 unflip logic.. 2022-06-23 10:20:38 +00:00
SingularitySurfer 3f8a221c76 flip logic of enable bit to bypass bit and update some comments 2022-06-23 10:08:34 +00:00
SingularitySurfer ab097b8ef9 add offset to coefficients as data 2022-06-23 09:37:37 +00:00
SingularitySurfer 24b4ec46bd more documentation 2022-06-23 08:48:28 +00:00
Norman Krackow 56c59e38f0
Update artiq/coredevice/phaser.py
Co-authored-by: Robert Jördens <rj@quartiq.de>
2022-06-23 09:15:50 +02:00
SingularitySurfer c0581178d6 impl offsets. to be tested 2022-06-22 16:20:59 +00:00
SingularitySurfer 43c94577ce impl set_iir. untested 2022-06-22 15:35:49 +00:00
SingularitySurfer ce4055db3b force hold on bypass and use names in set_servo() in init 2022-06-21 10:11:49 +00:00
SingularitySurfer b67a70392d rename to coeff base and shorter write16 2022-06-21 09:59:40 +00:00
SingularitySurfer 57176fedb2 add servo docu 2022-06-21 09:29:42 +00:00
SingularitySurfer 8bea821f93 just &1 to stay in field 2022-06-21 08:43:55 +00:00
SingularitySurfer 0388161754 disable servo in init 2022-06-21 07:49:29 +00:00
SingularitySurfer 751af3144e fix old line that I forgot 2022-06-21 07:43:28 +00:00
SingularitySurfer 5df766e6da fix ors 2022-06-21 07:36:59 +00:00
David Nadlinger e1f9feae8b applets.simple: Actually forward dataset_prefixes when using IPC
Turns out I had inadvertently only tested 2d6fc154d using the
socket interface.
2022-06-19 18:08:25 +01:00
David Nadlinger dd928fc014 master: Fixup 32db6ff978 (argument_ui support)
This was lost in the ndscan diff upstreaming process
due to other Oxford-local changes in artiq.master.worker.
2022-06-19 11:33:40 +01:00
Sebastien Bourdeauducq 48cb111035 Revert "add pull.yml (#1918)"
This reverts commit d8597e9dc8.
2022-06-19 11:57:46 +08:00
hartytp d8597e9dc8
add pull.yml (#1918) 2022-06-18 12:37:23 +01:00
David Nadlinger 32db6ff978
Allow experiments to specify a custom argument editor UI (#1916)
On the master/EnvExperiment side, the only addition is an optional
property `argument_ui` that is made accessible to the dashboard, e.g.

    class Example(EnvExperiment):
        argument_ui = "ndscan"
        def build(self):
           …

Clients – primarily artiq_dashboard, but in principle e.g. a
command-line UI could do the same – can then compare the value to a
list of well-known names and prefer any matching custom UI handlers.

On the dashboard side, this commit adds the mechanism to register
a custom argument editor for a given argument_ui string, i.e. the
widget that displays the parameter values within the wider
experiment UI shell with the submit button, pipeline parameters, and
so on. The registry remains empty by default and would be filled by
out-of-tree plugins such as ndscan.

The UI state readback is implemented somewhat defensively to avoid
needless disruptions to users when upgrading.
2022-06-18 15:55:13 +08:00
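A minimal sketch of the registry mechanism described above, assuming hypothetical class and function names (the actual dashboard API may differ):

```python
# Illustrative registry: plugins such as ndscan would register an editor class
# for their argument_ui name; unknown or absent names fall back to the default.
class DefaultArgumentEditor:
    """Built-in editor used when no plugin claims the argument_ui name."""

argument_ui_classes = {}

def register_argument_ui_class(name, editor_class):
    argument_ui_classes[name] = editor_class

def select_argument_editor(expdesc):
    # expdesc is the experiment description dict published by the master.
    return argument_ui_classes.get(expdesc.get("argument_ui"),
                                   DefaultArgumentEditor)
```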
David Nadlinger dbc87f08ff dashboard: Add submit/close hooks for custom argument editors
These are used by ndscan, as re-serialising the entire ndscan
parameter metadata tree, which can grow to be quite extensive,
on every single Qt change event is a bit excessive (and would
probably cause a bit of lag while typing for big experiments
on low-end machines).
2022-06-18 15:51:39 +08:00
David Nadlinger c4068e6896 dashboard: Plumb through datasets client to ExperimentManager
This is analogous to the explist/schedule subscribers, and allows
custom argument editors (such as ndscan) to provide hints/defaults/…
from datasets once available.
2022-06-18 15:50:05 +08:00
David Nadlinger 85895ab89b dashboard: Add cmdline option to load plugins on startup
Together with m-labs/artiq#1916, this allows the user to integrate
multiple argument UIs implemented in external libraries.
2022-06-18 15:48:32 +08:00
kk1050 46fb8916bb
update SEEN_ASYNC_ERRORS in destination_survey 2022-06-18 15:46:49 +08:00
David Nadlinger 2d6fc154db applets: Allow wildcard subscription to all datasets matching prefix via IPC
This allows ndscan v0.3+ to use the IPC interface for efficiency;
previously, the non-upstreamed RID dataset namespace feature allowed
the applets to somewhat efficiently subscribe directly to the master
process via the socket interface.
2022-06-18 15:45:57 +08:00
David Nadlinger 4c42f65909 applets: Add ${server}, ${port_control}, ${port_notify} command substitutions
This facilitates applets that connect back to the master
(e.g. to update datasets on user request, as used by ndscan).
2022-06-18 15:19:35 +08:00
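A hedged example of an applet command using these substitutions; the applet module and its option names are hypothetical, only the ${...} placeholders come from the commit above:

```python
# The dashboard substitutes ${server}, ${port_control} and ${port_notify}
# with the master's address and ports before launching the applet.
applet_command = ("python -m my_applet "
                  "--server ${server} "
                  "--port-control ${port_control} "
                  "--port-notify ${port_notify}")
```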
David Nadlinger f4d639242d units: Add nW (nanowatts)
We found this quite useful/common for laser beams.
2022-06-18 15:11:05 +08:00
SingularitySurfer d09153411f address some review comments 2022-06-17 13:03:21 +00:00
Norman Krackow dc49372d57
Update artiq/coredevice/phaser.py
Co-authored-by: Robert Jördens <rj@quartiq.de>
2022-06-17 14:40:07 +02:00
Norman Krackow 2044dc3ae5
Update artiq/coredevice/phaser.py
Co-authored-by: Robert Jördens <rj@quartiq.de>
2022-06-17 14:39:37 +02:00
SingularitySurfer ae3f1c1c71 adapt servo functions. Todo: docu 2022-06-17 11:47:45 +00:00
Sebastien Bourdeauducq bf3b155a31 flake: update dependencies 2022-06-17 16:07:31 +08:00
SingularitySurfer 1bddadc6e2 cleanup and comments 2022-06-15 17:32:11 +00:00
SingularitySurfer b0f9fd9c4c implement main driver functions 2022-06-15 12:40:21 +00:00
Michael Birtwell 69c4026d2b Fix returning tuples of lists of arrays from RPCs
When serialising a list of objects, `_send_rpc_value` makes a copy of the
upcoming tags to pass repeatedly to the recursive call, then uses
`_skip_rpc_value` to skip over the tags that should have been processed.
This didn't handle numpy arrays, so after processing a list of arrays it
got out of sync and failed.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-06-15 00:08:49 +08:00
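An illustrative sketch of the desynchronisation described above (not the actual comm_kernel code, and with a simplified tag encoding): every composite tag must advance the cursor over its nested element tags, so an unhandled array tag leaves the cursor pointing at the wrong place for the next element.

```python
# Simplified tag stream: "t" is a tuple followed by an element count,
# "l" is a list and "a" an array, each followed by one element tag;
# anything else is a scalar. Handling "a" is the case this fix adds.
def skip_rpc_value(tags):
    tag = tags.pop(0)
    if tag == "t":
        for _ in range(tags.pop(0)):
            skip_rpc_value(tags)
    elif tag in ("l", "a"):
        skip_rpc_value(tags)

tags = ["t", 2, "l", "a", "f", "i"]   # tuple of (list of arrays of f, i)
skip_rpc_value(tags)
assert tags == []                     # cursor ends exactly past the tuple
```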
Deepskyhunter e47834d82e Bugfix: Add missing item inside state to solve KeyError
A KeyError was raised when trying to load default_state()
due to the missing key "seed" in "RangeScan" and "CenterScan" in
the state. Add {"seed": None} to resolve the bug.
2022-06-14 11:41:55 +08:00
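An illustrative fragment (not the actual dashboard code, with hypothetical field values) showing the kind of per-scan default state that now carries the "seed" key:

```python
# Each scan type's default state now includes "seed", so loading a stored
# default state no longer raises KeyError.
def default_state():
    return {
        "RangeScan": {"start": 0.0, "stop": 1.0, "npoints": 10, "seed": None},
        "CenterScan": {"center": 0.5, "span": 1.0, "step": 0.1, "seed": None},
    }
```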
Spaqin 4ede14b14d
dashboard: add DDS quick set-frequency feature 2022-06-09 12:01:06 +08:00
kk1050 4ddd2739ee
add log_tuples function (#1896)
Co-authored-by: kk105 <kkl@m-kabs.hk>
2022-06-06 18:41:46 +08:00
Sebastien Bourdeauducq e702624720 flake: do not use __impure (breaks hydra) 2022-06-04 10:32:02 +08:00
Sebastien Bourdeauducq 68ef0073ea doc: mock sipyco.keepalive. Closes #1900 2022-06-01 20:46:16 +08:00
Sebastien Bourdeauducq 71a37bb408 doc: switch to wavedrompy 2022-06-01 20:45:49 +08:00
occheung f79f7db3a2 dyld: handle rebind on symbols relocated by CALL_PLT 2022-06-01 12:44:33 +08:00
occheung 872f8f039f dyld: support additional RV32 reloc types
Support for the LO12 type requires the runtime linker to find the corresponding HI20 symbol. resolve_rela needs the entire relocation section for that.
2022-06-01 12:44:33 +08:00
occheung 50495097e5 dyld: rename pltrel to jmprel
nac3ld will not generate PLT & its relocation section. There might not be a pltrel in that case.
On the other hand, rebinding will not be limited to the symbols in the PLT when linked with nac3ld.
Thus the renaming.
2022-06-01 12:44:33 +08:00
Sebastien Bourdeauducq ca614a3eea use asyncio get/new_event_loop as recommended 2022-05-31 23:06:54 +08:00
Sebastien Bourdeauducq 8bf6bc4d1f flake: update dependencies 2022-05-31 20:59:21 +08:00
occheung 6d46c886d7 ld.lld: translate TARGET2 reloc to relative 2022-05-31 18:26:06 +08:00
Sebastien Bourdeauducq a5b7e958f8 flake: update dependencies 2022-05-31 18:25:08 +08:00
Sebastien Bourdeauducq 667f36a2e7 gui: fix Python 3.10 PyQt float/int issues. Closes #1887 2022-05-29 08:43:25 +08:00
Sebastien Bourdeauducq 7cff63e539 frontend: use sipyco SignalHandler (#1063) 2022-05-27 15:17:33 +08:00
Sebastien Bourdeauducq df1b19082c flake: update dependencies 2022-05-27 15:14:11 +08:00
Sebastien Bourdeauducq d478086119 flake: support impure derivation for HITL test 2022-05-26 12:00:40 +08:00
Sebastien Bourdeauducq 18a08954c1 flake: update comtools 2022-05-25 15:48:17 +08:00
Sebastien Bourdeauducq 57086e2349 flake: update nixpkgs 2022-05-25 14:20:04 +08:00
mwojcik cf8e583847 comm_mgmt: expect error on config_read 2022-05-19 16:48:59 +08:00
mwojcik d24a36a02a comm_mgmt: fix read_expect 2022-05-19 16:48:59 +08:00
mwojcik 4bdb4c8e11 config: error instead of empty value if key not found 2022-05-19 16:48:59 +08:00
Sebastien Bourdeauducq 8599be5550 flake: update nixpkgs 2022-05-18 19:04:52 +08:00
Sebastien Bourdeauducq 9896d78e07 afws_client: update 2022-05-18 19:04:13 +08:00
kk1050 70503bee6f
dashboard: add dataset rename feature (#1893)
Co-authored-by: kk105 <kkl@m-kabs.hk>
2022-05-18 17:07:43 +08:00
Laurent Stephenson 16393efa7c fix issue #1890: make dashboard use moninj port from device_db
Signed-off-by: Laurent Stephenson <laurent.stephenson@nist.gov>
2022-05-13 06:23:59 +08:00
David Nadlinger 8a7af3f75c compiler: Fix "nowrite" miscompilation for sret functions
This affected e.g. rtio_input_timestamped_data().
2022-05-07 21:43:55 +01:00
Spaqin 35f30ddf05
Expose TTLClockGen for Kasli JSONs (#1886) 2022-05-06 13:33:42 +08:00
Sebastien Bourdeauducq c440f9fe1b flake: update dependencies 2022-05-04 08:28:55 +08:00
Sebastien Bourdeauducq 69b6426800 flake: use importCargoLock 2022-04-24 14:02:59 +08:00
Michael Birtwell 50dbda4f43 Use new ip_addr_storage module instead of net_settings
Necessary to avoid needing the alloc only trait impls in net_settings
when compiling the bootloader.
2022-04-24 10:10:43 +08:00
Michael Birtwell 95378cf9c9 Centralise all uses of the IPv4 index in net_settings.rs 2022-04-24 10:10:43 +08:00
Michael Birtwell 671453938b Require explicitly closing TcpStreams
Instead of automatically closing and draining the TcpStream in the Drop
implementation, expect the user to call TcpStream::close.
Add close calls to all users of TcpStream.
Document the requirement to call close on TcpListener::accept; this seems
to be the only way to get a new TcpStream at the moment.
2022-04-24 10:10:43 +08:00
Michael Birtwell 1fe59d27dc Use an Ipv4AddrConfig enum instead of the USE_DHCP constant 2022-04-24 10:10:43 +08:00
Michael Birtwell 73082d116f Ensure that pending data is sent when closing sockets
This is only necessary if close hasn't been called on the socket,
but that's not always done, e.g. by the core analyzer server.
2022-04-24 10:10:43 +08:00
Michael Birtwell 596b9a265c Prefer DHCP to the built-in static IPs
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-04-24 10:10:42 +08:00
Michael Birtwell 6ffb1f83ee DHCP support for core device firmware
DHCP is enabled by setting the `ip` config entry to "use_dhcp". Reusing this
config field rather than creating a new one means that there is no ambiguity
over which config field takes precedence.

Adds a thread to configure the interface based on DHCP events
Adds a `Dhcpv4Socket` as a wrapper around smoltcp's version
Formalises the storage of the IP addresses so that we can update one in
another module.

There's also a workaround for the first DHCP discover packet frequently
going missing.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-04-24 10:10:14 +08:00
Michael Birtwell c60de48a30 Upgrade smoltcp 0.6.0 -> 0.8.0
Main changes:
Deal with interfaces now being generic over mediums, update interface name
and initialisation.
Interfaces now own their sockets. So we store a reference to the Interface
instead of the SocketSet in Scheduler and IO.
Sockets are no longer reference counted. We never called the function to
increase the socket's reference count, so now we just remove it where it
was previously released. This will result in the socket being dropped at
a different time, but I think that should be fine.

Tested firmware upload to the bootloader and spamming artiq_coremgmt log
calls to download the log from the firmware.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-04-24 10:09:27 +08:00
Suthep Pomjaksilp 06ad76b6ab
applets: add progress bar applet
Signed-off-by: Suthep Pomjaksilp <pomjaksi@physik.uni-kl.de>
2022-04-22 09:27:28 +08:00
David Nadlinger b2b84b1fd6 test: Fixup 6b5c390d4 typo 2022-04-22 00:29:03 +01:00
David Nadlinger 6b5c390d48 compiler: Fix #1871 (array() breaks math functions)
GitHub: Fixes #1871.
2022-04-22 00:12:20 +01:00
David Nadlinger 2cb08814e8 flake: Add compiler test prerequisites to devShell
Useful while working on the legacy compiler.
2022-04-21 23:47:23 +01:00
Sebastien Bourdeauducq 58b59b99ff flake: update dependencies 2022-04-19 11:04:49 +08:00
Sebastien Bourdeauducq fa3ee8ad23 flake: update sipyco 2022-04-12 08:54:58 +08:00
Michael Birtwell cab9d90d01 Use sipyco.keepalive
Remove the implementation of setting keepalive settings on sockets and use
the implementation from sipyco instead.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-04-12 08:53:35 +08:00
Sebastien Bourdeauducq 0a029748ee flake: update dependencies 2022-04-09 17:24:44 +08:00
Leon Riesebos 386391e3f9 browser: support datasets that use h5 group notation
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2022-04-07 18:18:13 +08:00
Leon Riesebos b5dc9fd640 browser: cleanup datasets panel for empty h5 files
this fix makes sure the datasets panel is cleared if an h5 file is empty or the datasets and archive groups are empty
2022-04-07 18:18:05 +08:00
Sebastien Bourdeauducq c82c358f3a runtime: provide/fix more libc mem functions 2022-03-28 13:33:57 +08:00
Sebastien Bourdeauducq 723f41c78b runtime: fix EXCEPTION_ID_LOOKUP 2022-03-26 20:10:24 +08:00
Sebastien Bourdeauducq 866a83796a firmware: add UnwrapNoneError exception 2022-03-26 15:28:13 +08:00
Timothy Ballance f91e106586 llvm_ir: fixed broken code in previous patch 2022-03-22 18:50:58 +08:00
Timothy Ballance a289d69883 llvm_ir: fixed stack leak on ffi call 2022-03-22 09:00:40 +08:00
Sebastien Bourdeauducq f89275b02a master: fix compiler access to source code with submit-by-content 2022-03-20 18:08:04 +08:00
Sebastien Bourdeauducq 65d2dd0173 fix compilation warning 2022-03-20 16:15:01 +08:00
Sebastien Bourdeauducq 6b33f3b719 update vivado 2022-03-20 16:09:58 +08:00
Sebastien Bourdeauducq 80d412a8bf support submitting experiments by content 2022-03-20 12:58:55 +08:00
Sebastien Bourdeauducq 922d2b1619 drop support for big-endian moninj 2022-03-19 22:58:31 +08:00
Sebastien Bourdeauducq d644e982c8 RELEASE_NOTES: update 2022-03-19 22:50:54 +08:00
Sebastien Bourdeauducq ec1efd7af9 dashboard: connect to moninj via proxy 2022-03-19 22:50:36 +08:00
Sebastien Bourdeauducq 735133a2b4 artiq_dashboard: remove references to core device in moninj 2022-03-19 22:36:07 +08:00
Sebastien Bourdeauducq 207717c740 artiq_dashboard: fix handling of moninj comment 2022-03-19 22:33:31 +08:00
Sebastien Bourdeauducq 6d92e539b1 artiq_ddb_template: add aqctl_moninj_proxy 2022-03-19 22:33:03 +08:00
Sebastien Bourdeauducq 6a49b8cb58 update dependencies 2022-03-19 19:53:38 +08:00
Sebastien Bourdeauducq df1513f0e9 add aqctl_moninj_proxy to device dbs 2022-03-19 19:25:21 +08:00
Sebastien Bourdeauducq d3073022ac aqctl_moninj_proxy: fix all major bugs 2022-03-19 19:06:12 +08:00
Sebastien Bourdeauducq bbb2c75194 add aqctl_moninj_proxy 2022-03-18 17:02:50 +08:00
Sebastien Bourdeauducq 710786388c update nixpkgs 2022-03-17 21:09:48 +08:00
Sebastien Bourdeauducq aff569b2c3 firmware: support 64-bit moninj probes 2022-03-17 19:56:07 +08:00
Sebastien Bourdeauducq a159ef642d drtio: demote default routing table message to info 2022-03-16 21:22:35 +08:00
Sebastien Bourdeauducq 1a26eb8cf2 coredevice: only print version mismatch warning when relevant 2022-03-16 21:21:43 +08:00
Sebastien Bourdeauducq c1c2d21ba7 flake: fix error message when Vivado is not found 2022-03-16 21:20:48 +08:00
Sebastien Bourdeauducq e5e4d55f84 mgmt: fix config write error message 2022-03-16 08:28:31 +08:00
Sebastien Bourdeauducq 71e8b49246 update nix dependencies 2022-03-10 17:04:44 +08:00
pca006132 ebfeb1869f firmware: use &CSlice for lists 2022-03-10 16:30:22 +08:00
pca006132 eb6817c8f1 compiler/transforms/llvm_ir_generator: changed list representation
The representation of TList(T) is changed from `{T*, u32}` to
`{T*, u32}*`. The old representation forbids changing the length of a
list when the list is passed as a parameter into functions, as the
length is passed by value. The representation now matches with nac3.
2022-03-10 16:30:22 +08:00
Sebastien Bourdeauducq 8415151866 update copyright year 2022-03-10 11:56:16 +08:00
ciciwu 67ca48fa84
manual: fix formatting (#1865) 2022-03-08 19:03:47 +08:00
ciciwu 9a96387dfe
phaser: fix docstring formatting (#1866) 2022-03-08 19:03:30 +08:00
Sebastien Bourdeauducq b02abc2bf4 remove legacy versioning files 2022-03-06 18:30:08 +08:00
Sebastien Bourdeauducq ac55da81d8 core: support precompilation of kernels 2022-03-06 18:25:18 +08:00
spaqin 232f28c0e8 kern_hw: fix return type 2022-03-04 15:16:14 +08:00
spaqin 51fa1b5e5e drtio: fix i2c switch 2022-03-04 15:16:14 +08:00
spaqin 17ecd35530 test_i2c: fix for missing readback 2022-03-01 17:40:20 +08:00
Spaqin a85b4d5f5e
I2C API for PCA9547 support (#1860) 2022-03-01 15:07:53 +08:00
David Nadlinger 9bfbd39fa3 flake.nix: Use upstream llvmlite 0.38.0, which already has the patches 2022-02-26 10:23:24 +08:00
Sebastien Bourdeauducq 338bb189b4 dashboard: fix typo (#1858) 2022-02-26 08:58:03 +08:00
Leon Riesebos c4292770f8
Kasli JSON description for SPI over DIO cards (#1800) 2022-02-26 07:36:00 +08:00
Sebastien Bourdeauducq 2b918ac6f7 coredevice: merge pcf8574a into i2c 2022-02-25 19:01:14 +08:00
Michael Birtwell 1b80746f48 Remove `outer_final`
We don't need to know whether there's an outer finally block;
that's already implicit in the current break and continue
target.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-02-24 19:58:33 +08:00
Michael Birtwell 2d6215158f Fix try/finally:while:try compilation
When we have a try inside a loop, we want to make sure any
finallys are executed by break and continue inside this try. But
this shouldn't pull finallys defined outside the loop into the
loop. This change resets the `outer_final` attribute when
visiting for and while loops so that this doesn't happen.

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-02-24 19:58:33 +08:00
mwojcik c000af9985 flake: extra-sandbox-paths too 2022-02-23 15:35:47 +08:00
mwojcik 35f91aef68 flake: fix substituters 2022-02-23 15:35:47 +08:00
Sebastien Bourdeauducq 0da7b83176 runtime: add nac3 exception symbols 2022-02-23 11:04:53 +08:00
Steve Fan ad656d1e53
dashboard: add device database reload action in context menu (#1853) 2022-02-22 16:18:27 +08:00
Sebastien Bourdeauducq 69ce09c7c0 manual: minor fixes 2022-02-21 18:44:18 +08:00
Sebastien Bourdeauducq 6a586c2e4d manual: kasli-soc flashing 2022-02-21 16:27:59 +08:00
Sebastien Bourdeauducq e84056f7e0 manual: Flakes installation instructions. Closes #1835 2022-02-21 16:20:14 +08:00
Mike Birtwell a106ed0295
artiq_flash: don't try to make rtm_binary_dir if binary_dir unset (#1851)
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-02-18 18:54:17 +08:00
Robert Jördens c8b9eed9c9 fastino: add comments about sideeffects on v0.1 2022-02-16 14:42:22 +00:00
Robert Jördens 08b65470cd fastino: robustify init()
* init() now also clears and resets more state, including the interpolators.
  If this is not done, PLL unlocks/locks may lead to random interpolator state
  on boot, to which the CICs react badly.
* Use and expose `t_frame`
* Clarify implementation state of `read()`
2022-02-16 14:34:22 +00:00
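A hedged usage sketch for the robustified init() and the exposed `t_frame`; the device name and update values are illustrative:

```python
from artiq.experiment import *

class FastinoInitExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("fastino0")

    @kernel
    def run(self):
        self.core.reset()
        self.fastino0.init()      # now also clears/resets the interpolators
        delay(1*ms)
        for ch in range(8):
            self.fastino0.set_dac(ch, 0.0)
            delay_mu(self.fastino0.t_frame)   # pace updates by one frame
```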
Sebastien Bourdeauducq 65eab31f23 simplify board package format and artiq_flash 2022-02-14 15:54:17 +08:00
Sebastien Bourdeauducq 6dfc854673 flake: install artiq-comtools 2022-02-13 17:15:25 +08:00
Sebastien Bourdeauducq 5a8928fbf3 flake: set pythonparser version 2022-02-12 17:48:35 +08:00
Sebastien Bourdeauducq b3b73948a2 flake: update dependencies 2022-02-12 11:04:41 +08:00
Sebastien Bourdeauducq 8433cc6731 flake: use sipyco flake 2022-02-12 10:59:10 +08:00
Sebastien Bourdeauducq 0649e69d94 flake: cleanup 2022-02-12 10:25:24 +08:00
Sebastien Bourdeauducq bbfa926fa6 flake: add documentation outputs 2022-02-11 14:36:18 +08:00
Sebastien Bourdeauducq 9e37fb95d6 manual: use recommended contents caption 2022-02-11 14:25:10 +08:00
Sebastien Bourdeauducq 034a0fdb35 flake: install recommended wavedrom-cli. Closes #1845 2022-02-11 14:24:41 +08:00
Sebastien Bourdeauducq 0e178e40ac RELEASE_NOTES: fix formatting 2022-02-11 14:23:56 +08:00
Sebastien Bourdeauducq a0070d4396 flake: add docs dependencies 2022-02-09 10:53:52 +08:00
Sebastien Bourdeauducq 03a367f565 flake: export more packages 2022-02-09 10:41:30 +08:00
Sebastien Bourdeauducq b893d97d7b afws_client: add login successful message 2022-02-08 21:52:48 +08:00
Sebastien Bourdeauducq b6f5ba8b5b afws_client: improve error message when output already exists 2022-02-08 21:26:12 +08:00
Sebastien Bourdeauducq cc69482dad afws: nix requires full Git commit hash 2022-02-08 21:05:39 +08:00
Sebastien Bourdeauducq 833acb6925 add AFWS client 2022-02-07 14:28:00 +08:00
occheung d5eec652ee tester: specify att with dB 2022-02-07 14:22:52 +08:00
occheung a74196aa27 mirny: allow set attenuation with dB 2022-02-07 14:22:52 +08:00
Steve Fan 798a412c6f
comm_moninj: set keepalive for socket (#1843) 2022-02-04 13:51:19 +08:00
David Nadlinger e45cb217be firmware: Explicitly use wrapping integer math in PRNGs
Patch by Hannah McLaughlin; apparently, the overflow actually
doesn't get checked/reported without `opt-level = 2` and
`lto = "thin"`.
2022-02-03 23:57:17 +00:00
Sebastien Bourdeauducq 8866ab301a flake: update dependencies 2022-02-02 16:39:49 +08:00
Sebastien Bourdeauducq 3cddb14174 flake: break artiq false dependencies 2022-02-02 16:33:17 +08:00
Sebastien Bourdeauducq 245fe6e9ea flake: remove non-HITL board packages
Those can be built externally by calling makeArtiqBoardPackage directly.
2022-02-02 16:04:00 +08:00
Sebastien Bourdeauducq ef25640937 compiler: fix noreturn attribute on __artiq_resume 2022-02-01 19:01:40 +08:00
Sebastien Bourdeauducq dd3279e506 flake: add jsonschema to makeArtiqBoardPackage 2022-01-30 19:38:56 +08:00
Sebastien Bourdeauducq afb98a1903 flake: export makeArtiqBoardPackage 2022-01-30 19:31:20 +08:00
Steve Fan 34008b7a21
Backport of "fixes alignment and size problem" from artiq-zynq (#1841) 2022-01-28 20:49:55 +08:00
pca006132 93328ad8ee compiler: only allow constant exception messages
Otherwise, the exception message might be allocated on the stack, and would
become a dangling pointer when the exception is raised.
This will break some code that constructs exceptions with a function by
passing the message as a parameter, because we cannot know if the parameter
is a constant. A way to mitigate this would be to defer the check to the
LLVM IR codegen stage and do inlining first for those exception
allocation functions, but I am not sure if we will guarantee inlining
for certain functions, and whether this is really needed.
2022-01-28 09:01:39 +08:00
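A hedged illustration of the constraint, using an example experiment (class and device names are only for the example): a constant message compiles, while passing the message in as a parameter is rejected because the compiler cannot prove it is constant.

```python
from artiq.experiment import *

class ConstantMessageExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def check(self, x):
        if x < 0:
            # Accepted: the exception message is a string constant.
            raise ValueError("x must be non-negative")

    @kernel
    def fail_with(self, msg):
        # Rejected under this rule (shown only as the problematic pattern):
        # msg is a parameter, so it might not be a constant.
        raise ValueError(msg)

    @kernel
    def run(self):
        self.check(1)
```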
Steve Fan 234a82aaa9
dashboard: prioritize min as part of default value resolution (#1839) 2022-01-27 17:45:09 +08:00
Sebastien Bourdeauducq ee511758ce fix typo 2022-01-26 07:51:35 +08:00
Sebastien Bourdeauducq e6c18364ae flake: consistent version string 2022-01-26 07:51:02 +08:00
pca006132 9d43762695 test: fixed lit tests
Note that because we changed exception representation from using string
names as exception identifier into using integer IDs, we need to
initialize the embedding map in order to allocate the integer IDs. Also,
we can no longer print the exception names and messages from the kernel,
we will need the host to map exception IDs to names, and may need the
host to map string IDs to actual strings (messages can be static strings
in the firmware, or strings stored in the host only).

We now check for exception IDs for lit tests, which are fixed because we
preallocated all builtin exceptions.
2022-01-26 07:16:54 +08:00
pca006132 4132c450a5 firmware: runtime changes for exception
Ported from:
M-Labs/artiq-zynq#162

This includes new API for exception handling, some refactoring to avoid
code duplication for exception structures, and modified protocols to
send nested exceptions and avoid string allocation.
2022-01-26 07:16:54 +08:00
pca006132 536b3e0c26 test: added test case for nested exceptions and try 2022-01-26 07:16:54 +08:00
pca006132 ba34700798 coredevice: report nested exceptions 2022-01-26 07:16:54 +08:00
pca006132 6ec003c1c9 compiler: fixed dead code eliminator
Instead of removing basic blocks with no predecessor, we will now mark
and remove all blocks that are unreachable from the entry block. This
can handle loops that are dead code. This is needed as we will now
generate more complicated code for exception handling which the old dead
code eliminator failed to handle.
2022-01-26 07:16:54 +08:00
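The approach described above amounts to a reachability pass over the control-flow graph. A generic sketch (not the ARTIQ compiler source):

```python
# Keep only basic blocks reachable from the entry block; this also removes
# dead loops, whose blocks all have predecessors but are never entered.
def remove_unreachable_blocks(entry, blocks, successors):
    reachable, stack = set(), [entry]
    while stack:
        block = stack.pop()
        if block in reachable:
            continue
        reachable.add(block)
        stack.extend(successors[block])
    return [b for b in blocks if b in reachable]
```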
pca006132 da4ff44377 compiler: fixed try codegen and allocate exceptions
Exceptions are now allocated in the runtime when we raise the exception,
and destroyed when we exit the catch block. Nested exceptions and try
blocks are now supported, and should behave the same as in CPython.
Exceptions raised in except blocks will now unwind through finally
blocks, matching the behavior in CPython. Reraise will now preserve
backtrace.

Phi block LLVM IR generation is modified to handle landingpads, which
one ARTIQ IR will map to multiple LLVM IR.
2022-01-26 07:16:54 +08:00
pca006132 4644e105b1 compiler: modified exception representation
Exception name is replaced by exception ID, which requires no
allocation. Other strings in the exception can now be 'host-only'
strings, which is represented by a CSlice with len = usize::MAX and
ptr = key, to avoid the need for allocation when raising exceptions
through RPC.
2022-01-26 07:16:54 +08:00
hartytp 715bff3ebf
Revert "Merge pull request #1544 from airwoodix/dataset-compression" (#1838)
* Revert "Merge pull request #1544 from airwoodix/dataset-compression"

This reverts commit 311a818a49, reversing
changes made to 7ffe4dc2e3.

* fix accidental revert of f42bea06a8
2022-01-25 10:02:15 +08:00
Sebastien Bourdeauducq f58aa3bdf6 flake: update qasync 2022-01-19 20:44:50 +08:00
Sebastien Bourdeauducq 4e420fc297 flake: update inputs 2022-01-19 20:18:54 +08:00
Sebastien Bourdeauducq 5597be3356 flake: add beta to version string 2022-01-19 20:17:11 +08:00
Sebastien Bourdeauducq f542f045da manual: use git+https URL for ARTIQ flake
github: flake URL lacks revCount
2022-01-19 20:04:20 +08:00
Sebastien Bourdeauducq 53878fe1d4 flake: get version number from nix 2022-01-19 19:58:55 +08:00
Sebastien Bourdeauducq 735cd1eb3e manual: update development instructions 2022-01-14 16:50:08 +08:00
Steve Fan 3f812c4c2c
comm_kernel: fix RPC exception handling (#1801) 2022-01-12 15:23:37 +08:00
occheung b6c59a0cb3 update misoc dependencies
Suppress warning when compiling libunwind.
7242dc5a41
2022-01-11 17:32:19 +08:00
Steve Fan de5892a00a
comm_kernel: check if elements are within bounds for RPC list (#1824) 2022-01-11 17:16:45 +08:00
Peter Drmota 4eee49f889 gateware.test.suservo: Fix tests for python >=3.7
Closes #1748
2022-01-11 17:16:09 +08:00
occheung 9eee0e5a7b gateware/suservo: fix profile no. in test
Follow-up/Test update for 9d49302.
2022-01-11 14:20:47 +08:00
Steve Fan d7dd75e833 comm_kernel: fix off-by-one error for numeric value range check 2022-01-11 10:13:42 +08:00
Spaqin 095fb9e333
add Almazny support (#1780) 2022-01-11 09:55:39 +08:00
Sebastien Bourdeauducq 4e3e0d129c firmware: fix compilation warning 2022-01-11 09:31:26 +08:00
pca006132 12ee326fb4 firmware: fixed personality function 2022-01-11 09:30:19 +08:00
occheung 61349f9685 sinara_tester: fix outdated API 2022-01-10 17:23:28 +08:00
occheung cea0a15e1e suservo: use default urukul profile 2022-01-10 16:21:39 +08:00
occheung 8b45f917d1 urukul: use default profile 2022-01-10 16:21:39 +08:00
pca006132 6542b65db3 compiler: fixed exception codegen issues 2022-01-10 15:54:29 +08:00
pca006132 9f90088fa6 compiler: generate appropriate landingpad IR
When used together with the modified personality function, we got a ~20%
performance improvement in exception unwinding on Zynq.
2022-01-10 15:54:29 +08:00
occheung 5e1847e7c1 compiler: rename `variables` to `retainedNodes`
Part of the changes that were made to LLVM 6 by the time LLVM 7 was released.
LLVM commit: 2c864551df
LLVM differential review: https://reviews.llvm.org/D45024
2022-01-10 11:28:37 +08:00
occheung 6f3c49528d compiler: revert cabe5ac
The lack of debug emitter causes #1821.
2022-01-10 11:26:03 +08:00
Sebastien Bourdeauducq eaa1505c94 update documentation (#1820) 2022-01-08 11:55:52 +08:00
Leon Riesebos f42bea06a8 worker_db: removed warning for writing a dataset that is also in the archive
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2022-01-08 11:48:18 +08:00
occheung 9d493028e5 gateware/suservo: write to profile 7
Fixes #1817.
2022-01-07 16:41:19 +08:00
Sebastien Bourdeauducq bbac477092 tools: fix importlib issue 2021-12-21 13:20:11 +08:00
Steve Fan c0a7be0a90 llvm_ir: move stacksave before lltag alloca in build_rpc
Signed-off-by: Steve Fan <sf@m-labs.hk>
2021-12-19 00:07:07 +00:00
Sebastien Bourdeauducq 9e5e234af3 stop using explicit ProactorEventLoop on Windows
It is now the default in Python.
2021-12-14 20:06:38 +08:00
Sebastien Bourdeauducq 352317df11 test_dataset_db: remove (too much breakage on Windows) 2021-12-14 19:27:15 +08:00
Sebastien Bourdeauducq a518963a47 test_dataset_db: disable tests broken on windows 2021-12-14 19:19:22 +08:00
Sebastien Bourdeauducq 37f14d94d0 test_dataset_db: fix for windows 2021-12-14 19:07:17 +08:00
Sebastien Bourdeauducq 4f723e19a6 RELEASE_NOTES: update 2021-12-14 00:05:49 +08:00
Peter Drmota 7c664142a5
Simplified use of the AD9910 RAM feature (#1584)
* coredevice: Change Urukul default single-tone profile to 7

This allows using the internal profile control in RAM modulation mode (which always starts to play back at profile 0) without competing for the content of the profile 0 register used in single tone mode.

Signed-off-by: Peter Drmota <peter.drmota@physics.ox.ac.uk>

* ad9910/set_mu: comment on caveats when setting register

* ad9910: avoid unnecessary write/param

Credit: Solution proposed by @pmldrmota in https://github.com/m-labs/artiq/pull/1584#issuecomment-987774353

* revert 1064fdff (`set_mu()` comments)

158a7be7 had addressed this issue.

Co-authored-by: occheung <dc@m-labs.hk>
2021-12-13 23:44:03 +08:00
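A hedged usage sketch for the new default: single-tone output now goes to profile 7, leaving profile 0 free for RAM playback (which always starts at profile 0). Device names are illustrative.

```python
from artiq.experiment import *

class SingleToneDefaultProfile(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("urukul0_ch0")   # an AD9910 channel

    @kernel
    def run(self):
        self.core.reset()
        self.urukul0_ch0.cpld.init()
        self.urukul0_ch0.init()
        # Single-tone set() now uses profile 7 by default, so a later RAM
        # sequence can use profile 0 without clobbering this setting.
        self.urukul0_ch0.set(100*MHz, amplitude=0.5)
```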
Etienne Wodey 33a9ca2684 tools/file_import: use SourceFileLoader
This allows loading modules from files with extensions not in
importlib.machinery.SOURCE_SUFFIXES

Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-12-09 11:47:04 +08:00
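A minimal sketch of the SourceFileLoader approach described above (not the tool's actual code); it imports a module from an arbitrary path regardless of the file extension:

```python
import importlib.util
from importlib.machinery import SourceFileLoader

def import_from_file(module_name, path):
    # SourceFileLoader does not consult SOURCE_SUFFIXES, so any extension works.
    loader = SourceFileLoader(module_name, path)
    spec = importlib.util.spec_from_loader(module_name, loader)
    module = importlib.util.module_from_spec(spec)
    loader.exec_module(module)
    return module
```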
Sébastien Bourdeauducq 311a818a49
Merge pull request #1544 from airwoodix/dataset-compression
datasets: support compression in HDF5 archives
2021-12-06 12:43:19 +08:00
Sébastien Bourdeauducq 1def0d98c5
Merge branch 'master' into dataset-compression 2021-12-06 12:40:30 +08:00
Leon Riesebos 7ffe4dc2e3 coredevice: set default pow for ad9912 set_mu() 2021-12-06 12:34:55 +08:00
Leon Riesebos 9e3ea4e8ef coredevice: fixed type annotations for AD9910 2021-12-06 12:34:55 +08:00
Sebastien Bourdeauducq 12512bfb2f flake: get rid of TARGET_AR 2021-12-05 14:37:09 +08:00
Steve Fan 4a6bea479a
Host report for async error upon kernel termination (#1791)
Closes #1644
2021-12-04 13:33:24 +08:00
Sebastien Bourdeauducq 9bbf7eb485 flake: use ed25519 key for hitl 2021-12-03 18:35:10 +08:00
mwojcik f8a649deda release notes: mention 100mhz support 2021-12-03 17:19:11 +08:00
mwojcik 7953f3d705 kc705: add drtio 100mhz clk switch 2021-12-03 17:19:11 +08:00
mwojcik f281112779 satman: add 100mhz si5324 settings
siphaser: add calculated vco for 100mhz comment
2021-12-03 17:19:11 +08:00
mwojcik eec3ea6589 siphaser: add support for 100mhz rtio 2021-12-03 17:19:11 +08:00
Sebastien Bourdeauducq 163f5d9128 flake: debug hitl auth failures 2021-12-03 17:16:54 +08:00
Etienne Wodey 9f830b86c0
kasli: add SED lanes count option to HW description JSON file (#1745)
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-12-03 17:05:35 +08:00
Sebastien Bourdeauducq b8e7add785 language: remove deprecated set_dataset(..., save=...) 2021-12-01 22:41:34 +08:00
Sebastien Bourdeauducq 5a923a0956 flake: switch to nixos- branch 2021-12-01 22:39:24 +08:00
David Nadlinger c6039479e4 compiler: Add lit test for call site attributes [nfc] 2021-11-27 04:46:07 +00:00
David Nadlinger 63b5727a0c compiler: Also emit byval argument attributes at call sites
See previous commit.

GitHub: Fixes #1599.
2021-11-27 04:45:50 +00:00
David Nadlinger 9b01db3d11 compiler: Emit sret call site argument attributes
LLVM 6 seemed not to mind the mismatch, but more recent
versions produce miscompilations without this.

Needs llvmlite support (GitHub: numba/llvmlite#702).
2021-11-27 04:44:41 +00:00
Sebastien Bourdeauducq 6a433b2fce artiq_sinara_tester: test Urukul attenuator digital control 2021-11-24 18:57:16 +08:00
occheung 5ed9e49b94 changelog: update drtio protocol 2021-11-24 12:00:56 +08:00
occheung 9423428bb0 drtio: fix crc32 offset address 2021-11-24 12:00:56 +08:00
Sebastien Bourdeauducq 7307b30213 flake: update to nixpkgs 21.11 2021-11-23 12:15:17 +08:00
Harry Ho b49f813b17 artiq_flash: ignore checking non-RTM artifacts if unused 2021-11-18 16:59:32 +08:00
Peter Drmota 20e079a381
AD9910 driver feature extension and SUServo IIR readability (#1500)
* coredevice.ad9910: Add set_cfr2 function and extend arguments of set_cfr1 and set_sync

* SUServo: Wrap CPLD and DDS devices in a list

* SUServo: Refactor [nfc]

Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: David Nadlinger <code@klickverbot.at>
2021-11-15 12:09:16 +08:00
Sebastien Bourdeauducq f0c50c80e6 flake: update dependencies 2021-11-12 19:28:51 +08:00
Sebastien Bourdeauducq 46604300a2 flake: update dependencies 2021-11-10 14:59:02 +08:00
Sebastien Bourdeauducq c029977a27 flake: update dependencies 2021-11-10 09:54:34 +08:00
Sebastien Bourdeauducq 80115fcc02 flake: apply llvmlite callsite patch 2021-11-08 17:34:30 +08:00
occheung ac2f55b3ff flake: patch llvmlite 2021-11-08 16:59:08 +08:00
occheung db3e5e83e6 bump misoc 2021-11-08 16:59:08 +08:00
occheung 09945ecc4d gateware: fix drtio/dma tests 2021-11-08 16:59:08 +08:00
occheung 02119282b8 build_soc: build VexRiscv_G if not kasli v1.x 2021-11-08 16:59:08 +08:00
occheung 750b0ce46d ddb_temp: select appropriate compiler target 2021-11-08 16:59:08 +08:00
occheung 531670d6c5 dyld: check ABI 2021-11-08 16:59:08 +08:00
occheung 0f660735bf ll_gen: adjust csr address by detecting target class 2021-11-08 16:59:08 +08:00
occheung 0755757601 compiler/tb: use FPU 2021-11-08 16:59:08 +08:00
occheung 0d708cd61a compiler/target: split RISCV target into float/non-float 2021-11-08 16:59:08 +08:00
occheung 03b803e764 firmware: adjust csr separation 2021-11-08 16:59:08 +08:00
occheung b3e315e24a rust: find json file using CARGO_TRIPLE 2021-11-08 16:59:08 +08:00
occheung 0898e101e2 board_misoc: reuse riscv dir for comm & kernel 2021-11-08 16:59:08 +08:00
occheung cb247f235f gateware: pass adr_w/data_w to submodules 2021-11-08 16:59:08 +08:00
occheung 90f944481c kernel_cpu: add fpu if not kasli v1.x 2021-11-08 16:59:08 +08:00
occheung d84ad0095b comm_cpu: select 64b bus if not kasli v1.x 2021-11-08 16:59:08 +08:00
occheung dd68b4ab82 mailbox: parametrize address width 2021-11-08 16:59:08 +08:00
occheung c6e0e26440 drtio: accept 32b/64b bus 2021-11-08 16:59:08 +08:00
occheung 8da924ec0f dma: set conversion granularity using bus width 2021-11-08 16:59:08 +08:00
Robert Jördens 591507a7c0
Merge pull request #1774 from m-labs/fastino-cic
Fastino cic
2021-10-28 17:44:20 +02:00
Robert Jördens 5a5b0cc7c0 fastino: expand docs 2021-10-28 15:19:48 +00:00
Spaqin 69cddc6b86
rtio_clocking: add warnings for unsupported rtio_clock settings (#1773) 2021-10-28 16:34:22 +08:00
Spaqin 9b1d7e297d
runtime: clock input specification improvements
closes #1735
2021-10-28 16:21:51 +08:00
Harry Ho 21b07dc667 flake: fix missing freetype & fontconfig libs for Vivado GUI mode 2021-10-28 14:39:47 +08:00
Robert Jördens 1ff474893d Revert "fastino: make driver filter order configurable"
This reverts commit 10c37b87ec.
2021-10-28 06:29:56 +00:00
Robert Jördens 10c37b87ec fastino: make driver filter order configurable 2021-10-27 20:24:58 +00:00
Harry Ho c940f104f1 artiq_flash: fix gateware header not in little-endian for RISC-V 2021-10-25 11:20:26 +08:00
Harry Ho 0aa8a739aa sayma_rtm: fix RTM firmware not in little-endian for RISC-V 2021-10-25 11:20:26 +08:00
Sebastien Bourdeauducq 43eab14f56 flake: update dependencies 2021-10-21 15:06:38 +08:00
Sebastien Bourdeauducq cc15a4f572 flake: update Vivado 2021-10-21 11:24:55 +08:00
Sebastien Bourdeauducq df6aeb99f6 flake: check gateware timing 2021-10-18 11:09:10 +08:00
Sebastien Bourdeauducq bb61f2dae6 flake: update dependencies 2021-10-18 10:38:28 +08:00
Sebastien Bourdeauducq b0cbad530b flake: update dependencies 2021-10-16 19:10:28 +08:00
Sebastien Bourdeauducq 92cdfac35a flake: fix cargoDeps sha256 2021-10-16 18:20:25 +08:00
occheung bf180c168c flake.lock: update dependencies 2021-10-16 17:42:24 +08:00
occheung d5fa3d131a cargo.lock: update libc version for libfringe 2021-10-16 17:42:24 +08:00
occheung 6d3164a912 riscv: print mtval on panic 2021-10-16 17:42:24 +08:00
occheung 46326716fd runtime: bump libfringe, impl ecall abi
See libfringe PR: M-Labs/libfringe#1
2021-10-16 17:42:24 +08:00
occheung 0a59c889de satman/kern: init locked PMP on startup 2021-10-16 17:42:24 +08:00
occheung 27a7a96626 runtime: setup pmp + transfer to user 2021-10-16 17:42:24 +08:00
occheung a0bf11b465 riscv: impl pmp 2021-10-16 17:42:24 +08:00
occheung 790a20edf6 linker: generate stack guard + symbol 2021-10-16 17:42:24 +08:00
fanmingyu212 178a86bcda master: add an argument to set an experiment subdirectory
Signed-off-by: Mingyu Fan <mingyufan@ucsb.edu>
2021-10-15 16:54:31 +08:00
Sebastien Bourdeauducq 35d21c98d3 Revert "runtime: expose rint from libm"
Consistency with NAR3/Zynq where rint is not available.

This reverts commit f5100702f6.
2021-10-11 08:12:04 +08:00
Sebastien Bourdeauducq f5100702f6 runtime: expose rint from libm 2021-10-10 20:40:17 +08:00
Sebastien Bourdeauducq 3c1cbf47d2 phaser: add more slack during init. Closes #1757 2021-10-10 16:18:55 +08:00
Robert Jördens 3f6bf33298 fastino: add interpolator support 2021-10-08 15:47:07 +00:00
Harry Ho 501eb1fa23 flake: add microscope 2021-10-08 12:39:35 +08:00
Harry Ho ea9bc04407 flake: add jesd204b 2021-10-08 12:39:35 +08:00
occheung 59065c4663 alloc_list: support alloc w/ large align
Signed-off-by: Oi Chee Cheung <dc@m-labs.hk>
2021-10-07 12:38:03 +08:00
Spaqin 1894f0f626
gateware: share RTIOClockMultiplier and fix_serdes_timing_path (#1760) 2021-10-07 08:19:38 +08:00
Sebastien Bourdeauducq 4bfd010f03 setup: Python 3.7+ 2021-09-27 17:46:25 +08:00
Etienne Wodey a8333053c9 sinara_tester: add device_db and test selection CLI options
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-27 17:44:50 +08:00
occheung 7a7e17f7e3 openocd: update and apply 4-byte address support patch
See the relevant commit made in nix-scripts repo.
575ef05cd5
2021-09-27 09:34:46 +08:00
Sebastien Bourdeauducq 3ed10221d8 compiler: remove big-endian support. Closes #1590 2021-09-13 13:40:24 +08:00
Sebastien Bourdeauducq e8a7a8f41e compiler: work around idiotic windoze behavior that causes conda ld.lld not to be found 2021-09-13 10:40:54 +08:00
Sebastien Bourdeauducq 4834966798 flake: add jsonschema to dev environment 2021-09-13 07:39:15 +08:00
Sebastien Bourdeauducq 7209e6f279 flake: add cargo-xbuild to dev environment 2021-09-13 07:20:36 +08:00
Sebastien Bourdeauducq ffb1e3ec2d wavesynth: np.int is deprecated 2021-09-13 07:02:35 +08:00
Sebastien Bourdeauducq 2d79d824f9 firmware: remove minor or1k leftovers 2021-09-12 20:03:37 +08:00
Sebastien Bourdeauducq 1a0c4219ec doc: mor1kx -> VexRiscv 2021-09-12 19:27:00 +08:00
Sebastien Bourdeauducq 2e5c32878f flake: add other KC705 NIST builds 2021-09-10 17:19:32 +08:00
occheung a573dcf3f9 board_misoc/build: use rv32 as target arg
The original rv64 argument was only to match the misoc counterpart.
2021-09-10 14:11:23 +08:00
occheung 448974fe11 runtime/main: cleanup 2021-09-10 13:59:53 +08:00
occheung b091d8cb66 kernel: flush cache before mod_init
This could be necessary, as instructions seem to be redirected from the D$ directly to the I$.
Related: https://github.com/SpinalHDL/VexRiscv/issues/137
2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq d50e24acb1 update dependencies 2021-09-10 13:25:12 +08:00
occheung 5394d04669 test_spi: add delay 2021-09-10 13:25:12 +08:00
occheung b8ed5a0d91 alloc: fix alignment for riscv32 arch 2021-09-10 13:25:12 +08:00
occheung 2213e7ffac ksupp/rtio/exception: fix timestamp 2021-09-10 13:25:12 +08:00
occheung 09ffd9de1e dma: fix timestamp fetch 2021-09-10 13:25:12 +08:00
occheung 051a14abf2 rtio/dma: fix endianness 2021-09-10 13:25:12 +08:00
occheung c6ba0f3cf4 ksupport: fix dma cslice (ffi) 2021-09-10 13:25:12 +08:00
occheung c812a837ab runtime: enlarge stack size 2021-09-10 13:25:12 +08:00
occheung a596db404d satman: fix cargo xbuild sysroot 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq eff7ae5aff flake: make llvm-strip in HITL test 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq c78fbe9bd2 flake: make bscanspi bitstreams available in HITL test 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 17b9d2fc5a flake: add KC705 HITL test 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 5e2664ae7e flake: add openocd 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 64ce7e498b flake: make board package a Python package 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 952acce65b flake: build board package on Hydra 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 7ae4b2d9bb flake: update dependencies 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq ce0964e25f flake: fix cargo sha256 2021-09-10 13:25:12 +08:00
occheung 4fab267593 cargo: std dependency hack 2021-09-10 13:25:12 +08:00
occheung dcbd9f905c cargo: use cargo xbuild 2021-09-10 13:25:12 +08:00
occheung 9f6b3f6014 firmware: clarify target triple
The lack of compressed instruction support can be inferred from the target triple, literally.
2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 9697ec33eb flake: update dependencies 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq eee80c7697 flake: use improved Rust support in nixpkgs 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq b7efb2f633 flake: remove outdated comment 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 9ee03bd438 flake: reenable lit test 2021-09-10 13:25:12 +08:00
occheung 4619a33db4 test: remove broken array return tests
Removed test cases that do not respect lifetime/scope constraint.
See discussion in artiq-zynq repo: M-Labs/artiq-zynq#119
Referred to the patch from @dnadlinger. 5faa30a837
2021-09-10 13:25:12 +08:00
occheung 5985f7efb5 syscall: lower nowrite to inaccessiblememonly
In the original implementation, the `nowrite` flag literally means not writing memory at all.
Due to the usage of the flag on certain functions, it results in the same issues found in artiq-zynq after optimization passes (M-Labs/artiq-zynq#119).
A fix written by @dnadlinger resolves this issue. (c1e46cc7c8)
2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 6db7280b09 flake: board package WIP 2021-09-10 13:25:12 +08:00
occheung d8ac429059 dyld: streamline lib.rs
Only riscv32 is supported anyway, no need to have excessive architecture check.
2021-09-10 13:25:12 +08:00
occheung 798774192d slave_fpga/bootloader: read in little endian 2021-09-10 13:25:12 +08:00
occheung eecd825d23 firmware: suppress warning 2021-09-10 13:25:12 +08:00
occheung 1da0554a49 pcr: purge 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 035d15af9d flake: clean up vivado, add installer environment 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 9addd08587 flake: fetch MiSoC submodules 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 3e09e48152 flake: set up Vivado 2021-09-10 13:25:12 +08:00
occheung 5d0a8cf9ac llvm_ir_gen: fix indent 2021-09-10 13:25:12 +08:00
occheung 70507e1b72 Cargo.lock: update 2021-09-10 13:25:12 +08:00
occheung c113cd6bf5 libfringe: bump 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 251cd4dcc6 flake: update dependencies 2021-09-10 13:25:12 +08:00
occheung 61b0170a12 firmware: purge or1k 2021-09-10 13:25:12 +08:00
occheung af263ffe1f ksupport: fix rpc, cache signature (FFI)
The reason for the borrow handling is explained in M-Labs/artiq-zynq#76 (artiq-zynq repo).
As for `cache_get()`, the compiler performs stack allocation to pre-allocate the returned structure and passes it to cache_get alongside the `key`.
However, ksupport fails to recognize the passed memory, so it always treats the passed memory as the key.
2021-09-10 13:25:12 +08:00
occheung a833974b50 analyzer: fix endianness 2021-09-10 13:25:12 +08:00
occheung d623acc29d llvm_ir_gen: fix now with now_pinning & little-endian target 2021-09-10 13:25:12 +08:00
occheung 8fa47b8119 rpc: enforce alignment 2021-09-10 13:25:12 +08:00
occheung de0f2d4a28 firmware: adopt endianness protocol in artiq-zynq
Related:
artiq-zynq: M-Labs/artiq-zynq#126
artiq: #1588
2021-09-10 13:25:12 +08:00
occheung 9afe63c08a ksupport: fix proto_artiq dependency 2021-09-10 13:25:12 +08:00
occheung 29a2f106d1 ksupport: replace asm with llvm_asm 2021-09-10 13:25:12 +08:00
occheung b30ed75e69 kernel.ld: load elf header and prog headers
ld.lld has a habit of not putting the ELF header and program headers under any load segment.
However, the headers are needed by libunwind to handle exceptions raised by the kernel.
Creating a PT_LOAD segment with FILEHDR and PHDRS solves this issue. The other PHDRS are also specified explicitly, because linkers (not limited to ld.lld) will not create additional unspecified headers even when necessary.
2021-09-10 13:25:12 +08:00
occheung 279593f984 ksupport.ld: merge sbss with bss 2021-09-10 13:25:12 +08:00
occheung 1ba8c8dfee runtime: remove irq again 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 942bd1a95d flake: add hydraJobs 2021-09-10 13:25:12 +08:00
occheung 3d629006df makefiles: revert byte-swaps 2021-09-10 13:25:12 +08:00
occheung 7542105f0f board_misoc: remove pcr
VexRiscv does not appear to support additional hardware performance counters; at least, I have not seen any documentation on how to use them.
2021-09-10 13:25:12 +08:00
occheung 01ca114c66 runtime: remove irq dependency 2021-09-10 13:25:12 +08:00
occheung 36171f2c61 runtime: remove inaccurate sp on panic 2021-09-10 13:25:12 +08:00
occheung 01e357e5d3 ksupport.ld: reduce load section alignment 2021-09-10 13:25:12 +08:00
occheung f77b607b56 compiler: generate symbols 2021-09-10 13:25:12 +08:00
occheung 1293e0750e ld, makefiles: use ld.lld 2021-09-10 13:25:12 +08:00
occheung fc42d053d9 kernel: use vexriscv 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 9adab6c817 flake: add devshell 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 8c468d0346 flake: switch to nightly rust with mozilla overlay 2021-09-10 13:25:12 +08:00
occheung 1b516b16e2 targets: default to vexriscv cpu 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq be5ae5c5b4 flake: configure binary cache 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq d13efd6587 add Nix flake 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq e8fe8409b2 libartiq_support: compatibility with recent stable rustc 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq cabe5ace8e compiler: remove DebugInfoEmitter for now
Causes problems with LLVM 9 and not needed at first.
2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 6629a49e86 compiler: use LLVM binutils/linker for Arm as well
Previously we kept GNU Binutils because they are less of a pain to support
on Windows (the source of so many problems), but with RISC-V we need to
update LLVM anyway.
2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 43d120359d compiler: switch to upstream llvmlite and RISC-V target 2021-09-10 13:25:12 +08:00
Sebastien Bourdeauducq 5656e52581 remove profiler 2021-09-10 13:25:12 +08:00
occheung 1b8b4baf6a ksupport: fix panic, libc, unwind 2021-09-10 13:25:12 +08:00
occheung 905330b0f1 ksupport: handle riscv exceptions 2021-09-10 13:25:12 +08:00
occheung 50a62b3d42 liballoc: change align to 16 bytes 2021-09-10 13:25:12 +08:00
occheung 7f0bc9f7f0 runtime/makefile: specify emulation, flip endianness 2021-09-10 13:25:12 +08:00
occheung c42adfe6fd runtime.ld: merge .sbss & .bss 2021-09-10 13:25:12 +08:00
occheung f56152e72f rust: fix dependencies 2021-09-10 13:25:12 +08:00
occheung c800b6c8d3 runtime: update rust alloc, managed 2021-09-10 13:25:09 +08:00
occheung e99061b013 runtime: add riscv 2021-09-10 13:23:22 +08:00
occheung ecedec577c runtime: impl riscv exception handling 2021-09-10 13:23:15 +08:00
occheung 252594a606 runtime: impl riscv panic handler 2021-09-10 13:20:31 +08:00
occheung 31bf17563c personality: update from rust/panic_unwind 2021-09-10 13:20:31 +08:00
occheung bfddd8a30f libdyld: add riscv support 2021-09-10 13:20:31 +08:00
occheung ad3037d0f6 libc: add minimal C types 2021-09-10 13:20:31 +08:00
occheung daaf6c3401 libunwind: add rust interface 2021-09-10 13:20:31 +08:00
occheung 6d9cebfd42 satman: handle .sbss generation 2021-09-10 13:20:31 +08:00
occheung 96438c9da7 satman: make fbi big-endian 2021-09-10 13:20:31 +08:00
occheung 6535b2f089 satman: fix feature 2021-09-10 13:20:31 +08:00
occheung 45adaa1d98 satman: add riscv exception handling 2021-09-10 13:20:31 +08:00
occheung 869a282410 satman: use riscv 2021-09-10 13:20:31 +08:00
occheung ebb9f298b5 proto_artiq: update alloc type path 2021-09-10 13:20:31 +08:00
occheung 97a0132f15 libio: update alloc type path 2021-09-10 13:20:31 +08:00
occheung 37ea863004 libio: pin failure version 2021-09-10 13:20:31 +08:00
occheung 3ff74e0693 bootloader: handle .sbss generation in .ld
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 448fe0e8cf bootloader: fix panic
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 8294d7fea5 bootloader: swap endianness
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 13032272fd bootloader: add rv32 exception handler
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 46102ee737 board_misoc: build vectors.S with rv64 target in misoc
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung b87ea79d51 rv32: rm irq & vexriscv-rust
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 9aee42f0f2 rv32/boot: remove hotswap
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
occheung 82b4052cd6 libboard_misoc: vexriscv integration
Signed-off-by: occheung <dc@m-labs.hk>
2021-09-10 13:20:31 +08:00
Leon Riesebos 2cf144a60c ddb_template: edge counter keys correspond with according ttl keys
Previously, ttl_counter_0 and ttl_0 could be on completely different physical TTL output channels.
With this change, ttl_0_counter (note the changed key format) is always on the same channel as ttl_0.

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-09-06 09:06:04 +08:00
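For illustration, a minimal device_db sketch of the pairing described in the commit above; the channel numbers and exact key names here are illustrative, not taken from any real variant.

```python
# Hypothetical device_db excerpt: the edge counter entry is generated on the
# same physical channel group as the matching TTL output.
device_db = {
    "ttl_0": {
        "type": "local",
        "module": "artiq.coredevice.ttl",
        "class": "TTLInOut",
        "arguments": {"channel": 0x000008},
    },
    "ttl_0_counter": {
        "type": "local",
        "module": "artiq.coredevice.edge_counter",
        "class": "EdgeCounter",
        # counter channel adjacent to ttl_0 rather than on an unrelated output
        "arguments": {"channel": 0x000009},
    },
}
```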
Robert Jördens e7a46ec767
Merge pull request #1749 from airwoodix/phaser-frame-alignment-utils
Phaser: add helpers to align updates with RTIO timeline
2021-09-03 14:00:17 +02:00
Etienne Wodey 4d7bd3ee32 phaser: fail init() if frame timestamp measurement times out
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-03 12:01:26 +02:00
Etienne Wodey 075cb26dd7 phaser: rename get_next_frame_timestamp() to get_next_frame_mu()
and implement review comments (PR #1749)

Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-03 09:58:01 +02:00
Etienne Wodey 7aebf02f84 phaser: docs: add reference to get_next_frame_timestamps(), fix typo
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-01 17:44:46 +02:00
Etienne Wodey 61b44d40dd phaser: add labels to debug init prints
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-01 17:43:30 +02:00
Etienne Wodey 65f8a97b56 phaser: add helpers to align updates to the RTIO timeline
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2021-09-01 17:42:54 +02:00
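A hedged usage sketch of the frame-alignment helper added above; only `get_next_frame_mu()` is named by these commits, while the device_db key `phaser0` and the specific oscillator update are assumptions.

```python
from artiq.experiment import EnvExperiment, kernel, at_mu, MHz

class AlignedPhaserUpdate(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("phaser0")  # assumed device_db key

    @kernel
    def run(self):
        self.core.reset()
        self.phaser0.init()
        # Land the update on a frame boundary so it takes effect at a
        # deterministic point of the RTIO timeline.
        at_mu(self.phaser0.get_next_frame_mu())
        self.phaser0.channel[0].oscillator[0].set_frequency(9*MHz)
```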
Robert Jördens 11790c6d7c
Merge pull request #1746 from quartiq/suservo_tester
Suservo tester
2021-08-19 10:20:29 +02:00
SingularitySurfer 65f63e6927 fix suservo start 2021-08-19 07:38:48 +00:00
Robert Jördens a53162d01d tester: tweak suservo
* p gain 1 to get reasonable power
* refine testing instructions and comments
2021-08-19 09:17:14 +02:00
SingularitySurfer 4d21a72407 Implement SUServo tester. 2021-08-18 15:10:27 +00:00
Mikołaj Sowiński 898122f3e5
Added support for HVAMP_8CH (#1741) 2021-08-16 13:39:00 +08:00
Sebastien Bourdeauducq 420891ba54 syntax 2021-08-12 13:01:35 +08:00
Sebastien Bourdeauducq 9f94bc61ae missing part of 477b1516d 2021-08-12 12:55:37 +08:00
Sebastien Bourdeauducq c69a1316ad compiler: stop using sys.version_info for parser 2021-08-12 12:52:24 +08:00
Sebastien Bourdeauducq 477b1516d3 remove profiler 2021-08-12 12:51:55 +08:00
Sebastien Bourdeauducq e3edb505e3 setup.py: remove outdated dependency_links 2021-08-12 12:48:46 +08:00
Sebastien Bourdeauducq 67847f98f4 artiq_run: fix multiarch 2021-08-12 12:48:10 +08:00
mwojcik 7879d3630b made kc705/gtx interface more similar to kasli/gtp 2021-08-10 18:53:52 +08:00
Sebastien Bourdeauducq 242dfae38e kc705: fix DRTIO targets 2021-08-06 15:41:47 +08:00
Star Chen 5111132ef0
ICAP: prevent sayma from using it (#1740) 2021-08-06 15:08:30 +08:00
Sebastien Bourdeauducq dc546630e4 kc705: DRTIO variants WIP 2021-08-06 14:41:41 +08:00
Robert Jördens fd824f7ad0 ddb_template: print LED channel nos on Kasli v2 2021-08-05 17:29:38 +02:00
Harry Ho c9608c0a89 zotino: default div_read unified with ad53xx at 16, fix ad53xx doc 2021-08-05 17:42:11 +08:00
Star Chen 6b88ea563d
talk to ICAP primitive to restart gateware (#1733) 2021-08-05 17:00:31 +08:00
Sebastien Bourdeauducq 97e994700b compiler: turn __repr__ into __str__ when sphinx is used. Closes #741 2021-08-05 11:32:20 +08:00
Sebastien Bourdeauducq c3d765f745 ad9910: fix type annotations 2021-08-05 11:30:54 +08:00
Robert Jördens 1e869aedd3
docs: clarify rtio_clock=e req's and use case
This setting regularly leads to misunderstandings.
Mentioning the Si5324 specifically or Urukul synchronization doesn't help constrain or explain the feature, its consequences and requirements.
Despite being non-standard, this feature is also generally not sufficient to achieve cross-device determinism, as the other devices need to be made deterministic as well.
2021-08-03 11:36:04 +02:00
Sebastien Bourdeauducq 53a98acfe4 artiq_flash: cleanup openocd handling, do not follow symlinks
Not following symlinks allows files to be added to OpenOCD via nixpkgs buildEnv.
2021-07-26 17:01:24 +08:00
Star Chen 30e5e06a33
moninj: fix read of incomplete data (#1729) 2021-07-22 17:56:38 +08:00
Star Chen ebb67eaeee
applets: add length warning message on plot for `plot_xy_hist` and fix bug (#1725) 2021-07-19 15:45:48 +08:00
Star Chen 943a95e07a
applets: add data length warning message for `plot_xy` (#1722) 2021-07-19 15:14:15 +08:00
Star Chen e996b5f635
applets: fix warning timing 2021-07-19 12:26:01 +08:00
StarChen 796aeabb53 documentation: correct artiq_coremgmt examples 2021-07-19 12:09:51 +08:00
Sebastien Bourdeauducq 4fb8ea5b73 artiq_flash: determine which firmware to flash by looking at filesystem
Closes #1719
2021-07-14 16:43:00 +08:00
Star Chen 5cd721c514
applets: add plot_hist dataset length mismatch warning (#1718) 2021-07-14 15:57:55 +08:00
Sebastien Bourdeauducq d327d2a505 doc: document shell-dev shortcut 2021-07-14 08:32:03 +08:00
Sebastien Bourdeauducq bc7ce7d6aa doc: mention Vivado version from vivado.nix. Closes #1715 2021-07-14 08:27:08 +08:00
Star Chen 6ce9c26402
GUI: add option to create new datasets (#1716) 2021-07-13 12:53:35 +08:00
occheung 2204fd2b22 adf5356: add delay to sync()
Signed-off-by: Oi Chee Cheung <dc@m-labs.hk>
2021-07-08 10:03:20 +08:00
pca006132 b10d1bdd37 compiler: proper union find
The find implementation was not very optimized, and the unify function
did not consider tree height and could build some tall trees.
2021-07-07 09:22:16 +08:00
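As a reference for the technique the commit above alludes to (not the compiler's actual code), a compact union-find with union by rank and path compression looks like this:

```python
class UnionFind:
    """Disjoint-set with path compression and union by rank."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        # Locate the root, then point every visited node directly at it.
        root = x
        while self.parent.setdefault(root, root) != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def unify(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank.setdefault(ra, 0) < self.rank.setdefault(rb, 0):
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```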
pca006132 4ede58e44b compiler: reduce calls to TypedTreeHasher
We need to check whether inference has reached a fixed point. This is checked
using a hash of the types in the AST, which is very slow. This patch
avoids computing the hash when we can be sure that the AST has definitely
changed, i.e. when we parse a new function.

For some simple programs with many functions, this can reduce the
compile time by up to ~30%.
2021-07-07 09:22:16 +08:00
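A purely illustrative sketch of the fixed-point pattern described above; all names are hypothetical and do not mirror the inferencer's real structure.

```python
# Hypothetical sketch: recompute the (expensive) type hash only when nothing
# has obviously changed since the previous pass.
def infer_until_fixed_point(ast, run_inference_pass, hash_types):
    previous_hash = None
    while True:
        parsed_new_function = run_inference_pass(ast)
        if parsed_new_function:
            # The AST definitely changed; skip the expensive hash and iterate.
            previous_hash = None
            continue
        current_hash = hash_types(ast)
        if current_hash == previous_hash:
            return ast          # fixed point reached
        previous_hash = current_hash
```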
Sebastien Bourdeauducq 51d2861e63 Freenode -> OFTC 2021-07-05 22:15:58 +08:00
Sebastien Bourdeauducq 29fd58e34b RELEASE_NOTES: update and fix formatting 2021-07-05 21:22:34 +08:00
pca006132 0257ecc332 update release notes 2021-07-02 17:01:31 +08:00
pca006132 822e8565f7 compiler: supports kernel decorators with path 2021-07-02 17:01:31 +08:00
pca006132 6fb31a7abb compiler: allow empty list in quote 2021-07-02 15:16:19 +08:00
pca006132 0806b67dbf compiler: speedup list processing 2021-07-02 14:22:25 +08:00
pca006132 f531af510c compiler: fixed embedding annotation evaluation 2021-06-25 11:32:23 +08:00
pca006132 c29a149d16 compiler: allows string annotation
According to PEP 484, a type hint can be a string literal for forward
references. With PEP 563, type hints are preserved in annotations in
string form.
2021-06-25 11:01:48 +08:00
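A small example of the host-side pattern this enables, assuming `TInt32` remains importable so the string annotations can be evaluated:

```python
from __future__ import annotations  # PEP 563: annotations are kept as strings

from artiq.experiment import EnvExperiment, kernel
from artiq.language.types import TInt32


class StringAnnotationExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def double(self, x: TInt32) -> TInt32:
        # Under PEP 563 these hints are stored as the strings "TInt32";
        # the compiler now evaluates them like ordinary annotations.
        return 2 * x

    def run(self):
        print(self.double(21))
```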
Etienne Wodey 094a346974 docs: fix snippet to advance the timeline by one coarse RTIO cycle
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-23 20:29:43 +08:00
Etienne Wodey 68268e3db8 docs: fix some formatting issues
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-23 20:29:43 +08:00
Etienne Wodey cca654bd47 test_device_db: fix on Windows (tempfile access limitations)
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-21 16:47:22 +08:00
Etienne Wodey 8bedf278f0 set_dataset: pass HDF5 options as a dict, not as loose kwargs
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-17 16:43:05 +02:00
Etienne Wodey 12ef907f34 master/databases: fix AttributeError in DatasetDB.set()
Add corresponding unit test.

Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-17 16:30:38 +02:00
Etienne Wodey d8b1e59538 datasets: allow passing options to HDF5 backend (e.g. compression)
This breaks the internal dataset representation used by applets
and when saving to disk (``dataset_db.pyon``).

See ``test/test_dataset_db.py`` and ``test/test_datasets.py``
for examples.

Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-17 12:04:16 +02:00
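A hedged sketch of the resulting call shape; the keyword name `hdf5_options` is an assumption here, and the tests referenced in the commit message are authoritative for the exact interface.

```python
from artiq.experiment import EnvExperiment

class CompressedDataset(EnvExperiment):
    def build(self):
        pass

    def run(self):
        data = list(range(1024))
        # Hedged: the dict keyword is assumed to be named hdf5_options; its
        # contents are passed to the HDF5 backend (e.g. gzip compression).
        self.set_dataset("spectrum", data, broadcast=True, archive=True,
                         hdf5_options={"compression": "gzip"})
```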
Etienne Wodey b8ab5f2607 master/databases: use tools.file_import to load the device_db
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-17 07:58:17 +08:00
Etienne Wodey 5c23e6edb6 test: add regression tests for master.databases.DeviceDB
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-06-17 07:58:17 +08:00
Sebastien Bourdeauducq 7046aa9c23 compiler: stop using deprecated numpy.float 2021-06-15 10:48:34 +08:00
Sebastien Bourdeauducq ea0c7b6173 Merge remote-tracking branch 'harrydrtio/k7-drtio' 2021-06-15 10:04:45 +08:00
Star Chen 9dee8bb9c9
Kasli: Added front panel user LED (#1623) (#1694) 2021-06-07 16:05:50 +08:00
pca006132 bcb030cc9c
aqctl_corelog: fix endianness issue (closes #1682) (#1689)
Fixed according to
https://forum.m-labs.hk/d/190-fetchingreading-the-core-log-in-a-central-location/10

Tested with both KC705 and ZC706.
2021-06-03 14:06:17 +08:00
Sebastien Bourdeauducq 522c2f5995 doc: nixpkgs 21.05 2021-06-02 08:18:28 +08:00
Sebastien Bourdeauducq ea1dd2da43 artiq_ddb_template: kasli-soc support 2021-05-30 20:33:44 +08:00
Leon Riesebos 07bd1e27c1 artiq_flash: wrap paramiko commands in bash login shell
The login shell loads the Nix environment on non-NixOS systems.

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-05-27 21:44:10 +08:00
David Nadlinger b89610bbcd manual/compiler: Mention TArray annotation 2021-05-24 11:50:10 +01:00
pca006132 4c743cf8af revert busy polling 2021-05-23 14:07:11 +08:00
pca006132 1e9a131386 coredevice.comm_kernel: performance improvement
reduced latency by busy polling, and improved byte list performance.
2021-05-23 13:30:00 +08:00
Harry Ho 43b2a3791c jsonschema: only allow enable_sata_drtio=true for Kasli if v1.0/1.1 2021-05-17 12:46:19 +08:00
Sebastien Bourdeauducq 935e18c1be artiq_flash: improve openocd not found error message 2021-05-13 14:45:23 +08:00
Robert Jördens 67d474e6cf
Merge pull request #1657 from pathfinder49/phaser 2021-05-12 12:54:05 +02:00
fanmingyu212 91832aa886 manual: cannot use empty lists in kernel
Signed-off-by: Mingyu Fan <mingyufan@ucsb.edu>
2021-05-12 11:37:18 +08:00
Marius Weber 129cf8c1dd Phaser: Make set_nco_phase set the phase of the NCO
Prior to this commit, `set_nco_phase()` set the phase of the DUC instead
of the NCO. Setting the phase of the NCO may be desirable to utilise the
auto-sync functionality of the double-buffered DAC-NCO settings.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-11 23:16:14 +01:00
Charles Baynham 011f3bdb2e docs: Add artiq_influx_generic to default_network_ports.rst and list_of_ndsps.rst 2021-05-10 15:26:26 +08:00
Marius Weber fb6fad7c64 update release notes
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 15:04:56 +01:00
Marius Weber 043c9c20d7 phaser: Improve documentation of DAC settings
1. Clarify which features require additional configuration via the `dac`
   constructor argument.
2. Document when DAC settings apply immediately/are staged.
3. Document how staged DAC settings may be applied.
4. Clarify operation of `dac_sync`.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:58:30 +01:00
Marius Weber f97baa8aec phaser: workaround malformed output with `mixer_ena=1` & `nco_ena=0`
When Phaser is powered on and `init()` is first called, enabling the
DAC mixer while leaving the NCO disabled causes malformed output.
This commit implements a workaround by making sure the NCO is enabled
before being set to the desired state.

This commit also avoids the following procedure, which results in
malformed output:
1. Operate Phaser with the DAC mixer and NCO enabled
2. Set the NCO to a non-zero frequency
3. Disable the NCO in the device_db
4. Re-initialise Phaser

After this procedure, with CMIX disabled, incorrect output is produced.
To clear the fault, one must re-enable the NCO and write the NCO frequency
to zero before disabling the NCO.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 4fa2028671 phaser: fix coarse mixer register offset
The CMIX bits are bits 12-15 in register 0x0d. This has been checked
against the datasheet and verified on hardware. Until now, the bit for
CMIX1 was written to CMIX0. The CMIX0 bit was written to a reserved bit.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
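One plausible reading of the off-by-one described above, written out as plain bit arithmetic; the values and the helper name are illustrative only.

```python
# Worked example consistent with the description above (values illustrative).
CMIX_REG = 0x0D

def cmix_field(cmix: int) -> int:
    """Place the 4-bit coarse mixer code in bits 12-15 of register 0x0d."""
    assert 0 <= cmix <= 0xF
    return (cmix & 0xF) << 12

# With the shift one bit short, the CMIX1 bit (0b0010) lands in the CMIX0
# position and CMIX0 spills into a reserved bit below the field:
buggy = (0b0010 & 0xF) << 11   # 0x1000 -> wrong field placement
fixed = cmix_field(0b0010)     # 0x2000 -> bits 12-15 as per datasheet
```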
Marius Weber 515cfa7dfb Phaser: expose coarse mixer and document need to enable the DAC-mixer.
In some use cases, a larger tunable range than is available via the DUC may
be needed. Some users may wish to combine the coarse mixer with the
DUC to extend the tunable range.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 4f812cc4ed Phaser: zero oscillator amplitude after `init()` (close #1651)
Currently, `init()` leaves a single oscillator at full scale. The phase
accumulator of this oscillator is held continuously cleared. Provided no
upconverting mechanism is active (DUC, CMIX, NCO), this produces a full-scale
DC voltage. The DC voltage is blocked by hardware capacitors. This behaviour
is not mentioned by the `init` documentation.

If one attempts to use any other oscillator without reducing the amplitude
of the oscillator enabled by `init`, there is significant clipping.

In the case that the NCO or CMIX are configured via the device_db
(as suggested in the docs), leaving the oscillator at full scale results in
full RF output power after calling `init()`. This may plausibly damage loads
driven by Phaser.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 407fba232d Phaser upconverter: set phase-frequency detector to 62.5 MHz (close #1648)
The suitable PFD clock depends on the use case and will likely need
to be configured by some users. All things being equal, a higher PFD
clock is desirable, as it results in lower local oscillator phase-noise.

Phaser was designed around a maximum PFD clock of 62.5 MHz. In integer mode,
with no local oscillator frequency divisor set, a 62.5 MHz PFD clock results
in a 125 MHz local oscillator step size. Given the +-200 MHz range of the DUC
(more if using the DAC mixer), this step size will be acceptable to many.
This seems like the most appropriate default configuration, as it should offer
the best phase-noise performance.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 75445fe5f0 Phaser: expose and automate clearing of DAC `sif_sync` (close #1630 and #1650)
`sif_sync` must be triggered to apply NCO frequency changes. To achieve per-channel
frequency tunability exceeding the range of the DUC, the NCO frequency must be
adjusted. User code needs to trigger `sif_sync` to achieve this.

`sif_sync` can only be triggered if the bit was previously cleared. To avoid this pitfall,
the clearing of `sif_sync` is automated.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 1c96797de5 Phaser upconverter: Follow datasheet procedure for VCO calibration (close #1643)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber 7404152e4c Phaser upconverter: rename `ndiv` -> `nint` to match datasheet (close #1638)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:47 +01:00
Marius Weber eb477ee06b phaser: print gw_rev in debug mode 2021-05-08 14:48:46 +01:00
Marius Weber c7e992e26d Phaser: flake8
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-05-08 14:48:46 +01:00
Sebastien Bourdeauducq eb38b664e3 phaser: typo 2021-05-07 10:00:10 +08:00
Peter Drmota 47bf5d36af coredevice.comm_kernel: Fix unpacking of lists of numpy.int64
test.coredevice.test_embedding: Add tests for list of numpy.int64
2021-04-21 15:46:58 +01:00
Leon Riesebos af4fadcd54 added DefaultMissing to __all__
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:42:21 +08:00
Leon Riesebos a0cea3a011 added __iter__ and __len__ to ScanObject base class
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:42:21 +08:00
Leon Riesebos 2671c271d4 ad99xx unified type annotations for cfg_sw() methods and fixed test cases
closes #1642

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:29:55 +08:00
Leon Riesebos d745d50245 ad99xx added additional kernel invariants
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:18:31 +08:00
Leon Riesebos 4a6201c083 ad99xx make kernel invariants instance variable
Prevents mutations of the class variable, which would apply to all instances at once.
closes #1654

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-04-21 11:18:31 +08:00
Robert Jördens ffe1c9f9b1
Merge pull request #1628 from pathfinder49/fastino_mu_fix
fastino: ensure `xxx_to_mu()` methods return int32 on the host
2021-04-15 15:02:12 +02:00
Marius Weber bda5aa7c7e fastino: ensure `xxx_to_mu()` methods return int32 on the host
Currently running `voltage_to_mu()` or `voltage_group_to_mu()` on the host will
convert all machine unit values to int64. This leads to issues when machine units
are returned from RPCs.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2021-04-15 11:41:22 +01:00
Sebastien Bourdeauducq 78490bef5d manual: document Vivado installer crash workaround 2021-04-05 09:17:50 +08:00
David Nadlinger b7f3eaebf9 gui: Fix occasional wrong fuzzy select menu position on KDE/Linux 2021-04-04 00:04:11 +01:00
Harry Ho fc59791583 jsonschema: mirny: fix clk_sel default value 2021-03-30 16:06:56 +08:00
Harry Ho 8002fcf8bb jsonschema: style 2021-03-29 17:49:43 +08:00
Harry Ho 5f32cb7196 jsonschema: mirny: accept string enums for validating clk_sel 2021-03-29 17:49:43 +08:00
Harry Ho 75efb8985c ddb_template: mirny_cpld: accept clk_sel as a string 2021-03-29 17:49:43 +08:00
Sebastien Bourdeauducq 523fa01343 manual: fix OpenOCD conda instructions 2021-03-27 12:16:08 +08:00
David Nadlinger bdaaf3c1d7 dashboard: Disable Group CCB policy menu before first entry is selected
It was possible to crash the dashboard by opening the context menu
before an applet entry had been selected for the first time (e.g.
immediately after startup) and selecting one of the Group CCB
actions, as the enable update slot would not have been run.
2021-03-21 02:04:24 +00:00
David Nadlinger 6fd088e339 test/lit: Fix invalid type inference test
This broke after b8cd163978, but
is invalid code to start with; this would have previously
crashed the code generator had the code actually been compiled.

(Allowing implicit conversion to bool would be a separate debate.)
2021-03-21 01:46:52 +00:00
David Nadlinger be4669d7a5 compiler: Fix crash with try/finally and stack-return function calls
The previous code could have never worked as-is, as the result slot
went unused, and it tried to append the load instruction to the
block just terminated with the invoke.

GitHub: Fixes #1506, #1531.
2021-03-21 01:31:26 +00:00
David Nadlinger 1f40f3ce15 compiler: Map host numpy.bool_ values to TBool
Since we don't implement any integer-like operations for TBool
(addition, bitwise not, etc.), TBool is currently neither
strictly equivalent to builtin bool nor numpy.bool_, but through
very obvious compiler errors (operation not supported) rather than
silently different runtime behaviour.

Just mapping both to TBool thus is a huge improvement over the
current behaviour (where numpy.False_ is a true-like object). In
the future, we could still implement more operations for TBool,
presumably following numpy.bool_ rather than the builtin type,
just like builtin integers get translated to the numpy-like
TInt{32,64}.

GitHub: Fixes #1275.
2021-03-20 00:54:41 +00:00
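A minimal sketch of the situation this fixes, assuming a standard experiment skeleton:

```python
import numpy as np
from artiq.experiment import EnvExperiment, kernel

class BoolAttributeExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        # Host-side NumPy booleans (e.g. results of comparisons) now map to
        # the kernel TBool type instead of behaving as a true-like object.
        self.flag = np.bool_(False)

    @kernel
    def run(self):
        if self.flag:           # previously numpy.False_ was truthy here
            print("flag set")
        else:
            print("flag clear")
```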
David Nadlinger b8cd163978 compiler: Fix type inference for "ternary" if expressions
Previously, any type would be accepted for the test expression,
leading to internal errors in the code generator if the passed
value wasn't in fact a bool.
2021-03-20 00:27:25 +00:00
David Nadlinger 888696f588 coredevice: Fix RPC typing for bool lists/arrays
GitHub: Fixes #1635.
2021-03-20 00:03:10 +00:00
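A hedged sketch of an RPC carrying a list of bools, with the return annotation spelled using `artiq.language.types` (assumed to be the relevant annotation style for this fix):

```python
from artiq.experiment import EnvExperiment, kernel
from artiq.language.types import TList, TBool

class BoolListRPC(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    def read_flags(self) -> TList(TBool):
        # Runs on the host; the annotation tells the compiler what the
        # kernel receives over the RPC link.
        return [True, False, True]

    @kernel
    def run(self):
        flags = self.read_flags()
        for f in flags:
            print(f)
```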
Leon Riesebos d04bcd8754 add get_*() functions to ad9910, ad9912, and urukul. closes #1616
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-03-15 13:06:24 +08:00
Leon Riesebos c22f731a61 added typing and reformatted driver for ad9910, ad9912, and urukul
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-03-15 13:06:24 +08:00
David Nadlinger 5ba22c11c3 compiler: Change type inference rules for empty array() calls
array([...]), the constructor for NumPy arrays, currently has the
status of some weird kind of macro in ARTIQ Python, as it needs
to determine the number of dimensions in the resulting array
type, which is a fixed type parameter on which inference cannot
be performed.

This leads to an ambiguity for empty lists, which could contain
elements of arbitrary type, including other lists (which would
add to the number of dimensions).

Previously, I had chosen to make array([]) to be of completely
indeterminate type for this reason. However, this is different
to how the call behaves in host NumPy, where this is a well-formed
call creating an empty 1D array (or 2D for array([[], []]), etc.).

This commit adds special matching for (recursive lists of) empty
ListT AST nodes to treat them as scalar dimensions, with the
element type still unknown.

This also happens to fix type inference for embedding empty 1D
NumPy arrays from host object attributes, although multi-dimensional
arrays will still require work (see GitHub #1633).

GitHub: Fixes #1626.
2021-03-14 22:48:43 +00:00
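The host NumPy behaviour the new inference rule mirrors can be checked directly:

```python
import numpy as np

# Host NumPy treats (recursively) empty lists as well-formed arrays with a
# known number of dimensions and zero-length axes; ARTIQ Python now infers
# the same shapes, with the element type left to later inference.
print(np.array([]).shape)        # (0,)   -> 1D, empty
print(np.array([[], []]).shape)  # (2, 0) -> 2D, two empty rows
```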
David Nadlinger c707ccf7d7 compiler: Properly implement NumPy array slicing
Strided slicing of one-dimensional arrays (i.e. with non-trivial
steps) might have previously been working, but would have had
different semantics, as all slices were copies rather than a view
into the original data.

Fixing this in the future will require adding support for an index
stride field/tuple to our array representation (and all the
associated indexing logic).

GitHub: Fixes #1627.
2021-03-14 20:02:59 +00:00
David Nadlinger 557671b7db compiler: Fix type inference in slice expressions
This was a long-standing issue affecting both lists and
the new NumPy array implementation, just caused by the
generic inference passes not being run on the slice
subexpressions (and thus e.g. ints not being monomorphized).

GitHub: Fixes #1632.
2021-03-14 18:46:28 +00:00
David Nadlinger 75c255425d compiler: Linguistically untangle comment [nfc] 2021-03-14 18:40:21 +00:00
Leon Riesebos b8f4c6b9bb added test case for get_experiment() with nested class
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-02-28 14:26:44 +08:00
Leon Riesebos 1deaa758ce get_experiment() is able to get nested experiment classes using dots in class names.
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-02-28 14:26:44 +08:00
Leon Riesebos 3c68223337 replaced deprecated inspect.getargspec() with inspect.getfullargspec()
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-02-28 14:25:05 +08:00
Leon Riesebos cd7f9531d7 added abstract describe method to ScanObject
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2021-02-28 14:25:05 +08:00
jonathanpritchard e577542f6b
Updated NDSP documentation (#1617) 2021-02-25 09:27:10 +08:00
Sebastien Bourdeauducq 92fd705990 increase memory allocated to comms CPU
See discussion in #1612.
2021-02-21 19:06:12 +08:00
Sebastien Bourdeauducq 8deb269b9a update major version 2021-02-17 16:18:05 +08:00
Sebastien Bourdeauducq 489f950406 RELEASE_NOTES: update ARTIQ-6 section 2021-02-17 15:52:08 +08:00
Sebastien Bourdeauducq 14d464b4cf update copyright year 2021-02-17 15:52:08 +08:00
Etienne Wodey 3cd96a951a master: refactor experiments enumeration, use tools.get_experiment
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-02-13 10:06:12 +08:00
Etienne Wodey 2ca9b64ba1 test: add unit tests for tools.file_import and tools.get_experiment
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-02-13 10:06:12 +08:00
Sebastien Bourdeauducq d33a206f04 eem: fix Urukul QSPI after 9ef5717de8 (2) 2021-02-12 13:17:48 +08:00
Astro 3844cde97b jsonschema: validate hw_dev depending on target 2021-02-12 11:09:01 +08:00
Sebastien Bourdeauducq 22ce5b0299 eem: fix Urukul QSPI after 9ef5717de8 2021-02-12 10:59:53 +08:00
Etienne Wodey af411de639 tools/file_import: simplify, remove deprecated load_module() call
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-02-10 16:03:31 +08:00
Sebastien Bourdeauducq e54dd08821 metlino,sayma: adapt to new EEM API
This also enables 4X SERDES TTLs.
2021-02-10 15:32:10 +08:00
Sebastien Bourdeauducq 547254e89e eem_7series: pass through kwargs 2021-02-10 15:31:49 +08:00
Sebastien Bourdeauducq 49299c00a9 eem: enable DCI for LVDS TTL 2021-02-10 15:31:25 +08:00
Sebastien Bourdeauducq 9ef5717de8 eem: support different I/O standards in EEM slots 2021-02-10 15:31:05 +08:00
Drew 48a1c305c1 master: fix DeprecationWarning on logger.warn
Resolves the following deprecation warning, emitted when worker_impl.py:199 is run:

```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn(message)
```
2021-02-10 15:27:22 +08:00
Astro 461199b903 kasli_generic: warn if min_artiq_version is not met 2021-02-10 15:26:15 +08:00
Astro 4b2ed67dd7 coredevice_generic.schema.json: add "min_artiq_version" 2021-02-10 15:26:15 +08:00
Sebastien Bourdeauducq cf9cf0ab6f ttl_serdes_7series: add dci (HP bank) support 2021-02-07 22:32:18 +08:00
Sebastien Bourdeauducq 997a48fb31 ttl_serdes_ultrascale: fix, add dummy dci argument 2021-02-07 22:31:46 +08:00
Sebastien Bourdeauducq bbe0c9162a ttl_serdes_ultrascale: cleanup 2021-02-07 22:00:33 +08:00
Sebastien Bourdeauducq 3572e2a9c7 ttl_serdes_7series: fix 2021-02-07 21:41:13 +08:00
Sebastien Bourdeauducq 88c212b84f ttl_serdes_7series: cleanup 2021-02-07 21:33:21 +08:00
Sebastien Bourdeauducq db25f4e8f7 ttl_serdes_7series: use simpler I/O buffers
In theory equivalent with these parameters.
2021-02-07 20:10:37 +08:00
Sebastien Bourdeauducq 6bd9691ba8 gateware: remove TTL dead code 2021-02-07 19:58:02 +08:00
Sebastien Bourdeauducq bfacd1e5b3 eem: fix Grabber cc_0-2 signal definitions 2021-02-07 18:01:05 +08:00
Sebastien Bourdeauducq f7a33a1f99 gateware: make 7-series EEM handling functions shareable 2021-02-07 14:34:26 +08:00
Sebastien Bourdeauducq 1213f78ee9 jsonschema: support kasli_soc 2021-02-07 13:39:01 +08:00
Robert Jördens 2f5ea67b69
Merge pull request #1596 from airwoodix/fix-adf5356-init
coredevice/adf5356: fix initial device detection
2021-02-02 18:20:08 +01:00
Robert Jördens 0c634c7a46
Merge pull request #1601 from airwoodix/enh-mirny-clksel
mirny: hw_rev independent, human readable clk_sel
2021-02-02 16:32:09 +01:00
Etienne Wodey d691b05d78 coredevice/mirny: better error handling for clk_sel
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-02-02 16:23:47 +01:00
Etienne Wodey 78e1b9f8e5 sinara_tester/mirny: remove hw_rev checking fixup code
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-01-29 18:47:40 +01:00
Etienne Wodey 6f8e788620 coredevice/mirny: support human readable clk_sel
In init(), read hw_rev to derive clk_sel code from user string.

Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-01-29 18:46:47 +01:00
Etienne Wodey a8bc98a77b coredevice/adf5356: fix initial device detection
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2021-01-28 18:29:40 +01:00
Ilia Sergachev 78cbab4260 doc: fix missing artiq_flash argument 2021-01-29 00:45:17 +08:00
Sebastien Bourdeauducq 3657055bc0 doc: fix typo. Closes #1586 2021-01-27 13:15:04 +08:00
David Nadlinger f9872bb7b8 coredevice: Handle prematurely closed sockets in comm_kernel receive loop
recv() returns 0 instead of data if the socket has already
been closed. This is translated into a zero-length list on
the Python layer. Previously, the code would enter an
infinite loop if the socket was closed while attempting
to receive data.
2021-01-26 18:10:49 +08:00
David Nadlinger f1fd42ea98 coredevice: Re-enable TCP keepalive
This partially reverts commit b5e1bd3fa2,
which had removed keepalive. This, however, led to experiments
hanging forever if the core device had dropped the connection
(e.g. to a kernel CPU panic, or the device being rebooted).

The chosen keepalive settings are fairly conservative (with the
10 s timeout) to avoid any possible interaction with smoltcp's
3 s ARP try interval (see GitHub issue #1150), even though this
should be a non-issue now due to the larger ARP cache.
2021-01-26 18:10:49 +08:00
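For reference, this is roughly how conservative keepalive settings like those described above are applied to a socket on Linux; the exact values chosen in comm_kernel are not asserted here.

```python
import socket

def enable_keepalive(sock, after_idle=10, interval=10, max_fails=3):
    """Hedged sketch: enable TCP keepalive on a connected socket (Linux options)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Start probing after `after_idle` seconds of silence...
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle)
    # ...probe every `interval` seconds...
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    # ...and drop the connection after `max_fails` unanswered probes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)
```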
pca006132 8148fdb8a7
use device endian for core device protocols (#1591) 2021-01-22 16:33:21 +08:00
Harry Ho a0fd5261ea kc705: cleanup 2021-01-22 11:11:13 +08:00
Harry Ho 7c4eed7a11 kc705: simplify DRTIO master & satellite
* KC705 master: user can no longer choose whether or not the SMA acts as the 2nd DRTIO channel; SFP and SMA now act as the 1st and 2nd channel respectively by default.
* KC705 satellite: user should now use `--sma` to enable using the SMA as the satellite channel; SFP acts as the satellite channel by default.
2021-01-22 11:11:13 +08:00
David Nadlinger 1e443a3aea coredevice: Reuse Target.little_endian for protocol endianness [nfc] 2021-01-21 09:11:54 +01:00
pca006132 ec72eeda46 coredevice: use device endian for kernel and RPC 2021-01-21 09:07:48 +01:00
pca006132 3832b261b1 firmware: optimize integer array/list rpc 2021-01-21 09:05:17 +01:00
Harry Ho 88b14082b6 drtio/transceiver/gtx: delete obsolete modules 2021-01-20 15:05:32 +08:00
Harry Ho 9daf77bd58 kc705: add multichannel support on satellite
* Two DRTIO channels (i.e. satellite and repeater) are enabled by default.
* User can choose either the SFP or SMA as the satellite channel (by passing `--drtio-sat sfp` or `--drtio-sat sma` to the argparser); the unchosen one becomes the repeater channel.
2021-01-20 15:05:32 +08:00
Harry Ho 52afd4ef6b kc705: add GTX multilane support, add multichannel support on master
* One DRTIO master channel is enabled by default.
* User can set the SMA as the 2nd master channel (by passing `--drtio-sma` to the argparser).
* Multi-channel (i.e. with repeaters) on KC705 satellite is supported but has not been implemented yet.
2021-01-20 15:05:32 +08:00
Harry Ho f6d39fd6ba kc705: revive DRTIO master with updated syntax
* KC705 master variant now uses Si5324 as synthesiser.
* Multi-channel has not been implemented yet.
2021-01-20 15:05:31 +08:00
Harry Ho f25e86e934 kc705: revive DRTIO satellite with updated syntax, update GTX
* Multi-channel has not been implemented yet.
2021-01-20 11:25:38 +08:00
David Nadlinger c229e76d07 compiler: Add accidentally omitted note to invalid RPC type diagnostic
Might be a minor quality-of-life improvement, but there
isn't a test case for this anyway.
2021-01-20 01:49:16 +01:00
Robert Jördens 261870bdee phaser: fix oscillator rtio address for even base addresses
close #1580
2021-01-19 16:56:50 +01:00
Sebastien Bourdeauducq 641f8bcdd6 doc: update development instructions. Closes #1585 2021-01-18 15:12:25 +08:00
David Nadlinger f11aef74b4 gui: Add context menu entry to close all applets
This is occasionally very useful if a large number of
applets were left open (e.g. spawned via CCB).
2021-01-17 11:56:03 +01:00
Sebastien Bourdeauducq c675488a99 reorganize JSON schema files 2021-01-16 10:43:14 +08:00
Astro de5f9cd49f RELEASE_NOTES: add JSON schema 2021-01-16 10:35:23 +08:00
Astro c6807f4594 kasli_generic: validate description against schema, use defaults from schema 2021-01-16 10:35:23 +08:00
Astro 45b5cfce05 gateware: add a kasli_generic.schema.json 2021-01-16 10:35:23 +08:00
Sebastien Bourdeauducq cb44b0cd1a doc: remove qutip from install example (removed from nixpkgs) 2021-01-15 17:18:44 +08:00
David Nadlinger 9b39b1e328 test: Add coredevice tests for matrix multiplication
Also includes a regression test specifically for
mixing multiple types in one kernel.
2021-01-12 03:02:07 +01:00
David Nadlinger f0284b2549 compiler: Fix collision of environments in matmult implementations
GitHub: Fixes #1578.
2021-01-12 03:02:07 +01:00
David Nadlinger 362f8ecb69 compiler: Add test for disallowing type-unstable array-assign binops 2021-01-12 03:02:07 +01:00
David Nadlinger 96692791cf compiler: Implement assigning binops for arrays
GitHub: Fixes #1579.
2021-01-12 03:02:07 +01:00
pca006132 5b5db1433b Revert "compiler: enabled vectorize option"
This reverts commit 636898c302.
2021-01-11 19:43:12 +08:00
Harry Ho 3e93d71aeb manual: fix artiq.dashboard becoming alias of unittest.mock
* Closes m-labs#1293
2021-01-11 18:48:30 +08:00
pca006132 636898c302 compiler: enabled vectorize option 2021-01-11 16:31:24 +08:00
occheung 6a5f5088e2 frontend: sinara_tester: add mirny test
Signed-off-by: Oi Chee Cheung <dc@m-labs.hk>
2021-01-05 17:01:01 +08:00
Harry Ho cff7bcc122 Merge branch 'master' (43be383c86) into k7-drtio 2020-12-31 13:30:46 +08:00
Harry Ho dc7addf394 Revert "drtio: remove KC705/GTX support"
This reverts commit ebdbaaad32.
2020-12-31 13:29:50 +08:00
Chris Ballance 43be383c86 kasli v2.0: drive TX_DISABLE low on all SFPs (fixes #1570)
This was the same problem as #1508 but on SFP1..3
2020-12-23 00:10:12 +08:00
Harry Ho 43ecb3fea6 sayma: add comments about CPLL line rate on KU GTH 2020-12-19 17:05:20 +08:00
Harry Ho 8cd794e9f4 jesd204_tools: use new syntax from jesd204b core
* requires jesd204b changes as in https://github.com/HarryMakes/jesd204b/tree/gth
2020-12-19 17:05:20 +08:00
Aadit Rahul Kamat 19f75f1cfd artiq_browser: update h5py api call
Signed-off-by: Aadit Rahul Kamat <aadit.k12@gmail.com>
2020-12-17 14:23:16 +08:00
Aadit Rahul Kamat 0a14cc5855 Update link to Release Notes doc in PR template
Signed-off-by: Aadit Rahul Kamat <aadit.k12@gmail.com>
2020-12-17 14:21:25 +08:00
occheung a017dafee6 ddb_template: mirny_cpld: add default value
Signed-off-by: Oi Chee Cheung <dc@m-labs.hk>
2020-12-15 11:00:59 +08:00
Harry Ho 73271600a1 jdcg: STPL tests are now performed after DAC initialization 2020-12-14 18:03:31 +08:00
occheung 3f631c417d artiq_ddb_template: mirny_cpld: add refclk, clk_sel args
Signed-off-by: occheung <occheung@connect.ust.hk>
2020-12-14 13:38:20 +08:00
occheung 33d39b261a artiq_ddb_template: mirny_cpld: rename adf5355 to adf5356
Signed-off-by: occheung <occheung@connect.ust.hk>
2020-12-14 13:38:20 +08:00
Sebastien Bourdeauducq 4b10273a2d gui: quamash -> qasync 2020-12-12 21:59:25 +08:00
Sebastien Bourdeauducq 1ce505c547 coredevice: remove obsolete watchdog code (#1458) 2020-12-08 13:25:39 +08:00
Sebastien Bourdeauducq 072053c3b2 compiler: remove obsolete watchdog code (#1458) 2020-12-08 13:25:08 +08:00
Sebastien Bourdeauducq ccdc741e73 sayma_amc: fix --sfp argument 2020-12-07 18:02:36 +08:00
Robert Jördens 33285253fb
Merge pull request #1558 from quartiq/phased_ddb_fix
Phased ddb fix
2020-12-04 16:38:40 +01:00
Leon Riesebos 3b2c225fc4 allow dashboard to close if no connection can be made to moninj
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-12-04 23:00:23 +08:00
Leon Riesebos 94271504dd Added RuntimeError to prelude to make the name available in kernels
closes #1477

Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-12-04 22:59:08 +08:00
SingularitySurfer 9b4b550f76 5 is correct. 2020-12-04 14:49:30 +00:00
SingularitySurfer cba631610c fixed phaser number of rtio channels 2020-12-04 14:40:59 +00:00
Robert Jördens 6ceb3f3095
Merge pull request #1551 from quartiq/tester_tweaks
modified urukul instructions in sinara tester script
2020-11-26 16:22:44 +01:00
Harry Ho d51d4e6ce0 doc: fix missing instructions for bypassing Si5324 on Kasli 2020-11-26 12:03:28 +08:00
Sebastien Bourdeauducq eda4850f64 Revert "fixes with statement with multiple items"
This reverts commit 88d346fa26.
2020-11-22 11:57:22 +08:00
Sebastien Bourdeauducq 8e46c3c1fd Revert "compiler: fix incorrect with behavior"
This reverts commit fe6115bcbb.
2020-11-22 11:57:21 +08:00
SingularitySurfer 0605267424 modified urukul instructions 2020-11-19 12:20:34 +00:00
Marius Weber 3e38833020 ad9910: fix `turns_to_pow` return-type on host
When run on the host, the `turns_to_pow` return type is numpy.int64.
Sensibly, the compiler does not attempt to convert `numpy.int64` to `int32`.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2020-11-13 18:54:47 +01:00
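A hedged host-side sketch of the pitfall the commit addresses: prior to the fix, an explicit narrowing like the one below was needed wherever an int32 machine-unit value was expected (the device_db key is assumed).

```python
import numpy as np
from artiq.experiment import EnvExperiment

class PowOnHost(EnvExperiment):
    def build(self):
        self.setattr_device("urukul0_ch0")   # assumed AD9910 device_db key

    def prepare(self):
        # Before this fix, turns_to_pow() returned numpy.int64 on the host;
        # an explicit np.int32(...) narrowing was needed where int32 was
        # expected, e.g. for values returned to kernels over RPC.
        self.pow_quarter_turn = np.int32(self.urukul0_ch0.turns_to_pow(0.25))

    def run(self):
        print(self.pow_quarter_turn)
```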
David Nadlinger 9ff47bacab compiler: Provide libm special functions (erf, Bessel functions, …)
Tests hard-depend on SciPy to make sure this is exercised
during CI.
2020-11-11 19:15:30 +01:00
David Nadlinger a5dcd86fb8 test/lit: Rename `array` to avoid conflict with standard library
The old name created problems if a test dependency (e.g. NumPy/SciPy)
ends up importing the system `array` module internally somewhere.
2020-11-11 17:42:53 +01:00
David Nadlinger d95e619567 compiler: Implement binary NumPy math functions (arctan2, …)
The bulk of the diff is just factoring out the implementation
for binary arithmetic implementations, to be reused for binary
function calls.
2020-11-11 01:35:28 +01:00
David Nadlinger fcf4763ae7 RELEASE_NOTES: Expand information on ndarrays 2020-11-10 20:40:18 +01:00
David Nadlinger bc6fbecbda compiler, firmware: Do not expose abort() to kernels
This was only exposed for the assert implementation, and
does not exist on Zynq.
2020-11-10 20:40:18 +01:00
David Nadlinger 292043a0a7 compiler: Raise AssertionErrors instead of abort()ing on all targets 2020-11-10 20:40:18 +01:00
Leon Riesebos d8a5a8f568 fixed value scaling issue for the center scan gui widget
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
2020-11-10 18:42:18 +01:00
Etienne Wodey dbcac62fd0 coredevice: adf5356: fix/adjust docs
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey e8730a7e14 coredevice: adf5356: add test for failed PLL lock
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey 3844123c13 coredevice: adf5356: add enable/disable and power setting for outA
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey 61dc2b8b64 coredevice: adf5356: add some tests
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey b200465cce coredevice: adf5355: rename to adf5356
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey d433f6e86d coredevice: adf5355: more general PLL parameters calculation
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
Etienne Wodey b856df7c35 coredevice: adf5355: cleanup, style 2020-11-10 10:49:22 +08:00
Etienne Wodey 211500089f coredevice: mirny/adf5355: add basic high-level interface
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-11-10 10:49:22 +08:00
David Nadlinger 4f311e7448 compiler: Raise exception on failed assert()s rather than panic
This allows assert() to be used on Zynq, where abort() is not
currently implemented for kernels. Furthermore, this is arguably
the more natural implementation of assertions on all kernel targets
(i.e. where embedding into host Python is used), as it matches host
Python behavior, and the exception information actually makes it to
the user rather than leading to a ConnectionClosed error.

Since this does not implement printing of the subexpressions, I
left the old print+abort implementation as default for the time
being.

The lit/integration/instance.py diff isn't just a spurious change;
the exception-based assert implementation exposes a limitation in
the existing closure lifetime tracking algorithm (which is not
supposed to be what is tested there).

GitHub: Fixes #1539.
2020-11-10 00:51:24 +01:00
David Nadlinger f0ec987d23 test/coredevice: Avoid NumPy deprecation warning
Jagged arrays are no longer silently inferred as dtype=object,
as per NEP-34.

The compiler ndarray (re)implementation is unchanged, so the
test still fails.
2020-11-09 23:53:50 +01:00
Sebastien Bourdeauducq ea95d91428 wrpll: separate collector reset 2020-11-09 17:57:13 +08:00
David Nadlinger a97b4633cb compiler: Add math_fns module docstring [nfc] 2020-10-31 19:06:00 +01:00
Robert Jördens 19bd1e38d4
Merge pull request #1540 from airwoodix/fix-typo-phaser-doc
coredevice/phaser: fix typo in docstring
2020-10-29 22:55:06 +01:00
Etienne Wodey ecef5661ce coredevice/phaser: fix typos in docstring
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-10-29 20:27:08 +01:00
David Nadlinger d672d2fc35 test/coredevice: Fixup NumPy references
This fixes a copy/paste refactoring mistake from d5f90f6c9.
2020-10-20 02:49:05 +02:00
David Nadlinger d5f90f6c9f compiler: Fix quoting of multi-dimensional arrays
GitHub: Fixes m-labs/artiq#1523.
2020-10-20 01:40:14 +02:00
David Nadlinger d161fd5d84 compiler: Properly expand dimensions for array([]) with ndarray elements
This matches host NumPy behaviour (and, in either case, was
previously broken, as it still continued past the array element
type).
2020-10-20 01:40:14 +02:00
David Nadlinger 94489f9183 compiler: Fix inference order issue in multi-dim. subscript
This will be caught by the test for an imminent array quoting fix.
2020-10-20 01:40:14 +02:00
Robert Jördens a9dd0a268c
Merge pull request #1533 from m-labs/phaser
Phaser
2020-10-19 09:30:12 +02:00
Robert Jördens 30d1acee9f fastlink: fix fastino style link 2020-10-18 20:43:21 +00:00
Robert Jördens d98357051c add ref data 2020-10-18 20:43:21 +00:00
Robert Jördens 139385a571 fastlink: add fastino test 2020-10-18 17:11:09 +00:00
Sebastien Bourdeauducq d185f1ac67 wrpll: fix mulshift (2) 2020-10-17 00:32:02 +08:00
Sebastien Bourdeauducq 3f076bf79b wrpll: fix mulshift 2020-10-16 22:05:37 +08:00
Sebastien Bourdeauducq 90017da484 firmware: remove obsolete watchdog code (#1458) 2020-10-15 18:38:00 +08:00
Sebastien Bourdeauducq 6af8655cc7 README: update 2020-10-15 17:14:30 +08:00
Sebastien Bourdeauducq 840364cf0c RELEASE_NOTES: fix typo 2020-10-15 16:57:53 +08:00
Sebastien Bourdeauducq 24259523bb RELEASE_NOTES: link to issue consistently 2020-10-15 16:51:02 +08:00
Sebastien Bourdeauducq ed90450d2c README: mention Sinara in ARTIQ manifesto 2020-10-15 16:48:28 +08:00
Sebastien Bourdeauducq 0a37a3dbf7 RELEASE_NOTES: fix formatting 2020-10-15 16:45:17 +08:00
Sebastien Bourdeauducq 4027735a6d RELEASE_NOTES: fix formatting 2020-10-15 16:42:43 +08:00
Sebastien Bourdeauducq 4000adfb21 RELEASE_NOTES: update ARTIQ-6 section 2020-10-15 16:42:28 +08:00
Sebastien Bourdeauducq 59703ad31d test: stop checking for artiq_netboot 2020-10-15 16:18:56 +08:00
Sebastien Bourdeauducq 7a5996ba79 artiq_netboot: moved to git.m-labs.hk/M-Labs/artiq-netboot 2020-10-15 16:14:22 +08:00
Sebastien Bourdeauducq e66d2a6408 manual: clarify and expand nix-shell file 2020-10-15 14:31:25 +08:00
Sebastien Bourdeauducq 57ee57e7ea runtime: fix metlino si5324 init (2) 2020-10-14 18:41:56 +08:00
Sebastien Bourdeauducq ac35548d0f runtime: fix metlino si5324 init 2020-10-14 12:57:25 +08:00
Sebastien Bourdeauducq 35c61ce24d si5324: unify N31 settings when used as synthesizer
Closes #1528
2020-10-12 14:45:52 +08:00
hartytp a058be2ede wrpll: fix test_helper_collector 2020-10-08 19:43:12 +08:00
pca006132 d0d0a02fd0 test: added lit test for new error messages 2020-10-08 19:38:26 +08:00
pca006132 e9988f9d3b compiler: error message for custom operations
Emit error messages for custom comparison and inclusion tests,
instead of crashing the compiler.
2020-10-08 19:38:26 +08:00
Sebastien Bourdeauducq db62cf2abe wrpll: convert tests to self-checking unittests 2020-10-08 18:38:01 +08:00
Sebastien Bourdeauducq 07d43b6e5f wrpll: babysit Vivado DSP retiming
Design now passes timing.
2020-10-08 17:51:27 +08:00
Sebastien Bourdeauducq 7dfb4af682 kasli2: work around vivado clock constraint problem 2020-10-08 16:31:39 +08:00
Sebastien Bourdeauducq 96a5df0dc6 kasli2: add false path constraint for wrpll helper clock 2020-10-08 16:19:44 +08:00
Sebastien Bourdeauducq 6248970ef8 wrpll: clean up matlab comparison test 2020-10-08 15:40:15 +08:00
hartytp cd8c2ce713 wrpll: add test to compare collector+filter against Matlab simulation 2020-10-08 15:36:56 +08:00
hartytp d780faf4ac wrpll.si549: initialize the clock divider to a sensible value 2020-10-08 15:32:27 +08:00
hartytp e6ff2ddc32 wrpll: add more diagnostics in firmware and adapt to recent gateware changes 2020-10-08 15:32:27 +08:00
hartytp 7d7be6e711 wrpll.core: move collector into helper CD so we can get tags out while the filters are reset 2020-10-08 15:32:27 +08:00
Sebastien Bourdeauducq 3fa5d0b963 wrpll: clean up sign extension 2020-10-08 15:32:27 +08:00
hartytp 87911810d6 wrpll.core: add CSRs to monitor the collector outputs 2020-10-08 15:32:27 +08:00
hartytp f2f942a8b4 wrpll.ddmtd: remove CSRs from DDMTD
We will gather them from the collector output so we can get all tags on the same cycle.
2020-10-08 15:32:27 +08:00
hartytp 85bb641917 wrpll.ddmtd: fix first edge deglitcher
The blind counter should be held in reset whenever the input is high,
not just when there is a rising edge (otherwise the counter runs down
during the main pulse and can then re-trigger on jitter from the falling edge)
2020-10-08 15:32:27 +08:00
hartytp f3cd0fc675 wrpll.filters: the helper clipping threshold is currently way too low. Move clipping after the bitshift to increase it a bit.
TODO: think about this and pick a sensible threshold (and also think about integrator anti-windup)
2020-10-08 15:32:27 +08:00
hartytp e5e648bde1 wrpll: add bit shift for collector helper output 2020-10-08 15:32:27 +08:00
hartytp c9ae406ac6 wrpll: change the DDMTD helper frequency to match CERN, improve docs 2020-10-08 15:32:27 +08:00
hartytp f6f6045f1a wrpll.thls: fix make 2020-10-08 15:32:27 +08:00
hartytp b44b870452 wrpll.filters: update to match Weida's MatLab simulations 2020-10-08 15:32:27 +08:00
hartytp e9ab434fa7 wrpll.core: update for modified collector 2020-10-08 15:32:27 +08:00
Sebastien Bourdeauducq 17c952b8fb wrpll: style 2020-10-08 15:32:27 +08:00
hartytp ebb7ccbfd1 wrpll: document DDMTD collector and fix unwrapping 2020-10-08 15:32:27 +08:00
Sebastien Bourdeauducq 7c2519c912 manual: nixpkgs 20.09 2020-10-08 09:18:46 +08:00
Sebastien Bourdeauducq 1bfe977203 manual: sphinx mock module whack-a-mole 2020-10-07 19:25:26 +08:00
Sebastien Bourdeauducq 66401aee9c dashboard: cleanup import 2020-10-07 19:24:54 +08:00
Sebastien Bourdeauducq 6baf3b2198 RELEASE_NOTES: fix indentation 2020-10-07 19:24:34 +08:00
pca006132 fe6115bcbb compiler: fix incorrect with behavior 2020-10-07 18:59:35 +08:00
pca006132 02f46e8b79 Fixes none to bool coercion
Fixes #1413 and #1414.
2020-10-07 15:34:24 +08:00
pca006132 88d346fa26 fixes with statement with multiple items
Closes #1478
2020-10-07 15:33:34 +08:00
Sebastien Bourdeauducq 9214e0f3e2 firmware: fix Si5324 CKIN selection on Kasli 2.0
https://github.com/sinara-hw/Kasli/issues/82#issuecomment-702129805
2020-10-02 20:35:32 +08:00
Robert Jördens eecd97ce4c phaser: debug and comments 2020-09-27 17:15:16 +00:00
Robert Jördens c453c24fb0 phaser: tweak slacks 2020-09-26 21:16:08 +00:00
Robert Jördens 6c8bddcf8d phaser: tune sync_dly 2020-09-26 21:13:00 +00:00
Robert Jördens 569e5e56cd phaser: autotune and fix fifo_offset 2020-09-26 20:37:16 +00:00
Robert Jördens 2fba3cfc78 phaser: debug init, systematic bring-up 2020-09-25 20:54:59 +00:00
Robert Jördens fec2f8b763 phaser: increase slack for iotest 2020-09-24 10:59:22 +00:00
Robert Jördens a65239957f ad53xx: distinguish errors 2020-09-24 10:52:03 +02:00
Robert Jördens 6e6480ec21 phaser: tweak slacks and errors, identify trf 2020-09-24 08:38:30 +00:00
Robert Jördens 03d5f985f8 phaser: another artiq-python signed integer quirk 2020-09-23 15:40:54 +00:00
Robert Jördens ef65ee18bd dac34h84: unflip spectrum, clear nco 2020-09-23 08:35:56 +00:00
Robert Jördens 50b4eb4840 Merge branch 'master' into phaser
* master: (26 commits)
  fastino: documentation and eem pass-through
  kasli2: forward sma_clkin to si5324
  test: relax test_dma_playback_time on Zynq
  rpc: fixed _write_bool
  fastino: document/cleanup
  build_soc: remove assertion that was used for test runs
  metlino_sayma_ttl: Fix RTIO frequency & demo code (#1516)
  Revert "test: temporarily disable test_async_throughput"
  build_soc: rename identifier_str to gateware_identifier_str
  test: relax loopback gate timing
  test: temporarily disable test_async_throughput
  test: relax test_pulse_rate on Zynq
  test: skip NonexistentI2CBus if I2C is not supported
  build_soc: override identifier_str only for gateware
  examples: add Metlino master, Sayma satellite with TTLOuts via FMC
  sayma_amc: add support for 4x DIO output channels via FMC
  fmcdio_vhdci_eem: fix pin naming
  build_soc: add identifier_str override option
  RPC: optimization by caching
  test: improved test_performance
  ...
2020-09-22 16:02:25 +00:00
Robert Jördens c55f2222dc fastino: documentation and eem pass-through
* Repeat information about matching log2_width a few times
  in the hope that people read it. #1518
* Pass through log2_width in kasli_generic json. close #1481
* Check DAC value range. #1518
2020-09-22 17:58:53 +02:00
Robert Jördens ad096f294c phaser: add hitl test exercising the complete API 2020-09-22 15:35:19 +00:00
Robert Jördens 85d16e3e5f phaser: tweaks 2020-09-22 15:27:38 +00:00
Robert Jördens 5c76f5c319 tester: add phaser 2020-09-22 14:36:49 +00:00
Robert Jördens fd5e221898 phaser: dac and trf register maps, init code 2020-09-22 14:08:39 +00:00
Robert Jördens 3e036e365a phaser: nco, settings and init tweaks 2020-09-22 09:52:49 +00:00
Robert Jördens fdb2867757 phaser: fewer iotest patterns 2020-09-21 17:06:26 +02:00
Robert Jördens d730851397 phaser: elaborate init sequence, more tests 2020-09-21 15:05:29 +00:00
Robert Jördens f0959fb871 phaser: iotest early, check_alarms 2020-09-17 14:13:58 +00:00
Robert Jördens b15e388b5f ad53xx: distinguish errors 2020-09-17 14:13:10 +00:00
Sebastien Bourdeauducq 29c940f4e3 kasli2: forward sma_clkin to si5324 2020-09-17 16:53:43 +08:00
Robert Jördens 868a9a1f0c phaser: new multidds 2020-09-16 14:06:38 +00:00
Robert Jördens c18f515bf9 phaser: rework rtio channels, sync_dly, init() 2020-09-16 12:23:07 +00:00
Robert Jördens f3b0398720 phaser: n=2, m=16, sync_dly 2020-09-16 09:19:15 +00:00
Robert Jördens 9b58b712a6 phaser: doc tweaks 2020-09-15 12:35:26 +00:00
Robert Jördens ff57813a9c phaser: init [wip] 2020-09-15 08:46:47 +00:00
Robert Jördens 07418258ae phaser: init [wip] 2020-09-15 08:46:10 +00:00
Robert Jördens 3a79ef740b phaser: work around integer size 2020-09-15 08:46:10 +00:00
Robert Jördens b449e7202b phaser: rework docs 2020-09-15 08:46:10 +00:00
Robert Jördens b619f657b9 phaser: doc tweaks 2020-09-12 19:59:49 +02:00
Robert Jördens c3728678d6 phaser: document, elaborate comments, some fixes 2020-09-12 17:35:14 +00:00
Robert Jördens e505dfed5b phaser: refactor coredevice driver 2020-09-12 14:17:40 +00:00
Robert Jördens fdd2d6f2fb phaser: SI methods 2020-09-12 11:02:37 +00:00
Sebastien Bourdeauducq bff611a888 test: relax test_dma_playback_time on Zynq 2020-09-11 11:21:45 +08:00
Robert Jördens 4e24700205 phaser: spelling 2020-09-09 16:52:52 +00:00
Robert Jördens 8aaeaa604e phaser: share_lut 2020-09-07 16:06:35 +00:00
Robert Jördens e69bb0aeb3 phaser: add comment about get_dac_data 2020-09-07 16:06:16 +00:00
pca006132 6195b1d3a0 rpc: fixed _write_bool
Closes #1519
2020-09-04 13:49:22 +08:00
Robert Jördens 56aa22caeb fastino: document/cleanup
* added documentation on `update`/`hold` mechanism
* mask machine unit values
* cleanup coredevice driver

close #1518
2020-09-03 17:44:26 +02:00
Astro 1b475bdac4 build_soc: remove assertion that was used for test runs 2020-09-03 20:24:18 +08:00
Harry Ho 458a411320
metlino_sayma_ttl: Fix RTIO frequency & demo code (#1516) 2020-09-03 15:08:31 +08:00
Sebastien Bourdeauducq 47e88dfcbe Revert "test: temporarily disable test_async_throughput"
This reverts commit f0289d49ab.
2020-09-03 14:19:55 +08:00
Astro 002a71dd8d build_soc: rename identifier_str to gateware_identifier_str 2020-09-02 00:00:57 +08:00
Sebastien Bourdeauducq 4398a2d5fa test: relax loopback gate timing 2020-09-01 17:50:09 +08:00
Sebastien Bourdeauducq f0289d49ab test: temporarily disable test_async_throughput
M-Labs/artiq-zynq#104
2020-09-01 17:49:40 +08:00
Sebastien Bourdeauducq 8d5dc0ad2a test: relax test_pulse_rate on Zynq 2020-09-01 17:08:26 +08:00
Sebastien Bourdeauducq f294d039b3 test: skip NonexistentI2CBus if I2C is not supported 2020-09-01 16:47:04 +08:00
Astro 91df3d7290 build_soc: override identifier_str only for gateware 2020-09-01 10:46:39 +08:00
Harry Ho 3d84135810 examples: add Metlino master, Sayma satellite with TTLOuts via FMC 2020-08-31 16:21:45 +08:00
Harry Ho dfbf3311cb sayma_amc: add support for 4x DIO output channels via FMC 2020-08-31 16:21:45 +08:00
Harry Ho 1ad9deaf91 fmcdio_vhdci_eem: fix pin naming 2020-08-31 16:21:45 +08:00
Astro 45ae6202c0 build_soc: add identifier_str override option
Signed-off-by: Stephan Maka <stephan@spaceboyz.net>
2020-08-31 11:48:58 +08:00
Robert Jördens 272dc5d36a phaser: documentation 2020-08-28 16:36:44 +00:00
pca006132 b2572003ac RPC: optimization by caching
This reduced the calls needed for socket send/recv.
2020-08-28 14:58:34 +08:00
pca006132 69f0699ebd test: improved test_performance
1. Added tests for small payload.
2. Added statistics.
2020-08-28 14:58:34 +08:00
Sebastien Bourdeauducq 7cf974a6a7 comm_kernel: fix typo 2020-08-28 12:25:23 +08:00
Robert Jördens 68bfa04abb phaser: trf readback strobe spi changes 2020-08-27 15:31:42 +00:00
Robert Jördens 96fc248d7c phaser: synchronize multidds to frame 2020-08-27 14:28:19 +00:00
Robert Jördens c10ac2c92a phaser: add trf, duc, interfaces, redo body assembly, use more natural iq ordering (i lsb) 2020-08-27 14:26:09 +00:00
Robert Jördens e5e2392240 phaser: wire up multidds 2020-08-26 17:12:41 +00:00
Robert Jördens d1be1212ab phaser: coredevice shim, dds [wip] 2020-08-26 15:10:50 +00:00
pca006132 26bc5d2405 Updated release notes 2020-08-26 14:17:06 +08:00
pca006132 aac2194759 Ported rpc changes to or1k 2020-08-26 14:17:06 +08:00
pca006132 7181ff66a6 compiler: improved rpc performance for list and array
1. Removed duplicated tags before each element.
2. Use numpy functions to speed up parsing.
2020-08-26 14:17:06 +08:00
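As a standalone illustration of the second point (not the actual ARTIQ comm/RPC code; the payload layout here is made up), a packed buffer can be parsed with a single vectorized numpy call instead of one struct.unpack_from per element:

```python
import struct
import numpy as np

# Hypothetical packed payload: 1000 little-endian int32 values.
payload = np.arange(1000, dtype="<i4").tobytes()

# Element-by-element parsing: one struct.unpack_from call per value.
slow = [struct.unpack_from("<i", payload, 4 * i)[0] for i in range(1000)]

# Vectorized parsing of the whole buffer in a single call.
fast = np.frombuffer(payload, dtype="<i4")

assert slow == fast.tolist()
```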
pca006132 cfddc13294 test: fixed test_performance
Added more tests and used normal RPC instead of async RPC.

Async RPC does not represent the real throughput, which is limited by the
hardware and the network. Normal RPC, which requires a response from the
remote, is closer to real use cases.
2020-08-26 14:17:06 +08:00
Robert Jördens 20fcfd95e9 phaser: coredevice shim, readback fix 2020-08-24 15:46:31 +00:00
Robert Jördens bcefb06e19 phaser: ddb template, split crc 2020-08-24 14:51:50 +00:00
Robert Jördens 11c9def589 phaser: readback delay, test fastlink 2020-08-24 14:49:36 +00:00
Paweł Kulik eb350c3459 Drive SFP0 TX_DISABLE low during startup (as it was in Kasli v1.1). Fixes Ethernet on SFP modules with pullup on this line.
Signed-off-by: Paweł Kulik <pawel.kulik@creotech.pl>
2020-08-24 21:39:53 +08:00
Robert Jördens 63e4b95325 fastlink: rework crc injection 2020-08-23 19:41:13 +00:00
Robert Jördens a27a03ab3c fastlink: fix crc vs data width 2020-08-23 19:02:50 +00:00
Robert Jördens 7e584d0da1 fastino: use fastlink 2020-08-22 11:56:23 +00:00
Robert Jördens 3e99f1ce5a phaser: refactor link 2020-08-22 11:56:23 +00:00
Robert Jördens a34a647ec4 phaser: refactor fastlink 2020-08-22 11:56:23 +00:00
Robert Jördens aa0154d8e2 phaser: initial 2020-08-22 11:56:23 +00:00
Sebastien Bourdeauducq 5f6aa02b61 gui: unbreak background 2020-08-14 13:14:45 +08:00
David Nadlinger 69718fca90 gui: Improve fuzzy-select heuristics
Even though the code already used non-greedy wildcards before,
it would not find the shortest match, as earlier match starts
would still take precedence.

This could possibly be sped up a bit in CPython by doing
everything inside re using lookahead-assertion trickery, but the
current code is already imperceptibly fast for hundreds of
choices.
2020-08-14 02:13:45 +01:00
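A standalone illustration of the regex behaviour described above (plain Python, not the dashboard code): a non-greedy pattern still prefers the earliest match start, so finding the genuinely shortest match means scanning every start position, for example with a capturing lookahead:

```python
import re

haystack = "axxxxb ab"

# Non-greedy, but re.search() still anchors to the earliest match start.
print(re.search(r"a.*?b", haystack).group(0))            # "axxxxb"

# Scanning every start position via a capturing lookahead and keeping the
# shortest candidate yields the intuitively better fuzzy match.
candidates = [m.group(1) for m in re.finditer(r"(?=(a.*?b))", haystack)]
print(min(candidates, key=len))                           # "ab"
```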
pca006132 a46573e97a Revert "test: set uart log level to INFO for DMA tests"
This reverts commit b05cbcbc24.
2020-08-13 12:44:33 +08:00
pca006132 b05cbcbc24 test: set uart log level to INFO for DMA tests 2020-08-13 12:24:57 +08:00
Sebastien Bourdeauducq 48008eaf5f test: omit unavailable math functions on OR1K 2020-08-12 15:01:13 +08:00
Sebastien Bourdeauducq d8cd5023f6 runtime: expose more libm functions 2020-08-12 13:36:06 +08:00
David Nadlinger c6f0c4dca4 test/coredevice: Ignore jagged 2D array embedding test for now 2020-08-10 00:23:38 +01:00
David Nadlinger daf57969b2 compiler: Do not expand strings into TInt(8)s in array() 2020-08-09 23:46:45 +01:00
David Nadlinger 778f2cf905 compiler: Fix numpy.full, implement for >1D 2020-08-09 23:46:45 +01:00
David Nadlinger 53d64d08a8 compiler: Fix multi-dim slice error message test, tweak wording 2020-08-09 23:14:56 +01:00
David Nadlinger d35f659d25 compiler: Add additional math fns available from Rust libm 2020-08-09 20:09:43 +01:00
David Nadlinger a39bd69ca4 compiler: Implement numpy.rint() using llvm.round() 2020-08-09 19:44:58 +01:00
David Nadlinger ae47d4c0ec test/coredevice: Add host/device consistency checks for NumPy math 2020-08-09 19:15:43 +01:00
David Nadlinger 8e262acd1e compiler: Slight array op implementation cleanup [nfc]
array_unaryop_funcs was never used; since the mangled names
are unique, a single dictionary would be nicer for overrides
anyway.
2020-08-09 18:58:01 +01:00
David Nadlinger 33d931a5b7 compiler: Implement multi-dimensional indexing of arrays
This generates rather more code than necessary, but has
the advantage of automatically handling incomplete
multi-dimensional subscripts which still leave arrays
behind.
2020-08-09 17:08:43 +01:00
David Nadlinger b00ba5ece1 compiler: Support explicit array(…, dtype=…) syntax 2020-08-09 17:08:43 +01:00
David Nadlinger ad34df3de1 compiler: Support numpy.float
This would previously crash the compiler.
2020-08-09 17:08:43 +01:00
David Nadlinger 8783ba2072 compiler/firmware: RPCs for ndarrays 2020-08-09 17:08:43 +01:00
David Nadlinger 5472e830f6 compiler: Assume array()s are always rectangular 2020-08-09 03:54:42 +01:00
David Nadlinger 8eddb9194a test/lit: Add smoke test for math function broadcasting 2020-08-09 03:54:42 +01:00
David Nadlinger 1c645d8857 compiler: Unbreak quoting of 1D ndarrays
Lists and arrays no longer have the same representation all
the way through codegen, as used to be the case.

This could/should be made more efficient later, eliding the
temporary copies.
2020-08-09 03:54:42 +01:00
David Nadlinger df8f1c5c5a compiler: Annotate math functions nounwind/nowrite 2020-08-09 03:54:42 +01:00
David Nadlinger cc00ae9580 compiler: Implement broadcasting of math functions 2020-08-09 03:54:42 +01:00
David Nadlinger be7d78253f compiler: Implement 1D-/2D- array transpose
Left generic transpose (shape order inversion) for now, as that
would be less ugly if we implement forwarding to Python function
bodies for array function implementations.

Needs a runtime test case.
2020-08-09 03:54:42 +01:00
David Nadlinger faea886c44 compiler: Implement array vs. scalar broadcasting 2020-08-09 03:54:42 +01:00
David Nadlinger 56a872ccc0 compiler: Insert array binop shape check in caller for location information 2020-08-09 03:54:42 +01:00
David Nadlinger ef260adca8 compiler: Implement matrix multiplication
LLVM will take care of optimising the loops. This was still
unnecessarily painful; implementing generics and implementing
this in ARTIQ Python looks very attractive right now.
2020-08-09 03:54:42 +01:00
David Nadlinger 0da4a61d99 compiler: Fix method name typo [nfc] 2020-08-09 03:54:42 +01:00
David Nadlinger 78afa2ea8e compiler: Support MatMult in inferencer
Still needs actual codegen support.
2020-08-09 03:54:42 +01:00
David Nadlinger 4d48470320 compiler: Support common numpy.* math functions
Relies on the runtime to provide the necessary
(libm-compatible) functions.

The test is nifty, but a bit brittle; if this breaks in the
future because of optimizer changes, do not hesitate to convert
this into a more pedestrian test case.
2020-08-09 03:54:41 +01:00
David Nadlinger d37503f21d compiler: T{C -> External}Function, clarify docs [nfc] 2020-08-09 03:54:41 +01:00
David Nadlinger da255bee1b compiler: Implement element type coercion for arrays
So far, this is not exposed to the user beyond implicit conversions.

Note that all the implicit conversions, such as triggered by adding
arrays of mismatching types, or dividing integer arrays, are currently
emitted in a maximally inefficient way, where a temporary copy is first
made for the type conversion. The conversions would more sensibly be
implemented during the per-element operations to save on the extra
copies, but the current behaviour fell out of the rest of the IR
generator structure without extra changes.
2020-08-09 03:54:41 +01:00
David Nadlinger 4426e4144f compiler: Implement unary plus/minus for arrays
Implementation is needlessly generic to anticipate
coercion/transcendental functions.
2020-08-09 03:54:41 +01:00
David Nadlinger 0d8fbd4f19 test/lit: Add a test for matrix binary operations
No reason to believe other operations won't work the same.
(More exhaustive tests to follow using embedding for comparison
against NumPy.)
2020-08-09 03:54:41 +01:00
David Nadlinger 7bdd6785b7 test/lit: Basic ndarray smoke tests for all binops 2020-08-09 03:54:41 +01:00
David Nadlinger 4d002c7934 compiler: Explain use of rpc_tag() in array ops, formatting [nfc] 2020-08-09 03:54:41 +01:00
David Nadlinger a7e855b319 compiler.types: Change invalid default value [nfc]
This wasn't actually ever used, but was a dict instead of a set.
2020-08-09 03:54:41 +01:00
David Nadlinger 48fb80017f compiler: Implement basic element-wise array operations 2020-08-09 03:54:41 +01:00
David Nadlinger 9af6e5747d compiler: Factor rpc_tag() out of llvm_ir_generator 2020-08-09 03:54:41 +01:00
David Nadlinger e77c7d1c39 compiler: Add inferencer support for array operations 2020-08-09 03:54:41 +01:00
David Nadlinger ef57cad1a3 compiler: Test ndarray element assignment 2020-08-09 03:54:41 +01:00
David Nadlinger a9a975e5d4 language: Allow instantating TArray using bare ints 2020-08-09 03:54:41 +01:00
David Nadlinger 504b8f0148 language: Export TArray 2020-08-09 03:54:41 +01:00
David Nadlinger dea3c0c572 compiler: Don't store redundant ndarray buffer length, match list layout
This adds `elt` to _TPointer and the ir.Offset IR instruction,
which is like GetElem but without the final load.
2020-08-09 03:54:41 +01:00
David Nadlinger e82357d180 compiler: Fix inferencer tests after adding TArray.num_dims 2020-08-09 03:54:41 +01:00
David Nadlinger cb1cadb46a compiler: Fix/test 1D array construction from generic iterables 2020-08-09 03:54:41 +01:00
David Nadlinger 38c17622cc compiler: Axis-wise iteration of ndarrays
Matches NumPy. Slicing a TList reallocates, this doesn't; offsetting
couldn't be handled in the IR without introducing new semantics
(the Alloc kludge; could/should be made its own IR type).
2020-08-09 03:54:41 +01:00
David Nadlinger c95a978ab6 compiler: Iteration for 1D ndarrays 2020-08-09 03:54:41 +01:00
David Nadlinger bc17bb4d1a compiler: Parametrize TArray in number of dimensions 2020-08-09 03:54:41 +01:00
David Nadlinger 632c5bc937 compiler: Add ndarray .shape access 2020-08-09 03:54:41 +01:00
David Nadlinger 40f59561f2 compiler: Add test for length of empty arrays [nfc]
This makes sure we are actually emitting this as a 1D array
(like NumPy does).
2020-08-09 03:54:41 +01:00
David Nadlinger d882f8a3f0 compiler: Implement len() for ndarrays 2020-08-09 03:54:41 +01:00
David Nadlinger 575be2aeca compiler: Basic support for creation of multidimensional arrays
Breaks all uses of array(), as indexing is not yet implemented.
2020-08-09 03:54:41 +01:00
David Nadlinger 56010c49fb compiler/inferencer: Detect rectangular array()s
Still needs support through all the rest of the compiler, and
support for higher-dimensional arrays.

Alternatively, we could always assume ndarrays of ndarrays
are rectangular (i.e. ban array/list element types), and
detect mismatch at runtime. This might turn out to be
preferable to be able to construct matrices from rows/columns.

`array()` is disallowed for no particularly good reason other than
numpy API compatibility.
2020-08-09 03:54:41 +01:00
David Nadlinger 6ea836183d test/lit: Move some list tests to appropriate module [nfc] 2020-08-09 03:54:41 +01:00
pmldrmota 1df62862cd
AD9910: Write correct number of bits to POW register (#1498)
* coredevice.ad9910: Add return type hints to conversion functions

* coredevice.ad9910: Make set_pow write correct number of bits
The AD9910 expects 16 bits. Thus, if writing 32 bits to the POW register, the chip would likely enter a locked-up state.

* coredevice.ad9910: Correct data alignment in write_16

Co-authored-by: Robert Jördens <rj@quartiq.de>

* coredevice.ad9910: Add function to read from 16 bit registers

Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
2020-08-07 10:10:44 +02:00
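For illustration only (turns_to_pow16 is a hypothetical helper, not the artiq.coredevice.ad9910 API): with a 16-bit POW register, a phase in turns is scaled by 2**16 and must be rounded and masked to 16 bits rather than 32.

```python
def turns_to_pow16(turns: float) -> int:
    # Phase offset word for a 16-bit POW register: one full turn = 2**16.
    return round(turns * (1 << 16)) & 0xFFFF

print(hex(turns_to_pow16(0.25)))   # 0x4000, a quarter turn
print(hex(turns_to_pow16(0.5)))    # 0x8000, half a turn
```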
Sebastien Bourdeauducq 504f72a02c rtio: remove legacy i_overflow_reset CSR 2020-08-06 17:52:32 +08:00
Sebastien Bourdeauducq 5f36e49f91 test_rtio: make DMA test generic wrt TTL channel 2020-08-06 16:36:14 +08:00
pca006132 3bfd372c20 compiler: linker discard local symbols.
Fixes exception backtrace problem for ARM.
2020-08-06 16:07:28 +08:00
Sebastien Bourdeauducq e3c5775584 test: skip CacheTest.test_borrow on Zynq 2020-08-06 10:54:30 +08:00
Sebastien Bourdeauducq 9c9dc3d0ef manual: Kasli now supports 10/100 Ethernet 2020-08-01 10:35:37 +08:00
David Nadlinger ae999db8f6 compiler: Revert function call lifetime tracking fix
This reverts commits f8d1506922
and cf19c9512d.

While the commit just fixes a clear typo in the implementation,
it turns out the original algorithm isn't flexible enough to
capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function just forwarding to it (as seen in the
embedding tests) isn't anymore.

A separate issue is also that this makes implementing functions
that take lists and return references to global data in user code
impossible, which central parts of the Oxford codebase rely on.

Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
2020-07-30 16:40:39 +01:00
Sebastien Bourdeauducq 709026d945 test: relax device_to_host_rate 2020-07-30 17:46:22 +08:00
Sebastien Bourdeauducq 455e4859b7 simplify versioneer
Original version is very complex and still has a number of problems.
2020-07-30 00:54:07 +08:00
Sebastien Bourdeauducq 5fd0d0bbb6 gui: work around quamash bug with python 3.8 2020-07-28 12:08:47 +08:00
David Nadlinger cf19c9512d RELEASE_NOTES: Add entry for compiler lifetime tracking fix
I contemplated putting this in the "Breaking changes" section,
as it might break user code that has avoided being hit by
memory corruption from the use-after free by chance (even
though it was always an accepts-illegal bug).
2020-07-28 00:54:20 +01:00
David Nadlinger f8d1506922 compiler: Fix lifetime tracking for function call return values
GitHub: Fixes #1497.
2020-07-28 00:33:28 +01:00
cw-mlabs e4b16428f5 wrpll: fix run signal 2020-07-27 13:02:02 +08:00
cw-mlabs 8dd9a6d024 wrpll: fix scl signal 2020-07-27 12:59:32 +08:00
Charles Baynham 9b44ec7bc6 parameters: Allow forcing a NumberValue to return a float
Signed-off-by: Charles Baynham <charles.baynham@npl.co.uk>
2020-07-27 12:25:51 +08:00
David Nadlinger 1c72585c1b compiler: Handle None-returning function calls used as values
GitHub: Fixes #1493.
2020-07-25 02:20:53 +01:00
David Nadlinger 57e759a1ed compiler: Consistently use llunit through llvm_ir_generator [nfc] 2020-07-25 02:20:52 +01:00
Sebastien Bourdeauducq 2a2f5c4d58 comm_analyzer: make header error flag more general 2020-07-20 19:39:19 +08:00
Sebastien Bourdeauducq 553a49e194 test_moninj: set loop_out as output 2020-07-19 17:59:43 +08:00
Sebastien Bourdeauducq 8510bf4e55 test_analyzer: configure loop_out as output 2020-07-16 19:28:58 +08:00
pca006132 eb28d7be3a firmware/rpc: fixed typo 2020-07-16 15:15:47 +08:00
pca006132 f78d673079 firmware/rpc: added `#[repr(C)]` for structs.
Previously, the structs were repr(Rust), which has no layout guarantee.
2020-07-16 15:11:17 +08:00
Robert Jördens e31ee1f0b3 firmware/i2c: rewrite I2C implementation
* Never drive SCL or SDA high. They are specified to be open
  collector/drain and pulled up by resistive pullups. Driving
  high fails miserably in a multi-master topology (e.g. with
  a USB I2C interface). It would only ever be implemented to
  speed up the bus actively but that's tricky and completely
  unnecessary here.
* Make the handover states between the I2C protocol phases (start, stop,
  restart, write, read) well defined. Add comments stressing those
  pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
2020-07-15 16:43:07 +08:00
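A toy model of the open-collector behaviour this relies on (illustrative Python only; the real implementation is Rust firmware and the class below is made up): a node never drives the line high, it only pulls it low or releases it, and the pull-up resistor restores the high level, giving wired-AND semantics that stay safe with multiple masters.

```python
class OpenDrainLine:
    """Bus line with a pull-up: high unless at least one node pulls it low."""
    def __init__(self):
        self._pulled_low_by = set()

    def drive_low(self, who):
        self._pulled_low_by.add(who)

    def release(self, who):
        self._pulled_low_by.discard(who)

    def level(self):
        # Wired-AND: the line is high only if nobody pulls it low.
        return 0 if self._pulled_low_by else 1

sda = OpenDrainLine()
sda.drive_low("master A")
print(sda.level())   # 0: any node can assert the line low
sda.release("master A")
print(sda.level())   # 1: the pull-up, not a driver, restores the high level
```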
Sebastien Bourdeauducq 4340a5cfc1 rtio/dma: fix previous commit 2020-07-12 10:14:22 +08:00
Sebastien Bourdeauducq f2e0d27334 rtio/dma: remove dead/broken code 2020-07-12 10:13:18 +08:00
Sebastien Bourdeauducq 901be75ba4 sayma_rtm: fix Si5324 reset
Closes #1483
2020-07-11 09:51:01 +08:00
Sebastien Bourdeauducq 8719bab726 Revert "i2c: duplicate TCA9548 control byte"
This reverts commit f265976df6.
2020-07-08 19:02:02 +08:00
Sebastien Bourdeauducq f273a9aacc artiq_ddb_template: remove SFP LEDs on hw 2.0+ 2020-07-08 18:15:36 +08:00
Sebastien Bourdeauducq 2d1f1fff7f kasli_generic: do not attempt to use SFP LED for RTIO on 2.0+ 2020-07-08 18:14:44 +08:00
Sebastien Bourdeauducq 85b5a04acf test: print transfer rates in MiB/s 2020-07-07 17:28:47 +08:00
Sebastien Bourdeauducq 13501115f6 test: remove watchdog test (#1458) 2020-07-07 17:28:47 +08:00
Donald Sebastian Leung f265976df6 i2c: duplicate TCA9548 control byte 2020-07-03 16:45:05 +08:00
David Nadlinger 3f0cf6e683 runtime: Stop kernel CPU before restarting comms CPU on panic
Before, the system would enter a boot loop when a panic occurred
while the kernel CPU was active (and panic_reset == 1), as
kernel::start() for the startup kernel would panic.
2020-07-01 17:29:05 +08:00
Sebastien Bourdeauducq 95807234d9 compiler: use binutils for ARM
This is mostly due to Windoze, where installing anything is a PITA and the LLVM tools won't be available soon.
2020-06-28 17:33:03 +08:00
Sebastien Bourdeauducq 906256cc02 manual: remove reference to conda install script 2020-06-26 10:50:11 +08:00
Sebastien Bourdeauducq 5d58a195c0 manual: clarify board package installation 2020-06-26 10:49:31 +08:00
Sebastien Bourdeauducq fb6a8899f4 manual: use conda env creation command 2020-06-26 10:47:55 +08:00
Sebastien Bourdeauducq 89c53c35e8 dashboard: style 2020-06-26 10:12:03 +08:00
David Nadlinger f36692638c dashboard: Add "Quick Open" dialog for experiments on global shortcut
This is similar to functionality in Sublime Text, VS Code, etc.
2020-06-26 10:11:33 +08:00
Sebastien Bourdeauducq 91c93e1ad8 manual: add note about additional packages 2020-06-20 19:32:13 +08:00
Sebastien Bourdeauducq 4ad46e0e40 update Conda installation method 2020-06-20 19:26:31 +08:00
David Nadlinger 966ed5d013 master/scheduler: Fix priority/due date precedence order when waiting to prepare
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.

This probably managed to go undetected for that long as any unrelated
changes to the pipeline (e.g. new submissions, or experiments pausing)
would also cause _get_run() to be re-evaluated.
2020-06-19 23:45:52 +01:00
David Nadlinger 7955b63b00 master: Always write results to HDF5 once run stage is reached
Previously, a significant risk of losing experimental results would
be associated with long-running experiments, as any stray exceptions
while run()ing the experiment – for instance, due to infrequent
network glitches or hardware reliability issues – would cause no
HDF5 file to be written. This was especially troublesome as long
experiments would suffer from a higher probability of unanticipated
failures, while at the same time being more costly to re-take in
terms of wall-clock time.

Unanticipated uncaught exceptions like that were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.

This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
2020-06-18 17:47:26 +01:00
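A minimal sketch of that behaviour (not the actual artiq.master worker code; DemoExperiment, write_results and results.h5 are made-up names, and h5py is assumed for storage): the results file is opened before run() starts, and partial data is written even when run() raises.

```python
import h5py

class DemoExperiment:
    """Stand-in experiment (hypothetical) that fails partway through run()."""
    def __init__(self):
        self.counts = []

    def run(self):
        for i in range(10):
            self.counts.append(i)
            if i == 4:
                raise RuntimeError("hardware glitch")

    def write_results(self, f):
        f.create_dataset("counts", data=self.counts)

def execute_run(experiment, results_path):
    # The HDF5 file exists as soon as the run stage starts; results are
    # flushed in the finally block even when run() raises.
    with h5py.File(results_path, "w") as f:
        try:
            experiment.run()
        finally:
            experiment.write_results(f)

try:
    execute_run(DemoExperiment(), "results.h5")
except RuntimeError:
    pass   # results.h5 still contains counts 0..4
```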
David Nadlinger d87042597a master/worker_impl: Factor out "completed" message sending [nfc]
Just reduces the visual complexity/potential for typos a bit, and
we already have put_exception_report().
2020-06-18 01:30:46 +01:00
charlesbaynham 2429a266f6
ad9912: Fix typing problem on ad9912 (#1466)
Closes #1463

FTW and phase word were ambiguously typed, resulting in failure to compile
2020-06-16 20:17:22 +02:00
charlesbaynham ce7e92a75e
docs: Add docs for RTIO SED sequencing (#1461)
Signed-off-by: Charles Baynham <charles.baynham@npl.co.uk>
2020-06-15 18:43:34 +02:00
Harry Ho 1a17d0c869 zotino: add USER LED test 2020-06-11 16:03:56 +08:00
Harry Ho 6156bd4088 fastino: add tests using DACs and USER LEDs 2020-06-11 14:55:46 +08:00
Sebastien Bourdeauducq a18d2468e9 test: do not build libartiq_support in lit.cfg 2020-06-10 17:15:24 +08:00
Robert Jördens 9822b88d9b
ad9910: fix asf range (#1450)
* ad9910: fix asf range

The ASF is a 14-bit word. The highest possible value is 0x3fff, not
0x3ffe. `int(round(1.0 * 0x3fff)) == 0x3fff`.

I don't remember or understand why this was 0x3ffe from the beginning.
0x3fff was already used as a default in `set_mu()`

Signed-off-by: Robert Jördens <rj@quartiq.de>

* RELEASE_NOTES: ad9910 asf scale change

Co-authored-by: David Nadlinger <code@klickverbot.at>
2020-05-29 11:13:26 +02:00
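Spelling out the arithmetic from the message above in plain Python: the 14-bit amplitude scale factor has full scale 0x3fff, rounding an amplitude of exactly 1.0 against that scale stays in range, and the old 0x3ffe scale stopped one code short of full scale.

```python
full_scale = (1 << 14) - 1           # 14-bit ASF: 0x3fff
print(hex(full_scale))                # 0x3fff
print(int(round(1.0 * full_scale)))   # 16383 == 0x3fff, still representable
print(int(round(1.0 * 0x3ffe)))       # 16382, one code below full scale
```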
Sebastien Bourdeauducq cb76f9da89 metlino: fix CSR collisions
Closes #1425
2020-05-29 15:59:44 +08:00
Sebastien Bourdeauducq bd9eec15c0 metlino: increase number of DRTIO links
Seems OK with Vivado 2019.2.
2020-05-29 15:59:16 +08:00
Sebastien Bourdeauducq d5c1eaa16e runtime: remove stack alignment requirement
I suppose this was for TMPU, but was never finished.
2020-05-29 15:37:23 +08:00
Sebastien Bourdeauducq 02900d79d0 firmware: fix typos 2020-05-29 15:21:07 +08:00
Sebastien Bourdeauducq d8b5bcf019 sayma_amc: support uTCA backplane for DRTIO 2020-05-29 14:58:49 +08:00
Sebastien Bourdeauducq 8b939b7cb3 sayma_amc: remove Master (obsoleted by Metlino) 2020-05-29 14:40:49 +08:00
Charles Baynham 5db2afc7a7 dashboard: Release notes for #1453 2020-05-26 18:16:35 +08:00
Charles Baynham 692c466838 Use logger formatting 2020-05-26 17:59:55 +08:00
Charles Baynham 8858ba8095 dashboard: Restart applets if required
Restart applets that are already running if a ccb call updates their spec

Signed-off-by: Charles Baynham <charles.baynham@npl.co.uk>
2020-05-26 17:59:55 +08:00
Marius Weber 2538840756
Coredevice Input Validation (#1447)
* Input validation and masking of SI -> mu conversions (close #1446)

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* Update RELEASE_NOTES

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

Co-authored-by: Robert Jördens <rj@quartiq.de>
2020-05-17 15:09:11 +02:00
Marius Weber b3b6cb8efe
ad53xx improvements (#1445)
* ad53xx: voltage_to_mu() validation & documentation (closes #1443, #1444)
The voltage input (float) is checked for validity. If we need more
speed, we may want to check the DAC-code for over/underflow instead.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* ad53xx documentation: voltage_to_mu is only valid for 16-bit DACs

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* AD53xx: add voltage_to_mu method (closes #1341)

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* ad53xx: improve voltage_to_mu performance
Integer comparison is faster than floating point math.

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* AD53xx: voltage_to_mu method now uses attribute values

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* Fixup RELEASE_NOTES.rst

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>

* ad53xx: documentation improvements

voltage_to_mu return value
14-bit DAC support

Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
2020-05-08 19:23:43 +02:00
Sebastien Bourdeauducq 4e9a529e5a kasli: integrate WRPLL 2020-05-07 21:34:02 +08:00
Sebastien Bourdeauducq 60e5f1c18e kasli: DRTIO support for Kasli 2 2020-05-07 20:09:43 +08:00
Sebastien Bourdeauducq 1f2182d4c7 kasli: default to hardware v2 2020-05-07 19:15:03 +08:00
Sebastien Bourdeauducq 35f1814235 kasli: implement virtual LEDs 2020-05-07 19:07:43 +08:00
Sebastien Bourdeauducq b83afedf43 kasli: light up ERROR LED on panic 2020-05-07 19:06:10 +08:00
Sebastien Bourdeauducq 4982fde898 firmware: I2C I/O expander support 2020-05-05 21:38:17 +08:00
Sebastien Bourdeauducq ef4e5bc69b firmware: Kasli I2C EEPROM cleanup 2020-05-05 21:29:29 +08:00
Sebastien Bourdeauducq 85e92ae28c compiler: use more LLVM tools on ARM (#733) 2020-04-28 16:21:50 +08:00
Sebastien Bourdeauducq 7e400a78f4 kasli: compile tester for hw 2.0 by default 2020-04-28 16:07:56 +08:00
Sebastien Bourdeauducq 140a26ad7e compiler: ld -> ld.lld 2020-04-28 16:07:26 +08:00
Sebastien Bourdeauducq 4228e0205c compiler: link with lld on ARM (#733) 2020-04-28 15:00:24 +08:00
Sebastien Bourdeauducq 3a7819704a rtio: support direct 64-bit now CSR in KernelInitiator 2020-04-26 16:04:32 +08:00
Sebastien Bourdeauducq 251a0101a6 compiler: support disabling now-pinning 2020-04-26 12:38:43 +08:00
Sebastien Bourdeauducq d19f28fa84 kasli: v2 clocking WIP, remove SFP LEDs from RTIO 2020-04-23 23:02:18 +08:00
Sebastien Bourdeauducq 9bc43b2dbf kasli: support EEPROM on v2 2020-04-23 23:00:36 +08:00
Sebastien Bourdeauducq 77e6fdb7a7 artiq_flash: cleanup Sayma RTM management, support flashing AMC with RTM disconnected 2020-04-14 18:22:06 +08:00
Robert Jördens ea79ba4622 ttl_serdes: detect edges on short pulses
Edges on pulses shorter than the RTIO period were missed because the
reference sample and the last sample of the serdes word are the same.

This change enables detection of edges on pulses as short as the
serdes UI (and shorter as long as the pulse still hits a serdes sample
aperture).

In any RTIO period, only the leading event corresponding to the first
edge whose slope matches the sensitivity setting is registered. If the channel is
sensitive to both rising and falling edges and if the pulse is contained
within an RTIO period, or if it is sensitive only to one edge slope and
there are multiple pulses in an RTIO period, only the leading event is
seen. Thus this possibility of lost events is still there. Only the
conditions under which loss occurs are reduced.

In testing with the kasli-ptb6 variant, this also improves resource
usage (a couple hundred LUT) and timing (0.1 ns WNS).
2020-04-13 13:21:03 +02:00
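A simplified software model of the idea (illustrative Python only, not the Migen gateware; the function name and word layout are made up): an edge is detected wherever two consecutive samples differ, with the last sample of the previous word as the reference, so even a pulse one serdes sample wide produces an event; as noted above, only the leading edge per RTIO period is reported.

```python
def leading_rising_edge(prev_last_sample, word):
    # word: serdes samples for one RTIO period, oldest first.
    samples = [prev_last_sample] + list(word)
    for i in range(1, len(samples)):
        if samples[i - 1] == 0 and samples[i] == 1:
            return i - 1       # sample position of the first rising edge
    return None                # no rising edge in this period

print(leading_rising_edge(0, [0, 0, 1, 0, 0, 0, 0, 0]))  # 2: one-sample pulse seen
print(leading_rising_edge(1, [1, 1, 1, 1, 1, 1, 1, 1]))  # None: no edge
```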
Sebastien Bourdeauducq e8b73876ab comm_kernel: add Zynq runtime identifier 2020-04-12 17:25:14 +08:00
Sebastien Bourdeauducq de57039e6e comm_kernel: cleanup 2020-04-12 16:02:36 +08:00
Sebastien Bourdeauducq 9dc24f255e comm_kernel: remove dead code 2020-04-12 15:06:46 +08:00
Sebastien Bourdeauducq fb0ade77a9 firmware: fix non-DRTIO build 2020-04-10 17:23:17 +08:00
Sebastien Bourdeauducq ec7b2bea12 sayma: round FTW like Urukul in JDCGSyncDDS 2020-04-08 15:00:33 +08:00
Sebastien Bourdeauducq 0f4be22274 sayma: add simple sychronized DDS for testing 2020-04-08 14:13:54 +08:00
Sebastien Bourdeauducq 3c823a483a sayma: improve DAC sync messaging (again) 2020-04-06 22:36:43 +08:00
Sebastien Bourdeauducq 4d601c2102 sayma: improve DAC sync messaging 2020-04-06 22:36:03 +08:00
Sebastien Bourdeauducq 61d4614b61 sayma: fix/cleanup DRTIO-DAC sync interaction 2020-04-06 22:34:05 +08:00
Sebastien Bourdeauducq facc0357d8 drtio: make sure receive buffer is drained after ping reply 2020-04-06 22:33:15 +08:00
Sebastien Bourdeauducq ffd3172e02 sayma: move SYSREF DDMTD to RTM (#795) 2020-04-06 00:01:28 +08:00
Sebastien Bourdeauducq 8f608fa2fa examples/sines_urukul_sayma: adapt for sayma v2, use 1 DAC only 2020-04-05 16:51:40 +08:00
Etienne Wodey 90d08988b2 language/environment: BooleanValue: fix type detection
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-04-04 15:37:04 +08:00
Etienne Wodey 9b03a365ed language/environment: cast argument processor default values early
Fixes #1434. Also add unit tests for some argument processors.

Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-04-04 15:37:04 +08:00
Sebastien Bourdeauducq 371d923385 manual: nixpkgs 20.03 2020-04-04 13:05:46 +08:00
Sebastien Bourdeauducq 9294efabc0 manual: Kasli can get MAC address from EEPROM 2020-03-14 12:19:43 +08:00
Sebastien Bourdeauducq 4a8d361ace soc: optimize programmable identifier 2020-03-12 23:09:13 +08:00
Sebastien Bourdeauducq 9e66dd7075 soc: reprogrammable identifier 2020-03-12 22:23:08 +08:00
Robert Jördens 380de177e7 rtio: fix wide output after RTIO refactoring
fixes 3d0c3cc1cf
2020-03-05 17:55:27 +00:00
Robert Jördens e803830b3b fastino: support wide RTIO interface and channel groups 2020-03-05 17:55:04 +00:00
Sebastien Bourdeauducq 8dbf30b23e manual: mention integrated Kasli JTAG 2020-03-02 18:42:01 +08:00
Sebastien Bourdeauducq 8451e58fbe ad9912: fix ftw width docstring 2020-02-27 02:11:12 +08:00
Paweł K 2a909839ff
artiq_flash: added option of specifying another username when connecting through SSH. (#1429)
Signed-off-by: Paweł Kulik <pawel.kulik@creotech.pl>
2020-02-19 19:44:11 +08:00
Sebastien Bourdeauducq 6d26def3ce sayma: drive filtered_clk_sel on master variant 2020-02-06 22:28:49 +08:00
Sebastien Bourdeauducq 52ec849008 sayma: fix sysref_delay_dac 2020-02-05 19:04:01 +08:00
Sebastien Bourdeauducq c7de1f2e6b metlino: drive clock muxes 2020-02-05 00:06:34 +08:00
Sebastien Bourdeauducq bf9f4e380a si5324: program I2C mux on Metlino 2020-02-03 18:07:59 +08:00
Sebastien Bourdeauducq ffb24e9fff artiq_flash: use correct proxy bitstream for Metlino 2020-02-03 18:07:26 +08:00
Sebastien Bourdeauducq 5f8e20b1a1 artiq_sinara_tester: fix device_db filename 2020-01-31 10:26:58 +08:00
Sebastien Bourdeauducq dfa033eb87 wrpll: new collector from Weida/Tom 2020-01-24 10:31:52 +08:00
Sebastien Bourdeauducq dee16edb78 wrpll: DDMTD sampler double latching 2020-01-22 19:16:26 +08:00
Sebastien Bourdeauducq f4d8f77268 turn kasli_tester into a frontend tool 2020-01-21 16:13:04 +08:00
Sebastien Bourdeauducq bfcbffcd8d update smoltcp
This disables the 'log' feature, which does not compile, and may break net_trace. To be investigated later.
2020-01-21 13:58:23 +08:00
Sebastien Bourdeauducq 82cdb7f933 typo 2020-01-21 10:07:13 +08:00
Robert Jördens 248230a89e fastino: style 2020-01-20 13:25:00 +01:00
Robert Jördens c45a872cba fastino: fix init, set_cfg 2020-01-20 13:25:00 +01:00
Robert Jördens 2c4e5bfee4 fastino: add [WIP] 2020-01-20 13:25:00 +01:00
Sebastien Bourdeauducq 8f9948a1ff kasli_sawgmaster: add basemod programming example 2020-01-20 20:14:24 +08:00
Sebastien Bourdeauducq e427aaaa66 basemod_att: fix imports 2020-01-20 20:14:24 +08:00
Sebastien Bourdeauducq 62a52cb086 sayma: do not pollute the log with DAC status on success 2020-01-20 20:14:24 +08:00
Sebastien Bourdeauducq 6b428ef3be sayma: initialize DAC before testing jesd::ready 2020-01-20 20:14:24 +08:00
Robert Jördens 7ab0282234 adf5355: style 2020-01-20 13:13:08 +01:00
Robert Jördens 9368c26d1c mirny: add to manual 2020-01-20 13:13:08 +01:00
Etienne Wodey da531404e8 artiq_ddb_template: add Mirny support
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
2020-01-20 13:13:08 +01:00
Robert Jördens 01a6e77d89 mirny: add
* This targets unreleased CPLD gateware (https://github.com/quartiq/mirny/issues/1)
* includes initial coredevice driver, eem shims, and kasli_generic tooling
* addresses the ARTIQ side of #1130
* Register abstraction to be written

Signed-off-by: Robert Jördens <rj@quartiq.de>
2020-01-20 13:13:08 +01:00
Sebastien Bourdeauducq ec03767dcf sayma: improve DAC status report 2020-01-20 18:22:06 +08:00
Sebastien Bourdeauducq 5c299de3b4 sayma: print DAC status on JESD not ready error 2020-01-20 18:21:29 +08:00
Sebastien Bourdeauducq 45efee724e sayma: add JESD204 PHY done diagnostics 2020-01-20 12:47:31 +08:00
Sebastien Bourdeauducq 6c3e71a83a wrpll: cleanup 2020-01-18 09:43:43 +08:00
Sebastien Bourdeauducq 344f8bd12a wrpll: collector patch from Weida 2020-01-18 09:42:58 +08:00
Sebastien Bourdeauducq 833f428391 sayma: fix hmc542 to/from mu 2020-01-16 09:10:32 +08:00
Sebastien Bourdeauducq 6c948c7726 sayma: RF switch control is active-low on Basemod, invert 2020-01-16 08:59:52 +08:00
Sebastien Bourdeauducq 50302d57c0 wrpll: more careful I2C timing 2020-01-14 20:03:46 +08:00
Sebastien Bourdeauducq 105dd60c78 wrpll: ADPLLProgrammer mini test bench and fixes 2020-01-14 16:52:25 +08:00
Sebastien Bourdeauducq 3242e9ec6c wrpll: loop test 2020-01-13 22:31:57 +08:00
Sebastien Bourdeauducq 8ec0f2e717 wrpll: implement ADPLLProgrammer 2020-01-13 22:30:11 +08:00
Sebastien Bourdeauducq d5895b8999 wrpll: adpll -> set_adpll 2020-01-13 20:46:36 +08:00
Sebastien Bourdeauducq e7ef23d30c wrpll: use CONFIG_CLOCK_FREQUENCY and rtio_frequency in trim_dcxos 2020-01-13 20:44:15 +08:00
Sebastien Bourdeauducq ea3bce6fe3 wrpll: wait for settling time after setting ADPLL 2020-01-13 20:43:34 +08:00
Sebastien Bourdeauducq d685619bcd wrpll: collector code modifications from Weida 2020-01-13 20:42:41 +08:00
Sebastien Bourdeauducq 9d7196bdb7 update copyright year 2020-01-13 19:33:44 +08:00
Sebastien Bourdeauducq e87d864063 wrpll: print ADPLL offsets 2020-01-13 19:32:30 +08:00
Sebastien Bourdeauducq 8edbc33d0e wrpll: calculate initial ADPLL offsets 2020-01-13 19:29:10 +08:00
Sebastien Bourdeauducq 9dd011f4ad firmware: remove bitrotten Sayma code 2020-01-13 18:47:54 +08:00
Sebastien Bourdeauducq 583a18dd5f firmware: expose fmod to kernels. Closes #1417 2020-01-10 14:33:02 +08:00
David Nadlinger d8c81d6d05 compiler: Other types microoptimisations
Interestingly enough, these actually seem to give a measurable
speedup (if small – about 1% improvement out of 6s whole-program
compile-time in one particular test case).

The previous implementation of is_mono() also had interesting
behaviour if `name` wasn't given; it would test only for the
presence of any keys specified via keyword arguments,
disregarding their values. Looking at uses across the current
ARTIQ codebase, I could neither find a case where this would
have actually been triggered, nor any rationale for it.

With the short-circuited implementation from this commit,
is_mono() now checks name/all of params against any specified
conditions.
2020-01-01 08:49:19 +00:00
David Nadlinger 2c34f0214b compiler: Short-circuit Type.unify() with identical other type
This considerably improves performance; ~15% in terms of total
artiq_run-to-kernel-compiled duration in one test case.
2020-01-01 08:49:19 +00:00
Robert Jördens eebae01503 artiq_client: add back quiet-verbose args for submission
close #1416
regression introduced in 3fd6962
2019-12-31 13:00:26 +01:00
Sebastien Bourdeauducq 3f32d78c0e wrpll: simple ADPLL test 2019-12-31 12:12:29 +08:00
Sebastien Bourdeauducq bb04b082a7 wrpll: clarify comment 2019-12-31 12:12:29 +08:00
David Nadlinger 1e864b7e2d coredevice/suservo: Add separate methods for setting only the IIR offset 2019-12-30 20:02:22 +00:00
Sebastien Bourdeauducq a666766f38 wrpll: add ADPLL offset registers 2019-12-30 22:19:42 +08:00
Sebastien Bourdeauducq 5c6e394928 ddmtd: add collector 2019-12-30 22:17:44 +08:00
Sebastien Bourdeauducq 642a305c6a wrpll: remove unnecessary delay
Counting now happens in the sys domain with no CDC between counter and CPU.
2019-12-30 20:01:06 +08:00
Sebastien Bourdeauducq f57f235dca wrpll: new frequency meter
As per Mattermost discussion with Tom.
2019-12-30 19:47:57 +08:00
Sebastien Bourdeauducq 9e15ff7e6a wrpll: improve DDMTD deglitcher 2019-12-30 16:56:06 +08:00
Sebastien Bourdeauducq dfad27125e runtime: relax/fix TCP keepalive settings (#1125) 2019-12-23 19:58:10 +08:00
Sebastien Bourdeauducq b5e1bd3fa2 coredevice: simplify/cleanup network connection code
This removes:
* host-side keepalive, which turns out not to be required
* custom connection timeout (the default is OK)
* SSH tunneling support (doesn't seem to be actually used anywhere)
2019-12-23 19:53:49 +08:00
David Nadlinger e8b9fcf0bb
doc/manual/developing: Clarify Nix PYTHONPATH usage
PYTHONPATH should still contain all the other directories
(obvious once you've made that mistake once, of course).
2019-12-23 00:50:03 +00:00
David Nadlinger af31c6ea21 coredevice: Don't use `is` to compare with integer literal
This works on CPython, but is not guaranteed to do so, and
produces a warning since 3.8 (see https://bugs.python.org/issue34850).
2019-12-22 05:46:41 +00:00
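A quick standalone illustration of the pitfall (plain Python, unrelated to the ARTIQ driver code): identity and equality can diverge for integers, and the outcome of `is` depends on CPython implementation details such as the small-integer cache.

```python
a, b = 256, 256
print(a is b)          # True on CPython (small-integer cache), an implementation detail

x = 1000
y = int("1000")        # same value, constructed at runtime
print(x == y)          # True
print(x is y)          # typically False; CPython >= 3.8 also warns on `is` with a literal
```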
Sebastien Bourdeauducq fb2076a026 basemod_att: add dB functions, document 2019-12-21 14:56:41 +08:00
Sebastien Bourdeauducq b2480f0edc artiq_flash: update actions documentation 2019-12-21 14:18:28 +08:00
Sebastien Bourdeauducq d4e039cede basemod: add coredevice driver 2019-12-21 14:18:10 +08:00
Sebastien Bourdeauducq 106d25b32a kasli_sawgmaster: fix drtio_is_up 2019-12-21 14:17:52 +08:00
Sebastien Bourdeauducq 8759c8d360 shiftreg: fix get method 2019-12-21 14:17:22 +08:00
Sebastien Bourdeauducq c3030f4ffb kasli_sawgmaster: update device_db for BaseMod 2019-12-20 19:59:15 +08:00
Sebastien Bourdeauducq cab8c8249e coredevice/shiftreg: add get method 2019-12-20 18:58:50 +08:00
Sebastien Bourdeauducq b7f1623197 sayma_rtm: connect attenuator shift registers in series 2019-12-20 18:58:31 +08:00
Sebastien Bourdeauducq c5137eeb62 firmware: remove legacy hmc542 code 2019-12-20 15:25:55 +08:00
Sebastien Bourdeauducq 1c9cbe6285 sayma_rtm: add basemod attenuators on RTIO 2019-12-20 15:25:55 +08:00
David Nadlinger 8f518c6b05 compiler: Allow `None` in type hints
Similar to how Python itself interprets None as type(None),
make it translate to TNone in ARTIQ compiler type hints.
2019-12-19 09:36:45 +08:00
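For reference, the analogous behaviour in standard Python typing, which this commit mirrors for ARTIQ's TNone (plain Python, not the ARTIQ compiler):

```python
from typing import get_type_hints

def f() -> None:
    pass

# Python itself normalises a bare None annotation to type(None).
print(get_type_hints(f)["return"])   # <class 'NoneType'>
print(type(None))                    # <class 'NoneType'>
```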
David Nadlinger 594ff45750 compiler: Revert support for `None` as `TNone`
This was mistakenly included in fb2b634c4a, and broke the test
case verifying that using None as an ARTIQ type annotation in fact
generates an error message.
2019-12-18 13:23:40 +00:00
David Nadlinger fb2b634c4a compiler, language: Implement @kernel_from_string
With support for polymorphism (or type erasure on pointers to
member functions) being absent in the ARTIQ compiler, code
generation is vital to be able to implement abstractions that
work with user-provided lists/trees of objects with uniform
interfaces (e.g. a common base class, or duck typing), but
different concrete types.

@kernel_from_string has been in production use for exactly
this use case in Oxford for the better part of a year now
(various places in ndscan).

GitHub: Fixes #1089.
2019-12-18 10:51:04 +08:00
Sebastien Bourdeauducq 6ee15fbcae sayma_rtm: basemod RF switches 2019-12-18 10:33:29 +08:00
David Nadlinger d3508b014f firmware: Add whitespace between panic handler location and message 2019-12-17 19:59:59 +00:00
David Nadlinger 0279a60a55 examples: Add README
This will be displayed by GitHub below the directory listing, and was
inspired by observing new users disregard the examples/ tree entirely
(even though the experiments and device DBs within would have cleared
up their getting-started confusion) due to the perceived complexity
wall induced by the wealth of subdirectories.
2019-12-17 13:35:19 +00:00
Sebastien Bourdeauducq 5fefdcc324 manual: clarify XY applet setup example 2019-12-15 10:41:58 +08:00
Sebastien Bourdeauducq 8d13aeb96c test: run test_help for browser and dashboard 2019-12-12 10:34:58 +08:00
Sebastien Bourdeauducq ac09f3a5da artiq_browser: fix command line argument handling. Closes #1404 2019-12-11 16:18:56 +08:00
Sebastien Bourdeauducq 52112d54f9 kasli_generic: expose peripheral_processors dictionary. Closes #1403 2019-12-10 10:30:06 +08:00
Sebastien Bourdeauducq 6f52540569 wrpll: fix previous commit 2019-12-09 20:13:55 +08:00
Sebastien Bourdeauducq 13486f3acf wrpll: swap helper/main si549 frequencies 2019-12-09 19:49:34 +08:00
Sebastien Bourdeauducq 150a02117c sayma_rtm: drive clk_src_ext_sel 2019-12-09 19:47:50 +08:00
Sebastien Bourdeauducq 307a6ca140 gth_ultrascale: make OBUFDS_GTE3 work
https://www.xilinx.com/support/answers/67919.html
2019-12-09 18:13:22 +08:00
Sebastien Bourdeauducq f7a5df8d81 wrpll: add diagram from Weida 2019-12-09 17:41:44 +08:00
Sebastien Bourdeauducq 4919fb8765 wrpll: print DDMTD helper tags 2019-12-09 17:39:22 +08:00
Sebastien Bourdeauducq 0d4eccc1a5 wrpll: improve debug output 2019-12-09 17:23:09 +08:00
Sebastien Bourdeauducq f633c62e8d wrpll: speed up si549 i2c access 2019-12-09 17:22:58 +08:00
Sebastien Bourdeauducq 14e09582b6 wrpll: work around si549 not working when lsdiv=2 2019-12-09 16:20:08 +08:00
Sebastien Bourdeauducq 439576f59d wrpll: fix Si549 initialization delays 2019-12-09 16:13:57 +08:00
Sebastien Bourdeauducq 2b5213b013 wrpll: constrain clocks 2019-12-09 12:26:44 +08:00
Sebastien Bourdeauducq 05e2e1899a wrpll: update OBUFDS_GTE2 comment
Seems O can fan out simultaneously to transceiver and fabric.
Kasli is using ODIV2 for no particular reason.
2019-12-09 11:58:54 +08:00
Sebastien Bourdeauducq 4148efd2ee wrpll: implement filters and connect to Si549 2019-12-09 11:47:29 +08:00
Sebastien Bourdeauducq d43fe644f0 wrpll: stabilize DDMTDSamplerGTP 2019-12-09 11:47:14 +08:00
Sebastien Bourdeauducq 0499f83580 wrpll: helper clock sanity check 2019-12-08 23:46:33 +08:00
Sebastien Bourdeauducq 46a776d06e sayma: introduce WRPLL on RTM 2019-12-08 15:30:00 +08:00
Sebastien Bourdeauducq f35f658bc5 artiq_flash: rework RTM management 2019-12-08 15:29:31 +08:00
Sebastien Bourdeauducq bcd061f141 artiq_flash: RTM is a regular DRTIO satellite, can be used with all variants 2019-12-08 15:12:04 +08:00
Sebastien Bourdeauducq 883310d83e sayma_rtm: si5324 -> cdrclkc 2019-12-08 14:26:05 +08:00
Sebastien Bourdeauducq 57a5bea43a sayma_rtm: support setting RTIO frequency 2019-12-08 11:45:31 +08:00
Sebastien Bourdeauducq da9237de53 wrpll: support differential DDMTD inputs 2019-12-07 18:18:57 +08:00
Paweł Kulik 3851a02a3a Added option of flashing only RTM gateware.
Signed-off-by: Paweł Kulik <pawel.kulik@creotech.pl>
2019-12-07 09:31:19 +08:00
Paweł Kulik 14e250c78f Enabled internal pullup for CML SYSREF outputs, otherwise there is no signal on them.
Signed-off-by: Paweł Kulik <pawel.kulik@creotech.pl>
2019-12-07 09:30:24 +08:00
Sebastien Bourdeauducq 7098854b0f wrpll: share DDMTD counter 2019-12-04 19:05:56 +08:00
Robert Jördens 05c5fed07d suservo: stray comma 2019-12-03 08:38:07 +00:00
Robert Jördens 56074cfffa suservo: support operating with one urukul
implemented by wiring up the second Urukul to dummy pins

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-12-02 11:30:20 +01:00
Robert Jördens 86e1924493 kasli_generic: support external reference on masters 2019-11-30 07:34:41 +00:00
Sebastien Bourdeauducq eb271f383b wrpll: add DDMTD cores 2019-11-28 22:03:50 +08:00
Sebastien Bourdeauducq 39d5ca11f4 si549: increase I2C frequency 2019-11-28 22:03:26 +08:00
Sebastien Bourdeauducq 87894102e5 si549: use recommended i2c read sequence 2019-11-28 17:49:02 +08:00
Sebastien Bourdeauducq 2e55e39ac7 wrpll: use spaces to indent 2019-11-28 17:40:25 +08:00
Sebastien Bourdeauducq 354d82cfe3 wrpll: drive helper clock domain 2019-11-28 17:40:00 +08:00
Sebastien Bourdeauducq 4a03ca928d artiq_flash: sayma fixes 2019-11-28 17:38:29 +08:00
Sebastien Bourdeauducq 68cab5be8c si549: cleanups 2019-11-28 16:36:59 +08:00
Sebastien Bourdeauducq bcd2383c9d wrpll: si549 initialization 2019-11-27 22:58:08 +08:00
Sebastien Bourdeauducq 4832bfb08c wrpll: i2c functions, select_recovered_clock placeholder 2019-11-27 21:21:00 +08:00
Sebastien Bourdeauducq 449d2c4f08 libboard_misoc: fix !has_i2c 2019-11-27 21:04:28 +08:00
Robert Jördens e0687b77f5 si5324: 10 MHz ext_ref_frequency
* close #1254
* tested on innsbruck2 kasli variant
* sponsored by Uni Innsbruck/AQT

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-11-22 18:29:12 +01:00
Sebastien Bourdeauducq c536f6c4df sayma_amc: output ddmtd_rec_clk 2019-11-20 19:16:04 +08:00
Sebastien Bourdeauducq ae50da09c4 drtio/gth_ultrascale: support OBUFDS_GTE3 2019-11-20 19:15:50 +08:00
Sebastien Bourdeauducq fe0c324b38 sayma: integrate si549 core 2019-11-20 17:37:16 +08:00
Sebastien Bourdeauducq fa41c946ea wrpll: si549 fixes 2019-11-20 17:04:24 +08:00
Sebastien Bourdeauducq c5dbab1929 gateware: move wrpll to drtio 2019-11-20 14:43:08 +08:00
gthickman 56d4b70e01 ad9910 osk (#1387)
* updated ad9910.py for RAM mode frequency control

* updated docstrings for set_cfr1() in ad9910.py

* fixed typo in ad9910.py

* added docstrings to ad9910.py

* removed OSK-related changes in AD9910, to be included in a separate branch.

* updated AD9910 set_cfr1 for control of OSK mode parameters

* updated AD9910 set_cfr1() for control of OSK mode parameters.
2019-11-18 15:57:26 +01:00
Fabian Schmid f73e2a3d30 doc: clarify urukul attenuator behavior
Closes #1386

Signed-off-by: Fabian Schmid <fabian.schmid@mpq.mpg.de>
2019-11-18 15:56:00 +01:00
Sebastien Bourdeauducq f330e285fb DEVELOPER_NOTES: update board lockfile info 2019-11-18 15:26:51 +08:00
Sebastien Bourdeauducq 9f459f37dc doc: NDSP URLs, mention contributions can be added to list
Closes #1389
2019-11-18 13:03:44 +08:00
Sebastien Bourdeauducq ac8a5b60c0 doc: add sipyco to mock modules 2019-11-18 08:37:44 +08:00
Sébastien Bourdeauducq b7292f0195
RELEASE_NOTES: fix rst formatting further 2019-11-15 15:59:24 +08:00
Sébastien Bourdeauducq cf6a7f243f
RELEASE_NOTES: fix rst formatting 2019-11-15 15:56:17 +08:00
Sebastien Bourdeauducq 57979da59a RELEASE_NOTES: add ARTIQ-6 section 2019-11-15 13:49:54 +08:00
Sebastien Bourdeauducq 3adc799785 update GUI background 2019-11-15 13:49:09 +08:00
Sebastien Bourdeauducq eefc6ae80d update release notes 2019-11-15 13:07:16 +08:00
Sebastien Bourdeauducq 310a627e16 doc: mention toptica-lasersdk-artiq conda package 2019-11-14 23:53:13 +08:00
Sebastien Bourdeauducq 9b6b683d87 update MAJOR_VERSION 2019-11-14 17:16:50 +08:00
Sebastien Bourdeauducq 6bb931a17b manual: point to beta Nix channel 2019-11-14 17:04:46 +08:00
Sebastien Bourdeauducq 29d84f5655 add beta version marker 2019-11-14 16:56:03 +08:00
465 changed files with 37741 additions and 22449 deletions

.gitattributes

@@ -1 +0,0 @@
-artiq/_version.py export-subst


@@ -38,7 +38,7 @@ Closes #XXX
 ### All Pull Requests
 - [x] Use correct spelling and grammar.
-- [ ] Update [RELEASE_NOTES.md](../RELEASE_NOTES.md) if there are noteworthy changes, especially if there are changes to existing APIs.
+- [ ] Update [RELEASE_NOTES.rst](../RELEASE_NOTES.rst) if there are noteworthy changes, especially if there are changes to existing APIs.
 - [ ] Close/update issues.
 - [ ] Check the copyright situation of your changes and sign off your patches (`git commit --signoff`, see [copyright](../CONTRIBUTING.rst#copyright-and-sign-off)).

.gitignore

@@ -29,6 +29,7 @@ __pycache__/
 /repository/
 /results
 /last_rid.pyon
-/dataset_db.pyon
+/dataset_db.mdb
+/dataset_db.mdb-lock
 /device_db*.py
 /test*


@@ -7,8 +7,7 @@ Reporting Issues/Bugs
 Thanks for `reporting issues to ARTIQ
 <https://github.com/m-labs/artiq/issues/new>`_! You can also discuss issues and
-ask questions on IRC (the `#m-labs channel on freenode
-<https://webchat.freenode.net/?channels=m-labs>`_), the `Mattermost chat
+ask questions on IRC (the #m-labs channel on OFTC), the `Mattermost chat
 <https://chat.m-labs.hk>`_, or on the `forum <https://forum.m-labs.hk>`_.
 
 The best bug reports are those which contain sufficient information. With
@@ -27,7 +26,6 @@ report if possible:
 * Operating System
 * ARTIQ version (with recent versions of ARTIQ, run ``artiq_client --version``)
 * Version of the gateware and runtime loaded in the core device (in the output of ``artiq_coremgmt -D .... log``)
-* If using Conda, output of `conda list`
 * Hardware involved


@@ -1,39 +1,17 @@
 Sharing development boards
 ==========================
 
-To avoid conflicts for development boards on the server, while using a board you must hold the corresponding lock file present in ``/var/lib/artiq/boards``. Holding the lock file grants you exclusive access to the board.
+To avoid conflicts for development boards on the server, while using a board you must hold the corresponding lock file present in the ``/tmp`` folder of the machine to which the board is connected. Holding the lock file grants you exclusive access to the board.
 
-To lock the KC705 for 30 minutes or until Ctrl-C is pressed:
+For example, to lock the KC705 until ENTER is pressed:
 ::
-  flock --verbose /var/lib/artiq/boards/kc705-1 sleep 1800
-
-Check that the command acquires the lock, i.e. prints something such as:
-::
-  flock: getting lock took 0.000003 seconds
-  flock: executing sleep
-
-To lock the KC705 for the duration of the execution of a shell:
-::
-  flock /var/lib/artiq/boards/kc705-1 bash
-
-You may also use this script:
-::
-  #!/bin/bash
-  exec flock /var/lib/artiq/boards/$1 bash --rcfile <(cat ~/.bashrc; echo PS1=\"[$1\ lock]\ \$PS1\")
+  ssh rpi-1.m-labs.hk "flock /tmp/board_lock-kc705-1 -c 'echo locked; read; echo unlocked'"
 
 If the board is already locked by another user, the ``flock`` commands above will wait for the lock to be released.
 
-To determine which user is locking a board, use:
+To determine which user is locking a board, use a command such as:
 ::
-  fuser -v /var/lib/artiq/boards/kc705-1
-
-Selecting a development board with artiq_flash
-==============================================
-
-The board lock file also contains the openocd commands for selecting the corresponding developer board:
-::
-  artiq_flash -I "$(cat /var/lib/artiq/boards/sayma-1)"
+  ssh rpi-1.m-labs.hk "fuser -v /tmp/board_lock-kc705-1"
 
 Deleting git branches


@@ -1 +0,0 @@
-5


@@ -3,3 +3,6 @@ graft artiq/examples
 include artiq/gui/logo*.svg
 include versioneer.py
 include artiq/_version.py
+include artiq/coredevice/coredevice_generic.schema.json
+include artiq/compiler/kernel.ld
+include artiq/afws.pem


@ -4,23 +4,23 @@
.. image:: https://raw.githubusercontent.com/m-labs/artiq/master/doc/logo/artiq.png .. image:: https://raw.githubusercontent.com/m-labs/artiq/master/doc/logo/artiq.png
:target: https://m-labs.hk/artiq :target: https://m-labs.hk/artiq
ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is the next-generation control system for quantum information experiments. ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is a leading-edge control and data acquisition system for quantum information experiments.
It is maintained and developed by `M-Labs <https://m-labs.hk>`_ and the initial development was for and in partnership with the `Ion Storage Group at NIST <https://www.nist.gov/pml/time-and-frequency-division/ion-storage>`_. ARTIQ is free software and offered to the entire research community as a solution equally applicable to other challenging control tasks, including outside the field of ion trapping. Many laboratories around the world have adopted ARTIQ as their control system, with over a hundred Sinara hardware crates deployed, and some have `contributed <https://m-labs.hk/experiment-control/funding/>`_ to it.
The system features a high-level programming language that helps describe complex experiments, which is compiled and executed on dedicated hardware with nanosecond timing resolution and sub-microsecond latency. It includes graphical user interfaces to parametrize and schedule experiments and to visualize and explore the results.
ARTIQ uses FPGA hardware to perform its time-critical tasks. The `Sinara hardware <https://github.com/sinara-hw>`_, and in particular the Kasli FPGA carrier, is designed to work with ARTIQ.
ARTIQ is designed to be portable to hardware platforms from different vendors and FPGA manufacturers.
Several different configurations of an `FPGA evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ and of a `Zynq evaluation kit <https://www.xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html>`_ are also used and supported. FPGA platforms can be combined with any number of additional peripherals, either already accessible from ARTIQ or made accessible with little effort.
ARTIQ and its dependencies are available in the form of Nix packages (for Linux) and MSYS2 packages (for Windows). See `the manual <https://m-labs.hk/experiment-control/resources/>`_ for installation instructions.
Packages containing pre-compiled binary images to be loaded onto the hardware platforms are supplied for each configuration.
Like any open source software, ARTIQ can equally be built and installed directly from `source <https://github.com/m-labs/artiq>`_.
ARTIQ is supported by M-Labs and developed openly.
Components, features, fixes, improvements, and extensions are often `funded <https://m-labs.hk/experiment-control/funding/>`_ by and developed for the partnering research groups.
Core technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `Migen-AXI <https://github.com/peteut/migen-axi>`_, `Rust <https://www.rust-lang.org/>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`VexRiscv <https://github.com/SpinalHDL/VexRiscv>`_, `LLVM <https://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt5 <https://www.qt.io/>`_.
Website: https://m-labs.hk/artiq
@ -29,7 +29,7 @@ Website: https://m-labs.hk/artiq
License
=======
Copyright (C) 2014-2023 M-Labs Limited.
ARTIQ is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
@ -48,9 +48,10 @@ The ARTIQ manifesto
===================
The free and open dissemination of methods and results is central to scientific progress.
The ARTIQ and Sinara authors, contributors, and supporters consider the free and open exchange of scientific tools to be equally important and have chosen the licensing terms of ARTIQ and Sinara accordingly.
ARTIQ, including its gateware, the firmware, and the ARTIQ tools and libraries, is licensed as LGPLv3+. The Sinara hardware designs are licensed under CERN OHL.
This ensures that a user of ARTIQ or Sinara hardware designs obtains broad rights to use, redistribute, study, and modify them.
The following statements are intended to clarify the interpretation and application of the licensing terms:
* There is no requirement to distribute any unmodified, modified, or extended versions of ARTIQ. The source only needs to be made available when ARTIQ itself is distributed.

@ -3,28 +3,300 @@
Release notes
=============
ARTIQ-8 (Unreleased)
--------------------
Highlights:
* New hardware support:
- Support for Shuttler, a 16-channel 125MSPS DAC card intended for ion transport.
Waveform generator and user API are similar to the NIST PDQ.
- Implemented Phaser-servo. This requires recent gateware on Phaser.
- Almazny v1.2 with finer RF switch control.
- Metlino and Sayma support has been dropped due to complications with synchronous RTIO clocking.
- More user LEDs are exposed to RTIO on Kasli.
- Implemented Phaser-MIQRO support. This requires the proprietary Phaser MIQRO gateware
variant from QUARTIQ.
- Sampler: fixed ADC MU to Volt conversion factor for Sampler v2.2+.
For earlier hardware versions, specify the hardware version in the device
database file (e.g. ``"hw_rev": "v2.1"``) to use the correct conversion factor. A device database sketch follows after this list.
* Support for distributed DMA, where DMA is run directly on satellites for corresponding
RTIO events, increasing bandwidth in scenarios with heavy satellite usage.
* Support for subkernels, where kernels are run on satellite device CPUs to offload some
of the processing and RTIO operations.
* CPU (on softcore platforms) and AXI bus (on Zynq) are now clocked synchronously with the RTIO
clock, to facilitate implementation of local processing on DRTIO satellites, and to slightly
reduce RTIO latency.
* Support for DRTIO-over-EEM, used with Shuttler.
* Added channel names to RTIO error messages.
* GUI:
- Implemented Applet Request Interfaces which allow applets to modify datasets and set the
current values of widgets in the dashboard's experiment windows.
- Implemented a new EntryArea widget which allows argument entry widgets to be used in applets.
- The "Close all applets" command (shortcut: Ctrl-Alt-W) now ignores docked applets,
making it a convenient way to clean up after exploratory work without destroying a
carefully arranged default workspace.
- Hotkeys now organize experiment windows in the order they were last interacted with:
+ CTRL+SHIFT+T tiles experiment windows
+ CTRL+SHIFT+C cascades experiment windows
* Persistent datasets are now stored in a LMDB database for improved performance.
* Python's built-in types (such as ``float``, or ``List[...]``) can now be used in type annotations on
kernel functions.
* Full Python 3.10 support.
* MSYS2 packaging for Windows, which replaces Conda. Conda packages are still available to
support legacy installations, but may be removed in a future release.
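As an illustration of the Sampler note above, the hardware revision can be pinned in the device database along these lines. This is only a minimal sketch: the entry name, channel/device names, and the surrounding structure are placeholders, not taken from a generated database; only the ``hw_rev`` argument is the point being shown.
::

    # Hypothetical device_db.py fragment; device names are placeholders.
    device_db["sampler0"] = {
        "type": "local",
        "module": "artiq.coredevice.sampler",
        "class": "Sampler",
        "arguments": {
            "spi_adc_device": "spi_sampler0_adc",
            "spi_pgia_device": "spi_sampler0_pgia",
            "cnv_device": "ttl_sampler0_cnv",
            # pre-v2.2 board: selects the earlier MU-to-Volt conversion factor
            "hw_rev": "v2.1",
        }
    }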
Breaking changes:
* ``SimpleApplet`` now calls widget constructors with an additional ``ctl`` parameter for control
operations, which includes dataset operations. It can be ignored if not needed. For an example usage,
refer to the ``big_number.py`` applet.
* ``SimpleApplet`` and ``TitleApplet`` now call ``data_changed`` with additional parameters. Derived applets
should change the function signature as below (a fuller applet sketch follows after this list):
::
    # SimpleApplet
    def data_changed(self, value, metadata, persist, mods)
    # SimpleApplet (old version)
    def data_changed(self, data, mods)
    # TitleApplet
    def data_changed(self, value, metadata, persist, mods, title)
    # TitleApplet (old version)
    def data_changed(self, data, mods, title)
Accesses to the data argument should be replaced as below:
::
    data[key][0] ==> persist[key]
    data[key][1] ==> value[key]
* The ``ndecimals`` parameter in ``NumberValue`` and ``Scannable`` has been renamed to ``precision``.
Parameters after and including ``scale`` in both constructors are now keyword-only.
Refer to the updated ``no_hardware/arguments_demo.py`` example for current usage.
* Almazny v1.2 is incompatible with the legacy versions and is the default.
To use legacy versions, specify ``almazny_hw_rev`` in the JSON description.
* kasli_generic.py has been merged into kasli.py, and the demonstration designs without JSON descriptions
have been removed. The base classes remain present in kasli.py to support third-party flows without
JSON descriptions.
* Legacy PYON databases should be converted to LMDB with the script below:
::
    from sipyco import pyon
    import lmdb

    # load the legacy PYON dataset database
    old = pyon.load_file("dataset_db.pyon")
    # create the LMDB database as a single file with a 1 GiB maximum size
    new = lmdb.open("dataset_db.mdb", subdir=False, map_size=2**30)
    with new.begin(write=True) as txn:
        for key, value in old.items():
            # datasets are stored as PYON-encoded (value, metadata) tuples
            txn.put(key.encode(), pyon.encode((value, {})).encode())
    new.close()
* ``artiq.wavesynth`` has been removed.
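To make the two applet-related changes above concrete, here is a minimal sketch of a custom applet written against the new interface. The widget, the label behaviour, and the dataset/argument names are illustrative and not taken from the bundled applets; only the constructor and ``data_changed`` signatures follow the changes described above.
::

    #!/usr/bin/env python3
    # Hypothetical applet; "dataset" is the argument added via add_dataset below.
    from PyQt5 import QtWidgets

    from artiq.applets.simple import SimpleApplet


    class TextDisplay(QtWidgets.QLabel):
        def __init__(self, args, ctl):
            QtWidgets.QLabel.__init__(self)
            self.dataset = args.dataset
            self.ctl = ctl  # may be ignored if no control operations are needed

        def data_changed(self, value, metadata, persist, mods):
            try:
                # was data[self.dataset][1] in the old interface
                self.setText(str(value[self.dataset]))
            except KeyError:
                pass


    def main():
        applet = SimpleApplet(TextDisplay)
        applet.add_dataset("dataset", "dataset to display")
        applet.run()


    if __name__ == "__main__":
        main()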
ARTIQ-7
-------
Highlights:
* New hardware support:
- Kasli-SoC, a new EEM carrier based on a Zynq SoC, enabling much faster kernel execution
(see: https://arxiv.org/abs/2111.15290).
- DRTIO support on Zynq-based devices (Kasli-SoC and ZC706).
- DRTIO support on KC705.
- HVAMP_8CH 8 channel HV amplifier for Fastino / Zotinos
- Almazny mezzanine board for Mirny
- Phaser: improved documentation, exposed the DAC coarse mixer and ``sif_sync``, exposed upconverter calibration
and enabling/disabling of upconverter LO & RF outputs, added helpers to align Phaser updates to the
RTIO timeline (``get_next_frame_mu()``).
- Urukul: ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912.
* Softcore targets now use the RISC-V architecture (VexRiscv) instead of OR1K (mor1kx).
* Gateware FPU is supported on KC705 and Kasli 2.0.
* Faster compilation for large arrays/lists.
* Faster exception handling.
* Several exception handling bugs fixed.
* Support for a simpler shared library system with faster calls into the runtime. This is only used by the NAC3
compiler (nac3ld) and improves RTIO output performance (test_pulse_rate) by 9-10%.
* Moninj improvements:
- Urukul monitoring and frequency setting (through dashboard) is now supported.
- Core device moninj is now proxied via the ``aqctl_moninj_proxy`` controller.
* The configuration entry ``rtio_clock`` supports multiple clocking settings, deprecating the usage
of compile-time options.
* Added support for 100MHz RTIO clock in DRTIO.
* Previously detected RTIO async errors are reported to the host after each kernel terminates and a
warning is logged. The warning is additional to the one already printed in the core device log
immediately upon detection of the error.
* Extended Kasli gateware JSON description with configuration for SPI over DIO.
* TTL outputs can now be configured to work as a clock generator from the JSON.
* On Kasli, the number of FIFO lanes in the scalable events dispatcher (SED) can now be configured in
the JSON.
* ``artiq_ddb_template`` generates edge-counter keys that start with the key of the corresponding
TTL device (e.g. ``ttl_0_counter`` for the edge counter on TTL device ``ttl_0``).
* ``artiq_master`` now has an ``--experiment-subdir`` option to scan only a subdirectory of the
repository when building the list of experiments.
* Experiments can now be submitted by-content.
* The master can now optionally log all experiments submitted into a CSV file.
* Removed worker DB warning for writing a dataset that is also in the archive.
* Experiments can now call ``scheduler.check_termination()`` to test if the user
has requested graceful termination (see the sketch after this list).
* ARTIQ command-line programs and controllers now exit cleanly on Ctrl-C.
* ``artiq_coremgmt reboot`` now reloads gateware as well, providing a more thorough and reliable
device reset (7-series FPGAs only).
* Firmware and gateware can now be built on-demand on the M-Labs server using ``afws_client``
(subscribers only). Self-compilation remains possible.
* Easier-to-use packaging via Nix Flakes.
* Python 3.10 support (experimental).
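For the ``scheduler.check_termination()`` item above, a typical polling pattern looks roughly like this. This is only a sketch: the experiment class, loop structure, and names are illustrative, not taken from the ARTIQ examples.
::

    from artiq.experiment import *

    class SlowScan(EnvExperiment):
        def build(self):
            self.setattr_device("scheduler")

        def run(self):
            for i in range(1000):
                # ... perform one step of the scan here ...
                if self.scheduler.check_termination():
                    break  # user requested graceful termination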
Breaking changes:
* Due to the new RISC-V CPU, the device database entry for the core device needs to be updated.
The ``target`` parameter needs to be set to ``rv32ima`` for Kasli 1.x and to ``rv32g`` for all
other boards. Freshly generated device database templates already contain this update.
* Updated the Phaser-Upconverter default frequency to 2.875 GHz. The new default uses the target PFD
frequency of the hardware design.
* ``Phaser.init()`` now disables all Kasli-oscillators. This avoids full power RF output being
generated for some configurations.
* Phaser: fixed coarse mixer frequency configuration
* Mirny: Added extra delays in ``ADF5356.sync()``. This avoids the need for an extra delay before
calling ``ADF5356.init()``.
* The deprecated ``set_dataset(..., save=...)`` is no longer supported.
* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accommodate support for PCA9547,
and possibly other switches in future. Readback has been removed, and now only one channel per
switch is supported.
ARTIQ-6
-------
Highlights:
* New hardware support:
- Phaser, a quad channel 1GS/s RF generator card with dual IQ upconverter and dual 5MS/s
ADC and FPGA.
- Zynq SoC core device (ZC706), enabling kernels to run on 1 GHz CPU core with a floating-point
unit for faster computations. This currently requires an external
repository (https://git.m-labs.hk/m-labs/artiq-zynq).
- Mirny 4-channel wide-band PLL/VCO-based microwave frequency synthesiser
- Fastino 32-channel, 3MS/s per channel, 16-bit DAC EEM
- Kasli 2.0, an improved core device with 12 built-in EEM slots, faster FPGA, 4 SFPs, and
high-precision clock recovery circuitry for DRTIO (to be supported in ARTIQ-7).
* ARTIQ Python (core device kernels):
- Multidimensional arrays are now available on the core device, using NumPy syntax.
Elementwise operations (e.g. ``+``, ``/``), matrix multiplication (``@``) and
multidimensional indexing are supported; slices and views are not yet.
- Trigonometric and other common math functions from NumPy are now available on the
core device (e.g. ``numpy.sin``), both for scalar arguments and implicitly
broadcast across multidimensional arrays.
- Failed assertions now raise ``AssertionError``\ s instead of aborting kernel
execution.
* Performance improvements:
- SERDES TTL inputs can now detect edges on pulses that are shorter
than the RTIO period (https://github.com/m-labs/artiq/issues/1432)
- Improved performance for kernel RPC involving list and array.
* Coredevice SI to mu conversions now always return valid codes, or raise a ``ValueError``.
* Zotino now exposes ``voltage_to_mu()``
* ``ad9910``:
- The maximum amplitude scale factor is now ``0x3fff`` (was ``0x3ffe`` before).
- The default single-tone profile is now 7 (was 0).
- Added option to ``set_mu()`` that affects the ASF, FTW and POW registers
instead of the single-tone profile register.
* Mirny now supports HW revision independent, human readable ``clk_sel`` parameters:
"XO", "SMA", and "MMCX". Passing an integer is backwards compatible.
* Dashboard:
- Applets now restart if they are running and a ccb call changes their spec
- A "Quick Open" dialog to open experiments by typing part of their name can
be brought up with Ctrl-P (Ctrl+Return to immediately submit the selected entry
with the default arguments).
- The Applets dock now has a context menu command to quickly close all open
applets (shortcut: Ctrl-Alt-W).
* Experiment results are now always saved to HDF5, even if ``run()`` fails.
* Core device: ``panic_reset 1`` now correctly resets the kernel CPU as well if
communication CPU panic occurs.
* ``NumberValue`` accepts a ``type`` parameter specifying the output as ``int`` or ``float`` (see the sketch after this list).
* A parameter ``--identifier-str`` has been added to many targets to aid
with reproducible builds.
* Python 3.7 support in Conda packages.
* `kasli_generic` JSON descriptions are now validated against a
schema. Description defaults have moved from Python to the
schema. Warns if ARTIQ version is too old.
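For the ``NumberValue`` ``type`` parameter mentioned above, usage looks roughly as follows. This is a sketch only; the argument names are made up and the lines are assumed to sit inside an experiment's ``build()`` method.
::

    # Hypothetical arguments; only the type= keyword is the point here.
    self.setattr_argument("n_shots", NumberValue(100, type="int"))
    self.setattr_argument("amplitude", NumberValue(0.5, type="float"))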
Breaking changes:
* ``artiq_netboot`` has been moved to its own repository at
https://git.m-labs.hk/m-labs/artiq-netboot
* Core device watchdogs have been removed.
* The ARTIQ compiler now implements arrays following NumPy semantics, rather than as a
thin veneer around lists. Most prior use cases of NumPy arrays in kernels should work
unchanged with the new implementation, but the behavior might differ slightly in some
cases (for instance, non-rectangular arrays are not currently supported).
* ``quamash`` has been replaced with ``qasync``.
* Protocols are updated to use device endian.
* Analyzer dump format includes a byte for device endianness.
* To support variable numbers of Urukul cards in the future, the
``artiq.coredevice.suservo.SUServo`` constructor now accepts two device name lists,
``cpld_devices`` and ``dds_devices``, rather than four individual arguments. A device database sketch follows after this list.
* Experiment classes with underscore-prefixed names are now ignored when ``artiq_client``
determines which experiment to submit (consistent with ``artiq_run``).
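For the ``SUServo`` constructor change above, the device database entry changes roughly as sketched below. This fragment is abridged and hypothetical: the device names and channel number are placeholders and other constructor arguments are omitted; only the two new list arguments are the point being shown.
::

    # Old style passed four individual Urukul CPLD/DDS arguments;
    # the new style takes two lists, allowing a variable number of Urukul cards.
    "arguments": {
        "channel": 0x000000,                  # placeholder RTIO channel
        "pgia_device": "spi_sampler0_pgia",   # placeholder
        "cpld_devices": ["urukul0_cpld", "urukul1_cpld"],
        "dds_devices": ["urukul0_dds", "urukul1_dds"],
    }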
ARTIQ-5
-------
Highlights:
* Performance improvements:
- Faster RTIO event submission (1.5x improvement in pulse rate test)
See: https://github.com/m-labs/artiq/issues/636
- Faster compilation times (3 seconds saved on kernel compilation time on a typical
medium-size experiment)
See: https://github.com/m-labs/artiq/commit/611bcc4db4ed604a32d9678623617cd50e968cbf
* Improved packaging and build system:
- new continuous integration/delivery infrastructure based on Nix and Hydra,
providing reproducibility, speed and independence.
- rolling release process (https://github.com/m-labs/artiq/issues/1326).
- firmware, gateware and device database templates are automatically built for all
supported Kasli variants.
- new JSON description format for generic Kasli systems.
- Nix packages are now supported.
- many Conda problems worked around.
- controllers are now out-of-tree.
- split packages that enable lightweight applications that communicate with ARTIQ,
e.g. controllers running on non-x86 single-board computers.
* Improved Urukul support:
- AD9910 RAM mode.
- Configurable refclk divider and PLL bypass.
- More reliable phase synchronization at high sample rates.
- Synchronization calibration data can be read from EEPROM.
* A gateware-level input edge counter has been added, which offers higher
throughput and increased flexibility over the usual TTL input PHYs where
edge timestamps are not required. See ``artiq.coredevice.edge_counter`` for
the core device driver and ``artiq.gateware.rtio.phy.edge_counter``/
``artiq.gateware.eem.DIO.add_std`` for the gateware components.
* With DRTIO, Siphaser uses a better calibration mechanism.
See: https://github.com/m-labs/artiq/commit/cc58318500ecfa537abf24127f2c22e8fe66e0f8
* Schedule updates can be sent to influxdb (artiq_influxdb_schedule).
* Experiments can now programmatically set their default pipeline, priority, and flush flag.
* List datasets can now be efficiently appended to from experiments using
``artiq.language.environment.HasEnvironment.append_to_dataset``.
* The core device now supports IPv6.
* To make development easier, the bootloader can receive firmware and secondary FPGA
gateware from the network.
* Python 3.7 compatibility (Nix and source builds only, no Conda).
* Various other bugs from 4.0 fixed.
* Preliminary Sayma v2 and Metlino hardware support.
Breaking changes:
* The ``artiq.coredevice.ad9910.AD9910`` and
``artiq.coredevice.ad9914.AD9914`` phase reference timestamp parameters
have been renamed to ``ref_time_mu`` for consistency, as they are in machine
units.
* The controller manager now ignores device database entries without the
``command`` key set to facilitate sharing of devices between multiple
masters.
* The meaning of the ``-d/--dir`` and ``--srcbuild`` options of ``artiq_flash``
has changed.
* Controllers for third-party devices are now out-of-tree.
* ``aqctl_corelog`` now filters log messages below the ``WARNING`` level by default.
This behavior can be changed using the ``-v`` and ``-q`` options like the other

@ -1,11 +1,7 @@
from ._version import get_version
__version__ = get_version()
del get_version

import os
__artiq_dir__ = os.path.dirname(os.path.abspath(__file__))
del os

@ -1,526 +1,7 @@
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.18 (https://github.com/warner/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys


def get_rev():
    return os.getenv("VERSIONEER_REV", default="unknown")


def get_version():
    return os.getenv("VERSIONEER_OVERRIDE", default="8.0+unknown.beta")


def get_keywords():
    """Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
git_date = "$Format:%ci$"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "pep440"
cfg.tag_prefix = ""
cfg.parentdir_prefix = "artiq-"
cfg.versionfile_source = "artiq/_version.py"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long", "--abbrev=8",
"--match", "%s*" % tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:8] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always -long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
override = os.getenv("VERSIONEER_OVERRIDE")
if override:
return {"version": override, "full-revisionid": None,
"dirty": None,
"error": None, "date": None}
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}

@ -1,22 +1,96 @@
#!/usr/bin/env python3

from PyQt5 import QtWidgets, QtCore, QtGui

from artiq.applets.simple import SimpleApplet
from artiq.tools import scale_from_metadata
from artiq.gui.tools import LayoutWidget


class QResponsiveLCDNumber(QtWidgets.QLCDNumber):
    doubleClicked = QtCore.pyqtSignal()

    def mouseDoubleClickEvent(self, event):
        self.doubleClicked.emit()


class QCancellableLineEdit(QtWidgets.QLineEdit):
    editCancelled = QtCore.pyqtSignal()

    def keyPressEvent(self, event):
        if event.key() == QtCore.Qt.Key_Escape:
            self.editCancelled.emit()
        else:
            super().keyPressEvent(event)


class NumberWidget(LayoutWidget):
    def __init__(self, args, req):
        LayoutWidget.__init__(self)
        self.dataset_name = args.dataset
        self.req = req
        self.metadata = dict()

        self.number_area = QtWidgets.QStackedWidget()
        self.addWidget(self.number_area, 0, 0)

        self.unit_area = QtWidgets.QLabel()
        self.unit_area.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignTop)
        self.addWidget(self.unit_area, 0, 1)

        self.lcd_widget = QResponsiveLCDNumber()
        self.lcd_widget.setDigitCount(args.digit_count)
        self.lcd_widget.doubleClicked.connect(self.start_edit)
        self.number_area.addWidget(self.lcd_widget)

        self.edit_widget = QCancellableLineEdit()
        self.edit_widget.setValidator(QtGui.QDoubleValidator())
        self.edit_widget.setAlignment(QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter)
        self.edit_widget.editCancelled.connect(self.cancel_edit)
        self.edit_widget.returnPressed.connect(self.confirm_edit)
        self.number_area.addWidget(self.edit_widget)

        font = QtGui.QFont()
        font.setPointSize(60)
        self.edit_widget.setFont(font)

        unit_font = QtGui.QFont()
        unit_font.setPointSize(20)
        self.unit_area.setFont(unit_font)

        self.number_area.setCurrentWidget(self.lcd_widget)

    def start_edit(self):
        # QLCDNumber value property contains the value of zero
        # if the displayed value is not a number.
        self.edit_widget.setText(str(self.lcd_widget.value()))
        self.edit_widget.selectAll()
        self.edit_widget.setFocus()
        self.number_area.setCurrentWidget(self.edit_widget)

    def confirm_edit(self):
        scale = scale_from_metadata(self.metadata)
        val = float(self.edit_widget.text())
        val *= scale
        self.req.set_dataset(self.dataset_name, val, **self.metadata)
        self.number_area.setCurrentWidget(self.lcd_widget)

    def cancel_edit(self):
        self.number_area.setCurrentWidget(self.lcd_widget)

    def data_changed(self, value, metadata, persist, mods):
        try:
            self.metadata = metadata[self.dataset_name]
            # This applet will degenerate other scalar types to native float on edit
            # Use the dashboard ChangeEditDialog for consistent type casting
            val = float(value[self.dataset_name])
            scale = scale_from_metadata(self.metadata)
            val /= scale
        except (KeyError, ValueError, TypeError):
            val = "---"

        unit = self.metadata.get("unit", "")
        self.unit_area.setText(unit)
        self.lcd_widget.display(val)


def main():

@ -7,13 +7,13 @@ from artiq.applets.simple import SimpleApplet
class Image(pyqtgraph.ImageView):
    def __init__(self, args, req):
        pyqtgraph.ImageView.__init__(self)
        self.args = args

    def data_changed(self, value, metadata, persist, mods):
        try:
            img = value[self.args.img]
        except KeyError:
            return
        self.setImage(img)

@ -1,33 +1,47 @@
#!/usr/bin/env python3

import PyQt5  # make sure pyqtgraph imports Qt5
from PyQt5.QtCore import QTimer
import pyqtgraph

from artiq.applets.simple import TitleApplet


class HistogramPlot(pyqtgraph.PlotWidget):
    def __init__(self, args, req):
        pyqtgraph.PlotWidget.__init__(self)
        self.args = args
        self.timer = QTimer()
        self.timer.setSingleShot(True)
        self.timer.timeout.connect(self.length_warning)

    def data_changed(self, value, metadata, persist, mods, title):
        try:
            y = value[self.args.y]
            if self.args.x is None:
                x = None
            else:
                x = value[self.args.x]
        except KeyError:
            return
        if x is None:
            x = list(range(len(y)+1))

        if len(y) and len(x) == len(y) + 1:
            self.timer.stop()
            self.clear()
            self.plot(x, y, stepMode=True, fillLevel=0,
                      brush=(0, 0, 255, 150))
            self.setTitle(title)
        else:
            if not self.timer.isActive():
                self.timer.start(1000)

    def length_warning(self):
        self.clear()
        text = "⚠️ dataset lengths mismatch:\n"\
               "There should be one more bin boundary than there are Y values"
        self.addItem(pyqtgraph.TextItem(text))


def main():

@ -2,39 +2,58 @@
import numpy as np

import PyQt5  # make sure pyqtgraph imports Qt5
from PyQt5.QtCore import QTimer
import pyqtgraph

from artiq.applets.simple import TitleApplet


class XYPlot(pyqtgraph.PlotWidget):
    def __init__(self, args, req):
        pyqtgraph.PlotWidget.__init__(self)
        self.args = args
        self.timer = QTimer()
        self.timer.setSingleShot(True)
        self.timer.timeout.connect(self.length_warning)
        self.mismatch = {'X values': False,
                         'Error bars': False,
                         'Fit values': False}

    def data_changed(self, value, metadata, persist, mods, title):
        try:
            y = value[self.args.y]
        except KeyError:
            return
        x = value.get(self.args.x, (False, None))
        if x is None:
            x = np.arange(len(y))
        error = value.get(self.args.error, (False, None))
        fit = value.get(self.args.fit, (False, None))

        if not len(y) or len(y) != len(x):
            self.mismatch['X values'] = True
        else:
            self.mismatch['X values'] = False
        if error is not None and hasattr(error, "__len__"):
            if not len(error):
                error = None
            elif len(error) != len(y):
                self.mismatch['Error bars'] = True
            else:
                self.mismatch['Error bars'] = False
        if fit is not None:
            if not len(fit):
                fit = None
            elif len(fit) != len(y):
                self.mismatch['Fit values'] = True
            else:
                self.mismatch['Fit values'] = False
        if not any(self.mismatch.values()):
            self.timer.stop()
        else:
            if not self.timer.isActive():
                self.timer.start(1000)
            return

        self.clear()
        self.plot(x, y, pen=None, symbol="x")
@ -50,6 +69,13 @@ class XYPlot(pyqtgraph.PlotWidget):
        xi = np.argsort(x)
        self.plot(x[xi], fit[xi])

    def length_warning(self):
        self.clear()
        text = "⚠️ dataset lengths mismatch:\n"
        errors = ', '.join([k for k, v in self.mismatch.items() if v])
        text = ' '.join([text, errors, "should have the same length as Y values"])
        self.addItem(pyqtgraph.TextItem(text))


def main():
    applet = TitleApplet(XYPlot)

@ -2,6 +2,7 @@
import numpy as np
from PyQt5 import QtWidgets
from PyQt5.QtCore import QTimer
import pyqtgraph

from artiq.applets.simple import SimpleApplet
@ -21,7 +22,7 @@ def _compute_ys(histogram_bins, histograms_counts):
# pyqtgraph.GraphicsWindow fails to behave like a regular Qt widget
# and breaks embedding. Do not use as top widget.
class XYHistPlot(QtWidgets.QSplitter):
    def __init__(self, args, req):
        QtWidgets.QSplitter.__init__(self)
        self.resize(1000, 600)
        self.setWindowTitle("XY/Histogram")
@ -37,6 +38,10 @@ class XYHistPlot(QtWidgets.QSplitter):
        self.hist_plot_data = None
        self.args = args
        self.timer = QTimer()
        self.timer.setSingleShot(True)
        self.timer.timeout.connect(self.length_warning)
        self.mismatch = {'bins': False, 'xs': False}

    def _set_full_data(self, xs, histogram_bins, histograms_counts):
        self.xy_plot.clear()
@ -59,9 +64,9 @@ class XYHistPlot(QtWidgets.QSplitter):
            point.histogram_index = index
            point.histogram_counts = counts

        text = "click on a data point at the left\n"\
               "to see the corresponding histogram"
        self.hist_plot.addItem(pyqtgraph.TextItem(text))

    def _set_partial_data(self, xs, histograms_counts):
        ys = _compute_ys(self.histogram_bins, histograms_counts)
@ -87,8 +92,17 @@ class XYHistPlot(QtWidgets.QSplitter):
        else:
            self.arrow.setPos(position)
        self.selected_index = spot_item.histogram_index

        if self.hist_plot_data is None:
            self.hist_plot.clear()
            self.hist_plot_data = self.hist_plot.plot(
                x=self.histogram_bins,
                y=spot_item.histogram_counts,
                stepMode=True, fillLevel=0,
                brush=(0, 0, 255, 150))
        else:
            self.hist_plot_data.setData(x=self.histogram_bins,
                                        y=spot_item.histogram_counts)

    def _can_use_partial(self, mods):
        if self.hist_plot_data is None:
@ -110,18 +124,48 @@ class XYHistPlot(QtWidgets.QSplitter):
                return False
        return True

    def data_changed(self, value, metadata, persist, mods):
        try:
            xs = value[self.args.xs]
            histogram_bins = value[self.args.histogram_bins]
            histograms_counts = value[self.args.histograms_counts]
        except KeyError:
            return
        if len(xs) != histograms_counts.shape[0]:
            self.mismatch['xs'] = True
        else:
            self.mismatch['xs'] = False
        if histograms_counts.shape[1] != len(histogram_bins) - 1:
            self.mismatch['bins'] = True
        else:
            self.mismatch['bins'] = False
        if any(self.mismatch.values()):
            if not self.timer.isActive():
                self.timer.start(1000)
            return
        else:
            self.timer.stop()

        if self._can_use_partial(mods):
            self._set_partial_data(xs, histograms_counts)
        else:
            self._set_full_data(xs, histogram_bins, histograms_counts)

    def length_warning(self):
        self.xy_plot.clear()
        self.hist_plot.clear()
        text = "⚠️ dataset lengths mismatch:\n\n"
        if self.mismatch['bins']:
            text = ''.join([text,
                            "bin boundaries should have the same length\n"
                            "as the first dimension of histogram counts."])
        if self.mismatch['bins'] and self.mismatch['xs']:
            text = ''.join([text, '\n\n'])
        if self.mismatch['xs']:
            text = ''.join([text,
                            "point abscissas should have the same length\n"
                            "as the second dimension of histogram counts."])
        self.xy_plot.addItem(pyqtgraph.TextItem(text))


def main():
    applet = SimpleApplet(XYHistPlot)

@ -0,0 +1,34 @@
#!/usr/bin/env python3
from PyQt5 import QtWidgets
from artiq.applets.simple import SimpleApplet
class ProgressWidget(QtWidgets.QProgressBar):
def __init__(self, args, req):
QtWidgets.QProgressBar.__init__(self)
self.setMinimum(args.min)
self.setMaximum(args.max)
self.dataset_value = args.value
def data_changed(self, value, metadata, persist, mods):
try:
val = round(value[self.dataset_value])
except (KeyError, ValueError, TypeError):
val = 0
self.setValue(val)
def main():
applet = SimpleApplet(ProgressWidget)
applet.add_dataset("value", "counter")
applet.argparser.add_argument("--min", type=int, default=0,
help="minimum (left) value of the bar")
applet.argparser.add_argument("--max", type=int, default=100,
help="maximum (right) value of the bar")
applet.run()
if __name__ == "__main__":
main()

@ -4,16 +4,115 @@ import asyncio
import os
import string

from qasync import QEventLoop, QtWidgets, QtCore

from sipyco.sync_struct import Subscriber, process_mod
from sipyco.pc_rpc import AsyncioClient as RPCClient
from sipyco import pyon
from sipyco.pipe_ipc import AsyncioChildComm

from artiq.language.scan import ScanObject


logger = logging.getLogger(__name__)
class _AppletRequestInterface:
def __init__(self):
raise NotImplementedError
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
"""
Set a dataset.
See documentation of ``artiq.language.environment.set_dataset``.
"""
raise NotImplementedError
def mutate_dataset(self, key, index, value):
"""
Mutate a dataset.
See documentation of ``artiq.language.environment.mutate_dataset``.
"""
raise NotImplementedError
def append_to_dataset(self, key, value):
"""
Append to a dataset.
See documentation of ``artiq.language.environment.append_to_dataset``.
"""
raise NotImplementedError
def set_argument_value(self, expurl, name, value):
"""
Temporarily set the value of an argument in an experiment in the dashboard.
The value resets to default value when recomputing the argument.
:param expurl: Experiment URL identifying the experiment in the dashboard. Example: 'repo:ArgumentsDemo'.
:param name: Name of the argument in the experiment.
:param value: Object representing the new temporary value of the argument. For ``Scannable`` arguments, this parameter
should be a ``ScanObject``. The type of the ``ScanObject`` will be set as the selected type when this function is called.
"""
raise NotImplementedError
class AppletRequestIPC(_AppletRequestInterface):
def __init__(self, ipc):
self.ipc = ipc
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
metadata = {}
if unit is not None:
metadata["unit"] = unit
if scale is not None:
metadata["scale"] = scale
if precision is not None:
metadata["precision"] = precision
self.ipc.set_dataset(key, value, metadata, persist)
def mutate_dataset(self, key, index, value):
mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
self.ipc.update_dataset(mod)
def append_to_dataset(self, key, value):
mod = {"action": "append", "path": [key, 1], "x": value}
self.ipc.update_dataset(mod)
def set_argument_value(self, expurl, name, value):
if isinstance(value, ScanObject):
value = value.describe()
self.ipc.set_argument_value(expurl, name, value)
class AppletRequestRPC(_AppletRequestInterface):
def __init__(self, loop, dataset_ctl):
self.loop = loop
self.dataset_ctl = dataset_ctl
self.background_tasks = set()
def _background(self, coro, *args, **kwargs):
task = self.loop.create_task(coro(*args, **kwargs))
self.background_tasks.add(task)
task.add_done_callback(self.background_tasks.discard)
def set_dataset(self, key, value, unit=None, scale=None, precision=None, persist=None):
metadata = {}
if unit is not None:
metadata["unit"] = unit
if scale is not None:
metadata["scale"] = scale
if precision is not None:
metadata["precision"] = precision
self._background(self.dataset_ctl.set, key, value, metadata=metadata, persist=persist)
def mutate_dataset(self, key, index, value):
mod = {"action": "setitem", "path": [key, 1], "key": index, "value": value}
self._background(self.dataset_ctl.update, mod)
def append_to_dataset(self, key, value):
mod = {"action": "append", "path": [key, 1], "x": value}
self._background(self.dataset_ctl.update, mod)
class AppletIPCClient(AsyncioChildComm):
    def set_close_cb(self, close_cb):
        self.close_cb = close_cb
@ -64,12 +163,30 @@ class AppletIPCClient(AsyncioChildComm):
                         exc_info=True)
            self.close_cb()

    def subscribe(self, datasets, init_cb, mod_cb, dataset_prefixes=[], *, loop):
        self.write_pyon({"action": "subscribe",
                         "datasets": datasets,
                         "dataset_prefixes": dataset_prefixes})
        self.init_cb = init_cb
        self.mod_cb = mod_cb
        self.listen_task = loop.create_task(self.listen())
def set_dataset(self, key, value, metadata, persist=None):
self.write_pyon({"action": "set_dataset",
"key": key,
"value": value,
"metadata": metadata,
"persist": persist})
def update_dataset(self, mod):
self.write_pyon({"action": "update_dataset",
"mod": mod})
def set_argument_value(self, expurl, name, value):
self.write_pyon({"action": "set_argument_value",
"expurl": expurl,
"name": name,
"value": value})
class SimpleApplet:
@ -91,8 +208,11 @@ class SimpleApplet:
"for dataset notifications " "for dataset notifications "
"(ignored in embedded mode)") "(ignored in embedded mode)")
group.add_argument( group.add_argument(
"--port", default=3250, type=int, "--port-notify", default=3250, type=int,
help="TCP port to connect to") help="TCP port to connect to for notifications (ignored in embedded mode)")
group.add_argument(
"--port-control", default=3251, type=int,
help="TCP port to connect to for control (ignored in embedded mode)")
self._arggroup_datasets = self.argparser.add_argument_group("datasets") self._arggroup_datasets = self.argparser.add_argument_group("datasets")
@ -113,8 +233,11 @@ class SimpleApplet:
        self.embed = os.getenv("ARTIQ_APPLET_EMBED")
        self.datasets = {getattr(self.args, arg.replace("-", "_"))
                         for arg in self.dataset_args}
        # Optional prefixes (dataset sub-trees) to match subscriptions against;
        # currently only used by out-of-tree subclasses (ndscan).
        self.dataset_prefixes = []

    def qasync_init(self):
        app = QtWidgets.QApplication([])
        self.loop = QEventLoop(app)
        asyncio.set_event_loop(self.loop)
@ -128,8 +251,21 @@ class SimpleApplet:
        if self.embed is not None:
            self.ipc.close()
def req_init(self):
if self.embed is None:
dataset_ctl = RPCClient()
self.loop.run_until_complete(dataset_ctl.connect_rpc(
self.args.server, self.args.port_control, "master_dataset_db"))
self.req = AppletRequestRPC(self.loop, dataset_ctl)
else:
self.req = AppletRequestIPC(self.ipc)
def req_close(self):
if self.embed is None:
self.req.dataset_ctl.close_rpc()
    def create_main_widget(self):
        self.main_widget = self.main_widget_class(self.args, self.req)
        if self.embed is not None:
            self.ipc.set_close_cb(self.main_widget.close)
            if os.name == "nt":
@ -162,6 +298,14 @@ class SimpleApplet:
        self.data = data
        return data
def is_dataset_subscribed(self, key):
if key in self.datasets:
return True
for prefix in self.dataset_prefixes:
if key.startswith(prefix):
return True
return False
def filter_mod(self, mod): def filter_mod(self, mod):
if self.embed is not None: if self.embed is not None:
# the parent already filters for us # the parent already filters for us
@ -170,14 +314,19 @@ class SimpleApplet:
if mod["action"] == "init": if mod["action"] == "init":
return True return True
if mod["path"]: if mod["path"]:
return mod["path"][0] in self.datasets return self.is_dataset_subscribed(mod["path"][0])
elif mod["action"] in {"setitem", "delitem"}: elif mod["action"] in {"setitem", "delitem"}:
return mod["key"] in self.datasets return self.is_dataset_subscribed(mod["key"])
else: else:
return False return False
def emit_data_changed(self, data, mod_buffer): def emit_data_changed(self, data, mod_buffer):
self.main_widget.data_changed(data, mod_buffer) persist = dict()
value = dict()
metadata = dict()
for k, d in data.items():
persist[k], value[k], metadata[k] = d
self.main_widget.data_changed(value, metadata, persist, mod_buffer)
def flush_mod_buffer(self): def flush_mod_buffer(self):
self.emit_data_changed(self.data, self.mod_buffer) self.emit_data_changed(self.data, self.mod_buffer)
@ -192,8 +341,8 @@ class SimpleApplet:
self.mod_buffer.append(mod) self.mod_buffer.append(mod)
else: else:
self.mod_buffer = [mod] self.mod_buffer = [mod]
asyncio.get_event_loop().call_later(self.args.update_delay, self.loop.call_later(self.args.update_delay,
self.flush_mod_buffer) self.flush_mod_buffer)
else: else:
self.emit_data_changed(self.data, [mod]) self.emit_data_changed(self.data, [mod])
@ -202,9 +351,11 @@ class SimpleApplet:
self.subscriber = Subscriber("datasets", self.subscriber = Subscriber("datasets",
self.sub_init, self.sub_mod) self.sub_init, self.sub_mod)
self.loop.run_until_complete(self.subscriber.connect( self.loop.run_until_complete(self.subscriber.connect(
self.args.server, self.args.port)) self.args.server, self.args.port_notify))
else: else:
self.ipc.subscribe(self.datasets, self.sub_init, self.sub_mod) self.ipc.subscribe(self.datasets, self.sub_init, self.sub_mod,
dataset_prefixes=self.dataset_prefixes,
loop=self.loop)
def unsubscribe(self): def unsubscribe(self):
if self.embed is None: if self.embed is None:
@ -212,16 +363,20 @@ class SimpleApplet:
def run(self): def run(self):
self.args_init() self.args_init()
self.quamash_init() self.qasync_init()
try: try:
self.ipc_init() self.ipc_init()
try: try:
self.create_main_widget() self.req_init()
self.subscribe()
try: try:
self.loop.run_forever() self.create_main_widget()
self.subscribe()
try:
self.loop.run_forever()
finally:
self.unsubscribe()
finally: finally:
self.unsubscribe() self.req_close()
finally: finally:
self.ipc_close() self.ipc_close()
finally: finally:
@ -260,4 +415,9 @@ class TitleApplet(SimpleApplet):
title = self.args.title title = self.args.title
else: else:
title = None title = None
self.main_widget.data_changed(data, mod_buffer, title) persist = dict()
value = dict()
metadata = dict()
for k, d in data.items():
persist[k], value[k], metadata[k] = d
self.main_widget.data_changed(value, metadata, persist, mod_buffer, title)
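With main_widget_class now instantiated as main_widget_class(args, req) and data_changed() receiving separate value/metadata/persist dictionaries keyed by dataset name (plus title for TitleApplet), a bare-bones applet widget would look roughly like the sketch below; the PyQt5 import and the args.dataset argument are illustrative assumptions, not part of this diff:

from PyQt5 import QtWidgets

class DemoWidget(QtWidgets.QLabel):
    def __init__(self, args, req):
        QtWidgets.QLabel.__init__(self)
        self.dataset = args.dataset   # hypothetical dataset name argument
        self.req = req                # AppletRequest*, usable to write datasets back

    def data_changed(self, value, metadata, persist, mods):
        # value/metadata/persist are dicts keyed by dataset name
        unit = metadata.get(self.dataset, {}).get("unit", "")  # "unit" key is an assumption
        self.setText("{} {}".format(value.get(self.dataset), unit))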


@ -20,11 +20,46 @@ class Model(DictSyncTreeSepModel):
DictSyncTreeSepModel.__init__(self, ".", ["Dataset", "Value"], init) DictSyncTreeSepModel.__init__(self, ".", ["Dataset", "Value"], init)
def convert(self, k, v, column): def convert(self, k, v, column):
return short_format(v[1]) return short_format(v[1], v[2])
class DatasetCtl:
def __init__(self, master_host, master_port):
self.master_host = master_host
self.master_port = master_port
async def _execute_rpc(self, op_name, key_or_mod, value=None, persist=None, metadata=None):
logger.info("Starting %s operation on %s", op_name, key_or_mod)
try:
remote = RPCClient()
await remote.connect_rpc(self.master_host, self.master_port,
"master_dataset_db")
try:
if op_name == "set":
await remote.set(key_or_mod, value, persist, metadata)
elif op_name == "update":
await remote.update(key_or_mod)
else:
logger.error("Invalid operation: %s", op_name)
return
finally:
remote.close_rpc()
except:
logger.error("Failed %s operation on %s", op_name,
key_or_mod, exc_info=True)
else:
logger.info("Finished %s operation on %s", op_name,
key_or_mod)
async def set(self, key, value, persist=None, metadata=None):
await self._execute_rpc("set", key, value, persist, metadata)
async def update(self, mod):
await self._execute_rpc("update", mod)
class DatasetsDock(QtWidgets.QDockWidget): class DatasetsDock(QtWidgets.QDockWidget):
def __init__(self, datasets_sub, master_host, master_port): def __init__(self, dataset_sub, dataset_ctl):
QtWidgets.QDockWidget.__init__(self, "Datasets") QtWidgets.QDockWidget.__init__(self, "Datasets")
self.setObjectName("Datasets") self.setObjectName("Datasets")
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable | self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
@ -62,10 +97,9 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table.addAction(upload_action) self.table.addAction(upload_action)
self.set_model(Model(dict())) self.set_model(Model(dict()))
datasets_sub.add_setmodel_callback(self.set_model) dataset_sub.add_setmodel_callback(self.set_model)
self.master_host = master_host self.dataset_ctl = dataset_ctl
self.master_port = master_port
def _search_datasets(self): def _search_datasets(self):
if hasattr(self, "table_model_filter"): if hasattr(self, "table_model_filter"):
@ -82,30 +116,14 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table_model_filter.setSourceModel(self.table_model) self.table_model_filter.setSourceModel(self.table_model)
self.table.setModel(self.table_model_filter) self.table.setModel(self.table_model_filter)
async def _upload_dataset(self, name, value,):
logger.info("Uploading dataset '%s' to master...", name)
try:
remote = RPCClient()
await remote.connect_rpc(self.master_host, self.master_port,
"master_dataset_db")
try:
await remote.set(name, value)
finally:
remote.close_rpc()
except:
logger.error("Failed uploading dataset '%s'",
name, exc_info=True)
else:
logger.info("Finished uploading dataset '%s'", name)
def upload_clicked(self): def upload_clicked(self):
idx = self.table.selectedIndexes() idx = self.table.selectedIndexes()
if idx: if idx:
idx = self.table_model_filter.mapToSource(idx[0]) idx = self.table_model_filter.mapToSource(idx[0])
key = self.table_model.index_to_key(idx) key = self.table_model.index_to_key(idx)
if key is not None: if key is not None:
persist, value = self.table_model.backing_store[key] persist, value, metadata = self.table_model.backing_store[key]
asyncio.ensure_future(self._upload_dataset(key, value)) asyncio.ensure_future(self.dataset_ctl.set(key, value, metadata=metadata))
def save_state(self): def save_state(self):
return bytes(self.table.header().saveState()) return bytes(self.table.header().saveState())


@ -10,22 +10,14 @@ import h5py
from sipyco import pyon from sipyco import pyon
from artiq import __artiq_dir__ as artiq_dir from artiq import __artiq_dir__ as artiq_dir
from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name from artiq.gui.tools import (LayoutWidget, WheelFilter,
log_level_to_name, get_open_file_name)
from artiq.gui.entries import procdesc_to_entry from artiq.gui.entries import procdesc_to_entry
from artiq.master.worker import Worker, log_worker_exception from artiq.master.worker import Worker, log_worker_exception
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class _WheelFilter(QtCore.QObject):
def eventFilter(self, obj, event):
if (event.type() == QtCore.QEvent.Wheel and
event.modifiers() != QtCore.Qt.NoModifier):
event.ignore()
return True
return False
class _ArgumentEditor(QtWidgets.QTreeWidget): class _ArgumentEditor(QtWidgets.QTreeWidget):
def __init__(self, dock): def __init__(self, dock):
QtWidgets.QTreeWidget.__init__(self) QtWidgets.QTreeWidget.__init__(self)
@ -46,7 +38,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self.setStyleSheet("QTreeWidget {background: " + self.setStyleSheet("QTreeWidget {background: " +
self.palette().midlight().color().name() + " ;}") self.palette().midlight().color().name() + " ;}")
self.viewport().installEventFilter(_WheelFilter(self.viewport())) self.viewport().installEventFilter(WheelFilter(self.viewport(), True))
self._groups = dict() self._groups = dict()
self._arg_to_widgets = dict() self._arg_to_widgets = dict()
@ -378,9 +370,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
class LocalDatasetDB: class LocalDatasetDB:
def __init__(self, datasets_sub): def __init__(self, dataset_sub):
self.datasets_sub = datasets_sub self.dataset_sub = dataset_sub
datasets_sub.add_setmodel_callback(self.init) dataset_sub.add_setmodel_callback(self.init)
def init(self, data): def init(self, data):
self._data = data self._data = data
@ -389,11 +381,11 @@ class LocalDatasetDB:
return self._data.backing_store[key][1] return self._data.backing_store[key][1]
def update(self, mod): def update(self, mod):
self.datasets_sub.update(mod) self.dataset_sub.update(mod)
class ExperimentsArea(QtWidgets.QMdiArea): class ExperimentsArea(QtWidgets.QMdiArea):
def __init__(self, root, datasets_sub): def __init__(self, root, dataset_sub):
QtWidgets.QMdiArea.__init__(self) QtWidgets.QMdiArea.__init__(self)
self.pixmap = QtGui.QPixmap(os.path.join( self.pixmap = QtGui.QPixmap(os.path.join(
artiq_dir, "gui", "logo_ver.svg")) artiq_dir, "gui", "logo_ver.svg"))
@ -402,11 +394,11 @@ class ExperimentsArea(QtWidgets.QMdiArea):
self.open_experiments = [] self.open_experiments = []
self._ddb = LocalDatasetDB(datasets_sub) self._ddb = LocalDatasetDB(dataset_sub)
self.worker_handlers = { self.worker_handlers = {
"get_device_db": lambda: {}, "get_device_db": lambda: {},
"get_device": lambda k: {"type": "dummy"}, "get_device": lambda key, resolve_alias=False: {"type": "dummy"},
"get_dataset": self._ddb.get, "get_dataset": self._ddb.get,
"update_dataset": self._ddb.update, "update_dataset": self._ddb.update,
} }
@ -516,5 +508,9 @@ class ExperimentsArea(QtWidgets.QMdiArea):
self.open_experiments.append(dock) self.open_experiments.append(dock)
return dock return dock
def set_argument_value(self, expurl, name, value):
logger.warning("Unable to set argument '%s', dropping change. "
"'set_argument_value' not supported in browser.", name)
def on_dock_closed(self, dock): def on_dock_closed(self, dock):
self.open_experiments.remove(dock) self.open_experiments.remove(dock)


@ -42,7 +42,7 @@ class ThumbnailIconProvider(QtWidgets.QFileIconProvider):
except KeyError: except KeyError:
return return
try: try:
img = QtGui.QImage.fromData(t.value) img = QtGui.QImage.fromData(t[()])
except: except:
logger.warning("unable to read thumbnail from %s", logger.warning("unable to read thumbnail from %s",
info.filePath(), exc_info=True) info.filePath(), exc_info=True)
@ -71,7 +71,7 @@ class ZoomIconView(QtWidgets.QListView):
self._char_width = QtGui.QFontMetrics(self.font()).averageCharWidth() self._char_width = QtGui.QFontMetrics(self.font()).averageCharWidth()
self.setViewMode(self.IconMode) self.setViewMode(self.IconMode)
w = self._char_width*self.default_size w = self._char_width*self.default_size
self.setIconSize(QtCore.QSize(w, w*self.aspect)) self.setIconSize(QtCore.QSize(w, int(w*self.aspect)))
self.setFlow(self.LeftToRight) self.setFlow(self.LeftToRight)
self.setResizeMode(self.Adjust) self.setResizeMode(self.Adjust)
self.setWrapping(True) self.setWrapping(True)
@ -102,13 +102,14 @@ class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
h5 = open_h5(info) h5 = open_h5(info)
if h5 is not None: if h5 is not None:
try: try:
expid = pyon.decode(h5["expid"].value) expid = pyon.decode(h5["expid"][()]) if "expid" in h5 else dict()
start_time = datetime.fromtimestamp(h5["start_time"].value) start_time = datetime.fromtimestamp(h5["start_time"][()]) if "start_time" in h5 else "<none>"
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n" v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
"class_name: {}\nrid: {}\nstart_time: {}").format( "class_name: {}\nrid: {}\nstart_time: {}").format(
h5["artiq_version"].value, expid["repo_rev"], h5["artiq_version"].asstr()[()] if "artiq_version" in h5 else "<none>",
expid["file"], expid["class_name"], expid.get("repo_rev", "<none>"),
h5["rid"].value, start_time) expid.get("file", "<none>"), expid.get("class_name", "<none>"),
h5["rid"][()] if "rid" in h5 else "<none>", start_time)
return v return v
except: except:
logger.warning("unable to read metadata from %s", logger.warning("unable to read metadata from %s",
@ -174,31 +175,45 @@ class FilesDock(QtWidgets.QDockWidget):
logger.debug("loading datasets from %s", info.filePath()) logger.debug("loading datasets from %s", info.filePath())
with f: with f:
try: try:
expid = pyon.decode(f["expid"].value) expid = pyon.decode(f["expid"][()]) if "expid" in f else dict()
start_time = datetime.fromtimestamp(f["start_time"].value) start_time = datetime.fromtimestamp(f["start_time"][()]) if "start_time" in f else "<none>"
v = { v = {
"artiq_version": f["artiq_version"].value, "artiq_version": f["artiq_version"].asstr()[()] if "artiq_version" in f else "<none>",
"repo_rev": expid["repo_rev"], "repo_rev": expid.get("repo_rev", "<none>"),
"file": expid["file"], "file": expid.get("file", "<none>"),
"class_name": expid["class_name"], "class_name": expid.get("class_name", "<none>"),
"rid": f["rid"].value, "rid": f["rid"][()] if "rid" in f else "<none>",
"start_time": start_time, "start_time": start_time,
} }
self.metadata_changed.emit(v) self.metadata_changed.emit(v)
except: except:
logger.warning("unable to read metadata from %s", logger.warning("unable to read metadata from %s",
info.filePath(), exc_info=True) info.filePath(), exc_info=True)
rd = dict()
rd = {}
if "archive" in f: if "archive" in f:
rd = {k: (True, v.value) for k, v in f["archive"].items()} def visitor(k, v):
if isinstance(v, h5py.Dataset):
# v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
# See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
rd[k] = (True, v[()], dict(v.attrs))
f["archive"].visititems(visitor)
if "datasets" in f: if "datasets" in f:
for k, v in f["datasets"].items(): def visitor(k, v):
if k in rd: if isinstance(v, h5py.Dataset):
logger.warning("dataset '%s' is both in archive and " if k in rd:
"outputs", k) logger.warning("dataset '%s' is both in archive "
rd[k] = (True, v.value) "and outputs", k)
if rd: # v.attrs is a non-serializable h5py.AttributeManager, need to convert to dict
self.datasets.init(rd) # See https://docs.h5py.org/en/stable/high/attr.html#h5py.AttributeManager
rd[k] = (True, v[()], dict(v.attrs))
f["datasets"].visititems(visitor)
self.datasets.init(rd)
self.dataset_changed.emit(info.filePath()) self.dataset_changed.emit(info.filePath())
def list_activated(self, idx): def list_activated(self, idx):
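The same traversal as a standalone sketch against an ARTIQ results file (the path is a placeholder): every dataset under archive/ and datasets/ is collected together with its HDF5 attributes, which is where the per-dataset metadata now comes from.

import h5py

rd = {}
with h5py.File("results/000012345-Demo.h5", "r") as f:
    for group in ("archive", "datasets"):
        if group not in f:
            continue
        def visitor(k, v):
            # mirror the dock's loader: (persist, value, metadata) per dataset
            if isinstance(v, h5py.Dataset):
                rd[k] = (True, v[()], dict(v.attrs))
        f[group].visititems(visitor)
print(sorted(rd))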


@ -1,7 +1,9 @@
import os import os
import subprocess import subprocess
from misoc.cores import identifier from migen import *
from migen.build.platforms.sinara import kasli
from misoc.interconnect.csr import *
from misoc.integration.builder import * from misoc.integration.builder import *
from artiq.gateware.amp import AMPSoC from artiq.gateware.amp import AMPSoC
@ -22,28 +24,53 @@ def get_identifier_string(soc, suffix="", add_class_name=True):
return r return r
def add_identifier(soc, *args, **kwargs): class ReprogrammableIdentifier(Module, AutoCSR):
def __init__(self, ident):
self.address = CSRStorage(8)
self.data = CSRStatus(8)
contents = list(ident.encode())
l = len(contents)
if l > 255:
raise ValueError("Identifier string must be 255 characters or less")
contents.insert(0, l)
for i in range(8):
self.specials += Instance("ROM256X1", name="identifier_str"+str(i),
i_A0=self.address.storage[0], i_A1=self.address.storage[1],
i_A2=self.address.storage[2], i_A3=self.address.storage[3],
i_A4=self.address.storage[4], i_A5=self.address.storage[5],
i_A6=self.address.storage[6], i_A7=self.address.storage[7],
o_O=self.data.status[i],
p_INIT=sum(1 << j if c & (1 << i) else 0 for j, c in enumerate(contents)))
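ReprogrammableIdentifier stores the string as a length byte at ROM address 0 followed by the characters, one ROM256X1 primitive per bit of the data CSR. A sketch of the corresponding readout loop, with csr_write/csr_read standing in for whatever CSR accessors the firmware or a test bench provides (they are not part of this diff):

def read_identifier(csr_write, csr_read):
    csr_write("identifier_address", 0)
    length = csr_read("identifier_data")       # address 0 holds the string length
    chars = []
    for i in range(1, length + 1):
        csr_write("identifier_address", i)
        chars.append(csr_read("identifier_data"))
    return bytes(chars).decode()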
def add_identifier(soc, *args, gateware_identifier_str=None, **kwargs):
if hasattr(soc, "identifier"): if hasattr(soc, "identifier"):
raise ValueError raise ValueError
identifier_str = get_identifier_string(soc, *args, **kwargs) identifier_str = get_identifier_string(soc, *args, **kwargs)
soc.submodules.identifier = identifier.Identifier(identifier_str) soc.submodules.identifier = ReprogrammableIdentifier(gateware_identifier_str or identifier_str)
soc.config["IDENTIFIER_STR"] = identifier_str soc.config["IDENTIFIER_STR"] = identifier_str
def build_artiq_soc(soc, argdict): def build_artiq_soc(soc, argdict):
firmware_dir = os.path.join(artiq_dir, "firmware") firmware_dir = os.path.join(artiq_dir, "firmware")
builder = Builder(soc, **argdict) builder = Builder(soc, **argdict)
builder.software_packages = [] builder.software_packages = []
builder.add_software_package("bootloader", os.path.join(firmware_dir, "bootloader")) builder.add_software_package("bootloader", os.path.join(firmware_dir, "bootloader"))
if isinstance(soc, AMPSoC): is_kasli_v1 = isinstance(soc.platform, kasli.Platform) and soc.platform.hw_rev in ("v1.0", "v1.1")
builder.add_software_package("libm") kernel_cpu_type = "vexriscv" if is_kasli_v1 else "vexriscv-g"
builder.add_software_package("libprintf") builder.add_software_package("libm", cpu_type=kernel_cpu_type)
builder.add_software_package("libprintf", cpu_type=kernel_cpu_type)
builder.add_software_package("libunwind", cpu_type=kernel_cpu_type)
builder.add_software_package("ksupport", os.path.join(firmware_dir, "ksupport"), cpu_type=kernel_cpu_type)
# Generate unwinder for soft float target (ARTIQ runtime)
# If the kernel lacks FPU, then the runtime unwinder is already generated
if not is_kasli_v1:
builder.add_software_package("libunwind") builder.add_software_package("libunwind")
builder.add_software_package("ksupport", os.path.join(firmware_dir, "ksupport")) if not soc.config["DRTIO_ROLE"] == "satellite":
builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime")) builder.add_software_package("runtime", os.path.join(firmware_dir, "runtime"))
else: else:
# Assume DRTIO satellite.
builder.add_software_package("satman", os.path.join(firmware_dir, "satman")) builder.add_software_package("satman", os.path.join(firmware_dir, "satman"))
try: try:
builder.build() builder.build()


@ -21,13 +21,19 @@ class scoped(object):
set of variables resolved as globals set of variables resolved as globals
""" """
class remote(object):
"""
:ivar remote_fn: (bool) whether the function is run on a remote device,
meaning arguments are received remotely and the return value is sent back remotely
"""
# Typed versions of untyped nodes # Typed versions of untyped nodes
class argT(ast.arg, commontyped): class argT(ast.arg, commontyped):
pass pass
class ClassDefT(ast.ClassDef): class ClassDefT(ast.ClassDef):
_types = ("constructor_type",) _types = ("constructor_type",)
class FunctionDefT(ast.FunctionDef, scoped): class FunctionDefT(ast.FunctionDef, scoped, remote):
_types = ("signature_type",) _types = ("signature_type",)
class QuotedFunctionDefT(FunctionDefT): class QuotedFunctionDefT(FunctionDefT):
""" """
@ -58,7 +64,7 @@ class BinOpT(ast.BinOp, commontyped):
pass pass
class BoolOpT(ast.BoolOp, commontyped): class BoolOpT(ast.BoolOp, commontyped):
pass pass
class CallT(ast.Call, commontyped): class CallT(ast.Call, commontyped, remote):
""" """
:ivar iodelay: (:class:`iodelay.Expr`) :ivar iodelay: (:class:`iodelay.Expr`)
:ivar arg_exprs: (dict of str to :class:`iodelay.Expr`) :ivar arg_exprs: (dict of str to :class:`iodelay.Expr`)


@ -38,6 +38,9 @@ class TInt(types.TMono):
def one(): def one():
return 1 return 1
def TInt8():
return TInt(types.TValue(8))
def TInt32(): def TInt32():
return TInt(types.TValue(32)) return TInt(types.TValue(32))
@ -82,13 +85,27 @@ class TList(types.TMono):
super().__init__("list", {"elt": elt}) super().__init__("list", {"elt": elt})
class TArray(types.TMono): class TArray(types.TMono):
def __init__(self, elt=None): def __init__(self, elt=None, num_dims=1):
if elt is None: if elt is None:
elt = types.TVar() elt = types.TVar()
super().__init__("array", {"elt": elt}) if isinstance(num_dims, int):
# Make TArray more convenient to instantiate from (ARTIQ) user code.
num_dims = types.TValue(num_dims)
# For now, enforce number of dimensions to be known, as we'd otherwise
# need to implement custom unification logic for the type of `shape`.
# Default to 1 to keep compatibility with old user code from before
# multidimensional array support.
assert isinstance(num_dims.value, int), "Number of dimensions must be resolved"
super().__init__("array", {"elt": elt, "num_dims": num_dims})
self.attributes = OrderedDict([
("buffer", types._TPointer(elt)),
("shape", types.TTuple([TInt32()] * num_dims.value)),
])
def _array_printer(typ, printer, depth, max_depth): def _array_printer(typ, printer, depth, max_depth):
return "numpy.array(elt={})".format(printer.name(typ["elt"], depth, max_depth)) return "numpy.array(elt={}, num_dims={})".format(
printer.name(typ["elt"], depth, max_depth), typ["num_dims"].value)
types.TypePrinter.custom_printers["array"] = _array_printer types.TypePrinter.custom_printers["array"] = _array_printer
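A minimal sketch of what the extra num_dims parameter adds to the type, using the helpers defined in this module (indexing a TMono by parameter name is the same mechanism the RPC tag generation relies on later):

arr2d = TArray(TFloat(), num_dims=2)
assert arr2d["num_dims"].value == 2
# The shape attribute is a tuple of int32s, one entry per dimension.
assert isinstance(arr2d.attributes["shape"], types.TTuple)
assert len(arr2d.attributes["shape"].elts) == 2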
class TRange(types.TMono): class TRange(types.TMono):
@ -109,18 +126,23 @@ class TException(types.TMono):
# * File, line and column where it was raised (str, int, int). # * File, line and column where it was raised (str, int, int).
# * Message, which can contain substitutions {0}, {1} and {2} (str). # * Message, which can contain substitutions {0}, {1} and {2} (str).
# * Three 64-bit integers, parameterizing the message (numpy.int64). # * Three 64-bit integers, parameterizing the message (numpy.int64).
# These attributes are prefixed with `#` so that users cannot access them,
# and we don't have to do string allocation in the runtime.
# #__name__ is now a string key in the host. A TStr may not be an actual
# CSlice in the runtime; it may instead be a CSlice with length = i32::MAX and
# ptr = the string key in the host.
# Keep this in sync with the function ARTIQIRGenerator.alloc_exn. # Keep this in sync with the function ARTIQIRGenerator.alloc_exn.
attributes = OrderedDict([ attributes = OrderedDict([
("__name__", TStr()), ("#__name__", TInt32()),
("__file__", TStr()), ("#__file__", TStr()),
("__line__", TInt32()), ("#__line__", TInt32()),
("__col__", TInt32()), ("#__col__", TInt32()),
("__func__", TStr()), ("#__func__", TStr()),
("__message__", TStr()), ("#__message__", TStr()),
("__param0__", TInt64()), ("#__param0__", TInt64()),
("__param1__", TInt64()), ("#__param1__", TInt64()),
("__param2__", TInt64()), ("#__param2__", TInt64()),
]) ])
def __init__(self, name="Exception", id=0): def __init__(self, name="Exception", id=0):
@ -155,7 +177,9 @@ def fn_list():
return types.TConstructor(TList()) return types.TConstructor(TList())
def fn_array(): def fn_array():
return types.TConstructor(TArray()) # numpy.array() is actually a "magic" macro that is expanded in-place, but
# just as for builtin functions, we do not want to quote it, etc.
return types.TBuiltinFunction("array")
def fn_Exception(): def fn_Exception():
return types.TExceptionConstructor(TException("Exception")) return types.TExceptionConstructor(TException("Exception"))
@ -208,9 +232,6 @@ def obj_interleave():
def obj_sequential(): def obj_sequential():
return types.TBuiltin("sequential") return types.TBuiltin("sequential")
def fn_watchdog():
return types.TBuiltinFunction("watchdog")
def fn_delay(): def fn_delay():
return types.TBuiltinFunction("delay") return types.TBuiltinFunction("delay")
@ -226,6 +247,12 @@ def fn_at_mu():
def fn_rtio_log(): def fn_rtio_log():
return types.TBuiltinFunction("rtio_log") return types.TBuiltinFunction("rtio_log")
def fn_subkernel_await():
return types.TBuiltinFunction("subkernel_await")
def fn_subkernel_preload():
return types.TBuiltinFunction("subkernel_preload")
# Accessors # Accessors
def is_none(typ): def is_none(typ):
@ -304,9 +331,12 @@ def is_iterable(typ):
return is_listish(typ) or is_range(typ) return is_listish(typ) or is_range(typ)
def get_iterable_elt(typ): def get_iterable_elt(typ):
# TODO: Arrays count as listish, but this returns the innermost element type for
# n-dimensional arrays, rather than the n-1 dimensional result of iterating over
# the first axis, which makes the name a bit misleading.
if is_str(typ) or is_bytes(typ) or is_bytearray(typ): if is_str(typ) or is_bytes(typ) or is_bytearray(typ):
return TInt(types.TValue(8)) return TInt8()
elif is_iterable(typ): elif types._is_pointer(typ) or is_iterable(typ):
return typ.find()["elt"].find() return typ.find()["elt"].find()
else: else:
assert False assert False
@ -320,6 +350,6 @@ def is_allocated(typ):
return not (is_none(typ) or is_bool(typ) or is_int(typ) or return not (is_none(typ) or is_bool(typ) or is_int(typ) or
is_float(typ) or is_range(typ) or is_float(typ) or is_range(typ) or
types._is_pointer(typ) or types.is_function(typ) or types._is_pointer(typ) or types.is_function(typ) or
types.is_c_function(typ) or types.is_rpc(typ) or types.is_external_function(typ) or types.is_rpc(typ) or
types.is_method(typ) or types.is_tuple(typ) or types.is_subkernel(typ) or types.is_method(typ) or
types.is_value(typ)) types.is_tuple(typ) or types.is_value(typ))


@ -5,7 +5,8 @@ the references to the host objects and translates the functions
annotated as ``@kernel`` when they are referenced. annotated as ``@kernel`` when they are referenced.
""" """
import sys, os, re, linecache, inspect, textwrap, types as pytypes, numpy import typing
import os, re, linecache, inspect, textwrap, types as pytypes, numpy
from collections import OrderedDict, defaultdict from collections import OrderedDict, defaultdict
from pythonparser import ast, algorithm, source, diagnostic, parse_buffer from pythonparser import ast, algorithm, source, diagnostic, parse_buffer
@ -14,10 +15,17 @@ from pythonparser import lexer as source_lexer, parser as source_parser
from Levenshtein import ratio as similarity, jaro_winkler from Levenshtein import ratio as similarity, jaro_winkler
from ..language import core as language_core from ..language import core as language_core
from . import types, builtins, asttyped, prelude from . import types, builtins, asttyped, math_fns, prelude
from .transforms import ASTTypedRewriter, Inferencer, IntMonomorphizer, TypedtreePrinter from .transforms import ASTTypedRewriter, Inferencer, IntMonomorphizer, TypedtreePrinter
from .transforms.asttyped_rewriter import LocalExtractor from .transforms.asttyped_rewriter import LocalExtractor
try:
# Since numpy 1.25.0, dispatching for `__array_function__` is done via
# a C wrapper: https://github.com/numpy/numpy/pull/23020
from numpy.core._multiarray_umath import _ArrayFunctionDispatcher
except ImportError:
_ArrayFunctionDispatcher = None
class SpecializedFunction: class SpecializedFunction:
def __init__(self, instance_type, host_function): def __init__(self, instance_type, host_function):
@ -45,8 +53,48 @@ class EmbeddingMap:
self.object_forward_map = {} self.object_forward_map = {}
self.object_reverse_map = {} self.object_reverse_map = {}
self.module_map = {} self.module_map = {}
# type_map connects the host Python `type` to the pair of associated
# `(TInstance, TConstructor)`s. The `used_…_names` sets cache the
# respective `.name`s for O(1) collision avoidance.
self.type_map = {} self.type_map = {}
self.used_instance_type_names = set()
self.used_constructor_type_names = set()
self.function_map = {} self.function_map = {}
self.str_forward_map = {}
self.str_reverse_map = {}
self.preallocate_runtime_exception_names(["RuntimeError",
"RTIOUnderflow",
"RTIOOverflow",
"RTIODestinationUnreachable",
"DMAError",
"I2CError",
"CacheError",
"SPIError",
"0:ZeroDivisionError",
"0:IndexError",
"UnwrapNoneError",
"SubkernelError"])
def preallocate_runtime_exception_names(self, names):
for i, name in enumerate(names):
if ":" not in name:
name = "0:artiq.coredevice.exceptions." + name
exn_id = self.store_str(name)
assert exn_id == i
def store_str(self, s):
if s in self.str_forward_map:
return self.str_forward_map[s]
str_id = len(self.str_forward_map)
self.str_forward_map[s] = str_id
self.str_reverse_map[str_id] = s
return str_id
def retrieve_str(self, str_id):
return self.str_reverse_map[str_id]
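store_str() behaves like a small interning table: ids are handed out in insertion order and never change, which is what lets the runtime exception names above be pinned to fixed indices. A quick sketch, with the index values following the preallocation list in this diff:

emap = EmbeddingMap()
rtio_id = emap.store_str("0:artiq.coredevice.exceptions.RTIOUnderflow")
assert rtio_id == 1                               # second entry of the preallocated list
assert emap.store_str("some user string") == 12   # new strings start after the 12 builtins
assert emap.retrieve_str(rtio_id).endswith("RTIOUnderflow")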
# Modules # Modules
def store_module(self, module, module_type): def store_module(self, module, module_type):
@ -60,16 +108,6 @@ class EmbeddingMap:
# Types # Types
def store_type(self, host_type, instance_type, constructor_type): def store_type(self, host_type, instance_type, constructor_type):
self._rename_type(instance_type)
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
def _rename_type(self, new_instance_type):
# Generally, user-defined types that have exact same name (which is to say, classes # Generally, user-defined types that have exact same name (which is to say, classes
# defined inside functions) do not pose a problem to the compiler. The two places which # defined inside functions) do not pose a problem to the compiler. The two places which
# cannot handle this are: # cannot handle this are:
@ -78,12 +116,29 @@ class EmbeddingMap:
# Since handling #2 requires renaming on ARTIQ side anyway, it's more straightforward # Since handling #2 requires renaming on ARTIQ side anyway, it's more straightforward
# to do it once when embedding (since non-embedded code cannot define classes in # to do it once when embedding (since non-embedded code cannot define classes in
# functions). Also, easier to debug. # functions). Also, easier to debug.
n = 0 suffix = 0
for host_type in self.type_map: new_instance_name = instance_type.name
instance_type, constructor_type = self.type_map[host_type] new_constructor_name = constructor_type.name
if instance_type.name == new_instance_type.name: while True:
n += 1 if (new_instance_name not in self.used_instance_type_names
new_instance_type.name = "{}.{}".format(new_instance_type.name, n) and new_constructor_name not in self.used_constructor_type_names):
break
suffix += 1
new_instance_name = f"{instance_type.name}.{suffix}"
new_constructor_name = f"{constructor_type.name}.{suffix}"
self.used_instance_type_names.add(new_instance_name)
instance_type.name = new_instance_name
self.used_constructor_type_names.add(new_constructor_name)
constructor_type.name = new_constructor_name
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
def attribute_count(self): def attribute_count(self):
count = 0 count = 0
@ -130,7 +185,22 @@ class EmbeddingMap:
obj_typ, _ = self.type_map[type(obj_ref)] obj_typ, _ = self.type_map[type(obj_ref)]
yield obj_id, obj_ref, obj_typ yield obj_id, obj_ref, obj_typ
def subkernels(self):
subkernels = {}
for k, v in self.object_forward_map.items():
if hasattr(v, "artiq_embedded"):
if v.artiq_embedded.destination is not None:
subkernels[k] = v
return subkernels
def has_rpc(self): def has_rpc(self):
return any(filter(
lambda x: (inspect.isfunction(x) or inspect.ismethod(x)) and \
(not hasattr(x, "artiq_embedded") or x.artiq_embedded.destination is None),
self.object_forward_map.values()
))
def has_rpc_or_subkernel(self):
return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x), return any(filter(lambda x: inspect.isfunction(x) or inspect.ismethod(x),
self.object_forward_map.values())) self.object_forward_map.values()))
@ -138,6 +208,7 @@ class EmbeddingMap:
class ASTSynthesizer: class ASTSynthesizer:
def __init__(self, embedding_map, value_map, quote_function=None, expanded_from=None): def __init__(self, embedding_map, value_map, quote_function=None, expanded_from=None):
self.source = "" self.source = ""
self.source_last_new_line = 0
self.source_buffer = source.Buffer(self.source, "<synthesized>") self.source_buffer = source.Buffer(self.source, "<synthesized>")
self.embedding_map = embedding_map self.embedding_map = embedding_map
self.value_map = value_map self.value_map = value_map
@ -156,16 +227,90 @@ class ASTSynthesizer:
return source.Range(self.source_buffer, range_from, range_to, return source.Range(self.source_buffer, range_from, range_to,
expanded_from=self.expanded_from) expanded_from=self.expanded_from)
def _add_iterable(self, fragment):
# Since DILocation points at the beginning of the piece of source
# we don't care if the fragment's end will overflow LLVM's limit.
if len(self.source) - self.source_last_new_line >= 2**16:
fragment = "\\\n" + fragment
self.source_last_new_line = len(self.source) + 2
return self._add(fragment)
def fast_quote_list(self, value):
elts = [None] * len(value)
is_T = False
if len(value) > 0:
v = value[0]
is_T = True
if isinstance(v, int):
T = int
elif isinstance(v, float):
T = float
elif isinstance(v, numpy.int32):
T = numpy.int32
elif isinstance(v, numpy.int64):
T = numpy.int64
else:
is_T = False
if is_T:
for v in value:
if not isinstance(v, T):
is_T = False
break
if is_T:
is_int = T != float
if T == int:
typ = builtins.TInt()
elif T == float:
typ = builtins.TFloat()
elif T == numpy.int32:
typ = builtins.TInt32()
elif T == numpy.int64:
typ = builtins.TInt64()
else:
assert False
text = [repr(elt) for elt in value]
start = len(self.source)
self.source += ", ".join(text)
if is_int:
for i, (v, t) in enumerate(zip(value, text)):
l = len(t)
elts[i] = asttyped.NumT(
n=int(v), ctx=None, type=typ,
loc=source.Range(
self.source_buffer, start, start + l,
expanded_from=self.expanded_from))
start += l + 2
else:
for i, (v, t) in enumerate(zip(value, text)):
l = len(t)
elts[i] = asttyped.NumT(
n=v, ctx=None, type=typ,
loc=source.Range(
self.source_buffer, start, start + l,
expanded_from=self.expanded_from))
start += l + 2
else:
for index, elt in enumerate(value):
elts[index] = self.quote(elt)
if index < len(value) - 1:
self._add_iterable(", ")
return elts
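The fast path only fires when every element of the list shares one of four scalar types; anything else falls back to quoting element by element. A simplified restatement of that check in plain Python (illustration only, not the compiler API):

import numpy

def homogeneous_scalar_type(values):
    if not values:
        return None
    for T in (int, float, numpy.int32, numpy.int64):
        if isinstance(values[0], T):
            return T if all(isinstance(v, T) for v in values) else None
    return None

assert homogeneous_scalar_type([1, 2, 3]) is int
assert homogeneous_scalar_type([1, 2.0]) is None   # mixed: quote() is called per element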
def quote(self, value): def quote(self, value):
"""Construct an AST fragment equal to `value`.""" """Construct an AST fragment equal to `value`."""
if value is None: if value is None:
typ = builtins.TNone() typ = builtins.TNone()
return asttyped.NameConstantT(value=value, type=typ, return asttyped.NameConstantT(value=value, type=typ,
loc=self._add(repr(value))) loc=self._add(repr(value)))
elif value is True or value is False: elif isinstance(value, (bool, numpy.bool_)):
typ = builtins.TBool() typ = builtins.TBool()
return asttyped.NameConstantT(value=value, type=typ, coerced = bool(value)
loc=self._add(repr(value))) return asttyped.NameConstantT(value=coerced, type=typ,
loc=self._add(repr(coerced)))
elif value is float:
typ = builtins.fn_float()
return asttyped.NameConstantT(value=None, type=typ,
loc=self._add("float"))
elif value is numpy.int32: elif value is numpy.int32:
typ = builtins.fn_int32() typ = builtins.fn_int32()
return asttyped.NameConstantT(value=None, type=typ, return asttyped.NameConstantT(value=None, type=typ,
@ -199,54 +344,40 @@ class ASTSynthesizer:
loc=self._add(repr(value))) loc=self._add(repr(value)))
elif isinstance(value, str): elif isinstance(value, str):
return asttyped.StrT(s=value, ctx=None, type=builtins.TStr(), return asttyped.StrT(s=value, ctx=None, type=builtins.TStr(),
loc=self._add(repr(value))) loc=self._add_iterable(repr(value)))
elif isinstance(value, bytes): elif isinstance(value, bytes):
return asttyped.StrT(s=value, ctx=None, type=builtins.TBytes(), return asttyped.StrT(s=value, ctx=None, type=builtins.TBytes(),
loc=self._add(repr(value))) loc=self._add_iterable(repr(value)))
elif isinstance(value, bytearray): elif isinstance(value, bytearray):
quote_loc = self._add('`') quote_loc = self._add_iterable('`')
repr_loc = self._add(repr(value)) repr_loc = self._add_iterable(repr(value))
unquote_loc = self._add('`') unquote_loc = self._add_iterable('`')
loc = quote_loc.join(unquote_loc) loc = quote_loc.join(unquote_loc)
return asttyped.QuoteT(value=value, type=builtins.TByteArray(), loc=loc) return asttyped.QuoteT(value=value, type=builtins.TByteArray(), loc=loc)
elif isinstance(value, list): elif isinstance(value, list):
begin_loc = self._add("[") begin_loc = self._add_iterable("[")
elts = [] elts = self.fast_quote_list(value)
for index, elt in enumerate(value): end_loc = self._add_iterable("]")
elts.append(self.quote(elt))
if index < len(value) - 1:
self._add(", ")
end_loc = self._add("]")
return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(), return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(),
begin_loc=begin_loc, end_loc=end_loc, begin_loc=begin_loc, end_loc=end_loc,
loc=begin_loc.join(end_loc)) loc=begin_loc.join(end_loc))
elif isinstance(value, tuple): elif isinstance(value, tuple):
begin_loc = self._add("(") begin_loc = self._add_iterable("(")
elts = [] elts = self.fast_quote_list(value)
for index, elt in enumerate(value): end_loc = self._add_iterable(")")
elts.append(self.quote(elt))
self._add(", ")
end_loc = self._add(")")
return asttyped.TupleT(elts=elts, ctx=None, return asttyped.TupleT(elts=elts, ctx=None,
type=types.TTuple([e.type for e in elts]), type=types.TTuple([e.type for e in elts]),
begin_loc=begin_loc, end_loc=end_loc, begin_loc=begin_loc, end_loc=end_loc,
loc=begin_loc.join(end_loc)) loc=begin_loc.join(end_loc))
elif isinstance(value, numpy.ndarray): elif isinstance(value, numpy.ndarray):
begin_loc = self._add("numpy.array([") return self.call(numpy.array, [list(value)], {})
elts = []
for index, elt in enumerate(value):
elts.append(self.quote(elt))
if index < len(value) - 1:
self._add(", ")
end_loc = self._add("])")
return asttyped.ListT(elts=elts, ctx=None, type=builtins.TArray(),
begin_loc=begin_loc, end_loc=end_loc,
loc=begin_loc.join(end_loc))
elif inspect.isfunction(value) or inspect.ismethod(value) or \ elif inspect.isfunction(value) or inspect.ismethod(value) or \
isinstance(value, pytypes.BuiltinFunctionType) or \ isinstance(value, pytypes.BuiltinFunctionType) or \
isinstance(value, SpecializedFunction): isinstance(value, SpecializedFunction) or \
isinstance(value, numpy.ufunc) or \
(isinstance(value, _ArrayFunctionDispatcher) if
_ArrayFunctionDispatcher is not None else False):
if inspect.ismethod(value): if inspect.ismethod(value):
quoted_self = self.quote(value.__self__) quoted_self = self.quote(value.__self__)
function_type = self.quote_function(value.__func__, self.expanded_from) function_type = self.quote_function(value.__func__, self.expanded_from)
@ -355,7 +486,7 @@ class ASTSynthesizer:
return asttyped.QuoteT(value=value, type=instance_type, return asttyped.QuoteT(value=value, type=instance_type,
loc=loc) loc=loc)
def call(self, callee, args, kwargs, callback=None): def call(self, callee, args, kwargs, callback=None, remote_fn=False):
""" """
Construct an AST fragment calling a function specified by Construct an AST fragment calling a function specified by
an AST node `function_node`, with given arguments. an AST node `function_node`, with given arguments.
@ -399,7 +530,7 @@ class ASTSynthesizer:
starargs=None, kwargs=None, starargs=None, kwargs=None,
type=types.TVar(), iodelay=None, arg_exprs={}, type=types.TVar(), iodelay=None, arg_exprs={},
begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None, begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None,
loc=callee_node.loc.join(end_loc)) loc=callee_node.loc.join(end_loc), remote_fn=remote_fn)
if callback is not None: if callback is not None:
node = asttyped.CallT( node = asttyped.CallT(
@ -434,7 +565,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
arg=node.arg, annotation=None, arg=node.arg, annotation=None,
arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc) arg_loc=node.arg_loc, colon_loc=node.colon_loc, loc=node.loc)
def visit_quoted_function(self, node, function): def visit_quoted_function(self, node, function, remote_fn):
extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine) extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
extractor.visit(node) extractor.visit(node)
@ -451,11 +582,11 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
node = asttyped.QuotedFunctionDefT( node = asttyped.QuotedFunctionDefT(
typing_env=extractor.typing_env, globals_in_scope=extractor.global_, typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
signature_type=types.TVar(), return_type=types.TVar(), signature_type=types.TVar(), return_type=types.TVar(),
name=node.name, args=node.args, returns=node.returns, name=node.name, args=node.args, returns=None,
body=node.body, decorator_list=node.decorator_list, body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc, keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs, arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc) loc=node.loc, remote_fn=remote_fn)
try: try:
self.env_stack.append(node.typing_env) self.env_stack.append(node.typing_env)
@ -527,7 +658,7 @@ class StitchingInferencer(Inferencer):
self.engine.process(diag) self.engine.process(diag)
return return
# Figure out what ARTIQ type does the value of the attribute have. # Figure out the ARTIQ type of the value of the attribute.
# We do this by quoting it, as if to serialize. This has some # We do this by quoting it, as if to serialize. This has some
# overhead (i.e. synthesizing a source buffer), but has the advantage # overhead (i.e. synthesizing a source buffer), but has the advantage
# of having the host-to-ARTIQ mapping code in only one place and # of having the host-to-ARTIQ mapping code in only one place and
@ -663,7 +794,7 @@ class TypedtreeHasher(algorithm.Visitor):
return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields)) return hash(tuple(freeze(getattr(node, field_name)) for field_name in fields))
class Stitcher: class Stitcher:
def __init__(self, core, dmgr, engine=None, print_as_rpc=True): def __init__(self, core, dmgr, engine=None, print_as_rpc=True, destination=0, subkernel_arg_types=[]):
self.core = core self.core = core
self.dmgr = dmgr self.dmgr = dmgr
if engine is None: if engine is None:
@ -687,12 +818,21 @@ class Stitcher:
self.embedding_map = EmbeddingMap() self.embedding_map = EmbeddingMap()
self.value_map = defaultdict(lambda: []) self.value_map = defaultdict(lambda: [])
self.definitely_changed = False
self.destination = destination
self.first_call = True
# for non-annotated subkernels:
# main kernel inferencer output with types of arguments
self.subkernel_arg_types = subkernel_arg_types
def stitch_call(self, function, args, kwargs, callback=None): def stitch_call(self, function, args, kwargs, callback=None):
# We synthesize source code for the initial call so that # We synthesize source code for the initial call so that
# diagnostics would have something meaningful to display to the user. # diagnostics would have something meaningful to display to the user.
synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function)) synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function))
call_node = synthesizer.call(function, args, kwargs, callback) # first call of a subkernel will get its arguments from remote (DRTIO)
remote_fn = self.destination != 0
call_node = synthesizer.call(function, args, kwargs, callback, remote_fn=remote_fn)
synthesizer.finalize() synthesizer.finalize()
self.typedtree.append(call_node) self.typedtree.append(call_node)
@ -707,13 +847,19 @@ class Stitcher:
old_attr_count = None old_attr_count = None
while True: while True:
inferencer.visit(self.typedtree) inferencer.visit(self.typedtree)
typedtree_hash = typedtree_hasher.visit(self.typedtree) if self.definitely_changed:
attr_count = self.embedding_map.attribute_count() changed = True
self.definitely_changed = False
else:
typedtree_hash = typedtree_hasher.visit(self.typedtree)
attr_count = self.embedding_map.attribute_count()
changed = old_attr_count != attr_count or \
old_typedtree_hash != typedtree_hash
old_typedtree_hash = typedtree_hash
old_attr_count = attr_count
if old_typedtree_hash == typedtree_hash and old_attr_count == attr_count: if not changed:
break break
old_typedtree_hash = typedtree_hash
old_attr_count = attr_count
# After we've discovered every referenced attribute, check if any kernel_invariant # After we've discovered every referenced attribute, check if any kernel_invariant
# specifications refer to ones we didn't encounter. # specifications refer to ones we didn't encounter.
@ -765,6 +911,9 @@ class Stitcher:
if hasattr(function, 'artiq_embedded') and function.artiq_embedded.function: if hasattr(function, 'artiq_embedded') and function.artiq_embedded.function:
function = function.artiq_embedded.function function = function.artiq_embedded.function
if isinstance(function, str):
return source.Range(source.Buffer(function, "<string>"), 0, 0)
filename = function.__code__.co_filename filename = function.__code__.co_filename
line = function.__code__.co_firstlineno line = function.__code__.co_firstlineno
name = function.__code__.co_name name = function.__code__.co_name
@ -795,6 +944,10 @@ class Stitcher:
return [diagnostic.Diagnostic("note", return [diagnostic.Diagnostic("note",
"in kernel function here", {}, "in kernel function here", {},
call_loc)] call_loc)]
elif fn_kind == 'subkernel':
return [diagnostic.Diagnostic("note",
"in subkernel call here", {},
call_loc)]
else: else:
assert False assert False
else: else:
@ -814,7 +967,7 @@ class Stitcher:
self._function_loc(function), self._function_loc(function),
notes=self._call_site_note(loc, fn_kind)) notes=self._call_site_note(loc, fn_kind))
self.engine.process(diag) self.engine.process(diag)
elif fn_kind == 'rpc' and param.default is not inspect.Parameter.empty: elif fn_kind == 'rpc' or fn_kind == 'subkernel' and param.default is not inspect.Parameter.empty:
notes = [] notes = []
notes.append(diagnostic.Diagnostic("note", notes.append(diagnostic.Diagnostic("note",
"expanded from here while trying to infer a type for an" "expanded from here while trying to infer a type for an"
@ -833,11 +986,21 @@ class Stitcher:
Inferencer(engine=self.engine).visit(ast) Inferencer(engine=self.engine).visit(ast)
IntMonomorphizer(engine=self.engine).visit(ast) IntMonomorphizer(engine=self.engine).visit(ast)
return ast.type return ast.type
else: elif fn_kind == 'kernel' and self.first_call and self.destination != 0:
# Let the rest of the program decide. # subkernels do not have access to the main kernel code to infer
return types.TVar() # arg types - so these are cached and passed onto subkernel
# compilation, to avoid having to annotate them fully
for name, typ in self.subkernel_arg_types:
if param.name == name:
return typ
# Let the rest of the program decide.
return types.TVar()
def _quote_embedded_function(self, function, flags, remote_fn=False):
# we are now parsing new functions... definitely changed the type
self.definitely_changed = True
def _quote_embedded_function(self, function, flags):
if isinstance(function, SpecializedFunction): if isinstance(function, SpecializedFunction):
host_function = function.host_function host_function = function.host_function
else: else:
@ -848,10 +1011,20 @@ class Stitcher:
# Extract function source. # Extract function source.
embedded_function = host_function.artiq_embedded.function embedded_function = host_function.artiq_embedded.function
source_code = inspect.getsource(embedded_function) if isinstance(embedded_function, str):
filename = embedded_function.__code__.co_filename # This is a function to be eval'd from the given source code in string form.
module_name = embedded_function.__globals__['__name__'] # Mangle the host function's id() into the fully qualified name to make sure
first_line = embedded_function.__code__.co_firstlineno # there are no collisions.
source_code = embedded_function
embedded_function = host_function
filename = "<string>"
module_name = "__eval_{}".format(id(host_function))
first_line = 1
else:
source_code = inspect.getsource(embedded_function)
filename = embedded_function.__code__.co_filename
module_name = embedded_function.__globals__['__name__']
first_line = embedded_function.__code__.co_firstlineno
# Extract function annotation. # Extract function annotation.
signature = inspect.signature(embedded_function) signature = inspect.signature(embedded_function)
@ -894,13 +1067,11 @@ class Stitcher:
# Parse. # Parse.
source_buffer = source.Buffer(source_code, filename, first_line) source_buffer = source.Buffer(source_code, filename, first_line)
lexer = source_lexer.Lexer(source_buffer, version=sys.version_info[0:2], lexer = source_lexer.Lexer(source_buffer, version=(3, 6), diagnostic_engine=self.engine)
diagnostic_engine=self.engine)
lexer.indent = [(initial_indent, lexer.indent = [(initial_indent,
source.Range(source_buffer, 0, len(initial_whitespace)), source.Range(source_buffer, 0, len(initial_whitespace)),
initial_whitespace)] initial_whitespace)]
parser = source_parser.Parser(lexer, version=sys.version_info[0:2], parser = source_parser.Parser(lexer, version=(3, 6), diagnostic_engine=self.engine)
diagnostic_engine=self.engine)
function_node = parser.file_input().body[0] function_node = parser.file_input().body[0]
# Mangle the name, since we put everything into a single module. # Mangle the name, since we put everything into a single module.
@ -925,7 +1096,7 @@ class Stitcher:
engine=self.engine, prelude=self.prelude, engine=self.engine, prelude=self.prelude,
globals=self.globals, host_environment=host_environment, globals=self.globals, host_environment=host_environment,
quote=self._quote) quote=self._quote)
function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function) function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function, remote_fn)
function_node.flags = flags function_node.flags = flags
# Add it into our typedtree so that it gets inferenced and codegen'd. # Add it into our typedtree so that it gets inferenced and codegen'd.
@ -937,23 +1108,108 @@ class Stitcher:
return function_node return function_node
def _extract_annot(self, function, annot, kind, call_loc, fn_kind): def _extract_annot(self, function, annot, kind, call_loc, fn_kind):
if not isinstance(annot, types.Type): if isinstance(function, SpecializedFunction):
diag = diagnostic.Diagnostic("error", host_function = function.host_function
"type annotation for {kind}, '{annot}', is not an ARTIQ type",
{"kind": kind, "annot": repr(annot)},
self._function_loc(function),
notes=self._call_site_note(call_loc, fn_kind))
self.engine.process(diag)
return types.TVar()
else: else:
host_function = function
if hasattr(host_function, 'artiq_embedded'):
embedded_function = host_function.artiq_embedded.function
else:
embedded_function = host_function
if isinstance(embedded_function, str):
embedded_function = host_function
return self._to_artiq_type(
annot,
function=function,
kind=kind,
eval_in_scope=lambda x: eval(x, embedded_function.__globals__),
call_loc=call_loc,
fn_kind=fn_kind)
def _to_artiq_type(
self, annot, *, function, kind: str, eval_in_scope, call_loc: str, fn_kind: str
) -> types.Type:
if isinstance(annot, str):
try:
annot = eval_in_scope(annot)
except Exception:
diag = diagnostic.Diagnostic(
"error",
"type annotation for {kind}, {annot}, cannot be evaluated",
{"kind": kind, "annot": repr(annot)},
self._function_loc(function),
notes=self._call_site_note(call_loc, fn_kind))
self.engine.process(diag)
if isinstance(annot, types.Type):
return annot return annot
# Convert built-in Python types to ARTIQ ones.
if annot is None:
return builtins.TNone()
elif annot is numpy.int64:
return builtins.TInt64()
elif annot is numpy.int32:
return builtins.TInt32()
elif annot is float:
return builtins.TFloat()
elif annot is bool:
return builtins.TBool()
elif annot is str:
return builtins.TStr()
elif annot is bytes:
return builtins.TBytes()
elif annot is bytearray:
return builtins.TByteArray()
# Convert generic Python types to ARTIQ ones.
generic_ty = typing.get_origin(annot)
if generic_ty is not None:
type_args = typing.get_args(annot)
artiq_args = [
self._to_artiq_type(
x,
function=function,
kind=kind,
eval_in_scope=eval_in_scope,
call_loc=call_loc,
fn_kind=fn_kind)
for x in type_args
]
if generic_ty is list and len(artiq_args) == 1:
return builtins.TList(artiq_args[0])
elif generic_ty is tuple:
return types.TTuple(artiq_args)
# Otherwise report an unknown type and just use a fresh tyvar.
if annot is int:
message = (
"type annotation for {kind}, 'int' cannot be used as an ARTIQ type. "
"Use numpy's int32 or int64 instead."
)
ty = builtins.TInt()
else:
message = "type annotation for {kind}, '{annot}', is not an ARTIQ type"
ty = types.TVar()
diag = diagnostic.Diagnostic("error",
message,
{"kind": kind, "annot": repr(annot)},
self._function_loc(function),
notes=self._call_site_note(call_loc, fn_kind))
self.engine.process(diag)
return ty
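In user code, the effect of the new conversion is that plain host types and typing generics can now be used directly as annotations; a hedged sketch (the rpc decorator comes from artiq.experiment, the bodies are placeholders, and plain int is still rejected in favour of numpy.int32/int64):

from typing import List, Tuple
import numpy
from artiq.experiment import rpc

@rpc
def fetch_calibration(channel: numpy.int32) -> List[float]:
    ...

@rpc
def fetch_pair() -> Tuple[numpy.int64, float]:
    ...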
def _quote_syscall(self, function, loc): def _quote_syscall(self, function, loc):
signature = inspect.signature(function) signature = inspect.signature(function)
arg_types = OrderedDict() arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values(): for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD: if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
@ -985,9 +1241,43 @@ class Stitcher:
self.engine.process(diag) self.engine.process(diag)
ret_type = types.TVar() ret_type = types.TVar()
function_type = types.TCFunction(arg_types, ret_type, function_type = types.TExternalFunction(arg_types, ret_type,
name=function.artiq_embedded.syscall, name=function.artiq_embedded.syscall,
flags=function.artiq_embedded.flags) flags=function.artiq_embedded.flags)
self.functions[function] = function_type
return function_type
def _quote_subkernel(self, function, loc):
if isinstance(function, SpecializedFunction):
host_function = function.host_function
else:
host_function = function
ret_type = builtins.TNone()
signature = inspect.signature(host_function)
if signature.return_annotation is not inspect.Signature.empty:
ret_type = self._extract_annot(host_function, signature.return_annotation,
"return type", loc, fn_kind='subkernel')
arg_types = OrderedDict()
optarg_types = OrderedDict()
for param in signature.parameters.values():
if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
diag = diagnostic.Diagnostic("error",
"subkernels must only use positional arguments; '{argument}' isn't",
{"argument": param.name},
self._function_loc(function),
notes=self._call_site_note(loc, fn_kind='subkernel'))
self.engine.process(diag)
arg_type = self._type_of_param(function, loc, param, fn_kind='subkernel')
if param.default is inspect.Parameter.empty:
arg_types[param.name] = arg_type
else:
optarg_types[param.name] = arg_type
function_type = types.TSubkernel(arg_types, optarg_types, ret_type,
sid=self.embedding_map.store_object(host_function),
destination=host_function.artiq_embedded.destination)
self.functions[function] = function_type self.functions[function] = function_type
return function_type return function_type
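On the user side this corresponds to the subkernel decorator; a hedged sketch of an experiment using it (the scaffolding is illustrative, destination=1 targets DRTIO satellite 1, and subkernel_await maps to the fn_subkernel_await builtin added earlier in this diff):

import numpy
from artiq.experiment import EnvExperiment, kernel, subkernel

class Demo(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @subkernel(destination=1)                # compiled for and run on satellite 1
    def pulse_remote(self, width: numpy.int32):
        pass

    @kernel
    def run(self):
        self.pulse_remote(100)               # starts asynchronously on the satellite
        subkernel_await(self.pulse_remote)   # blocks until the subkernel returns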
@ -1041,13 +1331,27 @@ class Stitcher:
host_function = function host_function = function
if function in self.functions: if function in self.functions:
pass return self.functions[function]
math_type = math_fns.match(function)
if math_type is not None:
self.functions[function] = math_type
elif not hasattr(host_function, "artiq_embedded") or \ elif not hasattr(host_function, "artiq_embedded") or \
(host_function.artiq_embedded.core_name is None and (host_function.artiq_embedded.core_name is None and
host_function.artiq_embedded.portable is False and host_function.artiq_embedded.portable is False and
host_function.artiq_embedded.syscall is None and host_function.artiq_embedded.syscall is None and
host_function.artiq_embedded.destination is None and
host_function.artiq_embedded.forbidden is False): host_function.artiq_embedded.forbidden is False):
self._quote_rpc(function, loc) self._quote_rpc(function, loc)
elif host_function.artiq_embedded.destination is not None and \
host_function.artiq_embedded.destination != self.destination:
# treat subkernels as kernels if running on the same device
if not 0 < host_function.artiq_embedded.destination <= 255:
diag = diagnostic.Diagnostic("error",
"subkernel destination must be between 1 and 255 (inclusive)", {},
self._function_loc(host_function))
self.engine.process(diag)
self._quote_subkernel(function, loc)
elif host_function.artiq_embedded.function is not None: elif host_function.artiq_embedded.function is not None:
if host_function.__name__ == "<lambda>": if host_function.__name__ == "<lambda>":
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
@ -1071,8 +1375,13 @@ class Stitcher:
notes=[note]) notes=[note])
self.engine.process(diag) self.engine.process(diag)
destination = host_function.artiq_embedded.destination
# remote_fn only for first call in subkernels
remote_fn = destination is not None and self.first_call
self._quote_embedded_function(function, self._quote_embedded_function(function,
flags=host_function.artiq_embedded.flags) flags=host_function.artiq_embedded.flags,
remote_fn=remote_fn)
self.first_call = False
elif host_function.artiq_embedded.syscall is not None: elif host_function.artiq_embedded.syscall is not None:
# Insert a storage-less global whose type instructs the compiler # Insert a storage-less global whose type instructs the compiler
# to perform a system call instead of a regular call. # to perform a system call instead of a regular call.

View File

@ -36,6 +36,48 @@ class TKeyword(types.TMono):
def is_keyword(typ): def is_keyword(typ):
return isinstance(typ, TKeyword) return isinstance(typ, TKeyword)
# See rpc_proto.rs and comm_kernel.py:_{send,receive}_rpc_value.
def rpc_tag(typ, error_handler):
typ = typ.find()
if types.is_tuple(typ):
assert len(typ.elts) < 256
return b"t" + bytes([len(typ.elts)]) + \
b"".join([rpc_tag(elt_type, error_handler)
for elt_type in typ.elts])
elif builtins.is_none(typ):
return b"n"
elif builtins.is_bool(typ):
return b"b"
elif builtins.is_int(typ, types.TValue(32)):
return b"i"
elif builtins.is_int(typ, types.TValue(64)):
return b"I"
elif builtins.is_float(typ):
return b"f"
elif builtins.is_str(typ):
return b"s"
elif builtins.is_bytes(typ):
return b"B"
elif builtins.is_bytearray(typ):
return b"A"
elif builtins.is_list(typ):
return b"l" + rpc_tag(builtins.get_iterable_elt(typ), error_handler)
elif builtins.is_array(typ):
num_dims = typ["num_dims"].value
return b"a" + bytes([num_dims]) + rpc_tag(typ["elt"], error_handler)
elif builtins.is_range(typ):
return b"r" + rpc_tag(builtins.get_iterable_elt(typ), error_handler)
elif is_keyword(typ):
return b"k" + rpc_tag(typ.params["value"], error_handler)
elif types.is_function(typ) or types.is_method(typ) or types.is_rpc(typ):
raise ValueError("RPC tag for functional value")
elif '__objectid__' in typ.attributes:
return b"O"
else:
error_handler(typ)
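For illustration, a minimal sketch of the byte encoding the function above produces for a nested type; the type constructors are taken from artiq.compiler and the expected bytes simply follow the branches of rpc_tag():

from artiq.compiler import builtins, ir, types

def no_tag(typ):
    raise TypeError("no RPC tag for {}".format(typ))

# A tuple of (int32, list of float):
typ = types.TTuple([builtins.TInt32(), builtins.TList(builtins.TFloat())])
# b"t" marks a tuple, bytes([2]) its arity, then one tag per element:
# b"i" for int32 and b"l" + b"f" for a list of float.
assert ir.rpc_tag(typ, no_tag) == b"t\x02ilf"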
class Value: class Value:
""" """
An SSA value that keeps track of its uses. An SSA value that keeps track of its uses.
@ -93,6 +135,7 @@ class NamedValue(Value):
def __init__(self, typ, name): def __init__(self, typ, name):
super().__init__(typ) super().__init__(typ)
self.name, self.function = name, None self.name, self.function = name, None
self.is_removed = False
def set_name(self, new_name): def set_name(self, new_name):
if self.function is not None: if self.function is not None:
@ -193,7 +236,7 @@ class Instruction(User):
self.drop_references() self.drop_references()
# Check this after drop_references in case this # Check this after drop_references in case this
# is a self-referencing phi. # is a self-referencing phi.
assert not any(self.uses) assert all(use.is_removed for use in self.uses)
def replace_with(self, value): def replace_with(self, value):
self.replace_all_uses_with(value) self.replace_all_uses_with(value)
@ -328,7 +371,7 @@ class BasicBlock(NamedValue):
self.remove_from_parent() self.remove_from_parent()
# Check this after erasing instructions in case the block # Check this after erasing instructions in case the block
# loops into itself. # loops into itself.
assert not any(self.uses) assert all(use.is_removed for use in self.uses)
def prepend(self, insn): def prepend(self, insn):
assert isinstance(insn, Instruction) assert isinstance(insn, Instruction)
@ -663,6 +706,81 @@ class SetLocal(Instruction):
def value(self): def value(self):
return self.operands[1] return self.operands[1]
class GetArgFromRemote(Instruction):
"""
An instruction that receives function arguments from remote
(i.e. subkernel in DRTIO context)
:ivar arg_name: (string) argument name
:ivar arg_type: argument type
"""
"""
:param arg_name: (string) argument name
:param arg_type: argument type
"""
def __init__(self, arg_name, arg_type, name=""):
assert isinstance(arg_name, str)
super().__init__([], arg_type, name)
self.arg_name = arg_name
self.arg_type = arg_type
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.arg_name = self.arg_name
self_copy.arg_type = self.arg_type
return self_copy
def opcode(self):
return "getargfromremote({})".format(repr(self.arg_name))
class GetOptArgFromRemote(GetArgFromRemote):
"""
An instruction that may or may not retrieve an optional function argument
from remote, depending on the number of values received by firmware.
:ivar rcv_count: number of received values,
determined by firmware
:ivar index: (integer) index of the current argument,
in reference to remote arguments
"""
"""
:param rcv_count: number of received values
:param index: (integer) index of the current argument,
in reference to remote arguments
"""
def __init__(self, arg_name, arg_type, rcv_count, index, name=""):
super().__init__(arg_name, arg_type, name)
self.rcv_count = rcv_count
self.index = index
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.rcv_count = self.rcv_count
self_copy.index = self.index
return self_copy
def opcode(self):
return "getoptargfromremote({})".format(repr(self.arg_name))
class SubkernelAwaitArgs(Instruction):
"""
A builtin instruction that takes min and max received messages as operands,
and a list of received types.
:ivar arg_types: (list of types) types of passed arguments (including optional)
"""
"""
:param arg_types: (list of types) types of passed arguments (including optional)
"""
def __init__(self, operands, arg_types, name=None):
assert isinstance(arg_types, list)
self.arg_types = arg_types
super().__init__(operands, builtins.TNone(), name)
class GetAttr(Instruction): class GetAttr(Instruction):
""" """
An instruction that loads an attribute from an object, An instruction that loads an attribute from an object,
@ -685,7 +803,7 @@ class GetAttr(Instruction):
typ = obj.type.attributes[attr] typ = obj.type.attributes[attr]
else: else:
typ = obj.type.constructor.attributes[attr] typ = obj.type.constructor.attributes[attr]
if types.is_function(typ) or types.is_rpc(typ): if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
typ = types.TMethod(obj.type, typ) typ = types.TMethod(obj.type, typ)
super().__init__([obj], typ, name) super().__init__([obj], typ, name)
self.attr = attr self.attr = attr
@ -738,6 +856,33 @@ class SetAttr(Instruction):
def value(self): def value(self):
return self.operands[1] return self.operands[1]
class Offset(Instruction):
"""
An instruction that adds an offset to a pointer (indexes into a list).
This is used to represent internally generated pointer arithmetic, and must
remain inside the same object (see :class:`GetElem` and LLVM's GetElementPtr).
"""
"""
:param lst: (:class:`Value`) list
:param index: (:class:`Value`) index
"""
def __init__(self, base, offset, name=""):
assert isinstance(base, Value)
assert isinstance(offset, Value)
typ = types._TPointer(builtins.get_iterable_elt(base.type))
super().__init__([base, offset], typ, name)
def opcode(self):
return "offset"
def base(self):
return self.operands[0]
def index(self):
return self.operands[1]
class GetElem(Instruction): class GetElem(Instruction):
""" """
An instruction that loads an element from a list. An instruction that loads an element from a list.
@ -755,7 +900,7 @@ class GetElem(Instruction):
def opcode(self): def opcode(self):
return "getelem" return "getelem"
def list(self): def base(self):
return self.operands[0] return self.operands[0]
def index(self): def index(self):
@ -781,7 +926,7 @@ class SetElem(Instruction):
def opcode(self): def opcode(self):
return "setelem" return "setelem"
def list(self): def base(self):
return self.operands[0] return self.operands[0]
def index(self): def index(self):
@ -840,6 +985,7 @@ class Arith(Instruction):
def rhs(self): def rhs(self):
return self.operands[1] return self.operands[1]
class Compare(Instruction): class Compare(Instruction):
""" """
A comparison operation on numbers. A comparison operation on numbers.
@ -1119,14 +1265,18 @@ class IndirectBranch(Terminator):
class Return(Terminator): class Return(Terminator):
""" """
A return instruction. A return instruction.
:param remote_return: (bool)
marks a return in subkernel context,
where the return value is sent back through DRTIO
""" """
""" """
:param value: (:class:`Value`) return value :param value: (:class:`Value`) return value
""" """
def __init__(self, value, name=""): def __init__(self, value, remote_return=False, name=""):
assert isinstance(value, Value) assert isinstance(value, Value)
super().__init__([value], builtins.TNone(), name) super().__init__([value], builtins.TNone(), name)
self.remote_return = remote_return
def opcode(self): def opcode(self):
return "return" return "return"
@ -1175,9 +1325,9 @@ class Raise(Terminator):
if len(self.operands) > 1: if len(self.operands) > 1:
return self.operands[1] return self.operands[1]
class Reraise(Terminator): class Resume(Terminator):
""" """
A reraise instruction. A resume instruction.
""" """
""" """
@ -1191,7 +1341,7 @@ class Reraise(Terminator):
super().__init__(operands, builtins.TNone(), name) super().__init__(operands, builtins.TNone(), name)
def opcode(self): def opcode(self):
return "reraise" return "resume"
def exception_target(self): def exception_target(self):
if len(self.operands) > 0: if len(self.operands) > 0:
@ -1277,6 +1427,7 @@ class LandingPad(Terminator):
def __init__(self, cleanup, name=""): def __init__(self, cleanup, name=""):
super().__init__([cleanup], builtins.TException(), name) super().__init__([cleanup], builtins.TException(), name)
self.types = [] self.types = []
self.has_cleanup = True
def copy(self, mapper): def copy(self, mapper):
self_copy = super().copy(mapper) self_copy = super().copy(mapper)

70
artiq/compiler/kernel.ld Normal file
View File

@ -0,0 +1,70 @@
/* Force ld to mark the ELF header as loadable. */
PHDRS
{
headers PT_LOAD FILEHDR PHDRS ;
text PT_LOAD ;
data PT_LOAD ;
dynamic PT_DYNAMIC ;
eh_frame PT_GNU_EH_FRAME ;
}
SECTIONS
{
/* Push back the .text section enough so that ld.lld does not complain */
. = SIZEOF_HEADERS;
.text :
{
*(.text .text.*)
} : text
.rodata :
{
*(.rodata .rodata.*)
}
.eh_frame :
{
KEEP(*(.eh_frame))
} : text
.eh_frame_hdr :
{
KEEP(*(.eh_frame_hdr))
} : text : eh_frame
.got :
{
*(.got)
} : text
.got.plt :
{
*(.got.plt)
} : text
.data :
{
*(.data .data.*)
} : data
.dynamic :
{
*(.dynamic)
} : data : dynamic
.bss (NOLOAD) : ALIGN(4)
{
__bss_start = .;
*(.sbss .sbss.* .bss .bss.*);
. = ALIGN(4);
_end = .;
}
/* Kernel stack grows downward from end of memory, so put guard page after
* all the program contents. Note: This requires all loaded sections (at
* least those accessed) to be explicitly listed in the above!
*/
. = ALIGN(0x1000);
_sstack_guard = .;
}

132
artiq/compiler/math_fns.py Normal file
View File

@ -0,0 +1,132 @@
r"""
The :mod:`math_fns` module lists math-related functions from NumPy recognized
by the ARTIQ compiler so host function objects can be :func:`match`\ ed to
the compiler type metadata describing their core device analogue.
"""
from collections import OrderedDict
import numpy
from . import builtins, types
# Some special mathematical functions are exposed via their scipy.special
# equivalents. Since the rest of the ARTIQ core does not depend on SciPy,
# gracefully handle it not being present, making the functions simply not
# available.
try:
import scipy.special as scipy_special
except ImportError:
scipy_special = None
#: float -> float numpy.* math functions for which llvm.* intrinsics exist.
unary_fp_intrinsics = [(name, "llvm." + name + ".f64") for name in [
"sin",
"cos",
"exp",
"exp2",
"log",
"log10",
"log2",
"fabs",
"floor",
"ceil",
"trunc",
"sqrt",
]] + [
# numpy.rint() seems to (NumPy 1.19.0, Python 3.8.5, Linux x86_64)
# implement round-to-even, but unfortunately, rust-lang/libm only
# provides round(), which always rounds away from zero.
#
# As there is no equivalent of the latter in NumPy (nor any other
# basic rounding function), expose round() as numpy.rint anyway,
# even if the rounding modes don't match up, so there is some way
# to do rounding on the core device. (numpy.round() has entirely
# different semantics; it rounds to a configurable number of
# decimals.)
("rint", "llvm.round.f64"),
]
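As a concrete illustration of the rounding caveat described in the comment above (host behaviour from NumPy; core-device behaviour implied by llvm.round.f64 semantics):

import numpy as np

# Host: numpy.rint rounds halfway cases to the nearest even integer.
assert np.rint(2.5) == 2.0
assert np.rint(3.5) == 4.0

# Core device: numpy.rint is lowered to llvm.round.f64, which rounds
# halfway cases away from zero, so the same expressions evaluate to
# 3.0 and 4.0 in kernel code.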
#: float -> float numpy.* math functions lowered to runtime calls.
unary_fp_runtime_calls = [
("tan", "tan"),
("arcsin", "asin"),
("arccos", "acos"),
("arctan", "atan"),
("sinh", "sinh"),
("cosh", "cosh"),
("tanh", "tanh"),
("arcsinh", "asinh"),
("arccosh", "acosh"),
("arctanh", "atanh"),
("expm1", "expm1"),
("cbrt", "cbrt"),
]
scipy_special_unary_runtime_calls = [
("erf", "erf"),
("erfc", "erfc"),
("gamma", "tgamma"),
("gammaln", "lgamma"),
("j0", "j0"),
("j1", "j1"),
("y0", "y0"),
("y1", "y1"),
]
# Not mapped: jv/yv, libm only supports integer orders.
#: (float, float) -> float numpy.* math functions lowered to runtime calls.
binary_fp_runtime_calls = [
("arctan2", "atan2"),
("copysign", "copysign"),
("fmax", "fmax"),
("fmin", "fmin"),
# ("ldexp", "ldexp"), # One argument is an int; would need a bit more plumbing.
("hypot", "hypot"),
("nextafter", "nextafter"),
]
#: Array handling builtins (special treatment due to allocations).
numpy_builtins = ["transpose"]
def fp_runtime_type(name, arity):
args = [("arg{}".format(i), builtins.TFloat()) for i in range(arity)]
return types.TExternalFunction(
OrderedDict(args),
builtins.TFloat(),
name,
# errno isn't observable from ARTIQ Python.
flags={"nounwind", "nowrite"},
broadcast_across_arrays=True)
math_fn_map = {
getattr(numpy, symbol): fp_runtime_type(mangle, arity=1)
for symbol, mangle in (unary_fp_intrinsics + unary_fp_runtime_calls)
}
for symbol, mangle in binary_fp_runtime_calls:
math_fn_map[getattr(numpy, symbol)] = fp_runtime_type(mangle, arity=2)
for name in numpy_builtins:
math_fn_map[getattr(numpy, name)] = types.TBuiltinFunction("numpy." + name)
if scipy_special is not None:
for symbol, mangle in scipy_special_unary_runtime_calls:
math_fn_map[getattr(scipy_special, symbol)] = fp_runtime_type(mangle, arity=1)
def match(obj):
return math_fn_map.get(obj, None)
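A small usage sketch of the lookup above (NumPy required, SciPy optional):

import numpy
from artiq.compiler import math_fns

# numpy.sin is recognized and mapped to the llvm.sin.f64 intrinsic with a
# float -> float external-function type.
assert math_fns.match(numpy.sin) is not None

# Host objects the compiler does not know about simply yield None.
assert math_fns.match(print) is None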

View File

@ -10,7 +10,7 @@ string and infers types for it using a trivial :module:`prelude`.
import os import os
from pythonparser import source, diagnostic, parse_buffer from pythonparser import source, diagnostic, parse_buffer
from . import prelude, types, transforms, analyses, validators from . import prelude, types, transforms, analyses, validators, embedding
class Source: class Source:
def __init__(self, source_buffer, engine=None): def __init__(self, source_buffer, engine=None):
@ -18,7 +18,7 @@ class Source:
self.engine = diagnostic.Engine(all_errors_are_fatal=True) self.engine = diagnostic.Engine(all_errors_are_fatal=True)
else: else:
self.engine = engine self.engine = engine
self.embedding_map = None self.embedding_map = embedding.EmbeddingMap()
self.name, _ = os.path.splitext(os.path.basename(source_buffer.name)) self.name, _ = os.path.splitext(os.path.basename(source_buffer.name))
asttyped_rewriter = transforms.ASTTypedRewriter(engine=engine, asttyped_rewriter = transforms.ASTTypedRewriter(engine=engine,
@ -57,7 +57,8 @@ class Module:
constness_validator = validators.ConstnessValidator(engine=self.engine) constness_validator = validators.ConstnessValidator(engine=self.engine)
artiq_ir_generator = transforms.ARTIQIRGenerator(engine=self.engine, artiq_ir_generator = transforms.ARTIQIRGenerator(engine=self.engine,
module_name=src.name, module_name=src.name,
ref_period=ref_period) ref_period=ref_period,
embedding_map=self.embedding_map)
dead_code_eliminator = transforms.DeadCodeEliminator(engine=self.engine) dead_code_eliminator = transforms.DeadCodeEliminator(engine=self.engine)
local_access_validator = validators.LocalAccessValidator(engine=self.engine) local_access_validator = validators.LocalAccessValidator(engine=self.engine)
local_demoter = transforms.LocalDemoter() local_demoter = transforms.LocalDemoter()
@ -83,6 +84,8 @@ class Module:
constant_hoister.process(self.artiq_ir) constant_hoister.process(self.artiq_ir)
if remarks: if remarks:
invariant_detection.process(self.artiq_ir) invariant_detection.process(self.artiq_ir)
# for subkernels: main kernel inferencer output, to be passed to further compilations
self.subkernel_arg_types = inferencer.subkernel_arg_types
def build_llvm_ir(self, target): def build_llvm_ir(self, target):
"""Compile the module to LLVM IR for the specified target.""" """Compile the module to LLVM IR for the specified target."""

View File

@ -25,6 +25,7 @@ def globals():
"IndexError": builtins.fn_IndexError(), "IndexError": builtins.fn_IndexError(),
"ValueError": builtins.fn_ValueError(), "ValueError": builtins.fn_ValueError(),
"ZeroDivisionError": builtins.fn_ZeroDivisionError(), "ZeroDivisionError": builtins.fn_ZeroDivisionError(),
"RuntimeError": builtins.fn_RuntimeError(),
# Built-in Python functions # Built-in Python functions
"len": builtins.fn_len(), "len": builtins.fn_len(),
@ -36,6 +37,7 @@ def globals():
# ARTIQ decorators # ARTIQ decorators
"kernel": builtins.fn_kernel(), "kernel": builtins.fn_kernel(),
"subkernel": builtins.fn_kernel(),
"portable": builtins.fn_kernel(), "portable": builtins.fn_kernel(),
"rpc": builtins.fn_kernel(), "rpc": builtins.fn_kernel(),
@ -43,7 +45,6 @@ def globals():
"parallel": builtins.obj_parallel(), "parallel": builtins.obj_parallel(),
"interleave": builtins.obj_interleave(), "interleave": builtins.obj_interleave(),
"sequential": builtins.obj_sequential(), "sequential": builtins.obj_sequential(),
"watchdog": builtins.fn_watchdog(),
# ARTIQ time management functions # ARTIQ time management functions
"delay": builtins.fn_delay(), "delay": builtins.fn_delay(),
@ -54,4 +55,8 @@ def globals():
# ARTIQ utility functions # ARTIQ utility functions
"rtio_log": builtins.fn_rtio_log(), "rtio_log": builtins.fn_rtio_log(),
"core_log": builtins.fn_print(), "core_log": builtins.fn_print(),
# ARTIQ subkernel utility functions
"subkernel_await": builtins.fn_subkernel_await(),
"subkernel_preload": builtins.fn_subkernel_preload(),
} }
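Together with the subkernel decorator registered above, the new subkernel_await/subkernel_preload builtins are intended to be used from kernel code roughly as in this hedged sketch (the device setup and destination number are illustrative):

from artiq.experiment import *

class SubkernelExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @subkernel(destination=1)
    def measure(self) -> TInt32:
        # runs on the satellite core device with DRTIO destination 1
        return 42

    @kernel
    def run(self):
        subkernel_preload(self.measure)         # optionally load ahead of time
        self.measure()                          # start the subkernel
        result = subkernel_await(self.measure)  # wait for and fetch its return value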

View File

@ -1,6 +1,6 @@
import os, sys, tempfile, subprocess, io import os, sys, tempfile, subprocess, io
from artiq.compiler import types, ir from artiq.compiler import types, ir
from llvmlite_artiq import ir as ll, binding as llvm from llvmlite import ir as ll, binding as llvm
llvm.initialize() llvm.initialize()
llvm.initialize_all_targets() llvm.initialize_all_targets()
@ -28,8 +28,10 @@ class RunTool:
for argument in self._pattern: for argument in self._pattern:
cmdline.append(argument.format(**self._tempnames)) cmdline.append(argument.format(**self._tempnames))
# https://bugs.python.org/issue17023
windows = os.name == "nt"
process = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE, process = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
universal_newlines=True) universal_newlines=True, shell=windows)
stdout, stderr = process.communicate() stdout, stderr = process.communicate()
if process.returncode != 0: if process.returncode != 0:
raise Exception("{} invocation failed: {}". raise Exception("{} invocation failed: {}".
@ -67,33 +69,41 @@ class Target:
generated by the ARTIQ compiler will be deployed. generated by the ARTIQ compiler will be deployed.
:var triple: (string) :var triple: (string)
LLVM target triple, e.g. ``"or1k"`` LLVM target triple, e.g. ``"riscv32"``
:var data_layout: (string) :var data_layout: (string)
LLVM target data layout, e.g. ``"E-m:e-p:32:32-i64:32-f64:32-v64:32-v128:32-a:0:32-n32"`` LLVM target data layout, e.g. ``"E-m:e-p:32:32-i64:32-f64:32-v64:32-v128:32-a:0:32-n32"``
:var features: (list of string) :var features: (list of string)
LLVM target CPU features, e.g. ``["mul", "div", "ffl1"]`` LLVM target CPU features, e.g. ``["mul", "div", "ffl1"]``
:var additional_linker_options: (list of string)
Linker options for the target in addition to the target-independent ones, e.g. ``["--target2=rel"]``
:var print_function: (string) :var print_function: (string)
Name of a formatted print function (with the signature of ``printf``) Name of a formatted print function (with the signature of ``printf``)
provided by the target, e.g. ``"printf"``. provided by the target, e.g. ``"printf"``.
:var little_endian: (boolean) :var now_pinning: (boolean)
Whether the code will be executed on a little-endian machine. This cannot be always Whether the target implements the now-pinning RTIO optimization.
determined from data_layout due to JIT.
""" """
triple = "unknown" triple = "unknown"
data_layout = "" data_layout = ""
features = [] features = []
additional_linker_options = []
print_function = "printf" print_function = "printf"
little_endian = False now_pinning = True
tool_ld = "ld.lld"
tool_strip = "llvm-strip"
tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt"
def __init__(self): def __init__(self, subkernel_id=None):
self.llcontext = ll.Context() self.llcontext = ll.Context()
self.subkernel_id = subkernel_id
def target_machine(self): def target_machine(self):
lltarget = llvm.Target.from_triple(self.triple) lltarget = llvm.Target.from_triple(self.triple)
llmachine = lltarget.create_target_machine( llmachine = lltarget.create_target_machine(
features=",".join(["+{}".format(f) for f in self.features]), features=",".join(["+{}".format(f) for f in self.features]),
reloc="pic", codemodel="default") reloc="pic", codemodel="default",
abiname="ilp32d" if isinstance(self, RV32GTarget) else "")
llmachine.set_asm_verbosity(True) llmachine.set_asm_verbosity(True)
return llmachine return llmachine
@ -139,7 +149,8 @@ class Target:
ir.BasicBlock._dump_loc = False ir.BasicBlock._dump_loc = False
type_printer = types.TypePrinter() type_printer = types.TypePrinter()
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", ".txt", suffix = "_subkernel_{}".format(self.subkernel_id) if self.subkernel_id is not None else ""
_dump(os.getenv("ARTIQ_DUMP_IR"), "ARTIQ IR", suffix + ".txt",
lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir)) lambda: "\n".join(fn.as_entity(type_printer) for fn in module.artiq_ir))
llmod = module.build_llvm_ir(self) llmod = module.build_llvm_ir(self)
@ -151,12 +162,12 @@ class Target:
_dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod)) _dump("", "LLVM IR (broken)", ".ll", lambda: str(llmod))
raise raise
_dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll", _dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", suffix + "_unopt.ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
self.optimize(llparsedmod) self.optimize(llparsedmod)
_dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll", _dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", suffix + ".ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
return llparsedmod return llparsedmod
@ -174,8 +185,11 @@ class Target:
def link(self, objects): def link(self, objects):
"""Link the relocatable objects into a shared library for this target.""" """Link the relocatable objects into a shared library for this target."""
with RunTool([self.triple + "-ld", "-shared", "--eh-frame-hdr"] + with RunTool([self.tool_ld, "-shared", "--eh-frame-hdr"] +
self.additional_linker_options +
["-T" + os.path.join(os.path.dirname(__file__), "kernel.ld")] +
["{{obj{}}}".format(index) for index in range(len(objects))] + ["{{obj{}}}".format(index) for index in range(len(objects))] +
["-x"] +
["-o", "{output}"], ["-o", "{output}"],
output=None, output=None,
**{"obj{}".format(index): obj for index, obj in enumerate(objects)}) \ **{"obj{}".format(index): obj for index, obj in enumerate(objects)}) \
@ -191,7 +205,7 @@ class Target:
return self.link([self.assemble(self.compile(module)) for module in modules]) return self.link([self.assemble(self.compile(module)) for module in modules])
def strip(self, library): def strip(self, library):
with RunTool([self.triple + "-strip", "--strip-debug", "{library}", "-o", "{output}"], with RunTool([self.tool_strip, "--strip-debug", "{library}", "-o", "{output}"],
library=library, output=None) \ library=library, output=None) \
as results: as results:
return results["output"].read() return results["output"].read()
@ -204,9 +218,10 @@ class Target:
# just after the call. Offset them back to get an address somewhere # just after the call. Offset them back to get an address somewhere
# inside the call instruction (or its delay slot), since that's what # inside the call instruction (or its delay slot), since that's what
# the backtrace entry should point at. # the backtrace entry should point at.
last_inlined = None
offset_addresses = [hex(addr - 1) for addr in addresses] offset_addresses = [hex(addr - 1) for addr in addresses]
with RunTool([self.triple + "-addr2line", "--addresses", "--functions", "--inlines", with RunTool([self.tool_symbolizer, "--addresses", "--functions", "--inlines",
"--demangle", "--exe={library}"] + offset_addresses, "--demangle", "--output-style=GNU", "--exe={library}"] + offset_addresses,
library=library) \ library=library) \
as results: as results:
lines = iter(results["__stdout__"].read().rstrip().split("\n")) lines = iter(results["__stdout__"].read().rstrip().split("\n"))
@ -219,9 +234,11 @@ class Target:
if address_or_function[:2] == "0x": if address_or_function[:2] == "0x":
address = int(address_or_function[2:], 16) + 1 # remove offset address = int(address_or_function[2:], 16) + 1 # remove offset
function = next(lines) function = next(lines)
inlined = False
else: else:
address = backtrace[-1][4] # inlined address = backtrace[-1][4] # inlined
function = address_or_function function = address_or_function
inlined = True
location = next(lines) location = next(lines)
filename, line = location.rsplit(":", 1) filename, line = location.rsplit(":", 1)
@ -232,32 +249,61 @@ class Target:
else: else:
line = int(line) line = int(line)
# can't get column out of addr2line D: # can't get column out of addr2line D:
backtrace.append((filename, line, -1, function, address)) if inlined:
last_inlined.append((filename, line, -1, function, address))
else:
last_inlined = []
backtrace.append((filename, line, -1, function, address,
last_inlined))
return backtrace return backtrace
def demangle(self, names): def demangle(self, names):
with RunTool([self.triple + "-c++filt"] + names) as results: if not any(names):
return names
with RunTool([self.tool_cxxfilt] + names) as results:
return results["__stdout__"].read().rstrip().split("\n") return results["__stdout__"].read().rstrip().split("\n")
class NativeTarget(Target): class NativeTarget(Target):
def __init__(self): def __init__(self):
super().__init__() super().__init__()
self.triple = llvm.get_default_triple() self.triple = llvm.get_default_triple()
host_data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data) self.data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
assert host_data_layout[0] in "eE"
self.little_endian = host_data_layout[0] == "e"
class OR1KTarget(Target): class RV32IMATarget(Target):
triple = "or1k-linux" triple = "riscv32-unknown-linux"
data_layout = "E-m:e-p:32:32-i8:8:8-i16:16:16-i64:32:32-" \ data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
"f64:32:32-v64:32:32-v128:32:32-a0:0:32-n32" features = ["m", "a"]
features = ["mul", "div", "ffl1", "cmov", "addc"] additional_linker_options = ["-m", "elf32lriscv"]
print_function = "core_log" print_function = "core_log"
little_endian = False now_pinning = True
tool_ld = "ld.lld"
tool_strip = "llvm-strip"
tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt"
class RV32GTarget(Target):
triple = "riscv32-unknown-linux"
data_layout = "e-m:e-p:32:32-i64:64-n32-S128"
features = ["m", "a", "f", "d"]
additional_linker_options = ["-m", "elf32lriscv"]
print_function = "core_log"
now_pinning = True
tool_ld = "ld.lld"
tool_strip = "llvm-strip"
tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt"
class CortexA9Target(Target): class CortexA9Target(Target):
triple = "armv7-unknown-linux-gnueabihf" triple = "armv7-unknown-linux-gnueabihf"
data_layout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64" data_layout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
features = ["dsp", "fp16", "neon", "vfp3"] features = ["dsp", "fp16", "neon", "vfp3"]
additional_linker_options = ["-m", "armelf_linux_eabi", "--target2=rel"]
print_function = "core_log" print_function = "core_log"
little_endian = True now_pinning = False
tool_ld = "ld.lld"
tool_strip = "llvm-strip"
tool_symbolizer = "llvm-symbolizer"
tool_cxxfilt = "llvm-cxxfilt"
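A minimal sketch of driving the retargeted toolchain above from the host (Source/Module as used elsewhere in this diff; the output path is illustrative):

from artiq.compiler.module import Module, Source
from artiq.compiler.targets import RV32GTarget

# Compile a trivial module to a RISC-V shared object using the
# ld.lld/llvm-* tool names defined on the target classes above.
mod = Module(Source.from_string("def entry():\n    return 0\n"))
shlib = RV32GTarget().compile_and_link([mod])
with open("kernel.elf", "wb") as f:
    f.write(shlib)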

View File

@ -1,6 +1,6 @@
import os, sys, fileinput, ctypes import os, sys, fileinput, ctypes
from pythonparser import diagnostic from pythonparser import diagnostic
from llvmlite_artiq import binding as llvm from llvmlite import binding as llvm
from ..module import Module, Source from ..module import Module, Source
from ..targets import NativeTarget from ..targets import NativeTarget

View File

@ -1,6 +1,6 @@
import sys, fileinput import sys, fileinput
from pythonparser import diagnostic from pythonparser import diagnostic
from llvmlite_artiq import ir as ll from llvmlite import ir as ll
from ..module import Module, Source from ..module import Module, Source
from ..targets import NativeTarget from ..targets import NativeTarget

View File

@ -1,7 +1,7 @@
import sys, os import sys, os
from pythonparser import diagnostic from pythonparser import diagnostic
from ..module import Module, Source from ..module import Module, Source
from ..targets import OR1KTarget from ..targets import RV32GTarget
from . import benchmark from . import benchmark
def main(): def main():
@ -30,7 +30,7 @@ def main():
benchmark(lambda: Module(source), benchmark(lambda: Module(source),
"ARTIQ transforms and validators") "ARTIQ transforms and validators")
benchmark(lambda: OR1KTarget().compile_and_link([module]), benchmark(lambda: RV32GTarget().compile_and_link([module]),
"LLVM optimization and linking") "LLVM optimization and linking")
if __name__ == "__main__": if __name__ == "__main__":

View File

@ -5,7 +5,7 @@ from ...master.databases import DeviceDB, DatasetDB
from ...master.worker_db import DeviceManager, DatasetManager from ...master.worker_db import DeviceManager, DatasetManager
from ..module import Module from ..module import Module
from ..embedding import Stitcher from ..embedding import Stitcher
from ..targets import OR1KTarget from ..targets import RV32GTarget
from . import benchmark from . import benchmark
@ -30,8 +30,9 @@ def main():
device_db_path = os.path.join(os.path.dirname(sys.argv[1]), "device_db.py") device_db_path = os.path.join(os.path.dirname(sys.argv[1]), "device_db.py")
device_mgr = DeviceManager(DeviceDB(device_db_path)) device_mgr = DeviceManager(DeviceDB(device_db_path))
dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.pyon") dataset_db_path = os.path.join(os.path.dirname(sys.argv[1]), "dataset_db.mdb")
dataset_mgr = DatasetManager(DatasetDB(dataset_db_path)) dataset_db = DatasetDB(dataset_db_path)
dataset_mgr = DatasetManager()
argument_mgr = ProcessArgumentManager({}) argument_mgr = ProcessArgumentManager({})
@ -45,7 +46,7 @@ def main():
stitcher = embed() stitcher = embed()
module = Module(stitcher) module = Module(stitcher)
target = OR1KTarget() target = RV32GTarget()
llvm_ir = target.compile(module) llvm_ir = target.compile(module)
elf_obj = target.assemble(llvm_ir) elf_obj = target.assemble(llvm_ir)
elf_shlib = target.link([elf_obj]) elf_shlib = target.link([elf_obj])
@ -68,5 +69,7 @@ def main():
benchmark(lambda: target.strip(elf_shlib), benchmark(lambda: target.strip(elf_shlib),
"Stripping debug information") "Stripping debug information")
dataset_db.close_db()
if __name__ == "__main__": if __name__ == "__main__":
main() main()

View File

@ -1,7 +1,7 @@
import sys, os import sys, os
from pythonparser import diagnostic from pythonparser import diagnostic
from ..module import Module, Source from ..module import Module, Source
from ..targets import OR1KTarget from ..targets import RV32GTarget
def main(): def main():
if not len(sys.argv) > 1: if not len(sys.argv) > 1:
@ -20,7 +20,7 @@ def main():
for filename in sys.argv[1:]: for filename in sys.argv[1:]:
modules.append(Module(Source.from_filename(filename, engine=engine))) modules.append(Module(Source.from_filename(filename, engine=engine)))
llobj = OR1KTarget().compile_and_link(modules) llobj = RV32GTarget().compile_and_link(modules)
basename, ext = os.path.splitext(sys.argv[-1]) basename, ext = os.path.splitext(sys.argv[-1])
with open(basename + ".so", "wb") as f: with open(basename + ".so", "wb") as f:

File diff suppressed because it is too large

View File

@ -238,7 +238,7 @@ class ASTTypedRewriter(algorithm.Transformer):
body=node.body, decorator_list=node.decorator_list, body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc, keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs, arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
loc=node.loc) loc=node.loc, remote_fn=False)
try: try:
self.env_stack.append(node.typing_env) self.env_stack.append(node.typing_env)
@ -439,8 +439,9 @@ class ASTTypedRewriter(algorithm.Transformer):
def visit_Call(self, node): def visit_Call(self, node):
node = self.generic_visit(node) node = self.generic_visit(node)
node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={}, node = asttyped.CallT(type=types.TVar(), iodelay=None, arg_exprs={},
func=node.func, args=node.args, keywords=node.keywords, remote_fn=False, func=node.func,
args=node.args, keywords=node.keywords,
starargs=node.starargs, kwargs=node.kwargs, starargs=node.starargs, kwargs=node.kwargs,
star_loc=node.star_loc, dstar_loc=node.dstar_loc, star_loc=node.star_loc, dstar_loc=node.dstar_loc,
begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc) begin_loc=node.begin_loc, end_loc=node.end_loc, loc=node.loc)

View File

@ -15,13 +15,26 @@ class DeadCodeEliminator:
self.process_function(func) self.process_function(func)
def process_function(self, func): def process_function(self, func):
modified = True # defer removing those blocks, so our use checks will ignore deleted blocks
while modified: preserve = [func.entry()]
modified = False work_list = [func.entry()]
for block in list(func.basic_blocks): while any(work_list):
if not any(block.predecessors()) and block != func.entry(): block = work_list.pop()
self.remove_block(block) for succ in block.successors():
modified = True if succ not in preserve:
preserve.append(succ)
work_list.append(succ)
to_be_removed = []
for block in func.basic_blocks:
if block not in preserve:
block.is_removed = True
to_be_removed.append(block)
for insn in block.instructions:
insn.is_removed = True
for block in to_be_removed:
self.remove_block(block)
modified = True modified = True
while modified: while modified:
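The reachability pass introduced above boils down to the following standalone sketch (simplified block objects instead of the compiler's ir.BasicBlock):

def reachable_blocks(entry):
    # Collect every block reachable from the entry via successors().
    preserve = [entry]
    work_list = [entry]
    while work_list:
        block = work_list.pop()
        for succ in block.successors():
            if succ not in preserve:
                preserve.append(succ)
                work_list.append(succ)
    return preserve

# Blocks outside reachable_blocks(func.entry()) are first flagged is_removed,
# so the use checks ignore them, and only then actually erased.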
@ -33,7 +46,8 @@ class DeadCodeEliminator:
# it also has to run after the interleaver, but interleaver # it also has to run after the interleaver, but interleaver
# doesn't like to work with IR before DCE. # doesn't like to work with IR before DCE.
if isinstance(insn, (ir.Phi, ir.Alloc, ir.GetAttr, ir.GetElem, ir.Coerce, if isinstance(insn, (ir.Phi, ir.Alloc, ir.GetAttr, ir.GetElem, ir.Coerce,
ir.Arith, ir.Compare, ir.Select, ir.Quote, ir.Closure)) \ ir.Arith, ir.Compare, ir.Select, ir.Quote, ir.Closure,
ir.Offset)) \
and not any(insn.uses): and not any(insn.uses):
insn.erase() insn.erase()
modified = True modified = True
@ -41,6 +55,8 @@ class DeadCodeEliminator:
def remove_block(self, block): def remove_block(self, block):
# block.uses are updated while iterating # block.uses are updated while iterating
for use in set(block.uses): for use in set(block.uses):
if use.is_removed:
continue
if isinstance(use, ir.Phi): if isinstance(use, ir.Phi):
use.remove_incoming_block(block) use.remove_incoming_block(block)
if not any(use.operands): if not any(use.operands):
@ -55,6 +71,8 @@ class DeadCodeEliminator:
def remove_instruction(self, insn): def remove_instruction(self, insn):
for use in set(insn.uses): for use in set(insn.uses):
if use.is_removed:
continue
if isinstance(use, ir.Phi): if isinstance(use, ir.Phi):
use.remove_incoming_value(insn) use.remove_incoming_value(insn)
if not any(use.operands): if not any(use.operands):

View File

@ -6,6 +6,30 @@ from collections import OrderedDict
from pythonparser import algorithm, diagnostic, ast from pythonparser import algorithm, diagnostic, ast
from .. import asttyped, types, builtins from .. import asttyped, types, builtins
from .typedtree_printer import TypedtreePrinter from .typedtree_printer import TypedtreePrinter
from artiq.experiment import kernel
def is_nested_empty_list(node):
"""If the passed AST node is an empty list, or a regularly nested list thereof,
returns the number of nesting layers, or ``None`` otherwise.
For instance, ``is_nested_empty_list([]) == 1`` and
``is_nested_empty_list([[], []]) == 2``, but
``is_nested_empty_list([[[]], []]) == None`` as the number of nesting layers doesn't
match.
"""
if not isinstance(node, ast.List):
return None
if not node.elts:
return 1
result = is_nested_empty_list(node.elts[0])
if result is None:
return None
for elt in node.elts[1:]:
if result != is_nested_empty_list(elt):
return None
return result + 1
class Inferencer(algorithm.Visitor): class Inferencer(algorithm.Visitor):
""" """
@ -22,6 +46,7 @@ class Inferencer(algorithm.Visitor):
self.function = None # currently visited function, for Return inference self.function = None # currently visited function, for Return inference
self.in_loop = False self.in_loop = False
self.has_return = False self.has_return = False
self.subkernel_arg_types = dict()
def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""): def _unify(self, typea, typeb, loca, locb, makenotes=None, when=""):
try: try:
@ -154,7 +179,7 @@ class Inferencer(algorithm.Visitor):
# Convert to a method. # Convert to a method.
attr_type = types.TMethod(object_type, attr_type) attr_type = types.TMethod(object_type, attr_type)
self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc) self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc)
elif types.is_rpc(attr_type): elif types.is_rpc(attr_type) or types.is_subkernel(attr_type):
# Convert to a method. We don't have to bother typechecking # Convert to a method. We don't have to bother typechecking
# the self argument, since for RPCs anything goes. # the self argument, since for RPCs anything goes.
attr_type = types.TMethod(object_type, attr_type) attr_type = types.TMethod(object_type, attr_type)
@ -183,6 +208,14 @@ class Inferencer(algorithm.Visitor):
if builtins.is_bytes(collection.type) or builtins.is_bytearray(collection.type): if builtins.is_bytes(collection.type) or builtins.is_bytearray(collection.type):
self._unify(element.type, builtins.get_iterable_elt(collection.type), self._unify(element.type, builtins.get_iterable_elt(collection.type),
element.loc, None) element.loc, None)
elif builtins.is_array(collection.type):
array_type = collection.type.find()
elem_dims = array_type["num_dims"].value - 1
if elem_dims > 0:
elem_type = builtins.TArray(array_type["elt"], types.TValue(elem_dims))
else:
elem_type = array_type["elt"]
self._unify(element.type, elem_type, element.loc, collection.loc)
elif builtins.is_iterable(collection.type) and not builtins.is_str(collection.type): elif builtins.is_iterable(collection.type) and not builtins.is_str(collection.type):
rhs_type = collection.type.find() rhs_type = collection.type.find()
rhs_wrapped_lhs_type = types.TMono(rhs_type.name, {"elt": element.type}) rhs_wrapped_lhs_type = types.TMono(rhs_type.name, {"elt": element.type})
@ -199,15 +232,15 @@ class Inferencer(algorithm.Visitor):
self.generic_visit(node) self.generic_visit(node)
value = node.value value = node.value
if types.is_tuple(value.type): if types.is_tuple(value.type):
diag = diagnostic.Diagnostic("error", for elt in value.type.find().elts:
"multi-dimensional slices are not supported", {}, self._unify(elt, builtins.TInt(),
node.loc, []) value.loc, None)
self.engine.process(diag)
else: else:
self._unify(value.type, builtins.TInt(), self._unify(value.type, builtins.TInt(),
value.loc, None) value.loc, None)
def visit_SliceT(self, node): def visit_SliceT(self, node):
self.generic_visit(node)
if (node.lower, node.upper, node.step) == (None, None, None): if (node.lower, node.upper, node.step) == (None, None, None):
self._unify(node.type, builtins.TInt32(), self._unify(node.type, builtins.TInt32(),
node.loc, None) node.loc, None)
@ -227,16 +260,78 @@ class Inferencer(algorithm.Visitor):
def visit_SubscriptT(self, node): def visit_SubscriptT(self, node):
self.generic_visit(node) self.generic_visit(node)
if isinstance(node.slice, ast.Index):
self._unify_iterable(element=node, collection=node.value) if types.is_tuple(node.value.type):
if (not isinstance(node.slice, ast.Index) or
not isinstance(node.slice.value, ast.Num)):
diag = diagnostic.Diagnostic(
"error", "tuples can only be indexed by a constant", {},
node.slice.loc, []
)
self.engine.process(diag)
return
tuple_type = node.value.type.find()
index = node.slice.value.n
if index < 0 or index >= len(tuple_type.elts):
diag = diagnostic.Diagnostic(
"error",
"index {index} is out of range for tuple of size {size}",
{"index": index, "size": len(tuple_type.elts)},
node.slice.loc, []
)
self.engine.process(diag)
return
self._unify(node.type, tuple_type.elts[index], node.loc, node.value.loc)
elif isinstance(node.slice, ast.Index):
if types.is_tuple(node.slice.value.type):
if types.is_var(node.value.type):
return
if not builtins.is_array(node.value.type):
diag = diagnostic.Diagnostic(
"error",
"multi-dimensional indexing only supported for arrays, not {type}",
{"type": types.TypePrinter().name(node.value.type)},
node.loc, [])
self.engine.process(diag)
return
num_idxs = len(node.slice.value.type.find().elts)
array_type = node.value.type.find()
num_dims = array_type["num_dims"].value
remaining_dims = num_dims - num_idxs
if remaining_dims < 0:
diag = diagnostic.Diagnostic(
"error",
"too many indices for array of dimension {num_dims}",
{"num_dims": num_dims}, node.slice.loc, [])
self.engine.process(diag)
return
if remaining_dims == 0:
self._unify(node.type, array_type["elt"], node.loc,
node.value.loc)
else:
self._unify(
node.type,
builtins.TArray(array_type["elt"], remaining_dims))
else:
self._unify_iterable(element=node, collection=node.value)
elif isinstance(node.slice, ast.Slice): elif isinstance(node.slice, ast.Slice):
self._unify(node.type, node.value.type, if builtins.is_array(node.value.type):
node.loc, node.value.loc) if node.slice.step is not None:
else: # ExtSlice diag = diagnostic.Diagnostic(
pass # error emitted above "error",
"strided slicing not yet supported for NumPy arrays", {},
node.slice.step.loc, [])
self.engine.process(diag)
return
self._unify(node.type, node.value.type, node.loc, node.value.loc)
else: # ExtSlice
pass # error emitted above
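In kernel code, the indexing rules implemented above work out as in this sketch (names and shapes are made up for the example):

mat = array([[1.0, 2.0], [3.0, 4.0]])  # array(elt=float, num_dims=2)

row  = mat[0]       # one index consumed      -> array(elt=float, num_dims=1)
elem = mat[0, 1]    # tuple index, both dims  -> float
view = mat[0:1]     # slicing keeps the type  -> array(elt=float, num_dims=2)
# mat[::2]     is rejected: strided slicing of arrays is not supported yet.
# mat[0, 1, 2] is rejected: too many indices for a 2-dimensional array.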
def visit_IfExpT(self, node): def visit_IfExpT(self, node):
self.generic_visit(node) self.generic_visit(node)
self._unify(node.test.type, builtins.TBool(), node.test.loc, None)
self._unify(node.body.type, node.orelse.type, self._unify(node.body.type, node.orelse.type,
node.body.loc, node.orelse.loc) node.body.loc, node.orelse.loc)
self._unify(node.type, node.body.type, self._unify(node.type, node.body.type,
@ -265,21 +360,36 @@ class Inferencer(algorithm.Visitor):
node.operand.loc) node.operand.loc)
self.engine.process(diag) self.engine.process(diag)
else: # UAdd, USub else: # UAdd, USub
if types.is_var(operand_type):
return
if builtins.is_numeric(operand_type): if builtins.is_numeric(operand_type):
self._unify(node.type, operand_type, self._unify(node.type, operand_type, node.loc, None)
node.loc, None) return
elif not types.is_var(operand_type):
diag = diagnostic.Diagnostic("error", if builtins.is_array(operand_type):
"expected unary '{op}' operand to be of numeric type, not {type}", elt = operand_type.find()["elt"]
{"op": node.op.loc.source(), if builtins.is_numeric(elt):
"type": types.TypePrinter().name(operand_type)}, self._unify(node.type, operand_type, node.loc, None)
node.operand.loc) return
self.engine.process(diag) if types.is_var(elt):
return
diag = diagnostic.Diagnostic("error",
"expected unary '{op}' operand to be of numeric type, not {type}",
{"op": node.op.loc.source(),
"type": types.TypePrinter().name(operand_type)},
node.operand.loc)
self.engine.process(diag)
def visit_CoerceT(self, node): def visit_CoerceT(self, node):
self.generic_visit(node) self.generic_visit(node)
if builtins.is_numeric(node.type) and builtins.is_numeric(node.value.type): if builtins.is_numeric(node.type) and builtins.is_numeric(node.value.type):
pass pass
elif (builtins.is_array(node.type) and builtins.is_array(node.value.type)
and builtins.is_numeric(node.type.find()["elt"])
and builtins.is_numeric(node.value.type.find()["elt"])):
pass
else: else:
printer = types.TypePrinter() printer = types.TypePrinter()
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
@ -305,7 +415,7 @@ class Inferencer(algorithm.Visitor):
self.visit(node) self.visit(node)
return node return node
def _coerce_numeric(self, nodes, map_return=lambda typ: typ): def _coerce_numeric(self, nodes, map_return=lambda typ: typ, map_node_type =lambda typ:typ):
# See https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex. # See https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex.
node_types = [] node_types = []
for node in nodes: for node in nodes:
@ -321,6 +431,7 @@ class Inferencer(algorithm.Visitor):
node_types.append(node.type) node_types.append(node.type)
else: else:
node_types.append(node.type) node_types.append(node.type)
node_types = [map_node_type(typ) for typ in node_types]
if any(map(types.is_var, node_types)): # not enough info yet if any(map(types.is_var, node_types)): # not enough info yet
return return
elif not all(map(builtins.is_numeric, node_types)): elif not all(map(builtins.is_numeric, node_types)):
@ -352,8 +463,125 @@ class Inferencer(algorithm.Visitor):
else: else:
assert False assert False
def _coerce_binary_broadcast_op(self, left, right, map_return_elt, op_loc):
def num_dims(typ):
if builtins.is_array(typ):
# TODO: If number of dimensions is ever made a non-fixed parameter,
# need to actually unify num_dims in _coerce_binop/….
return typ.find()["num_dims"].value
return 0
left_dims = num_dims(left.type)
right_dims = num_dims(right.type)
if left_dims != right_dims and left_dims != 0 and right_dims != 0:
# Mismatch (only scalar broadcast supported for now).
note1 = diagnostic.Diagnostic("note", "operand of dimension {num_dims}",
{"num_dims": left_dims}, left.loc)
note2 = diagnostic.Diagnostic("note", "operand of dimension {num_dims}",
{"num_dims": right_dims}, right.loc)
diag = diagnostic.Diagnostic(
"error", "dimensions of '{op}' array operands must match",
{"op": op_loc.source()}, op_loc, [left.loc, right.loc], [note1, note2])
self.engine.process(diag)
return
def map_node_type(typ):
if not builtins.is_array(typ):
# This is a single value broadcast across the array.
return typ
return typ.find()["elt"]
# Figure out result type, handling broadcasts.
result_dims = left_dims if left_dims else right_dims
def map_return(typ):
elt = map_return_elt(typ)
result = builtins.TArray(elt=elt, num_dims=result_dims)
left = builtins.TArray(elt=elt, num_dims=left_dims) if left_dims else elt
right = builtins.TArray(elt=elt, num_dims=right_dims) if right_dims else elt
return (result, left, right)
return self._coerce_numeric((left, right),
map_return=map_return,
map_node_type=map_node_type)
def _coerce_binop(self, op, left, right): def _coerce_binop(self, op, left, right):
if isinstance(op, (ast.BitAnd, ast.BitOr, ast.BitXor, if isinstance(op, ast.MatMult):
if types.is_var(left.type) or types.is_var(right.type):
return
def num_dims(operand):
if not builtins.is_array(operand.type):
diag = diagnostic.Diagnostic(
"error",
"expected matrix multiplication operand to be of array type, not {type}",
{
"op": op.loc.source(),
"type": types.TypePrinter().name(operand.type)
}, op.loc, [operand.loc])
self.engine.process(diag)
return
num_dims = operand.type.find()["num_dims"].value
if num_dims not in (1, 2):
diag = diagnostic.Diagnostic(
"error",
"expected matrix multiplication operand to be 1- or 2-dimensional, not {type}",
{
"op": op.loc.source(),
"type": types.TypePrinter().name(operand.type)
}, op.loc, [operand.loc])
self.engine.process(diag)
return
return num_dims
left_dims = num_dims(left)
if not left_dims:
return
right_dims = num_dims(right)
if not right_dims:
return
def map_node_type(typ):
return typ.find()["elt"]
def map_return(typ):
if left_dims == 1:
if right_dims == 1:
result_dims = 0
else:
result_dims = 1
elif right_dims == 1:
result_dims = 1
else:
result_dims = 2
result = typ if result_dims == 0 else builtins.TArray(
typ, result_dims)
return (result, builtins.TArray(typ, left_dims),
builtins.TArray(typ, right_dims))
return self._coerce_numeric((left, right),
map_return=map_return,
map_node_type=map_node_type)
elif builtins.is_array(left.type) or builtins.is_array(right.type):
# Operations on arrays are element-wise (possibly using broadcasting).
# TODO: Allow only for integer arrays.
# allowed_int_array_ops = (ast.BitAnd, ast.BitOr, ast.BitXor, ast.LShift,
# ast.RShift)
allowed_array_ops = (ast.Add, ast.Mult, ast.FloorDiv, ast.Mod,
ast.Pow, ast.Sub, ast.Div)
if not isinstance(op, allowed_array_ops):
diag = diagnostic.Diagnostic(
"error", "operator '{op}' not valid for array types",
{"op": op.loc.source()}, op.loc)
self.engine.process(diag)
return
def map_result(typ):
if isinstance(op, ast.Div):
return builtins.TFloat()
return typ
return self._coerce_binary_broadcast_op(left, right, map_result, op.loc)
elif isinstance(op, (ast.BitAnd, ast.BitOr, ast.BitXor,
ast.LShift, ast.RShift)): ast.LShift, ast.RShift)):
# bitwise operators require integers # bitwise operators require integers
for operand in (left, right): for operand in (left, right):
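The dimensionality rule for '@' encoded in map_return above reduces to the usual NumPy convention; as a sketch:

def matmul_result_dims(left_dims, right_dims):
    # Mirrors map_return(): vector @ vector is a scalar, a single vector
    # operand drops one dimension, matrix @ matrix stays two-dimensional.
    if left_dims == 1 and right_dims == 1:
        return 0   # scalar
    if left_dims == 1 or right_dims == 1:
        return 1   # vector
    return 2       # matrix

assert matmul_result_dims(1, 1) == 0
assert matmul_result_dims(2, 1) == 1
assert matmul_result_dims(2, 2) == 2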
@ -452,7 +680,7 @@ class Inferencer(algorithm.Visitor):
# division always returns a float # division always returns a float
return self._coerce_numeric((left, right), return self._coerce_numeric((left, right),
lambda typ: (builtins.TFloat(), builtins.TFloat(), builtins.TFloat())) lambda typ: (builtins.TFloat(), builtins.TFloat(), builtins.TFloat()))
else: # MatMult else:
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
"operator '{op}' is not supported", {"op": op.loc.source()}, "operator '{op}' is not supported", {"op": op.loc.source()},
op.loc) op.loc)
@ -695,28 +923,111 @@ class Inferencer(algorithm.Visitor):
"strings currently cannot be constructed", {}, "strings currently cannot be constructed", {},
node.loc) node.loc)
self.engine.process(diag) self.engine.process(diag)
elif types.is_builtin(typ, "list") or types.is_builtin(typ, "array"): elif types.is_builtin(typ, "array"):
if types.is_builtin(typ, "list"): valid_forms = lambda: [
valid_forms = lambda: [ valid_form("array(x:'a) -> array(elt='b) where 'a is iterable"),
valid_form("list() -> list(elt='a)"), valid_form("array(x:'a, dtype:'b) -> array(elt='b) where 'a is iterable")
valid_form("list(x:'a) -> list(elt='b) where 'a is iterable") ]
]
self._unify(node.type, builtins.TList(), explicit_dtype = None
node.loc, None) keywords_acceptable = False
elif types.is_builtin(typ, "array"): if len(node.keywords) == 0:
valid_forms = lambda: [ keywords_acceptable = True
valid_form("array() -> array(elt='a)"), elif len(node.keywords) == 1:
valid_form("array(x:'a) -> array(elt='b) where 'a is iterable") if node.keywords[0].arg == "dtype":
] keywords_acceptable = True
explicit_dtype = node.keywords[0].value
if len(node.args) == 1 and keywords_acceptable:
arg, = node.args
self._unify(node.type, builtins.TArray(), num_empty_dims = is_nested_empty_list(arg)
node.loc, None) if num_empty_dims is not None:
# As a special case, following the behaviour of numpy.array (and
# repr() on ndarrays), consider empty lists to be exactly of the
# number of dimensions given, instead of potentially containing an
# unknown number of extra dimensions.
num_dims = num_empty_dims
# The ultimate element type will be TVar initially, but we might be
# able to resolve it from context.
elt = arg.type
for _ in range(num_dims):
assert builtins.is_list(elt)
elt = elt.find()["elt"]
else:
# In the absence of any other information (there currently isn't a way
# to specify any), assume that all iterables are expandable into a
# (runtime-checked) rectangular array of the innermost element type.
elt = arg.type
num_dims = 0
expected_dims = (node.type.find()["num_dims"].value
if builtins.is_array(node.type) else -1)
while True:
if num_dims == expected_dims:
# If we already know the number of dimensions of the result,
# stop so we can disambiguate the (innermost) element type of
# the argument if it is still unknown.
break
if types.is_var(elt):
# Can't make progress here because we don't know how many more
# dimensions might be "hidden" inside.
return
if not builtins.is_iterable(elt) or builtins.is_str(elt):
break
if builtins.is_array(elt):
num_dims += elt.find()["num_dims"].value
else:
num_dims += 1
elt = builtins.get_iterable_elt(elt)
if explicit_dtype is not None:
# TODO: Factor out type detection; support quoted type constructors
# (TList(TInt32), …)?
typ = explicit_dtype.type
if types.is_builtin(typ, "int32"):
elt = builtins.TInt32()
elif types.is_builtin(typ, "int64"):
elt = builtins.TInt64()
elif types.is_constructor(typ):
elt = typ.find().instance
else:
diag = diagnostic.Diagnostic(
"error",
"dtype argument of {builtin}() must be a valid constructor",
{"builtin": typ.find().name},
node.func.loc,
notes=[note])
self.engine.process(diag)
return
if num_dims == 0:
note = diagnostic.Diagnostic(
"note", "this expression has type {type}",
{"type": types.TypePrinter().name(arg.type)}, arg.loc)
diag = diagnostic.Diagnostic(
"error",
"the argument of {builtin}() must be of an iterable type",
{"builtin": typ.find().name},
node.func.loc,
notes=[note])
self.engine.process(diag)
return
self._unify(node.type,
builtins.TArray(elt, types.TValue(num_dims)),
node.loc, arg.loc)
else: else:
assert False diagnose(valid_forms())
elif types.is_builtin(typ, "list"):
valid_forms = lambda: [
valid_form("list() -> list(elt='a)"),
valid_form("list(x:'a) -> list(elt='b) where 'a is iterable")
]
self._unify(node.type, builtins.TList(), node.loc, None)
if len(node.args) == 0 and len(node.keywords) == 0: if len(node.args) == 0 and len(node.keywords) == 0:
pass # [] pass # []
elif len(node.args) == 1 and len(node.keywords) == 0: elif len(node.args) == 1 and len(node.keywords) == 0:
arg, = node.args arg, = node.args
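In practice, the array() typing above behaves as in this kernel-side sketch (element types shown in comments; integer literals default to int32):

a = array([1, 2, 3])                  # array(elt=int32, num_dims=1)
b = array([[1.0, 2.0], [3.0, 4.0]])   # nested lists add dimensions -> num_dims=2
c = array([], dtype=int64)            # empty list: exactly one dimension, elt=int64
# array(42) is rejected: the argument must be of an iterable type.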
@ -879,21 +1190,69 @@ class Inferencer(algorithm.Visitor):
diagnose(valid_forms())
elif types.is_builtin(typ, "make_array"):
valid_forms = lambda: [
valid_form("numpy.full(count:int32, value:'a) -> numpy.array(elt='a)")
valid_form("numpy.full(count:int32, value:'a) -> array(elt='a, num_dims=1)"),
valid_form("numpy.full(shape:(int32,)*'b, value:'a) -> array(elt='a, num_dims='b)"),
]
self._unify(node.type, builtins.TArray(),
node.loc, None)
if len(node.args) == 2 and len(node.keywords) == 0:
arg0, arg1 = node.args
self._unify(arg0.type, builtins.TInt32(),
arg0.loc, None)
if types.is_var(arg0.type):
return # undetermined yet
elif types.is_tuple(arg0.type):
num_dims = len(arg0.type.find().elts)
self._unify(arg0.type, types.TTuple([builtins.TInt32()] * num_dims),
arg0.loc, None)
else:
num_dims = 1
self._unify(arg0.type, builtins.TInt32(),
arg0.loc, None)
self._unify(node.type, builtins.TArray(num_dims=num_dims),
node.loc, None)
self._unify(arg1.type, node.type.find()["elt"],
arg1.loc, None)
else:
diagnose(valid_forms())
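A host-side numpy illustration (not from this diff) of the two accepted numpy.full() forms listed above:

import numpy as np

a = np.full(10, 1.5)      # scalar count -> 1-dimensional array
b = np.full((2, 3), 0)    # shape tuple  -> num_dims equals the tuple length
assert a.ndim == 1 and b.shape == (2, 3)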
elif types.is_builtin(typ, "numpy.transpose"):
valid_forms = lambda: [
valid_form("transpose(x: array(elt='a, num_dims=1)) -> array(elt='a, num_dims=1)"),
valid_form("transpose(x: array(elt='a, num_dims=2)) -> array(elt='a, num_dims=2)")
]
if len(node.args) == 1 and len(node.keywords) == 0:
arg, = node.args
if types.is_var(arg.type):
pass # undetermined yet
elif not builtins.is_array(arg.type):
note = diagnostic.Diagnostic(
"note", "this expression has type {type}",
{"type": types.TypePrinter().name(arg.type)}, arg.loc)
diag = diagnostic.Diagnostic(
"error",
"the argument of {builtin}() must be an array",
{"builtin": typ.find().name},
node.func.loc,
notes=[note])
self.engine.process(diag)
else:
num_dims = arg.type.find()["num_dims"].value
if num_dims not in (1, 2):
note = diagnostic.Diagnostic(
"note", "argument is {num_dims}-dimensional",
{"num_dims": num_dims}, arg.loc)
diag = diagnostic.Diagnostic(
"error",
"{builtin}() is currently only supported for up to "
"two-dimensional arrays", {"builtin": typ.find().name},
node.func.loc,
notes=[note])
self.engine.process(diag)
else:
self._unify(node.type, arg.type, node.loc, None)
else:
diagnose(valid_forms())
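Likewise, a host-side illustration of the transpose() support added above, which is limited to one- and two-dimensional arrays:

import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6]])
assert np.transpose(m).shape == (3, 2)
v = np.array([1, 2, 3])
assert np.transpose(v).shape == (3,)   # 1-D transpose is the identity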
elif types.is_builtin(typ, "rtio_log"): elif types.is_builtin(typ, "rtio_log"):
valid_forms = lambda: [ valid_forms = lambda: [
valid_form("rtio_log(channel:str, args...) -> None"), valid_form("rtio_log(channel:str, args...) -> None"),
@ -927,9 +1286,6 @@ class Inferencer(algorithm.Visitor):
elif types.is_builtin(typ, "at_mu"): elif types.is_builtin(typ, "at_mu"):
simple_form("at_mu(time_mu:numpy.int64) -> None", simple_form("at_mu(time_mu:numpy.int64) -> None",
[builtins.TInt64()]) [builtins.TInt64()])
elif types.is_builtin(typ, "watchdog"):
simple_form("watchdog(time:float) -> [builtin context manager]",
[builtins.TFloat()], builtins.TNone())
elif types.is_constructor(typ): elif types.is_constructor(typ):
# An user-defined class. # An user-defined class.
self._unify(node.type, typ.find().instance, self._unify(node.type, typ.find().instance,
@ -938,6 +1294,55 @@ class Inferencer(algorithm.Visitor):
# Ignored. # Ignored.
self._unify(node.type, builtins.TNone(), self._unify(node.type, builtins.TNone(),
node.loc, None) node.loc, None)
elif types.is_builtin(typ, "subkernel_await"):
valid_forms = lambda: [
valid_form("subkernel_await(f: subkernel) -> f return type"),
valid_form("subkernel_await(f: subkernel, timeout: numpy.int64) -> f return type")
]
if 1 <= len(node.args) <= 2:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
if len(node.args) == 2:
arg1 = node.args[1]
if types.is_var(arg1.type):
pass
elif builtins.is_int(arg1.type):
# promote to TInt64
self._unify(arg1.type, builtins.TInt64(),
arg1.loc, None)
else:
diagnose(valid_forms())
else:
diagnose(valid_forms())
elif types.is_builtin(typ, "subkernel_preload"):
valid_forms = lambda: [
valid_form("subkernel_preload(f: subkernel) -> None")
]
if len(node.args) == 1:
arg0 = node.args[0].type
if types.is_var(arg0):
pass # undetermined yet
else:
if types.is_method(arg0):
fn = types.get_method_function(arg0)
elif types.is_function(arg0) or types.is_subkernel(arg0):
fn = arg0
else:
diagnose(valid_forms())
self._unify(node.type, fn.ret,
node.loc, None)
else:
diagnose(valid_forms())
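A sketch of the call pattern these two builtins are typed for; the subkernel self.measure, its declaration, and the surrounding kernel are assumptions for illustration, not taken from this diff.

# Sketch only; assumes a subkernel self.measure declared elsewhere with
# @subkernel(destination=...); the optional timeout is an int64 as inferred above.
@kernel
def run_on_master(self):
    subkernel_preload(self.measure)         # load the subkernel on its satellite ahead of time
    self.measure()                          # start it; runs asynchronously on the satellite
    result = subkernel_await(self.measure)  # block until it returns, with the subkernel's return type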
else: else:
assert False assert False
@ -976,6 +1381,7 @@ class Inferencer(algorithm.Visitor):
typ_args = typ.args typ_args = typ.args
typ_optargs = typ.optargs typ_optargs = typ.optargs
typ_ret = typ.ret typ_ret = typ.ret
typ_func = typ
else: else:
typ_self = types.get_method_self(typ) typ_self = types.get_method_self(typ)
typ_func = types.get_method_function(typ) typ_func = types.get_method_function(typ)
@ -1013,11 +1419,43 @@ class Inferencer(algorithm.Visitor):
self.engine.process(diag) self.engine.process(diag)
return return
# Array broadcasting for functions explicitly marked as such.
if len(node.args) == typ_arity and types.is_broadcast_across_arrays(typ):
if typ_arity == 1:
arg_type = node.args[0].type.find()
if builtins.is_array(arg_type):
typ_arg, = typ_args.values()
self._unify(typ_arg, arg_type["elt"], node.args[0].loc, None)
self._unify(node.type, builtins.TArray(typ_ret, arg_type["num_dims"]),
node.loc, None)
return
elif typ_arity == 2:
if any(builtins.is_array(arg.type) for arg in node.args):
ret, arg0, arg1 = self._coerce_binary_broadcast_op(
node.args[0], node.args[1], lambda t: typ_ret, node.loc)
node.args[0] = self._coerce_one(arg0, node.args[0],
other_node=node.args[1])
node.args[1] = self._coerce_one(arg1, node.args[1],
other_node=node.args[0])
self._unify(node.type, ret, node.loc, None)
return
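For reference, the element-wise semantics this broadcasting path provides correspond to ordinary numpy behaviour on the host (illustration only):

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)                   # unary case: applied element-wise, same shape out
z = np.arctan2(x, np.ones(3))   # binary case: operand shapes are broadcast together
assert y.shape == z.shape == (3,)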
if types.is_subkernel(typ_func) and typ_func.sid not in self.subkernel_arg_types:
self.subkernel_arg_types[typ_func.sid] = []
for actualarg, (formalname, formaltyp) in \ for actualarg, (formalname, formaltyp) in \
zip(node.args, list(typ_args.items()) + list(typ_optargs.items())): zip(node.args, list(typ_args.items()) + list(typ_optargs.items())):
self._unify(actualarg.type, formaltyp, self._unify(actualarg.type, formaltyp,
actualarg.loc, None) actualarg.loc, None)
passed_args[formalname] = actualarg.loc passed_args[formalname] = actualarg.loc
if types.is_subkernel(typ_func):
if types.is_instance(actualarg.type):
# objects cannot be passed to subkernels, as rpc code doesn't support them
diag = diagnostic.Diagnostic("error",
"argument '{name}' of type: {typ} is not supported in subkernels",
{"name": formalname, "typ": actualarg.type},
actualarg.loc, [])
self.engine.process(diag)
self.subkernel_arg_types[typ_func.sid].append((formalname, formaltyp))
for keyword in node.keywords: for keyword in node.keywords:
if keyword.arg in passed_args: if keyword.arg in passed_args:
@ -1048,7 +1486,7 @@ class Inferencer(algorithm.Visitor):
passed_args[keyword.arg] = keyword.arg_loc
for formalname in typ_args:
if formalname not in passed_args:
if formalname not in passed_args and not node.remote_fn:
note = diagnostic.Diagnostic("note",
"the called function is of type {type}",
{"type": types.TypePrinter().name(node.func.type)},
@ -1153,9 +1591,7 @@ class Inferencer(algorithm.Visitor):
typ = node.context_expr.type
if (types.is_builtin(typ, "interleave") or types.is_builtin(typ, "sequential") or
types.is_builtin(typ, "parallel") or
(isinstance(node.context_expr, asttyped.CallT) and
types.is_builtin(node.context_expr.func.type, "watchdog"))):
types.is_builtin(typ, "parallel")):
# builtin context managers
if node.optional_vars is not None:
self._unify(node.optional_vars.type, builtins.TNone(),
@ -1313,7 +1749,14 @@ class Inferencer(algorithm.Visitor):
def visit_FunctionDefT(self, node):
for index, decorator in enumerate(node.decorator_list):
if types.is_builtin(decorator.type, "kernel") or \
def eval_attr(attr):
if isinstance(attr.value, asttyped.QuoteT):
return getattr(attr.value.value, attr.attr)
return getattr(eval_attr(attr.value), attr.attr)
if isinstance(decorator, asttyped.AttributeT):
decorator = eval_attr(decorator)
if id(decorator) == id(kernel) or \
types.is_builtin(decorator.type, "kernel") or \
isinstance(decorator, asttyped.CallT) and \
types.is_builtin(decorator.func.type, "kernel"):
continue continue

View File

@ -280,7 +280,7 @@ class IODelayEstimator(algorithm.Visitor):
context="as an argument for delay_mu()") context="as an argument for delay_mu()")
call_delay = value call_delay = value
elif not types.is_builtin(typ): elif not types.is_builtin(typ):
if types.is_function(typ) or types.is_rpc(typ): if types.is_function(typ) or types.is_rpc(typ) or types.is_subkernel(typ):
offset = 0 offset = 0
elif types.is_method(typ): elif types.is_method(typ):
offset = 1 offset = 1
@ -288,7 +288,7 @@ class IODelayEstimator(algorithm.Visitor):
else:
assert False
if types.is_rpc(typ):
if types.is_rpc(typ) or types.is_subkernel(typ):
call_delay = iodelay.Const(0)
else:
delay = typ.find().delay.find()
@ -311,13 +311,20 @@ class IODelayEstimator(algorithm.Visitor):
args[arg_name] = arg_node
free_vars = delay.duration.free_vars()
node.arg_exprs = {
arg: self.evaluate(args[arg], abort=abort,
context="in the expression for argument '{}' "
"that affects I/O delay".format(arg))
for arg in free_vars
}
call_delay = delay.duration.fold(node.arg_exprs)
try:
node.arg_exprs = {
arg: self.evaluate(args[arg], abort=abort,
context="in the expression for argument '{}' "
"that affects I/O delay".format(arg))
for arg in free_vars
}
call_delay = delay.duration.fold(node.arg_exprs)
except KeyError as e:
if getattr(node, "remote_fn", False):
note = diagnostic.Diagnostic("note",
"function called here", {},
node.loc)
self.abort("due to arguments passed remotely", node.loc, note)
else:
assert False
else: else:

File diff suppressed because it is too large

View File

@ -3,6 +3,7 @@ The :mod:`types` module contains the classes describing the types
in :mod:`asttyped`. in :mod:`asttyped`.
""" """
import builtins
import string import string
from collections import OrderedDict from collections import OrderedDict
from . import iodelay from . import iodelay
@ -55,38 +56,39 @@ class TVar(Type):
def __init__(self):
self.parent = self
self.rank = 0

def find(self):
if self.parent is self:
return self
else:
# The recursive find() invocation is turned into a loop
# because paths resulting from unification of large arrays
# can easily cause a stack overflow.
root = self
while root.__class__ == TVar:
if root is root.parent:
break
else:
root = root.parent
# path compression
iter = self
while iter.__class__ == TVar:
if iter is iter.parent:
break
else:
iter, iter.parent = iter.parent, root
return root

def find(self):
parent = self.parent
if parent is self:
return self
else:
# The recursive find() invocation is turned into a loop
# because paths resulting from unification of large arrays
# can easily cause a stack overflow.
root = self
while parent.__class__ == TVar and root is not parent:
_, parent = root, root.parent = parent, parent.parent
return root.parent

def unify(self, other):
other = other.find()
if self.parent is self:
self.parent = other
else:
self.find().unify(other)

def unify(self, other):
if other is self:
return
x = other.find()
y = self.find()
if x is y:
return
if y.__class__ == TVar:
if x.__class__ == TVar:
if x.rank < y.rank:
x, y = y, x
y.parent = x
if x.rank == y.rank:
x.rank += 1
else:
y.parent = x
else:
y.unify(x)
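The rewritten find()/unify() pair is a textbook union-find with iterative path compression and union by rank; a minimal standalone sketch of the same discipline (detached from the Type classes, names are illustrative only):

class Node:
    def __init__(self):
        self.parent = self
        self.rank = 0

    def find(self):
        # iterative path halving: point each visited node at its grandparent
        node = self
        while node.parent is not node:
            node.parent = node.parent.parent
            node = node.parent
        return node

    def union(self, other):
        x, y = self.find(), other.find()
        if x is y:
            return
        if x.rank < y.rank:
            x, y = y, x
        y.parent = x          # attach the shallower tree under the deeper one
        if x.rank == y.rank:
            x.rank += 1

a, b, c = Node(), Node(), Node()
a.union(b); b.union(c)
assert a.find() is c.find()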
def fold(self, accum, fn): def fold(self, accum, fn):
if self.parent is self: if self.parent is self:
@ -95,6 +97,8 @@ class TVar(Type):
return self.find().fold(accum, fn) return self.find().fold(accum, fn)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
if self.parent is self: if self.parent is self:
return "<artiq.compiler.types.TVar %d>" % id(self) return "<artiq.compiler.types.TVar %d>" % id(self)
else: else:
@ -124,6 +128,8 @@ class TMono(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if isinstance(other, TMono) and self.name == other.name: if isinstance(other, TMono) and self.name == other.name:
assert self.params.keys() == other.params.keys() assert self.params.keys() == other.params.keys()
for param in self.params: for param in self.params:
@ -139,6 +145,8 @@ class TMono(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params)) return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params))
def __getitem__(self, param): def __getitem__(self, param):
@ -171,6 +179,8 @@ class TTuple(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if isinstance(other, TTuple) and len(self.elts) == len(other.elts): if isinstance(other, TTuple) and len(self.elts) == len(other.elts):
for selfelt, otherelt in zip(self.elts, other.elts): for selfelt, otherelt in zip(self.elts, other.elts):
selfelt.unify(otherelt) selfelt.unify(otherelt)
@ -185,6 +195,8 @@ class TTuple(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TTuple(%s)" % repr(self.elts) return "artiq.compiler.types.TTuple(%s)" % repr(self.elts)
def __eq__(self, other): def __eq__(self, other):
@ -198,8 +210,10 @@ class TTuple(Type):
return hash(tuple(self.elts)) return hash(tuple(self.elts))
class _TPointer(TMono):
def __init__(self):
super().__init__("pointer")
def __init__(self, elt=None):
if elt is None:
elt = TMono("int", {"width": 8}) # i8*
super().__init__("pointer", params={"elt": elt})
class TFunction(Type): class TFunction(Type):
""" """
@ -237,6 +251,8 @@ class TFunction(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if isinstance(other, TFunction) and \ if isinstance(other, TFunction) and \
self.args.keys() == other.args.keys() and \ self.args.keys() == other.args.keys() and \
self.optargs.keys() == other.optargs.keys(): self.optargs.keys() == other.optargs.keys():
@ -259,6 +275,8 @@ class TFunction(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TFunction({}, {}, {})".format( return "artiq.compiler.types.TFunction({}, {}, {})".format(
repr(self.args), repr(self.optargs), repr(self.ret)) repr(self.args), repr(self.optargs), repr(self.ret))
@ -273,20 +291,29 @@ class TFunction(Type):
def __hash__(self): def __hash__(self):
return hash((_freeze(self.args), _freeze(self.optargs), self.ret)) return hash((_freeze(self.args), _freeze(self.optargs), self.ret))
class TCFunction(TFunction):
"""
A function type of a runtime-provided C function.
:ivar name: (str) C function name
:ivar flags: (set of str) C function flags.
Flag ``nounwind`` means the function never raises an exception.
Flag ``nowrite`` means the function never writes any memory
that the ARTIQ Python code can observe.
"""
attributes = OrderedDict()
def __init__(self, args, ret, name, flags={}):

class TExternalFunction(TFunction):
"""
A type of an externally-provided function.
This can be any function following the C ABI, such as provided by the
C/Rust runtime, or a compiler backend intrinsic. The mangled name to link
against is encoded as part of the type.
:ivar name: (str) external symbol name.
This will be the symbol linked against (following any extra C name
mangling rules).
:ivar flags: (set of str) function flags.
Flag ``nounwind`` means the function never raises an exception.
Flag ``nowrite`` means the function never accesses any memory
that the ARTIQ Python code can observe.
:ivar broadcast_across_arrays: (bool)
If True, the function is transparently applied element-wise when called
with TArray arguments.
"""
attributes = OrderedDict()
def __init__(self, args, ret, name, flags=set(), broadcast_across_arrays=False):
assert isinstance(flags, set)
for flag in flags:
assert flag in {'nounwind', 'nowrite'}
@ -294,9 +321,12 @@ class TCFunction(TFunction):
self.name = name self.name = name
self.delay = TFixedDelay(iodelay.Const(0)) self.delay = TFixedDelay(iodelay.Const(0))
self.flags = flags self.flags = flags
self.broadcast_across_arrays = broadcast_across_arrays
def unify(self, other): def unify(self, other):
if isinstance(other, TCFunction) and \ if other is self:
return
if isinstance(other, TExternalFunction) and \
self.name == other.name: self.name == other.name:
super().unify(other) super().unify(other)
elif isinstance(other, TVar): elif isinstance(other, TVar):
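As an illustration of the constructor shown above (the specific symbol and argument list are only an example, in the spirit of the math intrinsics that use this type; module paths are assumptions):

from collections import OrderedDict
# assuming: from artiq.compiler import types, builtins

sin_type = types.TExternalFunction(
    OrderedDict([("x", builtins.TFloat())]),  # arguments
    builtins.TFloat(),                        # return type
    name="sin",                               # symbol to link against
    flags={"nounwind", "nowrite"},            # never raises, no observable writes
    broadcast_across_arrays=True)             # applied element-wise to array arguments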
@ -324,6 +354,8 @@ class TRPC(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if isinstance(other, TRPC) and \ if isinstance(other, TRPC) and \
self.service == other.service and \ self.service == other.service and \
self.is_async == other.is_async: self.is_async == other.is_async:
@ -338,6 +370,8 @@ class TRPC(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TRPC({})".format(repr(self.ret)) return "artiq.compiler.types.TRPC({})".format(repr(self.ret))
def __eq__(self, other): def __eq__(self, other):
@ -351,6 +385,50 @@ class TRPC(Type):
def __hash__(self): def __hash__(self):
return hash(self.service) return hash(self.service)
class TSubkernel(TFunction):
"""
A kernel to be run on a satellite.
:ivar args: (:class:`collections.OrderedDict` of string to :class:`Type`)
function arguments
:ivar ret: (:class:`Type`)
return type
:ivar sid: (int) subkernel ID number
:ivar destination: (int) satellite destination number
"""
attributes = OrderedDict()
def __init__(self, args, optargs, ret, sid, destination):
assert isinstance(ret, Type)
super().__init__(args, optargs, ret)
self.sid, self.destination = sid, destination
self.delay = TFixedDelay(iodelay.Const(0))
def unify(self, other):
if other is self:
return
if isinstance(other, TSubkernel) and \
self.sid == other.sid and \
self.destination == other.destination:
self.ret.unify(other.ret)
elif isinstance(other, TVar):
other.unify(self)
else:
raise UnificationError(self, other)
def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TSubkernel({})".format(repr(self.ret))
def __eq__(self, other):
return isinstance(other, TSubkernel) and \
self.sid == other.sid
def __hash__(self):
return hash(self.sid)
class TBuiltin(Type): class TBuiltin(Type):
""" """
An instance of builtin type. Every instance of a builtin An instance of builtin type. Every instance of a builtin
@ -366,6 +444,8 @@ class TBuiltin(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if self != other: if self != other:
raise UnificationError(self, other) raise UnificationError(self, other)
@ -373,6 +453,8 @@ class TBuiltin(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name)) return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name))
def __eq__(self, other): def __eq__(self, other):
@ -388,6 +470,11 @@ class TBuiltin(Type):
class TBuiltinFunction(TBuiltin): class TBuiltinFunction(TBuiltin):
""" """
A type of a builtin function. A type of a builtin function.
Builtin functions are treated specially throughout all stages of the
compilation process according to their name (e.g. calls may not actually
lower to a function call). See :class:`TExternalFunction` for externally
defined functions that are otherwise regular.
""" """
class TConstructor(TBuiltin): class TConstructor(TBuiltin):
@ -428,6 +515,8 @@ class TInstance(TMono):
self.constant_attributes = set() self.constant_attributes = set()
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TInstance({}, {})".format( return "artiq.compiler.types.TInstance({}, {})".format(
repr(self.name), repr(self.attributes)) repr(self.name), repr(self.attributes))
@ -443,6 +532,8 @@ class TModule(TMono):
self.constant_attributes = set() self.constant_attributes = set()
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TModule({}, {})".format( return "artiq.compiler.types.TModule({}, {})".format(
repr(self.name), repr(self.attributes)) repr(self.name), repr(self.attributes))
@ -471,6 +562,8 @@ class TValue(Type):
return self return self
def unify(self, other): def unify(self, other):
if other is self:
return
if isinstance(other, TVar): if isinstance(other, TVar):
other.unify(self) other.unify(self)
elif self != other: elif self != other:
@ -480,6 +573,8 @@ class TValue(Type):
return fn(accum, self) return fn(accum, self)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
return "artiq.compiler.types.TValue(%s)" % repr(self.value) return "artiq.compiler.types.TValue(%s)" % repr(self.value)
def __eq__(self, other): def __eq__(self, other):
@ -538,6 +633,8 @@ class TDelay(Type):
return not (self == other) return not (self == other)
def __repr__(self): def __repr__(self):
if getattr(builtins, "__in_sphinx__", False):
return str(self)
if self.duration is None: if self.duration is None:
return "<{}.TIndeterminateDelay>".format(__name__) return "<{}.TIndeterminateDelay>".format(__name__)
elif self.cause is None: elif self.cause is None:
@ -561,13 +658,15 @@ def is_mono(typ, name=None, **params):
if not isinstance(typ, TMono):
return False
params_match = True
if name is not None and typ.name != name:
return False
for param in params:
if param not in typ.params:
return False
params_match = params_match and \
typ.params[param].find() == params[param].find()
if typ.params[param].find() != params[param].find():
return False
return name is None or (typ.name == name and params_match)
return True
def is_polymorphic(typ): def is_polymorphic(typ):
return typ.fold(False, lambda accum, typ: accum or is_var(typ)) return typ.fold(False, lambda accum, typ: accum or is_var(typ))
@ -589,12 +688,15 @@ def is_function(typ):
def is_rpc(typ): def is_rpc(typ):
return isinstance(typ.find(), TRPC) return isinstance(typ.find(), TRPC)
def is_c_function(typ, name=None):
def is_subkernel(typ):
return isinstance(typ.find(), TSubkernel)

def is_external_function(typ, name=None):
typ = typ.find()
if name is None:
return isinstance(typ, TCFunction)
return isinstance(typ, TExternalFunction)
else:
return isinstance(typ, TCFunction) and \
return isinstance(typ, TExternalFunction) and \
typ.name == name
def is_builtin(typ, name=None): def is_builtin(typ, name=None):
@ -613,6 +715,15 @@ def is_builtin_function(typ, name=None):
return isinstance(typ, TBuiltinFunction) and \ return isinstance(typ, TBuiltinFunction) and \
typ.name == name typ.name == name
def is_broadcast_across_arrays(typ):
# For now, broadcasting is only exposed to predefined external functions, and
# statically selected. Might be extended to user-defined functions if the design
# pans out.
typ = typ.find()
if not isinstance(typ, TExternalFunction):
return False
return typ.broadcast_across_arrays
def is_constructor(typ, name=None): def is_constructor(typ, name=None):
typ = typ.find() typ = typ.find()
if name is not None: if name is not None:
@ -717,12 +828,14 @@ class TypePrinter(object):
else: else:
return "%s(%s)" % (typ.name, ", ".join( return "%s(%s)" % (typ.name, ", ".join(
["%s=%s" % (k, self.name(typ.params[k], depth + 1)) for k in typ.params])) ["%s=%s" % (k, self.name(typ.params[k], depth + 1)) for k in typ.params]))
elif isinstance(typ, _TPointer):
return "{}*".format(self.name(typ["elt"], depth + 1))
elif isinstance(typ, TTuple): elif isinstance(typ, TTuple):
if len(typ.elts) == 1: if len(typ.elts) == 1:
return "(%s,)" % self.name(typ.elts[0], depth + 1) return "(%s,)" % self.name(typ.elts[0], depth + 1)
else: else:
return "(%s)" % ", ".join([self.name(typ, depth + 1) for typ in typ.elts]) return "(%s)" % ", ".join([self.name(typ, depth + 1) for typ in typ.elts])
elif isinstance(typ, (TFunction, TCFunction)): elif isinstance(typ, (TFunction, TExternalFunction)):
args = [] args = []
args += [ "%s:%s" % (arg, self.name(typ.args[arg], depth + 1)) args += [ "%s:%s" % (arg, self.name(typ.args[arg], depth + 1))
for arg in typ.args] for arg in typ.args]
@ -736,7 +849,7 @@ class TypePrinter(object):
elif not (delay.is_fixed() and iodelay.is_zero(delay.duration)): elif not (delay.is_fixed() and iodelay.is_zero(delay.duration)):
signature += " " + self.name(delay, depth + 1) signature += " " + self.name(delay, depth + 1)
if isinstance(typ, TCFunction): if isinstance(typ, TExternalFunction):
return "[ffi {}]{}".format(repr(typ.name), signature) return "[ffi {}]{}".format(repr(typ.name), signature)
elif isinstance(typ, TFunction): elif isinstance(typ, TFunction):
return signature return signature
@ -744,6 +857,10 @@ class TypePrinter(object):
return "[rpc{} #{}](...)->{}".format(typ.service, return "[rpc{} #{}](...)->{}".format(typ.service,
" async" if typ.is_async else "", " async" if typ.is_async else "",
self.name(typ.ret, depth + 1)) self.name(typ.ret, depth + 1))
elif isinstance(typ, TSubkernel):
return "<subkernel{} dest#{}>->{}".format(typ.sid,
typ.destination,
self.name(typ.ret, depth + 1))
elif isinstance(typ, TBuiltinFunction): elif isinstance(typ, TBuiltinFunction):
return "<function {}>".format(typ.name) return "<function {}>".format(typ.name)
elif isinstance(typ, (TConstructor, TExceptionConstructor)): elif isinstance(typ, (TConstructor, TExceptionConstructor)):

View File

@ -50,3 +50,9 @@ class ConstnessValidator(algorithm.Visitor):
node.loc) node.loc)
self.engine.process(diag) self.engine.process(diag)
return return
if builtins.is_array(typ):
diag = diagnostic.Diagnostic("error",
"array attributes cannot be assigned to",
{}, node.loc)
self.engine.process(diag)
return

View File

@ -99,11 +99,23 @@ class RegionOf(algorithm.Visitor):
visit_BinOpT = visit_sometimes_allocating visit_BinOpT = visit_sometimes_allocating
def visit_CallT(self, node):
if types.is_c_function(node.func.type, "cache_get"):
if types.is_external_function(node.func.type, "cache_get"):
# The cache is borrow checked dynamically
return Global()
else:
self.visit_sometimes_allocating(node)
if (types.is_builtin_function(node.func.type, "array")
or types.is_builtin_function(node.func.type, "make_array")
or types.is_builtin_function(node.func.type, "numpy.transpose")):
or types.is_builtin_function(node.func.type, "numpy.transpose")):
# While lifetime tracking across function calls in general is currently
# broken (see below), these special builtins that allocate an array on
# the stack of the caller _always_ allocate regardless of the parameters,
# and we can thus handle them without running into the precision issue
# mentioned in commit ae999db.
return self.visit_allocating(node)
# FIXME: Return statement missing here, but see m-labs/artiq#1497 and
# commit ae999db.
self.visit_sometimes_allocating(node)
# Value lives as long as the object/container, if it's mutable, # Value lives as long as the object/container, if it's mutable,
# or else forever # or else forever
@ -156,7 +168,7 @@ class RegionOf(algorithm.Visitor):
visit_NameConstantT = visit_immutable visit_NameConstantT = visit_immutable
visit_NumT = visit_immutable visit_NumT = visit_immutable
visit_EllipsisT = visit_immutable visit_EllipsisT = visit_immutable
visit_UnaryOpT = visit_immutable
visit_UnaryOpT = visit_sometimes_allocating # possibly array op
visit_CompareT = visit_immutable visit_CompareT = visit_immutable
# Value lives forever # Value lives forever

View File

@ -80,21 +80,30 @@ def ad53xx_cmd_read_ch(channel, op):
return AD53XX_CMD_SPECIAL | AD53XX_SPECIAL_READ | (op + (channel << 7))

# maintain function definition for backward compatibility
@portable
def voltage_to_mu(voltage, offset_dacs=0x2000, vref=5.):
"""Returns the 16-bit DAC register value required to produce a given output
voltage, assuming offset and gain errors have been trimmed out.
The 16-bit register value may also be used with 14-bit DACs. The additional
bits are disregarded by 14-bit DACs.
Also used to return offset register value required to produce a given
voltage when the DAC register is set to mid-scale.
An offset of V can be used to trim out a DAC offset error of -V.
:param voltage: Voltage in SI units.
Valid voltages are: [-2*vref, + 2*vref - 1 LSB] + voltage offset.
:param offset_dacs: Register value for the two offset DACs
(default: 0x2000)
:param vref: DAC reference voltage (default: 5.)
:return: The 16-bit DAC register value
"""
return int(round(0x10000*(voltage/(4.*vref)) + offset_dacs*0x4))
code = int(round((1 << 16) * (voltage / (4. * vref)) + offset_dacs * 0x4))
if code < 0x0 or code > 0xffff:
raise ValueError("Invalid DAC voltage!")
return code
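A quick worked check of the rewritten conversion with the default offset_dacs=0x2000 and vref=5.0 (runs on the host, since the function is @portable):

assert voltage_to_mu(0.0) == 0x8000    # round(65536*0/20) + 0x2000*4
assert voltage_to_mu(5.0) == 0xc000    # round(65536*5/20) + 0x8000
# 10.0 V would give 0x10000, outside the 16-bit range -> ValueError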
class _DummyTTL: class _DummyTTL:
@ -118,9 +127,9 @@ class AD53xx:
transactions (default: 1) transactions (default: 1)
:param div_write: SPI clock divider for write operations (default: 4, :param div_write: SPI clock divider for write operations (default: 4,
50MHz max SPI clock with {t_high, t_low} >=8ns) 50MHz max SPI clock with {t_high, t_low} >=8ns)
:param div_read: SPI clock divider for read operations (default: 8, not :param div_read: SPI clock divider for read operations (default: 16, not
optimized for speed, but cf data sheet t22: 25ns min SCLK edge to SDO optimized for speed; datasheet says t22: 25ns min SCLK edge to SDO
valid) valid, and suggests the SPI speed for reads should be <=20 MHz)
:param vref: DAC reference voltage (default: 5.) :param vref: DAC reference voltage (default: 5.)
:param offset_dacs: Initial register value for the two offset DACs, device :param offset_dacs: Initial register value for the two offset DACs, device
dependent and must be set correctly for correct voltage to mu dependent and must be set correctly for correct voltage to mu
@ -169,6 +178,8 @@ class AD53xx:
self.write_offset_dacs_mu(self.offset_dacs) self.write_offset_dacs_mu(self.offset_dacs)
if not blind: if not blind:
ctrl = self.read_reg(channel=0, op=AD53XX_READ_CONTROL) ctrl = self.read_reg(channel=0, op=AD53XX_READ_CONTROL)
if ctrl == 0xffff:
raise ValueError("DAC not found")
if ctrl & 0b10000: if ctrl & 0b10000:
raise ValueError("DAC over temperature") raise ValueError("DAC over temperature")
delay(25*us) delay(25*us)
@ -222,7 +233,7 @@ class AD53xx:
def write_gain_mu(self, channel, gain=0xffff): def write_gain_mu(self, channel, gain=0xffff):
"""Program the gain register for a DAC channel. """Program the gain register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param gain: 16-bit gain register value (default: 0xffff) :param gain: 16-bit gain register value (default: 0xffff)
@ -234,7 +245,7 @@ class AD53xx:
def write_offset_mu(self, channel, offset=0x8000): def write_offset_mu(self, channel, offset=0x8000):
"""Program the offset register for a DAC channel. """Program the offset register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param offset: 16-bit offset register value (default: 0x8000) :param offset: 16-bit offset register value (default: 0x8000)
@ -247,7 +258,7 @@ class AD53xx:
"""Program the DAC offset voltage for a channel. """Program the DAC offset voltage for a channel.
An offset of +V can be used to trim out a DAC offset error of -V. An offset of +V can be used to trim out a DAC offset error of -V.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
:param voltage: the offset voltage :param voltage: the offset voltage
@ -259,7 +270,7 @@ class AD53xx:
def write_dac_mu(self, channel, value): def write_dac_mu(self, channel, value):
"""Program the DAC input register for a channel. """Program the DAC input register for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
""" """
self.bus.write( self.bus.write(
@ -269,7 +280,7 @@ class AD53xx:
def write_dac(self, channel, voltage): def write_dac(self, channel, voltage):
"""Program the DAC output voltage for a channel. """Program the DAC output voltage for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:). The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer. This method advances the timeline by the duration of one SPI transfer.
""" """
self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs, self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs,
@ -302,7 +313,7 @@ class AD53xx:
If no LDAC device was defined, the LDAC pulse is skipped. If no LDAC device was defined, the LDAC pulse is skipped.
See :meth load:. See :meth:`load`.
:param values: list of DAC values to program :param values: list of DAC values to program
:param channels: list of DAC channels to program. If not specified, :param channels: list of DAC channels to program. If not specified,
@ -344,7 +355,7 @@ class AD53xx:
""" Two-point calibration of a DAC channel. """ Two-point calibration of a DAC channel.
Programs the offset and gain register to trim out DAC errors. Does not Programs the offset and gain register to trim out DAC errors. Does not
take effect until LDAC is pulsed (see :meth load:). take effect until LDAC is pulsed (see :meth:`load`).
Calibration consists of measuring the DAC output voltage for a channel Calibration consists of measuring the DAC output voltage for a channel
with the DAC set to zero-scale (0x0000) and full-scale (0xffff). with the DAC set to zero-scale (0x0000) and full-scale (0xffff).
@ -366,3 +377,17 @@ class AD53xx:
self.core.break_realtime() self.core.break_realtime()
self.write_offset_mu(channel, 0x8000-offset_err) self.write_offset_mu(channel, 0x8000-offset_err)
self.write_gain_mu(channel, 0xffff-gain_err) self.write_gain_mu(channel, 0xffff-gain_err)
@portable
def voltage_to_mu(self, voltage):
"""Returns the 16-bit DAC register value required to produce a given
output voltage, assuming offset and gain errors have been trimmed out.
The 16-bit register value may also be used with 14-bit DACs. The
additional bits are disregarded by 14-bit DACs.
:param voltage: Voltage in SI units.
Valid voltages are: [-2*vref, + 2*vref - 1 LSB] + voltage offset.
:return: The 16-bit DAC register value
"""
return voltage_to_mu(voltage, self.offset_dacs, self.vref)
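Usage sketch of the new instance method; the experiment skeleton and the device name "zotino0" are assumptions for illustration. The bound method picks up this device's offset_dacs and vref automatically.

from artiq.experiment import EnvExperiment, kernel

class SetChannelZero(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("zotino0")          # hypothetical AD53xx-based DAC

    @kernel
    def run(self):
        self.core.reset()
        self.zotino0.init()
        mu = self.zotino0.voltage_to_mu(1.25)   # uses self.offset_dacs and self.vref
        self.zotino0.write_dac_mu(0, mu)
        self.zotino0.load()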

File diff suppressed because it is too large

View File

@ -1,23 +0,0 @@
from artiq.language.core import kernel
class AD9154:
"""Kernel interface to AD9154 registers, using non-realtime SPI."""
def __init__(self, dmgr, spi_device, chip_select):
self.core = dmgr.get("core")
self.bus = dmgr.get(spi_device)
self.chip_select = chip_select
@kernel
def setup_bus(self, div=16):
self.bus.set_config_mu(0, 24, div, self.chip_select)
@kernel
def write(self, addr, data):
self.bus.write((addr << 16) | (data<< 8))
@kernel
def read(self, addr):
self.write((1 << 15) | addr, 0)
return self.bus.read()

View File

@ -3,14 +3,16 @@ from numpy import int32, int64
from artiq.language.core import ( from artiq.language.core import (
kernel, delay, portable, delay_mu, now_mu, at_mu) kernel, delay, portable, delay_mu, now_mu, at_mu)
from artiq.language.units import us, ms from artiq.language.units import us, ms
from artiq.language.types import TBool, TInt32, TInt64, TFloat, TList, TTuple
from artiq.coredevice import spi2 as spi from artiq.coredevice import spi2 as spi
from artiq.coredevice import urukul from artiq.coredevice import urukul
from artiq.coredevice.urukul import DEFAULT_PROFILE
# Work around ARTIQ-Python import machinery # Work around ARTIQ-Python import machinery
urukul_sta_pll_lock = urukul.urukul_sta_pll_lock urukul_sta_pll_lock = urukul.urukul_sta_pll_lock
urukul_sta_smp_err = urukul.urukul_sta_smp_err urukul_sta_smp_err = urukul.urukul_sta_smp_err
__all__ = [ __all__ = [
"AD9910", "AD9910",
"PHASE_MODE_CONTINUOUS", "PHASE_MODE_ABSOLUTE", "PHASE_MODE_TRACKING", "PHASE_MODE_CONTINUOUS", "PHASE_MODE_ABSOLUTE", "PHASE_MODE_TRACKING",
@ -19,7 +21,6 @@ __all__ = [
"RAM_MODE_CONT_BIDIR_RAMP", "RAM_MODE_CONT_RAMPUP", "RAM_MODE_CONT_BIDIR_RAMP", "RAM_MODE_CONT_RAMPUP",
] ]
_PHASE_MODE_DEFAULT = -1 _PHASE_MODE_DEFAULT = -1
PHASE_MODE_CONTINUOUS = 0 PHASE_MODE_CONTINUOUS = 0
PHASE_MODE_ABSOLUTE = 1 PHASE_MODE_ABSOLUTE = 1
@ -60,6 +61,9 @@ RAM_MODE_BIDIR_RAMP = 2
RAM_MODE_CONT_BIDIR_RAMP = 3 RAM_MODE_CONT_BIDIR_RAMP = 3
RAM_MODE_CONT_RAMPUP = 4 RAM_MODE_CONT_RAMPUP = 4
# Default profile for RAM mode
_DEFAULT_PROFILE_RAM = 0
class SyncDataUser: class SyncDataUser:
def __init__(self, core, sync_delay_seed, io_update_delay): def __init__(self, core, sync_delay_seed, io_update_delay):
@ -123,22 +127,24 @@ class AD9910:
To stabilize the SYNC_IN delay tuning, run :meth:`tune_sync_delay` once To stabilize the SYNC_IN delay tuning, run :meth:`tune_sync_delay` once
and set this to the delay tap number returned (default: -1 to signal no and set this to the delay tap number returned (default: -1 to signal no
synchronization and no tuning during :meth:`init`). synchronization and no tuning during :meth:`init`).
Can be a string of the form "eeprom_device:byte_offset" to read the value Can be a string of the form "eeprom_device:byte_offset" to read the
from a I2C EEPROM; in which case, `io_update_delay` must be set to the value from a I2C EEPROM; in which case, `io_update_delay` must be set
same string value. to the same string value.
:param io_update_delay: IO_UPDATE pulse alignment delay. :param io_update_delay: IO_UPDATE pulse alignment delay.
To align IO_UPDATE to SYNC_CLK, run :meth:`tune_io_update_delay` and To align IO_UPDATE to SYNC_CLK, run :meth:`tune_io_update_delay` and
set this to the delay tap number returned. set this to the delay tap number returned.
Can be a string of the form "eeprom_device:byte_offset" to read the value Can be a string of the form "eeprom_device:byte_offset" to read the
from a I2C EEPROM; in which case, `sync_delay_seed` must be set to the value from a I2C EEPROM; in which case, `sync_delay_seed` must be set
same string value. to the same string value.
""" """
kernel_invariants = {"chip_select", "cpld", "core", "bus",
"ftw_per_hz", "sysclk_per_mu"}
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None, def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
pll_n=40, pll_cp=7, pll_vco=5, sync_delay_seed=-1, pll_n=40, pll_cp=7, pll_vco=5, sync_delay_seed=-1,
io_update_delay=0, pll_en=1): io_update_delay=0, pll_en=1):
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
"pll_en", "pll_n", "pll_vco", "pll_cp",
"ftw_per_hz", "sysclk_per_mu", "sysclk",
"sync_data"}
self.cpld = dmgr.get(cpld_device) self.cpld = dmgr.get(cpld_device)
self.core = self.cpld.core self.core = self.cpld.core
self.bus = self.cpld.bus self.bus = self.cpld.bus
@ -147,38 +153,41 @@ class AD9910:
if sw_device: if sw_device:
self.sw = dmgr.get(sw_device) self.sw = dmgr.get(sw_device)
self.kernel_invariants.add("sw") self.kernel_invariants.add("sw")
clk = self.cpld.refclk/[4, 1, 2, 4][self.cpld.clk_div] clk = self.cpld.refclk / [4, 1, 2, 4][self.cpld.clk_div]
self.pll_en = pll_en self.pll_en = pll_en
self.pll_n = pll_n self.pll_n = pll_n
self.pll_vco = pll_vco self.pll_vco = pll_vco
self.pll_cp = pll_cp self.pll_cp = pll_cp
if pll_en: if pll_en:
sysclk = clk*pll_n sysclk = clk * pll_n
assert clk <= 60e6 assert clk <= 60e6
assert 12 <= pll_n <= 127 assert 12 <= pll_n <= 127
assert 0 <= pll_vco <= 5 assert 0 <= pll_vco <= 5
vco_min, vco_max = [(370, 510), (420, 590), (500, 700), vco_min, vco_max = [(370, 510), (420, 590), (500, 700),
(600, 880), (700, 950), (820, 1150)][pll_vco] (600, 880), (700, 950), (820, 1150)][pll_vco]
assert vco_min <= sysclk/1e6 <= vco_max assert vco_min <= sysclk / 1e6 <= vco_max
assert 0 <= pll_cp <= 7 assert 0 <= pll_cp <= 7
else: else:
sysclk = clk sysclk = clk
assert sysclk <= 1e9 assert sysclk <= 1e9
self.ftw_per_hz = (1 << 32)/sysclk self.ftw_per_hz = (1 << 32) / sysclk
self.sysclk_per_mu = int(round(sysclk*self.core.ref_period)) self.sysclk_per_mu = int(round(sysclk * self.core.ref_period))
self.sysclk = sysclk self.sysclk = sysclk
if isinstance(sync_delay_seed, str) or isinstance(io_update_delay, str): if isinstance(sync_delay_seed, str) or isinstance(io_update_delay,
str):
if sync_delay_seed != io_update_delay: if sync_delay_seed != io_update_delay:
raise ValueError("When using EEPROM, sync_delay_seed must be equal to io_update_delay") raise ValueError("When using EEPROM, sync_delay_seed must be "
"equal to io_update_delay")
self.sync_data = SyncDataEeprom(dmgr, self.core, sync_delay_seed) self.sync_data = SyncDataEeprom(dmgr, self.core, sync_delay_seed)
else: else:
self.sync_data = SyncDataUser(self.core, sync_delay_seed, io_update_delay) self.sync_data = SyncDataUser(self.core, sync_delay_seed,
io_update_delay)
self.phase_mode = PHASE_MODE_CONTINUOUS self.phase_mode = PHASE_MODE_CONTINUOUS
@kernel @kernel
def set_phase_mode(self, phase_mode): def set_phase_mode(self, phase_mode: TInt32):
r"""Set the default phase mode. r"""Set the default phase mode.
for future calls to :meth:`set` and for future calls to :meth:`set` and
@ -223,7 +232,18 @@ class AD9910:
self.phase_mode = phase_mode self.phase_mode = phase_mode
@kernel @kernel
def write32(self, addr, data): def write16(self, addr: TInt32, data: TInt32):
"""Write to 16 bit register.
:param addr: Register address
:param data: Data to be written
"""
self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, 24,
urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((addr << 24) | ((data & 0xffff) << 8))
@kernel
def write32(self, addr: TInt32, data: TInt32):
"""Write to 32 bit register. """Write to 32 bit register.
:param addr: Register address :param addr: Register address
@ -237,7 +257,22 @@ class AD9910:
self.bus.write(data) self.bus.write(data)
@kernel @kernel
def read32(self, addr): def read16(self, addr: TInt32) -> TInt32:
"""Read from 16 bit register.
:param addr: Register address
"""
self.bus.set_config_mu(urukul.SPI_CONFIG, 8,
urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((addr | 0x80) << 24)
self.bus.set_config_mu(
urukul.SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
16, urukul.SPIT_DDS_RD, self.chip_select)
self.bus.write(0)
return self.bus.read()
@kernel
def read32(self, addr: TInt32) -> TInt32:
"""Read from 32 bit register. """Read from 32 bit register.
:param addr: Register address :param addr: Register address
@ -252,7 +287,7 @@ class AD9910:
return self.bus.read() return self.bus.read()
@kernel @kernel
def read64(self, addr): def read64(self, addr: TInt32) -> TInt64:
"""Read from 64 bit register. """Read from 64 bit register.
:param addr: Register address :param addr: Register address
@ -275,7 +310,7 @@ class AD9910:
return (int64(hi) << 32) | lo return (int64(hi) << 32) | lo
@kernel @kernel
def write64(self, addr, data_high, data_low): def write64(self, addr: TInt32, data_high: TInt32, data_low: TInt32):
"""Write to 64 bit register. """Write to 64 bit register.
:param addr: Register address :param addr: Register address
@ -293,14 +328,14 @@ class AD9910:
self.bus.write(data_low) self.bus.write(data_low)
@kernel @kernel
def write_ram(self, data): def write_ram(self, data: TList(TInt32)):
"""Write data to RAM. """Write data to RAM.
The profile to write to and the step, start, and end address The profile to write to and the step, start, and end address
need to be configured before and separately using need to be configured before and separately using
:meth:`set_profile_ram` and the parent CPLD `set_profile`. :meth:`set_profile_ram` and the parent CPLD `set_profile`.
:param data List(int32): Data to be written to RAM. :param data: Data to be written to RAM.
""" """
self.bus.set_config_mu(urukul.SPI_CONFIG, 8, urukul.SPIT_DDS_WR, self.bus.set_config_mu(urukul.SPI_CONFIG, 8, urukul.SPIT_DDS_WR,
self.chip_select) self.chip_select)
@ -314,14 +349,14 @@ class AD9910:
self.bus.write(data[len(data) - 1]) self.bus.write(data[len(data) - 1])
@kernel @kernel
def read_ram(self, data): def read_ram(self, data: TList(TInt32)):
"""Read data from RAM. """Read data from RAM.
The profile to read from and the step, start, and end address The profile to read from and the step, start, and end address
need to be configured before and separately using need to be configured before and separately using
:meth:`set_profile_ram` and the parent CPLD `set_profile`. :meth:`set_profile_ram` and the parent CPLD `set_profile`.
:param data List(int32): List to be filled with data read from RAM. :param data: List to be filled with data read from RAM.
""" """
self.bus.set_config_mu(urukul.SPI_CONFIG, 8, urukul.SPIT_DDS_WR, self.bus.set_config_mu(urukul.SPI_CONFIG, 8, urukul.SPIT_DDS_WR,
self.chip_select) self.chip_select)
@ -343,15 +378,25 @@ class AD9910:
data[(n - preload) + i] = self.bus.read() data[(n - preload) + i] = self.bus.read()
@kernel @kernel
def set_cfr1(self, power_down=0b0000, phase_autoclear=0, def set_cfr1(self,
drg_load_lrr=0, drg_autoclear=0, power_down: TInt32 = 0b0000,
internal_profile=0, ram_destination=0, ram_enable=0): phase_autoclear: TInt32 = 0,
drg_load_lrr: TInt32 = 0,
drg_autoclear: TInt32 = 0,
phase_clear: TInt32 = 0,
internal_profile: TInt32 = 0,
ram_destination: TInt32 = 0,
ram_enable: TInt32 = 0,
manual_osk_external: TInt32 = 0,
osk_enable: TInt32 = 0,
select_auto_osk: TInt32 = 0):
"""Set CFR1. See the AD9910 datasheet for parameter meanings. """Set CFR1. See the AD9910 datasheet for parameter meanings.
This method does not pulse IO_UPDATE. This method does not pulse IO_UPDATE.
:param power_down: Power down bits. :param power_down: Power down bits.
:param phase_autoclear: Autoclear phase accumulator. :param phase_autoclear: Autoclear phase accumulator.
:param phase_clear: Asynchronous, static reset of the phase accumulator.
:param drg_load_lrr: Load digital ramp generator LRR. :param drg_load_lrr: Load digital ramp generator LRR.
:param drg_autoclear: Autoclear digital ramp generator. :param drg_autoclear: Autoclear digital ramp generator.
:param internal_profile: Internal profile control. :param internal_profile: Internal profile control.
@ -359,19 +404,55 @@ class AD9910:
(:const:`RAM_DEST_FTW`, :const:`RAM_DEST_POW`, (:const:`RAM_DEST_FTW`, :const:`RAM_DEST_POW`,
:const:`RAM_DEST_ASF`, :const:`RAM_DEST_POWASF`). :const:`RAM_DEST_ASF`, :const:`RAM_DEST_POWASF`).
:param ram_enable: RAM mode enable. :param ram_enable: RAM mode enable.
:param manual_osk_external: Enable OSK pin control in manual OSK mode.
:param osk_enable: Enable OSK mode.
:param select_auto_osk: Select manual or automatic OSK mode.
""" """
self.write32(_AD9910_REG_CFR1, self.write32(_AD9910_REG_CFR1,
(ram_enable << 31) | (ram_enable << 31) |
(ram_destination << 29) | (ram_destination << 29) |
(manual_osk_external << 23) |
(internal_profile << 17) | (internal_profile << 17) |
(drg_load_lrr << 15) | (drg_load_lrr << 15) |
(drg_autoclear << 14) | (drg_autoclear << 14) |
(phase_autoclear << 13) | (phase_autoclear << 13) |
(phase_clear << 11) |
(osk_enable << 9) |
(select_auto_osk << 8) |
(power_down << 4) | (power_down << 4) |
2) # SDIO input only, MSB first 2) # SDIO input only, MSB first
@kernel @kernel
def init(self, blind=False): def set_cfr2(self,
asf_profile_enable: TInt32 = 1,
drg_enable: TInt32 = 0,
effective_ftw: TInt32 = 1,
sync_validation_disable: TInt32 = 0,
matched_latency_enable: TInt32 = 0):
"""Set CFR2. See the AD9910 datasheet for parameter meanings.
This method does not pulse IO_UPDATE.
:param asf_profile_enable: Enable amplitude scale from single tone profiles.
:param drg_enable: Digital ramp enable.
:param effective_ftw: Read effective FTW.
:param sync_validation_disable: Disable the SYNC_SMP_ERR pin indicating
(active high) detection of a synchronization pulse sampling error.
:param matched_latency_enable: Simultaneous application of amplitude,
phase, and frequency changes to the DDS arrive at the output
* matched_latency_enable = 0: in the order listed
* matched_latency_enable = 1: simultaneously.
"""
self.write32(_AD9910_REG_CFR2,
(asf_profile_enable << 24) |
(drg_enable << 19) |
(effective_ftw << 16) |
(matched_latency_enable << 7) |
(sync_validation_disable << 5))
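A worked check that the named-field form reproduces the magic constant it replaces in init() further down (the defaults plus sync_validation_disable=1):

# asf_profile_enable=1 (bit 24), effective_ftw=1 (bit 16), sync_validation_disable=1 (bit 5)
assert (1 << 24) | (1 << 16) | (1 << 5) == 0x01010020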
@kernel
def init(self, blind: TBool = False):
"""Initialize and configure the DDS. """Initialize and configure the DDS.
Sets up SPI mode, confirms chip presence, powers down unused blocks, Sets up SPI mode, confirms chip presence, powers down unused blocks,
@ -384,70 +465,71 @@ class AD9910:
if self.sync_data.sync_delay_seed >= 0 and not self.cpld.sync_div: if self.sync_data.sync_delay_seed >= 0 and not self.cpld.sync_div:
raise ValueError("parent cpld does not drive SYNC") raise ValueError("parent cpld does not drive SYNC")
if self.sync_data.sync_delay_seed >= 0: if self.sync_data.sync_delay_seed >= 0:
if self.sysclk_per_mu != self.sysclk*self.core.ref_period: if self.sysclk_per_mu != self.sysclk * self.core.ref_period:
raise ValueError("incorrect clock ratio for synchronization") raise ValueError("incorrect clock ratio for synchronization")
delay(50*ms) # slack delay(50 * ms) # slack
# Set SPI mode # Set SPI mode
self.set_cfr1() self.set_cfr1()
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
delay(1*ms) delay(1 * ms)
if not blind: if not blind:
# Use the AUX DAC setting to identify and confirm presence # Use the AUX DAC setting to identify and confirm presence
aux_dac = self.read32(_AD9910_REG_AUX_DAC) aux_dac = self.read32(_AD9910_REG_AUX_DAC)
if aux_dac & 0xff != 0x7f: if aux_dac & 0xff != 0x7f:
raise ValueError("Urukul AD9910 AUX_DAC mismatch") raise ValueError("Urukul AD9910 AUX_DAC mismatch")
delay(50*us) # slack delay(50 * us) # slack
# Configure PLL settings and bring up PLL # Configure PLL settings and bring up PLL
# enable amplitude scale from profiles # enable amplitude scale from profiles
# read effective FTW # read effective FTW
# sync timing validation disable (enabled later) # sync timing validation disable (enabled later)
self.write32(_AD9910_REG_CFR2, 0x01010020) self.set_cfr2(sync_validation_disable=1)
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
cfr3 = (0x0807c000 | (self.pll_vco << 24) | cfr3 = (0x0807c000 | (self.pll_vco << 24) |
(self.pll_cp << 19) | (self.pll_en << 8) | (self.pll_cp << 19) | (self.pll_en << 8) |
(self.pll_n << 1)) (self.pll_n << 1))
self.write32(_AD9910_REG_CFR3, cfr3 | 0x400) # PFD reset self.write32(_AD9910_REG_CFR3, cfr3 | 0x400) # PFD reset
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
if self.pll_en: if self.pll_en:
self.write32(_AD9910_REG_CFR3, cfr3) self.write32(_AD9910_REG_CFR3, cfr3)
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
if blind: if blind:
delay(100*ms) delay(100 * ms)
else: else:
# Wait for PLL lock, up to 100 ms # Wait for PLL lock, up to 100 ms
for i in range(100): for i in range(100):
sta = self.cpld.sta_read() sta = self.cpld.sta_read()
lock = urukul_sta_pll_lock(sta) lock = urukul_sta_pll_lock(sta)
delay(1*ms) delay(1 * ms)
if lock & (1 << self.chip_select - 4): if lock & (1 << self.chip_select - 4):
break break
if i >= 100 - 1: if i >= 100 - 1:
raise ValueError("PLL lock timeout") raise ValueError("PLL lock timeout")
delay(10*us) # slack delay(10 * us) # slack
if self.sync_data.sync_delay_seed >= 0: if self.sync_data.sync_delay_seed >= 0 and not blind:
self.tune_sync_delay(self.sync_data.sync_delay_seed) self.tune_sync_delay(self.sync_data.sync_delay_seed)
delay(1*ms) delay(1 * ms)
@kernel @kernel
def power_down(self, bits=0b1111): def power_down(self, bits: TInt32 = 0b1111):
"""Power down DDS. """Power down DDS.
:param bits: Power down bits, see datasheet :param bits: Power down bits, see datasheet
""" """
self.set_cfr1(power_down=bits) self.set_cfr1(power_down=bits)
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
# KLUDGE: ref_time_mu default argument is explicitly marked int64() to
# avoid silent truncation of explicitly passed timestamps. (Compiler bug?)
@kernel @kernel
def set_mu(self, ftw, pow_=0, asf=0x3fff, phase_mode=_PHASE_MODE_DEFAULT, def set_mu(self, ftw: TInt32 = 0, pow_: TInt32 = 0, asf: TInt32 = 0x3fff,
ref_time_mu=int64(-1), profile=0): phase_mode: TInt32 = _PHASE_MODE_DEFAULT,
"""Set profile 0 data in machine units. ref_time_mu: TInt64 = int64(-1),
profile: TInt32 = DEFAULT_PROFILE,
ram_destination: TInt32 = -1) -> TInt32:
"""Set DDS data in machine units.
This uses machine units (FTW, POW, ASF). The frequency tuning word This uses machine units (FTW, POW, ASF). The frequency tuning word
width is 32, the phase offset word width is 16, and the amplitude width is 32, the phase offset word width is 16, and the amplitude
scale factor width is 12. scale factor width is 14.
After the SPI transfer, the shared IO update pin is pulsed to After the SPI transfer, the shared IO update pin is pulsed to
activate the data. activate the data.
@ -462,7 +544,13 @@ class AD9910:
by :meth:`set_phase_mode` for this call. by :meth:`set_phase_mode` for this call.
:param ref_time_mu: Fiducial time used to compute absolute or tracking :param ref_time_mu: Fiducial time used to compute absolute or tracking
phase updates. In machine units as obtained by `now_mu()`. phase updates. In machine units as obtained by `now_mu()`.
:param profile: Profile number to set (0-7, default: 0). :param profile: Single tone profile number to set (0-7, default: 7).
Ineffective if `ram_destination` is specified.
:param ram_destination: RAM destination (:const:`RAM_DEST_FTW`,
:const:`RAM_DEST_POW`, :const:`RAM_DEST_ASF`,
:const:`RAM_DEST_POWASF`). If specified, write free DDS parameters
to the ASF/FTW/POW registers instead of to the single tone profile
register (default behaviour, see `profile`).
:return: Resulting phase offset word after application of phase :return: Resulting phase offset word after application of phase
tracking offset. When using :const:`PHASE_MODE_CONTINUOUS` in tracking offset. When using :const:`PHASE_MODE_CONTINUOUS` in
subsequent calls, use this value as the "current" phase. subsequent calls, use this value as the "current" phase.
@ -484,9 +572,18 @@ class AD9910:
# Also no need to use IO_UPDATE time as this # Also no need to use IO_UPDATE time as this
# is equivalent to an output pipeline latency. # is equivalent to an output pipeline latency.
dt = int32(now_mu()) - int32(ref_time_mu) dt = int32(now_mu()) - int32(ref_time_mu)
pow_ += dt*ftw*self.sysclk_per_mu >> 16 pow_ += dt * ftw * self.sysclk_per_mu >> 16
self.write64(_AD9910_REG_PROFILE0 + profile, if ram_destination == -1:
(asf << 16) | (pow_ & 0xffff), ftw) self.write64(_AD9910_REG_PROFILE0 + profile,
(asf << 16) | (pow_ & 0xffff), ftw)
else:
if not ram_destination == RAM_DEST_FTW:
self.set_ftw(ftw)
if not ram_destination == RAM_DEST_POWASF:
if not ram_destination == RAM_DEST_ASF:
self.set_asf(asf)
if not ram_destination == RAM_DEST_POW:
self.set_pow(pow_)
delay_mu(int64(self.sync_data.io_update_delay)) delay_mu(int64(self.sync_data.io_update_delay))
self.cpld.io_update.pulse_mu(8) # assumes 8 mu > t_SYN_CCLK self.cpld.io_update.pulse_mu(8) # assumes 8 mu > t_SYN_CCLK
at_mu(now_mu() & ~7) # clear fine TSC again at_mu(now_mu() & ~7) # clear fine TSC again
@ -496,8 +593,30 @@ class AD9910:
return pow_ return pow_
@kernel @kernel
def set_profile_ram(self, start, end, step=1, profile=0, nodwell_high=0, def get_mu(self, profile: TInt32 = DEFAULT_PROFILE
zero_crossing=0, mode=1): ) -> TTuple([TInt32, TInt32, TInt32]):
"""Get the frequency tuning word, phase offset word,
and amplitude scale factor.
.. seealso:: :meth:`get`
:param profile: Profile number to get (0-7, default: 7)
:return: A tuple ``(ftw, pow, asf)``
"""
# Read data
data = int64(self.read64(_AD9910_REG_PROFILE0 + profile))
# Extract and return fields
ftw = int32(data)
pow_ = int32((data >> 32) & 0xffff)
asf = int32((data >> 48) & 0x3fff)
return ftw, pow_, asf
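A minimal usage sketch for the machine-unit accessors above; the device names "core", "urukul0_cpld" and "urukul0_ch0" are assumptions taken from a typical Urukul device_db and may differ per system:

from artiq.experiment import *

class AD9910SetGetMu(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("urukul0_cpld")   # assumed CPLD device name
        self.setattr_device("urukul0_ch0")    # assumed AD9910 channel name

    @kernel
    def run(self):
        self.core.reset()
        self.urukul0_cpld.init()
        self.urukul0_ch0.init()
        # Program FTW/POW/ASF into the default single tone profile.
        ftw = self.urukul0_ch0.frequency_to_ftw(80*MHz)
        asf = self.urukul0_ch0.amplitude_to_asf(0.5)
        self.urukul0_ch0.set_mu(ftw, pow_=0, asf=asf)
        # Regain slack before the SPI read-back.
        self.core.break_realtime()
        ftw_rb, pow_rb, asf_rb = self.urukul0_ch0.get_mu()
        print(ftw_rb, pow_rb, asf_rb)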
@kernel
def set_profile_ram(self, start: TInt32, end: TInt32, step: TInt32 = 1,
profile: TInt32 = _DEFAULT_PROFILE_RAM,
nodwell_high: TInt32 = 0, zero_crossing: TInt32 = 0,
mode: TInt32 = 1):
"""Set the RAM profile settings. """Set the RAM profile settings.
:param start: Profile start address in RAM. :param start: Profile start address in RAM.
@ -521,69 +640,102 @@ class AD9910:
self.write64(_AD9910_REG_PROFILE0 + profile, hi, lo) self.write64(_AD9910_REG_PROFILE0 + profile, hi, lo)
@kernel @kernel
def set_ftw(self, ftw): def set_ftw(self, ftw: TInt32):
"""Set the value stored to the AD9910's frequency tuning word (FTW) register. """Set the value stored to the AD9910's frequency tuning word (FTW)
register.
:param ftw: Frequency tuning word to be stored, range: 0 to 0xffffffff. :param ftw: Frequency tuning word to be stored, range: 0 to 0xffffffff.
""" """
self.write32(_AD9910_REG_FTW, ftw) self.write32(_AD9910_REG_FTW, ftw)
@kernel @kernel
def set_asf(self, asf): def set_asf(self, asf: TInt32):
"""Set the value stored to the AD9910's amplitude scale factor (ASF) register. """Set the value stored to the AD9910's amplitude scale factor (ASF)
register.
:param asf: Amplitude scale factor to be stored, range: 0 to 0x3ffe. :param asf: Amplitude scale factor to be stored, range: 0 to 0x3fff.
""" """
self.write32(_AD9910_REG_ASF, asf<<2) self.write32(_AD9910_REG_ASF, asf << 2)
@kernel @kernel
def set_pow(self, pow_): def set_pow(self, pow_: TInt32):
"""Set the value stored to the AD9910's phase offset word (POW) register. """Set the value stored to the AD9910's phase offset word (POW)
register.
:param pow_: Phase offset word to be stored, range: 0 to 0xffff. :param pow_: Phase offset word to be stored, range: 0 to 0xffff.
""" """
self.write32(_AD9910_REG_POW, pow_) self.write16(_AD9910_REG_POW, pow_)
@kernel
def get_ftw(self) -> TInt32:
"""Get the value stored to the AD9910's frequency tuning word (FTW)
register.
:return: Frequency tuning word
"""
return self.read32(_AD9910_REG_FTW)
@kernel
def get_asf(self) -> TInt32:
"""Get the value stored to the AD9910's amplitude scale factor (ASF)
register.
:return: Amplitude scale factor
"""
return self.read32(_AD9910_REG_ASF) >> 2
@kernel
def get_pow(self) -> TInt32:
"""Get the value stored to the AD9910's phase offset word (POW)
register.
:return: Phase offset word
"""
return self.read16(_AD9910_REG_POW)
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def frequency_to_ftw(self, frequency): def frequency_to_ftw(self, frequency: TFloat) -> TInt32:
"""Return the frequency tuning word corresponding to the given """Return the 32-bit frequency tuning word corresponding to the given
frequency. frequency.
""" """
return int32(round(self.ftw_per_hz*frequency)) return int32(round(self.ftw_per_hz * frequency))
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def ftw_to_frequency(self, ftw): def ftw_to_frequency(self, ftw: TInt32) -> TFloat:
"""Return the frequency corresponding to the given frequency tuning """Return the frequency corresponding to the given frequency tuning
word. word.
""" """
return ftw / self.ftw_per_hz return ftw / self.ftw_per_hz
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def turns_to_pow(self, turns): def turns_to_pow(self, turns: TFloat) -> TInt32:
"""Return the phase offset word corresponding to the given phase """Return the 16-bit phase offset word corresponding to the given phase
in turns.""" in turns."""
return int32(round(turns*0x10000)) return int32(round(turns * 0x10000)) & int32(0xffff)
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def pow_to_turns(self, pow_): def pow_to_turns(self, pow_: TInt32) -> TFloat:
"""Return the phase in turns corresponding to a given phase offset """Return the phase in turns corresponding to a given phase offset
word.""" word."""
return pow_/0x10000 return pow_ / 0x10000
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def amplitude_to_asf(self, amplitude): def amplitude_to_asf(self, amplitude: TFloat) -> TInt32:
"""Return amplitude scale factor corresponding to given fractional """Return 14-bit amplitude scale factor corresponding to given
amplitude.""" fractional amplitude."""
return int32(round(amplitude*0x3ffe)) code = int32(round(amplitude * 0x3fff))
if code < 0 or code > 0x3fff:
raise ValueError("Invalid AD9910 fractional amplitude!")
return code
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def asf_to_amplitude(self, asf): def asf_to_amplitude(self, asf: TInt32) -> TFloat:
"""Return amplitude as a fraction of full scale corresponding to given """Return amplitude as a fraction of full scale corresponding to given
amplitude scale factor.""" amplitude scale factor."""
return asf / float(0x3ffe) return asf / float(0x3fff)
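Plain-Python arithmetic mirroring the conversions above, for illustration only (the 1 GHz system clock is an assumption matching common Urukul clocking):

sysclk = 1e9                              # assumed AD9910 system clock, Hz
ftw_per_hz = (1 << 32) / sysclk           # 32-bit frequency tuning word per Hz

ftw = round(ftw_per_hz * 80e6)            # frequency_to_ftw(80 MHz)
pow_ = round(0.25 * 0x10000) & 0xffff     # turns_to_pow(0.25) -> 0x4000
asf = round(0.5 * 0x3fff)                 # amplitude_to_asf(0.5), 14-bit full scale
assert 0 <= asf <= 0x3fff                 # range check performed by the driver
print(hex(ftw), hex(pow_), hex(asf))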
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def frequency_to_ram(self, frequency, ram): def frequency_to_ram(self, frequency: TList(TFloat), ram: TList(TInt32)):
"""Convert frequency values to RAM profile data. """Convert frequency values to RAM profile data.
To be used with :const:`RAM_DEST_FTW`. To be used with :const:`RAM_DEST_FTW`.
@ -596,7 +748,7 @@ class AD9910:
ram[i] = self.frequency_to_ftw(frequency[i]) ram[i] = self.frequency_to_ftw(frequency[i])
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def turns_to_ram(self, turns, ram): def turns_to_ram(self, turns: TList(TFloat), ram: TList(TInt32)):
"""Convert phase values to RAM profile data. """Convert phase values to RAM profile data.
To be used with :const:`RAM_DEST_POW`. To be used with :const:`RAM_DEST_POW`.
@ -609,7 +761,7 @@ class AD9910:
ram[i] = self.turns_to_pow(turns[i]) << 16 ram[i] = self.turns_to_pow(turns[i]) << 16
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def amplitude_to_ram(self, amplitude, ram): def amplitude_to_ram(self, amplitude: TList(TFloat), ram: TList(TInt32)):
"""Convert amplitude values to RAM profile data. """Convert amplitude values to RAM profile data.
To be used with :const:`RAM_DEST_ASF`. To be used with :const:`RAM_DEST_ASF`.
@ -622,7 +774,8 @@ class AD9910:
ram[i] = self.amplitude_to_asf(amplitude[i]) << 18 ram[i] = self.amplitude_to_asf(amplitude[i]) << 18
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def turns_amplitude_to_ram(self, turns, amplitude, ram): def turns_amplitude_to_ram(self, turns: TList(TFloat),
amplitude: TList(TFloat), ram: TList(TInt32)):
"""Convert phase and amplitude values to RAM profile data. """Convert phase and amplitude values to RAM profile data.
To be used with :const:`RAM_DEST_POWASF`. To be used with :const:`RAM_DEST_POWASF`.
@ -637,33 +790,65 @@ class AD9910:
self.amplitude_to_asf(amplitude[i]) << 2) self.amplitude_to_asf(amplitude[i]) << 2)
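An illustrative host-side restatement of the packing performed by amplitude_to_ram and turns_amplitude_to_ram (example values, not driver calls):

turns = [0.0, 0.125, 0.25, 0.5]
amplitudes = [1.0, 0.75, 0.5, 0.25]
ram = [0] * len(turns)
for i in range(len(ram)):
    pow_ = round(turns[i] * 0x10000) & 0xffff
    asf = round(amplitudes[i] * 0x3fff)
    # RAM_DEST_ASF layout: ASF in bits 31..18
    # RAM_DEST_POWASF layout: POW in bits 31..16, ASF in bits 15..2
    ram[i] = (pow_ << 16) | (asf << 2)
print([hex(w) for w in ram])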
@kernel @kernel
def set_frequency(self, frequency): def set_frequency(self, frequency: TFloat):
"""Set the value stored to the AD9910's frequency tuning word (FTW) register. """Set the value stored to the AD9910's frequency tuning word (FTW)
register.
:param frequency: frequency to be stored, in Hz. :param frequency: frequency to be stored, in Hz.
""" """
return self.set_ftw(self.frequency_to_ftw(frequency)) self.set_ftw(self.frequency_to_ftw(frequency))
@kernel @kernel
def set_amplitude(self, amplitude): def set_amplitude(self, amplitude: TFloat):
"""Set the value stored to the AD9910's amplitude scale factor (ASF) register. """Set the value stored to the AD9910's amplitude scale factor (ASF)
register.
:param amplitude: amplitude to be stored, in units of full scale. :param amplitude: amplitude to be stored, in units of full scale.
""" """
return self.set_asf(self.amplitude_to_asf(amplitude)) self.set_asf(self.amplitude_to_asf(amplitude))
@kernel @kernel
def set_phase(self, turns): def set_phase(self, turns: TFloat):
"""Set the value stored to the AD9910's phase offset word (POW) register. """Set the value stored to the AD9910's phase offset word (POW)
register.
:param turns: phase offset to be stored, in turns. :param turns: phase offset to be stored, in turns.
""" """
return self.set_pow(self.turns_to_pow(turns)) self.set_pow(self.turns_to_pow(turns))
@kernel @kernel
def set(self, frequency, phase=0.0, amplitude=1.0, def get_frequency(self) -> TFloat:
phase_mode=_PHASE_MODE_DEFAULT, ref_time_mu=int64(-1), profile=0): """Get the value stored to the AD9910's frequency tuning word (FTW)
"""Set profile 0 data in SI units. register.
:return: frequency in Hz.
"""
return self.ftw_to_frequency(self.get_ftw())
@kernel
def get_amplitude(self) -> TFloat:
"""Get the value stored to the AD9910's amplitude scale factor (ASF)
register.
:return: amplitude in units of full scale.
"""
return self.asf_to_amplitude(self.get_asf())
@kernel
def get_phase(self) -> TFloat:
"""Get the value stored to the AD9910's phase offset word (POW)
register.
:return: phase offset in turns.
"""
return self.pow_to_turns(self.get_pow())
@kernel
def set(self, frequency: TFloat = 0.0, phase: TFloat = 0.0,
amplitude: TFloat = 1.0, phase_mode: TInt32 = _PHASE_MODE_DEFAULT,
ref_time_mu: TInt64 = int64(-1), profile: TInt32 = DEFAULT_PROFILE,
ram_destination: TInt32 = -1) -> TFloat:
"""Set DDS data in SI units.
.. seealso:: :meth:`set_mu` .. seealso:: :meth:`set_mu`
@ -672,18 +857,38 @@ class AD9910:
:param amplitude: Amplitude in units of full scale :param amplitude: Amplitude in units of full scale
:param phase_mode: Phase mode constant :param phase_mode: Phase mode constant
:param ref_time_mu: Fiducial time stamp in machine units :param ref_time_mu: Fiducial time stamp in machine units
:param profile: Profile to affect :param profile: Single tone profile to affect.
:param ram_destination: RAM destination.
:return: Resulting phase offset in turns :return: Resulting phase offset in turns
""" """
return self.pow_to_turns(self.set_mu( return self.pow_to_turns(self.set_mu(
self.frequency_to_ftw(frequency), self.turns_to_pow(phase), self.frequency_to_ftw(frequency), self.turns_to_pow(phase),
self.amplitude_to_asf(amplitude), phase_mode, ref_time_mu, self.amplitude_to_asf(amplitude), phase_mode, ref_time_mu,
profile)) profile, ram_destination))
@kernel @kernel
def set_att_mu(self, att): def get(self, profile: TInt32 = DEFAULT_PROFILE
) -> TTuple([TFloat, TFloat, TFloat]):
"""Get the frequency, phase, and amplitude.
.. seealso:: :meth:`get_mu`
:param profile: Profile number to get (0-7, default: 7)
:return: A tuple ``(frequency, phase, amplitude)``
"""
# Get values
ftw, pow_, asf = self.get_mu(profile)
# Convert and return
return (self.ftw_to_frequency(ftw), self.pow_to_turns(pow_),
self.asf_to_amplitude(asf))
@kernel
def set_att_mu(self, att: TInt32):
"""Set digital step attenuator in machine units. """Set digital step attenuator in machine units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu` .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`
:param att: Attenuation setting, 8 bit digital. :param att: Attenuation setting, 8 bit digital.
@ -691,9 +896,11 @@ class AD9910:
self.cpld.set_att_mu(self.chip_select - 4, att) self.cpld.set_att_mu(self.chip_select - 4, att)
@kernel @kernel
def set_att(self, att): def set_att(self, att: TFloat):
"""Set digital step attenuator in SI units. """Set digital step attenuator in SI units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att` .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`
:param att: Attenuation in dB. :param att: Attenuation in dB.
@ -701,7 +908,27 @@ class AD9910:
self.cpld.set_att(self.chip_select - 4, att) self.cpld.set_att(self.chip_select - 4, att)
@kernel @kernel
def cfg_sw(self, state): def get_att_mu(self) -> TInt32:
"""Get digital step attenuator value in machine units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`
:return: Attenuation setting, 8 bit digital.
"""
return self.cpld.get_channel_att_mu(self.chip_select - 4)
@kernel
def get_att(self) -> TFloat:
"""Get digital step attenuator value in SI units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`
:return: Attenuation in dB.
"""
return self.cpld.get_channel_att(self.chip_select - 4)
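A hedged sketch exercising the SI-unit and attenuator accessors above; device names are again assumptions from a typical device_db:

from artiq.experiment import *

class AD9910SetGetSI(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("urukul0_cpld")
        self.setattr_device("urukul0_ch0")

    @kernel
    def run(self):
        self.core.reset()
        self.urukul0_cpld.init()
        self.urukul0_ch0.init()
        self.urukul0_ch0.set(frequency=100*MHz, amplitude=0.8)
        self.urukul0_ch0.set_att(10.0)        # 10 dB of attenuation
        # Regain slack before the SPI read-backs.
        self.core.break_realtime()
        freq, phase, ampl = self.urukul0_ch0.get()
        att = self.urukul0_ch0.get_att()
        print(freq, phase, ampl, att)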
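Note that both set_att and get_att act on the shared Urukul attenuator register, so the read-back reflects whatever was last written for this channel.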
@kernel
def cfg_sw(self, state: TBool):
"""Set CPLD CFG RF switch state. The RF switch is controlled by the """Set CPLD CFG RF switch state. The RF switch is controlled by the
logical or of the CPLD configuration shift register logical or of the CPLD configuration shift register
RF switch bit and the SW TTL line (if used). RF switch bit and the SW TTL line (if used).
@ -711,20 +938,26 @@ class AD9910:
self.cpld.cfg_sw(self.chip_select - 4, state) self.cpld.cfg_sw(self.chip_select - 4, state)
@kernel @kernel
def set_sync(self, in_delay, window): def set_sync(self,
in_delay: TInt32,
window: TInt32,
en_sync_gen: TInt32 = 0):
"""Set the relevant parameters in the multi device synchronization """Set the relevant parameters in the multi device synchronization
register. See the AD9910 datasheet for details. The SYNC clock register. See the AD9910 datasheet for details. The SYNC clock
generator preset value is set to zero, and the SYNC_OUT generator is generator preset value is set to zero, and the SYNC_OUT generator is
disabled. disabled by default.
:param in_delay: SYNC_IN delay tap (0-31) in steps of ~75ps :param in_delay: SYNC_IN delay tap (0-31) in steps of ~75ps
:param window: Symmetric SYNC_IN validation window (0-15) in :param window: Symmetric SYNC_IN validation window (0-15) in
steps of ~75ps for both hold and setup margin. steps of ~75ps for both hold and setup margin.
:param en_sync_gen: Whether to enable the DDS-internal sync generator
(SYNC_OUT, cf. sync_sel == 1). Should be left off for the normal
use case, where the SYNC clock is supplied by the core device.
""" """
self.write32(_AD9910_REG_SYNC, self.write32(_AD9910_REG_SYNC,
(window << 28) | # SYNC S/H validation delay (window << 28) | # SYNC S/H validation delay
(1 << 27) | # SYNC receiver enable (1 << 27) | # SYNC receiver enable
(0 << 26) | # SYNC generator disable (en_sync_gen << 26) | # SYNC generator enable
(0 << 25) | # SYNC generator SYS rising edge (0 << 25) | # SYNC generator SYS rising edge
(0 << 18) | # SYNC preset (0 << 18) | # SYNC preset
(0 << 11) | # SYNC output delay (0 << 11) | # SYNC output delay
@ -740,13 +973,15 @@ class AD9910:
Also modifies CFR2. Also modifies CFR2.
""" """
self.write32(_AD9910_REG_CFR2, 0x01010020) # clear SMP_ERR self.set_cfr2(sync_validation_disable=1) # clear SMP_ERR
self.cpld.io_update.pulse(1*us) self.cpld.io_update.pulse(1 * us)
self.write32(_AD9910_REG_CFR2, 0x01010000) # enable SMP_ERR delay(10 * us) # slack
self.cpld.io_update.pulse(1*us) self.set_cfr2(sync_validation_disable=0) # enable SMP_ERR
self.cpld.io_update.pulse(1 * us)
@kernel @kernel
def tune_sync_delay(self, search_seed=15): def tune_sync_delay(self,
search_seed: TInt32 = 15) -> TTuple([TInt32, TInt32]):
"""Find a stable SYNC_IN delay. """Find a stable SYNC_IN delay.
This method first locates a valid SYNC_IN delay at zero validation This method first locates a valid SYNC_IN delay at zero validation
@ -771,7 +1006,7 @@ class AD9910:
margin = 1 # 1*75ps setup and hold margin = 1 # 1*75ps setup and hold
for window in range(16): for window in range(16):
next_seed = -1 next_seed = -1
for in_delay in range(search_span - 2*window): for in_delay in range(search_span - 2 * window):
# alternate search direction around search_seed # alternate search direction around search_seed
if in_delay & 1: if in_delay & 1:
in_delay = -in_delay in_delay = -in_delay
@ -781,9 +1016,9 @@ class AD9910:
self.set_sync(in_delay, window) self.set_sync(in_delay, window)
self.clear_smp_err() self.clear_smp_err()
# integrate SMP_ERR statistics for a few hundred cycles # integrate SMP_ERR statistics for a few hundred cycles
delay(100*us) delay(100 * us)
err = urukul_sta_smp_err(self.cpld.sta_read()) err = urukul_sta_smp_err(self.cpld.sta_read())
delay(100*us) # slack delay(100 * us) # slack
if not (err >> (self.chip_select - 4)) & 1: if not (err >> (self.chip_select - 4)) & 1:
next_seed = in_delay next_seed = in_delay
break break
@ -795,14 +1030,15 @@ class AD9910:
window = max(min_window, window - 1 - margin) window = max(min_window, window - 1 - margin)
self.set_sync(search_seed, window) self.set_sync(search_seed, window)
self.clear_smp_err() self.clear_smp_err()
delay(100*us) # slack delay(100 * us) # slack
return search_seed, window return search_seed, window
else: else:
break break
raise ValueError("no valid window/delay") raise ValueError("no valid window/delay")
@kernel @kernel
def measure_io_update_alignment(self, delay_start, delay_stop): def measure_io_update_alignment(self, delay_start: TInt64,
delay_stop: TInt64) -> TInt32:
"""Use the digital ramp generator to locate the alignment between """Use the digital ramp generator to locate the alignment between
IO_UPDATE and SYNC_CLK. IO_UPDATE and SYNC_CLK.
@ -818,7 +1054,7 @@ class AD9910:
# set up DRG # set up DRG
self.set_cfr1(drg_load_lrr=1, drg_autoclear=1) self.set_cfr1(drg_load_lrr=1, drg_autoclear=1)
# DRG -> FTW, DRG enable # DRG -> FTW, DRG enable
self.write32(_AD9910_REG_CFR2, 0x01090000) self.set_cfr2(drg_enable=1)
# no limits # no limits
self.write64(_AD9910_REG_RAMP_LIMIT, -1, 0) self.write64(_AD9910_REG_RAMP_LIMIT, -1, 0)
# DRCTL=0, dt=1 t_SYNC_CLK # DRCTL=0, dt=1 t_SYNC_CLK
@ -837,14 +1073,14 @@ class AD9910:
at_mu(t + 0x1000 + delay_stop) at_mu(t + 0x1000 + delay_stop)
self.cpld.io_update.pulse_mu(16 - delay_stop) # realign self.cpld.io_update.pulse_mu(16 - delay_stop) # realign
ftw = self.read32(_AD9910_REG_FTW) # read out effective FTW ftw = self.read32(_AD9910_REG_FTW) # read out effective FTW
delay(100*us) # slack delay(100 * us) # slack
# disable DRG # disable DRG
self.write32(_AD9910_REG_CFR2, 0x01010000) self.set_cfr2(drg_enable=0)
self.cpld.io_update.pulse_mu(8) self.cpld.io_update.pulse_mu(8)
return ftw & 1 return ftw & 1
@kernel @kernel
def tune_io_update_delay(self): def tune_io_update_delay(self) -> TInt32:
"""Find a stable IO_UPDATE delay alignment. """Find a stable IO_UPDATE delay alignment.
Scan through increasing IO_UPDATE delays until a delay is found that Scan through increasing IO_UPDATE delays until a delay is found that
@ -881,5 +1117,5 @@ class AD9910:
"no clear IO_UPDATE-SYNC_CLK alignment edge found") "no clear IO_UPDATE-SYNC_CLK alignment edge found")
else: else:
# the good delay is period//2 after the edge # the good delay is period//2 after the edge
return (i + 1 + period//2) & (period - 1) return (i + 1 + period // 2) & (period - 1)
raise ValueError("no IO_UPDATE-SYNC_CLK alignment edge found") raise ValueError("no IO_UPDATE-SYNC_CLK alignment edge found")

artiq/coredevice/ad9912.py

@ -1,5 +1,6 @@
from numpy import int32, int64 from numpy import int32, int64
from artiq.language.types import TInt32, TInt64, TFloat, TTuple, TBool
from artiq.language.core import kernel, delay, portable from artiq.language.core import kernel, delay, portable
from artiq.language.units import ms, us, ns from artiq.language.units import ms, us, ns
from artiq.coredevice.ad9912_reg import * from artiq.coredevice.ad9912_reg import *
@ -24,11 +25,14 @@ class AD9912:
f_ref/clk_div*pll_n where f_ref is the reference frequency and clk_div f_ref/clk_div*pll_n where f_ref is the reference frequency and clk_div
is the reference clock divider (both set in the parent Urukul CPLD is the reference clock divider (both set in the parent Urukul CPLD
instance). instance).
:param pll_en: PLL enable bit, set to 0 to bypass PLL (default: 1).
Note that when bypassing the PLL the red front panel LED may remain on.
""" """
kernel_invariants = {"chip_select", "cpld", "core", "bus", "ftw_per_hz"}
def __init__(self, dmgr, chip_select, cpld_device, sw_device=None, def __init__(self, dmgr, chip_select, cpld_device, sw_device=None,
pll_n=10): pll_n=10, pll_en=1):
self.kernel_invariants = {"cpld", "core", "bus", "chip_select",
"pll_n", "pll_en", "ftw_per_hz"}
self.cpld = dmgr.get(cpld_device) self.cpld = dmgr.get(cpld_device)
self.core = self.cpld.core self.core = self.cpld.core
self.bus = self.cpld.bus self.bus = self.cpld.bus
@ -37,13 +41,17 @@ class AD9912:
if sw_device: if sw_device:
self.sw = dmgr.get(sw_device) self.sw = dmgr.get(sw_device)
self.kernel_invariants.add("sw") self.kernel_invariants.add("sw")
self.pll_en = pll_en
self.pll_n = pll_n self.pll_n = pll_n
sysclk = self.cpld.refclk/[1, 1, 2, 4][self.cpld.clk_div]*pll_n if pll_en:
sysclk = self.cpld.refclk / [1, 1, 2, 4][self.cpld.clk_div] * pll_n
else:
sysclk = self.cpld.refclk
assert sysclk <= 1e9 assert sysclk <= 1e9
self.ftw_per_hz = 1/sysclk*(int64(1) << 48) self.ftw_per_hz = 1 / sysclk * (int64(1) << 48)
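Host-side arithmetic for the clocking above; the 100 MHz reference, clk_div index 0 and the default pll_n = 10 are assumptions:

refclk = 100e6                 # assumed Urukul reference clock, Hz
clk_div_sel = 0                # CPLD clk_div setting -> divider [1, 1, 2, 4][0] == 1
pll_n = 10
pll_en = 1

if pll_en:
    sysclk = refclk / [1, 1, 2, 4][clk_div_sel] * pll_n   # 1 GHz
else:
    sysclk = refclk                                       # PLL bypassed
assert sysclk <= 1e9
ftw_per_hz = (1 << 48) / sysclk          # 48-bit FTW scale
print(round(ftw_per_hz * 10e6))          # FTW for a 10 MHz output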
@kernel @kernel
def write(self, addr, data, length): def write(self, addr: TInt32, data: TInt32, length: TInt32):
"""Variable length write to a register. """Variable length write to a register.
Up to 4 bytes. Up to 4 bytes.
@ -54,14 +62,14 @@ class AD9912:
assert length > 0 assert length > 0
assert length <= 4 assert length <= 4
self.bus.set_config_mu(urukul.SPI_CONFIG, 16, self.bus.set_config_mu(urukul.SPI_CONFIG, 16,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((addr | ((length - 1) << 13)) << 16) self.bus.write((addr | ((length - 1) << 13)) << 16)
self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, length*8, self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, length * 8,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write(data << (32 - length*8)) self.bus.write(data << (32 - length * 8))
@kernel @kernel
def read(self, addr, length): def read(self, addr: TInt32, length: TInt32) -> TInt32:
"""Variable length read from a register. """Variable length read from a register.
Up to 4 bytes. Up to 4 bytes.
@ -72,15 +80,15 @@ class AD9912:
assert length > 0 assert length > 0
assert length <= 4 assert length <= 4
self.bus.set_config_mu(urukul.SPI_CONFIG, 16, self.bus.set_config_mu(urukul.SPI_CONFIG, 16,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((addr | ((length - 1) << 13) | 0x8000) << 16) self.bus.write((addr | ((length - 1) << 13) | 0x8000) << 16)
self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END
| spi.SPI_INPUT, length*8, | spi.SPI_INPUT, length * 8,
urukul.SPIT_DDS_RD, self.chip_select) urukul.SPIT_DDS_RD, self.chip_select)
self.bus.write(0) self.bus.write(0)
data = self.bus.read() data = self.bus.read()
if length < 4: if length < 4:
data &= (1 << (length*8)) - 1 data &= (1 << (length * 8)) - 1
return data return data
@kernel @kernel
@ -93,26 +101,30 @@ class AD9912:
""" """
# SPI mode # SPI mode
self.write(AD9912_SER_CONF, 0x99, length=1) self.write(AD9912_SER_CONF, 0x99, length=1)
self.cpld.io_update.pulse(2*us) self.cpld.io_update.pulse(2 * us)
# Verify chip ID and presence # Verify chip ID and presence
prodid = self.read(AD9912_PRODIDH, length=2) prodid = self.read(AD9912_PRODIDH, length=2)
if (prodid != 0x1982) and (prodid != 0x1902): if (prodid != 0x1982) and (prodid != 0x1902):
raise ValueError("Urukul AD9912 product id mismatch") raise ValueError("Urukul AD9912 product id mismatch")
delay(50*us) delay(50 * us)
# HSTL power down, CMOS power down # HSTL power down, CMOS power down
self.write(AD9912_PWRCNTRL1, 0x80, length=1) pwrcntrl1 = 0x80 | ((~self.pll_en & 1) << 4)
self.cpld.io_update.pulse(2*us) self.write(AD9912_PWRCNTRL1, pwrcntrl1, length=1)
self.write(AD9912_N_DIV, self.pll_n//2 - 2, length=1) self.cpld.io_update.pulse(2 * us)
self.cpld.io_update.pulse(2*us) if self.pll_en:
# I_cp = 375 µA, VCO high range self.write(AD9912_N_DIV, self.pll_n // 2 - 2, length=1)
self.write(AD9912_PLLCFG, 0b00000101, length=1) self.cpld.io_update.pulse(2 * us)
self.cpld.io_update.pulse(2*us) # I_cp = 375 µA, VCO high range
delay(1*ms) self.write(AD9912_PLLCFG, 0b00000101, length=1)
self.cpld.io_update.pulse(2 * us)
delay(1 * ms)
@kernel @kernel
def set_att_mu(self, att): def set_att_mu(self, att: TInt32):
"""Set digital step attenuator in machine units. """Set digital step attenuator in machine units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu` .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att_mu`
:param att: Attenuation setting, 8 bit digital. :param att: Attenuation setting, 8 bit digital.
@ -120,9 +132,11 @@ class AD9912:
self.cpld.set_att_mu(self.chip_select - 4, att) self.cpld.set_att_mu(self.chip_select - 4, att)
@kernel @kernel
def set_att(self, att): def set_att(self, att: TFloat):
"""Set digital step attenuator in SI units. """Set digital step attenuator in SI units.
This method will write the attenuator settings of all four channels.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att` .. seealso:: :meth:`artiq.coredevice.urukul.CPLD.set_att`
:param att: Attenuation in dB. Higher values mean more attenuation. :param att: Attenuation in dB. Higher values mean more attenuation.
@ -130,62 +144,125 @@ class AD9912:
self.cpld.set_att(self.chip_select - 4, att) self.cpld.set_att(self.chip_select - 4, att)
@kernel @kernel
def set_mu(self, ftw, pow): def get_att_mu(self) -> TInt32:
"""Get digital step attenuator value in machine units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att_mu`
:return: Attenuation setting, 8 bit digital.
"""
return self.cpld.get_channel_att_mu(self.chip_select - 4)
@kernel
def get_att(self) -> TFloat:
"""Get digital step attenuator value in SI units.
.. seealso:: :meth:`artiq.coredevice.urukul.CPLD.get_channel_att`
:return: Attenuation in dB.
"""
return self.cpld.get_channel_att(self.chip_select - 4)
@kernel
def set_mu(self, ftw: TInt64, pow_: TInt32 = 0):
"""Set profile 0 data in machine units. """Set profile 0 data in machine units.
After the SPI transfer, the shared IO update pin is pulsed to After the SPI transfer, the shared IO update pin is pulsed to
activate the data. activate the data.
:param ftw: Frequency tuning word: 32 bit unsigned. :param ftw: Frequency tuning word: 48 bit unsigned.
:param pow: Phase tuning word: 16 bit unsigned. :param pow_: Phase tuning word: 16 bit unsigned.
""" """
# streaming transfer of FTW and POW # streaming transfer of FTW and POW
self.bus.set_config_mu(urukul.SPI_CONFIG, 16, self.bus.set_config_mu(urukul.SPI_CONFIG, 16,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((AD9912_POW1 << 16) | (3 << 29)) self.bus.write((AD9912_POW1 << 16) | (3 << 29))
self.bus.set_config_mu(urukul.SPI_CONFIG, 32, self.bus.set_config_mu(urukul.SPI_CONFIG, 32,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write((pow << 16) | (int32(ftw >> 32) & 0xffff)) self.bus.write((pow_ << 16) | (int32(ftw >> 32) & 0xffff))
self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, 32, self.bus.set_config_mu(urukul.SPI_CONFIG | spi.SPI_END, 32,
urukul.SPIT_DDS_WR, self.chip_select) urukul.SPIT_DDS_WR, self.chip_select)
self.bus.write(int32(ftw)) self.bus.write(int32(ftw))
self.cpld.io_update.pulse(10*ns) self.cpld.io_update.pulse(10 * ns)
@kernel
def get_mu(self) -> TTuple([TInt64, TInt32]):
"""Get the frequency tuning word and phase offset word.
.. seealso:: :meth:`get`
:return: A tuple ``(ftw, pow)``.
"""
# Read data
high = self.read(AD9912_POW1, 4)
self.core.break_realtime() # Regain slack to perform second read
low = self.read(AD9912_FTW3, 4)
# Extract and return fields
ftw = (int64(high & 0xffff) << 32) | (int64(low) & int64(0xffffffff))
pow_ = (high >> 16) & 0x3fff
return ftw, pow_
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def frequency_to_ftw(self, frequency): def frequency_to_ftw(self, frequency: TFloat) -> TInt64:
"""Returns the frequency tuning word corresponding to the given """Returns the 48-bit frequency tuning word corresponding to the given
frequency. frequency.
""" """
return int64(round(self.ftw_per_hz*frequency)) return int64(round(self.ftw_per_hz * frequency)) & (
(int64(1) << 48) - 1)
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def ftw_to_frequency(self, ftw): def ftw_to_frequency(self, ftw: TInt64) -> TFloat:
"""Returns the frequency corresponding to the given """Returns the frequency corresponding to the given
frequency tuning word. frequency tuning word.
""" """
return ftw/self.ftw_per_hz return ftw / self.ftw_per_hz
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def turns_to_pow(self, phase): def turns_to_pow(self, phase: TFloat) -> TInt32:
"""Returns the phase offset word corresponding to the given """Returns the 16-bit phase offset word corresponding to the given
phase. phase.
""" """
return int32(round((1 << 14)*phase)) return int32(round((1 << 14) * phase)) & 0xffff
@portable(flags={"fast-math"})
def pow_to_turns(self, pow_: TInt32) -> TFloat:
"""Return the phase in turns corresponding to a given phase offset
word.
:param pow_: Phase offset word.
:return: Phase in turns.
"""
return pow_ / (1 << 14)
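Illustrative arithmetic for the 14-bit phase scaling above (note the different scale from the AD9910's 2**16):

pow_ = round((1 << 14) * 0.25) & 0xffff   # turns_to_pow(0.25) -> 0x1000
assert pow_ == 0x1000
print(pow_ / (1 << 14))                   # pow_to_turns round trip -> 0.25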
@kernel @kernel
def set(self, frequency, phase=0.0): def set(self, frequency: TFloat, phase: TFloat = 0.0):
"""Set profile 0 data in SI units. """Set profile 0 data in SI units.
.. seealso:: :meth:`set_mu` .. seealso:: :meth:`set_mu`
:param ftw: Frequency in Hz :param frequency: Frequency in Hz
:param pow: Phase tuning word in turns :param phase: Phase tuning word in turns
""" """
self.set_mu(self.frequency_to_ftw(frequency), self.set_mu(self.frequency_to_ftw(frequency),
self.turns_to_pow(phase)) self.turns_to_pow(phase))
@kernel @kernel
def cfg_sw(self, state): def get(self) -> TTuple([TFloat, TFloat]):
"""Get the frequency and phase.
.. seealso:: :meth:`get_mu`
:return: A tuple ``(frequency, phase)``.
"""
# Get values
ftw, pow_ = self.get_mu()
# Convert and return
return self.ftw_to_frequency(ftw), self.pow_to_turns(pow_)
@kernel
def cfg_sw(self, state: TBool):
"""Set CPLD CFG RF switch state. The RF switch is controlled by the """Set CPLD CFG RF switch state. The RF switch is controlled by the
logical or of the CPLD configuration shift register logical or of the CPLD configuration shift register
RF switch bit and the SW TTL line (if used). RF switch bit and the SW TTL line (if used).

artiq/coredevice/ad9914.py

@ -80,6 +80,13 @@ class AD9914:
self.set_x_duration_mu = 7 * self.write_duration_mu self.set_x_duration_mu = 7 * self.write_duration_mu
self.exit_x_duration_mu = 3 * self.write_duration_mu self.exit_x_duration_mu = 3 * self.write_duration_mu
@staticmethod
def get_rtio_channels(bus_channel, channel, **kwargs):
# return only the first entry, as several devices share the same RTIO channel # return only the first entry, as several devices share the same RTIO channel
if channel == 0:
return [(bus_channel, None)]
return []
@kernel @kernel
def write(self, addr, data): def write(self, addr, data):
rtio_output((self.bus_channel << 8) | addr, data) rtio_output((self.bus_channel << 8) | addr, data)
@ -236,10 +243,10 @@ class AD9914:
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def frequency_to_ftw(self, frequency): def frequency_to_ftw(self, frequency):
"""Returns the frequency tuning word corresponding to the given """Returns the 32-bit frequency tuning word corresponding to the given
frequency. frequency.
""" """
return round(float(int64(2)**32*frequency/self.sysclk)) return int32(round(float(int64(2)**32*frequency/self.sysclk)))
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def ftw_to_frequency(self, ftw): def ftw_to_frequency(self, ftw):
@ -250,9 +257,9 @@ class AD9914:
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def turns_to_pow(self, turns): def turns_to_pow(self, turns):
"""Returns the phase offset word corresponding to the given phase """Returns the 16-bit phase offset word corresponding to the given
in turns.""" phase in turns."""
return round(float(turns*2**16)) return round(float(turns*2**16)) & 0xffff
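A small check of the & 0xffff wrap introduced above: a negative phase now maps onto the unsigned 16-bit word instead of leaving the valid range (illustrative values):

assert round(float(-0.25 * 2**16)) & 0xffff == 0xc000   # -0.25 turns wraps to 0.75 turns
assert round(float(0.25 * 2**16)) & 0xffff == 0x4000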
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def pow_to_turns(self, pow): def pow_to_turns(self, pow):
@ -262,8 +269,12 @@ class AD9914:
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def amplitude_to_asf(self, amplitude): def amplitude_to_asf(self, amplitude):
"""Returns amplitude scale factor corresponding to given amplitude.""" """Returns 12-bit amplitude scale factor corresponding to given
return round(float(amplitude*0x0fff)) amplitude."""
code = round(float(amplitude * 0x0fff))
if code < 0 or code > 0xfff:
raise ValueError("Invalid AD9914 amplitude!")
return code
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def asf_to_amplitude(self, asf): def asf_to_amplitude(self, asf):
@ -314,10 +325,11 @@ class AD9914:
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def frequency_to_xftw(self, frequency): def frequency_to_xftw(self, frequency):
"""Returns the frequency tuning word corresponding to the given """Returns the 63-bit frequency tuning word corresponding to the given
frequency (extended resolution mode). frequency (extended resolution mode).
""" """
return int64(round(2.0*float(int64(2)**62)*frequency/self.sysclk)) return int64(round(2.0*float(int64(2)**62)*frequency/self.sysclk)) & (
(int64(1) << 63) - 1)
@portable(flags={"fast-math"}) @portable(flags={"fast-math"})
def xftw_to_frequency(self, xftw): def xftw_to_frequency(self, xftw):

artiq/coredevice/adf5356.py (new file, 598 lines)

@ -0,0 +1,598 @@
"""RTIO driver for the Analog Devices ADF[45]35[56] family of GHz PLLs
on Mirny-style prefixed SPI buses.
"""
# https://github.com/analogdevicesinc/linux/blob/master/Documentation/devicetree/bindings/iio/frequency/adf5355.txt
# https://github.com/analogdevicesinc/linux/blob/master/drivers/iio/frequency/adf5355.c
# https://www.analog.com/media/en/technical-documentation/data-sheets/ADF5355.pdf
# https://www.analog.com/media/en/technical-documentation/user-guides/EV-ADF5355SD1Z-UG-1087.pdf
from artiq.language.core import kernel, portable, delay
from artiq.language.units import us, GHz, MHz
from artiq.language.types import TInt32, TInt64
from artiq.coredevice import spi2 as spi
from artiq.coredevice.adf5356_reg import *
from numpy import int32, int64, floor, ceil
SPI_CONFIG = (
0 * spi.SPI_OFFLINE
| 0 * spi.SPI_END
| 0 * spi.SPI_INPUT
| 1 * spi.SPI_CS_POLARITY
| 0 * spi.SPI_CLK_POLARITY
| 0 * spi.SPI_CLK_PHASE
| 0 * spi.SPI_LSB_FIRST
| 0 * spi.SPI_HALF_DUPLEX
)
ADF5356_MIN_VCO_FREQ = int64(3.4 * GHz)
ADF5356_MAX_VCO_FREQ = int64(6.8 * GHz)
ADF5356_MAX_FREQ_PFD = int32(125.0 * MHz)
ADF5356_MODULUS1 = int32(1 << 24)
ADF5356_MAX_MODULUS2 = int32(1 << 28) # FIXME: ADF5356 has 28 bits MOD2
ADF5356_MAX_R_CNT = int32(1023)
class ADF5356:
"""Analog Devices AD[45]35[56] family of GHz PLLs.
:param cpld_device: Mirny CPLD device name
:param sw_device: Mirny RF switch device name
:param channel: Mirny RF channel index
:param ref_doubler: enable/disable reference clock doubler
:param ref_divider: enable/disable reference clock divide-by-2
:param core_device: Core device name (default: "core")
"""
kernel_invariants = {"cpld", "sw", "channel", "core", "sysclk"}
def __init__(
self,
dmgr,
cpld_device,
sw_device,
channel,
ref_doubler=False,
ref_divider=False,
core="core",
):
self.cpld = dmgr.get(cpld_device)
self.sw = dmgr.get(sw_device)
self.channel = channel
self.core = dmgr.get(core)
self.ref_doubler = ref_doubler
self.ref_divider = ref_divider
self.sysclk = self.cpld.refclk
assert 10 <= self.sysclk / 1e6 <= 600
self._init_registers()
@staticmethod
def get_rtio_channels(**kwargs):
return []
@kernel
def init(self, blind=False):
"""
Initialize and configure the PLL.
:param blind: Do not attempt to verify presence.
"""
if not blind:
# MUXOUT = VDD
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 1)
self.sync()
delay(1000 * us)
if not self.read_muxout():
raise ValueError("MUXOUT not high")
delay(800 * us)
# MUXOUT = DGND
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 2)
self.sync()
delay(1000 * us)
if self.read_muxout():
raise ValueError("MUXOUT not low")
delay(800 * us)
# MUXOUT = digital lock-detect
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 6)
else:
self.sync()
@kernel
def set_att(self, att):
"""Set digital step attenuator in SI units.
This method will write the attenuator settings of the channel.
.. seealso:: :meth:`artiq.coredevice.mirny.Mirny.set_att`
:param att: Attenuation in dB.
"""
self.cpld.set_att(self.channel, att)
@kernel
def set_att_mu(self, att):
"""Set digital step attenuator in machine units.
:param att: Attenuation setting, 8 bit digital.
"""
self.cpld.set_att_mu(self.channel, att)
@kernel
def write(self, data):
self.cpld.write_ext(self.channel | 4, 32, data)
@kernel
def read_muxout(self):
"""
Read the state of the MUXOUT line.
By default, this is configured to be the digital lock detection.
"""
return bool(self.cpld.read_reg(0) & (1 << (self.channel + 8)))
@kernel
def set_output_power_mu(self, n):
"""
Set the power level at output A of the PLL chip in machine units.
This driver defaults to `n = 3` at init.
:param n: output power setting, 0, 1, 2, or 3 (see ADF5356 datasheet, fig. 44).
"""
if n not in [0, 1, 2, 3]:
raise ValueError("invalid power setting")
self.regs[6] = ADF5356_REG6_RF_OUTPUT_A_POWER_UPDATE(self.regs[6], n)
self.sync()
@portable
def output_power_mu(self):
"""
Return the power level at output A of the PLL chip in machine units.
"""
return ADF5356_REG6_RF_OUTPUT_A_POWER_GET(self.regs[6])
@kernel
def enable_output(self):
"""
Enable output A of the PLL chip. This is the default after init.
"""
self.regs[6] |= ADF5356_REG6_RF_OUTPUT_A_ENABLE(1)
self.sync()
@kernel
def disable_output(self):
"""
Disable output A of the PLL chip.
"""
self.regs[6] &= ~ADF5356_REG6_RF_OUTPUT_A_ENABLE(1)
self.sync()
@kernel
def set_frequency(self, f):
"""
Output given frequency on output A.
:param f: 53.125 MHz <= f <= 6800 MHz
"""
freq = int64(round(f))
if freq > ADF5356_MAX_VCO_FREQ:
raise ValueError("Requested too high frequency")
# select minimal output divider
rf_div_sel = 0
while freq < ADF5356_MIN_VCO_FREQ:
freq <<= 1
rf_div_sel += 1
if (1 << rf_div_sel) > 64:
raise ValueError("Requested too low frequency")
# choose reference divider that maximizes PFD frequency
self.regs[4] = ADF5356_REG4_R_COUNTER_UPDATE(
self.regs[4], self._compute_reference_counter()
)
f_pfd = self.f_pfd()
# choose prescaler
if freq > int64(6e9):
self.regs[0] |= ADF5356_REG0_PRESCALER(1) # 8/9
n_min, n_max = 75, 65535
# adjust reference divider to be able to match n_min constraint
while n_min * f_pfd > freq:
r = ADF5356_REG4_R_COUNTER_GET(self.regs[4])
self.regs[4] = ADF5356_REG4_R_COUNTER_UPDATE(self.regs[4], r + 1)
f_pfd = self.f_pfd()
else:
self.regs[0] &= ~ADF5356_REG0_PRESCALER(1) # 4/5
n_min, n_max = 23, 32767
# calculate PLL parameters
n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb) = calculate_pll(
freq, f_pfd
)
if not (n_min <= n <= n_max):
raise ValueError("Invalid INT value")
# configure PLL
self.regs[0] = ADF5356_REG0_INT_VALUE_UPDATE(self.regs[0], n)
self.regs[1] = ADF5356_REG1_MAIN_FRAC_VALUE_UPDATE(self.regs[1], frac1)
self.regs[2] = ADF5356_REG2_AUX_FRAC_LSB_VALUE_UPDATE(self.regs[2], frac2_lsb)
self.regs[2] = ADF5356_REG2_AUX_MOD_LSB_VALUE_UPDATE(self.regs[2], mod2_lsb)
self.regs[13] = ADF5356_REG13_AUX_FRAC_MSB_VALUE_UPDATE(
self.regs[13], frac2_msb
)
self.regs[13] = ADF5356_REG13_AUX_MOD_MSB_VALUE_UPDATE(self.regs[13], mod2_msb)
self.regs[6] = ADF5356_REG6_RF_DIVIDER_SELECT_UPDATE(self.regs[6], rf_div_sel)
self.regs[6] = ADF5356_REG6_CP_BLEED_CURRENT_UPDATE(
self.regs[6], int32(floor(24 * f_pfd / (61.44 * MHz)))
)
self.regs[9] = ADF5356_REG9_VCO_BAND_DIVISION_UPDATE(
self.regs[9], int32(ceil(f_pfd / 160e3))
)
# commit
self.sync()
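A hedged usage sketch for the new driver; the device names "core", "mirny0_cpld" and "mirny0_ch0" are assumptions from a typical Mirny device_db:

from artiq.experiment import *

class ADF5356Example(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("mirny0_cpld")   # assumed Mirny CPLD device name
        self.setattr_device("mirny0_ch0")    # assumed ADF5356 channel device name

    @kernel
    def run(self):
        self.core.reset()
        self.mirny0_cpld.init()
        self.mirny0_ch0.init()
        self.mirny0_ch0.set_att(10.0)         # 10 dB on this channel
        self.mirny0_ch0.set_frequency(2.5*GHz)
        delay(100*ms)                         # allow the PLL to settle
        if not self.mirny0_ch0.read_muxout(): # MUXOUT = digital lock detect after init
            print("PLL not locked")
        self.mirny0_ch0.sw.on()               # open the RF switch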
@kernel
def sync(self):
"""
Write all registers to the device. Attempts to lock the PLL.
"""
f_pfd = self.f_pfd()
delay(200 * us) # Slack
if f_pfd <= 75.0 * MHz:
for i in range(13, 0, -1):
self.write(self.regs[i])
delay(200 * us)
self.write(self.regs[0] | ADF5356_REG0_AUTOCAL(1))
else:
# AUTOCAL AT HALF PFD FREQUENCY
# calculate PLL at f_pfd/2
n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb) = calculate_pll(
self.f_vco(), f_pfd >> 1
)
delay(200 * us) # Slack
self.write(
13
| ADF5356_REG13_AUX_FRAC_MSB_VALUE(frac2_msb)
| ADF5356_REG13_AUX_MOD_MSB_VALUE(mod2_msb)
)
for i in range(12, 4, -1):
self.write(self.regs[i])
self.write(
ADF5356_REG4_R_COUNTER_UPDATE(self.regs[4], 2 * self.ref_counter())
)
self.write(self.regs[3])
self.write(
2
| ADF5356_REG2_AUX_MOD_LSB_VALUE(mod2_lsb)
| ADF5356_REG2_AUX_FRAC_LSB_VALUE(frac2_lsb)
)
self.write(1 | ADF5356_REG1_MAIN_FRAC_VALUE(frac1))
delay(200 * us)
self.write(ADF5356_REG0_INT_VALUE(n) | ADF5356_REG0_AUTOCAL(1))
# RELOCK AT WANTED PFD FREQUENCY
for i in [4, 2, 1]:
self.write(self.regs[i])
# force-disable autocal
self.write(self.regs[0] & ~ADF5356_REG0_AUTOCAL(1))
@portable
def f_pfd(self) -> TInt64:
"""
Return the PFD frequency for the cached set of registers.
"""
r = ADF5356_REG4_R_COUNTER_GET(self.regs[4])
d = ADF5356_REG4_R_DOUBLER_GET(self.regs[4])
t = ADF5356_REG4_R_DIVIDER_GET(self.regs[4])
return self._compute_pfd_frequency(r, d, t)
@portable
def f_vco(self) -> TInt64:
"""
Return the VCO frequency for the cached set of registers.
"""
return int64(
self.f_pfd()
* (
self.pll_n()
+ (self.pll_frac1() + self.pll_frac2() / self.pll_mod2())
/ ADF5356_MODULUS1
)
)
@portable
def pll_n(self) -> TInt32:
"""
Return the PLL integer value (INT) for the cached set of registers.
"""
return ADF5356_REG0_INT_VALUE_GET(self.regs[0])
@portable
def pll_frac1(self) -> TInt32:
"""
Return the main fractional value (FRAC1) for the cached set of registers.
"""
return ADF5356_REG1_MAIN_FRAC_VALUE_GET(self.regs[1])
@portable
def pll_frac2(self) -> TInt32:
"""
Return the auxiliary fractional value (FRAC2) for the cached set of registers.
"""
return (
ADF5356_REG13_AUX_FRAC_MSB_VALUE_GET(self.regs[13]) << 14
) | ADF5356_REG2_AUX_FRAC_LSB_VALUE_GET(self.regs[2])
@portable
def pll_mod2(self) -> TInt32:
"""
Return the auxiliary modulus value (MOD2) for the cached set of registers.
"""
return (
ADF5356_REG13_AUX_MOD_MSB_VALUE_GET(self.regs[13]) << 14
) | ADF5356_REG2_AUX_MOD_LSB_VALUE_GET(self.regs[2])
@portable
def ref_counter(self) -> TInt32:
"""
Return the reference counter value (R) for the cached set of registers.
"""
return ADF5356_REG4_R_COUNTER_GET(self.regs[4])
@portable
def output_divider(self) -> TInt32:
"""
Return the value of the output A divider.
"""
return 1 << ADF5356_REG6_RF_DIVIDER_SELECT_GET(self.regs[6])
def info(self):
"""
Return a summary of high-level parameters as a dict.
"""
prescaler = ADF5356_REG0_PRESCALER_GET(self.regs[0])
return {
# output
"f_outA": self.f_vco() / self.output_divider(),
"f_outB": self.f_vco() * 2,
"output_divider": self.output_divider(),
# PLL parameters
"f_vco": self.f_vco(),
"pll_n": self.pll_n(),
"pll_frac1": self.pll_frac1(),
"pll_frac2": self.pll_frac2(),
"pll_mod2": self.pll_mod2(),
"prescaler": "4/5" if prescaler == 0 else "8/9",
# reference / PFD
"sysclk": self.sysclk,
"ref_doubler": self.ref_doubler,
"ref_divider": self.ref_divider,
"ref_counter": self.ref_counter(),
"f_pfd": self.f_pfd(),
}
@portable
def _init_registers(self):
"""
Initialize cached registers with sensible defaults.
"""
# fill with control bits
self.regs = [int32(i) for i in range(ADF5356_NUM_REGS)]
# REG2
# ====
# avoid divide-by-zero
self.regs[2] |= ADF5356_REG2_AUX_MOD_LSB_VALUE(1)
# REG4
# ====
# single-ended reference mode is recommended
# for references up to 250 MHz, even if the signal is differential
if self.sysclk <= 250 * MHz:
self.regs[4] |= ADF5356_REG4_REF_MODE(0)
else:
self.regs[4] |= ADF5356_REG4_REF_MODE(1)
# phase detector polarity: positive
self.regs[4] |= ADF5356_REG4_PD_POLARITY(1)
# charge pump current: 0.94 mA
self.regs[4] |= ADF5356_REG4_CURRENT_SETTING(2)
# MUXOUT: digital lock detect
self.regs[4] |= ADF5356_REG4_MUX_LOGIC(1) # 3v3 logic
self.regs[4] |= ADF5356_REG4_MUXOUT(6)
# setup reference path
if self.ref_doubler:
self.regs[4] |= ADF5356_REG4_R_DOUBLER(1)
if self.ref_divider:
self.regs[4] |= ADF5356_REG4_R_DIVIDER(1)
r = self._compute_reference_counter()
self.regs[4] |= ADF5356_REG4_R_COUNTER(r)
# REG5
# ====
# reserved values
self.regs[5] = int32(0x800025)
# REG6
# ====
# reserved values
self.regs[6] = int32(0x14000006)
# enable negative bleed
self.regs[6] |= ADF5356_REG6_NEGATIVE_BLEED(1)
# charge pump bleed current
self.regs[6] |= ADF5356_REG6_CP_BLEED_CURRENT(
int32(floor(24 * self.f_pfd() / (61.44 * MHz)))
)
# direct feedback from VCO to N counter
self.regs[6] |= ADF5356_REG6_FB_SELECT(1)
# mute until the PLL is locked
self.regs[6] |= ADF5356_REG6_MUTE_TILL_LD(1)
# enable output A
self.regs[6] |= ADF5356_REG6_RF_OUTPUT_A_ENABLE(1)
# set output A to maximum power; the actual level is adjusted by the external step attenuator
self.regs[6] |= ADF5356_REG6_RF_OUTPUT_A_POWER(3) # +5 dBm
# REG7
# ====
# reserved values
self.regs[7] = int32(0x10000007)
# sync load-enable to reference
self.regs[7] |= ADF5356_REG7_LE_SYNC(1)
# frac-N lock-detect precision: 12 ns
self.regs[7] |= ADF5356_REG7_FRAC_N_LD_PRECISION(3)
# REG8
# ====
# reserved values
self.regs[8] = int32(0x102D0428)
# REG9
# ====
# default timeouts (from eval software)
self.regs[9] |= (
ADF5356_REG9_SYNTH_LOCK_TIMEOUT(13)
| ADF5356_REG9_AUTOCAL_TIMEOUT(31)
| ADF5356_REG9_TIMEOUT(0x67)
)
self.regs[9] |= ADF5356_REG9_VCO_BAND_DIVISION(
int32(ceil(self.f_pfd() / 160e3))
)
# REG10
# =====
# reserved values
self.regs[10] = int32(0xC0000A)
# ADC defaults (from eval software)
self.regs[10] |= (
ADF5356_REG10_ADC_ENABLE(1)
| ADF5356_REG10_ADC_CLK_DIV(256)
| ADF5356_REG10_ADC_CONV(1)
)
# REG11
# =====
# reserved values
self.regs[11] = int32(0x61200B)
# REG12
# =====
# reserved values
self.regs[12] = int32(0x15FC)
@portable
def _compute_pfd_frequency(self, r, d, t) -> TInt64:
"""
Calculate the PFD frequency from the given reference path parameters
"""
return int64(self.sysclk * ((1 + d) / (r * (1 + t))))
@portable
def _compute_reference_counter(self) -> TInt32:
"""
Determine the reference counter R that maximizes the PFD frequency
"""
d = ADF5356_REG4_R_DOUBLER_GET(self.regs[4])
t = ADF5356_REG4_R_DIVIDER_GET(self.regs[4])
r = 1
while self._compute_pfd_frequency(r, d, t) > ADF5356_MAX_FREQ_PFD:
r += 1
return int32(r)
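Illustrative arithmetic for the reference path above, assuming a 500 MHz reference with doubler and divide-by-2 both off; the loop settles on the smallest R giving f_pfd <= 125 MHz:

sysclk = 500e6          # assumed Mirny reference frequency, Hz
d, t = 0, 0             # reference doubler and divide-by-2 bits
r = 1
while sysclk * (1 + d) / (r * (1 + t)) > 125e6:
    r += 1
print(r, sysclk * (1 + d) / (r * (1 + t)))   # -> 4, 125 MHz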
@portable
def gcd(a, b):
while b:
a, b = b, a % b
return a
@portable
def split_msb_lsb_28b(v):
return int32((v >> 14) & 0x3FFF), int32(v & 0x3FFF)
@portable
def calculate_pll(f_vco: TInt64, f_pfd: TInt64):
"""
Calculate fractional-N PLL parameters such that
``f_vco`` = ``f_pfd`` * (``n`` + (``frac1`` + ``frac2``/``mod2``) / ``mod1``)
where
``mod1 = 2**24`` and ``mod2 <= 2**28``
:param f_vco: target VCO frequency
:param f_pfd: PFD frequency
:return: ``(n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb))``
"""
f_pfd = int64(f_pfd)
f_vco = int64(f_vco)
# integral part
n, r = int32(f_vco // f_pfd), f_vco % f_pfd
# main fractional part
r *= ADF5356_MODULUS1
frac1, frac2 = int32(r // f_pfd), r % f_pfd
# auxiliary fractional part
mod2 = f_pfd
while mod2 > ADF5356_MAX_MODULUS2:
mod2 >>= 1
frac2 >>= 1
gcd_div = gcd(frac2, mod2)
mod2 //= gcd_div
frac2 //= gcd_div
return n, frac1, split_msb_lsb_28b(frac2), split_msb_lsb_28b(mod2)
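A host-side check, assuming an ARTIQ installation is importable: calculate_pll is @portable, so it can be exercised directly against the f_vco relation it documents; the 100 MHz PFD and 5.4321 GHz target are example values:

from numpy import int64
from artiq.coredevice.adf5356 import ADF5356_MODULUS1, calculate_pll

f_pfd = int64(100_000_000)            # example PFD frequency
f_vco = int64(5_432_100_000)          # example target VCO frequency
n, frac1, (frac2_msb, frac2_lsb), (mod2_msb, mod2_lsb) = calculate_pll(f_vco, f_pfd)
frac2 = (frac2_msb << 14) | frac2_lsb
mod2 = (mod2_msb << 14) | mod2_lsb
f_check = f_pfd * (n + (frac1 + frac2 / mod2) / ADF5356_MODULUS1)
print(n, frac1, frac2, mod2, f_check)  # f_check approximates f_vco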

artiq/coredevice/adf5356_reg.py (new file)

@ -0,0 +1,642 @@
# auto-generated, do not edit
from artiq.language.core import portable
from artiq.language.types import TInt32
from numpy import int32
@portable
def ADF5356_REG0_AUTOCAL_GET(reg: TInt32) -> TInt32:
return int32((reg >> 21) & 0x1)
@portable
def ADF5356_REG0_AUTOCAL(x: TInt32) -> TInt32:
return int32((x & 0x1) << 21)
@portable
def ADF5356_REG0_AUTOCAL_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 21)) | ((x & 0x1) << 21))
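A host-side illustration of the generated accessor triplets (setter, _GET, _UPDATE), assuming an ARTIQ installation is importable:

from numpy import int32
from artiq.coredevice.adf5356_reg import (
    ADF5356_REG0_AUTOCAL, ADF5356_REG0_AUTOCAL_GET, ADF5356_REG0_AUTOCAL_UPDATE)

reg0 = int32(0)
reg0 |= ADF5356_REG0_AUTOCAL(1)              # set the AUTOCAL field
assert ADF5356_REG0_AUTOCAL_GET(reg0) == 1
reg0 = ADF5356_REG0_AUTOCAL_UPDATE(reg0, 0)  # clear it in place
assert ADF5356_REG0_AUTOCAL_GET(reg0) == 0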
@portable
def ADF5356_REG0_INT_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0xffff)
@portable
def ADF5356_REG0_INT_VALUE(x: TInt32) -> TInt32:
return int32((x & 0xffff) << 4)
@portable
def ADF5356_REG0_INT_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xffff << 4)) | ((x & 0xffff) << 4))
@portable
def ADF5356_REG0_PRESCALER_GET(reg: TInt32) -> TInt32:
return int32((reg >> 20) & 0x1)
@portable
def ADF5356_REG0_PRESCALER(x: TInt32) -> TInt32:
return int32((x & 0x1) << 20)
@portable
def ADF5356_REG0_PRESCALER_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 20)) | ((x & 0x1) << 20))
@portable
def ADF5356_REG1_MAIN_FRAC_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0xffffff)
@portable
def ADF5356_REG1_MAIN_FRAC_VALUE(x: TInt32) -> TInt32:
return int32((x & 0xffffff) << 4)
@portable
def ADF5356_REG1_MAIN_FRAC_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xffffff << 4)) | ((x & 0xffffff) << 4))
@portable
def ADF5356_REG2_AUX_FRAC_LSB_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 18) & 0x3fff)
@portable
def ADF5356_REG2_AUX_FRAC_LSB_VALUE(x: TInt32) -> TInt32:
return int32((x & 0x3fff) << 18)
@portable
def ADF5356_REG2_AUX_FRAC_LSB_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3fff << 18)) | ((x & 0x3fff) << 18))
@portable
def ADF5356_REG2_AUX_MOD_LSB_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x3fff)
@portable
def ADF5356_REG2_AUX_MOD_LSB_VALUE(x: TInt32) -> TInt32:
return int32((x & 0x3fff) << 4)
@portable
def ADF5356_REG2_AUX_MOD_LSB_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3fff << 4)) | ((x & 0x3fff) << 4))
@portable
def ADF5356_REG3_PHASE_ADJUST_GET(reg: TInt32) -> TInt32:
return int32((reg >> 28) & 0x1)
@portable
def ADF5356_REG3_PHASE_ADJUST(x: TInt32) -> TInt32:
return int32((x & 0x1) << 28)
@portable
def ADF5356_REG3_PHASE_ADJUST_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 28)) | ((x & 0x1) << 28))
@portable
def ADF5356_REG3_PHASE_RESYNC_GET(reg: TInt32) -> TInt32:
return int32((reg >> 29) & 0x1)
@portable
def ADF5356_REG3_PHASE_RESYNC(x: TInt32) -> TInt32:
return int32((x & 0x1) << 29)
@portable
def ADF5356_REG3_PHASE_RESYNC_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 29)) | ((x & 0x1) << 29))
@portable
def ADF5356_REG3_PHASE_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0xffffff)
@portable
def ADF5356_REG3_PHASE_VALUE(x: TInt32) -> TInt32:
return int32((x & 0xffffff) << 4)
@portable
def ADF5356_REG3_PHASE_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xffffff << 4)) | ((x & 0xffffff) << 4))
@portable
def ADF5356_REG3_SD_LOAD_RESET_GET(reg: TInt32) -> TInt32:
return int32((reg >> 30) & 0x1)
@portable
def ADF5356_REG3_SD_LOAD_RESET(x: TInt32) -> TInt32:
return int32((x & 0x1) << 30)
@portable
def ADF5356_REG3_SD_LOAD_RESET_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 30)) | ((x & 0x1) << 30))
@portable
def ADF5356_REG4_COUNTER_RESET_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x1)
@portable
def ADF5356_REG4_COUNTER_RESET(x: TInt32) -> TInt32:
return int32((x & 0x1) << 4)
@portable
def ADF5356_REG4_COUNTER_RESET_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 4)) | ((x & 0x1) << 4))
@portable
def ADF5356_REG4_CP_THREE_STATE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 5) & 0x1)
@portable
def ADF5356_REG4_CP_THREE_STATE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 5)
@portable
def ADF5356_REG4_CP_THREE_STATE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 5)) | ((x & 0x1) << 5))
@portable
def ADF5356_REG4_CURRENT_SETTING_GET(reg: TInt32) -> TInt32:
return int32((reg >> 10) & 0xf)
@portable
def ADF5356_REG4_CURRENT_SETTING(x: TInt32) -> TInt32:
return int32((x & 0xf) << 10)
@portable
def ADF5356_REG4_CURRENT_SETTING_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xf << 10)) | ((x & 0xf) << 10))
@portable
def ADF5356_REG4_DOUBLE_BUFF_GET(reg: TInt32) -> TInt32:
return int32((reg >> 14) & 0x1)
@portable
def ADF5356_REG4_DOUBLE_BUFF(x: TInt32) -> TInt32:
return int32((x & 0x1) << 14)
@portable
def ADF5356_REG4_DOUBLE_BUFF_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 14)) | ((x & 0x1) << 14))
@portable
def ADF5356_REG4_MUX_LOGIC_GET(reg: TInt32) -> TInt32:
return int32((reg >> 8) & 0x1)
@portable
def ADF5356_REG4_MUX_LOGIC(x: TInt32) -> TInt32:
return int32((x & 0x1) << 8)
@portable
def ADF5356_REG4_MUX_LOGIC_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 8)) | ((x & 0x1) << 8))
@portable
def ADF5356_REG4_MUXOUT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 27) & 0x7)
@portable
def ADF5356_REG4_MUXOUT(x: TInt32) -> TInt32:
return int32((x & 0x7) << 27)
@portable
def ADF5356_REG4_MUXOUT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x7 << 27)) | ((x & 0x7) << 27))
@portable
def ADF5356_REG4_PD_POLARITY_GET(reg: TInt32) -> TInt32:
return int32((reg >> 7) & 0x1)
@portable
def ADF5356_REG4_PD_POLARITY(x: TInt32) -> TInt32:
return int32((x & 0x1) << 7)
@portable
def ADF5356_REG4_PD_POLARITY_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 7)) | ((x & 0x1) << 7))
@portable
def ADF5356_REG4_POWER_DOWN_GET(reg: TInt32) -> TInt32:
return int32((reg >> 6) & 0x1)
@portable
def ADF5356_REG4_POWER_DOWN(x: TInt32) -> TInt32:
return int32((x & 0x1) << 6)
@portable
def ADF5356_REG4_POWER_DOWN_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 6)) | ((x & 0x1) << 6))
@portable
def ADF5356_REG4_R_COUNTER_GET(reg: TInt32) -> TInt32:
return int32((reg >> 15) & 0x3ff)
@portable
def ADF5356_REG4_R_COUNTER(x: TInt32) -> TInt32:
return int32((x & 0x3ff) << 15)
@portable
def ADF5356_REG4_R_COUNTER_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3ff << 15)) | ((x & 0x3ff) << 15))
@portable
def ADF5356_REG4_R_DIVIDER_GET(reg: TInt32) -> TInt32:
return int32((reg >> 25) & 0x1)
@portable
def ADF5356_REG4_R_DIVIDER(x: TInt32) -> TInt32:
return int32((x & 0x1) << 25)
@portable
def ADF5356_REG4_R_DIVIDER_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 25)) | ((x & 0x1) << 25))
@portable
def ADF5356_REG4_R_DOUBLER_GET(reg: TInt32) -> TInt32:
return int32((reg >> 26) & 0x1)
@portable
def ADF5356_REG4_R_DOUBLER(x: TInt32) -> TInt32:
return int32((x & 0x1) << 26)
@portable
def ADF5356_REG4_R_DOUBLER_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 26)) | ((x & 0x1) << 26))
@portable
def ADF5356_REG4_REF_MODE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 9) & 0x1)
@portable
def ADF5356_REG4_REF_MODE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 9)
@portable
def ADF5356_REG4_REF_MODE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 9)) | ((x & 0x1) << 9))
@portable
def ADF5356_REG6_BLEED_POLARITY_GET(reg: TInt32) -> TInt32:
return int32((reg >> 31) & 0x1)
@portable
def ADF5356_REG6_BLEED_POLARITY(x: TInt32) -> TInt32:
return int32((x & 0x1) << 31)
@portable
def ADF5356_REG6_BLEED_POLARITY_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 31)) | ((x & 0x1) << 31))
@portable
def ADF5356_REG6_CP_BLEED_CURRENT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 13) & 0xff)
@portable
def ADF5356_REG6_CP_BLEED_CURRENT(x: TInt32) -> TInt32:
return int32((x & 0xff) << 13)
@portable
def ADF5356_REG6_CP_BLEED_CURRENT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xff << 13)) | ((x & 0xff) << 13))
@portable
def ADF5356_REG6_FB_SELECT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 24) & 0x1)
@portable
def ADF5356_REG6_FB_SELECT(x: TInt32) -> TInt32:
return int32((x & 0x1) << 24)
@portable
def ADF5356_REG6_FB_SELECT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 24)) | ((x & 0x1) << 24))
@portable
def ADF5356_REG6_GATE_BLEED_GET(reg: TInt32) -> TInt32:
return int32((reg >> 30) & 0x1)
@portable
def ADF5356_REG6_GATE_BLEED(x: TInt32) -> TInt32:
return int32((x & 0x1) << 30)
@portable
def ADF5356_REG6_GATE_BLEED_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 30)) | ((x & 0x1) << 30))
@portable
def ADF5356_REG6_MUTE_TILL_LD_GET(reg: TInt32) -> TInt32:
return int32((reg >> 11) & 0x1)
@portable
def ADF5356_REG6_MUTE_TILL_LD(x: TInt32) -> TInt32:
return int32((x & 0x1) << 11)
@portable
def ADF5356_REG6_MUTE_TILL_LD_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 11)) | ((x & 0x1) << 11))
@portable
def ADF5356_REG6_NEGATIVE_BLEED_GET(reg: TInt32) -> TInt32:
return int32((reg >> 29) & 0x1)
@portable
def ADF5356_REG6_NEGATIVE_BLEED(x: TInt32) -> TInt32:
return int32((x & 0x1) << 29)
@portable
def ADF5356_REG6_NEGATIVE_BLEED_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 29)) | ((x & 0x1) << 29))
@portable
def ADF5356_REG6_RF_DIVIDER_SELECT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 21) & 0x7)
@portable
def ADF5356_REG6_RF_DIVIDER_SELECT(x: TInt32) -> TInt32:
return int32((x & 0x7) << 21)
@portable
def ADF5356_REG6_RF_DIVIDER_SELECT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x7 << 21)) | ((x & 0x7) << 21))
@portable
def ADF5356_REG6_RF_OUTPUT_A_ENABLE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 6) & 0x1)
@portable
def ADF5356_REG6_RF_OUTPUT_A_ENABLE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 6)
@portable
def ADF5356_REG6_RF_OUTPUT_A_ENABLE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 6)) | ((x & 0x1) << 6))
@portable
def ADF5356_REG6_RF_OUTPUT_A_POWER_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x3)
@portable
def ADF5356_REG6_RF_OUTPUT_A_POWER(x: TInt32) -> TInt32:
return int32((x & 0x3) << 4)
@portable
def ADF5356_REG6_RF_OUTPUT_A_POWER_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3 << 4)) | ((x & 0x3) << 4))
@portable
def ADF5356_REG6_RF_OUTPUT_B_ENABLE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 10) & 0x1)
@portable
def ADF5356_REG6_RF_OUTPUT_B_ENABLE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 10)
@portable
def ADF5356_REG6_RF_OUTPUT_B_ENABLE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 10)) | ((x & 0x1) << 10))
@portable
def ADF5356_REG7_FRAC_N_LD_PRECISION_GET(reg: TInt32) -> TInt32:
return int32((reg >> 5) & 0x3)
@portable
def ADF5356_REG7_FRAC_N_LD_PRECISION(x: TInt32) -> TInt32:
return int32((x & 0x3) << 5)
@portable
def ADF5356_REG7_FRAC_N_LD_PRECISION_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3 << 5)) | ((x & 0x3) << 5))
@portable
def ADF5356_REG7_LD_CYCLE_COUNT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 8) & 0x3)
@portable
def ADF5356_REG7_LD_CYCLE_COUNT(x: TInt32) -> TInt32:
return int32((x & 0x3) << 8)
@portable
def ADF5356_REG7_LD_CYCLE_COUNT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3 << 8)) | ((x & 0x3) << 8))
@portable
def ADF5356_REG7_LD_MODE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x1)
@portable
def ADF5356_REG7_LD_MODE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 4)
@portable
def ADF5356_REG7_LD_MODE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 4)) | ((x & 0x1) << 4))
@portable
def ADF5356_REG7_LE_SEL_SYNC_EDGE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 27) & 0x1)
@portable
def ADF5356_REG7_LE_SEL_SYNC_EDGE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 27)
@portable
def ADF5356_REG7_LE_SEL_SYNC_EDGE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 27)) | ((x & 0x1) << 27))
@portable
def ADF5356_REG7_LE_SYNC_GET(reg: TInt32) -> TInt32:
return int32((reg >> 25) & 0x1)
@portable
def ADF5356_REG7_LE_SYNC(x: TInt32) -> TInt32:
return int32((x & 0x1) << 25)
@portable
def ADF5356_REG7_LE_SYNC_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 25)) | ((x & 0x1) << 25))
@portable
def ADF5356_REG7_LOL_MODE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 7) & 0x1)
@portable
def ADF5356_REG7_LOL_MODE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 7)
@portable
def ADF5356_REG7_LOL_MODE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 7)) | ((x & 0x1) << 7))
@portable
def ADF5356_REG9_AUTOCAL_TIMEOUT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 9) & 0x1f)
@portable
def ADF5356_REG9_AUTOCAL_TIMEOUT(x: TInt32) -> TInt32:
return int32((x & 0x1f) << 9)
@portable
def ADF5356_REG9_AUTOCAL_TIMEOUT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1f << 9)) | ((x & 0x1f) << 9))
@portable
def ADF5356_REG9_SYNTH_LOCK_TIMEOUT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x1f)
@portable
def ADF5356_REG9_SYNTH_LOCK_TIMEOUT(x: TInt32) -> TInt32:
return int32((x & 0x1f) << 4)
@portable
def ADF5356_REG9_SYNTH_LOCK_TIMEOUT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1f << 4)) | ((x & 0x1f) << 4))
@portable
def ADF5356_REG9_TIMEOUT_GET(reg: TInt32) -> TInt32:
return int32((reg >> 14) & 0x3ff)
@portable
def ADF5356_REG9_TIMEOUT(x: TInt32) -> TInt32:
return int32((x & 0x3ff) << 14)
@portable
def ADF5356_REG9_TIMEOUT_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3ff << 14)) | ((x & 0x3ff) << 14))
@portable
def ADF5356_REG9_VCO_BAND_DIVISION_GET(reg: TInt32) -> TInt32:
return int32((reg >> 24) & 0xff)
@portable
def ADF5356_REG9_VCO_BAND_DIVISION(x: TInt32) -> TInt32:
return int32((x & 0xff) << 24)
@portable
def ADF5356_REG9_VCO_BAND_DIVISION_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xff << 24)) | ((x & 0xff) << 24))
@portable
def ADF5356_REG10_ADC_CLK_DIV_GET(reg: TInt32) -> TInt32:
return int32((reg >> 6) & 0xff)
@portable
def ADF5356_REG10_ADC_CLK_DIV(x: TInt32) -> TInt32:
return int32((x & 0xff) << 6)
@portable
def ADF5356_REG10_ADC_CLK_DIV_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xff << 6)) | ((x & 0xff) << 6))
@portable
def ADF5356_REG10_ADC_CONV_GET(reg: TInt32) -> TInt32:
return int32((reg >> 5) & 0x1)
@portable
def ADF5356_REG10_ADC_CONV(x: TInt32) -> TInt32:
return int32((x & 0x1) << 5)
@portable
def ADF5356_REG10_ADC_CONV_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 5)) | ((x & 0x1) << 5))
@portable
def ADF5356_REG10_ADC_ENABLE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x1)
@portable
def ADF5356_REG10_ADC_ENABLE(x: TInt32) -> TInt32:
return int32((x & 0x1) << 4)
@portable
def ADF5356_REG10_ADC_ENABLE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 4)) | ((x & 0x1) << 4))
@portable
def ADF5356_REG11_VCO_BAND_HOLD_GET(reg: TInt32) -> TInt32:
return int32((reg >> 24) & 0x1)
@portable
def ADF5356_REG11_VCO_BAND_HOLD(x: TInt32) -> TInt32:
return int32((x & 0x1) << 24)
@portable
def ADF5356_REG11_VCO_BAND_HOLD_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x1 << 24)) | ((x & 0x1) << 24))
@portable
def ADF5356_REG12_PHASE_RESYNC_CLK_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 12) & 0xfffff)
@portable
def ADF5356_REG12_PHASE_RESYNC_CLK_VALUE(x: TInt32) -> TInt32:
return int32((x & 0xfffff) << 12)
@portable
def ADF5356_REG12_PHASE_RESYNC_CLK_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0xfffff << 12)) | ((x & 0xfffff) << 12))
@portable
def ADF5356_REG13_AUX_FRAC_MSB_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 18) & 0x3fff)
@portable
def ADF5356_REG13_AUX_FRAC_MSB_VALUE(x: TInt32) -> TInt32:
return int32((x & 0x3fff) << 18)
@portable
def ADF5356_REG13_AUX_FRAC_MSB_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3fff << 18)) | ((x & 0x3fff) << 18))
@portable
def ADF5356_REG13_AUX_MOD_MSB_VALUE_GET(reg: TInt32) -> TInt32:
return int32((reg >> 4) & 0x3fff)
@portable
def ADF5356_REG13_AUX_MOD_MSB_VALUE(x: TInt32) -> TInt32:
return int32((x & 0x3fff) << 4)
@portable
def ADF5356_REG13_AUX_MOD_MSB_VALUE_UPDATE(reg: TInt32, x: TInt32) -> TInt32:
return int32((reg & ~(0x3fff << 4)) | ((x & 0x3fff) << 4))
ADF5356_NUM_REGS = 14
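Every field in this register map gets the same three helpers: FIELD(x) shifts a value into position, FIELD_GET(reg) extracts it, and FIELD_UPDATE(reg, x) clears the field before OR-ing in the new value. A minimal sketch of the resulting read-modify-write pattern, assuming numpy's int32 and the helpers above are in scope (the values are arbitrary, for illustration only):

# Illustration only -- not part of the register map module; values are arbitrary.
reg4 = int32(0)
reg4 = ADF5356_REG4_R_COUNTER_UPDATE(reg4, 10)   # set the 10-bit R counter field
reg4 = ADF5356_REG4_R_DOUBLER_UPDATE(reg4, 1)    # enable the reference doubler bit
assert ADF5356_REG4_R_COUNTER_GET(reg4) == 10    # neighbouring fields are untouched
assert ADF5356_REG4_R_DOUBLER_GET(reg4) == 1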

artiq/coredevice/almazny.py

@ -0,0 +1,185 @@
from artiq.language.core import kernel, portable, delay
from artiq.language.units import us
from numpy import int32
# almazny-specific data
ALMAZNY_LEGACY_REG_BASE = 0x0C
ALMAZNY_LEGACY_OE_SHIFT = 12
# higher SPI write divider to match almazny shift register timing
# min SER time before SRCLK rise = 125ns
# -> div=32 gives 125ns for data before clock rise
# works at faster dividers too but could be less reliable
ALMAZNY_LEGACY_SPIT_WR = 32
class AlmaznyLegacy:
"""
Almazny (High frequency mezzanine board for Mirny)
This applies to Almazny hardware v1.1 and earlier.
Use :class:`artiq.coredevice.almazny.AlmaznyChannel` for Almazny v1.2 and later.
:param host_mirny: Mirny device Almazny is connected to
"""
def __init__(self, dmgr, host_mirny):
self.mirny_cpld = dmgr.get(host_mirny)
self.att_mu = [0x3f] * 4
self.channel_sw = [0] * 4
self.output_enable = False
@kernel
def init(self):
self.output_toggle(self.output_enable)
@kernel
def att_to_mu(self, att):
"""
Convert an attenuator setting in dB to machine units.
:param att: attenuator setting in dB [0-31.5]
:return: attenuator setting in machine units
"""
mu = round(att * 2.0)
if mu > 63 or mu < 0:
raise ValueError("Invalid Almazny attenuator settings!")
return mu
@kernel
def mu_to_att(self, att_mu):
"""
Convert a digital attenuator setting to dB.
:param att_mu: attenuator setting in machine units
:return: attenuator setting in dB
"""
return att_mu / 2
@kernel
def set_att(self, channel, att, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel: index of the register [0-3]
:param att: attenuation setting in dB [0-31.5]
:param rf_switch: rf switch (bool)
"""
self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
@kernel
def set_att_mu(self, channel, att_mu, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel: index of the register [0-3]
:param att_mu: attenuation setting in machine units [0-63]
:param rf_switch: rf switch (bool)
"""
self.channel_sw[channel] = 1 if rf_switch else 0
self.att_mu[channel] = att_mu
self._update_register(channel)
@kernel
def output_toggle(self, oe):
"""
Toggles the outputs of all shift registers on or off.
:param oe: toggle output enable (bool)
"""
self.output_enable = oe
cfg_reg = self.mirny_cpld.read_reg(1)
en = 1 if self.output_enable else 0
delay(100 * us)
new_reg = (en << ALMAZNY_LEGACY_OE_SHIFT) | (cfg_reg & 0x3FF)
self.mirny_cpld.write_reg(1, new_reg)
delay(100 * us)
@kernel
def _flip_mu_bits(self, mu):
# in this form the MSB is the 0.5 dB attenuator bit,
# which is unnatural for users, so we flip the six bits
return (((mu & 0x01) << 5)
| ((mu & 0x02) << 3)
| ((mu & 0x04) << 1)
| ((mu & 0x08) >> 1)
| ((mu & 0x10) >> 3)
| ((mu & 0x20) >> 5))
@kernel
def _update_register(self, ch):
self.mirny_cpld.write_ext(
ALMAZNY_LEGACY_REG_BASE + ch,
8,
self._flip_mu_bits(self.att_mu[ch]) | (self.channel_sw[ch] << 6),
ALMAZNY_LEGACY_SPIT_WR
)
delay(100 * us)
@kernel
def _update_all_registers(self):
for i in range(4):
self._update_register(i)
class AlmaznyChannel:
"""
One Almazny channel
Almazny is a mezzanine for the Quad PLL RF source Mirny that exposes and
controls the frequency-doubled outputs.
This driver requires Almazny hardware revision v1.2 or later
and Mirny CPLD gateware v0.3 or later.
Use :class:`artiq.coredevice.almazny.AlmaznyLegacy` for Almazny hardware v1.1 and earlier.
:param host_mirny: Mirny CPLD device name
:param channel: channel index (0-3)
"""
def __init__(self, dmgr, host_mirny, channel):
self.channel = channel
self.mirny_cpld = dmgr.get(host_mirny)
@portable
def to_mu(self, att, enable, led):
"""
Convert an attenuation in dB, RF switch state and LED state to machine
units.
:param att: attenuator setting in dB (0-31.5)
:param enable: RF switch state (bool)
:param led: LED state (bool)
:return: channel setting in machine units
"""
mu = int32(round(att * 2.))
if mu >= 64 or mu < 0:
raise ValueError("Attenuation out of range")
# unfortunate hardware design: bit reverse
mu = ((mu & 0x15) << 1) | ((mu >> 1) & 0x15)
mu = ((mu & 0x03) << 4) | (mu & 0x0c) | ((mu >> 4) & 0x03)
if enable:
mu |= 1 << 6
if led:
mu |= 1 << 7
return mu
@kernel
def set_mu(self, mu):
"""
Set channel state (machine units).
:param mu: channel state in machine units.
"""
self.mirny_cpld.write_ext(
addr=0xc + self.channel, length=8, data=mu, ext_div=32)
@kernel
def set(self, att, enable, led=False):
"""
Set attenuation, RF switch, and LED state (SI units).
:param att: attenuator setting in dB (0-31.5)
:param enable: RF switch state (bool)
:param led: LED state (bool)
"""
self.set_mu(self.to_mu(att, enable, led))
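For context, a hedged sketch of how AlmaznyChannel might be driven from an experiment, assuming a device database entry named "almazny0_ch0" for this driver (the device name is invented for the example):

# Hypothetical usage sketch; "almazny0_ch0" is an invented device name.
from artiq.experiment import EnvExperiment, kernel

class AlmaznyDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("almazny0_ch0")    # an AlmaznyChannel instance

    @kernel
    def run(self):
        self.core.reset()
        # 10.5 dB attenuation, RF switch on, front-panel LED on
        self.almazny0_ch0.set(att=10.5, enable=True, led=True)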

artiq/coredevice/cache.py

@ -2,11 +2,11 @@ from artiq.language.core import *
from artiq.language.types import * from artiq.language.types import *
@syscall(flags={"nounwind", "nowrite"}) @syscall(flags={"nounwind"})
def cache_get(key: TStr) -> TList(TInt32): def cache_get(key: TStr) -> TList(TInt32):
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
@syscall(flags={"nowrite"}) @syscall
def cache_put(key: TStr, value: TList(TInt32)) -> TNone: def cache_put(key: TStr, value: TList(TInt32)) -> TNone:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")

artiq/coredevice/comm.py (deleted)

@ -1,37 +0,0 @@
import sys
import socket
import logging
logger = logging.getLogger(__name__)
def set_keepalive(sock, after_idle, interval, max_fails):
if sys.platform.startswith("linux"):
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)
elif sys.platform.startswith("win") or sys.platform.startswith("cygwin"):
# setting max_fails is not supported, typically ends up being 5 or 10
# depending on Windows version
sock.ioctl(socket.SIO_KEEPALIVE_VALS,
(1, after_idle*1000, interval*1000))
else:
logger.warning("TCP keepalive not supported on platform '%s', ignored",
sys.platform)
def initialize_connection(host, port, ssh_transport=None):
if ssh_transport is None:
sock = socket.create_connection((host, port), 5.0)
sock.settimeout(None)
set_keepalive(sock, 3, 2, 3)
logger.debug("connected to %s:%d", host, port)
else:
sock = ssh_transport.open_channel("direct-tcpip", (host, port),
("localhost", 9999), timeout=5.0)
ssh_transport.set_keepalive(2)
logger.debug("connected to %s:%d via SSH transport to %s:%d",
host, port, *ssh_transport.getpeername())
return sock

artiq/coredevice/comm_analyzer.py

@ -90,18 +90,28 @@ DecodedDump = namedtuple(
def decode_dump(data): def decode_dump(data):
parts = struct.unpack(">IQbbb", data[:15]) # extract endian byte
if data[0] == ord('E'):
endian = '>'
elif data[0] == ord('e'):
endian = '<'
else:
raise ValueError
data = data[1:]
# only header is device endian
# messages are big endian
parts = struct.unpack(endian + "IQbbb", data[:15])
(sent_bytes, total_byte_count, (sent_bytes, total_byte_count,
overflow_occured, log_channel, dds_onehot_sel) = parts error_occurred, log_channel, dds_onehot_sel) = parts
expected_len = sent_bytes + 15 expected_len = sent_bytes + 15
if expected_len != len(data): if expected_len != len(data):
raise ValueError("analyzer dump has incorrect length " raise ValueError("analyzer dump has incorrect length "
"(got {}, expected {})".format( "(got {}, expected {})".format(
len(data), expected_len)) len(data), expected_len))
if overflow_occured: if error_occurred:
logger.warning("analyzer FIFO overflow occured, " logger.warning("error occurred within the analyzer, "
"some messages have been lost") "data may be corrupted")
if total_byte_count > sent_bytes: if total_byte_count > sent_bytes:
logger.info("analyzer ring buffer has wrapped %d times", logger.info("analyzer ring buffer has wrapped %d times",
total_byte_count//sent_bytes) total_byte_count//sent_bytes)
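For reference, the updated dump layout can be parsed standalone as in the sketch below: the first byte selects the endianness of the header only ('E' big, 'e' little), while the messages that follow remain big-endian.

# Standalone sketch of the new analyzer dump header layout.
import struct

def parse_dump_header(data):
    endian = {ord('E'): '>', ord('e'): '<'}[data[0]]   # raises KeyError on a bad marker
    sent_bytes, total_byte_count, error_occurred, log_channel, dds_onehot_sel = \
        struct.unpack(endian + "IQbbb", data[1:16])
    return sent_bytes, total_byte_count, error_occurred, log_channel, dds_onehot_sel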

artiq/coredevice/comm_kernel.py

@ -2,14 +2,14 @@ import struct
import logging import logging
import traceback import traceback
import numpy import numpy
import socket
from enum import Enum from enum import Enum
from fractions import Fraction from fractions import Fraction
from collections import namedtuple from collections import namedtuple
from artiq.coredevice import exceptions from artiq.coredevice import exceptions
from artiq.coredevice.comm import initialize_connection
from artiq import __version__ as software_version from artiq import __version__ as software_version
from sipyco.keepalive import create_connection
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -23,6 +23,8 @@ class Request(Enum):
RPCReply = 7 RPCReply = 7
RPCException = 8 RPCException = 8
SubkernelUpload = 9
class Reply(Enum): class Reply(Enum):
SystemInfo = 2 SystemInfo = 2
@ -36,16 +38,17 @@ class Reply(Enum):
RPCRequest = 10 RPCRequest = 10
WatchdogExpired = 14
ClockFailure = 15 ClockFailure = 15
class UnsupportedDevice(Exception): class UnsupportedDevice(Exception):
pass pass
class LoadError(Exception): class LoadError(Exception):
pass pass
class RPCReturnValueError(ValueError): class RPCReturnValueError(ValueError):
pass pass
@ -53,6 +56,105 @@ class RPCReturnValueError(ValueError):
RPCKeyword = namedtuple('RPCKeyword', ['name', 'value']) RPCKeyword = namedtuple('RPCKeyword', ['name', 'value'])
def _receive_fraction(kernel, embedding_map):
numerator = kernel._read_int64()
denominator = kernel._read_int64()
return Fraction(numerator, denominator)
def _receive_list(kernel, embedding_map):
length = kernel._read_int32()
tag = chr(kernel._read_int8())
if tag == "b":
buffer = kernel._read(length)
return list(struct.unpack(kernel.endian + "%s?" % length, buffer))
elif tag == "i":
buffer = kernel._read(4 * length)
return list(struct.unpack(kernel.endian + "%sl" % length, buffer))
elif tag == "I":
buffer = kernel._read(8 * length)
return list(numpy.ndarray((length, ), kernel.endian + 'i8', buffer))
elif tag == "f":
buffer = kernel._read(8 * length)
return list(struct.unpack(kernel.endian + "%sd" % length, buffer))
else:
fn = receivers[tag]
elems = []
for _ in range(length):
# discard the tag, as the device still sends the tag for each
# non-primitive element
kernel._read_int8()
item = fn(kernel, embedding_map)
elems.append(item)
return elems
def _receive_array(kernel, embedding_map):
num_dims = kernel._read_int8()
shape = tuple(kernel._read_int32() for _ in range(num_dims))
tag = chr(kernel._read_int8())
fn = receivers[tag]
length = numpy.prod(shape)
if tag == "b":
buffer = kernel._read(length)
elems = numpy.ndarray((length, ), '?', buffer)
elif tag == "i":
buffer = kernel._read(4 * length)
elems = numpy.ndarray((length, ), kernel.endian + 'i4', buffer)
elif tag == "I":
buffer = kernel._read(8 * length)
elems = numpy.ndarray((length, ), kernel.endian + 'i8', buffer)
elif tag == "f":
buffer = kernel._read(8 * length)
elems = numpy.ndarray((length, ), kernel.endian + 'd', buffer)
else:
fn = receivers[tag]
elems = []
for _ in range(numpy.prod(shape)):
# discard the tag
kernel._read_int8()
item = fn(kernel, embedding_map)
elems.append(item)
elems = numpy.array(elems)
return elems.reshape(shape)
def _receive_range(kernel, embedding_map):
start = kernel._receive_rpc_value(embedding_map)
stop = kernel._receive_rpc_value(embedding_map)
step = kernel._receive_rpc_value(embedding_map)
return range(start, stop, step)
def _receive_keyword(kernel, embedding_map):
name = kernel._read_string()
value = kernel._receive_rpc_value(embedding_map)
return RPCKeyword(name, value)
receivers = {
"\x00": lambda kernel, embedding_map: kernel._rpc_sentinel,
"t": lambda kernel, embedding_map:
tuple(kernel._receive_rpc_value(embedding_map)
for _ in range(kernel._read_int8())),
"n": lambda kernel, embedding_map: None,
"b": lambda kernel, embedding_map: bool(kernel._read_int8()),
"i": lambda kernel, embedding_map: numpy.int32(kernel._read_int32()),
"I": lambda kernel, embedding_map: numpy.int64(kernel._read_int64()),
"f": lambda kernel, embedding_map: kernel._read_float64(),
"s": lambda kernel, embedding_map: kernel._read_string(),
"B": lambda kernel, embedding_map: kernel._read_bytes(),
"A": lambda kernel, embedding_map: kernel._read_bytes(),
"O": lambda kernel, embedding_map:
embedding_map.retrieve_object(kernel._read_int32()),
"F": _receive_fraction,
"l": _receive_list,
"a": _receive_array,
"r": _receive_range,
"k": _receive_keyword
}
class CommKernelDummy: class CommKernelDummy:
def __init__(self): def __init__(self):
pass pass
@ -70,6 +172,16 @@ class CommKernelDummy:
pass pass
def incompatible_versions(v1, v2):
if v1.endswith(".beta") or v2.endswith(".beta"):
# Beta branches may introduce breaking changes. Check version strictly.
return v1 != v2
else:
# On stable branches, runtime/software protocol backward compatibility is kept.
# Runtime and software with the same major version number are compatible.
return v1.split(".", maxsplit=1)[0] != v2.split(".", maxsplit=1)[0]
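incompatible_versions compares only the major version on stable releases, but requires an exact match when either side is a beta build. Illustrative behaviour (the version strings are invented):

# Illustrative checks; version strings are invented for the example.
assert not incompatible_versions("8.0+r123", "8.1")         # same major version
assert incompatible_versions("8.1", "9.0")                  # major versions differ
assert incompatible_versions("9.0.beta", "9.0.beta+r1")     # beta: exact match required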
class CommKernel: class CommKernel:
warned_of_mismatch = False warned_of_mismatch = False
@ -77,12 +189,31 @@ class CommKernel:
self._read_type = None self._read_type = None
self.host = host self.host = host
self.port = port self.port = port
self.read_buffer = bytearray()
self.write_buffer = bytearray()
def open(self, **kwargs):
def open(self):
if hasattr(self, "socket"): if hasattr(self, "socket"):
return return
self.socket = initialize_connection(self.host, self.port, **kwargs) self.socket = create_connection(self.host, self.port)
self.socket.sendall(b"ARTIQ coredev\n") self.socket.sendall(b"ARTIQ coredev\n")
endian = self._read(1)
if endian == b"e":
self.endian = "<"
elif endian == b"E":
self.endian = ">"
else:
raise IOError("Incorrect reply from device: expected e/E.")
self.unpack_int32 = struct.Struct(self.endian + "l").unpack
self.unpack_int64 = struct.Struct(self.endian + "q").unpack
self.unpack_float64 = struct.Struct(self.endian + "d").unpack
self.pack_header = struct.Struct(self.endian + "lB").pack
self.pack_int8 = struct.Struct(self.endian + "B").pack
self.pack_int32 = struct.Struct(self.endian + "l").pack
self.pack_int64 = struct.Struct(self.endian + "q").pack
self.pack_float64 = struct.Struct(self.endian + "d").pack
def close(self): def close(self):
if not hasattr(self, "socket"): if not hasattr(self, "socket"):
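After the "ARTIQ coredev" handshake, the device now announces its endianness with a single byte, and open() builds struct.Struct (un)packers once instead of re-parsing format strings for every message. A standalone sketch of that convention:

# Standalone sketch of the endianness negotiation used in CommKernel.open().
import struct

def make_codecs(endian_byte):
    endian = {b"e": "<", b"E": ">"}[endian_byte]           # 'e' little, 'E' big endian
    return {
        "unpack_int32": struct.Struct(endian + "l").unpack,
        "unpack_int64": struct.Struct(endian + "q").unpack,
        "pack_header": struct.Struct(endian + "lB").pack,  # sync word + message type
    }

codecs = make_codecs(b"e")
assert codecs["unpack_int32"](b"\x2a\x00\x00\x00") == (42,)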
@ -91,36 +222,41 @@ class CommKernel:
del self.socket del self.socket
logger.debug("disconnected") logger.debug("disconnected")
def read(self, length):
r = bytes()
while len(r) < length:
rn = self.socket.recv(min(8192, length - len(r)))
if not rn:
raise ConnectionResetError("Connection closed")
r += rn
return r
def write(self, data):
self.socket.sendall(data)
# #
# Reader interface # Reader interface
# #
def _read(self, length):
# cache reads to avoid frequent calls to recv
while len(self.read_buffer) < length:
# 8192 is just an upper bound per recv call;
# when less data is available, recv returns earlier
diff = length - len(self.read_buffer)
flag = 0
if diff > 8192:
flag |= socket.MSG_WAITALL
new_buffer = self.socket.recv(8192, flag)
if not new_buffer:
raise ConnectionResetError("Core device connection closed unexpectedly")
self.read_buffer += new_buffer
result = self.read_buffer[:length]
self.read_buffer = self.read_buffer[length:]
return result
def _read_header(self): def _read_header(self):
self.open() self.open()
# Wait for a synchronization sequence, 5a 5a 5a 5a. # Wait for a synchronization sequence, 5a 5a 5a 5a.
sync_count = 0 sync_count = 0
while sync_count < 4: while sync_count < 4:
(sync_byte, ) = struct.unpack("B", self.read(1)) sync_byte = self._read(1)[0]
if sync_byte == 0x5a: if sync_byte == 0x5a:
sync_count += 1 sync_count += 1
else: else:
sync_count = 0 sync_count = 0
# Read message header. # Read message header.
(raw_type, ) = struct.unpack("B", self.read(1)) raw_type = self._read(1)[0]
self._read_type = Reply(raw_type) self._read_type = Reply(raw_type)
logger.debug("receiving message: type=%r", logger.debug("receiving message: type=%r",
@ -135,30 +271,26 @@ class CommKernel:
self._read_header() self._read_header()
self._read_expect(ty) self._read_expect(ty)
def _read_chunk(self, length):
return self.read(length)
def _read_int8(self): def _read_int8(self):
(value, ) = struct.unpack("B", self._read_chunk(1)) return self._read(1)[0]
return value
def _read_int32(self): def _read_int32(self):
(value, ) = struct.unpack(">l", self._read_chunk(4)) (value, ) = self.unpack_int32(self._read(4))
return value return value
def _read_int64(self): def _read_int64(self):
(value, ) = struct.unpack(">q", self._read_chunk(8)) (value, ) = self.unpack_int64(self._read(8))
return value return value
def _read_float64(self): def _read_float64(self):
(value, ) = struct.unpack(">d", self._read_chunk(8)) (value, ) = self.unpack_float64(self._read(8))
return value return value
def _read_bool(self): def _read_bool(self):
return True if self._read_int8() else False return True if self._read_int8() else False
def _read_bytes(self): def _read_bytes(self):
return self._read_chunk(self._read_int32()) return self._read(self._read_int32())
def _read_string(self): def _read_string(self):
return self._read_bytes().decode("utf-8") return self._read_bytes().decode("utf-8")
@ -167,38 +299,49 @@ class CommKernel:
# Writer interface # Writer interface
# #
def _write(self, data):
self.write_buffer += data
# if the buffer is already pretty large, send it
# the block size is arbitrary, tuning it may improve performance
if len(self.write_buffer) > 4096:
self._flush()
def _flush(self):
self.socket.sendall(self.write_buffer)
self.write_buffer.clear()
def _write_header(self, ty): def _write_header(self, ty):
self.open() self.open()
logger.debug("sending message: type=%r", ty) logger.debug("sending message: type=%r", ty)
# Write synchronization sequence and header. # Write synchronization sequence and header.
self.write(struct.pack(">lB", 0x5a5a5a5a, ty.value)) self._write(self.pack_header(0x5a5a5a5a, ty.value))
def _write_empty(self, ty): def _write_empty(self, ty):
self._write_header(ty) self._write_header(ty)
def _write_chunk(self, chunk): def _write_chunk(self, chunk):
self.write(chunk) self._write(chunk)
def _write_int8(self, value): def _write_int8(self, value):
self.write(struct.pack("B", value)) self._write(self.pack_int8(value))
def _write_int32(self, value): def _write_int32(self, value):
self.write(struct.pack(">l", value)) self._write(self.pack_int32(value))
def _write_int64(self, value): def _write_int64(self, value):
self.write(struct.pack(">q", value)) self._write(self.pack_int64(value))
def _write_float64(self, value): def _write_float64(self, value):
self.write(struct.pack(">d", value)) self._write(self.pack_float64(value))
def _write_bool(self, value): def _write_bool(self, value):
self.write(struct.pack("B", value)) self._write(b'\x01' if value else b'\x00')
def _write_bytes(self, value): def _write_bytes(self, value):
self._write_int32(len(value)) self._write_int32(len(value))
self.write(value) self._write(value)
def _write_string(self, value): def _write_string(self, value):
self._write_bytes(value.encode("utf-8")) self._write_bytes(value.encode("utf-8"))
@ -207,33 +350,47 @@ class CommKernel:
# Exported APIs # Exported APIs
# #
def reset_session(self):
self.write(struct.pack(">ll", 0x5a5a5a5a, 0))
def check_system_info(self): def check_system_info(self):
self._write_empty(Request.SystemInfo) self._write_empty(Request.SystemInfo)
self._flush()
self._read_header() self._read_header()
self._read_expect(Reply.SystemInfo) self._read_expect(Reply.SystemInfo)
runtime_id = self._read_chunk(4) runtime_id = self._read(4)
if runtime_id != b"AROR": if runtime_id == b"AROR":
gateware_version = self._read_string().split(";")[0]
if not self.warned_of_mismatch and incompatible_versions(gateware_version, software_version):
logger.warning("Mismatch between gateware (%s) "
"and software (%s) versions",
gateware_version, software_version)
CommKernel.warned_of_mismatch = True
finished_cleanly = self._read_bool()
if not finished_cleanly:
logger.warning("Previous kernel did not cleanly finish")
elif runtime_id == b"ARZQ":
pass
else:
raise UnsupportedDevice("Unsupported runtime ID: {}" raise UnsupportedDevice("Unsupported runtime ID: {}"
.format(runtime_id)) .format(runtime_id))
gateware_version = self._read_string().split(";")[0]
if gateware_version != software_version and not self.warned_of_mismatch:
logger.warning("Mismatch between gateware (%s) "
"and software (%s) versions",
gateware_version, software_version)
CommKernel.warned_of_mismatch = True
finished_cleanly = self._read_bool()
if not finished_cleanly:
logger.warning("Previous kernel did not cleanly finish")
def load(self, kernel_library): def load(self, kernel_library):
self._write_header(Request.LoadKernel) self._write_header(Request.LoadKernel)
self._write_bytes(kernel_library) self._write_bytes(kernel_library)
self._flush()
self._read_header()
if self._read_type == Reply.LoadFailed:
raise LoadError(self._read_string())
else:
self._read_expect(Reply.LoadCompleted)
def upload_subkernel(self, kernel_library, id, destination):
self._write_header(Request.SubkernelUpload)
self._write_int32(id)
self._write_int8(destination)
self._write_bytes(kernel_library)
self._flush()
self._read_header() self._read_header()
if self._read_type == Reply.LoadFailed: if self._read_type == Reply.LoadFailed:
@ -243,55 +400,16 @@ class CommKernel:
def run(self): def run(self):
self._write_empty(Request.RunKernel) self._write_empty(Request.RunKernel)
self._flush()
logger.debug("running kernel") logger.debug("running kernel")
_rpc_sentinel = object() _rpc_sentinel = object()
# See session.c:{send,receive}_rpc_value and llvm_ir_generator.py:_rpc_tag. # See rpc_proto.rs and compiler/ir.py:rpc_tag.
def _receive_rpc_value(self, embedding_map): def _receive_rpc_value(self, embedding_map):
tag = chr(self._read_int8()) tag = chr(self._read_int8())
if tag == "\x00": if tag in receivers:
return self._rpc_sentinel return receivers.get(tag)(self, embedding_map)
elif tag == "t":
length = self._read_int8()
return tuple(self._receive_rpc_value(embedding_map) for _ in range(length))
elif tag == "n":
return None
elif tag == "b":
return bool(self._read_int8())
elif tag == "i":
return numpy.int32(self._read_int32())
elif tag == "I":
return numpy.int64(self._read_int64())
elif tag == "f":
return self._read_float64()
elif tag == "F":
numerator = self._read_int64()
denominator = self._read_int64()
return Fraction(numerator, denominator)
elif tag == "s":
return self._read_string()
elif tag == "B":
return self._read_bytes()
elif tag == "A":
return self._read_bytes()
elif tag == "l":
length = self._read_int32()
return [self._receive_rpc_value(embedding_map) for _ in range(length)]
elif tag == "a":
length = self._read_int32()
return numpy.array([self._receive_rpc_value(embedding_map) for _ in range(length)])
elif tag == "r":
start = self._receive_rpc_value(embedding_map)
stop = self._receive_rpc_value(embedding_map)
step = self._receive_rpc_value(embedding_map)
return range(start, stop, step)
elif tag == "k":
name = self._read_string()
value = self._receive_rpc_value(embedding_map)
return RPCKeyword(name, value)
elif tag == "O":
return embedding_map.retrieve_object(self._read_int32())
else: else:
raise IOError("Unknown RPC value tag: {}".format(repr(tag))) raise IOError("Unknown RPC value tag: {}".format(repr(tag)))
@ -316,6 +434,9 @@ class CommKernel:
self._skip_rpc_value(tags) self._skip_rpc_value(tags)
elif tag == "r": elif tag == "r":
self._skip_rpc_value(tags) self._skip_rpc_value(tags)
elif tag == "a":
_ndims = tags.pop(0)
self._skip_rpc_value(tags)
else: else:
pass pass
@ -341,15 +462,15 @@ class CommKernel:
elif tag == "b": elif tag == "b":
check(isinstance(value, bool), check(isinstance(value, bool),
lambda: "bool") lambda: "bool")
self._write_int8(value) self._write_bool(value)
elif tag == "i": elif tag == "i":
check(isinstance(value, (int, numpy.int32)) and check(isinstance(value, (int, numpy.int32)) and
(-2**31 < value < 2**31-1), (-2**31 <= value < 2**31),
lambda: "32-bit int") lambda: "32-bit int")
self._write_int32(value) self._write_int32(value)
elif tag == "I": elif tag == "I":
check(isinstance(value, (int, numpy.int32, numpy.int64)) and check(isinstance(value, (int, numpy.int32, numpy.int64)) and
(-2**63 < value < 2**63-1), (-2**63 <= value < 2**63),
lambda: "64-bit int") lambda: "64-bit int")
self._write_int64(value) self._write_int64(value)
elif tag == "f": elif tag == "f":
@ -358,8 +479,8 @@ class CommKernel:
self._write_float64(value) self._write_float64(value)
elif tag == "F": elif tag == "F":
check(isinstance(value, Fraction) and check(isinstance(value, Fraction) and
(-2**63 < value.numerator < 2**63-1) and (-2**63 <= value.numerator < 2**63) and
(-2**63 < value.denominator < 2**63-1), (-2**63 <= value.denominator < 2**63),
lambda: "64-bit Fraction") lambda: "64-bit Fraction")
self._write_int64(value.numerator) self._write_int64(value.numerator)
self._write_int64(value.denominator) self._write_int64(value.denominator)
@ -379,9 +500,58 @@ class CommKernel:
check(isinstance(value, list), check(isinstance(value, list),
lambda: "list") lambda: "list")
self._write_int32(len(value)) self._write_int32(len(value))
for elt in value: tag_element = chr(tags[0])
tags_copy = bytearray(tags) if tag_element == "b":
self._send_rpc_value(tags_copy, elt, root, function) self._write(bytes(value))
elif tag_element == "i":
try:
self._write(struct.pack(self.endian + "%sl" % len(value), *value))
except struct.error:
raise RPCReturnValueError(
"type mismatch: cannot serialize {value} as {type}".format(
value=repr(value), type="32-bit integer list"))
elif tag_element == "I":
try:
self._write(struct.pack(self.endian + "%sq" % len(value), *value))
except struct.error:
raise RPCReturnValueError(
"type mismatch: cannot serialize {value} as {type}".format(
value=repr(value), type="64-bit integer list"))
elif tag_element == "f":
self._write(struct.pack(self.endian + "%sd" %
len(value), *value))
else:
for elt in value:
tags_copy = bytearray(tags)
self._send_rpc_value(tags_copy, elt, root, function)
self._skip_rpc_value(tags)
elif tag == "a":
check(isinstance(value, numpy.ndarray),
lambda: "numpy.ndarray")
num_dims = tags.pop(0)
check(num_dims == len(value.shape),
lambda: "{}-dimensional numpy.ndarray".format(num_dims))
for s in value.shape:
self._write_int32(s)
tag_element = chr(tags[0])
if tag_element == "b":
self._write(value.reshape((-1,), order="C").tobytes())
elif tag_element == "i":
array = value.reshape(
(-1,), order="C").astype(self.endian + 'i4')
self._write(array.tobytes())
elif tag_element == "I":
array = value.reshape(
(-1,), order="C").astype(self.endian + 'i8')
self._write(array.tobytes())
elif tag_element == "f":
array = value.reshape(
(-1,), order="C").astype(self.endian + 'd')
self._write(array.tobytes())
else:
for elt in value.reshape((-1,), order="C"):
tags_copy = bytearray(tags)
self._send_rpc_value(tags_copy, elt, root, function)
self._skip_rpc_value(tags) self._skip_rpc_value(tags)
elif tag == "r": elif tag == "r":
check(isinstance(value, range), check(isinstance(value, range),
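The fast paths added above serialize lists and arrays of primitive elements (element tags "b", "i", "I", "f") with a single struct.pack or ndarray.tobytes call instead of recursing per element. A standalone sketch of the 32-bit integer list case:

# Standalone sketch of the homogeneous 32-bit integer list fast path.
import struct

endian = "<"                      # whatever endianness the device negotiated
value = [1, 2, 3, 4]
packed = struct.pack(endian + "%sl" % len(value), *value)
assert len(packed) == 4 * len(value)
assert struct.unpack(endian + "4l", packed) == (1, 2, 3, 4)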
@ -403,15 +573,15 @@ class CommKernel:
return msg return msg
def _serve_rpc(self, embedding_map): def _serve_rpc(self, embedding_map):
is_async = self._read_bool() is_async = self._read_bool()
service_id = self._read_int32() service_id = self._read_int32()
args, kwargs = self._receive_rpc_args(embedding_map) args, kwargs = self._receive_rpc_args(embedding_map)
return_tags = self._read_bytes() return_tags = self._read_bytes()
if service_id is 0: if service_id == 0:
service = lambda obj, attr, value: setattr(obj, attr, value) def service(obj, attr, value): return setattr(obj, attr, value)
else: else:
service = embedding_map.retrieve_object(service_id) service = embedding_map.retrieve_object(service_id)
logger.debug("rpc service: [%d]%r%s %r %r -> %s", service_id, service, logger.debug("rpc service: [%d]%r%s %r %r -> %s", service_id, service,
(" (async)" if is_async else ""), args, kwargs, return_tags) (" (async)" if is_async else ""), args, kwargs, return_tags)
@ -421,41 +591,41 @@ class CommKernel:
try: try:
result = service(*args, **kwargs) result = service(*args, **kwargs)
logger.debug("rpc service: %d %r %r = %r", service_id, args, kwargs, result)
self._write_header(Request.RPCReply)
self._write_bytes(return_tags)
self._send_rpc_value(bytearray(return_tags), result, result, service)
except RPCReturnValueError as exn: except RPCReturnValueError as exn:
raise raise
except Exception as exn: except Exception as exn:
logger.debug("rpc service: %d %r %r ! %r", service_id, args, kwargs, exn) logger.debug("rpc service: %d %r %r ! %r",
service_id, args, kwargs, exn)
self._write_header(Request.RPCException) self._write_header(Request.RPCException)
# Note: instead of sending strings, we send object IDs
# to avoid the need for allocation on the device side.
# This is a special case that only applies to exceptions.
if hasattr(exn, "artiq_core_exception"): if hasattr(exn, "artiq_core_exception"):
exn = exn.artiq_core_exception exn = exn.artiq_core_exception
self._write_string(exn.name) self._write_int32(embedding_map.store_str(exn.name))
self._write_string(self._truncate_message(exn.message)) self._write_int32(embedding_map.store_str(self._truncate_message(exn.message)))
for index in range(3): for index in range(3):
self._write_int64(exn.param[index]) self._write_int64(exn.param[index])
filename, line, column, function = exn.traceback[-1] filename, line, column, function = exn.traceback[-1]
self._write_string(filename) self._write_int32(embedding_map.store_str(filename))
self._write_int32(line) self._write_int32(line)
self._write_int32(column) self._write_int32(column)
self._write_string(function) self._write_int32(embedding_map.store_str(function))
else: else:
exn_type = type(exn) exn_type = type(exn)
if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \ if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
hasattr(exn, "artiq_builtin"): hasattr(exn, "artiq_builtin"):
self._write_string("0:{}".format(exn_type.__name__)) name = "0:{}".format(exn_type.__name__)
else: else:
exn_id = embedding_map.store_object(exn_type) exn_id = embedding_map.store_object(exn_type)
self._write_string("{}:{}.{}".format(exn_id, name = "{}:{}.{}".format(exn_id,
exn_type.__module__, exn_type.__module__,
exn_type.__qualname__)) exn_type.__qualname__)
self._write_string(self._truncate_message(str(exn))) self._write_int32(embedding_map.store_str(name))
self._write_int32(embedding_map.store_str(self._truncate_message(str(exn))))
for index in range(3): for index in range(3):
self._write_int64(0) self._write_int64(0)
@ -466,36 +636,93 @@ class CommKernel:
((filename, line, function, _), ) = tb ((filename, line, function, _), ) = tb
else: else:
assert False assert False
self._write_string(filename) self._write_int32(embedding_map.store_str(filename))
self._write_int32(line) self._write_int32(line)
self._write_int32(-1) # column not known self._write_int32(-1) # column not known
self._write_string(function) self._write_int32(embedding_map.store_str(function))
self._flush()
else:
logger.debug("rpc service: %d %r %r = %r",
service_id, args, kwargs, result)
self._write_header(Request.RPCReply)
self._write_bytes(return_tags)
self._send_rpc_value(bytearray(return_tags),
result, result, service)
self._flush()
def _serve_exception(self, embedding_map, symbolizer, demangler): def _serve_exception(self, embedding_map, symbolizer, demangler):
name = self._read_string() exception_count = self._read_int32()
message = self._read_string() nested_exceptions = []
params = [self._read_int64() for _ in range(3)]
filename = self._read_string() def read_exception_string():
line = self._read_int32() # note: if length == -1, the following int32 is the object key
column = self._read_int32() length = self._read_int32()
function = self._read_string() if length == -1:
return embedding_map.retrieve_str(self._read_int32())
else:
return self._read(length).decode("utf-8")
backtrace = [self._read_int32() for _ in range(self._read_int32())] for _ in range(exception_count):
name = embedding_map.retrieve_str(self._read_int32())
message = read_exception_string()
params = [self._read_int64() for _ in range(3)]
traceback = list(reversed(symbolizer(backtrace))) + \ filename = read_exception_string()
[(filename, line, column, *demangler([function]), None)] line = self._read_int32()
core_exn = exceptions.CoreException(name, message, params, traceback) column = self._read_int32()
function = read_exception_string()
nested_exceptions.append([name, message, params,
filename, line, column, function])
demangled_names = demangler([ex[6] for ex in nested_exceptions])
for i in range(exception_count):
nested_exceptions[i][6] = demangled_names[i]
exception_info = []
for _ in range(exception_count):
sp = self._read_int32()
initial_backtrace = self._read_int32()
current_backtrace = self._read_int32()
exception_info.append((sp, initial_backtrace, current_backtrace))
backtrace = []
stack_pointers = []
for _ in range(self._read_int32()):
backtrace.append(self._read_int32())
stack_pointers.append(self._read_int32())
self._process_async_error()
traceback = list(symbolizer(backtrace))
core_exn = exceptions.CoreException(nested_exceptions, exception_info,
traceback, stack_pointers)
if core_exn.id == 0: if core_exn.id == 0:
python_exn_type = getattr(exceptions, core_exn.name.split('.')[-1]) python_exn_type = getattr(exceptions, core_exn.name.split('.')[-1])
else: else:
python_exn_type = embedding_map.retrieve_object(core_exn.id) python_exn_type = embedding_map.retrieve_object(core_exn.id)
python_exn = python_exn_type(message.format(*params)) try:
python_exn = python_exn_type(
nested_exceptions[-1][1].format(*nested_exceptions[0][2]))
except Exception as ex:
python_exn = RuntimeError(
f"Exception type={python_exn_type}, which couldn't be "
f"reconstructed ({ex})"
)
python_exn.artiq_core_exception = core_exn python_exn.artiq_core_exception = core_exn
raise python_exn raise python_exn
def _process_async_error(self):
errors = self._read_int8()
if errors > 0:
map_name = lambda y, z: [f"{y}(s)"] if z else []
errors = map_name("collision", errors & 2 ** 0) + \
map_name("busy error", errors & 2 ** 1) + \
map_name("sequence error", errors & 2 ** 2)
logger.warning(f"{(', '.join(errors[:-1]) + ' and ') if len(errors) > 1 else ''}{errors[-1]} "
f"reported during kernel execution")
def serve(self, embedding_map, symbolizer, demangler): def serve(self, embedding_map, symbolizer, demangler):
while True: while True:
self._read_header() self._read_header()
@ -503,10 +730,9 @@ class CommKernel:
self._serve_rpc(embedding_map) self._serve_rpc(embedding_map)
elif self._read_type == Reply.KernelException: elif self._read_type == Reply.KernelException:
self._serve_exception(embedding_map, symbolizer, demangler) self._serve_exception(embedding_map, symbolizer, demangler)
elif self._read_type == Reply.WatchdogExpired:
raise exceptions.WatchdogExpired
elif self._read_type == Reply.ClockFailure: elif self._read_type == Reply.ClockFailure:
raise exceptions.ClockFailure raise exceptions.ClockFailure
else: else:
self._read_expect(Reply.KernelFinished) self._read_expect(Reply.KernelFinished)
self._process_async_error()
return return
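_process_async_error above decodes a bitmask of asynchronous RTIO errors reported at kernel completion: bit 0 flags collisions, bit 1 busy errors, bit 2 sequence errors. A standalone decoding sketch:

# Standalone sketch of the async-error bitmask (bit 0 collision, 1 busy, 2 sequence).
def describe_async_errors(errors):
    names = ["collision(s)", "busy error(s)", "sequence error(s)"]
    return [name for bit, name in enumerate(names) if errors & (1 << bit)]

assert describe_async_errors(0b101) == ["collision(s)", "sequence error(s)"]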

artiq/coredevice/comm_mgmt.py

@ -1,10 +1,8 @@
from enum import Enum from enum import Enum
import logging import logging
import socket
import struct import struct
from artiq.coredevice.comm import initialize_connection from sipyco.keepalive import create_connection
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -21,11 +19,6 @@ class Request(Enum):
ConfigRemove = 14 ConfigRemove = 14
ConfigErase = 15 ConfigErase = 15
StartProfiler = 9
StopProfiler = 10
GetProfile = 11
Hotswap = 4
Reboot = 5 Reboot = 5
DebugAllocator = 8 DebugAllocator = 8
@ -40,8 +33,6 @@ class Reply(Enum):
ConfigData = 7 ConfigData = 7
Profile = 5
RebootImminent = 3 RebootImminent = 3
@ -59,11 +50,18 @@ class CommMgmt:
self.host = host self.host = host
self.port = port self.port = port
def open(self, **kwargs): def open(self):
if hasattr(self, "socket"): if hasattr(self, "socket"):
return return
self.socket = initialize_connection(self.host, self.port, **kwargs) self.socket = create_connection(self.host, self.port)
self.socket.sendall(b"ARTIQ management\n") self.socket.sendall(b"ARTIQ management\n")
endian = self._read(1)
if endian == b"e":
self.endian = "<"
elif endian == b"E":
self.endian = ">"
else:
raise IOError("Incorrect reply from device: expected e/E.")
def close(self): def close(self):
if not hasattr(self, "socket"): if not hasattr(self, "socket"):
@ -87,7 +85,7 @@ class CommMgmt:
self._write(struct.pack("B", value)) self._write(struct.pack("B", value))
def _write_int32(self, value): def _write_int32(self, value):
self._write(struct.pack(">l", value)) self._write(struct.pack(self.endian + "l", value))
def _write_bytes(self, value): def _write_bytes(self, value):
self._write_int32(len(value)) self._write_int32(len(value))
@ -112,12 +110,13 @@ class CommMgmt:
return ty return ty
def _read_expect(self, ty): def _read_expect(self, ty):
if self._read_header() != ty: header = self._read_header()
if header != ty:
raise IOError("Incorrect reply from device: {} (expected {})". raise IOError("Incorrect reply from device: {} (expected {})".
format(self._read_type, ty)) format(header, ty))
def _read_int32(self): def _read_int32(self):
(value, ) = struct.unpack(">l", self._read(4)) (value, ) = struct.unpack(self.endian + "l", self._read(4))
return value return value
def _read_bytes(self): def _read_bytes(self):
@ -161,7 +160,12 @@ class CommMgmt:
def config_read(self, key): def config_read(self, key):
self._write_header(Request.ConfigRead) self._write_header(Request.ConfigRead)
self._write_string(key) self._write_string(key)
self._read_expect(Reply.ConfigData) ty = self._read_header()
if ty == Reply.Error:
raise IOError("Device failed to read config. The key may not exist.")
elif ty != Reply.ConfigData:
raise IOError("Incorrect reply from device: {} (expected {})".
format(ty, Reply.ConfigData))
return self._read_string() return self._read_string()
def config_write(self, key, value): def config_write(self, key, value):
@ -170,7 +174,7 @@ class CommMgmt:
self._write_bytes(value) self._write_bytes(value)
ty = self._read_header() ty = self._read_header()
if ty == Reply.Error: if ty == Reply.Error:
raise IOError("Flash storage is full") raise IOError("Device failed to write config. More information may be available in the log.")
elif ty != Reply.Success: elif ty != Reply.Success:
raise IOError("Incorrect reply from device: {} (expected {})". raise IOError("Incorrect reply from device: {} (expected {})".
format(ty, Reply.Success)) format(ty, Reply.Success))
@ -184,45 +188,6 @@ class CommMgmt:
self._write_header(Request.ConfigErase) self._write_header(Request.ConfigErase)
self._read_expect(Reply.Success) self._read_expect(Reply.Success)
def start_profiler(self, interval, edges_size, hits_size):
self._write_header(Request.StartProfiler)
self._write_int32(interval)
self._write_int32(edges_size)
self._write_int32(hits_size)
self._read_expect(Reply.Success)
def stop_profiler(self):
self._write_header(Request.StopProfiler)
self._read_expect(Reply.Success)
def stop_profiler(self):
self._write_header(Request.StopProfiler)
self._read_expect(Reply.Success)
def get_profile(self):
self._write_header(Request.GetProfile)
self._read_expect(Reply.Profile)
hits = {}
for _ in range(self._read_int32()):
addr = self._read_int32()
count = self._read_int32()
hits[addr] = count
edges = {}
for _ in range(self._read_int32()):
caller = self._read_int32()
callee = self._read_int32()
count = self._read_int32()
edges[(caller, callee)] = count
return hits, edges
def hotswap(self, firmware):
self._write_header(Request.Hotswap)
self._write_bytes(firmware)
self._read_expect(Reply.RebootImminent)
def reboot(self): def reboot(self):
self._write_header(Request.Reboot) self._write_header(Request.Reboot)
self._read_expect(Reply.RebootImminent) self._read_expect(Reply.RebootImminent)

artiq/coredevice/comm_moninj.py

@ -3,6 +3,7 @@ import logging
import struct import struct
from enum import Enum from enum import Enum
from sipyco.keepalive import async_open_connection
__all__ = ["TTLProbe", "TTLOverride", "CommMonInj"] __all__ = ["TTLProbe", "TTLOverride", "CommMonInj"]
@ -28,7 +29,14 @@ class CommMonInj:
self.disconnect_cb = disconnect_cb self.disconnect_cb = disconnect_cb
async def connect(self, host, port=1383): async def connect(self, host, port=1383):
self._reader, self._writer = await asyncio.open_connection(host, port) self._reader, self._writer = await async_open_connection(
host,
port,
after_idle=1,
interval=1,
max_fails=3,
)
try: try:
self._writer.write(b"ARTIQ moninj\n") self._writer.write(b"ARTIQ moninj\n")
self._receive_task = asyncio.ensure_future(self._receive_cr()) self._receive_task = asyncio.ensure_future(self._receive_cr())
@ -38,6 +46,9 @@ class CommMonInj:
del self._writer del self._writer
raise raise
def wait_terminate(self):
return self._receive_task
async def close(self): async def close(self):
self.disconnect_cb = None self.disconnect_cb = None
try: try:
@ -52,19 +63,19 @@ class CommMonInj:
del self._writer del self._writer
def monitor_probe(self, enable, channel, probe): def monitor_probe(self, enable, channel, probe):
packet = struct.pack(">bblb", 0, enable, channel, probe) packet = struct.pack("<bblb", 0, enable, channel, probe)
self._writer.write(packet) self._writer.write(packet)
def monitor_injection(self, enable, channel, overrd): def monitor_injection(self, enable, channel, overrd):
packet = struct.pack(">bblb", 3, enable, channel, overrd) packet = struct.pack("<bblb", 3, enable, channel, overrd)
self._writer.write(packet) self._writer.write(packet)
def inject(self, channel, override, value): def inject(self, channel, override, value):
packet = struct.pack(">blbb", 1, channel, override, value) packet = struct.pack("<blbb", 1, channel, override, value)
self._writer.write(packet) self._writer.write(packet)
def get_injection_status(self, channel, override): def get_injection_status(self, channel, override):
packet = struct.pack(">blb", 2, channel, override) packet = struct.pack("<blb", 2, channel, override)
self._writer.write(packet) self._writer.write(packet)
async def _receive_cr(self): async def _receive_cr(self):
@ -74,15 +85,19 @@ class CommMonInj:
if not ty: if not ty:
return return
if ty == b"\x00": if ty == b"\x00":
payload = await self._reader.read(9) payload = await self._reader.readexactly(13)
channel, probe, value = struct.unpack(">lbl", payload) channel, probe, value = struct.unpack("<lbq", payload)
self.monitor_cb(channel, probe, value) self.monitor_cb(channel, probe, value)
elif ty == b"\x01": elif ty == b"\x01":
payload = await self._reader.read(6) payload = await self._reader.readexactly(6)
channel, override, value = struct.unpack(">lbb", payload) channel, override, value = struct.unpack("<lbb", payload)
self.injection_status_cb(channel, override, value) self.injection_status_cb(channel, override, value)
else: else:
raise ValueError("Unknown packet type", ty) raise ValueError("Unknown packet type", ty)
except asyncio.CancelledError:
raise
except:
logger.error("Moninj connection terminating with exception", exc_info=True)
finally: finally:
if self.disconnect_cb is not None: if self.disconnect_cb is not None:
self.disconnect_cb() self.disconnect_cb()
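With these changes the moninj wire format is little-endian and monitor values are 64-bit, so a monitor packet payload is 13 bytes ("<lbq"). A standalone round-trip sketch:

# Standalone sketch of the little-endian moninj monitor payload ("<lbq", 13 bytes).
import struct

payload = struct.pack("<lbq", 7, 0, 1 << 40)    # channel, probe, 64-bit value
assert len(payload) == 13
assert struct.unpack("<lbq", payload) == (7, 0, 1 << 40)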

artiq/coredevice/core.py

@ -1,5 +1,7 @@
import os, sys import os, sys
import numpy import numpy
from inspect import getfullargspec
from functools import wraps
from pythonparser import diagnostic from pythonparser import diagnostic
@ -11,7 +13,7 @@ from artiq.language.units import *
from artiq.compiler.module import Module from artiq.compiler.module import Module
from artiq.compiler.embedding import Stitcher from artiq.compiler.embedding import Stitcher
from artiq.compiler.targets import OR1KTarget, CortexA9Target from artiq.compiler.targets import RV32IMATarget, RV32GTarget, CortexA9Target
from artiq.coredevice.comm_kernel import CommKernel, CommKernelDummy from artiq.coredevice.comm_kernel import CommKernel, CommKernelDummy
# Import for side effects (creating the exception classes). # Import for side effects (creating the exception classes).
@ -52,6 +54,17 @@ def rtio_get_counter() -> TInt64:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
def get_target_cls(target):
if target == "rv32g":
return RV32GTarget
elif target == "rv32ima":
return RV32IMATarget
elif target == "cortexa9":
return CortexA9Target
else:
raise ValueError("Unsupported target")
class Core: class Core:
"""Core device driver. """Core device driver.
@ -65,79 +78,164 @@ class Core:
:param ref_multiplier: ratio between the RTIO fine timestamp frequency :param ref_multiplier: ratio between the RTIO fine timestamp frequency
and the RTIO coarse timestamp frequency (e.g. SERDES multiplication and the RTIO coarse timestamp frequency (e.g. SERDES multiplication
factor). factor).
:param analyzer_proxy: name of the core device analyzer proxy to trigger
(optional).
:param analyze_at_run_end: automatically trigger the core device analyzer
proxy after the Experiment's run stage finishes.
""" """
kernel_invariants = { kernel_invariants = {
"core", "ref_period", "coarse_ref_period", "ref_multiplier", "core", "ref_period", "coarse_ref_period", "ref_multiplier",
} }
def __init__(self, dmgr, host, ref_period, ref_multiplier=8, target="or1k"): def __init__(self, dmgr,
host, ref_period,
analyzer_proxy=None, analyze_at_run_end=False,
ref_multiplier=8,
target="rv32g", satellite_cpu_targets={}):
self.ref_period = ref_period self.ref_period = ref_period
self.ref_multiplier = ref_multiplier self.ref_multiplier = ref_multiplier
if target == "or1k": self.satellite_cpu_targets = satellite_cpu_targets
self.target_cls = OR1KTarget self.target_cls = get_target_cls(target)
elif target == "cortexa9":
self.target_cls = CortexA9Target
else:
raise ValueError("Unsupported target")
self.coarse_ref_period = ref_period*ref_multiplier self.coarse_ref_period = ref_period*ref_multiplier
if host is None: if host is None:
self.comm = CommKernelDummy() self.comm = CommKernelDummy()
else: else:
self.comm = CommKernel(host) self.comm = CommKernel(host)
self.analyzer_proxy_name = analyzer_proxy
self.analyze_at_run_end = analyze_at_run_end
self.first_run = True self.first_run = True
self.dmgr = dmgr self.dmgr = dmgr
self.core = self self.core = self
self.comm.core = self self.comm.core = self
self.analyzer_proxy = None
def notify_run_end(self):
if self.analyze_at_run_end:
self.trigger_analyzer_proxy()
def close(self): def close(self):
self.comm.close() self.comm.close()
def compile(self, function, args, kwargs, set_result=None, def compile(self, function, args, kwargs, set_result=None,
attribute_writeback=True, print_as_rpc=True): attribute_writeback=True, print_as_rpc=True,
target=None, destination=0, subkernel_arg_types=[]):
try: try:
engine = _DiagnosticEngine(all_errors_are_fatal=True) engine = _DiagnosticEngine(all_errors_are_fatal=True)
stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr, stitcher = Stitcher(engine=engine, core=self, dmgr=self.dmgr,
print_as_rpc=print_as_rpc) print_as_rpc=print_as_rpc,
destination=destination, subkernel_arg_types=subkernel_arg_types)
stitcher.stitch_call(function, args, kwargs, set_result) stitcher.stitch_call(function, args, kwargs, set_result)
stitcher.finalize() stitcher.finalize()
module = Module(stitcher, module = Module(stitcher,
ref_period=self.ref_period, ref_period=self.ref_period,
attribute_writeback=attribute_writeback) attribute_writeback=attribute_writeback)
target = self.target_cls() target = target if target is not None else self.target_cls()
library = target.compile_and_link([module]) library = target.compile_and_link([module])
stripped_library = target.strip(library) stripped_library = target.strip(library)
return stitcher.embedding_map, stripped_library, \ return stitcher.embedding_map, stripped_library, \
lambda addresses: target.symbolize(library, addresses), \ lambda addresses: target.symbolize(library, addresses), \
lambda symbols: target.demangle(symbols) lambda symbols: target.demangle(symbols), \
module.subkernel_arg_types
except diagnostic.Error as error: except diagnostic.Error as error:
raise CompileError(error.diagnostic) from error raise CompileError(error.diagnostic) from error
def _run_compiled(self, kernel_library, embedding_map, symbolizer, demangler):
if self.first_run:
self.comm.check_system_info()
self.first_run = False
self.comm.load(kernel_library)
self.comm.run()
self.comm.serve(embedding_map, symbolizer, demangler)
def run(self, function, args, kwargs): def run(self, function, args, kwargs):
result = None result = None
@rpc(flags={"async"}) @rpc(flags={"async"})
def set_result(new_result): def set_result(new_result):
nonlocal result nonlocal result
result = new_result result = new_result
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
embedding_map, kernel_library, symbolizer, demangler = \
self.compile(function, args, kwargs, set_result) self.compile(function, args, kwargs, set_result)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
if self.first_run: self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
self.comm.check_system_info()
self.first_run = False
self.comm.load(kernel_library)
self.comm.run()
self.comm.serve(embedding_map, symbolizer, demangler)
return result return result
def compile_subkernel(self, sid, subkernel_fn, embedding_map, args, subkernel_arg_types):
# pass self to subkernels (if applicable)
# assuming the first argument is self
subkernel_args = getfullargspec(subkernel_fn.artiq_embedded.function)
self_arg = []
if len(subkernel_args[0]) > 0:
if subkernel_args[0][0] == 'self':
self_arg = args[:1]
destination = subkernel_fn.artiq_embedded.destination
destination_tgt = self.satellite_cpu_targets[destination]
target = get_target_cls(destination_tgt)(subkernel_id=sid)
object_map, kernel_library, _, _, _ = \
self.compile(subkernel_fn, self_arg, {}, attribute_writeback=False,
print_as_rpc=False, target=target, destination=destination,
subkernel_arg_types=subkernel_arg_types.get(sid, []))
if object_map.has_rpc_or_subkernel():
raise ValueError("Subkernel must not use RPC or subkernels in other destinations")
return destination, kernel_library
def compile_and_upload_subkernels(self, embedding_map, args, subkernel_arg_types):
for sid, subkernel_fn in embedding_map.subkernels().items():
destination, kernel_library = \
self.compile_subkernel(sid, subkernel_fn, embedding_map,
args, subkernel_arg_types)
self.comm.upload_subkernel(kernel_library, sid, destination)
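For reference, a minimal sketch of how the subkernel plumbing above is used from experiment code; the @subkernel decorator and subkernel_await are taken from artiq.experiment, while the destination number and the "satellite_cpu_targets" device_db entry are illustrative assumptions, not part of this diff:

# Hypothetical example, not part of this diff.
# Assumes a DRTIO satellite at destination 1 and a device_db "core" entry with
# "satellite_cpu_targets": {1: "rv32g"}.
from artiq.experiment import *

class SubkernelDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @subkernel(destination=1)
    def satellite_task(self):
        pass  # body runs on the satellite core device CPU

    @kernel
    def run(self):
        self.core.reset()
        self.satellite_task()                 # uploaded by compile_and_upload_subkernels()
        subkernel_await(self.satellite_task)  # wait for the satellite to finish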
def precompile(self, function, *args, **kwargs):
"""Precompile a kernel and return a callable that executes it on the core device
at a later time.
Arguments to the kernel are set at compilation time and passed to this function,
as additional positional and keyword arguments.
The returned callable accepts no arguments.
Precompiled kernels may use RPCs and subkernels.
Object attributes at the beginning of a precompiled kernel execution have the
values they had at precompilation time. If up-to-date values are required,
use RPC to read them.
Similarly, modified values are not written back, and explicit RPC should be used
to modify host objects.
Carefully review the source code of driver calls used in precompiled kernels, as
they may rely on host object attributes being transferred between kernel calls.
Examples include code used to control DDS phase, and Urukul RF switch control
via the CPLD register.
The return value of the callable is the return value of the kernel, if any.
The callable may be called several times.
"""
if not hasattr(function, "artiq_embedded"):
raise ValueError("Argument is not a kernel")
result = None
@rpc(flags={"async"})
def set_result(new_result):
nonlocal result
result = new_result
embedding_map, kernel_library, symbolizer, demangler, subkernel_arg_types = \
self.compile(function, args, kwargs, set_result, attribute_writeback=False)
self.compile_and_upload_subkernels(embedding_map, args, subkernel_arg_types)
@wraps(function)
def run_precompiled():
nonlocal result
self._run_compiled(kernel_library, embedding_map, symbolizer, demangler)
return result
return run_precompiled
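A hedged usage sketch of precompile(); the experiment skeleton and kernel body are illustrative and not part of this diff:

# Hypothetical example, not part of this diff.
from artiq.experiment import *

class PrecompileDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def flash(self, n):
        pass  # kernel body elided; n is frozen at precompilation time

    def run(self):
        precompiled = self.core.precompile(self.flash, 10)
        for _ in range(5):
            precompiled()  # re-runs the already compiled and uploaded kernel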
@portable
def seconds_to_mu(self, seconds):
"""Convert seconds to the corresponding number of machine units
@@ -204,3 +302,23 @@ class Core:
min_now = rtio_get_counter() + 125000
if now_mu() < min_now:
at_mu(min_now)
def trigger_analyzer_proxy(self):
"""Causes the core analyzer proxy to retrieve a dump from the device,
and distribute it to all connected clients (typically dashboards).
Returns only after the dump has been retrieved from the device.
Raises IOError if no analyzer proxy has been configured, or if the
analyzer proxy fails. In the latter case, more details would be
available in the proxy log.
"""
if self.analyzer_proxy is None:
if self.analyzer_proxy_name is not None:
self.analyzer_proxy = self.dmgr.get(self.analyzer_proxy_name)
if self.analyzer_proxy is None:
raise IOError("No analyzer proxy configured")
else:
success = self.analyzer_proxy.trigger()
if not success:
raise IOError("Analyzer proxy reported failure")

@ -0,0 +1,641 @@
{
"$id": "https://m-labs.hk/kasli_generic.schema.json",
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Kasli variant description",
"type": "object",
"properties": {
"_description": {
"type": "string",
"description": "Free-form description text"
},
"target": {
"type": "string",
"description": "Target board"
},
"variant": {
"type": "string",
"description": "Target board variant name"
},
"min_artiq_version": {
"type": "string",
"description": "Minimum required ARTIQ version",
"default": "0"
},
"hw_rev": {
"type": "string",
"description": "Hardware revision"
},
"base": {
"type": "string",
"enum": ["use_drtio_role", "standalone", "master", "satellite"],
"description": "Deprecated, use drtio_role instead",
"default": "use_drtio_role"
},
"drtio_role": {
"type": "string",
"enum": ["standalone", "master", "satellite"],
"description": "Role that this device takes in a DRTIO network; 'standalone' means no DRTIO",
"default": "standalone"
},
"ext_ref_frequency": {
"type": "number",
"exclusiveMinimum": 0,
"description": "External reference frequency"
},
"rtio_frequency": {
"type": "number",
"exclusiveMinimum": 0,
"default": 125e6,
"description": "RTIO frequency"
},
"core_addr": {
"type": "string",
"format": "ipv4",
"description": "IPv4 address",
"default": "192.168.1.70"
},
"vendor": {
"type": "string",
"description": "Board vendor"
},
"eui48": {
"type": "array",
"items": {
"type": "string",
"pattern": "^([0-9a-f]{2}-){5}[0-9a-f]{2}$",
"examples": ["80-1f-12-4c-22-7f"]
},
"description": "Ethernet MAC addresses"
},
"enable_sata_drtio": {
"type": "boolean",
"default": false
},
"sed_lanes": {
"type": "number",
"minimum": 1,
"maximum": 32,
"default": 8,
"description": "Number of FIFOs in the SED, must be a power of 2"
},
"peripherals": {
"type": "array",
"items": {
"$ref": "#/definitions/peripheral"
}
}
},
"if": {
"properties": {
"target": { "const": "kasli" },
"hw_rev": {
"not": {
"oneOf": [
{ "const": "v1.0" },
{ "const": "v1.1" }
]
}
}
}
},
"then": {
"properties": {
"enable_sata_drtio": {
"const": false
}
}
},
"required": ["target", "variant", "hw_rev", "base", "peripherals"],
"additionalProperties": false,
"oneOf": [
{
"properties": {
"target": {
"type": "string",
"const": "kasli"
},
"hw_rev": {
"type": "string",
"enum": ["v1.0", "v1.1", "v2.0"]
}
}
},
{
"properties": {
"target": {
"type": "string",
"const": "kasli_soc"
},
"hw_rev": {
"type": "string",
"enum": ["v1.0", "v1.1"]
}
}
}
],
"definitions": {
"peripheral": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["dio", "dio_spi", "urukul", "novogorny", "sampler", "suservo", "zotino", "grabber", "mirny", "fastino", "phaser", "hvamp", "shuttler"]
},
"board": {
"type": "string"
},
"hw_rev": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+"
}
},
"required": ["type"],
"allOf": [{
"title": "DIO",
"if": {
"properties": {
"type": {
"const": "dio"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
},
"edge_counter": {
"type": "boolean",
"default": false
},
"bank_direction_low": {
"type": "string",
"enum": ["input", "output", "clkgen"]
},
"bank_direction_high": {
"type": "string",
"enum": ["input", "output", "clkgen"]
}
},
"required": ["ports", "bank_direction_low", "bank_direction_high"]
}
}, {
"title": "DIO_SPI",
"if": {
"properties": {
"type": {
"const": "dio_spi"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
},
"spi": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "dio_spi"
},
"clk": {
"type": "integer",
"minimum": 0,
"maximum": 7
},
"mosi": {
"type": "integer",
"minimum": 0,
"maximum": 7
},
"miso": {
"type": "integer",
"minimum": 0,
"maximum": 7
},
"cs": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0,
"maximum": 7
}
}
},
"required": ["clk"]
},
"minItems": 1
},
"ttl": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "ttl"
},
"pin": {
"type": "integer",
"minimum": 0,
"maximum": 7
},
"direction": {
"type": "string",
"enum": ["input", "output"]
},
"edge_counter": {
"type": "boolean",
"default": false
}
},
"required": ["pin", "direction"]
},
"default": []
}
},
"required": ["ports", "spi"]
}
}, {
"title": "Urukul",
"if": {
"properties": {
"type": {
"const": "urukul"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 2
},
"synchronization": {
"type": "boolean",
"default": false
},
"refclk": {
"type": "number",
"minimum": 0
},
"clk_sel": {
"type": "integer",
"minimum": 0,
"maximum": 3
},
"clk_div": {
"type": "integer",
"minimum": 0,
"maximum": 3,
"default": 0
},
"pll_n": {
"type": "integer"
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
},
"dds": {
"type": "string",
"enum": ["ad9910", "ad9912"],
"default": "ad9910"
}
},
"required": ["ports"]
}
}, {
"title": "Novogorny",
"if": {
"properties": {
"type": {
"const": "novogorny"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
}
},
"required": ["ports"]
}
}, {
"title": "Sampler",
"if": {
"properties": {
"type": {
"const": "sampler"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 2
}
},
"required": ["ports"]
}
}, {
"title": "SUServo",
"if": {
"properties": {
"type": {
"const": "suservo"
}
}
},
"then": {
"properties": {
"sampler_ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 2,
"maxItems": 2
},
"sampler_hw_rev": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+",
"default": "v2.2"
},
"urukul0_ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 2,
"maxItems": 2
},
"urukul1_ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 2,
"maxItems": 2
},
"refclk": {
"type": "number",
"minimum": 0
},
"clk_sel": {
"type": "integer",
"minimum": 0,
"maximum": 3
},
"pll_n": {
"type": "integer",
"default": 32
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
}
},
"required": ["sampler_ports", "urukul0_ports"]
}
}, {
"title": "Zotino",
"if": {
"properties": {
"type": {
"const": "zotino"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
}
},
"required": ["ports"]
}
}, {
"title": "Grabber",
"if": {
"properties": {
"type": {
"const": "grabber"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 3
}
},
"required": ["ports"]
}
}, {
"title": "Mirny",
"if": {
"properties": {
"type": {
"const": "mirny"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
},
"refclk": {
"type": "number",
"exclusiveMinimum": 0,
"default": 100e6
},
"clk_sel": {
"oneOf": [
{
"type": "integer",
"minimum": 0,
"maximum": 3
},
{
"type": "string",
"enum": ["xo", "mmcx", "sma"]
}
],
"default": 0
},
"almazny": {
"type": "boolean",
"default": false
},
"almazny_hw_rev": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+",
"default": "v1.2"
}
},
"required": ["ports"]
}
}, {
"title": "Fastino",
"if": {
"properties": {
"type": {
"const": "fastino"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
},
"log2_width": {
"type": "integer",
"default": 0,
"description": "Width of DAC channel group (logarithm base 2)"
}
},
"required": ["ports"]
}
}, {
"title": "Phaser",
"if": {
"properties": {
"type": {
"const": "phaser"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
},
"mode": {
"type": "string",
"enum": ["base", "miqro"],
"default": "base"
}
},
"required": ["ports"]
}
}, {
"title": "HVAmp",
"if": {
"properties": {
"type": {
"const": "hvamp"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 1
}
},
"required": ["ports"]
}
},{
"title": "Shuttler",
"if": {
"properties": {
"type": {
"const": "shuttler"
}
}
},
"then": {
"properties": {
"ports": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 1,
"maxItems": 2
},
"drtio_destination": {
"type": "integer"
}
},
"required": ["ports"]
}
}]
}
}
}
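For illustration, a minimal system description that should validate against the schema above; the variant name, port numbers and peripheral selection are made up for the example:

{
    "target": "kasli",
    "variant": "example",
    "hw_rev": "v2.0",
    "base": "use_drtio_role",
    "drtio_role": "standalone",
    "peripherals": [
        {
            "type": "dio",
            "ports": [0],
            "bank_direction_low": "input",
            "bank_direction_high": "output"
        },
        {
            "type": "urukul",
            "ports": [1, 2],
            "dds": "ad9910"
        }
    ]
}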


@ -0,0 +1,277 @@
class DAC34H84:
"""DAC34H84 settings and register map.
For possible values, documentation, and explanation, see the DAC datasheet
at https://www.ti.com/lit/pdf/slas751
"""
qmc_corr_ena = 0 # msb ab
qmc_offset_ena = 0 # msb ab
invsinc_ena = 0 # msb ab
interpolation = 1 # 2x
fifo_ena = 1
alarm_out_ena = 1
alarm_out_pol = 1
clkdiv_sync_ena = 1
iotest_ena = 0
cnt64_ena = 0
oddeven_parity = 0 # even
single_parity_ena = 1
dual_parity_ena = 0
rev_interface = 0
dac_complement = 0b0000 # msb A
alarm_fifo = 0b111 # msb 2-away
dacclkgone_ena = 1
dataclkgone_ena = 1
collisiongone_ena = 1
sif4_ena = 1
mixer_ena = 0
mixer_gain = 1
nco_ena = 0
revbus = 0
twos = 1
coarse_dac = 9 # 18.75 mA, 0-15
sif_txenable = 0
mask_alarm_from_zerochk = 0
mask_alarm_fifo_collision = 0
mask_alarm_fifo_1away = 0
mask_alarm_fifo_2away = 0
mask_alarm_dacclk_gone = 0
mask_alarm_dataclk_gone = 0
mask_alarm_output_gone = 0
mask_alarm_from_iotest = 0
mask_alarm_from_pll = 0
mask_alarm_parity = 0b0000 # msb a
qmc_offseta = 0 # 12b
fifo_offset = 2 # 0-7
qmc_offsetb = 0 # 12b
qmc_offsetc = 0 # 12b
qmc_offsetd = 0 # 12b
qmc_gaina = 0 # 11b
cmix_fs8 = 0
cmix_fs4 = 0
cmix_fs2 = 0
cmix_nfs4 = 0
qmc_gainb = 0 # 11b
qmc_gainc = 0 # 11b
output_delayab = 0b00
output_delaycd = 0b00
qmc_gaind = 0 # 11b
qmc_phaseab = 0 # 12b
qmc_phasecd = 0 # 12b
phase_offsetab = 0 # 16b
phase_offsetcd = 0 # 16b
phase_addab_lsb = 0 # 16b
phase_addab_msb = 0 # 16b
phase_addcd_lsb = 0 # 16b
phase_addcd_msb = 0 # 16b
pll_reset = 0
pll_ndivsync_ena = 1
pll_ena = 1
pll_cp = 0b01 # single charge pump
pll_p = 0b100 # p=4
pll_m2 = 1 # x2
pll_m = 8 # m = 8
pll_n = 0b0001 # n = 2
pll_vcotune = 0b01
pll_vco = 0x3f # 4 GHz
bias_sleep = 0
tsense_sleep = 0
pll_sleep = 0
clkrecv_sleep = 0
dac_sleep = 0b0000 # msb a
extref_ena = 0
fuse_sleep = 1
atest = 0b00000 # atest mode
syncsel_qmcoffsetab = 0b1001 # sif_sync and register write
syncsel_qmcoffsetcd = 0b1001 # sif_sync and register write
syncsel_qmccorrab = 0b1001 # sif_sync and register write
syncsel_qmccorrcd = 0b1001 # sif_sync and register write
syncsel_mixerab = 0b1001 # sif_sync and register write
syncsel_mixercd = 0b1001 # sif_sync and register write
syncsel_nco = 0b1000 # sif_sync
syncsel_fifo_input = 0b10 # external lvds istr
sif_sync = 0
syncsel_fifoin = 0b0010 # istr
syncsel_fifoout = 0b0100 # ostr
clkdiv_sync_sel = 0 # ostr
path_a_sel = 0
path_b_sel = 1
path_c_sel = 2
path_d_sel = 3
# swap dac pairs (CDAB) for layout
# swap I-Q dacs for spectral inversion
dac_a_sel = 3
dac_b_sel = 2
dac_c_sel = 1
dac_d_sel = 0
dac_sleep_en = 0b1111 # msb a
clkrecv_sleep_en = 1
pll_sleep_en = 1
lvds_data_sleep_en = 1
lvds_control_sleep_en = 1
temp_sense_sleep_en = 1
bias_sleep_en = 1
data_dly = 2
clk_dly = 0
ostrtodig_sel = 0
ramp_ena = 0
sifdac_ena = 0
grp_delaya = 0x00
grp_delayb = 0x00
grp_delayc = 0x00
grp_delayd = 0x00
sifdac = 0
def __init__(self, updates=None):
if updates is None:
return
for key, value in updates.items():
if not hasattr(self, key):
raise KeyError("invalid setting", key)
setattr(self, key, value)
def get_mmap(self):
mmap = []
mmap.append(
(0x00 << 16) |
(self.qmc_offset_ena << 14) | (self.qmc_corr_ena << 12) |
(self.interpolation << 8) | (self.fifo_ena << 7) |
(self.alarm_out_ena << 4) | (self.alarm_out_pol << 3) |
(self.clkdiv_sync_ena << 2) | (self.invsinc_ena << 0))
mmap.append(
(0x01 << 16) |
(self.iotest_ena << 15) | (self.cnt64_ena << 12) |
(self.oddeven_parity << 11) | (self.single_parity_ena << 10) |
(self.dual_parity_ena << 9) | (self.rev_interface << 8) |
(self.dac_complement << 4) | (self.alarm_fifo << 1))
mmap.append(
(0x02 << 16) |
(self.dacclkgone_ena << 14) | (self.dataclkgone_ena << 13) |
(self.collisiongone_ena << 12) | (self.sif4_ena << 7) |
(self.mixer_ena << 6) | (self.mixer_gain << 5) |
(self.nco_ena << 4) | (self.revbus << 3) | (self.twos << 1))
mmap.append((0x03 << 16) | (self.coarse_dac << 12) |
(self.sif_txenable << 0))
mmap.append(
(0x07 << 16) |
(self.mask_alarm_from_zerochk << 15) | (1 << 14) |
(self.mask_alarm_fifo_collision << 13) |
(self.mask_alarm_fifo_1away << 12) |
(self.mask_alarm_fifo_2away << 11) |
(self.mask_alarm_dacclk_gone << 10) |
(self.mask_alarm_dataclk_gone << 9) |
(self.mask_alarm_output_gone << 8) |
(self.mask_alarm_from_iotest << 7) | (1 << 6) |
(self.mask_alarm_from_pll << 5) | (self.mask_alarm_parity << 1))
mmap.append(
(0x08 << 16) | (self.qmc_offseta << 0))
mmap.append(
(0x09 << 16) | (self.fifo_offset << 13) | (self.qmc_offsetb << 0))
mmap.append((0x0a << 16) | (self.qmc_offsetc << 0))
mmap.append((0x0b << 16) | (self.qmc_offsetd << 0))
mmap.append((0x0c << 16) | (self.qmc_gaina << 0))
mmap.append(
(0x0d << 16) |
(self.cmix_fs8 << 15) | (self.cmix_fs4 << 14) |
(self.cmix_fs2 << 13) | (self.cmix_nfs4 << 12) |
(self.qmc_gainb << 0))
mmap.append((0x0e << 16) | (self.qmc_gainc << 0))
mmap.append(
(0x0f << 16) |
(self.output_delayab << 14) | (self.output_delaycd << 12) |
(self.qmc_gaind << 0))
mmap.append((0x10 << 16) | (self.qmc_phaseab << 0))
mmap.append((0x11 << 16) | (self.qmc_phasecd << 0))
mmap.append((0x12 << 16) | (self.phase_offsetab << 0))
mmap.append((0x13 << 16) | (self.phase_offsetcd << 0))
mmap.append((0x14 << 16) | (self.phase_addab_lsb << 0))
mmap.append((0x15 << 16) | (self.phase_addab_msb << 0))
mmap.append((0x16 << 16) | (self.phase_addcd_lsb << 0))
mmap.append((0x17 << 16) | (self.phase_addcd_msb << 0))
mmap.append(
(0x18 << 16) |
(0b001 << 13) | (self.pll_reset << 12) |
(self.pll_ndivsync_ena << 11) | (self.pll_ena << 10) |
(self.pll_cp << 6) | (self.pll_p << 3))
mmap.append(
(0x19 << 16) |
(self.pll_m2 << 15) | (self.pll_m << 8) | (self.pll_n << 4) |
(self.pll_vcotune << 2))
mmap.append(
(0x1a << 16) |
(self.pll_vco << 10) | (self.bias_sleep << 7) |
(self.tsense_sleep << 6) |
(self.pll_sleep << 5) | (self.clkrecv_sleep << 4) |
(self.dac_sleep << 0))
mmap.append(
(0x1b << 16) |
(self.extref_ena << 15) | (self.fuse_sleep << 11) |
(self.atest << 0))
mmap.append(
(0x1e << 16) |
(self.syncsel_qmcoffsetab << 12) |
(self.syncsel_qmcoffsetcd << 8) |
(self.syncsel_qmccorrab << 4) |
(self.syncsel_qmccorrcd << 0))
mmap.append(
(0x1f << 16) |
(self.syncsel_mixerab << 12) | (self.syncsel_mixercd << 8) |
(self.syncsel_nco << 4) | (self.syncsel_fifo_input << 2) |
(self.sif_sync << 1))
mmap.append(
(0x20 << 16) |
(self.syncsel_fifoin << 12) | (self.syncsel_fifoout << 8) |
(self.clkdiv_sync_sel << 0))
mmap.append(
(0x22 << 16) |
(self.path_a_sel << 14) | (self.path_b_sel << 12) |
(self.path_c_sel << 10) | (self.path_d_sel << 8) |
(self.dac_a_sel << 6) | (self.dac_b_sel << 4) |
(self.dac_c_sel << 2) | (self.dac_d_sel << 0))
mmap.append(
(0x23 << 16) |
(self.dac_sleep_en << 12) | (self.clkrecv_sleep_en << 11) |
(self.pll_sleep_en << 10) | (self.lvds_data_sleep_en << 9) |
(self.lvds_control_sleep_en << 8) |
(self.temp_sense_sleep_en << 7) | (1 << 6) |
(self.bias_sleep_en << 5) | (0x1f << 0))
mmap.append(
(0x24 << 16) | (self.data_dly << 13) | (self.clk_dly << 10))
mmap.append(
(0x2d << 16) |
(self.ostrtodig_sel << 14) | (self.ramp_ena << 13) |
(0x002 << 1) | (self.sifdac_ena << 0))
mmap.append(
(0x2e << 16) | (self.grp_delaya << 8) | (self.grp_delayb << 0))
mmap.append(
(0x2f << 16) | (self.grp_delayc << 8) | (self.grp_delayd << 0))
mmap.append((0x30 << 16) | self.sifdac)
return mmap
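A small sketch of how the register map above can be inspected; the override values are arbitrary and the print loop is only for illustration (on hardware it is Phaser's init that consumes get_mmap() and writes the words over SPI):

# Hypothetical example, not part of this diff.
dac = DAC34H84({"fifo_offset": 4, "pll_vco": 0x3f})
for word in dac.get_mmap():
    addr = word >> 16        # register address (bits 23:16)
    value = word & 0xffff    # 16-bit register value
    print("0x{:02x} <= 0x{:04x}".format(addr, value))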


@@ -6,7 +6,7 @@ alone could achieve.
"""
from artiq.language.core import syscall, kernel
-from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple
+from artiq.language.types import TInt32, TInt64, TStr, TNone, TTuple, TBool
from artiq.coredevice.exceptions import DMAError
from numpy import int64
@@ -17,7 +17,7 @@ def dma_record_start(name: TStr) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
-def dma_record_stop(duration: TInt64) -> TNone:
+def dma_record_stop(duration: TInt64, enable_ddma: TBool) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
@@ -25,11 +25,11 @@ def dma_erase(name: TStr) -> TNone:
raise NotImplementedError("syscall not simulated")
@syscall
-def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32]):
+def dma_retrieve(name: TStr) -> TTuple([TInt64, TInt32, TBool]):
raise NotImplementedError("syscall not simulated")
@syscall
-def dma_playback(timestamp: TInt64, ptr: TInt32) -> TNone:
+def dma_playback(timestamp: TInt64, ptr: TInt32, enable_ddma: TBool) -> TNone:
raise NotImplementedError("syscall not simulated")
@@ -47,6 +47,7 @@ class DMARecordContextManager:
def __init__(self):
self.name = ""
self.saved_now_mu = int64(0)
+self.enable_ddma = False
@kernel
def __enter__(self):
@@ -56,7 +57,7 @@ class DMARecordContextManager:
@kernel
def __exit__(self, type, value, traceback):
-dma_record_stop(now_mu()) # see above
+dma_record_stop(now_mu(), self.enable_ddma) # see above
at_mu(self.saved_now_mu)
@@ -74,12 +75,20 @@ class CoreDMA:
self.epoch = 0
@kernel
-def record(self, name):
+def record(self, name, enable_ddma=False):
"""Returns a context manager that will record a DMA trace called ``name``.
Any previously recorded trace with the same name is overwritten.
-The trace will persist across kernel switches."""
+The trace will persist across kernel switches.
+In DRTIO context, distributed DMA can be toggled with ``enable_ddma``.
+Enabling it allows running DMA on satellites, rather than sending all
+events from the master.
+Keeping it disabled may improve performance in some scenarios,
+e.g. when there are many small satellite buffers."""
self.epoch += 1
self.recorder.name = name
+self.recorder.enable_ddma = enable_ddma
return self.recorder
@kernel
@@ -92,24 +101,24 @@ class CoreDMA:
def playback(self, name):
"""Replays a previously recorded DMA trace. This function blocks until
the entire trace is submitted to the RTIO FIFOs."""
-(advance_mu, ptr) = dma_retrieve(name)
-dma_playback(now_mu(), ptr)
+(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
+dma_playback(now_mu(), ptr, uses_ddma)
delay_mu(advance_mu)
@kernel
def get_handle(self, name):
"""Returns a handle to a previously recorded DMA trace. The returned handle
is only valid until the next call to :meth:`record` or :meth:`erase`."""
-(advance_mu, ptr) = dma_retrieve(name)
-return (self.epoch, advance_mu, ptr)
+(advance_mu, ptr, uses_ddma) = dma_retrieve(name)
+return (self.epoch, advance_mu, ptr, uses_ddma)
@kernel
def playback_handle(self, handle):
"""Replays a handle obtained with :meth:`get_handle`. Using this function
is much faster than :meth:`playback` for replaying a set of traces repeatedly,
but puts the overhead of managing the handles on the programmer."""
-(epoch, advance_mu, ptr) = handle
+(epoch, advance_mu, ptr, uses_ddma) = handle
if self.epoch != epoch:
raise DMAError("Invalid handle")
-dma_playback(now_mu(), ptr)
+dma_playback(now_mu(), ptr, uses_ddma)
delay_mu(advance_mu)
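A hedged kernel-side sketch of recording and replaying a trace with distributed DMA enabled; the device names (core_dma, ttl0) and timings are illustrative:

# Hypothetical example, not part of this diff.
@kernel
def run(self):
    self.core.reset()
    with self.core_dma.record("pulses", enable_ddma=True):
        for _ in range(50):
            self.ttl0.pulse(100*ns)
            delay(100*ns)
    handle = self.core_dma.get_handle("pulses")
    self.core.break_realtime()
    self.core_dma.playback_handle(handle)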


@@ -91,6 +91,10 @@ class EdgeCounter:
self.channel = channel
self.counter_max = (1 << (gateware_width - 1)) - 1
+@staticmethod
+def get_rtio_channels(channel, **kwargs):
+return [(channel, None)]
@kernel
def gate_rising(self, duration):
"""Count rising edges for the given duration and request the total at


@@ -11,54 +11,99 @@ ZeroDivisionError = builtins.ZeroDivisionError
ValueError = builtins.ValueError
IndexError = builtins.IndexError
RuntimeError = builtins.RuntimeError
+AssertionError = builtins.AssertionError
class CoreException:
"""Information about an exception raised or passed through the core device."""
+def __init__(self, exceptions, exception_info, traceback, stack_pointers):
+self.exceptions = exceptions
+self.exception_info = exception_info
+self.traceback = list(traceback)
+self.stack_pointers = stack_pointers
-def __init__(self, name, message, params, traceback):
+first_exception = exceptions[0]
+name = first_exception[0]
if ':' in name:
exn_id, self.name = name.split(':', 2)
self.id = int(exn_id)
else:
self.id, self.name = 0, name
-self.message, self.params = message, params
-self.traceback = list(traceback)
+self.message = first_exception[1]
+self.params = first_exception[2]
def append_backtrace(self, record, inlined=False):
filename, line, column, function, address = record
stub_globals = {"__name__": filename, "__loader__": source_loader}
source_line = linecache.getline(filename, line, stub_globals)
indentation = re.search(r"^\s*", source_line).end()
if address is None:
formatted_address = ""
elif inlined:
formatted_address = " (inlined)"
else:
formatted_address = " (RA=+0x{:x})".format(address)
filename = filename.replace(artiq_dir, "<artiq>")
lines = []
if column == -1:
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
lines.append(" File \"{file}\", line {line}, in {function}{address}".
format(file=filename, line=line, function=function,
address=formatted_address))
else:
lines.append(" {}^".format(" " * (column - indentation)))
lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
lines.append(" File \"{file}\", line {line}, column {column},"
" in {function}{address}".
format(file=filename, line=line, column=column + 1,
function=function, address=formatted_address))
return lines
def single_traceback(self, exception_index):
# note that we insert in reversed order
lines = []
last_sp = 0
start_backtrace_index = self.exception_info[exception_index][1]
zipped = list(zip(self.traceback[start_backtrace_index:],
self.stack_pointers[start_backtrace_index:]))
exception = self.exceptions[exception_index]
name = exception[0]
message = exception[1]
params = exception[2]
if ':' in name:
exn_id, name = name.split(':', 2)
exn_id = int(exn_id)
else:
exn_id = 0
lines.append("{}({}): {}".format(name, exn_id, message.format(*params)))
zipped.append(((exception[3], exception[4], exception[5], exception[6],
None, []), None))
for ((filename, line, column, function, address, inlined), sp) in zipped:
# backtrace of nested exceptions may be discontinuous
# but the stack pointer must increase monotonically
if sp is not None and sp <= last_sp:
continue
last_sp = sp
for record in reversed(inlined):
lines += self.append_backtrace(record, True)
lines += self.append_backtrace((filename, line, column, function,
address))
lines.append("Traceback (most recent call first):")
return "\n".join(reversed(lines))
def __str__(self):
-lines = []
-lines.append("Core Device Traceback (most recent call last):")
-last_address = 0
-for (filename, line, column, function, address) in self.traceback:
-stub_globals = {"__name__": filename, "__loader__": source_loader}
-source_line = linecache.getline(filename, line, stub_globals)
-indentation = re.search(r"^\s*", source_line).end()
-if address is None:
-formatted_address = ""
-elif address == last_address:
-formatted_address = " (inlined)"
-else:
-formatted_address = " (RA=+0x{:x})".format(address)
-last_address = address
-filename = filename.replace(artiq_dir, "<artiq>")
-if column == -1:
-lines.append(" File \"{file}\", line {line}, in {function}{address}".
-format(file=filename, line=line, function=function,
-address=formatted_address))
-lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
-else:
-lines.append(" File \"{file}\", line {line}, column {column},"
-" in {function}{address}".
-format(file=filename, line=line, column=column + 1,
-function=function, address=formatted_address))
-lines.append(" {}".format(source_line.strip() if source_line else "<unknown>"))
-lines.append(" {}^".format(" " * (column - indentation)))
-lines.append("{}({}): {}".format(self.name, self.id,
-self.message.format(*self.params)))
-return "\n".join(lines)
+tracebacks = [self.single_traceback(i) for i in range(len(self.exceptions))]
+traceback_str = ('\n\nDuring handling of the above exception, ' +
+'another exception occurred:\n\n').join(tracebacks)
+return 'Core Device Traceback:\n' +\
+traceback_str +\
+'\n\nEnd of Core Device Traceback\n'
class InternalError(Exception):
@@ -103,8 +148,11 @@ class DMAError(Exception):
artiq_builtin = True
-class WatchdogExpired(Exception):
-"""Raised when a watchdog expires."""
+class SubkernelError(Exception):
+"""Raised when an operation regarding a subkernel is invalid
+or cannot be completed.
+"""
+artiq_builtin = True
class ClockFailure(Exception):

artiq/coredevice/fastino.py (new file, 304 lines)

@ -0,0 +1,304 @@
"""RTIO driver for the Fastino 32channel, 16 bit, 2.5 MS/s per channel,
streaming DAC.
"""
from numpy import int32, int64
from artiq.language.core import kernel, portable, delay, delay_mu
from artiq.coredevice.rtio import (rtio_output, rtio_output_wide,
rtio_input_data)
from artiq.language.units import ns
from artiq.language.types import TInt32, TList
class Fastino:
"""Fastino 32-channel, 16-bit, 2.5 MS/s per channel streaming DAC
The RTIO PHY supports staging DAC data before transmitting them by writing
to the DAC RTIO addresses. If a channel is not "held" (by setting its bit
using :meth:`set_hold`), the next frame will contain the update. For the
DACs held, the update is triggered explicitly by setting the corresponding
bit using :meth:`set_update`. Update is self-clearing. This enables atomic
DAC updates synchronized to a frame edge.
The `log2_width=0` RTIO layout uses one DAC channel per RTIO address and a
dense RTIO address space. The RTIO words are narrow (32 bit) and
few-channel updates are efficient. There is the least amount of DAC state
tracking in kernels, at the cost of more DMA and RTIO data.
The setting here and in the RTIO PHY (gateware) must match.
Other `log2_width` (up to `log2_width=5`) settings pack multiple
(in powers of two) DAC channels into one group and into one RTIO write.
The RTIO data width increases accordingly. The `log2_width`
LSBs of the RTIO address for a DAC channel write must be zero and the
address space is sparse. For `log2_width=5` the RTIO data is 512 bit wide.
If `log2_width` is zero, the :meth:`set_dac`/:meth:`set_dac_mu` interface
must be used. If non-zero, the :meth:`set_group`/:meth:`set_group_mu`
interface must be used.
:param channel: RTIO channel number
:param core_device: Core device name (default: "core")
:param log2_width: Width of DAC channel group (logarithm base 2).
Value must match the corresponding value in the RTIO PHY (gateware).
"""
kernel_invariants = {"core", "channel", "width", "t_frame"}
def __init__(self, dmgr, channel, core_device="core", log2_width=0):
self.channel = channel << 8
self.core = dmgr.get(core_device)
self.width = 1 << log2_width
# frame duration in mu (14 words each 7 clock cycles each 4 ns)
# self.core.seconds_to_mu(14*7*4*ns) # unfortunately this may round wrong
assert self.core.ref_period == 1*ns
self.t_frame = int64(14*7*4)
@staticmethod
def get_rtio_channels(channel, **kwargs):
return [(channel, None)]
@kernel
def init(self):
"""Initialize the device.
* disables RESET, DAC_CLR, enables AFE_PWR
* clears error counters, enables error counting
* turns LEDs off
* clears `hold` and `continuous` on all channels
* clears and resets interpolators to unit rate change on all
channels
It does not change set channel voltages and does not reset the PLLs or clock
domains.
Note: On Fastino gateware before v0.2 this may lead to 0 voltage being emitted
transiently.
"""
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=1)
delay_mu(self.t_frame)
self.set_cfg(reset=0, afe_power_down=0, dac_clr=0, clr_err=0)
delay_mu(self.t_frame)
self.set_continuous(0)
delay_mu(self.t_frame)
self.stage_cic(1)
delay_mu(self.t_frame)
self.apply_cic(0xffffffff)
delay_mu(self.t_frame)
self.set_leds(0)
delay_mu(self.t_frame)
self.set_hold(0)
delay_mu(self.t_frame)
@kernel
def write(self, addr, data):
"""Write data to a Fastino register.
:param addr: Address to write to.
:param data: Data to write.
"""
rtio_output(self.channel | addr, data)
@kernel
def read(self, addr):
"""Read from Fastino register.
TODO: untested
:param addr: Address to read from.
:return: The data read.
"""
raise NotImplementedError
# rtio_output(self.channel | addr | 0x80)
# return rtio_input_data(self.channel >> 8)
@kernel
def set_dac_mu(self, dac, data):
"""Write DAC data in machine units.
:param dac: DAC channel to write to (0-31).
:param data: DAC word to write, 16 bit unsigned integer, in machine
units.
"""
self.write(dac, data)
@kernel
def set_group_mu(self, dac: TInt32, data: TList(TInt32)):
"""Write a group of DAC channels in machine units.
:param dac: First channel in DAC channel group (0-31). The `log2_width`
LSBs must be zero.
:param data: List of DAC data pairs (2x16 bit unsigned) to write,
in machine units. Data exceeding group size is ignored.
If the list length is less than group size, the remaining
DAC channels within the group are cleared to 0 (machine units).
"""
if dac & (self.width - 1):
raise ValueError("Group index LSBs must be zero")
rtio_output_wide(self.channel | dac, data)
@portable
def voltage_to_mu(self, voltage):
"""Convert SI Volts to DAC machine units.
:param voltage: Voltage in SI Volts.
:return: DAC data word in machine units, 16 bit integer.
"""
data = int32(round((0x8000/10.)*voltage)) + int32(0x8000)
if data < 0 or data > 0xffff:
raise ValueError("DAC voltage out of bounds")
return data
@portable
def voltage_group_to_mu(self, voltage, data):
"""Convert SI Volts to packed DAC channel group machine units.
:param voltage: List of SI Volt voltages.
:param data: List of DAC channel data pairs to write to.
Half the length of `voltage`.
"""
for i in range(len(voltage)):
v = self.voltage_to_mu(voltage[i])
if i & 1:
v = data[i // 2] | (v << 16)
data[i // 2] = int32(v)
@kernel
def set_dac(self, dac, voltage):
"""Set DAC data to given voltage.
:param dac: DAC channel (0-31).
:param voltage: Desired output voltage.
"""
self.write(dac, self.voltage_to_mu(voltage))
@kernel
def set_group(self, dac, voltage):
"""Set DAC group data to given voltage.
:param dac: DAC channel (0-31).
:param voltage: Desired output voltage.
"""
data = [int32(0)] * (len(voltage) // 2)
self.voltage_group_to_mu(voltage, data)
self.set_group_mu(dac, data)
@kernel
def update(self, update):
"""Schedule channels for update.
:param update: Bit mask of channels to update (32 bit).
"""
self.write(0x20, update)
@kernel
def set_hold(self, hold):
"""Set channels to manual update.
:param hold: Bit mask of channels to hold (32 bit).
"""
self.write(0x21, hold)
@kernel
def set_cfg(self, reset=0, afe_power_down=0, dac_clr=0, clr_err=0):
"""Set configuration bits.
:param reset: Reset SPI PLL and SPI clock domain.
:param afe_power_down: Disable AFE power.
:param dac_clr: Assert all 32 DAC_CLR signals setting all DACs to
mid-scale (0 V).
:param clr_err: Clear error counters and PLL reset indicator.
This clears the sticky red error LED. Must be cleared to enable
error counting.
"""
self.write(0x22, (reset << 0) | (afe_power_down << 1) |
(dac_clr << 2) | (clr_err << 3))
@kernel
def set_leds(self, leds):
"""Set the green user-defined LEDs
:param leds: LED status, 8 bit integer each bit corresponding to one
green LED.
"""
self.write(0x23, leds)
@kernel
def set_continuous(self, channel_mask):
"""Enable continuous DAC updates on channels regardless of new data
being submitted.
"""
self.write(0x25, channel_mask)
@kernel
def stage_cic_mu(self, rate_mantissa, rate_exponent, gain_exponent):
"""Stage machine unit CIC interpolator configuration.
"""
if rate_mantissa < 0 or rate_mantissa >= 1 << 6:
raise ValueError("rate_mantissa out of bounds")
if rate_exponent < 0 or rate_exponent >= 1 << 4:
raise ValueError("rate_exponent out of bounds")
if gain_exponent < 0 or gain_exponent >= 1 << 6:
raise ValueError("gain_exponent out of bounds")
config = rate_mantissa | (rate_exponent << 6) | (gain_exponent << 10)
self.write(0x26, config)
@kernel
def stage_cic(self, rate) -> TInt32:
"""Compute and stage interpolator configuration.
This method approximates the desired interpolation rate using a 10 bit
floating point representation (6 bit mantissa, 4 bit exponent) and
then determines an optimal interpolation gain compensation exponent
to avoid clipping. Gains for rates that are powers of two are accurately
compensated. Other rates lead to overall less than unity gain (but more
than 0.5 gain).
The overall gain including gain compensation is
`actual_rate**order/2**ceil(log2(actual_rate**order))`
where `order = 3`.
Returns the actual interpolation rate.
"""
if rate <= 0 or rate > 1 << 16:
raise ValueError("rate out of bounds")
rate_mantissa = rate
rate_exponent = 0
while rate_mantissa > 1 << 6:
rate_exponent += 1
rate_mantissa >>= 1
order = 3
gain = 1
for i in range(order):
gain *= rate_mantissa
gain_exponent = 0
while gain > 1 << gain_exponent:
gain_exponent += 1
gain_exponent += order*rate_exponent
assert gain_exponent <= order*16
self.stage_cic_mu(rate_mantissa - 1, rate_exponent, gain_exponent)
return rate_mantissa << rate_exponent
@kernel
def apply_cic(self, channel_mask):
"""Apply the staged interpolator configuration on the specified channels.
Each Fastino channel starting with gateware v0.2 includes a fourth order
(cubic) CIC interpolator with variable rate change and variable output
gain compensation (see :meth:`stage_cic`).
Fastino gateware before v0.2 does not include the interpolators and the
methods affecting the CICs should not be used.
Channels using non-unity interpolation rate should have
continuous DAC updates enabled (see :meth:`set_continuous`) unless
their output is supposed to be constant.
This method resets and settles the affected interpolators. There will be
no output updates for the next `order = 3` input samples.
Affected channels will only accept one input sample per input sample
period. This method synchronizes the input sample period to the current
frame on the affected channels.
If application of new interpolator settings results in a change of the
overall gain, there will be a corresponding output step.
"""
self.write(0x27, channel_mask)
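A hedged usage sketch of the driver above; the device name, delays and voltages are illustrative:

# Hypothetical example, not part of this diff.
@kernel
def run(self):
    self.core.reset()
    self.fastino0.init()
    delay(10*us)
    self.fastino0.set_dac(0, 1.5)    # staged; sent with the next frame unless held
    delay(10*us)
    self.fastino0.set_dac(1, -2.0)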


@@ -25,6 +25,10 @@ class Grabber:
# ROI engine outputs for one video frame.
self.sentinel = int32(int64(2**count_width))
+@staticmethod
+def get_rtio_channels(channel_base, **kwargs):
+return [(channel_base, "ROI coordinates"), (channel_base + 1, "ROI mask")]
@kernel
def setup_roi(self, n, x0, y0, x1, y1):
"""


@@ -33,6 +33,11 @@ def i2c_read(busno: TInt32, ack: TBool) -> TInt32:
raise NotImplementedError("syscall not simulated")
+@syscall(flags={"nounwind", "nowrite"})
+def i2c_switch_select(busno: TInt32, address: TInt32, mask: TInt32) -> TNone:
+raise NotImplementedError("syscall not simulated")
@kernel
def i2c_poll(busno, busaddr):
"""Poll I2C device at address.
@@ -137,8 +142,10 @@ def i2c_read_many(busno, busaddr, addr, data):
i2c_stop(busno)
-class PCA9548:
-"""Driver for the PCA9548 I2C bus switch.
+class I2CSwitch:
+"""Driver for the I2C bus switch.
+PCA954X (or other) type detection is done by the CPU during I2C init.
I2C transactions not real-time, and are performed by the CPU without
involving RTIO.
@@ -151,25 +158,19 @@ class PCA9548:
self.busno = busno
self.address = address
-@kernel
-def select(self, mask):
-"""Enable/disable channels.
-:param mask: Bit mask of enabled channels
-"""
-i2c_write_byte(self.busno, self.address, mask)
@kernel
def set(self, channel):
"""Enable one channel.
:param channel: channel number (0-7)
"""
-self.select(1 << channel)
+i2c_switch_select(self.busno, self.address >> 1, 1 << channel)
@kernel
-def readback(self):
-return i2c_read_byte(self.busno, self.address)
+def unset(self):
+"""Disable output of the I2C switch.
+"""
+i2c_switch_select(self.busno, self.address >> 1, 0)
class TCA6424A:
@@ -207,3 +208,46 @@ class TCA6424A:
self._write24(0x8c, 0) # set all directions to output
self._write24(0x84, outputs_le) # set levels
class PCF8574A:
"""Driver for the PCF8574 I2C remote 8-bit I/O expander.
I2C transactions are not real-time, and are performed by the CPU without
involving RTIO.
"""
def __init__(self, dmgr, busno=0, address=0x7c, core_device="core"):
self.core = dmgr.get(core_device)
self.busno = busno
self.address = address
@kernel
def set(self, data):
"""Drive data on the quasi-bidirectional pins.
:param data: Pin data. High bits are weakly driven high
(and thus inputs), low bits are strongly driven low.
"""
i2c_start(self.busno)
try:
if not i2c_write(self.busno, self.address):
raise I2CError("PCF8574A failed to ack address")
if not i2c_write(self.busno, data):
raise I2CError("PCF8574A failed to ack data")
finally:
i2c_stop(self.busno)
@kernel
def get(self):
"""Retrieve quasi-bidirectional pin input data.
:return: Pin data
"""
i2c_start(self.busno)
ret = 0
try:
if not i2c_write(self.busno, self.address | 1):
raise I2CError("PCF8574A failed to ack address")
ret = i2c_read(self.busno, False)
finally:
i2c_stop(self.busno)
return ret


@ -0,0 +1,38 @@
from os import path
import json
from jsonschema import Draft7Validator, validators
def extend_with_default(validator_class):
validate_properties = validator_class.VALIDATORS["properties"]
def set_defaults(validator, properties, instance, schema):
for property, subschema in properties.items():
if "default" in subschema:
instance.setdefault(property, subschema["default"])
for error in validate_properties(
validator, properties, instance, schema,
):
yield error
return validators.extend(
validator_class, {"properties" : set_defaults},
)
schema_path = path.join(path.dirname(__file__), "coredevice_generic.schema.json")
with open(schema_path, "r") as f:
schema = json.load(f)
validator = extend_with_default(Draft7Validator)(schema)
def load(description_path):
with open(description_path, "r") as f:
result = json.load(f)
global validator
validator.validate(result)
if result["base"] != "use_drtio_role":
result["drtio_role"] = result["base"]
return result
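A usage sketch of the loader above; the module path and file name are assumptions for illustration (the gateware builders and artiq_ddb_template are the real consumers):

# Hypothetical example, not part of this diff.
from artiq.coredevice import jsondesc  # assumed module name for the loader above

description = jsondesc.load("kasli_example.json")
print(description["drtio_role"], len(description["peripherals"]))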


@@ -37,13 +37,17 @@ class KasliEEPROM:
@kernel
def select(self):
mask = 1 << self.port
-self.sw0.select(mask)
-self.sw1.select(mask >> 8)
+if self.port < 8:
+self.sw0.set(self.port)
+self.sw1.unset()
+else:
+self.sw0.unset()
+self.sw1.set(self.port - 8)
@kernel
def deselect(self):
-self.sw0.select(0)
-self.sw1.select(0)
+self.sw0.unset()
+self.sw1.unset()
@kernel
def write_i32(self, addr, value):

artiq/coredevice/mirny.py (new file, 169 lines)

@ -0,0 +1,169 @@
"""RTIO driver for Mirny (4 channel GHz PLLs)
"""
from artiq.language.core import kernel, delay, portable
from artiq.language.units import us
from numpy import int32
from artiq.coredevice import spi2 as spi
SPI_CONFIG = (
0 * spi.SPI_OFFLINE
| 0 * spi.SPI_END
| 0 * spi.SPI_INPUT
| 1 * spi.SPI_CS_POLARITY
| 0 * spi.SPI_CLK_POLARITY
| 0 * spi.SPI_CLK_PHASE
| 0 * spi.SPI_LSB_FIRST
| 0 * spi.SPI_HALF_DUPLEX
)
# SPI clock write and read dividers
SPIT_WR = 4
SPIT_RD = 16
SPI_CS = 1
WE = 1 << 24
# supported CPLD code version
PROTO_REV_MATCH = 0x0
class Mirny:
"""
Mirny PLL-based RF generator.
:param spi_device: SPI bus device
:param refclk: Reference clock (SMA, MMCX or on-board 100 MHz oscillator)
frequency in Hz
:param clk_sel: Reference clock selection.
Valid options are: "XO" - onboard crystal oscillator;
"SMA" - front-panel SMA connector; "MMCX" - internal MMCX connector.
Passing an integer writes it as ``clk_sel`` in the CPLD's register 1.
The effect depends on the hardware revision.
:param core_device: Core device name (default: "core")
"""
kernel_invariants = {"bus", "core", "refclk", "clk_sel_hw_rev"}
def __init__(self, dmgr, spi_device, refclk=100e6, clk_sel="XO", core_device="core"):
self.core = dmgr.get(core_device)
self.bus = dmgr.get(spi_device)
# reference clock frequency
self.refclk = refclk
if not (10 <= self.refclk / 1e6 <= 600):
raise ValueError("Invalid refclk")
# reference clock selection
try:
self.clk_sel_hw_rev = {
# clk source: [reserved, reserved, v1.1, v1.0]
"xo": [-1, -1, 0, 0],
"mmcx": [-1, -1, 3, 2],
"sma": [-1, -1, 2, 3],
}[clk_sel.lower()]
except AttributeError: # not a string, fallback to int
if clk_sel & 0x3 != clk_sel:
raise ValueError("Invalid clk_sel") from None
self.clk_sel_hw_rev = [clk_sel] * 4
except KeyError:
raise ValueError("Invalid clk_sel") from None
self.clk_sel = -1
# board hardware revision
self.hw_rev = 0 # v1.0: 3, v1.1: 2
# TODO: support clk_div on v1.0 boards
@kernel
def read_reg(self, addr):
"""Read a register"""
self.bus.set_config_mu(
SPI_CONFIG | spi.SPI_INPUT | spi.SPI_END, 24, SPIT_RD, SPI_CS
)
self.bus.write((addr << 25))
return self.bus.read() & int32(0xFFFF)
@kernel
def write_reg(self, addr, data):
"""Write a register"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24, SPIT_WR, SPI_CS)
self.bus.write((addr << 25) | WE | ((data & 0xFFFF) << 8))
@kernel
def init(self, blind=False):
"""
Initialize and detect Mirny.
Select the clock source based on the board's hardware revision.
Raise ValueError if the board's hardware revision is not supported.
:param blind: If True, do not verify presence and protocol compatibility
(when False, a ValueError is raised on failure).
"""
reg0 = self.read_reg(0)
self.hw_rev = reg0 & 0x3
if not blind:
if (reg0 >> 2) & 0x3 != PROTO_REV_MATCH:
raise ValueError("Mirny PROTO_REV mismatch")
delay(100 * us) # slack
# select clock source
self.clk_sel = self.clk_sel_hw_rev[self.hw_rev]
if self.clk_sel < 0:
raise ValueError("Hardware revision not supported")
self.write_reg(1, (self.clk_sel << 4))
delay(1000 * us)
@portable(flags={"fast-math"})
def att_to_mu(self, att):
"""Convert an attenuation setting in dB to machine units.
:param att: Attenuation setting in dB.
:return: Digital attenuation setting.
"""
code = int32(255) - int32(round(att * 8))
if code < 0 or code > 255:
raise ValueError("Invalid Mirny attenuation!")
return code
@kernel
def set_att_mu(self, channel, att):
"""Set digital step attenuator in machine units.
:param att: Attenuation setting, 8 bit digital.
"""
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 16, SPIT_WR, SPI_CS)
self.bus.write(((channel | 8) << 25) | (att << 16))
@kernel
def set_att(self, channel, att):
"""Set digital step attenuator in SI units.
This method will write the attenuator settings of the selected channel.
.. seealso:: :meth:`set_att_mu`
:param channel: Attenuator channel (0-3).
:param att: Attenuation setting in dB. Higher value is more
attenuation. Minimum attenuation is 0*dB, maximum attenuation is
31.5*dB.
"""
self.set_att_mu(channel, self.att_to_mu(att))
@kernel
def write_ext(self, addr, length, data, ext_div=SPIT_WR):
"""Perform SPI write to a prefixed address"""
self.bus.set_config_mu(SPI_CONFIG, 8, SPIT_WR, SPI_CS)
self.bus.write(addr << 25)
self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, length, ext_div, SPI_CS)
if length < 32:
data <<= 32 - length
self.bus.write(data)
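A hedged kernel-side sketch of using the CPLD driver above; the device name is illustrative, and per-channel PLL programming goes through the separate ADF5356 channel driver rather than this class:

# Hypothetical example, not part of this diff.
@kernel
def run(self):
    self.core.reset()
    self.mirny0_cpld.init()            # checks PROTO_REV and selects the reference clock
    self.mirny0_cpld.set_att(0, 10.0)  # 10 dB of attenuation on channel 0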


@ -1,47 +0,0 @@
from artiq.experiment import kernel
from artiq.coredevice.i2c import (
i2c_start, i2c_write, i2c_read, i2c_stop, I2CError)
class PCF8574A:
"""Driver for the PCF8574 I2C remote 8-bit I/O expander.
I2C transactions not real-time, and are performed by the CPU without
involving RTIO.
"""
def __init__(self, dmgr, busno=0, address=0x7c, core_device="core"):
self.core = dmgr.get(core_device)
self.busno = busno
self.address = address
@kernel
def set(self, data):
"""Drive data on the quasi-bidirectional pins.
:param data: Pin data. High bits are weakly driven high
(and thus inputs), low bits are strongly driven low.
"""
i2c_start(self.busno)
try:
if not i2c_write(self.busno, self.address):
raise I2CError("PCF8574A failed to ack address")
if not i2c_write(self.busno, data):
raise I2CError("PCF8574A failed to ack data")
finally:
i2c_stop(self.busno)
@kernel
def get(self):
"""Retrieve quasi-bidirectional pin input data.
:return: Pin data
"""
i2c_start(self.busno)
ret = 0
try:
if not i2c_write(self.busno, self.address | 1):
raise I2CError("PCF8574A failed to ack address")
ret = i2c_read(self.busno, False)
finally:
i2c_stop(self.busno)
return ret


@ -1,77 +0,0 @@
from .spr import mtspr, mfspr
from artiq.language.core import kernel
_MAX_SPRS_PER_GRP_BITS = 11
_SPRGROUP_PC = 7 << _MAX_SPRS_PER_GRP_BITS
_SPR_PCMR_CP = 0x00000001 # Counter present
_SPR_PCMR_CISM = 0x00000004 # Count in supervisor mode
_SPR_PCMR_CIUM = 0x00000008 # Count in user mode
_SPR_PCMR_LA = 0x00000010 # Load access event
_SPR_PCMR_SA = 0x00000020 # Store access event
_SPR_PCMR_IF = 0x00000040 # Instruction fetch event
_SPR_PCMR_DCM = 0x00000080 # Data cache miss event
_SPR_PCMR_ICM = 0x00000100 # Insn cache miss event
_SPR_PCMR_IFS = 0x00000200 # Insn fetch stall event
_SPR_PCMR_LSUS = 0x00000400 # LSU stall event
_SPR_PCMR_BS = 0x00000800 # Branch stall event
_SPR_PCMR_DTLBM = 0x00001000 # DTLB miss event
_SPR_PCMR_ITLBM = 0x00002000 # ITLB miss event
_SPR_PCMR_DDS = 0x00004000 # Data dependency stall event
_SPR_PCMR_WPE = 0x03ff8000 # Watchpoint events
@kernel(flags={"nowrite", "nounwind"})
def _PCCR(n):
return _SPRGROUP_PC + n
@kernel(flags={"nowrite", "nounwind"})
def _PCMR(n):
return _SPRGROUP_PC + 8 + n
class CorePCU:
"""Core device performance counter unit (PCU) access"""
def __init__(self, dmgr, core_device="core"):
self.core = dmgr.get(core_device)
@kernel
def start(self):
"""
Configure and clear the kernel CPU performance counters.
The eight counters are configured to count the following events:
* Load or store
* Instruction fetch
* Data cache miss
* Instruction cache miss
* Instruction fetch stall
* Load-store-unit stall
* Branch stall
* Data dependency stall
"""
for i in range(8):
if not mfspr(_PCMR(i)) & _SPR_PCMR_CP:
raise ValueError("counter not present")
mtspr(_PCMR(i), 0)
mtspr(_PCCR(i), 0)
mtspr(_PCMR(0), _SPR_PCMR_CISM | _SPR_PCMR_LA | _SPR_PCMR_SA)
mtspr(_PCMR(1), _SPR_PCMR_CISM | _SPR_PCMR_IF)
mtspr(_PCMR(2), _SPR_PCMR_CISM | _SPR_PCMR_DCM)
mtspr(_PCMR(3), _SPR_PCMR_CISM | _SPR_PCMR_ICM)
mtspr(_PCMR(4), _SPR_PCMR_CISM | _SPR_PCMR_IFS)
mtspr(_PCMR(5), _SPR_PCMR_CISM | _SPR_PCMR_LSUS)
mtspr(_PCMR(6), _SPR_PCMR_CISM | _SPR_PCMR_BS)
mtspr(_PCMR(7), _SPR_PCMR_CISM | _SPR_PCMR_DDS)
@kernel
def get(self, r):
"""
Read the performance counters and store the counts in the
array provided.
:param list[int] r: array to store the counter values
"""
for i in range(8):
r[i] = mfspr(_PCCR(i))

artiq/coredevice/phaser.py (new file, 1638 lines; diff too large to display)


@ -1,92 +0,0 @@
from collections import defaultdict
import subprocess
class Symbolizer:
def __init__(self, binary, triple, demangle=True):
cmdline = [
triple + "-addr2line", "--exe=" + binary,
"--addresses", "--functions", "--inlines"
]
if demangle:
cmdline.append("--demangle=rust")
self._addr2line = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
universal_newlines=True)
def symbolize(self, addr):
self._addr2line.stdin.write("0x{:08x}\n0\n".format(addr))
self._addr2line.stdin.flush()
self._addr2line.stdout.readline() # 0x[addr]
result = []
while True:
function = self._addr2line.stdout.readline().rstrip()
# check for end marker
if function == "0x00000000": # 0x00000000
self._addr2line.stdout.readline() # ??
self._addr2line.stdout.readline() # ??:0
return result
file, line = self._addr2line.stdout.readline().rstrip().split(":")
result.append((function, file, line, addr))
class CallgrindWriter:
def __init__(self, output, binary, triple, compression=True, demangle=True):
self._output = output
self._binary = binary
self._current = defaultdict(lambda: None)
self._ids = defaultdict(lambda: {})
self._compression = compression
self._symbolizer = Symbolizer(binary, triple, demangle=demangle)
def _write(self, fmt, *args, **kwargs):
self._output.write(fmt.format(*args, **kwargs))
self._output.write("\n")
def _spec(self, spec, value):
if self._current[spec] == value:
return
self._current[spec] = value
if not self._compression or value == "??":
self._write("{}={}", spec, value)
return
spec_ids = self._ids[spec]
if value in spec_ids:
self._write("{}=({})", spec, spec_ids[value])
else:
spec_ids[value] = len(spec_ids) + 1
self._write("{}=({}) {}", spec, spec_ids[value], value)
def header(self):
self._write("# callgrind format")
self._write("version: 1")
self._write("creator: ARTIQ")
self._write("positions: instr line")
self._write("events: Hits")
self._write("")
self._spec("ob", self._binary)
self._spec("cob", self._binary)
def hit(self, addr, count):
for function, file, line, addr in self._symbolizer.symbolize(addr):
self._spec("fl", file)
self._spec("fn", function)
self._write("0x{:08x} {} {}", addr, line, count)
def edge(self, caller, callee, count):
edges = self._symbolizer.symbolize(callee) + self._symbolizer.symbolize(caller)
for (callee, caller) in zip(edges, edges[1:]):
function, file, line, addr = callee
self._spec("cfl", file)
self._spec("cfn", function)
self._write("calls={} 0x{:08x} {}", count, addr, line)
function, file, line, addr = caller
self._spec("fl", file)
self._spec("fn", function)
self._write("0x{:08x} {} {}", addr, line, count)


@ -15,24 +15,26 @@ SPI_CS_PGIA = 1 # separate SPI bus, CS used as RCLK
 @portable
-def adc_mu_to_volt(data, gain=0):
+def adc_mu_to_volt(data, gain=0, corrected_fs=True):
     """Convert ADC data in machine units to Volts.

     :param data: 16 bit signed ADC word
     :param gain: PGIA gain setting (0: 1, ..., 3: 1000)
+    :param corrected_fs: use corrected ADC FS reference.
+        Should be True for Samplers' revisions after v2.1. False for v2.1 and earlier.
     :return: Voltage in Volts
     """
     if gain == 0:
-        volt_per_lsb = 20./(1 << 16)
+        volt_per_lsb = 20.48 / (1 << 16) if corrected_fs else 20. / (1 << 16)
     elif gain == 1:
-        volt_per_lsb = 2./(1 << 16)
+        volt_per_lsb = 2.048 / (1 << 16) if corrected_fs else 2. / (1 << 16)
     elif gain == 2:
-        volt_per_lsb = .2/(1 << 16)
+        volt_per_lsb = .2048 / (1 << 16) if corrected_fs else .2 / (1 << 16)
     elif gain == 3:
-        volt_per_lsb = .02/(1 << 16)
+        volt_per_lsb = 0.02048 / (1 << 16) if corrected_fs else .02 / (1 << 16)
     else:
         raise ValueError("invalid gain")
-    return data*volt_per_lsb
+    return data * volt_per_lsb


 class Sampler:
@ -48,12 +50,13 @@ class Sampler:
     :param gains: Initial value for PGIA gains shift register
         (default: 0x0000). Knowledge of this state is not transferred
         between experiments.
+    :param hw_rev: Sampler's hardware revision string (default 'v2.2')
     :param core_device: Core device name
     """
-    kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div"}
+    kernel_invariants = {"bus_adc", "bus_pgia", "core", "cnv", "div", "corrected_fs"}

     def __init__(self, dmgr, spi_adc_device, spi_pgia_device, cnv_device,
-                 div=8, gains=0x0000, core_device="core"):
+                 div=8, gains=0x0000, hw_rev="v2.2", core_device="core"):
         self.bus_adc = dmgr.get(spi_adc_device)
         self.bus_adc.update_xfer_duration_mu(div, 32)
         self.bus_pgia = dmgr.get(spi_pgia_device)
@ -62,6 +65,11 @@ class Sampler:
         self.cnv = dmgr.get(cnv_device)
         self.div = div
         self.gains = gains
+        self.corrected_fs = self.use_corrected_fs(hw_rev)
+
+    @staticmethod
+    def use_corrected_fs(hw_rev):
+        return hw_rev != "v2.1"

     @kernel
     def init(self):
@ -144,4 +152,4 @@ class Sampler:
         for i in range(n):
             channel = i + 8 - len(data)
             gain = (self.gains >> (channel*2)) & 0b11
-            data[i] = adc_mu_to_volt(adc_data[i], gain)
+            data[i] = adc_mu_to_volt(adc_data[i], gain, self.corrected_fs)
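
For concreteness, a small host-side check of the corrected full-scale factor introduced above, followed by a hypothetical device database fragment that keeps the old 20 V reference for a v2.1 Sampler. All device names are placeholders, not part of this diff.

from artiq.coredevice.sampler import adc_mu_to_volt

# At PGIA gain setting 0 (gain 1), one LSB is 20.48 V / 2**16 with the
# corrected full scale and 20 V / 2**16 without it.
print(adc_mu_to_volt(1 << 15, gain=0, corrected_fs=True))   # 10.24
print(adc_mu_to_volt(1 << 15, gain=0, corrected_fs=False))  # 10.0

# Hypothetical device_db entry passing the new hw_rev argument.
device_db = {
    "sampler0": {
        "type": "local",
        "module": "artiq.coredevice.sampler",
        "class": "Sampler",
        "arguments": {
            "spi_adc_device": "spi_sampler0_adc",
            "spi_pgia_device": "spi_sampler0_pgia",
            "cnv_device": "ttl_sampler0_cnv",
            "hw_rev": "v2.1",
        },
    },
}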


@ -1,372 +0,0 @@
"""
Driver for the Smart Arbitrary Waveform Generator (SAWG) on RTIO.
The SAWG is an "improved DDS" built in gateware and interfacing to
high-speed DACs.
Output event replacement is supported except on the configuration channel.
"""
from artiq.language.types import TInt32, TFloat
from numpy import int32, int64
from artiq.language.core import kernel
from artiq.coredevice.spline import Spline
from artiq.coredevice.rtio import rtio_output
# sawg.Config addresses
_SAWG_DIV = 0
_SAWG_CLR = 1
_SAWG_IQ_EN = 2
# _SAWF_PAD = 3 # reserved
_SAWG_OUT_MIN = 4
_SAWG_OUT_MAX = 5
_SAWG_DUC_MIN = 6
_SAWG_DUC_MAX = 7
class Config:
"""SAWG configuration.
Exposes the configurable quantities of a single SAWG channel.
Access to the configuration registers for a SAWG channel can not
be concurrent. There must be at least :attr:`_rtio_interval` machine
units of delay between accesses. Replacement is not supported and will
lead to an ``RTIOCollision`` as this is likely a programming error.
All methods therefore advance the timeline by the duration of one
configuration register transfer.
:param channel: RTIO channel number of the channel.
:param core: Core device.
"""
kernel_invariants = {"channel", "core", "_out_scale", "_duc_scale",
"_rtio_interval"}
def __init__(self, channel, core, cordic_gain=1.):
self.channel = channel
self.core = core
# normalized DAC output
self._out_scale = (1 << 15) - 1.
# normalized DAC output including DUC cordic gain
self._duc_scale = self._out_scale/cordic_gain
# configuration channel access interval
self._rtio_interval = int64(3*self.core.ref_multiplier)
@kernel
def set_div(self, div: TInt32, n: TInt32=0):
"""Set the spline evolution divider and current counter value.
The divider and the spline evolution are synchronized across all
spline channels within a SAWG channel. The DDS/DUC phase accumulators
always evolve at full speed.
.. note:: The spline evolution divider has not been tested extensively
and is currently considered a technological preview only.
:param div: Spline evolution divider, such that
``t_sawg_spline/t_rtio_coarse = div + 1``. Default: ``0``.
:param n: Current value of the counter. Default: ``0``.
"""
rtio_output((self.channel << 8) | _SAWG_DIV, div | (n << 16))
delay_mu(self._rtio_interval)
@kernel
def set_clr(self, clr0: TInt32, clr1: TInt32, clr2: TInt32):
"""Set the accumulator clear mode for the three phase accumulators.
When the ``clr`` bit for a given DDS/DUC phase accumulator is
set, that phase accumulator will be cleared with every phase offset
RTIO command and the output phase of the DDS/DUC will be
exactly the phase RTIO value ("absolute phase update mode").
.. math::
q^\prime(t) = p^\prime + (t - t^\prime) f^\prime
In turn, when the bit is cleared, the phase RTIO channels
determine a phase offset to the current (carrier-) value of the
DDS/DUC phase accumulator. This "relative phase update mode" is
sometimes also called continuous phase mode.
.. math::
q^\prime(t) = q(t^\prime) + (p^\prime - p) +
(t - t^\prime) f^\prime
Where:
* :math:`q`, :math:`q^\prime`: old/new phase accumulator
* :math:`p`, :math:`p^\prime`: old/new phase offset
* :math:`f^\prime`: new frequency
* :math:`t^\prime`: timestamp of setting new :math:`p`, :math:`f`
* :math:`t`: running time
:param clr0: Auto-clear phase accumulator of the ``phase0``/
``frequency0`` DUC. Default: ``True``
:param clr1: Auto-clear phase accumulator of the ``phase1``/
``frequency1`` DDS. Default: ``True``
:param clr2: Auto-clear phase accumulator of the ``phase2``/
``frequency2`` DDS. Default: ``True``
"""
rtio_output((self.channel << 8) | _SAWG_CLR, clr0 |
(clr1 << 1) | (clr2 << 2))
delay_mu(self._rtio_interval)
@kernel
def set_iq_en(self, i_enable: TInt32, q_enable: TInt32):
"""Enable I/Q data on this DAC channel.
Every pair of SAWG channels forms a buddy pair.
The ``iq_en`` configuration controls which DDS data is emitted to the
DACs.
Refer to the documentation of :class:`SAWG` for a mathematical
description of ``i_enable`` and ``q_enable``.
.. note:: Quadrature data from the buddy channel is currently
a technological preview only. The data is ignored in the SAWG
gateware and not added to the DAC output.
This is equivalent to the ``q_enable`` switch always being ``0``.
:param i_enable: Controls adding the in-phase
DUC-DDS data of *this* SAWG channel to *this* DAC channel.
Default: ``1``.
:param q_enable: controls adding the quadrature
DUC-DDS data of this SAWG's *buddy* channel to *this* DAC
channel. Default: ``0``.
"""
rtio_output((self.channel << 8) | _SAWG_IQ_EN, i_enable |
(q_enable << 1))
delay_mu(self._rtio_interval)
@kernel
def set_duc_max_mu(self, limit: TInt32):
"""Set the digital up-converter (DUC) I and Q data summing junctions
upper limit. In machine units.
The default limits are chosen to reach maximum and minimum DAC output
amplitude.
For a description of the limiter functions in normalized units see:
.. seealso:: :meth:`set_duc_max`
"""
rtio_output((self.channel << 8) | _SAWG_DUC_MAX, limit)
delay_mu(self._rtio_interval)
@kernel
def set_duc_min_mu(self, limit: TInt32):
""".. seealso:: :meth:`set_duc_max_mu`"""
rtio_output((self.channel << 8) | _SAWG_DUC_MIN, limit)
delay_mu(self._rtio_interval)
@kernel
def set_out_max_mu(self, limit: TInt32):
""".. seealso:: :meth:`set_duc_max_mu`"""
rtio_output((self.channel << 8) | _SAWG_OUT_MAX, limit)
delay_mu(self._rtio_interval)
@kernel
def set_out_min_mu(self, limit: TInt32):
""".. seealso:: :meth:`set_duc_max_mu`"""
rtio_output((self.channel << 8) | _SAWG_OUT_MIN, limit)
delay_mu(self._rtio_interval)
@kernel
def set_duc_max(self, limit: TFloat):
"""Set the digital up-converter (DUC) I and Q data summing junctions
upper limit.
Each of the three summing junctions has a saturating adder with
configurable upper and lower limits. The three summing junctions are:
* At the in-phase input to the ``phase0``/``frequency0`` fast DUC,
after the anti-aliasing FIR filter.
* At the quadrature input to the ``phase0``/``frequency0``
fast DUC, after the anti-aliasing FIR filter. The in-phase and
quadrature data paths both use the same limits.
* Before the DAC, where the following three data streams
are added together:
* the output of the ``offset`` spline,
* (optionally, depending on ``i_enable``) the in-phase output
of the ``phase0``/``frequency0`` fast DUC, and
* (optionally, depending on ``q_enable``) the quadrature
output of the ``phase0``/``frequency0`` fast DUC of the
buddy channel.
Refer to the documentation of :class:`SAWG` for a mathematical
description of the summing junctions.
:param limit: Limit value ``[-1, 1]``. The output of the limiter will
never exceed this limit. The default limits are the full range
``[-1, 1]``.
.. seealso::
* :meth:`set_duc_max`: Upper limit of the in-phase and quadrature
inputs to the DUC.
* :meth:`set_duc_min`: Lower limit of the in-phase and quadrature
inputs to the DUC.
* :meth:`set_out_max`: Upper limit of the DAC output.
* :meth:`set_out_min`: Lower limit of the DAC output.
"""
self.set_duc_max_mu(int32(round(limit*self._duc_scale)))
@kernel
def set_duc_min(self, limit: TFloat):
""".. seealso:: :meth:`set_duc_max`"""
self.set_duc_min_mu(int32(round(limit*self._duc_scale)))
@kernel
def set_out_max(self, limit: TFloat):
""".. seealso:: :meth:`set_duc_max`"""
self.set_out_max_mu(int32(round(limit*self._out_scale)))
@kernel
def set_out_min(self, limit: TFloat):
""".. seealso:: :meth:`set_duc_max`"""
self.set_out_min_mu(int32(round(limit*self._out_scale)))
class SAWG:
"""Smart arbitrary waveform generator channel.
The channel is parametrized as: ::
oscillators = exp(2j*pi*(frequency0*t + phase0))*(
amplitude1*exp(2j*pi*(frequency1*t + phase1)) +
amplitude2*exp(2j*pi*(frequency2*t + phase2)))
output = (offset +
i_enable*Re(oscillators) +
q_enable*Im(buddy_oscillators))
This parametrization can be viewed as two complex (quadrature) oscillators
(``frequency1``/``phase1`` and ``frequency2``/``phase2``) that are
executing and sampling at the coarse RTIO frequency. They can represent
frequencies within the first Nyquist zone from ``-f_rtio_coarse/2`` to
``f_rtio_coarse/2``.
.. note:: The coarse RTIO frequency ``f_rtio_coarse`` is the inverse of
``ref_period*multiplier``. Both are arguments of the ``Core`` device,
specified in the device database ``device_db.py``.
The sum of their outputs is then interpolated by a factor of
:attr:`parallelism` (2, 4, 8 depending on the bitstream) using a
finite-impulse-response (FIR) anti-aliasing filter (more accurately
a half-band filter).
The filter is followed by a configurable saturating limiter.
After the limiter, the data is shifted in frequency using a complex
digital up-converter (DUC, ``frequency0``/``phase0``) running at
:attr:`parallelism` times the coarse RTIO frequency. The first Nyquist
zone of the DUC extends from ``-f_rtio_coarse*parallelism/2`` to
``f_rtio_coarse*parallelism/2``. Other Nyquist zones are usable depending
on the interpolation/modulation options configured in the DAC.
The real/in-phase data after digital up-conversion can be offset using
another spline interpolator ``offset``.
The ``i_enable``/``q_enable`` switches enable emission of quadrature
signals for later analog quadrature mixing distinguishing upper and lower
sidebands and thus doubling the bandwidth. They can also be used to emit
four-tone signals.
.. note:: Quadrature data from the buddy channel is currently
ignored in the SAWG gateware and not added to the DAC output.
This is equivalent to the ``q_enable`` switch always being ``0``.
The configuration channel and the nine
:class:`artiq.coredevice.spline.Spline` interpolators are accessible as
attributes:
* :attr:`config`: :class:`Config`
* :attr:`offset`, :attr:`amplitude1`, :attr:`amplitude2`: in units
of full scale
* :attr:`phase0`, :attr:`phase1`, :attr:`phase2`: in units of turns
* :attr:`frequency0`, :attr:`frequency1`, :attr:`frequency2`: in units
of Hz
.. note:: The latencies (pipeline depths) of the nine data channels (i.e.
all except :attr:`config`) are matched. Equivalent channels (e.g.
:attr:`phase1` and :attr:`phase2`) are exactly matched. Channels of
different type or functionality (e.g. :attr:`offset` vs
:attr:`amplitude1`, DDS vs DUC, :attr:`phase0` vs :attr:`phase1`) are
only matched to within one coarse RTIO cycle.
:param channel_base: RTIO channel number of the first channel (amplitude).
The configuration channel and frequency/phase/amplitude channels are
then assumed to be successive channels.
:param parallelism: Number of output samples per coarse RTIO clock cycle.
:param core_device: Name of the core device that this SAWG is on.
"""
kernel_invariants = {"channel_base", "core", "parallelism",
"amplitude1", "frequency1", "phase1",
"amplitude2", "frequency2", "phase2",
"frequency0", "phase0", "offset"}
def __init__(self, dmgr, channel_base, parallelism, core_device="core"):
self.core = dmgr.get(core_device)
self.channel_base = channel_base
self.parallelism = parallelism
width = 16
time_width = 16
cordic_gain = 1.646760258057163 # Cordic(width=16, guard=None).gain
head_room = 1.001
self.config = Config(channel_base, self.core, cordic_gain)
self.offset = Spline(width, time_width, channel_base + 1,
self.core, 2.*head_room)
self.amplitude1 = Spline(width, time_width, channel_base + 2,
self.core, 2*head_room*cordic_gain**2)
self.frequency1 = Spline(3*width, time_width, channel_base + 3,
self.core, 1/self.core.coarse_ref_period)
self.phase1 = Spline(width, time_width, channel_base + 4,
self.core, 1.)
self.amplitude2 = Spline(width, time_width, channel_base + 5,
self.core, 2*head_room*cordic_gain**2)
self.frequency2 = Spline(3*width, time_width, channel_base + 6,
self.core, 1/self.core.coarse_ref_period)
self.phase2 = Spline(width, time_width, channel_base + 7,
self.core, 1.)
self.frequency0 = Spline(2*width, time_width, channel_base + 8,
self.core,
parallelism/self.core.coarse_ref_period)
self.phase0 = Spline(width, time_width, channel_base + 9,
self.core, 1.)
@kernel
def reset(self):
"""Re-establish initial conditions.
This clears all spline interpolators, accumulators and configuration
settings.
This method advances the timeline by the time required to perform all
7 writes to the configuration channel, plus 9 coarse RTIO cycles.
"""
self.config.set_div(0, 0)
self.config.set_clr(1, 1, 1)
self.config.set_iq_en(1, 0)
self.config.set_duc_min(-1.)
self.config.set_duc_max(1.)
self.config.set_out_min(-1.)
self.config.set_out_max(1.)
self.frequency0.set_mu(0)
coarse_cycle = int64(self.core.ref_multiplier)
delay_mu(coarse_cycle)
self.frequency1.set_mu(0)
delay_mu(coarse_cycle)
self.frequency2.set_mu(0)
delay_mu(coarse_cycle)
self.phase0.set_mu(0)
delay_mu(coarse_cycle)
self.phase1.set_mu(0)
delay_mu(coarse_cycle)
self.phase2.set_mu(0)
delay_mu(coarse_cycle)
self.amplitude1.set_mu(0)
delay_mu(coarse_cycle)
self.amplitude2.set_mu(0)
delay_mu(coarse_cycle)
self.offset.set_mu(0)
delay_mu(coarse_cycle)
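
For context, a minimal single-tone sketch against the SAWG driver removed above. "sawg0" is a placeholder device name and the code assumes gateware that still provides SAWG channels.

from artiq.experiment import *


class SAWGSingleTone(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("sawg0")

    @kernel
    def run(self):
        self.core.reset()
        delay(1*ms)
        self.sawg0.reset()                 # clear splines and configuration
        delay(10*us)
        self.sawg0.frequency0.set(10*MHz)  # DUC frequency
        self.sawg0.frequency1.set(1*MHz)   # first DDS oscillator
        self.sawg0.amplitude1.set(0.5)     # in units of full scale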


@ -1,36 +0,0 @@
from artiq.language.core import kernel, delay
from artiq.language.units import us
class ShiftReg:
"""Driver for shift registers/latch combos connected to TTLs"""
kernel_invariants = {"dt", "n"}
def __init__(self, dmgr, clk, ser, latch, n=32, dt=10*us):
self.core = dmgr.get("core")
self.clk = dmgr.get(clk)
self.ser = dmgr.get(ser)
self.latch = dmgr.get(latch)
self.n = n
self.dt = dt
@kernel
def set(self, data):
"""Sets the values of the latch outputs. This does not
advance the timeline and the waveform is generated before
`now`."""
delay(-2*(self.n + 1)*self.dt)
for i in range(self.n):
if (data >> (self.n-i-1)) & 1 == 0:
self.ser.off()
else:
self.ser.on()
self.clk.off()
delay(self.dt)
self.clk.on()
delay(self.dt)
self.clk.off()
self.latch.on()
delay(self.dt)
self.latch.off()
delay(self.dt)
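
A short usage sketch for the ShiftReg driver removed above; "shift_reg" is a placeholder device name. Since set() emits its waveform before `now`, the caller needs enough slack on the timeline.

from artiq.experiment import *


class LatchWrite(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("shift_reg")  # ShiftReg(clk=..., ser=..., latch=...)

    @kernel
    def run(self):
        self.core.reset()
        delay(2*ms)              # slack: the bit pattern is clocked out before now
        self.shift_reg.set(0xdeadbeef)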


@ -0,0 +1,623 @@
from artiq.language.core import *
from artiq.language.types import *
from artiq.coredevice.rtio import rtio_output, rtio_input_data
from artiq.coredevice import spi2 as spi
from artiq.language.units import us
@portable
def shuttler_volt_to_mu(volt):
"""Return the equivalent DAC code. Valid input range is from -10 to
10 - LSB.
"""
return round((1 << 16) * (volt / 20.0)) & 0xffff
class Config:
"""Shuttler configuration registers interface.
The configuration registers control waveform phase auto-clear, and pre-DAC
gain & offset values for calibration with ADC on the Shuttler AFE card.
To find the calibrated DAC code, the Shuttler Core first multiplies the
output data with pre-DAC gain, then adds the offset.
.. note::
The DAC code is capped at 0x7fff and 0x8000.
:param channel: RTIO channel number of this interface.
:param core_device: Core device name.
"""
kernel_invariants = {
"core", "channel", "target_base", "target_read",
"target_gain", "target_offset", "target_clr"
}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_base = channel << 8
self.target_read = 1 << 6
self.target_gain = 0 * (1 << 4)
self.target_offset = 1 * (1 << 4)
self.target_clr = 1 * (1 << 5)
@kernel
def set_clr(self, clr):
"""Set/Unset waveform phase clear bits.
Each bit corresponds to a Shuttler waveform generator core. Setting a
clear bit forces the Shuttler Core to clear the phase accumulator on
waveform trigger (See :class:`Trigger` for the trigger method).
Otherwise, the phase accumulator increments from its original value.
:param clr: Waveform phase clear bits. The MSB corresponds to Channel
15, LSB corresponds to Channel 0.
"""
rtio_output(self.target_base | self.target_clr, clr)
@kernel
def set_gain(self, channel, gain):
"""Set the 16-bits pre-DAC gain register of a Shuttler Core channel.
The `gain` parameter represents the decimal portion of the gain
factor. The MSB represents 0.5 and the sign bit. Hence, the valid
total gain value (1 +/- 0.gain) ranges from 0.5 to 1.5 - LSB.
:param channel: Shuttler Core channel to be configured.
:param gain: Shuttler Core channel gain.
"""
rtio_output(self.target_base | self.target_gain | channel, gain)
@kernel
def get_gain(self, channel):
"""Return the pre-DAC gain value of a Shuttler Core channel.
:param channel: The Shuttler Core channel.
:return: Pre-DAC gain value. See :meth:`set_gain`.
"""
rtio_output(self.target_base | self.target_gain |
self.target_read | channel, 0)
return rtio_input_data(self.channel)
@kernel
def set_offset(self, channel, offset):
"""Set the 16-bits pre-DAC offset register of a Shuttler Core channel.
.. seealso::
:meth:`shuttler_volt_to_mu`
:param channel: Shuttler Core channel to be configured.
:param offset: Shuttler Core channel offset.
"""
rtio_output(self.target_base | self.target_offset | channel, offset)
@kernel
def get_offset(self, channel):
"""Return the pre-DAC offset value of a Shuttler Core channel.
:param channel: The Shuttler Core channel.
:return: Pre-DAC offset value. See :meth:`set_offset`.
"""
rtio_output(self.target_base | self.target_offset |
self.target_read | channel, 0)
return rtio_input_data(self.channel)
class DCBias:
"""Shuttler Core cubic DC-bias spline.
A Shuttler channel can generate a waveform `w(t)` that is the sum of a
cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
This class controls the cubic spline `a(t)`, in which
.. math::
a(t) = p_0 + p_1t + \\frac{p_2t^2}{2} + \\frac{p_3t^3}{6}
And `a(t)` is in Volt.
:param channel: RTIO channel number of this DC-bias spline interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def set_waveform(self, a0: TInt32, a1: TInt32, a2: TInt64, a3: TInt64):
"""Set the DC-bias spline waveform.
Given `a(t)` as defined in :class:`DCBias`, the coefficients should be
configured by the following formulae.
.. math::
T &= 8*10^{-9}
a_0 &= p_0
a_1 &= p_1T + \\frac{p_2T^2}{2} + \\frac{p_3T^3}{6}
a_2 &= p_2T^2 + p_3T^3
a_3 &= p_3T^3
:math:`a_0`, :math:`a_1`, :math:`a_2` and :math:`a_3` are 16, 32, 48
and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
machine unit conversion.
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
:param a0: The :math:`a_0` coefficient in machine unit.
:param a1: The :math:`a_1` coefficient in machine unit.
:param a2: The :math:`a_2` coefficient in machine unit.
:param a3: The :math:`a_3` coefficient in machine unit.
"""
coef_words = [
a0,
a1,
a1 >> 16,
a2 & 0xFFFF,
(a2 >> 16) & 0xFFFF,
(a2 >> 32) & 0xFFFF,
a3 & 0xFFFF,
(a3 >> 16) & 0xFFFF,
(a3 >> 32) & 0xFFFF,
]
for i in range(len(coef_words)):
rtio_output(self.target_o | i, coef_words[i])
delay_mu(int64(self.core.ref_multiplier))
class DDS:
"""Shuttler Core DDS spline.
A Shuttler channel can generate a waveform `w(t)` that is the sum of a
cubic spline `a(t)` and a sinusoid modulated in amplitude by a cubic
spline `b(t)` and in phase/frequency by a quadratic spline `c(t)`, where
.. math::
w(t) = a(t) + b(t) * cos(c(t))
And `t` corresponds to time in seconds.
This class controls the cubic spline `b(t)` and quadratic spline `c(t)`,
in which
.. math::
b(t) &= g * (q_0 + q_1t + \\frac{q_2t^2}{2} + \\frac{q_3t^3}{6})
c(t) &= r_0 + r_1t + \\frac{r_2t^2}{2}
And `b(t)` is in Volt, `c(t)` is in number of turns. Note that `b(t)`
contributes to a constant gain of :math:`g=1.64676`.
:param channel: RTIO channel number of this DC-bias spline interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def set_waveform(self, b0: TInt32, b1: TInt32, b2: TInt64, b3: TInt64,
c0: TInt32, c1: TInt32, c2: TInt32):
"""Set the DDS spline waveform.
Given `b(t)` and `c(t)` as defined in :class:`DDS`, the coefficients
should be configured by the following formulae.
.. math::
T &= 8*10^{-9}
b_0 &= q_0
b_1 &= q_1T + \\frac{q_2T^2}{2} + \\frac{q_3T^3}{6}
b_2 &= q_2T^2 + q_3T^3
b_3 &= q_3T^3
c_0 &= r_0
c_1 &= r_1T + \\frac{r_2T^2}{2}
c_2 &= r_2T^2
:math:`b_0`, :math:`b_1`, :math:`b_2` and :math:`b_3` are 16, 32, 48
and 48 bits in width respectively. See :meth:`shuttler_volt_to_mu` for
machine unit conversion. :math:`c_0`, :math:`c_1` and :math:`c_2` are
16, 32 and 32 bits in width respectively.
Note: The waveform is not updated to the Shuttler Core until
triggered. See :class:`Trigger` for the update triggering mechanism.
:param b0: The :math:`b_0` coefficient in machine unit.
:param b1: The :math:`b_1` coefficient in machine unit.
:param b2: The :math:`b_2` coefficient in machine unit.
:param b3: The :math:`b_3` coefficient in machine unit.
:param c0: The :math:`c_0` coefficient in machine unit.
:param c1: The :math:`c_1` coefficient in machine unit.
:param c2: The :math:`c_2` coefficient in machine unit.
"""
coef_words = [
b0,
b1,
b1 >> 16,
b2 & 0xFFFF,
(b2 >> 16) & 0xFFFF,
(b2 >> 32) & 0xFFFF,
b3 & 0xFFFF,
(b3 >> 16) & 0xFFFF,
(b3 >> 32) & 0xFFFF,
c0,
c1,
c1 >> 16,
c2,
c2 >> 16,
]
for i in range(len(coef_words)):
rtio_output(self.target_o | i, coef_words[i])
delay_mu(int64(self.core.ref_multiplier))
class Trigger:
"""Shuttler Core spline coefficients update trigger.
:param channel: RTIO channel number of the trigger interface.
:param core_device: Core device name.
"""
kernel_invariants = {"core", "channel", "target_o"}
def __init__(self, dmgr, channel, core_device="core"):
self.core = dmgr.get(core_device)
self.channel = channel
self.target_o = channel << 8
@kernel
def trigger(self, trig_out):
"""Triggers coefficient update of (a) Shuttler Core channel(s).
Each bit corresponds to a Shuttler waveform generator core. Setting
`trig_out` bits commits the pending coefficient update (from
`set_waveform` in :class:`DCBias` and :class:`DDS`) to the Shuttler Core
synchronously.
:param trig_out: Coefficient update trigger bits. The MSB corresponds
to Channel 15, LSB corresponds to Channel 0.
"""
rtio_output(self.target_o, trig_out)
RELAY_SPI_CONFIG = (0*spi.SPI_OFFLINE | 1*spi.SPI_END |
0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
0*spi.SPI_CLK_POLARITY | 0*spi.SPI_CLK_PHASE |
0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)
ADC_SPI_CONFIG = (0*spi.SPI_OFFLINE | 0*spi.SPI_END |
0*spi.SPI_INPUT | 0*spi.SPI_CS_POLARITY |
1*spi.SPI_CLK_POLARITY | 1*spi.SPI_CLK_PHASE |
0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)
# SPI clock write and read dividers
# CS should assert at least 9.5 ns after clk pulse
SPIT_RELAY_WR = 4
# 25 ns high/low pulse hold (limiting for write)
SPIT_ADC_WR = 4
SPIT_ADC_RD = 16
# SPI CS line
CS_RELAY = 1 << 0
CS_LED = 1 << 1
CS_ADC = 1 << 0
# Referenced AD4115 registers
_AD4115_REG_STATUS = 0x00
_AD4115_REG_ADCMODE = 0x01
_AD4115_REG_DATA = 0x04
_AD4115_REG_ID = 0x07
_AD4115_REG_CH0 = 0x10
_AD4115_REG_SETUPCON0 = 0x20
class Relay:
"""Shuttler AFE relay switches.
It controls the AFE relay switches and the LEDs. Switch on the relay to
enable the AFE output; switch it off to disable the output. The LEDs indicate the
relay status.
.. note::
The relay does not disable ADC measurements. Voltage of any channels
can still be read by the ADC even after switching off the relays.
:param spi_device: SPI bus device name.
:param core_device: Core device name.
"""
kernel_invariant = {"core", "bus"}
def __init__(self, dmgr, spi_device, core_device="core"):
self.core = dmgr.get(core_device)
self.bus = dmgr.get(spi_device)
@kernel
def init(self):
"""Initialize SPI device.
Configures the SPI bus to 16-bits, write-only, simultaneous relay
switches and LED control.
"""
self.bus.set_config_mu(
RELAY_SPI_CONFIG, 16, SPIT_RELAY_WR, CS_RELAY | CS_LED)
@kernel
def enable(self, en: TInt32):
"""Enable/Disable relay switches of corresponding channels.
Each bit corresponds to the relay switch of a channel. Asserting a bit
turns on the corresponding relay switch; Deasserting the same bit
turns off the switch instead.
:param en: Switch enable bits. The MSB corresponds to Channel 15, LSB
corresponds to Channel 0.
"""
self.bus.write(en << 16)
class ADC:
"""Shuttler AFE ADC (AD4115) driver.
:param spi_device: SPI bus device name.
:param core_device: Core device name.
"""
kernel_invariant = {"core", "bus"}
def __init__(self, dmgr, spi_device, core_device="core"):
self.core = dmgr.get(core_device)
self.bus = dmgr.get(spi_device)
@kernel
def read_id(self) -> TInt32:
"""Read the product ID of the ADC.
The expected return value is 0x38DX, the 4 LSbs are don't cares.
:return: The read-back product ID.
"""
return self.read16(_AD4115_REG_ID)
@kernel
def reset(self):
"""AD4115 reset procedure.
This performs a write operation of 96 serial clock cycles with DIN
held at high. It resets the entire device, including the register
contents.
.. note::
The datasheet only requires 64 cycles, but reasserting `CS_n` right
after the transfer appears to interrupt the start-up sequence.
"""
self.bus.set_config_mu(ADC_SPI_CONFIG, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(0xffffffff)
self.bus.write(0xffffffff)
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(0xffffffff)
@kernel
def read8(self, addr: TInt32) -> TInt32:
"""Read from 8 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
16, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xff
@kernel
def read16(self, addr: TInt32) -> TInt32:
"""Read from 16 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
24, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xffff
@kernel
def read24(self, addr: TInt32) -> TInt32:
"""Read from 24 bit register.
:param addr: Register address.
:return: Read-back register content.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END | spi.SPI_INPUT,
32, SPIT_ADC_RD, CS_ADC)
self.bus.write((addr | 0x40) << 24)
return self.bus.read() & 0xffffff
@kernel
def write8(self, addr: TInt32, data: TInt32):
"""Write to 8 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 16, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xff) << 16)
@kernel
def write16(self, addr: TInt32, data: TInt32):
"""Write to 16 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 24, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xffff) << 8)
@kernel
def write24(self, addr: TInt32, data: TInt32):
"""Write to 24 bit register.
:param addr: Register address.
:param data: Data to be written.
"""
self.bus.set_config_mu(
ADC_SPI_CONFIG | spi.SPI_END, 32, SPIT_ADC_WR, CS_ADC)
self.bus.write(addr << 24 | (data & 0xffffff))
@kernel
def read_ch(self, channel: TInt32) -> TFloat:
"""Sample a Shuttler channel on the AFE.
It performs a single conversion using profile 0 and setup 0, on the
selected channel. The sample is then recovered and converted to volt.
:param channel: Shuttler channel to be sampled.
:return: Voltage sample in volt.
"""
# Always configure Profile 0 for single conversion
self.write16(_AD4115_REG_CH0, 0x8000 | ((channel * 2 + 1) << 4))
self.write16(_AD4115_REG_SETUPCON0, 0x1300)
self.single_conversion()
delay(100*us)
adc_code = self.read24(_AD4115_REG_DATA)
return ((adc_code / (1 << 23)) - 1) * 2.5 / 0.1
@kernel
def single_conversion(self):
"""Place the ADC in single conversion mode.
The ADC returns to standby mode after the conversion is complete.
"""
self.write16(_AD4115_REG_ADCMODE, 0x8010)
@kernel
def standby(self):
"""Place the ADC in standby mode and disable the clock.
The ADC can be returned to single conversion mode by calling
:meth:`single_conversion`.
"""
# Selecting internal XO (0b00) also disables clock during standby
self.write16(_AD4115_REG_ADCMODE, 0x8020)
@kernel
def power_down(self):
"""Place the ADC in power-down mode.
The ADC must be reset before returning to other modes.
.. note::
The AD4115 datasheet suggests placing the ADC in standby mode
before power-down. This is to prevent accidental entry into the
power-down mode.
.. seealso::
:meth:`standby`
:meth:`power_up`
"""
self.write16(_AD4115_REG_ADCMODE, 0x8030)
@kernel
def power_up(self):
"""Exit the ADC power-down mode.
The ADC should be in power-down mode before calling this method.
.. seealso::
:meth:`power_down`
"""
self.reset()
# Although the datasheet claims 500 us reset wait time, only waiting
# for ~500 us can result in DOUT pin stuck in high
delay(2500*us)
@kernel
def calibrate(self, volts, trigger, config, samples=[-5.0, 0.0, 5.0]):
"""Calibrate the Shuttler waveform generator using the ADC on the AFE.
It finds the average slope rate and average offset from the samples, and
compensates by writing the pre-DAC gain and offset registers in the
configuration registers.
.. note::
If the pre-calibration slope rate < 1, the calibration procedure
will introduce a pre-DAC gain compensation. However, this may
saturate the pre-DAC voltage code. (See :class:`Config` notes).
Shuttler cannot cover the entire +/- 10 V range in this case.
.. seealso::
:meth:`Config.set_gain`
:meth:`Config.set_offset`
:param volts: A list of all 16 cubic DC-bias spline.
(See :class:`DCBias`)
:param trigger: The Shuttler spline coefficient update trigger.
:param config: The Shuttler Core configuration registers.
:param samples: A list of sample voltages for calibration. There must
be at least 2 samples to perform slope rate calculation.
"""
assert len(volts) == 16
assert len(samples) > 1
measurements = [0.0] * len(samples)
for ch in range(16):
# Find the average slope rate and offset
for i in range(len(samples)):
self.core.break_realtime()
volts[ch].set_waveform(
shuttler_volt_to_mu(samples[i]), 0, 0, 0)
trigger.trigger(1 << ch)
measurements[i] = self.read_ch(ch)
# Find the average output slope
slope_sum = 0.0
for i in range(len(samples) - 1):
slope_sum += (measurements[i+1] - measurements[i])/(samples[i+1] - samples[i])
slope_avg = slope_sum / (len(samples) - 1)
gain_code = int32(1 / slope_avg * (2 ** 16)) & 0xffff
# Scale the measurements by 1/slope, find average offset
offset_sum = 0.0
for i in range(len(samples)):
offset_sum += (measurements[i] / slope_avg) - samples[i]
offset_avg = offset_sum / len(samples)
offset_code = shuttler_volt_to_mu(-offset_avg)
self.core.break_realtime()
config.set_gain(ch, gain_code)
delay_mu(int64(self.core.ref_multiplier))
config.set_offset(ch, offset_code)
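
To make the new driver concrete, a hedged sketch that puts a constant DC level on Shuttler channel 0: with only a0 nonzero, shuttler_volt_to_mu gives the machine-unit value directly, and the update only takes effect once the matching Trigger bit is set. The device names ("shuttler0_relay", "shuttler0_dcbias0", "shuttler0_trigger") are placeholders for whatever the device database defines.

from numpy import int64

from artiq.experiment import *
from artiq.coredevice.shuttler import shuttler_volt_to_mu


class ShuttlerDC(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("shuttler0_relay")    # Relay (AFE output switches)
        self.setattr_device("shuttler0_dcbias0")  # DCBias spline, channel 0
        self.setattr_device("shuttler0_trigger")  # Trigger

    @kernel
    def run(self):
        self.core.reset()
        delay(1*ms)
        self.shuttler0_relay.init()
        self.shuttler0_relay.enable(0b1)          # switch on AFE output 0
        delay(1*ms)
        # constant waveform: a(t) = a0, higher-order coefficients zero
        self.shuttler0_dcbias0.set_waveform(
            shuttler_volt_to_mu(1.5), 0, int64(0), int64(0))
        # commit the pending coefficients on channel 0
        self.shuttler0_trigger.trigger(0b1)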


@ -72,6 +72,10 @@ class SPIMaster:
         self.channel = channel
         self.update_xfer_duration_mu(div, length)

+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]
+
     @portable
     def frequency_to_div(self, f):
         """Convert a SPI clock frequency to the closest SPI clock divider."""
@ -273,9 +277,8 @@ class NRTSPIMaster:
     def set_config_mu(self, flags=0, length=8, div=6, cs=1):
         """Set the ``config`` register.

-        Note that the non-realtime SPI cores are usually clocked by the system
-        clock and not the RTIO clock. In many cases, the SPI configuration is
-        already set by the firmware and you do not need to call this method.
+        In many cases, the SPI configuration is already set by the firmware
+        and you do not need to call this method.
         """
         spi_set_config(self.busno, flags, length, div, cs)


@ -1,228 +0,0 @@
from numpy import int32, int64
from artiq.language.core import kernel, portable, delay
from artiq.coredevice.rtio import rtio_output, rtio_output_wide
from artiq.language.types import TInt32, TInt64, TFloat
class Spline:
r"""Spline interpolating RTIO channel.
One knot of a polynomial basis spline (B-spline) :math:`u(t)`
is defined by the coefficients :math:`u_n` up to order :math:`n = k`.
If the coefficients are evaluated starting at time :math:`t_0`,
the output :math:`u(t)` for :math:`t > t_0` is:
.. math::
u(t) &= \sum_{n=0}^k \frac{u_n}{n!} (t - t_0)^n \\
&= u_0 + u_1 (t - t_0) + \frac{u_2}{2} (t - t_0)^2 + \dots
This class contains multiple methods to convert spline knot data from SI
to machine units and multiple methods that set the current spline
coefficient data. None of these advance the timeline. The :meth:`smooth`
method is the only method that advances the timeline.
:param width: Width in bits of the quantity that this spline controls
:param time_width: Width in bits of the time counter of this spline
:param channel: RTIO channel number
:param core_device: Core device that this spline is attached to
:param scale: Scale for conversion between machine units and physical
units; to be given as the "full scale physical value".
"""
kernel_invariants = {"channel", "core", "scale", "width",
"time_width", "time_scale"}
def __init__(self, width, time_width, channel, core_device, scale=1.):
self.core = core_device
self.channel = channel
self.width = width
self.scale = float((int64(1) << width) / scale)
self.time_width = time_width
self.time_scale = float((1 << time_width) *
core_device.coarse_ref_period)
@portable(flags={"fast-math"})
def to_mu(self, value: TFloat) -> TInt32:
"""Convert floating point ``value`` from physical units to 32 bit
integer machine units."""
return int32(round(value*self.scale))
@portable(flags={"fast-math"})
def from_mu(self, value: TInt32) -> TFloat:
"""Convert 32 bit integer ``value`` from machine units to floating
point physical units."""
return value/self.scale
@portable(flags={"fast-math"})
def to_mu64(self, value: TFloat) -> TInt64:
"""Convert floating point ``value`` from physical units to 64 bit
integer machine units."""
return int64(round(value*self.scale))
@kernel
def set_mu(self, value: TInt32):
"""Set spline value (machine units).
:param value: Spline value in integer machine units.
"""
rtio_output(self.channel << 8, value)
@kernel(flags={"fast-math"})
def set(self, value: TFloat):
"""Set spline value.
:param value: Spline value relative to full-scale.
"""
if self.width > 32:
l = [int32(0)] * 2
self.pack_coeff_mu([self.to_mu64(value)], l)
rtio_output_wide(self.channel << 8, l)
else:
rtio_output(self.channel << 8, self.to_mu(value))
@kernel
def set_coeff_mu(self, value): # TList(TInt32)
"""Set spline raw values.
:param value: Spline packed raw values.
"""
rtio_output_wide(self.channel << 8, value)
@portable(flags={"fast-math"})
def pack_coeff_mu(self, coeff, packed): # TList(TInt64), TList(TInt32)
"""Pack coefficients into RTIO data
:param coeff: TList(TInt64) list of machine units spline coefficients.
Lowest (zeroth) order first. The coefficient list is zero-extended
by the RTIO gateware.
:param packed: TList(TInt32) list for packed RTIO data. Must be
pre-allocated. Length in bits is
``n*width + (n - 1)*n//2*time_width``
"""
pos = 0
for i in range(len(coeff)):
wi = self.width + i*self.time_width
ci = coeff[i]
while wi != 0:
j = pos//32
used = pos - 32*j
avail = 32 - used
if avail > wi:
avail = wi
cij = int32(ci)
if avail != 32:
cij &= (1 << avail) - 1
packed[j] |= cij << used
ci >>= avail
wi -= avail
pos += avail
@portable(flags={"fast-math"})
def coeff_to_mu(self, coeff, coeff64): # TList(TFloat), TList(TInt64)
"""Convert a floating point list of coefficients into a 64 bit
integer (preallocated).
:param coeff: TList(TFloat) list of coefficients in physical units.
:param coeff64: TList(TInt64) preallocated list of coefficients in
machine units.
"""
for i in range(len(coeff)):
vi = coeff[i] * self.scale
for j in range(i):
vi *= self.time_scale
ci = int64(round(vi))
coeff64[i] = ci
# artiq.wavesynth.coefficients.discrete_compensate:
if i == 2:
coeff64[1] += ci >> self.time_width + 1
elif i == 3:
coeff64[2] += ci >> self.time_width
coeff64[1] += ci // 6 >> 2*self.time_width
def coeff_as_packed_mu(self, coeff64):
"""Pack 64 bit integer machine units coefficients into 32 bit integer
RTIO data list.
This is a host-only method that can be used to generate packed
spline coefficient data to be frozen into kernels at compile time.
"""
n = len(coeff64)
width = n*self.width + (n - 1)*n//2*self.time_width
packed = [int32(0)] * ((width + 31)//32)
self.pack_coeff_mu(coeff64, packed)
return packed
def coeff_as_packed(self, coeff):
"""Convert floating point spline coefficients into 32 bit integer
packed data.
This is a host-only method that can be used to generate packed
spline coefficient data to be frozen into kernels at compile time.
"""
coeff64 = [int64(0)] * len(coeff)
self.coeff_to_mu(coeff, coeff64)
return self.coeff_as_packed_mu(coeff64)
@kernel(flags={"fast-math"})
def set_coeff(self, coeff): # TList(TFloat)
"""Set spline coefficients.
Missing coefficients (high order) are zero-extended by the RTIO
gateware.
If more coefficients are supplied than the gateware supports, the extra
coefficients are ignored.
:param value: List of floating point spline coefficients,
lowest order (constant) coefficient first. Units are the
unit of this spline's value times increasing powers of 1/s.
"""
n = len(coeff)
coeff64 = [int64(0)] * n
self.coeff_to_mu(coeff, coeff64)
width = n*self.width + (n - 1)*n//2*self.time_width
packed = [int32(0)] * ((width + 31)//32)
self.pack_coeff_mu(coeff64, packed)
self.set_coeff_mu(packed)
@kernel(flags={"fast-math"})
def smooth(self, start: TFloat, stop: TFloat, duration: TFloat,
order: TInt32):
"""Initiate an interpolated value change.
For zeroth order (step) interpolation, the step is at
``start + duration/2``.
First order interpolation corresponds to a linear value ramp from
``start`` to ``stop`` over ``duration``.
The third order interpolation is constrained to have zero first
order derivative at both `start` and `stop`.
For first order and third order interpolation (linear and cubic)
the interpolator needs to be stopped explicitly at the stop time
(e.g. by setting spline coefficient data or starting a new
:meth:`smooth` interpolation).
This method advances the timeline by ``duration``.
:param start: Initial value of the change. In physical units.
:param stop: Final value of the change. In physical units.
:param duration: Duration of the interpolation. In physical units.
:param order: Order of the interpolation. Only 0, 1,
and 3 are valid: step, linear, cubic.
"""
if order == 0:
delay(duration/2.)
self.set_coeff([stop])
delay(duration/2.)
elif order == 1:
self.set_coeff([start, (stop - start)/duration])
delay(duration)
elif order == 3:
v2 = 6.*(stop - start)/(duration*duration)
self.set_coeff([start, 0., v2, -2.*v2/duration])
delay(duration)
else:
raise ValueError("Invalid interpolation order. "
"Supported orders are: 0, 1, 3.")

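Since the removed Spline class backed the nine SAWG data channels, a brief hedged sketch of its timeline-advancing smooth() helper follows; "sawg0" is again a placeholder device name.

from artiq.experiment import *


class AmplitudeRamp(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("sawg0")

    @kernel
    def run(self):
        self.core.reset()
        delay(1*ms)
        self.sawg0.frequency1.set(5*MHz)
        # cubic ramp from 0 to 0.8 of full scale over 100 us;
        # smooth() advances the timeline by its duration
        self.sawg0.amplitude1.smooth(0.0, 0.8, 100*us, 3)
        # the cubic interpolator must be stopped explicitly at the stop value
        self.sawg0.amplitude1.set(0.8)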

@ -1,12 +0,0 @@
from artiq.language.core import syscall
from artiq.language.types import TInt32, TNone
@syscall(flags={"nounwind", "nowrite"})
def mfspr(spr: TInt32) -> TInt32:
raise NotImplementedError("syscall not simulated")
@syscall(flags={"nowrite", "nowrite"})
def mtspr(spr: TInt32, value: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated")


@ -23,12 +23,12 @@ def y_mu_to_full_scale(y):
 @portable
-def adc_mu_to_volts(x, gain):
+def adc_mu_to_volts(x, gain, corrected_fs=True):
     """Convert servo ADC data from machine units to Volt."""
     val = (x >> 1) & 0xffff
     mask = 1 << 15
     val = -(val & mask) + (val & ~mask)
-    return sampler.adc_mu_to_volt(val, gain)
+    return sampler.adc_mu_to_volt(val, gain, corrected_fs)


 class SUServo:
@ -57,38 +57,38 @@ class SUServo:
     :param channel: RTIO channel number
     :param pgia_device: Name of the Sampler PGIA gain setting SPI bus
-    :param cpld0_device: Name of the first Urukul CPLD SPI bus
-    :param cpld1_device: Name of the second Urukul CPLD SPI bus
-    :param dds0_device: Name of the AD9910 device for the DDS on the first
-        Urukul
-    :param dds1_device: Name of the AD9910 device for the DDS on the second
-        Urukul
+    :param cpld_devices: Names of the Urukul CPLD SPI buses
+    :param dds_devices: Names of the AD9910 devices
     :param gains: Initial value for PGIA gains shift register
         (default: 0x0000). Knowledge of this state is not transferred
         between experiments.
+    :param sampler_hw_rev: Sampler's revision string
     :param core_device: Core device name
     """
-    kernel_invariants = {"channel", "core", "pgia", "cpld0", "cpld1",
-                         "dds0", "dds1", "ref_period_mu"}
+    kernel_invariants = {"channel", "core", "pgia", "cplds", "ddses",
+                         "ref_period_mu", "corrected_fs"}

     def __init__(self, dmgr, channel, pgia_device,
-                 cpld0_device, cpld1_device,
-                 dds0_device, dds1_device,
-                 gains=0x0000, core_device="core"):
+                 cpld_devices, dds_devices,
+                 gains=0x0000, sampler_hw_rev="v2.2", core_device="core"):
         self.core = dmgr.get(core_device)
         self.pgia = dmgr.get(pgia_device)
         self.pgia.update_xfer_duration_mu(div=4, length=16)
-        self.dds0 = dmgr.get(dds0_device)
-        self.dds1 = dmgr.get(dds1_device)
-        self.cpld0 = dmgr.get(cpld0_device)
-        self.cpld1 = dmgr.get(cpld1_device)
+        assert len(dds_devices) == len(cpld_devices)
+        self.ddses = [dmgr.get(dds) for dds in dds_devices]
+        self.cplds = [dmgr.get(cpld) for cpld in cpld_devices]
         self.channel = channel
         self.gains = gains
         self.ref_period_mu = self.core.seconds_to_mu(
             self.core.coarse_ref_period)
+        self.corrected_fs = sampler.Sampler.use_corrected_fs(sampler_hw_rev)
         assert self.ref_period_mu == self.core.ref_multiplier

+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]
+
     @kernel
     def init(self):
         """Initialize the servo, Sampler and both Urukuls.
@ -109,17 +109,15 @@ class SUServo:
             sampler.SPI_CONFIG | spi.SPI_END,
             16, 4, sampler.SPI_CS_PGIA)

-        self.cpld0.init(blind=True)
-        cfg0 = self.cpld0.cfg_reg
-        self.cpld0.cfg_write(cfg0 | (0xf << urukul.CFG_MASK_NU))
-        self.dds0.init(blind=True)
-        self.cpld0.cfg_write(cfg0)
-
-        self.cpld1.init(blind=True)
-        cfg1 = self.cpld1.cfg_reg
-        self.cpld1.cfg_write(cfg1 | (0xf << urukul.CFG_MASK_NU))
-        self.dds1.init(blind=True)
-        self.cpld1.cfg_write(cfg1)
+        for i in range(len(self.cplds)):
+            cpld = self.cplds[i]
+            dds = self.ddses[i]
+
+            cpld.init(blind=True)
+            prev_cpld_cfg = cpld.cfg_reg
+            cpld.cfg_write(prev_cpld_cfg | (0xf << urukul.CFG_MASK_NU))
+            dds.init(blind=True)
+            cpld.cfg_write(prev_cpld_cfg)

     @kernel
     def write(self, addr, value):
@ -242,7 +240,7 @@ class SUServo:
         """
         val = self.get_adc_mu(channel)
         gain = (self.gains >> (channel*2)) & 0b11
-        return adc_mu_to_volts(val, gain)
+        return adc_mu_to_volts(val, gain, self.corrected_fs)


 class Channel:
@ -257,9 +255,15 @@ class Channel:
         self.servo = dmgr.get(servo_device)
         self.core = self.servo.core
         self.channel = channel
-        # FIXME: this assumes the mem channel is right after the control
-        # channels
-        self.servo_channel = self.channel + 8 - self.servo.channel
+        # This assumes the mem channel is right after the control channels
+        # Make sure this is always the case in eem.py
+        self.servo_channel = (self.channel + 4 * len(self.servo.cplds) -
+                              self.servo.channel)
+        self.dds = self.servo.ddses[self.servo_channel // 4]
+
+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]

     @kernel
     def set(self, en_out, en_iir=0, profile=0):
@ -294,7 +298,7 @@ class Channel:
         base = (self.servo_channel << 8) | (profile << 3)
         self.servo.write(base + 0, ftw >> 16)
         self.servo.write(base + 6, (ftw & 0xffff))
-        self.servo.write(base + 4, offs)
+        self.set_dds_offset_mu(profile, offs)
         self.servo.write(base + 2, pow_)

     @kernel
@ -307,21 +311,49 @@ class Channel:
         :param profile: Profile number (0-31)
         :param frequency: DDS frequency in Hz
-        :param offset: IIR offset (negative setpoint) in units of full scale.
-            For positive ADC voltages as setpoints, this should be negative.
-            Due to rounding and representation as two's complement,
-            ``offset=1`` can not be represented while ``offset=-1`` can.
+        :param offset: IIR offset (negative setpoint) in units of full scale,
+            see :meth:`dds_offset_to_mu`
         :param phase: DDS phase in turns
         """
-        if self.servo_channel < 4:
-            dds = self.servo.dds0
-        else:
-            dds = self.servo.dds1
-        ftw = dds.frequency_to_ftw(frequency)
-        pow_ = dds.turns_to_pow(phase)
-        offs = int(round(offset*(1 << COEFF_WIDTH - 1)))
+        ftw = self.dds.frequency_to_ftw(frequency)
+        pow_ = self.dds.turns_to_pow(phase)
+        offs = self.dds_offset_to_mu(offset)
         self.set_dds_mu(profile, ftw, offs, pow_)

+    @kernel
+    def set_dds_offset_mu(self, profile, offs):
+        """Set only IIR offset in DDS coefficient profile.
+
+        See :meth:`set_dds_mu` for setting the complete DDS profile.
+
+        :param profile: Profile number (0-31)
+        :param offs: IIR offset (17 bit signed)
+        """
+        base = (self.servo_channel << 8) | (profile << 3)
+        self.servo.write(base + 4, offs)
+
+    @kernel
+    def set_dds_offset(self, profile, offset):
+        """Set only IIR offset in DDS coefficient profile.
+
+        See :meth:`set_dds` for setting the complete DDS profile.
+
+        :param profile: Profile number (0-31)
+        :param offset: IIR offset (negative setpoint) in units of full scale
+        """
+        self.set_dds_offset_mu(profile, self.dds_offset_to_mu(offset))
+
+    @portable
+    def dds_offset_to_mu(self, offset):
+        """Convert IIR offset (negative setpoint) from units of full scale to
+        machine units (see :meth:`set_dds_mu`, :meth:`set_dds_offset_mu`).
+
+        For positive ADC voltages as setpoints, this should be negative. Due to
+        rounding and representation as two's complement, ``offset=1`` can not
+        be represented while ``offset=-1`` can.
+        """
+        return int(round(offset * (1 << COEFF_WIDTH - 1)))
+
     @kernel
     def set_iir_mu(self, profile, adc, a1, b0, b1, dly=0):
         """Set profile IIR coefficients in machine units.
@ -526,5 +558,7 @@ class Channel:
         :param y: IIR state in units of full scale
         """
         y_mu = int(round(y * Y_FULL_SCALE_MU))
+        if y_mu < 0 or y_mu > (1 << 17) - 1:
+            raise ValueError("Invalid SUServo y-value!")
         self.set_y_mu(profile, y_mu)
         return y_mu
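
To illustrate the API change above: the SUServo device entry now takes lists (cpld_devices, dds_devices) plus a sampler_hw_rev string instead of the four fixed cpld0/cpld1/dds0/dds1 arguments, and a setpoint can now be moved without rewriting the whole DDS profile. A hedged kernel sketch, with "suservo0" and "suservo0_ch0" as placeholder device names:

from artiq.experiment import *


class SUServoSetpoint(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("suservo0")      # SUServo
        self.setattr_device("suservo0_ch0")  # suservo.Channel

    @kernel
    def run(self):
        self.core.reset()
        self.suservo0.init()
        delay(10*us)
        # full profile: profile number, frequency, offset (negative setpoint)
        self.suservo0_ch0.set_dds(0, 100*MHz, -0.3)
        delay(10*us)
        # later on, only move the setpoint
        self.suservo0_ch0.set_dds_offset(0, -0.2)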


@ -0,0 +1,136 @@
class TRF372017:
"""TRF372017 settings and register map.
For possible values, documentation, and explanation, see the datasheet.
https://www.ti.com/lit/gpn/trf372017
"""
rdiv = 2 # 13b - highest valid f_PFD
ref_inv = 0
neg_vco = 1
icp = 0 # 1.94 mA, 5b
icp_double = 0
cal_clk_sel = 0b1110 # div64, 4b
# default f_vco is 2.875 GHz
nint = 23 # 16b - lowest value suitable for fractional & integer mode
pll_div_sel = 0b01 # div2, 2b
prsc_sel = 0 # 4/5
vco_sel = 2 # 2b
vcosel_mode = 0
cal_acc = 0b00 # 2b
en_cal = 0 # leave at 0 - calibration is performed in `Phaser.init()`
nfrac = 0 # 25b
pwd_pll = 0
pwd_cp = 0
pwd_vco = 0
pwd_vcomux = 0
pwd_div124 = 0
pwd_presc = 0
pwd_out_buff = 1 # leave at 1 - only enable outputs after calibration
pwd_lo_div = 1 # leave at 1 - only enable outputs after calibration
pwd_tx_div = 1 # leave at 1 - only enable outputs after calibration
pwd_bb_vcm = 0
pwd_dc_off = 0
en_extvco = 0
en_isource = 0
ld_ana_prec = 0 # 2b
cp_tristate = 0 # 2b
speedup = 0
ld_dig_prec = 1
en_dith = 1
mod_ord = 2 # 3rd order, 2b
dith_sel = 0
del_sd_clk = 2 # 2b
en_frac = 0
vcobias_rtrim = 4 # 3b
pllbias_rtrim = 2 # 2b
vco_bias = 8 # 460 µA, 4b
vcobuf_bias = 2 # 2b
vcomux_bias = 3 # 2b
bufout_bias = 0 # 300 µA, 2b
vco_cal_ib = 0 # PTAT
vco_cal_ref = 2 # 1.04 V, 2b
vco_ampl_ctrl = 3 # 2b
vco_vb_ctrl = 0 # 1.2 V, 2b
en_ld_isource = 0
ioff = 0x80 # 8b
qoff = 0x80 # 8b
vref_sel = 4 # 0.85 V, 3b
tx_div_sel = 0 # div1, 2b
lo_div_sel = 0 # div1, 2b
tx_div_bias = 1 # 37.5 µA, 2b
lo_div_bias = 2 # 50 µA, 2b
vco_trim = 0x20 # 6b
vco_test_mode = 0
cal_bypass = 0
mux_ctrl = 1 # lock detect, 3b
isource_sink = 0
isource_trim = 4 # 3b
pd_tc = 0 # 2b
ib_vcm_sel = 0 # ptat
dcoffset_i = 2 # 150 µA, 2b
vco_bias_sel = 1 # spi
def __init__(self, updates=None):
if updates is None:
return
for key, value in updates.items():
if not hasattr(self, key):
raise KeyError("invalid setting", key)
setattr(self, key, value)
def get_mmap(self):
"""Memory map for TRF372017"""
mmap = []
mmap.append(
0x9 |
(self.rdiv << 5) | (self.ref_inv << 19) | (self.neg_vco << 20) |
(self.icp << 21) | (self.icp_double << 26) |
(self.cal_clk_sel << 27))
mmap.append(
0xa |
(self.nint << 5) | (self.pll_div_sel << 21) |
(self.prsc_sel << 23) | (self.vco_sel << 26) |
(self.vcosel_mode << 28) | (self.cal_acc << 29) |
(self.en_cal << 31))
mmap.append(0xb | (self.nfrac << 5))
mmap.append(
0xc |
(self.pwd_pll << 5) | (self.pwd_cp << 6) | (self.pwd_vco << 7) |
(self.pwd_vcomux << 8) | (self.pwd_div124 << 9) |
(self.pwd_presc << 10) | (self.pwd_out_buff << 12) |
(self.pwd_lo_div << 13) | (self.pwd_tx_div << 14) |
(self.pwd_bb_vcm << 15) | (self.pwd_dc_off << 16) |
(self.en_extvco << 17) | (self.en_isource << 18) |
(self.ld_ana_prec << 19) | (self.cp_tristate << 21) |
(self.speedup << 23) | (self.ld_dig_prec << 24) |
(self.en_dith << 25) | (self.mod_ord << 26) |
(self.dith_sel << 28) | (self.del_sd_clk << 29) |
(self.en_frac << 31))
mmap.append(
0xd |
(self.vcobias_rtrim << 5) | (self.pllbias_rtrim << 8) |
(self.vco_bias << 10) | (self.vcobuf_bias << 14) |
(self.vcomux_bias << 16) | (self.bufout_bias << 18) |
(1 << 21) | (self.vco_cal_ib << 22) | (self.vco_cal_ref << 23) |
(self.vco_ampl_ctrl << 26) | (self.vco_vb_ctrl << 28) |
(self.en_ld_isource << 31))
mmap.append(
0xe |
(self.ioff << 5) | (self.qoff << 13) | (self.vref_sel << 21) |
(self.tx_div_sel << 24) | (self.lo_div_sel << 26) |
(self.tx_div_bias << 28) | (self.lo_div_bias << 30))
mmap.append(
0xf |
(self.vco_trim << 7) | (self.vco_test_mode << 14) |
(self.cal_bypass << 15) | (self.mux_ctrl << 16) |
(self.isource_sink << 19) | (self.isource_trim << 20) |
(self.pd_tc << 23) | (self.ib_vcm_sel << 25) |
(1 << 28) | (self.dcoffset_i << 29) |
(self.vco_bias_sel << 31))
return mmap
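
As a usage note for the register map added above (presumably consumed by the Phaser driver, whose diff is suppressed earlier on this page): settings are overridden through the updates dict and serialized with get_mmap(). A small host-side sketch; the overridden values are arbitrary illustrations, not recommended settings.

from artiq.coredevice.trf372017 import TRF372017

# Unknown keys raise KeyError("invalid setting", key).
trf = TRF372017({"rdiv": 4, "ioff": 0x70})

# Seven 32-bit register words; the register address (0x9..0xf) sits in the
# low bits, the fields are packed from bit 5 upwards.
for word in trf.get_mmap():
    print("0x{:08x}".format(word))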


@ -36,6 +36,10 @@ class TTLOut:
         self.channel = channel
         self.target_o = channel << 8

+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]
+
     @kernel
     def output(self):
         pass
@ -128,6 +132,10 @@ class TTLInOut:
         self.target_sens = (channel << 8) + 2
         self.target_sample = (channel << 8) + 3

+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]
+
     @kernel
     def set_oe(self, oe):
         rtio_output(self.target_oe, 1 if oe else 0)
@ -465,6 +473,10 @@ class TTLClockGen:
         self.acc_width = numpy.int64(acc_width)

+    @staticmethod
+    def get_rtio_channels(channel, **kwargs):
+        return [(channel, None)]
+
     @portable
     def frequency_to_ftw(self, frequency):
         """Returns the frequency tuning word corresponding to the given


@ -1,15 +1,15 @@
from numpy import int32, int64
from artiq.language.core import kernel, delay, portable, at_mu, now_mu from artiq.language.core import kernel, delay, portable, at_mu, now_mu
from artiq.language.units import us, ms from artiq.language.units import us, ms
from artiq.language.types import TInt32, TFloat, TBool
from numpy import int32, int64
from artiq.coredevice import spi2 as spi from artiq.coredevice import spi2 as spi
SPI_CONFIG = (0 * spi.SPI_OFFLINE | 0 * spi.SPI_END |
SPI_CONFIG = (0*spi.SPI_OFFLINE | 0*spi.SPI_END | 0 * spi.SPI_INPUT | 1 * spi.SPI_CS_POLARITY |
0*spi.SPI_INPUT | 1*spi.SPI_CS_POLARITY | 0 * spi.SPI_CLK_POLARITY | 0 * spi.SPI_CLK_PHASE |
0*spi.SPI_CLK_POLARITY | 0*spi.SPI_CLK_PHASE | 0 * spi.SPI_LSB_FIRST | 0 * spi.SPI_HALF_DUPLEX)
0*spi.SPI_LSB_FIRST | 0*spi.SPI_HALF_DUPLEX)
# SPI clock write and read dividers # SPI clock write and read dividers
SPIT_CFG_WR = 2 SPIT_CFG_WR = 2
@ -52,6 +52,9 @@ CS_DDS_CH1 = 5
CS_DDS_CH2 = 6 CS_DDS_CH2 = 6
CS_DDS_CH3 = 7 CS_DDS_CH3 = 7
# Default profile
DEFAULT_PROFILE = 7
@portable @portable
def urukul_cfg(rf_sw, led, profile, io_update, mask_nu, def urukul_cfg(rf_sw, led, profile, io_update, mask_nu,
@ -105,7 +108,7 @@ class _RegIOUpdate:
self.cpld = cpld self.cpld = cpld
@kernel @kernel
def pulse(self, t): def pulse(self, t: TFloat):
cfg = self.cpld.cfg_reg cfg = self.cpld.cfg_reg
self.cpld.cfg_write(cfg | (1 << CFG_IO_UPDATE)) self.cpld.cfg_write(cfg | (1 << CFG_IO_UPDATE))
delay(t) delay(t)
@ -117,7 +120,7 @@ class _DummySync:
self.cpld = cpld self.cpld = cpld
@kernel @kernel
def set_mu(self, ftw): def set_mu(self, ftw: TInt32):
pass pass
@@ -188,7 +191,7 @@ class CPLD:
             assert sync_div is None
             sync_div = 0

-        self.cfg_reg = urukul_cfg(rf_sw=rf_sw, led=0, profile=0,
+        self.cfg_reg = urukul_cfg(rf_sw=rf_sw, led=0, profile=DEFAULT_PROFILE,
                                   io_update=0, mask_nu=0, clk_sel=clk_sel,
                                   sync_sel=sync_sel,
                                   rst=0, io_rst=0, clk_div=clk_div)
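Because the CPLD configuration register now defaults to DEFAULT_PROFILE (7) instead of 0, experiments that programmed single-tone data into profile 0 and relied on the old default need to select the profile explicitly. A hedged sketch (device names are placeholders):

from artiq.experiment import *

class SelectProfileZero(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("urukul0_cpld")

    @kernel
    def run(self):
        self.core.reset()
        self.urukul0_cpld.init()
        # PROFILE pins now point at 7 by default; switch back to 0 if that is
        # where the DDS single-tone registers were programmed.
        self.urukul0_cpld.set_profile(0)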
@@ -196,12 +199,12 @@ class CPLD:
         self.sync_div = sync_div

     @kernel
-    def cfg_write(self, cfg):
+    def cfg_write(self, cfg: TInt32):
         """Write to the configuration register.

         See :func:`urukul_cfg` for possible flags.

-        :param data: 24 bit data to be written. Will be stored at
+        :param cfg: 24 bit data to be written. Will be stored at
             :attr:`cfg_reg`.
         """
         self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 24,
@@ -210,7 +213,7 @@ class CPLD:
         self.cfg_reg = cfg

     @kernel
-    def sta_read(self):
+    def sta_read(self) -> TInt32:
         """Read the status register.

         Use any of the following functions to extract values:

@@ -229,7 +232,7 @@ class CPLD:
         return self.bus.read()

     @kernel
-    def init(self, blind=False):
+    def init(self, blind: TBool = False):
         """Initialize and detect Urukul.

         Resets the DDS I/O interface and verifies correct CPLD gateware
@@ -247,12 +250,12 @@ class CPLD:
         proto_rev = urukul_sta_proto_rev(self.sta_read())
         if proto_rev != STA_PROTO_REV_MATCH:
             raise ValueError("Urukul proto_rev mismatch")
-        delay(100*us)  # reset, slack
+        delay(100 * us)  # reset, slack
         self.cfg_write(cfg)
         if self.sync_div:
             at_mu(now_mu() & ~0xf)  # align to RTIO/2
             self.set_sync_div(self.sync_div)  # 125 MHz/2 = 1 GHz/16
-        delay(1*ms)  # DDS wake up
+        delay(1 * ms)  # DDS wake up

     @kernel
     def io_rst(self):
@@ -261,7 +264,7 @@ class CPLD:
         self.cfg_write(self.cfg_reg & ~(1 << CFG_IO_RST))

     @kernel
-    def cfg_sw(self, channel, on):
+    def cfg_sw(self, channel: TInt32, on: TBool):
         """Configure the RF switches through the configuration register.

         These values are logically OR-ed with the LVDS lines on EEM1.

@@ -277,21 +280,44 @@ class CPLD:
         self.cfg_write(c)

     @kernel
-    def cfg_switches(self, state):
+    def cfg_switches(self, state: TInt32):
         """Configure all four RF switches through the configuration register.

         :param state: RF switch state as a 4 bit integer.
         """
         self.cfg_write((self.cfg_reg & ~0xf) | state)
@portable(flags={"fast-math"})
def mu_to_att(self, att_mu: TInt32) -> TFloat:
"""Convert a digital attenuation setting to dB.
:param att_mu: Digital attenuation setting.
:return: Attenuation setting in dB.
"""
return (255 - (att_mu & 0xff)) / 8
@portable(flags={"fast-math"})
def att_to_mu(self, att: TFloat) -> TInt32:
"""Convert an attenuation setting in dB to machine units.
:param att: Attenuation setting in dB.
:return: Digital attenuation setting.
"""
code = int32(255) - int32(round(att * 8))
if code < 0 or code > 255:
raise ValueError("Invalid urukul.CPLD attenuation!")
return code
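The two helpers make the attenuator encoding explicit: one LSB is 0.125 dB and the code is inverted, so 10 dB maps to 255 - 80 = 175 and back to (255 - 175) / 8 = 10.0 dB. A host-side sanity check of the same arithmetic (plain Python, no hardware required):

def att_to_mu(att):
    code = 255 - int(round(att * 8))
    if code < 0 or code > 255:
        raise ValueError("Invalid urukul.CPLD attenuation!")
    return code

def mu_to_att(att_mu):
    return (255 - (att_mu & 0xff)) / 8

assert att_to_mu(0.0) == 255 and mu_to_att(255) == 0.0    # minimum attenuation
assert att_to_mu(10.0) == 175 and mu_to_att(175) == 10.0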
     @kernel
-    def set_att_mu(self, channel, att):
+    def set_att_mu(self, channel: TInt32, att: TInt32):
         """Set digital step attenuator in machine units.

-        This method will write the attenuator settings of all four channels.
+        This method will also write the attenuator settings of the three
+        other channels. Use :meth:`get_att_mu` to retrieve the hardware
+        state set in previous experiments.

         :param channel: Attenuator channel (0-3).
-        :param att: Digital attenuation setting:
+        :param att: 8-bit digital attenuation setting:
             255 minimum attenuation, 0 maximum attenuation (31.5 dB)
         """
         a = self.att_reg & ~(0xff << (channel * 8))
@ -299,7 +325,7 @@ class CPLD:
self.set_all_att_mu(a) self.set_all_att_mu(a)
@kernel @kernel
def set_all_att_mu(self, att_reg): def set_all_att_mu(self, att_reg: TInt32):
"""Set all four digital step attenuators (in machine units). """Set all four digital step attenuators (in machine units).
.. seealso:: :meth:`set_att_mu` .. seealso:: :meth:`set_att_mu`
@@ -312,20 +338,29 @@ class CPLD:
         self.att_reg = att_reg

     @kernel
-    def set_att(self, channel, att):
+    def set_att(self, channel: TInt32, att: TFloat):
         """Set digital step attenuator in SI units.

+        This method will write the attenuator settings of all four channels.
+
+        .. seealso:: :meth:`set_att_mu`
+
         :param channel: Attenuator channel (0-3).
         :param att: Attenuation setting in dB. Higher value is more
             attenuation. Minimum attenuation is 0*dB, maximum attenuation is
             31.5*dB.
         """
-        self.set_att_mu(channel, 255 - int32(round(att*8)))
+        self.set_att_mu(channel, self.att_to_mu(att))
     @kernel
-    def get_att_mu(self):
+    def get_att_mu(self) -> TInt32:
         """Return the digital step attenuator settings in machine units.

+        The result is stored and will be used in future calls of
+        :meth:`set_att_mu` and :meth:`set_att`.
+
+        .. seealso:: :meth:`get_channel_att_mu`
+
         :return: 32 bit attenuator settings
         """
         self.bus.set_config_mu(SPI_CONFIG | spi.SPI_INPUT, 32,

@@ -333,13 +368,41 @@ class CPLD:
         self.bus.write(0)  # shift in zeros, shift out current value
         self.bus.set_config_mu(SPI_CONFIG | spi.SPI_END, 32,
                                SPIT_ATT_WR, CS_ATT)
-        delay(10*us)
+        delay(10 * us)
         self.att_reg = self.bus.read()
         self.bus.write(self.att_reg)  # shift in current value again and latch
         return self.att_reg
@kernel @kernel
def set_sync_div(self, div): def get_channel_att_mu(self, channel: TInt32) -> TInt32:
"""Get digital step attenuator value for a channel in machine units.
The result is stored and will be used in future calls of
:meth:`set_att_mu` and :meth:`set_att`.
.. seealso:: :meth:`get_att_mu`
:param channel: Attenuator channel (0-3).
:return: 8-bit digital attenuation setting:
255 minimum attenuation, 0 maximum attenuation (31.5 dB)
"""
return int32((self.get_att_mu() >> (channel * 8)) & 0xff)
@kernel
def get_channel_att(self, channel: TInt32) -> TFloat:
"""Get digital step attenuator value for a channel in SI units.
.. seealso:: :meth:`get_channel_att_mu`
:param channel: Attenuator channel (0-3).
:return: Attenuation setting in dB. Higher value is more
attenuation. Minimum attenuation is 0*dB, maximum attenuation is
31.5*dB.
"""
return self.mu_to_att(self.get_channel_att_mu(channel))
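get_channel_att_mu simply slices one byte out of the packed 32-bit attenuator shift register (channel n occupies bits 8n..8n+7). An illustration with arbitrary codes:

att_reg = (95 << 24) | (135 << 16) | (255 << 8) | 175   # ch3, ch2, ch1, ch0

def channel_att_mu(att_reg, channel):
    return (att_reg >> (channel * 8)) & 0xff

assert channel_att_mu(att_reg, 0) == 175   # 10.0 dB
assert channel_att_mu(att_reg, 1) == 255   #  0.0 dB
assert channel_att_mu(att_reg, 2) == 135   # 15.0 dB
assert channel_att_mu(att_reg, 3) == 95    # 20.0 dB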
@kernel
def set_sync_div(self, div: TInt32):
"""Set the SYNC_IN AD9910 pulse generator frequency """Set the SYNC_IN AD9910 pulse generator frequency
and align it to the current RTIO timestamp. and align it to the current RTIO timestamp.
@@ -351,12 +414,12 @@ class CPLD:
         Minimum division ratio is 2. Maximum division ratio is 16.
         """
         ftw_max = 1 << 4
-        ftw = ftw_max//div
-        assert ftw*div == ftw_max
+        ftw = ftw_max // div
+        assert ftw * div == ftw_max
         self.sync.set_mu(ftw)

     @kernel
-    def set_profile(self, profile):
+    def set_profile(self, profile: TInt32):
         """Set the PROFILE pins.

         The PROFILE pins are common to all four DDS channels.

View File

@@ -27,15 +27,15 @@ class Zotino(AD53xx):
     :param clr_device: CLR RTIO TTLOut channel name.
     :param div_write: SPI clock divider for write operations (default: 4,
         50MHz max SPI clock)
-    :param div_read: SPI clock divider for read operations (default: 8, not
-        optimized for speed, but cf data sheet t22: 25ns min SCLK edge to SDO
-        valid)
+    :param div_read: SPI clock divider for read operations (default: 16, not
+        optimized for speed; datasheet says t22: 25ns min SCLK edge to SDO
+        valid, and suggests the SPI speed for reads should be <=20 MHz)
     :param vref: DAC reference voltage (default: 5.)
     :param core_device: Core device name (default: "core")
     """

     def __init__(self, dmgr, spi_device, ldac_device=None, clr_device=None,
-                 div_write=4, div_read=8, vref=5., core="core"):
+                 div_write=4, div_read=16, vref=5., core="core"):
         AD53xx.__init__(self, dmgr=dmgr, spi_device=spi_device,
                         ldac_device=ldac_device, clr_device=clr_device,
                         chip_select=_SPI_CS_DAC, div_write=div_write,

View File

@ -1,3 +1,4 @@
import asyncio
import logging import logging
from PyQt5 import QtCore, QtWidgets from PyQt5 import QtCore, QtWidgets
@ -44,6 +45,7 @@ class AppletsCCBDock(applets.AppletsDock):
self.ccbp_group_action.setMenu(ccbp_group_menu) self.ccbp_group_action.setMenu(ccbp_group_menu)
self.table.addAction(self.ccbp_group_action) self.table.addAction(self.ccbp_group_action)
self.table.itemSelectionChanged.connect(self.update_group_ccbp_menu) self.table.itemSelectionChanged.connect(self.update_group_ccbp_menu)
self.update_group_ccbp_menu()
ccbp_global_menu = QtWidgets.QMenu() ccbp_global_menu = QtWidgets.QMenu()
actiongroup = QtWidgets.QActionGroup(self.table) actiongroup = QtWidgets.QActionGroup(self.table)
@@ -149,15 +151,16 @@ class AppletsCCBDock(applets.AppletsDock):
        corresponds to a single group. If ``group`` is ``None`` or an empty
        list, it corresponds to the root.

-        ``command`` gives the command line used to run the applet, as if it
-        was started from a shell. The dashboard substitutes variables such as
-        ``$python`` that gives the complete file name of the Python
-        interpreter running the dashboard.
+        ``command`` gives the command line used to run the applet, as if it was
+        started from a shell. The dashboard substitutes variables such as
+        ``$python`` that gives the complete file name of the Python interpreter
+        running the dashboard.

         If the name already exists (after following any specified groups), the
-        command or code of the existing applet with that name is replaced, and
-        the applet is shown at its previous position. If not, a new applet
-        entry is created and the applet is shown at any position on the screen.
+        command or code of the existing applet with that name is replaced, and
+        the applet is restarted and shown at its previous position. If not, a
+        new applet entry is created and the applet is shown at any position on
+        the screen.

         If the group(s) do not exist, they are created.
@@ -181,9 +184,17 @@ class AppletsCCBDock(applets.AppletsDock):
         else:
             spec = {"ty": "code", "code": code, "command": command}
         if applet is None:
+            logger.debug("Applet %s does not exist: creating", name)
             applet = self.new(name=name, spec=spec, parent=parent)
         else:
-            self.set_spec(applet, spec)
+            if spec != self.get_spec(applet):
+                logger.debug("Applet %s already exists: updating existing spec", name)
+                self.set_spec(applet, spec)
+                if applet.applet_dock:
+                    asyncio.ensure_future(applet.applet_dock.restart())
+            else:
+                logger.debug("Applet %s already exists and no update required", name)
if ccbp == "enable": if ccbp == "enable":
applet.setCheckState(0, QtCore.Qt.Checked) applet.setCheckState(0, QtCore.Qt.Checked)
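The specs handled here come from client control broadcasts issued by experiments; re-issuing the same applet name with a changed command now restarts the running applet instead of leaving a stale one in place. A typical issuer on the experiment side (applet name and command are examples only):

from artiq.experiment import *

class PlotHistogram(EnvExperiment):
    def build(self):
        self.setattr_device("ccb")

    def run(self):
        self.set_dataset("histogram", [0] * 16, broadcast=True)
        # Re-issuing with a different command restarts the applet.
        self.ccb.issue("create_applet", "histogram",
                       "${artiq_applet}plot_hist histogram")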

View File

@@ -3,8 +3,9 @@ import logging
 import numpy as np
 from PyQt5 import QtCore, QtWidgets
+from sipyco import pyon

-from artiq.tools import short_format
+from artiq.tools import scale_from_metadata, short_format, exc_to_warning
 from artiq.gui.tools import LayoutWidget, QRecursiveFilterProxyModel
 from artiq.gui.models import DictSyncTreeSepModel
 from artiq.gui.scientific_spinbox import ScientificSpinBox
@ -13,73 +14,152 @@ from artiq.gui.scientific_spinbox import ScientificSpinBox
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class Editor(QtWidgets.QDialog): async def rename(key, new_key, value, metadata, persist, dataset_ctl):
def __init__(self, parent, dataset_ctl, key, value): if key != new_key:
await dataset_ctl.delete(key)
await dataset_ctl.set(new_key, value, metadata=metadata, persist=persist)
class CreateEditDialog(QtWidgets.QDialog):
def __init__(self, parent, dataset_ctl, key=None, value=None, metadata=None, persist=False):
QtWidgets.QDialog.__init__(self, parent=parent) QtWidgets.QDialog.__init__(self, parent=parent)
self.dataset_ctl = dataset_ctl self.dataset_ctl = dataset_ctl
self.key = key
self.initial_type = type(value)
self.setWindowTitle("Edit dataset") self.setWindowTitle("Create dataset" if key is None else "Edit dataset")
grid = QtWidgets.QGridLayout() grid = QtWidgets.QGridLayout()
grid.setRowMinimumHeight(1, 40)
grid.setColumnMinimumWidth(2, 60)
self.setLayout(grid) self.setLayout(grid)
grid.addWidget(QtWidgets.QLabel("Name:"), 0, 0) grid.addWidget(QtWidgets.QLabel("Name:"), 0, 0)
grid.addWidget(QtWidgets.QLabel(key), 0, 1) self.name_widget = QtWidgets.QLineEdit()
grid.addWidget(self.name_widget, 0, 1)
grid.addWidget(QtWidgets.QLabel("Value:"), 1, 0) grid.addWidget(QtWidgets.QLabel("Value:"), 1, 0)
grid.addWidget(self.get_edit_widget(value), 1, 1) self.value_widget = QtWidgets.QLineEdit()
self.value_widget.setPlaceholderText('PYON (Python)')
grid.addWidget(self.value_widget, 1, 1)
self.data_type = QtWidgets.QLabel("data type")
grid.addWidget(self.data_type, 1, 2)
self.value_widget.textChanged.connect(self.dtype)
buttons = QtWidgets.QDialogButtonBox( grid.addWidget(QtWidgets.QLabel("Unit:"), 2, 0)
QtWidgets.QDialogButtonBox.Ok | QtWidgets.QDialogButtonBox.Cancel) self.unit_widget = QtWidgets.QLineEdit()
grid.setRowStretch(2, 1) grid.addWidget(self.unit_widget, 2, 1)
grid.addWidget(buttons, 3, 0, 1, 2)
buttons.accepted.connect(self.accept) grid.addWidget(QtWidgets.QLabel("Scale:"), 3, 0)
buttons.rejected.connect(self.reject) self.scale_widget = QtWidgets.QLineEdit()
grid.addWidget(self.scale_widget, 3, 1)
grid.addWidget(QtWidgets.QLabel("Precision:"), 4, 0)
self.precision_widget = QtWidgets.QLineEdit()
grid.addWidget(self.precision_widget, 4, 1)
grid.addWidget(QtWidgets.QLabel("Persist:"), 5, 0)
self.box_widget = QtWidgets.QCheckBox()
grid.addWidget(self.box_widget, 5, 1)
self.ok = QtWidgets.QPushButton('&Ok')
self.ok.setEnabled(False)
self.cancel = QtWidgets.QPushButton('&Cancel')
self.buttons = QtWidgets.QDialogButtonBox(self)
self.buttons.addButton(
self.ok, QtWidgets.QDialogButtonBox.AcceptRole)
self.buttons.addButton(
self.cancel, QtWidgets.QDialogButtonBox.RejectRole)
grid.setRowStretch(6, 1)
grid.addWidget(self.buttons, 7, 0, 1, 3, alignment=QtCore.Qt.AlignHCenter)
self.buttons.accepted.connect(self.accept)
self.buttons.rejected.connect(self.reject)
self.key = key
self.name_widget.setText(key)
value_edit_string = self.value_to_edit_string(value)
if metadata is not None:
scale = scale_from_metadata(metadata)
t = value.dtype if value is np.ndarray else type(value)
if scale != 1 and np.issubdtype(t, np.number):
# degenerates to float type
value_edit_string = self.value_to_edit_string(
np.float64(value / scale))
self.unit_widget.setText(metadata.get('unit', ''))
self.scale_widget.setText(str(metadata.get('scale', '')))
self.precision_widget.setText(str(metadata.get('precision', '')))
self.value_widget.setText(value_edit_string)
self.box_widget.setChecked(persist)
def accept(self): def accept(self):
value = self.initial_type(self.get_edit_widget_value()) key = self.name_widget.text()
asyncio.ensure_future(self.dataset_ctl.set(self.key, value)) value = self.value_widget.text()
persist = self.box_widget.isChecked()
unit = self.unit_widget.text()
scale = self.scale_widget.text()
precision = self.precision_widget.text()
metadata = {}
if unit != "":
metadata['unit'] = unit
if scale != "":
metadata['scale'] = float(scale)
if precision != "":
metadata['precision'] = int(precision)
scale = scale_from_metadata(metadata)
value = self.parse_edit_string(value)
t = value.dtype if value is np.ndarray else type(value)
if scale != 1 and np.issubdtype(t, np.number):
# degenerates to float type
value = np.float64(value * scale)
if self.key and self.key != key:
asyncio.ensure_future(exc_to_warning(rename(self.key, key, value, metadata, persist, self.dataset_ctl)))
else:
asyncio.ensure_future(exc_to_warning(self.dataset_ctl.set(key, value, metadata=metadata, persist=persist)))
self.key = key
QtWidgets.QDialog.accept(self) QtWidgets.QDialog.accept(self)
def get_edit_widget(self, initial_value): def dtype(self):
raise NotImplementedError txt = self.value_widget.text()
try:
result = self.parse_edit_string(txt)
# ensure only pyon compatible types are permissable
pyon.encode(result)
except:
pixmap = self.style().standardPixmap(
QtWidgets.QStyle.SP_MessageBoxWarning)
self.data_type.setPixmap(pixmap)
self.ok.setEnabled(False)
else:
self.data_type.setText(type(result).__name__)
self.ok.setEnabled(True)
def get_edit_widget_value(self): @staticmethod
raise NotImplementedError def parse_edit_string(s):
if s == "":
raise TypeError
_eval_dict = {
"__builtins__": {},
"array": np.array,
"null": np.nan,
"inf": np.inf
}
for t_ in pyon._numpy_scalar:
_eval_dict[t_] = eval("np.{}".format(t_), {"np": np})
return eval(s, _eval_dict, {})
@staticmethod
class NumberEditor(Editor): def value_to_edit_string(v):
def get_edit_widget(self, initial_value): t = type(v)
self.edit_widget = ScientificSpinBox() r = ""
self.edit_widget.setDecimals(13) if isinstance(v, np.generic):
self.edit_widget.setPrecision() r += t.__name__
self.edit_widget.setRelativeStep() r += "("
self.edit_widget.setValue(float(initial_value)) r += repr(v)
return self.edit_widget r += ")"
elif v is None:
def get_edit_widget_value(self): return r
return self.edit_widget.value() else:
r += repr(v)
return r
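The Value field accepts PYON-style literals; parse_edit_string evaluates them in a restricted namespace and value_to_edit_string produces the round-trip representation. A trimmed-down sketch of that parsing (omitting the numpy scalar names registered above):

import numpy as np

_eval_dict = {"__builtins__": {}, "array": np.array, "null": np.nan, "inf": np.inf}

def parse(s):
    if s == "":
        raise TypeError
    return eval(s, _eval_dict, {})

parse("3.14")               # float
parse("[1, 2, 3]")          # list
parse("array([1.0, 2.0])")  # numpy array ("array" maps to np.array)
parse("inf")                # float("inf"); builtins are disabled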
class BoolEditor(Editor):
def get_edit_widget(self, initial_value):
self.edit_widget = QtWidgets.QCheckBox()
self.edit_widget.setChecked(bool(initial_value))
return self.edit_widget
def get_edit_widget_value(self):
return self.edit_widget.isChecked()
class StringEditor(Editor):
def get_edit_widget(self, initial_value):
self.edit_widget = QtWidgets.QLineEdit()
self.edit_widget.setText(initial_value)
return self.edit_widget
def get_edit_widget_value(self):
return self.edit_widget.text()
class Model(DictSyncTreeSepModel): class Model(DictSyncTreeSepModel):
@ -92,13 +172,13 @@ class Model(DictSyncTreeSepModel):
if column == 1: if column == 1:
return "Y" if v[0] else "N" return "Y" if v[0] else "N"
elif column == 2: elif column == 2:
return short_format(v[1]) return short_format(v[1], v[2])
else: else:
raise ValueError raise ValueError
class DatasetsDock(QtWidgets.QDockWidget): class DatasetsDock(QtWidgets.QDockWidget):
def __init__(self, datasets_sub, dataset_ctl): def __init__(self, dataset_sub, dataset_ctl):
QtWidgets.QDockWidget.__init__(self, "Datasets") QtWidgets.QDockWidget.__init__(self, "Datasets")
self.setObjectName("Datasets") self.setObjectName("Datasets")
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable | self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
@ -120,6 +200,11 @@ class DatasetsDock(QtWidgets.QDockWidget):
grid.addWidget(self.table, 1, 0) grid.addWidget(self.table, 1, 0)
self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu) self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
create_action = QtWidgets.QAction("New dataset", self.table)
create_action.triggered.connect(self.create_clicked)
create_action.setShortcut("CTRL+N")
create_action.setShortcutContext(QtCore.Qt.WidgetShortcut)
self.table.addAction(create_action)
edit_action = QtWidgets.QAction("Edit dataset", self.table) edit_action = QtWidgets.QAction("Edit dataset", self.table)
edit_action.triggered.connect(self.edit_clicked) edit_action.triggered.connect(self.edit_clicked)
edit_action.setShortcut("RETURN") edit_action.setShortcut("RETURN")
@ -133,7 +218,7 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table.addAction(delete_action) self.table.addAction(delete_action)
self.table_model = Model(dict()) self.table_model = Model(dict())
datasets_sub.add_setmodel_callback(self.set_model) dataset_sub.add_setmodel_callback(self.set_model)
def _search_datasets(self): def _search_datasets(self):
if hasattr(self, "table_model_filter"): if hasattr(self, "table_model_filter"):
@ -146,25 +231,17 @@ class DatasetsDock(QtWidgets.QDockWidget):
self.table_model_filter.setSourceModel(self.table_model) self.table_model_filter.setSourceModel(self.table_model)
self.table.setModel(self.table_model_filter) self.table.setModel(self.table_model_filter)
def create_clicked(self):
CreateEditDialog(self, self.dataset_ctl).open()
def edit_clicked(self): def edit_clicked(self):
idx = self.table.selectedIndexes() idx = self.table.selectedIndexes()
if idx: if idx:
idx = self.table_model_filter.mapToSource(idx[0]) idx = self.table_model_filter.mapToSource(idx[0])
key = self.table_model.index_to_key(idx) key = self.table_model.index_to_key(idx)
if key is not None: if key is not None:
persist, value = self.table_model.backing_store[key] persist, value, metadata = self.table_model.backing_store[key]
t = type(value) CreateEditDialog(self, self.dataset_ctl, key, value, metadata, persist).open()
if np.issubdtype(t, np.number):
dialog_cls = NumberEditor
elif np.issubdtype(t, np.bool_):
dialog_cls = BoolEditor
elif np.issubdtype(t, np.unicode_):
dialog_cls = StringEditor
else:
logger.error("Cannot edit dataset %s: "
"type %s is not supported", key, t)
return
dialog_cls(self, self.dataset_ctl, key, value).open()
def delete_clicked(self): def delete_clicked(self):
idx = self.table.selectedIndexes() idx = self.table.selectedIndexes()

View File

@@ -9,8 +9,11 @@ import h5py
 from sipyco import pyon

-from artiq.gui.tools import LayoutWidget, log_level_to_name, get_open_file_name
 from artiq.gui.entries import procdesc_to_entry, ScanEntry
+from artiq.gui.fuzzy_select import FuzzySelectWidget
+from artiq.gui.tools import (LayoutWidget, WheelFilter,
+                             log_level_to_name, get_open_file_name)
+from artiq.tools import parse_devarg_override, unparse_devarg_override

 logger = logging.getLogger(__name__)
@ -22,15 +25,6 @@ logger = logging.getLogger(__name__)
# 2. file:<class name>@<file name> # 2. file:<class name>@<file name>
class _WheelFilter(QtCore.QObject):
def eventFilter(self, obj, event):
if (event.type() == QtCore.QEvent.Wheel and
event.modifiers() != QtCore.Qt.NoModifier):
event.ignore()
return True
return False
class _ArgumentEditor(QtWidgets.QTreeWidget): class _ArgumentEditor(QtWidgets.QTreeWidget):
def __init__(self, manager, dock, expurl): def __init__(self, manager, dock, expurl):
self.manager = manager self.manager = manager
@ -54,7 +48,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self.setStyleSheet("QTreeWidget {background: " + self.setStyleSheet("QTreeWidget {background: " +
self.palette().midlight().color().name() + " ;}") self.palette().midlight().color().name() + " ;}")
self.viewport().installEventFilter(_WheelFilter(self.viewport())) self.viewport().installEventFilter(WheelFilter(self.viewport(), True))
self._groups = dict() self._groups = dict()
self._arg_to_widgets = dict() self._arg_to_widgets = dict()
@ -158,12 +152,29 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
self._groups[name] = group self._groups[name] = group
return group return group
def update_argument(self, name, argument):
widgets = self._arg_to_widgets[name]
# Qt needs a setItemWidget() to handle layout correctly,
# simply replacing the entry inside the LayoutWidget
# results in a bug.
widgets["entry"].deleteLater()
widgets["entry"] = procdesc_to_entry(argument["desc"])(argument)
widgets["disable_other_scans"].setVisible(
isinstance(widgets["entry"], ScanEntry))
widgets["fix_layout"].deleteLater()
widgets["fix_layout"] = LayoutWidget()
widgets["fix_layout"].addWidget(widgets["entry"])
self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
self.updateGeometries()
def _recompute_argument_clicked(self, name): def _recompute_argument_clicked(self, name):
asyncio.ensure_future(self._recompute_argument(name)) asyncio.ensure_future(self._recompute_argument(name))
async def _recompute_argument(self, name): async def _recompute_argument(self, name):
try: try:
expdesc = await self.manager.compute_expdesc(self.expurl) expdesc, _ = await self.manager.compute_expdesc(self.expurl)
except: except:
logger.error("Could not recompute argument '%s' of '%s'", logger.error("Could not recompute argument '%s' of '%s'",
name, self.expurl, exc_info=True) name, self.expurl, exc_info=True)
@ -174,22 +185,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
state = procdesc_to_entry(procdesc).default_state(procdesc) state = procdesc_to_entry(procdesc).default_state(procdesc)
argument["desc"] = procdesc argument["desc"] = procdesc
argument["state"] = state argument["state"] = state
self.update_argument(name, argument)
# Qt needs a setItemWidget() to handle layout correctly,
# simply replacing the entry inside the LayoutWidget
# results in a bug.
widgets = self._arg_to_widgets[name]
widgets["entry"].deleteLater()
widgets["entry"] = procdesc_to_entry(procdesc)(argument)
widgets["disable_other_scans"].setVisible(
isinstance(widgets["entry"], ScanEntry))
widgets["fix_layout"].deleteLater()
widgets["fix_layout"] = LayoutWidget()
widgets["fix_layout"].addWidget(widgets["entry"])
self.setItemWidget(widgets["widget_item"], 1, widgets["fix_layout"])
self.updateGeometries()
def _disable_other_scans(self, current_name): def _disable_other_scans(self, current_name):
for name, widgets in self._arg_to_widgets.items(): for name, widgets in self._arg_to_widgets.items():
@ -215,6 +211,15 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
pass pass
self.verticalScrollBar().setValue(state["scroll"]) self.verticalScrollBar().setValue(state["scroll"])
# Hooks that allow user-supplied argument editors to react to imminent user
# actions. Here, we always keep the manager-stored submission arguments
# up-to-date, so no further action is required.
def about_to_submit(self):
pass
def about_to_close(self):
pass
log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
@ -240,7 +245,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
self.manager = manager self.manager = manager
self.expurl = expurl self.expurl = expurl
self.argeditor = _ArgumentEditor(self.manager, self, self.expurl) editor_class = self.manager.get_argument_editor_class(expurl)
self.argeditor = editor_class(self.manager, self, self.expurl)
self.layout.addWidget(self.argeditor, 0, 0, 1, 5) self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
self.layout.setRowStretch(0, 1) self.layout.setRowStretch(0, 1)
@ -257,7 +263,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
datetime.setDate(QtCore.QDate.currentDate()) datetime.setDate(QtCore.QDate.currentDate())
else: else:
datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch( datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch(
scheduling["due_date"]*1000)) int(scheduling["due_date"]*1000)))
datetime_en.setChecked(scheduling["due_date"] is not None) datetime_en.setChecked(scheduling["due_date"] is not None)
def update_datetime(dt): def update_datetime(dt):
@ -300,7 +306,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
flush = self.flush flush = self.flush
flush.setToolTip("Flush the pipeline (of current- and higher-priority " flush.setToolTip("Flush the pipeline (of current- and higher-priority "
"experiments) before starting the experiment") "experiments) before starting the experiment")
self.layout.addWidget(flush, 2, 2, 1, 2) self.layout.addWidget(flush, 2, 2)
flush.setChecked(scheduling["flush"]) flush.setChecked(scheduling["flush"])
@ -308,6 +314,20 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
scheduling["flush"] = bool(checked) scheduling["flush"] = bool(checked)
flush.stateChanged.connect(update_flush) flush.stateChanged.connect(update_flush)
devarg_override = QtWidgets.QComboBox()
devarg_override.setEditable(True)
devarg_override.lineEdit().setPlaceholderText("Override device arguments")
devarg_override.lineEdit().setClearButtonEnabled(True)
devarg_override.insertItem(0, "core:analyze_at_run_end=True")
self.layout.addWidget(devarg_override, 2, 3)
devarg_override.setCurrentText(options["devarg_override"])
def update_devarg_override(text):
options["devarg_override"] = text
devarg_override.editTextChanged.connect(update_devarg_override)
self.devarg_override = devarg_override
log_level = QtWidgets.QComboBox() log_level = QtWidgets.QComboBox()
log_level.addItems(log_levels) log_level.addItems(log_levels)
log_level.setCurrentIndex(1) log_level.setCurrentIndex(1)
@ -328,6 +348,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
if "repo_rev" in options: if "repo_rev" in options:
repo_rev = QtWidgets.QLineEdit() repo_rev = QtWidgets.QLineEdit()
repo_rev.setPlaceholderText("current") repo_rev.setPlaceholderText("current")
repo_rev.setClearButtonEnabled(True)
repo_rev_label = QtWidgets.QLabel("Revision:") repo_rev_label = QtWidgets.QLabel("Revision:")
repo_rev_label.setToolTip("Experiment repository revision " repo_rev_label.setToolTip("Experiment repository revision "
"(commit ID) to use") "(commit ID) to use")
@ -368,6 +389,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
self.hdf5_load_directory = os.path.expanduser("~") self.hdf5_load_directory = os.path.expanduser("~")
def submit_clicked(self): def submit_clicked(self):
self.argeditor.about_to_submit()
try: try:
self.manager.submit(self.expurl) self.manager.submit(self.expurl)
except: except:
@ -390,7 +412,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
async def _recompute_arguments_task(self, overrides=dict()): async def _recompute_arguments_task(self, overrides=dict()):
try: try:
expdesc = await self.manager.compute_expdesc(self.expurl) expdesc, ui_name = await self.manager.compute_expdesc(self.expurl)
except: except:
logger.error("Could not recompute experiment description of '%s'", logger.error("Could not recompute experiment description of '%s'",
self.expurl, exc_info=True) self.expurl, exc_info=True)
@ -403,12 +425,13 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
arginfo[k][0]["default"].insert(0, v) arginfo[k][0]["default"].insert(0, v)
else: else:
arginfo[k][0]["default"] = v arginfo[k][0]["default"] = v
self.manager.initialize_submission_arguments(self.expurl, arginfo) self.manager.initialize_submission_arguments(self.expurl, arginfo, ui_name)
argeditor_state = self.argeditor.save_state() argeditor_state = self.argeditor.save_state()
self.argeditor.deleteLater() self.argeditor.deleteLater()
self.argeditor = _ArgumentEditor(self.manager, self, self.expurl) editor_class = self.manager.get_argument_editor_class(self.expurl)
self.argeditor = editor_class(self.manager, self, self.expurl)
self.argeditor.restore_state(argeditor_state) self.argeditor.restore_state(argeditor_state)
self.layout.addWidget(self.argeditor, 0, 0, 1, 5) self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
@ -421,7 +444,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
async def _recompute_sched_options_task(self): async def _recompute_sched_options_task(self):
try: try:
expdesc = await self.manager.compute_expdesc(self.expurl) expdesc, _ = await self.manager.compute_expdesc(self.expurl)
except: except:
logger.error("Could not recompute experiment description of '%s'", logger.error("Could not recompute experiment description of '%s'",
self.expurl, exc_info=True) self.expurl, exc_info=True)
@ -458,6 +481,9 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
return return
try: try:
if "devarg_override" in expid:
self.devarg_override.setCurrentText(
unparse_devarg_override(expid["devarg_override"]))
self.log_level.setCurrentIndex(log_levels.index( self.log_level.setCurrentIndex(log_levels.index(
log_level_to_name(expid["log_level"]))) log_level_to_name(expid["log_level"])))
if ("repo_rev" in expid and if ("repo_rev" in expid and
@ -472,6 +498,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
await self._recompute_arguments_task(arguments) await self._recompute_arguments_task(arguments)
def closeEvent(self, event): def closeEvent(self, event):
self.argeditor.about_to_close()
self.sigClosed.emit() self.sigClosed.emit()
QtWidgets.QMdiSubWindow.closeEvent(self, event) QtWidgets.QMdiSubWindow.closeEvent(self, event)
@ -488,8 +515,68 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
self.hdf5_load_directory = state["hdf5_load_directory"] self.hdf5_load_directory = state["hdf5_load_directory"]
class _QuickOpenDialog(QtWidgets.QDialog):
"""Modal dialog for opening/submitting experiments from a
FuzzySelectWidget."""
closed = QtCore.pyqtSignal()
def __init__(self, manager):
super().__init__(manager.main_window)
self.setModal(True)
self.manager = manager
self.setWindowTitle("Quick open...")
layout = QtWidgets.QGridLayout(self)
layout.setSpacing(0)
layout.setContentsMargins(0, 0, 0, 0)
self.setLayout(layout)
# Find matching experiment names. Open experiments are preferred to
# matches from the repository to ease quick window switching.
open_exps = list(self.manager.open_experiments.keys())
repo_exps = set("repo:" + k
for k in self.manager.explist.keys()) - set(open_exps)
choices = [(o, 100) for o in open_exps] + [(r, 0) for r in repo_exps]
self.select_widget = FuzzySelectWidget(choices)
layout.addWidget(self.select_widget)
self.select_widget.aborted.connect(self.close)
self.select_widget.finished.connect(self._open_experiment)
font_metrics = QtGui.QFontMetrics(self.select_widget.line_edit.font())
self.select_widget.setMinimumWidth(font_metrics.averageCharWidth() * 70)
def done(self, r):
if self.select_widget:
self.select_widget.abort()
self.closed.emit()
QtWidgets.QDialog.done(self, r)
def _open_experiment(self, exp_name, modifiers):
if modifiers & QtCore.Qt.ControlModifier:
try:
self.manager.submit(exp_name)
except:
# Not all open_experiments necessarily still exist in the explist
# (e.g. if the repository has been re-scanned since).
logger.warning("failed to submit experiment '%s'",
exp_name,
exc_info=True)
else:
self.manager.open_experiment(exp_name)
self.close()
class ExperimentManager: class ExperimentManager:
def __init__(self, main_window, #: Global registry for custom argument editor classes, indexed by the experiment
#: `argument_ui` string; can be populated by dashboard plugins such as ndscan.
#: If no handler for a requested UI name is found, the default built-in argument
#: editor will be used.
argument_ui_classes = dict()
def __init__(self, main_window, dataset_sub,
explist_sub, schedule_sub, explist_sub, schedule_sub,
schedule_ctl, experiment_db_ctl): schedule_ctl, experiment_db_ctl):
self.main_window = main_window self.main_window = main_window
@ -500,7 +587,10 @@ class ExperimentManager:
self.submission_scheduling = dict() self.submission_scheduling = dict()
self.submission_options = dict() self.submission_options = dict()
self.submission_arguments = dict() self.submission_arguments = dict()
self.argument_ui_names = dict()
self.datasets = dict()
dataset_sub.add_setmodel_callback(self.set_dataset_model)
self.explist = dict() self.explist = dict()
explist_sub.add_setmodel_callback(self.set_explist_model) explist_sub.add_setmodel_callback(self.set_explist_model)
self.schedule = dict() self.schedule = dict()
@ -508,6 +598,16 @@ class ExperimentManager:
self.open_experiments = dict() self.open_experiments = dict()
self.is_quick_open_shown = False
quick_open_shortcut = QtWidgets.QShortcut(
QtCore.Qt.CTRL + QtCore.Qt.Key_P,
main_window)
quick_open_shortcut.setContext(QtCore.Qt.ApplicationShortcut)
quick_open_shortcut.activated.connect(self.show_quick_open)
def set_dataset_model(self, model):
self.datasets = model
def set_explist_model(self, model): def set_explist_model(self, model):
self.explist = model.backing_store self.explist = model.backing_store
@ -524,6 +624,17 @@ class ExperimentManager:
else: else:
raise ValueError("Malformed experiment URL") raise ValueError("Malformed experiment URL")
def get_argument_editor_class(self, expurl):
ui_name = self.argument_ui_names.get(expurl, None)
if not ui_name and expurl[:5] == "repo:":
ui_name = self.explist.get(expurl[5:], {}).get("argument_ui", None)
if ui_name:
result = self.argument_ui_classes.get(ui_name, None)
if result:
return result
logger.warning("Ignoring unknown argument UI '%s'", ui_name)
return _ArgumentEditor
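Dashboard plugins register their editors once at start-up; experiments then opt in by exposing an argument_ui attribute, and unknown names fall back to the built-in editor with a warning. A hedged sketch of such a registration (MyArgumentEditor is hypothetical and must provide the same hooks as _ArgumentEditor: save_state, restore_state, about_to_submit, about_to_close):

from PyQt5 import QtWidgets
from artiq.dashboard.experiments import ExperimentManager

class MyArgumentEditor(QtWidgets.QTreeWidget):
    def __init__(self, manager, dock, expurl):
        super().__init__()
        ...  # build the custom argument UI here

# Experiments declaring `argument_ui = "my_ui"` will be shown this editor.
ExperimentManager.argument_ui_classes["my_ui"] = MyArgumentEditor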
def get_submission_scheduling(self, expurl): def get_submission_scheduling(self, expurl):
if expurl in self.submission_scheduling: if expurl in self.submission_scheduling:
return self.submission_scheduling[expurl] return self.submission_scheduling[expurl]
@ -546,14 +657,15 @@ class ExperimentManager:
else: else:
# mutated by _ExperimentDock # mutated by _ExperimentDock
options = { options = {
"log_level": logging.WARNING "log_level": logging.WARNING,
"devarg_override": ""
} }
if expurl[:5] == "repo:": if expurl[:5] == "repo:":
options["repo_rev"] = None options["repo_rev"] = None
self.submission_options[expurl] = options self.submission_options[expurl] = options
return options return options
def initialize_submission_arguments(self, expurl, arginfo): def initialize_submission_arguments(self, expurl, arginfo, ui_name):
arguments = OrderedDict() arguments = OrderedDict()
for name, (procdesc, group, tooltip) in arginfo.items(): for name, (procdesc, group, tooltip) in arginfo.items():
state = procdesc_to_entry(procdesc).default_state(procdesc) state = procdesc_to_entry(procdesc).default_state(procdesc)
@ -564,7 +676,22 @@ class ExperimentManager:
"state": state, # mutated by entries "state": state, # mutated by entries
} }
self.submission_arguments[expurl] = arguments self.submission_arguments[expurl] = arguments
self.argument_ui_names[expurl] = ui_name
return arguments return arguments
def set_argument_value(self, expurl, name, value):
try:
argument = self.submission_arguments[expurl][name]
if argument["desc"]["ty"] == "Scannable":
ty = value["ty"]
argument["state"]["selected"] = ty
argument["state"][ty] = value
else:
argument["state"] = value
if expurl in self.open_experiments.keys():
self.open_experiments[expurl].argeditor.update_argument(name, argument)
except:
logger.warn("Failed to set value for argument \"{}\" in experiment: {}.".format(name, expurl), exc_info=1)
def get_submission_arguments(self, expurl): def get_submission_arguments(self, expurl):
if expurl in self.submission_arguments: if expurl in self.submission_arguments:
@ -573,9 +700,9 @@ class ExperimentManager:
if expurl[:5] != "repo:": if expurl[:5] != "repo:":
raise ValueError("Submission arguments must be preinitialized " raise ValueError("Submission arguments must be preinitialized "
"when not using repository") "when not using repository")
arginfo = self.explist[expurl[5:]]["arginfo"] class_desc = self.explist[expurl[5:]]
arguments = self.initialize_submission_arguments(expurl, arginfo) return self.initialize_submission_arguments(expurl,
return arguments class_desc["arginfo"], class_desc.get("argument_ui", None))
def open_experiment(self, expurl): def open_experiment(self, expurl):
if expurl in self.open_experiments: if expurl in self.open_experiments:
@ -626,7 +753,14 @@ class ExperimentManager:
entry_cls = procdesc_to_entry(argument["desc"]) entry_cls = procdesc_to_entry(argument["desc"])
argument_values[name] = entry_cls.state_to_value(argument["state"]) argument_values[name] = entry_cls.state_to_value(argument["state"])
try:
devarg_override = parse_devarg_override(options["devarg_override"])
except:
logger.error("Failed to parse device argument overrides for %s", expurl)
return
expid = { expid = {
"devarg_override": devarg_override,
"log_level": options["log_level"], "log_level": options["log_level"],
"file": file, "file": file,
"class_name": class_name, "class_name": class_name,
@ -664,7 +798,7 @@ class ExperimentManager:
else: else:
repo_match = "repo_rev" not in expid repo_match = "repo_rev" not in expid
if (repo_match and if (repo_match and
expid["file"] == file and ("file" in expid and expid["file"] == file) and
expid["class_name"] == class_name): expid["class_name"] == class_name):
rids.append(rid) rids.append(rid)
asyncio.ensure_future(self._request_term_multiple(rids)) asyncio.ensure_future(self._request_term_multiple(rids))
@ -677,13 +811,15 @@ class ExperimentManager:
revision = None revision = None
description = await self.experiment_db_ctl.examine( description = await self.experiment_db_ctl.examine(
file, use_repository, revision) file, use_repository, revision)
return description[class_name] class_desc = description[class_name]
return class_desc, class_desc.get("argument_ui", None)
async def open_file(self, file): async def open_file(self, file):
description = await self.experiment_db_ctl.examine(file, False) description = await self.experiment_db_ctl.examine(file, False)
for class_name, class_desc in description.items(): for class_name, class_desc in description.items():
expurl = "file:{}@{}".format(class_name, file) expurl = "file:{}@{}".format(class_name, file)
self.initialize_submission_arguments(expurl, class_desc["arginfo"]) self.initialize_submission_arguments(expurl, class_desc["arginfo"],
class_desc.get("argument_ui", None))
if expurl in self.open_experiments: if expurl in self.open_experiments:
self.open_experiments[expurl].close() self.open_experiments[expurl].close()
self.open_experiment(expurl) self.open_experiment(expurl)
@ -696,6 +832,7 @@ class ExperimentManager:
"options": self.submission_options, "options": self.submission_options,
"arguments": self.submission_arguments, "arguments": self.submission_arguments,
"docks": self.dock_states, "docks": self.dock_states,
"argument_uis": self.argument_ui_names,
"open_docks": set(self.open_experiments.keys()) "open_docks": set(self.open_experiments.keys())
} }
@ -706,5 +843,17 @@ class ExperimentManager:
self.submission_scheduling = state["scheduling"] self.submission_scheduling = state["scheduling"]
self.submission_options = state["options"] self.submission_options = state["options"]
self.submission_arguments = state["arguments"] self.submission_arguments = state["arguments"]
self.argument_ui_names = state.get("argument_uis", {})
for expurl in state["open_docks"]: for expurl in state["open_docks"]:
self.open_experiment(expurl) self.open_experiment(expurl)
def show_quick_open(self):
if self.is_quick_open_shown:
return
self.is_quick_open_shown = True
dialog = _QuickOpenDialog(self)
def closed():
self.is_quick_open_shown = False
dialog.closed.connect(closed)
dialog.show()

View File

@ -159,7 +159,7 @@ class WaitingPanel(LayoutWidget):
class ExplorerDock(QtWidgets.QDockWidget): class ExplorerDock(QtWidgets.QDockWidget):
def __init__(self, exp_manager, d_shortcuts, def __init__(self, exp_manager, d_shortcuts,
explist_sub, explist_status_sub, explist_sub, explist_status_sub,
schedule_ctl, experiment_db_ctl): schedule_ctl, experiment_db_ctl, device_db_ctl):
QtWidgets.QDockWidget.__init__(self, "Explorer") QtWidgets.QDockWidget.__init__(self, "Explorer")
self.setObjectName("Explorer") self.setObjectName("Explorer")
self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable | self.setFeatures(QtWidgets.QDockWidget.DockWidgetMovable |
@ -251,6 +251,12 @@ class ExplorerDock(QtWidgets.QDockWidget):
scan_repository_action.triggered.connect(scan_repository) scan_repository_action.triggered.connect(scan_repository)
self.el.addAction(scan_repository_action) self.el.addAction(scan_repository_action)
scan_ddb_action = QtWidgets.QAction("Scan device database", self.el)
def scan_ddb():
asyncio.ensure_future(device_db_ctl.scan())
scan_ddb_action.triggered.connect(scan_ddb)
self.el.addAction(scan_ddb_action)
self.current_directory = "" self.current_directory = ""
open_file_action = QtWidgets.QAction("Open file outside repository", open_file_action = QtWidgets.QAction("Open file outside repository",
self.el) self.el)

View File

@ -1,5 +1,6 @@
import asyncio import asyncio
import logging import logging
import textwrap
from collections import namedtuple from collections import namedtuple
from PyQt5 import QtCore, QtWidgets, QtGui from PyQt5 import QtCore, QtWidgets, QtGui
@ -7,12 +8,27 @@ from PyQt5 import QtCore, QtWidgets, QtGui
from sipyco.sync_struct import Subscriber from sipyco.sync_struct import Subscriber
from artiq.coredevice.comm_moninj import * from artiq.coredevice.comm_moninj import *
from artiq.coredevice.ad9910 import (
_AD9910_REG_PROFILE0, _AD9910_REG_PROFILE7,
_AD9910_REG_FTW, _AD9910_REG_CFR1
)
from artiq.coredevice.ad9912_reg import AD9912_POW1, AD9912_SER_CONF
from artiq.gui.tools import LayoutWidget from artiq.gui.tools import LayoutWidget
from artiq.gui.flowlayout import FlowLayout from artiq.gui.flowlayout import FlowLayout
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class _CancellableLineEdit(QtWidgets.QLineEdit):
def escapePressedConnect(self, cb):
self.esc_cb = cb
def keyPressEvent(self, event):
key = event.key()
if key == QtCore.Qt.Key_Escape:
self.esc_cb(event)
QtWidgets.QLineEdit.keyPressEvent(self, event)
class _TTLWidget(QtWidgets.QFrame): class _TTLWidget(QtWidgets.QFrame):
def __init__(self, dm, channel, force_out, title): def __init__(self, dm, channel, force_out, title):
@ -168,15 +184,172 @@ class _SimpleDisplayWidget(QtWidgets.QFrame):
raise NotImplementedError raise NotImplementedError
class _DDSWidget(_SimpleDisplayWidget): class _DDSModel:
def __init__(self, dm, bus_channel, channel, title): def __init__(self, dds_type, ref_clk, cpld=None, pll=1, clk_div=0):
self.cpld = cpld
self.cur_frequency = 0
self.cur_reg = 0
self.dds_type = dds_type
self.is_urukul = dds_type in ["AD9910", "AD9912"]
if dds_type == "AD9914":
self.ftw_per_hz = 2**32 / ref_clk
else:
if dds_type == "AD9910":
max_freq = 1 << 32
clk_mult = [4, 1, 2, 4]
elif dds_type == "AD9912": # AD9912
max_freq = 1 << 48
clk_mult = [1, 1, 2, 4]
else:
raise NotImplementedError
sysclk = ref_clk / clk_mult[clk_div] * pll
self.ftw_per_hz = 1 / sysclk * max_freq
def monitor_update(self, probe, value):
if self.dds_type == "AD9912":
value = value << 16
self.cur_frequency = self._ftw_to_freq(value)
def _ftw_to_freq(self, ftw):
return ftw / self.ftw_per_hz
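The model reproduces the driver's frequency scaling: sysclk = ref_clk / clk_mult[clk_div] * pll and ftw_per_hz = 2**32 / sysclk (2**48 for the AD9912, whose monitor value is shifted left by 16 first). A quick numeric check for a common AD9910 configuration (125 MHz reference, PLL multiplier 32, clk_div 0):

ref_clk, pll, clk_div = 125e6, 32, 0
clk_mult = [4, 1, 2, 4]                      # AD9910 input divider settings
sysclk = ref_clk / clk_mult[clk_div] * pll   # 1 GHz
ftw_per_hz = (1 << 32) / sysclk

ftw = 0x19999999                             # example monitor probe value
print(ftw / ftw_per_hz / 1e6, "MHz")         # ~100 MHz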
class _DDSWidget(QtWidgets.QFrame):
def __init__(self, dm, title, bus_channel=0, channel=0, dds_model=None):
self.dm = dm
self.bus_channel = bus_channel self.bus_channel = bus_channel
self.channel = channel self.channel = channel
self.dds_name = title
self.cur_frequency = 0 self.cur_frequency = 0
_SimpleDisplayWidget.__init__(self, title) self.dds_model = dds_model
QtWidgets.QFrame.__init__(self)
self.setFrameShape(QtWidgets.QFrame.Box)
self.setFrameShadow(QtWidgets.QFrame.Raised)
grid = QtWidgets.QGridLayout()
grid.setContentsMargins(0, 0, 0, 0)
grid.setHorizontalSpacing(0)
grid.setVerticalSpacing(0)
self.setLayout(grid)
label = QtWidgets.QLabel(title)
label.setAlignment(QtCore.Qt.AlignCenter)
grid.addWidget(label, 1, 1)
# FREQ DATA/EDIT FIELD
self.data_stack = QtWidgets.QStackedWidget()
# page 1: display data
grid_disp = LayoutWidget()
grid_disp.layout.setContentsMargins(0, 0, 0, 0)
grid_disp.layout.setHorizontalSpacing(0)
grid_disp.layout.setVerticalSpacing(0)
self.value_label = QtWidgets.QLabel()
self.value_label.setAlignment(QtCore.Qt.AlignCenter)
grid_disp.addWidget(self.value_label, 0, 1, 1, 2)
unit = QtWidgets.QLabel("MHz")
unit.setAlignment(QtCore.Qt.AlignCenter)
grid_disp.addWidget(unit, 0, 3, 1, 1)
self.data_stack.addWidget(grid_disp)
# page 2: edit data
grid_edit = LayoutWidget()
grid_edit.layout.setContentsMargins(0, 0, 0, 0)
grid_edit.layout.setHorizontalSpacing(0)
grid_edit.layout.setVerticalSpacing(0)
self.value_edit = _CancellableLineEdit(self)
self.value_edit.setAlignment(QtCore.Qt.AlignRight)
grid_edit.addWidget(self.value_edit, 0, 1, 1, 2)
unit = QtWidgets.QLabel("MHz")
unit.setAlignment(QtCore.Qt.AlignCenter)
grid_edit.addWidget(unit, 0, 3, 1, 1)
self.data_stack.addWidget(grid_edit)
grid.addWidget(self.data_stack, 2, 1)
# BUTTONS
self.button_stack = QtWidgets.QStackedWidget()
# page 1: SET button
set_grid = LayoutWidget()
set_btn = QtWidgets.QToolButton()
set_btn.setText("Set")
set_btn.setToolTip("Set frequency")
set_grid.addWidget(set_btn, 0, 1, 1, 1)
# for urukuls also allow switching off RF
if self.dds_model.is_urukul:
off_btn = QtWidgets.QToolButton()
off_btn.setText("Off")
off_btn.setToolTip("Switch off the output")
set_grid.addWidget(off_btn, 0, 2, 1, 1)
self.button_stack.addWidget(set_grid)
# page 2: apply/cancel buttons
apply_grid = LayoutWidget()
apply = QtWidgets.QToolButton()
apply.setText("Apply")
apply.setToolTip("Apply changes")
apply_grid.addWidget(apply, 0, 1, 1, 1)
cancel = QtWidgets.QToolButton()
cancel.setText("Cancel")
cancel.setToolTip("Cancel changes")
apply_grid.addWidget(cancel, 0, 2, 1, 1)
self.button_stack.addWidget(apply_grid)
grid.addWidget(self.button_stack, 3, 1)
grid.setRowStretch(1, 1)
grid.setRowStretch(2, 1)
grid.setRowStretch(3, 1)
set_btn.clicked.connect(self.set_clicked)
apply.clicked.connect(self.apply_changes)
if self.dds_model.is_urukul:
off_btn.clicked.connect(self.off_clicked)
off_btn.setToolTip(textwrap.dedent(
"""Note: If TTL RTIO sw for the channel is switched high,
this button will not disable the channel.
Use the TTL override instead."""))
self.value_edit.returnPressed.connect(lambda: self.apply_changes(None))
self.value_edit.escapePressedConnect(self.cancel_changes)
cancel.clicked.connect(self.cancel_changes)
self.refresh_display()
def set_clicked(self, set):
self.data_stack.setCurrentIndex(1)
self.button_stack.setCurrentIndex(1)
self.value_edit.setText("{:.7f}"
.format(self.cur_frequency/1e6))
self.value_edit.setFocus()
self.value_edit.selectAll()
def off_clicked(self, set):
self.dm.dds_channel_toggle(self.dds_name, self.dds_model, sw=False)
def apply_changes(self, apply):
self.data_stack.setCurrentIndex(0)
self.button_stack.setCurrentIndex(0)
frequency = float(self.value_edit.text())*1e6
self.dm.dds_set_frequency(self.dds_name, self.dds_model, frequency)
def cancel_changes(self, cancel):
self.data_stack.setCurrentIndex(0)
self.button_stack.setCurrentIndex(0)
def refresh_display(self): def refresh_display(self):
self.value.setText("<font size=\"4\">{:.7f}</font><font size=\"2\"> MHz</font>" self.cur_frequency = self.dds_model.cur_frequency
self.value_label.setText("<font size=\"4\">{:.7f}</font>"
.format(self.cur_frequency/1e6))
self.value_edit.setText("{:.7f}"
.format(self.cur_frequency/1e6)) .format(self.cur_frequency/1e6))
def sort_key(self): def sort_key(self):
@ -202,51 +375,74 @@ _WidgetDesc = namedtuple("_WidgetDesc", "uid comment cls arguments")
def setup_from_ddb(ddb): def setup_from_ddb(ddb):
core_addr = None mi_addr = None
mi_port = None
dds_sysclk = None dds_sysclk = None
description = set() description = set()
for k, v in ddb.items(): for k, v in ddb.items():
comment = None
if "comment" in v:
comment = v["comment"]
try: try:
if isinstance(v, dict) and v["type"] == "local": if isinstance(v, dict):
if k == "core": comment = v.get("comment")
core_addr = v["arguments"]["host"] if v["type"] == "local":
elif v["module"] == "artiq.coredevice.ttl": if v["module"] == "artiq.coredevice.ttl":
channel = v["arguments"]["channel"] if "ttl_urukul" in k:
force_out = v["class"] == "TTLOut" continue
widget = _WidgetDesc(k, comment, _TTLWidget, (channel, force_out, k)) channel = v["arguments"]["channel"]
description.add(widget) force_out = v["class"] == "TTLOut"
elif (v["module"] == "artiq.coredevice.ad9914" widget = _WidgetDesc(k, comment, _TTLWidget, (channel, force_out, k))
and v["class"] == "AD9914"):
bus_channel = v["arguments"]["bus_channel"]
channel = v["arguments"]["channel"]
dds_sysclk = v["arguments"]["sysclk"]
widget = _WidgetDesc(k, comment, _DDSWidget, (bus_channel, channel, k))
description.add(widget)
elif ( (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53XX")
or (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino")):
spi_device = v["arguments"]["spi_device"]
spi_device = ddb[spi_device]
while isinstance(spi_device, str):
spi_device = ddb[spi_device]
spi_channel = spi_device["arguments"]["channel"]
for channel in range(32):
widget = _WidgetDesc((k, channel), comment, _DACWidget, (spi_channel, channel, k))
description.add(widget) description.add(widget)
elif (v["module"] == "artiq.coredevice.ad9914"
and v["class"] == "AD9914"):
bus_channel = v["arguments"]["bus_channel"]
channel = v["arguments"]["channel"]
dds_sysclk = v["arguments"]["sysclk"]
model = _DDSModel(v["class"], dds_sysclk)
widget = _WidgetDesc(k, comment, _DDSWidget, (k, bus_channel, channel, model))
description.add(widget)
elif (v["module"] == "artiq.coredevice.ad9910"
and v["class"] == "AD9910") or \
(v["module"] == "artiq.coredevice.ad9912"
and v["class"] == "AD9912"):
channel = v["arguments"]["chip_select"] - 4
if channel < 0:
continue
dds_cpld = v["arguments"]["cpld_device"]
spi_dev = ddb[dds_cpld]["arguments"]["spi_device"]
bus_channel = ddb[spi_dev]["arguments"]["channel"]
pll = v["arguments"]["pll_n"]
refclk = ddb[dds_cpld]["arguments"]["refclk"]
clk_div = v["arguments"].get("clk_div", 0)
model = _DDSModel( v["class"], refclk, dds_cpld, pll, clk_div)
widget = _WidgetDesc(k, comment, _DDSWidget, (k, bus_channel, channel, model))
description.add(widget)
elif ( (v["module"] == "artiq.coredevice.ad53xx" and v["class"] == "AD53xx")
or (v["module"] == "artiq.coredevice.zotino" and v["class"] == "Zotino")):
spi_device = v["arguments"]["spi_device"]
spi_device = ddb[spi_device]
while isinstance(spi_device, str):
spi_device = ddb[spi_device]
spi_channel = spi_device["arguments"]["channel"]
for channel in range(32):
widget = _WidgetDesc((k, channel), comment, _DACWidget, (spi_channel, channel, k))
description.add(widget)
elif v["type"] == "controller" and k == "core_moninj":
mi_addr = v["host"]
mi_port = v.get("port_proxy", 1383)
except KeyError: except KeyError:
pass pass
return core_addr, dds_sysclk, description return mi_addr, mi_port, description
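Moninj now goes through the proxy controller rather than the core device directly: setup_from_ddb looks for a controller entry named core_moninj and takes its host and port_proxy (1383 if absent). A minimal device_db entry of that shape (addresses are illustrative):

device_db["core_moninj"] = {
    "type": "controller",
    "host": "::1",          # where the moninj proxy controller runs
    "port_proxy": 1383,     # read by the dashboard; defaults to 1383
}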
class _DeviceManager: class _DeviceManager:
def __init__(self): def __init__(self, schedule_ctl):
self.core_addr = None self.mi_addr = None
self.reconnect_core = asyncio.Event() self.mi_port = None
self.core_connection = None self.reconnect_mi = asyncio.Event()
self.core_connector_task = asyncio.ensure_future(self.core_connector()) self.mi_connection = None
self.mi_connector_task = asyncio.ensure_future(self.mi_connector())
self.schedule_ctl = schedule_ctl
self.ddb = dict() self.ddb = dict()
self.description = set() self.description = set()
@ -265,13 +461,12 @@ class _DeviceManager:
return ddb
def notify(self, mod):
mi_addr, mi_port, description = setup_from_ddb(self.ddb)
if (mi_addr, mi_port) != (self.mi_addr, self.mi_port):
self.mi_addr = mi_addr
self.mi_port = mi_port
self.reconnect_mi.set()
for to_remove in self.description - description:
widget = self.widgets_by_uid[to_remove.uid]
@ -319,44 +514,172 @@ class _DeviceManager:
self.description = description
def ttl_set_mode(self, channel, mode):
if self.mi_connection is not None:
widget = self.ttl_widgets[channel]
if mode == "0":
widget.cur_override = True
widget.cur_level = False
self.mi_connection.inject(channel, TTLOverride.level.value, 0)
self.mi_connection.inject(channel, TTLOverride.oe.value, 1)
self.mi_connection.inject(channel, TTLOverride.en.value, 1)
elif mode == "1":
widget.cur_override = True
widget.cur_level = True
self.mi_connection.inject(channel, TTLOverride.level.value, 1)
self.mi_connection.inject(channel, TTLOverride.oe.value, 1)
self.mi_connection.inject(channel, TTLOverride.en.value, 1)
elif mode == "exp":
widget.cur_override = False
self.mi_connection.inject(channel, TTLOverride.en.value, 0)
else:
raise ValueError
# override state may have changed
widget.refresh_display()
async def _submit_by_content(self, content, class_name, title):
expid = {
"log_level": logging.WARNING,
"content": content,
"class_name": class_name,
"arguments": {}
}
scheduling = {
"pipeline_name": "main",
"priority": 0,
"due_date": None,
"flush": False
}
rid = await self.schedule_ctl.submit(
scheduling["pipeline_name"],
expid,
scheduling["priority"], scheduling["due_date"],
scheduling["flush"])
logger.info("Submitted '%s', RID is %d", title, rid)
def _dds_faux_injection(self, dds_channel, dds_model, action, title, log_msg):
# build a one-off kernel experiment as source text and submit it by content
# initialize the CPLD first (if applicable)
if dds_model.is_urukul:
# urukuls need CPLD init and switch to on
cpld_dev = """self.setattr_device("core_cache")
self.setattr_device("{}")""".format(dds_model.cpld)
# the `sta`/`rf_sw` variables are guaranteed to exist for Urukuls,
# so {action} can use them
# if no RF switch is enabled, the CPLD may not have been initialized yet;
# if one is, it has already been initialized - no need to do it again
cpld_init = """delay(15*ms)
was_init = self.core_cache.get("_{cpld}_init")
sta = self.{cpld}.sta_read()
rf_sw = urukul_sta_rf_sw(sta)
if rf_sw == 0 and len(was_init) == 0:
delay(15*ms)
self.{cpld}.init()
self.core_cache.put("_{cpld}_init", [1])
""".format(cpld=dds_model.cpld)
else:
cpld_dev = ""
cpld_init = ""
# AD9912/9910: init channel (if uninitialized)
if dds_model.dds_type == "AD9912":
# 0xFF before init, 0x99 after
channel_init = """
if self.{dds_channel}.read({cfgreg}, length=1) == 0xFF:
delay(10*ms)
self.{dds_channel}.init()
""".format(dds_channel=dds_channel, cfgreg=AD9912_SER_CONF)
elif dds_model.dds_type == "AD9910":
# -1 before init, 2 after
channel_init = """
if self.{dds_channel}.read32({cfgreg}) == -1:
delay(10*ms)
self.{dds_channel}.init()
""".format(dds_channel=dds_channel, cfgreg=AD9912_SER_CONF)
else:
channel_init = "self.{dds_channel}.init()".format(dds_channel=dds_channel)
dds_exp = textwrap.dedent("""
from artiq.experiment import *
from artiq.coredevice.urukul import *
class {title}(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("{dds_channel}")
{cpld_dev}
@kernel
def run(self):
self.core.break_realtime()
{cpld_init}
delay(10*ms)
{channel_init}
delay(15*ms)
{action}
""".format(title=title, action=action,
dds_channel=dds_channel,
cpld_dev=cpld_dev, cpld_init=cpld_init,
channel_init=channel_init))
asyncio.ensure_future(
self._submit_by_content(
dds_exp,
title,
log_msg))
def dds_set_frequency(self, dds_channel, dds_model, freq):
action = "self.{ch}.set({freq})".format(
freq=freq, ch=dds_channel)
if dds_model.is_urukul:
action += """
ch_no = self.{ch}.chip_select - 4
self.{cpld}.cfg_switches(rf_sw | 1 << ch_no)
""".format(ch=dds_channel, cpld=dds_model.cpld)
self._dds_faux_injection(
dds_channel,
dds_model,
action,
"SetDDS",
"Set DDS {} {}MHz".format(dds_channel, freq/1e6))
def dds_channel_toggle(self, dds_channel, dds_model, sw=True):
# urukul only
if sw:
switch = "| 1 << ch_no"
else:
switch = "& ~(1 << ch_no)"
action = """
ch_no = self.{dds_channel}.chip_select - 4
self.{cpld}.cfg_switches(rf_sw {switch})
""".format(
dds_channel=dds_channel,
cpld=dds_model.cpld,
switch=switch
)
self._dds_faux_injection(
dds_channel,
dds_model,
action,
"ToggleDDS",
"Toggle DDS {} {}".format(dds_channel, "on" if sw else "off"))
def setup_ttl_monitoring(self, enable, channel):
if self.mi_connection is not None:
self.mi_connection.monitor_probe(enable, channel, TTLProbe.level.value)
self.mi_connection.monitor_probe(enable, channel, TTLProbe.oe.value)
self.mi_connection.monitor_injection(enable, channel, TTLOverride.en.value)
self.mi_connection.monitor_injection(enable, channel, TTLOverride.level.value)
if enable:
self.mi_connection.get_injection_status(channel, TTLOverride.en.value)
def setup_dds_monitoring(self, enable, bus_channel, channel):
if self.mi_connection is not None:
self.mi_connection.monitor_probe(enable, bus_channel, channel)
def setup_dac_monitoring(self, enable, spi_channel, channel):
if self.mi_connection is not None:
self.mi_connection.monitor_probe(enable, spi_channel, channel)
def monitor_cb(self, channel, probe, value):
if channel in self.ttl_widgets:
@ -366,11 +689,11 @@ class _DeviceManager:
elif probe == TTLProbe.oe.value:
widget.cur_oe = bool(value)
widget.refresh_display()
elif (channel, probe) in self.dds_widgets:
widget = self.dds_widgets[(channel, probe)]
widget.dds_model.monitor_update(probe, value)
widget.refresh_display()
elif (channel, probe) in self.dac_widgets:
widget = self.dac_widgets[(channel, probe)]
widget.cur_value = value
widget.refresh_display()
@ -385,26 +708,31 @@ class _DeviceManager:
widget.refresh_display()
def disconnect_cb(self):
logger.error("lost connection to moninj")
self.reconnect_mi.set()
async def mi_connector(self):
while True:
await self.reconnect_mi.wait()
self.reconnect_mi.clear()
if self.mi_connection is not None:
await self.mi_connection.close()
self.mi_connection = None
new_mi_connection = CommMonInj(self.monitor_cb, self.injection_status_cb,
self.disconnect_cb)
try:
await new_mi_connection.connect(self.mi_addr, self.mi_port)
except asyncio.CancelledError:
logger.info("cancelled connection to moninj")
break
except:
logger.error("failed to connect to moninj. Is aqctl_moninj_proxy running?", exc_info=True)
await asyncio.sleep(10.)
self.reconnect_mi.set()
else:
logger.info("ARTIQ dashboard connected to moninj (%s)",
self.mi_addr)
self.mi_connection = new_mi_connection
for ttl_channel in self.ttl_widgets.keys():
self.setup_ttl_monitoring(True, ttl_channel)
for bus_channel, channel in self.dds_widgets.keys():
@ -413,13 +741,13 @@ class _DeviceManager:
self.setup_dac_monitoring(True, spi_channel, channel)
async def close(self):
self.mi_connector_task.cancel()
try:
await asyncio.wait_for(self.mi_connector_task, None)
except asyncio.CancelledError:
pass
if self.mi_connection is not None:
await self.mi_connection.close()
class _MonInjDock(QtWidgets.QDockWidget):
@ -445,12 +773,12 @@ class _MonInjDock(QtWidgets.QDockWidget):
class MonInj:
def __init__(self, schedule_ctl):
self.ttl_dock = _MonInjDock("TTL")
self.dds_dock = _MonInjDock("DDS")
self.dac_dock = _MonInjDock("DAC")
self.dm = _DeviceManager(schedule_ctl)
self.dm.ttl_cb = lambda: self.ttl_dock.layout_widgets(
self.dm.ttl_widgets.values())
self.dm.dds_cb = lambda: self.dds_dock.layout_widgets(

@ -48,7 +48,7 @@ class Model(DictSyncModel):
else:
return "Outside repo."
elif column == 6:
return v["expid"].get("file", "<none>")
elif column == 7:
if v["expid"]["class_name"] is None:
return ""

artiq/examples/README.rst (new file)
@ -0,0 +1,11 @@
ARTIQ experiment examples
=========================
This directory contains several sample ARTIQ master configurations
and associated experiments that illustrate basic usage of various
hardware and software features.
New users might want to peruse the ``no_hardware`` directory to
explore the argument/dataset machinery without needing access to
hardware, and the ``kc705_nist_clock`` directory for inspiration
on how to coordinate between host and FPGA core device code.
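A minimal sketch of the argument/dataset machinery that the ``no_hardware`` directory demonstrates is shown below; the class, argument, and dataset names are invented for illustration and do not correspond to one of the shipped example files.

import numpy as np

from artiq.experiment import *


class ArgumentDatasetDemo(EnvExperiment):
    """Hypothetical no-hardware sketch: arguments in, datasets out."""
    def build(self):
        # arguments are filled in from the dashboard or artiq_run at submission time
        self.setattr_argument("npoints", NumberValue(10, step=1))
        self.setattr_argument("label", StringValue("demo"))

    def run(self):
        # broadcast datasets are pushed to the master and can be plotted by applets
        n = int(self.npoints)
        self.set_dataset("demo.squares", np.full(n, np.nan), broadcast=True)
        for i in range(n):
            self.mutate_dataset("demo.squares", i, i*i)
        print("finished", self.label)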

@ -7,7 +7,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-9} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -15,6 +19,20 @@ device_db = {
"port": 1068, "port": 1068,
"command": "aqctl_corelog -p {port} --bind {bind} " + core_addr "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
}, },
"core_moninj": {
"type": "controller",
"host": "::1",
"port_proxy": 1383,
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",
@ -29,13 +47,13 @@ device_db = {
"i2c_switch0": { "i2c_switch0": {
"type": "local", "type": "local",
"module": "artiq.coredevice.i2c", "module": "artiq.coredevice.i2c",
"class": "PCA9548", "class": "I2CSwitch",
"arguments": {"address": 0xe0} "arguments": {"address": 0xe0}
}, },
"i2c_switch1": { "i2c_switch1": {
"type": "local", "type": "local",
"module": "artiq.coredevice.i2c", "module": "artiq.coredevice.i2c",
"class": "PCA9548", "class": "I2CSwitch",
"arguments": {"address": 0xe2} "arguments": {"address": 0xe2}
}, },
} }

@ -1,384 +0,0 @@
import sys
import os
import select
from artiq.experiment import *
from artiq.coredevice.ad9910 import AD9910, SyncDataEeprom
if os.name == "nt":
import msvcrt
def chunker(seq, size):
res = []
for el in seq:
res.append(el)
if len(res) == size:
yield res
res = []
if res:
yield res
def is_enter_pressed() -> TBool:
if os.name == "nt":
if msvcrt.kbhit() and msvcrt.getch() == b"\r":
return True
else:
return False
else:
if select.select([sys.stdin, ], [], [], 0.0)[0]:
sys.stdin.read(1)
return True
else:
return False
class KasliTester(EnvExperiment):
def build(self):
# hack to detect artiq_run
if self.get_device("scheduler").__class__.__name__ != "DummyScheduler":
raise NotImplementedError(
"must be run with artiq_run to support keyboard interaction")
self.setattr_device("core")
self.leds = dict()
self.ttl_outs = dict()
self.ttl_ins = dict()
self.urukul_cplds = dict()
self.urukuls = dict()
self.samplers = dict()
self.zotinos = dict()
self.grabbers = dict()
ddb = self.get_device_db()
for name, desc in ddb.items():
if isinstance(desc, dict) and desc["type"] == "local":
module, cls = desc["module"], desc["class"]
if (module, cls) == ("artiq.coredevice.ttl", "TTLOut"):
dev = self.get_device(name)
if "led" in name: # guess
self.leds[name] = dev
else:
self.ttl_outs[name] = dev
elif (module, cls) == ("artiq.coredevice.ttl", "TTLInOut"):
self.ttl_ins[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.urukul", "CPLD"):
self.urukul_cplds[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.ad9910", "AD9910"):
self.urukuls[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.ad9912", "AD9912"):
self.urukuls[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.sampler", "Sampler"):
self.samplers[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.zotino", "Zotino"):
self.zotinos[name] = self.get_device(name)
elif (module, cls) == ("artiq.coredevice.grabber", "Grabber"):
self.grabbers[name] = self.get_device(name)
# Remove Urukul, Sampler and Zotino control signals
# from TTL outs (tested separately)
ddb = self.get_device_db()
for name, desc in ddb.items():
if isinstance(desc, dict) and desc["type"] == "local":
module, cls = desc["module"], desc["class"]
if ((module, cls) == ("artiq.coredevice.ad9910", "AD9910")
or (module, cls) == ("artiq.coredevice.ad9912", "AD9912")):
if "sw_device" in desc["arguments"]:
sw_device = desc["arguments"]["sw_device"]
del self.ttl_outs[sw_device]
elif (module, cls) == ("artiq.coredevice.urukul", "CPLD"):
io_update_device = desc["arguments"]["io_update_device"]
del self.ttl_outs[io_update_device]
elif (module, cls) == ("artiq.coredevice.sampler", "Sampler"):
cnv_device = desc["arguments"]["cnv_device"]
del self.ttl_outs[cnv_device]
elif (module, cls) == ("artiq.coredevice.zotino", "Zotino"):
ldac_device = desc["arguments"]["ldac_device"]
clr_device = desc["arguments"]["clr_device"]
del self.ttl_outs[ldac_device]
del self.ttl_outs[clr_device]
# Sort everything by RTIO channel number
self.leds = sorted(self.leds.items(), key=lambda x: x[1].channel)
self.ttl_outs = sorted(self.ttl_outs.items(), key=lambda x: x[1].channel)
self.ttl_ins = sorted(self.ttl_ins.items(), key=lambda x: x[1].channel)
self.urukuls = sorted(self.urukuls.items(), key=lambda x: (x[1].cpld.bus.channel, x[1].chip_select))
self.samplers = sorted(self.samplers.items(), key=lambda x: x[1].cnv.channel)
self.zotinos = sorted(self.zotinos.items(), key=lambda x: x[1].bus.channel)
self.grabbers = sorted(self.grabbers.items(), key=lambda x: x[1].channel_base)
@kernel
def test_led(self, led):
while not is_enter_pressed():
self.core.break_realtime()
# do not fill the FIFOs too much to avoid long response times
t = now_mu() - self.core.seconds_to_mu(0.2)
while self.core.get_rtio_counter_mu() < t:
pass
for i in range(3):
led.pulse(100*ms)
delay(100*ms)
def test_leds(self):
print("*** Testing LEDs.")
print("Check for blinking. Press ENTER when done.")
for led_name, led_dev in self.leds:
print("Testing LED: {}".format(led_name))
self.test_led(led_dev)
@kernel
def test_ttl_out_chunk(self, ttl_chunk):
while not is_enter_pressed():
self.core.break_realtime()
for _ in range(50000):
i = 0
for ttl in ttl_chunk:
i += 1
for _ in range(i):
ttl.pulse(1*us)
delay(1*us)
delay(10*us)
def test_ttl_outs(self):
print("*** Testing TTL outputs.")
print("Outputs are tested in groups of 4. Touch each TTL connector")
print("with the oscilloscope probe tip, and check that the number of")
print("pulses corresponds to its number in the group.")
print("Press ENTER when done.")
for ttl_chunk in chunker(self.ttl_outs, 4):
print("Testing TTL outputs: {}.".format(", ".join(name for name, dev in ttl_chunk)))
self.test_ttl_out_chunk([dev for name, dev in ttl_chunk])
@kernel
def test_ttl_in(self, ttl_out, ttl_in):
n = 42
self.core.break_realtime()
with parallel:
ttl_in.gate_rising(1*ms)
with sequential:
delay(50*us)
for _ in range(n):
ttl_out.pulse(2*us)
delay(2*us)
return ttl_in.count(now_mu()) == n
def test_ttl_ins(self):
print("*** Testing TTL inputs.")
if not self.ttl_outs:
print("No TTL output channel available to use as stimulus.")
return
default_ttl_out_name, default_ttl_out_dev = next(iter(self.ttl_outs))
ttl_out_name = input("TTL device to use as stimulus (default: {}): ".format(default_ttl_out_name))
if ttl_out_name:
ttl_out_dev = self.get_device(ttl_out_name)
else:
ttl_out_name = default_ttl_out_name
ttl_out_dev = default_ttl_out_dev
for ttl_in_name, ttl_in_dev in self.ttl_ins:
print("Connect {} to {}. Press ENTER when done."
.format(ttl_out_name, ttl_in_name))
input()
if self.test_ttl_in(ttl_out_dev, ttl_in_dev):
print("PASSED")
else:
print("FAILED")
@kernel
def init_urukul(self, cpld):
self.core.break_realtime()
cpld.init()
@kernel
def calibrate_urukul(self, channel):
self.core.break_realtime()
channel.init()
self.core.break_realtime()
sync_delay_seed, _ = channel.tune_sync_delay()
self.core.break_realtime()
io_update_delay = channel.tune_io_update_delay()
return sync_delay_seed, io_update_delay
@kernel
def setup_urukul(self, channel, frequency):
self.core.break_realtime()
channel.init()
channel.set(frequency*MHz)
channel.cfg_sw(1)
channel.set_att(6.)
@kernel
def cfg_sw_off_urukul(self, channel):
self.core.break_realtime()
channel.cfg_sw(0)
@kernel
def rf_switch_wave(self, channels):
while not is_enter_pressed():
self.core.break_realtime()
# do not fill the FIFOs too much to avoid long response times
t = now_mu() - self.core.seconds_to_mu(0.2)
while self.core.get_rtio_counter_mu() < t:
pass
for channel in channels:
channel.pulse(100*ms)
delay(100*ms)
# We assume that RTIO channels for switches are grouped by card.
def test_urukuls(self):
print("*** Testing Urukul DDSes.")
print("Initializing CPLDs...")
for name, cpld in sorted(self.urukul_cplds.items(), key=lambda x: x[0]):
print(name + "...")
self.init_urukul(cpld)
print("...done")
print("Calibrating inter-device synchronization...")
for channel_name, channel_dev in self.urukuls:
if (not isinstance(channel_dev, AD9910) or
not isinstance(channel_dev.sync_data, SyncDataEeprom)):
print("{}\tno EEPROM synchronization".format(channel_name))
else:
eeprom = channel_dev.sync_data.eeprom_device
offset = channel_dev.sync_data.eeprom_offset
sync_delay_seed, io_update_delay = self.calibrate_urukul(channel_dev)
print("{}\t{} {}".format(channel_name, sync_delay_seed, io_update_delay))
eeprom_word = (sync_delay_seed << 24) | (io_update_delay << 16)
eeprom.write_i32(offset, eeprom_word)
print("...done")
print("Frequencies:")
for card_n, channels in enumerate(chunker(self.urukuls, 4)):
for channel_n, (channel_name, channel_dev) in enumerate(channels):
frequency = 10*(card_n + 1) + channel_n
print("{}\t{}MHz".format(channel_name, frequency))
self.setup_urukul(channel_dev, frequency)
print("Press ENTER when done.")
input()
sw = [channel_dev for channel_name, channel_dev in self.urukuls if hasattr(channel_dev, "sw")]
if sw:
print("Testing RF switch control. Press ENTER when done.")
for swi in sw:
self.cfg_sw_off_urukul(swi)
self.rf_switch_wave([swi.sw for swi in sw])
@kernel
def get_sampler_voltages(self, sampler, cb):
self.core.break_realtime()
sampler.init()
delay(5*ms)
for i in range(8):
sampler.set_gain_mu(i, 0)
delay(100*us)
smp = [0.0]*8
sampler.sample(smp)
cb(smp)
def test_samplers(self):
print("*** Testing Sampler ADCs.")
for card_name, card_dev in self.samplers:
print("Testing: ", card_name)
for channel in range(8):
print("Apply 1.5V to channel {}. Press ENTER when done.".format(channel))
input()
voltages = []
def setv(x):
nonlocal voltages
voltages = x
self.get_sampler_voltages(card_dev, setv)
passed = True
for n, voltage in enumerate(voltages):
if n == channel:
if abs(voltage - 1.5) > 0.2:
passed = False
else:
if abs(voltage) > 0.2:
passed = False
if passed:
print("PASSED")
else:
print("FAILED")
print(" ".join(["{:.1f}".format(x) for x in voltages]))
@kernel
def set_zotino_voltages(self, zotino, voltages):
self.core.break_realtime()
zotino.init()
delay(200*us)
i = 0
for voltage in voltages:
zotino.write_dac(i, voltage)
delay(100*us)
i += 1
zotino.load()
def test_zotinos(self):
print("*** Testing Zotino DACs.")
print("Voltages:")
for card_n, (card_name, card_dev) in enumerate(self.zotinos):
voltages = [(-1)**i*(2.*card_n + .1*(i//2 + 1)) for i in range(32)]
print(card_name, " ".join(["{:.1f}".format(x) for x in voltages]))
self.set_zotino_voltages(card_dev, voltages)
print("Press ENTER when done.")
input()
@kernel
def grabber_capture(self, card_dev, rois):
self.core.break_realtime()
delay(100*us)
mask = 0
for i in range(len(rois)):
i = rois[i][0]
x0 = rois[i][1]
y0 = rois[i][2]
x1 = rois[i][3]
y1 = rois[i][4]
mask |= 1 << i
card_dev.setup_roi(i, x0, y0, x1, y1)
card_dev.gate_roi(mask)
n = [0]*len(rois)
card_dev.input_mu(n)
self.core.break_realtime()
card_dev.gate_roi(0)
print("ROI sums:", n)
def test_grabbers(self):
print("*** Testing Grabber Frame Grabbers.")
print("Activate the camera's frame grabber output, type 'g', press "
"ENTER, and trigger the camera.")
print("Just press ENTER to skip the test.")
if input().strip().lower() != "g":
print("skipping...")
return
rois = [[0, 0, 0, 2, 2], [1, 0, 0, 2048, 2048]]
print("ROIs:", rois)
for card_n, (card_name, card_dev) in enumerate(self.grabbers):
print(card_name)
self.grabber_capture(card_dev, rois)
def run(self):
print("****** Kasli system tester ******")
print("")
self.core.reset()
if self.leds:
self.test_leds()
if self.ttl_outs:
self.test_ttl_outs()
if self.ttl_ins:
self.test_ttl_ins()
if self.urukuls:
self.test_urukuls()
if self.samplers:
self.test_samplers()
if self.zotinos:
self.test_zotinos()
if self.grabbers:
self.test_grabbers()

@ -5,7 +5,11 @@ device_db = {
"type": "local", "type": "local",
"module": "artiq.coredevice.core", "module": "artiq.coredevice.core",
"class": "Core", "class": "Core",
"arguments": {"host": core_addr, "ref_period": 1/(8*150e6)} "arguments": {
"host": core_addr,
"ref_period": 1e-9,
"analyzer_proxy": "core_analyzer"
}
}, },
"core_log": { "core_log": {
"type": "controller", "type": "controller",
@ -13,6 +17,20 @@ device_db = {
"port": 1068, "port": 1068,
"command": "aqctl_corelog -p {port} --bind {bind} " + core_addr "command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
}, },
"core_moninj": {
"type": "controller",
"host": "::1",
"port_proxy": 1383,
"port": 1384,
"command": "aqctl_moninj_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_analyzer": {
"type": "controller",
"host": "::1",
"port_proxy": 1385,
"port": 1386,
"command": "aqctl_coreanalyzer_proxy --port-proxy {port_proxy} --port-control {port} --bind {bind} " + core_addr
},
"core_cache": { "core_cache": {
"type": "local", "type": "local",
"module": "artiq.coredevice.cache", "module": "artiq.coredevice.cache",

@ -1,114 +0,0 @@
core_addr = "192.168.1.70"
device_db = {
"core": {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {"host": core_addr, "ref_period": 1/(8*150e6)}
},
"core_log": {
"type": "controller",
"host": "::1",
"port": 1068,
"command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",
"class": "CoreCache"
},
"core_dma": {
"type": "local",
"module": "artiq.coredevice.dma",
"class": "CoreDMA"
},
}
for i in range(8):
device_db["ttl" + str(i)] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut" if i < 4 else "TTLOut",
"arguments": {"channel": 1+i},
}
device_db.update(
spi_urukul0={
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 9}
},
ttl_urukul0_io_update={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 10}
},
ttl_urukul0_sw0={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 11}
},
ttl_urukul0_sw1={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 12}
},
ttl_urukul0_sw2={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 13}
},
ttl_urukul0_sw3={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 14}
},
urukul0_cpld={
"type": "local",
"module": "artiq.coredevice.urukul",
"class": "CPLD",
"arguments": {
"spi_device": "spi_urukul0",
"io_update_device": "ttl_urukul0_io_update",
"refclk": 150e6,
"clk_sel": 2
}
}
)
for i in range(4):
device_db["urukul0_ch" + str(i)] = {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 16, # 600MHz sample rate
"pll_vco": 2,
"chip_select": 4 + i,
"cpld_device": "urukul0_cpld",
"sw_device": "ttl_urukul0_sw" + str(i)
}
}
for i in range(8):
device_db["sawg" + str(i)] = {
"type": "local",
"module": "artiq.coredevice.sawg",
"class": "SAWG",
"arguments": {"channel_base": i*10+0x010006, "parallelism": 4}
}
for i in range(8):
device_db["sawg" + str(8+i)] = {
"type": "local",
"module": "artiq.coredevice.sawg",
"class": "SAWG",
"arguments": {"channel_base": i*10+0x020006, "parallelism": 4}
}

@ -1,37 +0,0 @@
from artiq.experiment import *
class Sines2Sayma(EnvExperiment):
def build(self):
self.setattr_device("core")
self.sawgs = [self.get_device("sawg"+str(i)) for i in range(16)]
@kernel
def drtio_is_up(self):
for i in range(3):
if not self.core.get_rtio_destination_status(i):
return False
return True
@kernel
def run(self):
while True:
print("waiting for DRTIO ready...")
while not self.drtio_is_up():
pass
print("OK")
self.core.reset()
for sawg in self.sawgs:
delay(1*ms)
sawg.reset()
for sawg in self.sawgs:
delay(1*ms)
sawg.amplitude1.set(.4)
# Do not use a sub-multiple of oscilloscope sample rates.
sawg.frequency0.set(9*MHz)
while self.drtio_is_up():
pass

Some files were not shown because too many files have changed in this diff.