forked from M-Labs/artiq
Compare commits

259 Commits

Author SHA1 Message Date
Robert Jördens 22ba2858d9 manual: deprecate old release
closes #863
2017-12-06 13:00:37 +01:00
Robert Jördens fd6c01f537 README: GPLv3+ 2016-09-15 09:52:52 -04:00
whitequark 3c8a7df913 coreanalyzer: fix rtio_log message extraction.
Fixes #550.
2016-09-14 18:37:03 +08:00
Sebastien Bourdeauducq 6150ab8509 doc: move VADJ, closes #554 2016-09-09 18:41:27 +08:00
Robert Jördens 2bdc5f33aa conda: fix nist_clock summary 2016-09-03 11:08:53 +02:00
whitequark 592993eddd conda: require misoc 0.2. 2016-08-21 10:44:03 +08:00
Robert Jördens 929ded30b8 RELEASE_NOTES: 1.3 2016-08-11 20:49:05 +02:00
Robert Jördens 92d5491cdc gui: cleanup compact_exponential, 15 digits 2016-08-11 14:18:41 +02:00
Robert Jördens 46cae14ab1 lda: fix windows path 2016-08-11 14:13:04 +02:00
Sebastien Bourdeauducq ff9497054c manual: explain how applets work. Closes #536 2016-08-11 19:10:04 +08:00
Robert Jördens 14addf74c1 gui.entries: avoid intermediate value feedback, closes #533 2016-08-04 11:35:57 +02:00
Kelly Stevens 0728be7378 doc/getting_started_mgmt: fix result folders 2016-08-03 14:10:06 +08:00
whitequark bdb85c8eab protocols.pc_rpc: exclude kernel_invariants from proxying.
Fixes #531.
2016-08-03 13:19:34 +08:00
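A generic illustration of the idea behind the pc_rpc fix above, not the actual pc_rpc code (the helper name and the exclusion set are made up): when enumerating the attributes of the target object to expose over RPC, data members such as kernel_invariants are skipped so they are not turned into proxy methods.

    # Hypothetical sketch: build the list of proxyable methods while
    # skipping data attributes such as kernel_invariants.
    EXCLUDED_ATTRIBUTES = {"kernel_invariants"}

    def proxyable_methods(target):
        for name in dir(target):
            if name.startswith("_") or name in EXCLUDED_ATTRIBUTES:
                continue
            if callable(getattr(target, name)):
                yield name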
whitequark ecc30979df transforms.llvm_ir_generator: skip RPC values for attribute writeback. 2016-08-03 13:19:29 +08:00
Sebastien Bourdeauducq ff1e35e03f analyzer: use picosecond resolution in VCD output. Closes #528 2016-08-03 10:57:47 +08:00
Kelly Stevens d5711133be doc: re-formatting a directory path in lda driver comments so it will display correctly in the sphinx documentation 2016-08-01 23:36:26 +02:00
Sebastien Bourdeauducq 3f224add2f artiq_flash: support using alternative OpenOCD config files 2016-07-19 15:36:18 +08:00
Robert Jördens 25a9d345d9 artiq_flash: fix openocd scripts path (#513) 2016-07-15 15:49:34 +02:00
Sebastien Bourdeauducq ce73e8eea7 target/pipistrello: shrink TTL FIFOs 2016-07-14 15:59:21 +08:00
Sebastien Bourdeauducq fe2b2496c1 RELEASE_NOTES: 1.2 2016-07-14 15:34:50 +08:00
Sebastien Bourdeauducq 2c749c77e7 targets/pipistrello: work around Xst placer idiocy 2016-07-14 15:14:11 +08:00
whitequark 1d58e3cf95 ir: `invoke` is a valid `delay` decomposition.
Fixes #510.
2016-07-14 01:10:52 +08:00
Sebastien Bourdeauducq c27f157a27 make sure monkey patches are applied when asyncio.open_connection() is used 2016-07-13 11:29:18 +08:00
Sebastien Bourdeauducq 972a74219d monkey-patch Python 3.5.2 to disable broken asyncio.base_events._ipaddr_info optimization (#506) 2016-07-13 11:26:35 +08:00
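A sketch of the monkey-patching approach referred to in the two entries above; the version guard and the exact replacement shown here are assumptions, not a quote of the ARTIQ code:

    import sys
    import asyncio.base_events

    if sys.version_info[:3] == (3, 5, 2):
        # Hypothetical patch: make _ipaddr_info always return None so that
        # asyncio falls back to the regular getaddrinfo() resolution path
        # instead of the broken fast path.
        asyncio.base_events._ipaddr_info = lambda *args, **kwargs: None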
Sebastien Bourdeauducq 078a9abeb9 rtio: do not reset DDS and SPI PHYs on RTIO reset (#503) 2016-07-11 21:22:20 +02:00
whitequark eceafad7e3 compiler.testbench.perf_embedding: more fine grained reporting. 2016-07-11 17:55:18 +00:00
whitequark b5901db265 compiler.testbench.perf_embedding: update for core changes. 2016-07-11 16:31:35 +00:00
whitequark 3547b1d5ae runtime: update ppp code for lwip 2.0.0.
Fixes #499.
2016-07-09 09:01:00 +08:00
whitequark 2678bb060a embedding: reimplement 373578bc properly.
The core of the problem that 373578bc was attempting to solve is
that diagnostics sometimes should be chained; one way of chaining
is the loc.expanded_from feature, which handles macro-like expansion,
but another is providing context.

Before this commit, context was provided using an ad-hoc override
of a diagnostic engine, which did not work in cases where diagnostic
engine was not threaded through the call stack. This commit uses
the newly added pythonparser context feature to elegantly handle
the problem.
2016-07-07 16:01:22 +00:00
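A minimal sketch of the context-based chaining this commit adopts, using the pythonparser diagnostic API as it appears later in this changeset; the buffer, range and message contents are placeholders, and the availability of Engine.context() is taken from the commit message rather than verified here:

    from pythonparser import source, diagnostic

    buf = source.Buffer("x = 1\n", "<example>")
    loc = source.Range(buf, 0, 1)
    engine = diagnostic.Engine()

    # Any diagnostic processed inside the block gets the note attached,
    # even when the engine is not threaded through the call stack.
    note = diagnostic.Diagnostic("note",
        "while inferring a type for an attribute '{attr}' of a host object",
        {"attr": "frequency"}, loc)
    with engine.context(note):
        engine.process(diagnostic.Diagnostic("error",
            "this value of type {typ} is not valid here", {"typ": "str"}, loc))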
whitequark 6a1706b872 runtime: fix serialization of object lists.
Fixes #500.
2016-07-07 22:41:46 +08:00
whitequark a801cde953 embedding: fix location for diagnostics on quoted values.
Fixes #489.
2016-07-07 17:01:09 +08:00
Sebastien Bourdeauducq adcf53f1cb targets/kc705: redefine user SMAs as 3.3V IO. Closes #502 2016-07-07 14:56:36 +08:00
Sebastien Bourdeauducq d96d222814 targets/kc705: fix import 2016-07-07 13:41:18 +08:00
Robert Jördens 07b41763c2 spi: expose more documentation on chaining transfers 2016-07-04 12:44:34 +02:00
Robert Jördens cda20ab2ed spi: do not shift when starting a xfer, closes #495 2016-07-04 12:44:21 +02:00
Sebastien Bourdeauducq 1946e3c3cd language/environment: fix mutate_dataset with parent. Closes #498 2016-07-03 12:28:38 +08:00
Sebastien Bourdeauducq bf3c3cdfd8 worker: increase send_timeout (Windows can be really slow) 2016-07-03 12:24:43 +08:00
Sebastien Bourdeauducq e116d756b5 dashboard: kill the Qt built-in main window closing mechanism
When the main window is closed, Qt makes QApplication.exec() return, which conflicts with Quamash's implementation of loop.run_until_complete(). The conflict causes Quamash's run_forever() to return earlier than it should, which results in "RuntimeError('Event loop stopped before Future completed.')".

Closes #475
2016-07-01 18:47:27 +08:00
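A hedged sketch of the workaround pattern described above; the class and attribute names are illustrative, not the actual dashboard code:

    import asyncio
    from PyQt5 import QtWidgets

    class MainWindow(QtWidgets.QMainWindow):
        def __init__(self):
            QtWidgets.QMainWindow.__init__(self)
            self.exit_request = asyncio.Event()

        def closeEvent(self, event):
            # Swallow the close event so QApplication.exec() never returns
            # behind Quamash's back; ask the asyncio side to shut down instead.
            event.ignore()
            self.exit_request.set()

    # elsewhere, inside the Quamash event loop:
    #     loop.run_until_complete(main_window.exit_request.wait())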
whitequark 6dc510a976 compiler.embedding: use the builtin print as RPC.
Fixes #206.
2016-06-29 11:58:05 +08:00
whitequark d22eefc13e compiler: remove now()/at().
Fixes #490.
2016-06-29 11:55:58 +08:00
Sebastien Bourdeauducq 1bb09f9ca6 RELEASE_NOTES: 1.1 2016-06-24 14:48:31 +08:00
Sebastien Bourdeauducq 9dd88f8b3b runtime: save now on RPC 2016-06-24 14:15:40 +08:00
Sebastien Bourdeauducq 1bee5bb460 manual: split source install instructions to a separate page 2016-06-22 10:00:52 +08:00
whitequark 43d0bddc9f Upgrade lwip to 2.0.0 to fix the keepalive bug #456. 2016-06-22 09:23:08 +08:00
Sebastien Bourdeauducq f0ac0b78c1 runtime: minor cleanup 2016-06-20 08:34:13 +08:00
Sebastien Bourdeauducq 5baba5fd1e test: add test for seamless handover on exception termination 2016-06-20 08:34:02 +08:00
Sebastien Bourdeauducq 5f8b02a1d2 runtime: save now when terminating with exception 2016-06-20 08:33:55 +08:00
Sebastien Bourdeauducq e069ce9dd8 runtime: cleanup now_init/now_save 2016-06-20 08:33:43 +08:00
Sebastien Bourdeauducq 6a81b16230 doc: scan objects not supported on core device 2016-06-18 18:56:28 +08:00
whitequark cc28b596b3 compiler.embedding: always do one final inference pass.
Fixes #477.
2016-06-17 08:49:38 +00:00
whitequark 770dda6fd7 transforms.inferencer: allow variable as type of `n` in `[]*n`.
Fixes #473.
2016-06-17 16:46:37 +08:00
Robert Jördens 3589528362 spi: cross-reference bit ordering and alignment, closes #482 2016-06-15 15:05:16 +02:00
Sebastien Bourdeauducq e56c50a8a0 pyon: support slices 2016-06-15 19:19:07 +08:00
Sebastien Bourdeauducq 46b75dba8d gui: save/restore last folder outside repository. Closes #476 2016-06-12 13:19:07 +08:00
Sebastien Bourdeauducq 2b936429da doc: precisions about time cursor interaction 2016-06-12 13:05:54 +08:00
Sebastien Bourdeauducq 77280a75d9 language: support setting slices of data in mutate_dataset 2016-06-12 12:56:12 +08:00
Sebastien Bourdeauducq 9dcd43fb0d gui/scanwidget: use -inf/inf to represent absence of boundaries (consistently with QDoubleSpinbox) 2016-06-11 16:52:48 -06:00
Sebastien Bourdeauducq ddd1c12852 gui/entries/_RangeScan: set range before setting value. Fixes clamping to 99.99 2016-06-11 16:50:16 -06:00
Sebastien Bourdeauducq 1faac1018b scanwidget: value may be None 2016-06-11 16:48:58 -06:00
Sebastien Bourdeauducq d3f092ce98 doc: add warning about pipistrello current draw 2016-06-11 10:27:22 -06:00
Sebastien Bourdeauducq 7e9fa3a81a doc: add core device comms details 2016-06-11 10:27:15 -06:00
Sebastien Bourdeauducq 47e3106c4e gateware/nist_clock: increase DDS bus drive strength. Closes #468 2016-06-07 11:08:05 -04:00
whitequark a80103576d analyzer: explicitly delimit messages (with \x1D).
Fixes #461.
2016-06-07 10:57:58 -04:00
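For context, 0x1D is the ASCII "group separator" control character, so a consumer of the analyzer dump can recover message boundaries with a plain split; a minimal sketch (the message payloads are made up and the rest of the analyzer framing is not shown):

    def split_messages(stream: bytes):
        # Split a captured byte stream on the explicit 0x1D delimiter;
        # a trailing delimiter leaves an empty chunk, which is dropped.
        return [m for m in stream.split(b"\x1d") if m]

    assert split_messages(b"msg-a\x1dmsg-b\x1d") == [b"msg-a", b"msg-b"]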
whitequark 284d726d5e artiq_flash: explicitly pass path within conda env to openocd datarootdir.
By default, openocd searches for scripts in DATAROOTDIR/openocd/scripts.
This of course makes it not relocatable. Conda has a flag to try to
detect and fix such hardcoded paths, but it does not work on openocd
(likely because the .rodata contains an already concatenated path,
which cannot be padded with zeroes from the right).

So, we pass the path explicitly instead.
2016-06-06 15:07:03 -04:00
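A sketch of the approach, assuming the conda environment places the OpenOCD scripts under share/openocd/scripts; the path layout, the config file name and the use of OpenOCD's -s search-path option are assumptions here, not a quote of artiq_flash:

    import os
    import subprocess
    import sys

    # Point openocd at the scripts shipped inside the active conda environment
    # rather than the DATAROOTDIR that was hardcoded at build time.
    scripts_path = os.path.join(sys.prefix, "share", "openocd", "scripts")
    subprocess.check_call([
        "openocd",
        "-s", scripts_path,        # explicit script search path
        "-f", "board/kc705.cfg",   # illustrative config file
        "-c", "init; exit",
    ])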
Sebastien Bourdeauducq 67b3afd3e7 gui/moninj: reduce logging level of UDP failure 2016-06-04 16:34:07 -04:00
Sebastien Bourdeauducq c5c7c269f7 gui/moninj: do not crash when there is no network 2016-06-04 16:30:30 -04:00
whitequark 188d53ac05 doc: update installing.rst to reflect openocd packaged in conda. 2016-06-04 16:26:47 -04:00
Sebastien Bourdeauducq 8d9c483cd0 explorer: fix directory listing error handling 2016-06-04 10:18:45 -04:00
Sebastien Bourdeauducq f571b3e1a1 dds: use fast math for asf computations 2016-06-03 23:34:20 -04:00
Sebastien Bourdeauducq 36f8aa8d9e gui/entries: remove ScientificSpinbox. Closes #457 2016-06-03 23:33:20 -04:00
Sebastien Bourdeauducq 2803b8574e doc: document common KC705 problems. Closes #450 2016-06-03 23:26:57 -04:00
Sebastien Bourdeauducq 97ec3de6f3 doc/getting_started_mgmt: adapt to changes. Closes #462 2016-06-03 23:26:57 -04:00
dhslichter 3c817be213 dds: fix asf_to_amplitude 2016-06-03 23:26:57 -04:00
Sebastien Bourdeauducq 404cf3524c gui/entries: remove unneeded parent 2016-06-03 23:26:57 -04:00
Sebastien Bourdeauducq 3ddb1e4201 doc: fix bool input 2016-06-03 23:26:57 -04:00
Sebastien Bourdeauducq ba98ac1dcc examples: run should not return a value 2016-06-03 23:26:57 -04:00
Sebastien Bourdeauducq f7eb0a6f22 examples/histograms: convert to mutate_dataset API. Closes #459 2016-05-31 20:24:46 -05:00
Sebastien Bourdeauducq eeab7db3d4 applets/plot_xy: use numpy array for default X axis. Closes #458 2016-05-30 22:48:44 -05:00
Sebastien Bourdeauducq f537562768 gui: fix explicit scan input validation 2016-05-30 15:45:38 -05:00
Sebastien Bourdeauducq b57b1ddc57 language/environment: be more verbose in NumberValue unit/scale documentation (#448) 2016-05-28 13:38:24 -05:00
Sebastien Bourdeauducq 6c39d939d8 do not attempt to use broken conda features 2016-05-27 23:09:52 -05:00
Robert Jördens 109aa73a6b gui: fix new() being called with arguments by qt (closes #444) 2016-05-25 23:13:23 +02:00
Sebastien Bourdeauducq 8d0034e11d rpctool: make readline optional, add to conda dependencies. Closes #442 2016-05-25 11:12:15 -05:00
Sebastien Bourdeauducq aa26b13816 language/RandomScan: automatic seed by default 2016-05-23 14:33:32 -07:00
Sebastien Bourdeauducq 42c84e0c72 coredevice/TCA6424A: convert 'outputs' value to little endian. Closes #437 2016-05-22 06:54:08 -07:00
Sebastien Bourdeauducq 9db9f4e624 bit2bin: close input file explicitly 2016-05-21 21:50:30 +08:00
dhslichter dfeff967ba qc2: swap SPI/TTL, all TTL lines are now In+Out compatible 2016-05-19 10:42:19 +08:00
Robert Jördens e9a3e5642e flash: tcl-quote paths (c.f. #256) 2016-05-17 09:20:06 +08:00
Robert Jördens b81b40d553 flash: use the handle 2016-05-17 09:20:06 +08:00
Robert Jördens 7bdf373b95 flash: close files (c.f. #256) 2016-05-17 09:20:06 +08:00
Robert Jördens f37bfef275 RELEASE_NOTES: 1.0 2016-05-10 12:50:43 +02:00
whitequark 7de77cfc8f Commit missing parts of 4e5d75295. 2016-05-09 23:57:52 +08:00
whitequark 9196cdc554 compiler: fix quoting of methods (fixes #423). 2016-05-09 23:57:52 +08:00
Sebastien Bourdeauducq ef3465a181 RELEASE_NOTES: 1.0rc4 2016-05-08 11:15:47 +08:00
Sebastien Bourdeauducq 85ecb900df examples/photon_histogram: delay after count() 2016-05-07 18:33:48 +08:00
Sebastien Bourdeauducq 18b6718d0c lwip/liteethif: cleanup, drop frames above MTU (#398) 2016-05-07 18:33:48 +08:00
Sebastien Bourdeauducq d365ce8de8 examples/photon_histogram: integers 2016-05-07 18:33:48 +08:00
Sebastien Bourdeauducq 7466a4d9a9 gui/applets: catch duplicate applet UIDs (#430) 2016-05-07 11:47:30 +08:00
Sebastien Bourdeauducq 2eb67902e8 gui: do not crash if 'recompute all arguments' fails 2016-05-05 00:53:29 +08:00
Sebastien Bourdeauducq fa609283d5 worker: use unix time for HDF5 start_time 2016-05-03 21:40:48 +02:00
Robert Jördens 53cef7e695 worker: run experiment in output directory 2016-05-03 21:02:51 +02:00
Robert Jördens 7803b68c4d worker_impl: save expid, rid, start_time 2016-05-03 20:53:57 +02:00
Robert Jördens 5b955e8ce8 worker_db: factor get_output_prefix() 2016-05-03 20:53:54 +02:00
Sebastien Bourdeauducq 16fdebad8e gateware/nist_qc2: increase DDS bus drive strength. Closes #421 2016-05-03 16:44:14 +08:00
Sebastien Bourdeauducq adeb88619c language/environment: update kernel_invariants in setattr_argument and setattr_device 2016-05-03 16:44:11 +08:00
Sebastien Bourdeauducq 9e14419abc style 2016-05-03 16:43:09 +08:00
Robert Jördens f7ba61b47b examples/transport: add slack between experiments/after count() 2016-05-03 10:41:43 +02:00
Robert Jördens dcf082e427 Revert "lwip: set MTU to 9000 to support jumbo frames"
This reverts commit dbbd11d798.

Breaks more than it fixes.
2016-04-30 08:23:03 +02:00
Sebastien Bourdeauducq 06268d182f gui/moninj: sort by channel. Closes #413 2016-04-30 12:31:06 +08:00
Sebastien Bourdeauducq b05e3f42e9 lwip: set MTU to 9000 to support jumbo frames 2016-04-30 00:31:17 +08:00
whitequark 88fd5431b5 compiler: make kernel_invariant an instance, not class, property.
Fixes #409.
2016-04-30 00:31:17 +08:00
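In practice this means the invariant set is per-object state, e.g. assigned in the driver's constructor; a hedged usage sketch with made-up names:

    class Pulser:
        def __init__(self, dmgr, period_mu):
            self.core = dmgr.get("core")
            self.period_mu = period_mu
            # With this fix the declaration is an instance attribute rather
            # than a class-level property shared by every object of the type.
            self.kernel_invariants = {"core", "period_mu"}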
whitequark e835ae2a2a doc: explain RPC return type annotations.
Fixes #410.
2016-04-30 00:31:17 +08:00
Sebastien Bourdeauducq 7844e98b1d doc/environment: datasets readonly in build 2016-04-27 01:43:01 +08:00
Sebastien Bourdeauducq 0a9b2aecbc doc: move compiler details to separate chapter 2016-04-27 01:37:39 +08:00
whitequark 9b04778f66 embedding: ignore empty lines, like annotations, before kernel functions.
Fixes #363.
2016-04-27 01:37:28 +08:00
whitequark 24e24ddca4 doc: Document fast-math flag and kernel invariants.
Fixes #351, #359.
2016-04-27 01:37:28 +08:00
whitequark 0f6f684670 compiler: allow RPCing builtin functions.
Fixes #366.
2016-04-27 01:37:28 +08:00
whitequark 8d9a22f8da compiler: don't typecheck RPCs except for return type.
Fixes #260.
2016-04-27 01:36:51 +08:00
whitequark 8a28039b74 coredevice: deserialize int64(width=64) as int(width=64), not host_int.
Fixes #402.
2016-04-22 23:28:01 +08:00
Sebastien Bourdeauducq 53f0477c3b RELEASE_NOTES: 1.0rc3 2016-04-18 14:39:18 +08:00
Sebastien Bourdeauducq ed17972104 master/experiments: log more details about experiment name conflicts 2016-04-16 21:36:33 +08:00
Sebastien Bourdeauducq f962092b38 environment: cleanup docs 2016-04-16 19:38:52 +08:00
Sebastien Bourdeauducq 9b4a04b307 environment/get_device_db: raise ValueError when device manager not present 2016-04-16 19:38:46 +08:00
Sebastien Bourdeauducq 398468410f Revert "environment,worker: remove enable_processors"
This reverts commit 08f903b8f4.
2016-04-16 16:49:15 +08:00
Sebastien Bourdeauducq 08f903b8f4 environment,worker: remove enable_processors 2016-04-16 14:21:23 +08:00
Sebastien Bourdeauducq 0a259418fb environment: make NumberValue return integers when appropriate. Closes #397 2016-04-16 14:21:22 +08:00
Sebastien Bourdeauducq 91645ffc24 test/analyzer: clear analyzer buffer after IO init 2016-04-15 01:18:58 +08:00
Robert Jördens a2e4f95a00 test_analyzer: loop_out.off() 2016-04-14 22:57:47 +08:00
Sebastien Bourdeauducq 385e8d98fc test/PulseRateDDS: run more iterations 2016-04-14 19:15:31 +08:00
Robert Jördens 2086e46598 test_rtio: integer division 2016-04-14 19:14:53 +08:00
Sebastien Bourdeauducq a27aa9680e protocols/pyon: minor cleanup 2016-04-14 19:14:38 +08:00
Sebastien Bourdeauducq c97cb1d3b9 master/worker_db: style 2016-04-14 19:14:20 +08:00
Robert Jördens 69f534cc20 gui.models: style 2016-04-14 19:12:07 +08:00
Robert Jördens 9ceca44dbe applets: style 2016-04-14 18:38:57 +08:00
Robert Jördens 001f6d6fab test_rtio: scale speed test results to 'event' intervals 2016-04-14 18:19:05 +08:00
Robert Jördens 3d487d98b7 test_rtio: comments and correction
* add comments what is actually being measured in the two rate tests
* remove spurious factor of two
2016-04-14 18:18:54 +08:00
whitequark d6510083b7 Commit missing parts of bb064c67a. 2016-04-14 18:14:47 +08:00
whitequark 904379db7e runtime: add kernel-accessible sqrt.
Fixes #382.
2016-04-14 18:14:47 +08:00
whitequark 2248a2eb9e embedding: s/kernel_constant_attributes/kernel_invariants/g
Requested in #359.
2016-04-14 18:14:44 +08:00
whitequark e6666ce6a9 test_pulse_rate_dds: adjust bounds. 2016-04-14 18:10:41 +08:00
whitequark 34454621fa conda: update llvmlite-artiq dependency.
Build 24 includes addc optimizations.
2016-04-14 18:10:30 +08:00
whitequark c6f946a816 llvm_ir_generator: add fast-math flags to fcmp.
This is allowed in 3.8.
2016-04-14 18:10:30 +08:00
whitequark d4f1614a23 llvm_ir_generator: change !{→unconditionally_}dereferenceable.
Since LLVM 3.8, !dereferenceable is weaker, so we introduce
!unconditionally_dereferenceable (http://reviews.llvm.org/D18738)
to regain its functionality.
2016-04-14 18:10:17 +08:00
whitequark 75252ca5a4 llvm_ir_generator: fix DICompileUnit.language. 2016-04-14 18:10:17 +08:00
whitequark 31b5154222 conda: update llvmlite-artiq dependency.
Build 22 includes debug information support.
2016-04-14 18:10:17 +08:00
whitequark 89326fb189 compiler: purge generated functions from backtraces. 2016-04-14 18:09:59 +08:00
whitequark a2f6e81c50 ttl: mark constant attributes for TTL{In,InOut,ClockGen}. 2016-04-14 18:09:59 +08:00
whitequark 702e959033 llvm_ir_generator: add TBAA metadata for @now. 2016-04-14 18:09:59 +08:00
whitequark f958cba4ed llvm_ir_generator: update debug info emission for LLVM 3.8. 2016-04-14 18:09:36 +08:00
whitequark 7c520aa0c4 coredevice: format backtrace RA as +0xN, not 0xN.
The absolute address is somewhere in the 0x4000000 range; the one
that is displayed is an offset from the shared object base.
2016-04-14 18:09:36 +08:00
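A small sketch of the difference with made-up numbers, in the spirit of the message above:

    shared_object_base = 0x4000000   # where the kernel library is loaded
    return_address     = 0x4001a2c   # raw RA captured in the backtrace

    print("0x{:x}".format(return_address))                        # absolute address
    print("+0x{:x}".format(return_address - shared_object_base))  # offset into the kernel ELF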
whitequark 66bbee51d8 conda: require llvmlite-artiq built for LLVM 3.8. 2016-04-14 18:09:09 +08:00
whitequark f26990aa57 compiler: emit verbose assembly via ARTIQ_DUMP_ASM. 2016-04-14 18:09:02 +08:00
whitequark c89c27e389 compiler: add analysis passes from TargetMachine.
This doesn't have any effect right now, but is the right thing to do.
2016-04-14 18:08:47 +08:00
whitequark 1120c264b1 compiler: mark loaded pointers as !dereferenceable.
Also, lower the bound for test_pulse_rate_dds, since we generate
better code for it now.
2016-04-14 18:08:47 +08:00
whitequark 03b6555d9d compiler: update for LLVM 3.7. 2016-04-14 18:08:28 +08:00
whitequark 932e680f3e compiler: use correct data layout. 2016-04-14 18:07:56 +08:00
whitequark f59fd8faec llvm_ir_generator: do not use 'coldcc' calling convention.
First, this calling convention doesn't actually exist in OR1K
and trying to use it in an Asserts build causes an UNREACHABLE.

Second, I tried to introduce it and it does not appear to produce
any measurable benefit: not only does OR1K have a ton of CSRs, but
it is also quite hard, if not realistically impossible, to produce
the kind of register pressure that would be relieved by sparing
a few more CSRs for our exception raising function calls, since
temporaries don't have to be preserved before a noreturn call
and spilling over ten registers across an exceptional edge
is not something that the code we care about would do.

Third, it produces measurable drawbacks: it inflates code size
of check:* functions by adding spills. Of course, this could be
alleviated by making __artiq_raise coldcc as well, but what's
the point anyway?
2016-04-14 18:07:35 +08:00
whitequark e416246e78 llvm_ir_generator: mark loads as non-null where applicable. 2016-04-14 18:07:35 +08:00
whitequark 50ae17649d test: relax lit/embedding/syscall_flags.
We currently have broken debug info. In either case, debug info
is irrelevant to this test.
2016-04-14 18:07:35 +08:00
whitequark f7603dcb6f compiler: fix ARTIQ_DUMP_ELF. 2016-04-14 18:07:17 +08:00
whitequark 812e79b63d llvm_ir_generator: don't mark non-constant attribute loads as invariant.
Oops.
2016-04-14 18:07:13 +08:00
whitequark dcb0ffdd03 Commit missing parts of 1d8b0d46. 2016-04-14 18:07:04 +08:00
whitequark ee7e648cb0 compiler: allow specifying per-function "fast-math" flags.
Fixes #351.
2016-04-14 18:07:04 +08:00
whitequark 5fafcc1341 Commit missing parts of 6f5332f8. 2016-04-14 18:07:04 +08:00
whitequark f7d4a37df9 compiler: allow flagging syscalls, providing information to optimizer.
This also fixes a crash in test_cache introduced in 1d8b0d46.
2016-04-14 18:06:47 +08:00
whitequark c6b21652ba compiler: mark FFI functions as ModRef=Ref using TBAA metadata.
Fascinatingly, the fact that you can mark call instructions with
!tbaa metadata is completely undocumented. Regardless, it is true:
a !tbaa metadata for an "immutable" type will cause
AliasAnalysis::getModRefBehavior to return OnlyReadsMemory for that
call site.

Don't bother marking loads with TBAA yet since we already place
!load.invariant on them (which is as good as the TBAA "immutable"
flag) and after that we're limited by lack of !nonnull anyway.

Also, add TBAA analysis passes in our pipeline to actually engage it.
2016-04-14 18:06:47 +08:00
whitequark 0e0f81b509 compiler: mark loads of kernel constant attributes as load invariant.
Also, enable LICM, since it can take advantage of this.
2016-04-14 18:06:47 +08:00
whitequark 081edb27d7 coredevice: add some kernel_constant_attributes specifications. 2016-04-14 18:06:47 +08:00
whitequark b5fd257a33 compiler: do not write back kernel constant attributes.
Fixes #322.
2016-04-14 18:06:21 +08:00
whitequark 665e59e064 compiler: implement kernel constant attributes.
Part of #322.
2016-04-14 18:06:21 +08:00
whitequark 348e058c6f test_pulse_rate_dds: tighten upper bound to 400us. 2016-04-14 18:06:06 +08:00
whitequark 718d411dd5 compiler: run IPSCCP.
This doesn't do much, only frees some registers.
2016-04-14 18:05:57 +08:00
whitequark 019f528ea6 compiler: raise inliner threshold to the equivalent of -O3. 2016-04-14 18:05:57 +08:00
whitequark 3fa5762c10 compiler: extract runtime checks into separate cold functions.
This reduces register pressure as well as function size, which
favorably affects the inliner.
2016-04-14 18:05:57 +08:00
whitequark fcf2a73f82 test_pulse_rate: tighten upper bound to 1500ns. 2016-04-14 18:05:31 +08:00
whitequark 92f3dc705f llvm_ir_generator: generate code more amenable to LLVM's GlobalOpt.
This exposes almost all embedded methods to inlining, with massive
gains.
2016-04-14 18:05:10 +08:00
whitequark f2c92fffea compiler: make quoted functions independent of outer environment. 2016-04-14 18:04:42 +08:00
whitequark ccb1d54beb compiler: tune the LLVM optimizer pipeline (fixes #315). 2016-04-14 18:04:42 +08:00
whitequark 8fa4281470 compiler: significantly increase readability of LLVM and ARTIQ IRs. 2016-04-14 18:04:42 +08:00
whitequark e534941383 compiler: quote functions directly instead of going through a local. 2016-04-14 18:04:22 +08:00
whitequark f72e050af5 transforms.llvm_ir_generator: extract class function attributes.
This should give LLVM more visibility.
2016-04-14 18:04:22 +08:00
whitequark 00facbbc78 compiler: get rid of the GetConstructor opcode. 2016-04-14 18:04:22 +08:00
Sebastien Bourdeauducq 321ba57e84 manual/installing: --toolchain vivado 2016-04-14 01:25:48 +08:00
Sebastien Bourdeauducq 582efe5b91 typo 2016-04-14 01:17:47 +08:00
Sebastien Bourdeauducq 349ccfb633 gateware/nist_qc2: substitute FMC 2016-04-14 01:04:19 +08:00
Sebastien Bourdeauducq 71b9ba6ab7 manual/faq: list HITL TTL connections 2016-04-14 01:04:19 +08:00
Sebastien Bourdeauducq 317e6ea38d manual/git: commit needed before starting master, refresh. Closes #387 2016-04-14 01:04:19 +08:00
dhslichter 08e742ce68 Updated qc2 pinouts for SPI and 2x DDS bus, update docs 2016-04-13 18:39:50 +08:00
Robert Jördens 90876e0143 examples: move pdq2 frame selects away from TTLInOut ttl3 2016-04-12 19:41:16 +08:00
Robert Jördens 6552aa4c28 test: set inputs to input(), should close #383 2016-04-12 18:17:39 +08:00
Sebastien Bourdeauducq 936190033e gui/models: handle Qt calling DictSyncTreeSepModel.index with garbage inputs. Closes #388 2016-04-11 20:12:07 +08:00
Sebastien Bourdeauducq e7d448efd3 dashboard/moninj: use ephemeral UDP port 2016-04-11 18:58:22 +08:00
Sebastien Bourdeauducq a6c17d3e40 dashboard/moninj: fix windows problems 2016-04-11 18:58:13 +08:00
Sebastien Bourdeauducq e4833a33fc dashboard/moninj: use thread instead of asyncio UDP (#39) 2016-04-11 18:57:21 +08:00
Robert Jördens 2617b9db82 installing.rst: typo 2016-04-10 21:01:15 +08:00
Robert Jördens a82f042337 installing.rst: triple colons 2016-04-10 11:21:32 +08:00
Robert Jördens 64d1bca6c1 installing.rst: update, clarify 2016-04-10 00:03:36 +08:00
Robert Jördens c858d44c73 README: rewrite, summarizing more aspects of ARTIQ 2016-04-10 00:00:20 +08:00
Robert Jördens a48f44eb39 introduction.rst: update license 2016-04-10 00:00:12 +08:00
Robert Jördens dee574084e README: note about doc/manual/introduction.rst 2016-04-09 23:59:50 +08:00
Robert Jördens e2def34ede ipython notebook example: datasets subgroup 2016-04-08 12:21:43 +08:00
Robert Jördens f3d8ac301c applets/simple: fix error msg, style 2016-04-08 01:27:22 +08:00
Robert Jördens 787ed65d00 plot_xy: fix errorbar plot 2016-04-08 01:26:42 +08:00
Robert Jördens ca24e00400 plot_xy: un-randomize the fit plot 2016-04-08 01:26:23 +08:00
Robert Jördens 1d3c0166da dashboard: allow more than 99 scan points 2016-04-08 01:26:05 +08:00
whitequark 5fef95b073 doc: use proper CMAKE_BUILD_TYPE for LLVM.
Fixes #380.
2016-04-06 22:08:10 +00:00
Sebastien Bourdeauducq 2b516d21ae manual/developing_a_ndsp: update to new bind interface 2016-04-06 19:14:13 +08:00
Robert Jördens d6339e49ca RELEASE_NOTES: spelling 2016-04-05 18:10:10 +08:00
Robert Jördens eea7cdcf89 worker_impl: style 2016-04-05 18:03:55 +08:00
Robert Jördens 7dab0433be RELEASE_NOTES: HDF5 encoding 2016-04-05 18:03:55 +08:00
Robert Jördens 0b32d9946a worker: trust that h5py encodes strings 2016-04-05 18:03:55 +08:00
Robert Jördens 690eb8c304 worker: trust that h5py maps all types as we want 2016-04-05 18:03:55 +08:00
Robert Jördens 4ba07e01d1 test_h5types: also test ndarrays 2016-04-05 18:03:55 +08:00
Robert Jördens a52b8fa3da test_h5types: use in-memory files 2016-04-05 18:03:55 +08:00
Robert Jördens 138271e936 RELEASE_NOTES: style 2016-04-05 18:03:46 +08:00
Robert Jördens e1f3968a4a worker, hdf5: move datasets to subgroup 2016-04-05 18:03:04 +08:00
Sebastien Bourdeauducq 4f589a7277 artiq_flash: clear error message when bin directory is absent 2016-04-05 16:09:53 +08:00
Sebastien Bourdeauducq 7081a11db6 pyqtgraph dock patch is not needed anymore 2016-04-05 15:24:24 +08:00
Sebastien Bourdeauducq cff01274e9 ship examples with package 2016-04-05 14:36:31 +08:00
Sebastien Bourdeauducq f4c5403803 manual/faq: cleanup/update 2016-04-05 14:17:25 +08:00
Sebastien Bourdeauducq eba90c8782 client: add --async option to scan-repository, recommend usage in git post-receive 2016-04-04 22:18:29 +08:00
Sebastien Bourdeauducq f9db7e472b examples/arguments_demo: demonstrate arguments from datasets (#368) 2016-04-02 23:08:14 +08:00
Sebastien Bourdeauducq b095c94919 master/worker_impl: use ParentDatasetDB in examine mode. Closes #368 2016-04-02 23:08:14 +08:00
Sebastien Bourdeauducq 2f404bae41 master: always expose full set of worker handlers (#368) 2016-04-02 23:08:14 +08:00
Sebastien Bourdeauducq 08549bc3c5 gui/experiment: fix recompute argument error handling 2016-04-02 23:08:14 +08:00
Sebastien Bourdeauducq 0808db6b40 Merge branch 'release-1' of github.com:m-labs/artiq into release-1 2016-04-02 22:32:27 +08:00
Sebastien Bourdeauducq ee5eb57645 RELEASE_NOTES: 1.0rc2 2016-04-02 22:31:58 +08:00
whitequark 3e6e8c6b51 Revert "test: XFAIL lit/devirtualization/*."
This reverts commit 63121e4faf.
2016-04-02 09:54:35 +00:00
whitequark 0eae25cae7 Revert "conda: simplify llvmlite-artiq version requirement"
This reverts commit 4d54695863.
2016-04-02 09:53:56 +00:00
Sebastien Bourdeauducq 4d54695863 conda: simplify llvmlite-artiq version requirement
Later 0.5.1 packages deleted.
2016-04-02 16:48:30 +08:00
whitequark 63121e4faf test: XFAIL lit/devirtualization/*. 2016-03-31 09:17:34 +00:00
whitequark aee3def228 again 2016-03-31 09:09:53 +00:00
Sebastien Bourdeauducq be61c7fa98 again 2016-03-31 16:49:20 +08:00
Sebastien Bourdeauducq 4467016de8 conda: try another way of specifying build number 2016-03-31 16:44:58 +08:00
Sebastien Bourdeauducq 56952aeb74 conda: restrict llvmlite-artiq 2016-03-31 16:36:14 +08:00
Robert Jördens 19e259e5ee doc: conda channel -> label 2016-03-31 10:30:06 +02:00
Robert Jördens 822c6545b5 doc: add note about xcb (closes #361) 2016-03-31 10:29:58 +02:00
Sebastien Bourdeauducq f4dd37924e conda: restrict to llvm-or1k 3.5 2016-03-31 16:17:35 +08:00
Sebastien Bourdeauducq eeba804095 manual: QC2 FMC voltage 2016-03-31 10:47:05 +08:00
Sebastien Bourdeauducq b761a824ee runtime: fix ddstest help (#365) 2016-03-31 10:27:41 +08:00
Sebastien Bourdeauducq 06e626024b gui: setParent(None) before deleteLater() to remove dock appears unnecessary and causes memory corruption on Windows. Closes #362 2016-03-30 11:39:36 +08:00
Sebastien Bourdeauducq 72da5cc0de gui: do 60114447 properly 2016-03-30 01:48:25 +08:00
Sebastien Bourdeauducq b64cea0a79 gui: log error and bail out on artiq_gui.pyon write failure (#360) 2016-03-30 01:45:37 +08:00
Sebastien Bourdeauducq 7cff4977b4 gui/applets: use a better default size, make minimum size proportional to font 2016-03-29 17:11:04 +08:00
Sebastien Bourdeauducq ddf6ec433e gui: better default layout 2016-03-29 17:11:04 +08:00
Sebastien Bourdeauducq ac0f62628d environment,worker_db: mutate datasets from experiments via dedicated method instead of Notifier. Closes #345 2016-03-29 17:11:04 +08:00
Sebastien Bourdeauducq 1884b22528 doc/tutorial: add missing type annotation in LED example. Closes #356 2016-03-29 14:54:05 +08:00
Sebastien Bourdeauducq e6da8f778e doc/dds: fix init timing margin 2016-03-29 12:01:22 +08:00
Sebastien Bourdeauducq 74b71e5f64 doc: fix comment about when and how DDS init should be done. Closes #353 2016-03-29 11:11:13 +08:00
Robert Jördens b04b5c8239 scanwidget: handle min, max, suffix (closes #352) 2016-03-29 00:55:27 +08:00
Robert Jördens 9de11dd958 RELEASE_NOTES: pipistrello speed change 2016-03-25 13:25:04 +01:00
Sebastien Bourdeauducq 027aa5d66a doc: update flterm instructions. Closes #346 2016-03-25 20:11:09 +08:00
Sebastien Bourdeauducq 4e0e8341ca gui/log: split lines correctly 2016-03-25 20:05:13 +08:00
Sebastien Bourdeauducq 69a531edf4 gui/log: send Qt model notifications correctly 2016-03-25 20:05:08 +08:00
Sebastien Bourdeauducq 74b3c47614 protocols/pc_rpc: short_exc_info 2016-03-25 20:04:53 +08:00
Sebastien Bourdeauducq 7bdec1b93b master/worker: use only first line in short_exc_info 2016-03-25 20:03:27 +08:00
Sebastien Bourdeauducq 5d5a4433a7 gui: redesign table/trees to avoid slow and buggy qt/pyqt autosize. Closes #182. Closes #187. 2016-03-25 20:03:22 +08:00
Robert Jördens 358d2a6ba3 i2c: fix variable name (closes #347) 2016-03-25 12:52:18 +01:00
Sebastien Bourdeauducq bbef353057 protocols/pipe_ipc: raise line length limit 2016-03-23 15:14:24 +08:00
160 changed files with 3332 additions and 2065 deletions

.gitignore

@@ -17,12 +17,11 @@ __pycache__/
 /misoc_*/
 /artiq/test/results
-/artiq/test/h5types.h5
-/examples/master/results
-/examples/master/last_rid.pyon
-/examples/master/dataset_db.pyon
-/examples/sim/results
-/examples/sim/dataset_db.pyon
+/artiq/examples/master/results
+/artiq/examples/master/last_rid.pyon
+/artiq/examples/master/dataset_db.pyon
+/artiq/examples/sim/results
+/artiq/examples/sim/dataset_db.pyon

 # recommended location for testbed
 /run
@@ -31,5 +30,4 @@ __pycache__/
 /last_rid.pyon
 /dataset_db.pyon
 /device_db.pyon
-/h5types.h5
 /test*.py

@@ -1,4 +1,5 @@
 graft artiq/runtime
+graft artiq/examples
 include artiq/gui/logo.svg
 include versioneer.py
 include artiq/_version.py

@@ -1,17 +1,32 @@
+.. Always keep doc/manual/introduction.rst synchronized with this file, with the exception of the logo.
+
 .. image:: doc/logo/artiq.png

-ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is a
-next-generation control system for quantum information experiments. It is
-developed in partnership with the Ion Storage Group at NIST, and its
-applicability reaches beyond ion trapping.
+ARTIQ (Advanced Real-Time Infrastructure for Quantum physics) is the next-generation control system for quantum information experiments.
+It is developed by `M-Labs <https://m-labs.hk>`_ for and in partnership with the `Ion Storage Group at NIST <http://www.nist.gov/pml/div688/grp10/index.cfm>`_ as free software.
+It is offered to the entire research community as a solution equally applicable to other challenging control tasks outside the field of ion trapping.

-The system features a high-level programming language that helps describing
-complex experiments, which is compiled and executed on dedicated hardware with
-nanosecond timing resolution and sub-microsecond latency.
+The system features a high-level programming language that helps describing complex experiments, which is compiled and executed on dedicated hardware with nanosecond timing resolution and sub-microsecond latency. It includes graphical user interfaces to parametrize and schedule experiments and to visualize and explore the results.

-Technologies employed include Python, Migen, MiSoC/mor1kx, LLVM and llvmlite.
+ARTIQ uses FPGA hardware to perform its time-critical tasks.
+It is designed to be portable to hardware platforms from different vendors and FPGA manufacturers.
+Currently, one configuration of a `low-cost open hardware FPGA board <http://pipistrello.saanlima.com/>`_ and several different configurations of a `high-end FPGA evaluation kit <http://www.xilinx.com/products/boards-and-kits/ek-k7-kc705-g.html>`_ are used and supported.
+Any of these FPGA platforms can be combined with any number of additional peripherals, either already accessible from ARTIQ or made accessible with little effort.

-Website:
-https://m-labs.hk/artiq
+Custom hardware components with widely extended capabilities and advanced support for scalable and fully distributed real-time control of experiments `are being designed <https://github.com/m-labs/artiq-hardware>`_.

-Copyright (C) 2014-2016 M-Labs Limited. Licensed under GNU GPL version 3.
+ARTIQ and its dependencies are available in the form of `conda packages <https://conda.anaconda.org/m-labs/label/main>`_ for both Linux and Windows.
+Packages containing pre-compiled binary images to be loaded onto the hardware platforms are supplied for each configuration.
+Like any open source software ARTIQ can equally be built and installed directly from `source <https://github.com/m-labs/artiq>`_.
+
+ARTIQ is supported by M-Labs and developed openly.
+Components, features, fixes, improvements, and extensions are funded by and developed for the partnering research groups.
+
+Technologies employed include `Python <https://www.python.org/>`_, `Migen <https://github.com/m-labs/migen>`_, `MiSoC <https://github.com/m-labs/misoc>`_/`mor1kx <https://github.com/openrisc/mor1kx>`_, `LLVM <http://llvm.org/>`_/`llvmlite <https://github.com/numba/llvmlite>`_, and `Qt5 <http://www.qt.io/>`_.
+
+Website: https://m-labs.hk/artiq
+
+`Cite ARTIQ <http://dx.doi.org/10.5281/zenodo.51303>`_ as ``Bourdeauducq, Sébastien et al. (2016). ARTIQ 1.0. Zenodo. 10.5281/zenodo.51303``.
+
+Copyright (C) 2014-2016 M-Labs Limited.
+Licensed under GNU GPL version 3 or any later version.

@@ -3,6 +3,61 @@
 Release notes
 =============

+1.3
+---
+
+No further notes.
+
+
+1.2
+---
+
+No further notes.
+
+
+1.1
+---
+
+* TCA6424A.set converts the "outputs" value to little-endian before programming
+  it into the registers.
+
+
+1.0
+---
+
+No further notes.
+
+
+1.0rc4
+------
+
+* setattr_argument and setattr_device add their key to kernel_invariants.
+
+
+1.0rc3
+------
+
+* The HDF5 format has changed.
+
+  * The datasets are located in the HDF5 subgroup ``datasets``.
+  * Datasets are now stored without additional type conversions and annotations
+    from ARTIQ, trusting that h5py maps and converts types between HDF5 and
+    python/numpy "as expected".
+
+* NumberValue now returns an integer if ``ndecimals`` = 0, ``scale`` = 1 and
+  ``step`` is integer.
+
+
+1.0rc2
+------
+
+* The CPU speed in the pipistrello gateware has been reduced from 83 1/3 MHz to
+  75 MHz. This will reduce the achievable sustained pulse rate and latency
+  accordingly. ISE was intermittently failing to meet timing (#341).
+* set_dataset in broadcast mode no longer returns a Notifier. Mutating datasets
+  should be done with mutate_dataset instead (#345).
+
+
 1.0rc1
 ------
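As a usage sketch of the mutate_dataset change noted for 1.0rc2 above (the experiment skeleton and dataset name are illustrative, not taken from the repository):

    from artiq.experiment import EnvExperiment

    class SpectrumScan(EnvExperiment):
        def build(self):
            pass

        def run(self):
            n = 100
            self.set_dataset("spectrum", [0.0] * n, broadcast=True)
            for i in range(n):
                counts = float(i)   # stand-in for a measurement
                # set_dataset no longer returns a Notifier; per-point updates
                # go through mutate_dataset instead.
                self.mutate_dataset("spectrum", i, counts)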

@@ -1,6 +1,5 @@
 #!/usr/bin/env python3.5

-import numpy as np
 import PyQt5    # make sure pyqtgraph imports Qt5
 import pyqtgraph

@@ -19,7 +19,7 @@ class XYPlot(pyqtgraph.PlotWidget):
             return
         x = data.get(self.args.x, (False, None))[1]
         if x is None:
-            x = list(range(len(y)))
+            x = np.arange(len(y))
         error = data.get(self.args.error, (False, None))[1]
         fit = data.get(self.args.fit, (False, None))[1]
@@ -42,10 +42,12 @@ class XYPlot(pyqtgraph.PlotWidget):
             # See https://github.com/pyqtgraph/pyqtgraph/issues/211
             if hasattr(error, "__len__") and not isinstance(error, np.ndarray):
                 error = np.array(error)
-            errbars = pg.ErrorBarItem(x=np.array(x), y=np.array(y), height=error)
+            errbars = pyqtgraph.ErrorBarItem(
+                x=np.array(x), y=np.array(y), height=error)
             self.addItem(errbars)
         if fit is not None:
-            self.plot(x, fit)
+            xi = np.argsort(x)
+            self.plot(x[xi], fit[xi])

 def main():

@@ -23,7 +23,7 @@ def _compute_ys(histogram_bins, histograms_counts):

 class XYHistPlot(QtWidgets.QSplitter):
     def __init__(self, args):
         QtWidgets.QSplitter.__init__(self)
-        self.resize(1000,600)
+        self.resize(1000, 600)
         self.setWindowTitle("XY/Histogram")
         self.xy_plot = pyqtgraph.PlotWidget()
@@ -121,7 +121,7 @@ class XYHistPlot(QtWidgets.QSplitter):
             self._set_partial_data(xs, histograms_counts)
         else:
             self._set_full_data(xs, histogram_bins, histograms_counts)

 def main():
     applet = SimpleApplet(XYHistPlot)

@@ -3,7 +3,7 @@ import argparse
 import asyncio
 import os

-from quamash import QEventLoop, QtWidgets, QtGui, QtCore
+from quamash import QEventLoop, QtWidgets, QtCore

 from artiq.protocols.sync_struct import Subscriber, process_mod
 from artiq.protocols import pyon
@@ -34,7 +34,7 @@ class AppletIPCClient(AsyncioChildComm):
             self.close_cb()
         elif reply["action"] != "embed_done":
             logger.error("unexpected action reply to embed request: %s",
-                         action)
+                         reply["action"])
             self.close_cb()

     def fix_initial_size(self):
@@ -60,7 +60,7 @@
                 raise ValueError("unknown action in parent message")
             except:
                 logger.error("error processing parent message",
                              exc_info=True)
                 self.close_cb()

     def subscribe(self, datasets, init_cb, mod_cb):
@@ -78,10 +78,10 @@ class SimpleApplet:
         self.argparser = argparse.ArgumentParser(description=cmd_description)

-        self.argparser.add_argument("--update-delay", type=float,
-                                    default=default_update_delay,
-                                    help="time to wait after a mod (buffering other mods) "
-                                         "before updating (default: %(default).2f)")
+        self.argparser.add_argument(
+            "--update-delay", type=float, default=default_update_delay,
+            help="time to wait after a mod (buffering other mods) "
+                 "before updating (default: %(default).2f)")

         group = self.argparser.add_argument_group("standalone mode (default)")
         group.add_argument(
@@ -93,8 +93,9 @@
             "--port", default=3250, type=int,
             help="TCP port to connect to")

-        self.argparser.add_argument("--embed", default=None,
-                                    help="embed into GUI", metavar="IPC_ADDRESS")
+        self.argparser.add_argument(
+            "--embed", default=None, help="embed into GUI",
+            metavar="IPC_ADDRESS")

         self._arggroup_datasets = self.argparser.add_argument_group("datasets")
@@ -147,7 +148,7 @@
         # testing has shown that the following procedure must be
         # followed exactly on Linux:
         # 1. applet creates widget
         # 2. applet creates native window without showing it, and
         #    gets its ID
         # 3. applet sends the ID to host, host embeds the widget
         # 4. applet shows the widget

@@ -1,2 +1,3 @@
+from .constness import Constness
 from .domination import DominatorTree
 from .devirtualization import Devirtualization

@@ -0,0 +1,30 @@
+"""
+:class:`Constness` checks that no attribute marked
+as constant is ever set.
+"""
+
+from pythonparser import algorithm, diagnostic
+from .. import types
+
+class Constness(algorithm.Visitor):
+    def __init__(self, engine):
+        self.engine = engine
+        self.in_assign = False
+
+    def visit_Assign(self, node):
+        self.visit(node.value)
+        self.in_assign = True
+        self.visit(node.targets)
+        self.in_assign = False
+
+    def visit_AttributeT(self, node):
+        self.generic_visit(node)
+        if self.in_assign:
+            typ = node.value.type.find()
+            if types.is_instance(typ) and node.attr in typ.constant_attributes:
+                diag = diagnostic.Diagnostic("error",
+                    "cannot assign to constant attribute '{attr}' of class '{class}'",
+                    {"attr": node.attr, "class": typ.name},
+                    node.loc)
+                self.engine.process(diag)
+                return

@@ -29,6 +29,10 @@ class ClassDefT(ast.ClassDef):
     _types = ("constructor_type",)

 class FunctionDefT(ast.FunctionDef, scoped):
     _types = ("signature_type",)

+class QuotedFunctionDefT(FunctionDefT):
+    """
+    :ivar flags: (set of str) Code generation flags (see :class:`ir.Function`).
+    """
+
 class ModuleT(ast.Module, scoped):
     pass

@@ -156,15 +156,9 @@ def obj_sequential():
 def fn_watchdog():
     return types.TBuiltinFunction("watchdog")

-def fn_now():
-    return types.TBuiltinFunction("now")
-
 def fn_delay():
     return types.TBuiltinFunction("delay")

-def fn_at():
-    return types.TBuiltinFunction("at")
-
 def fn_now_mu():
     return types.TBuiltinFunction("now_mu")
@@ -255,6 +249,6 @@ def is_allocated(typ):
     return not (is_none(typ) or is_bool(typ) or is_int(typ) or
                 is_float(typ) or is_range(typ) or
                 types._is_pointer(typ) or types.is_function(typ) or
-                types.is_c_function(typ) or types.is_rpc_function(typ) or
+                types.is_c_function(typ) or types.is_rpc(typ) or
                 types.is_method(typ) or types.is_tuple(typ) or
                 types.is_value(typ))

@@ -5,7 +5,7 @@ the references to the host objects and translates the functions
 annotated as ``@kernel`` when they are referenced.
 """

-import sys, os, re, linecache, inspect, textwrap
+import sys, os, re, linecache, inspect, textwrap, types as pytypes
 from collections import OrderedDict, defaultdict

 from pythonparser import ast, algorithm, source, diagnostic, parse_buffer
@@ -16,9 +16,7 @@ from Levenshtein import ratio as similarity, jaro_winkler
 from ..language import core as language_core
 from . import types, builtins, asttyped, prelude
 from .transforms import ASTTypedRewriter, Inferencer, IntMonomorphizer
+from .transforms.asttyped_rewriter import LocalExtractor

-def coredevice_print(x): print(x)

 class ObjectMap:
@@ -54,6 +52,7 @@ class ASTSynthesizer:
         self.object_map, self.type_map, self.value_map = object_map, type_map, value_map
         self.quote_function = quote_function
         self.expanded_from = expanded_from
+        self.diagnostics = []

     def finalize(self):
         self.source_buffer.source = self.source
@@ -101,17 +100,26 @@
             return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(),
                                   begin_loc=begin_loc, end_loc=end_loc,
                                   loc=begin_loc.join(end_loc))
-        elif inspect.isfunction(value) or inspect.ismethod(value):
-            quote_loc = self._add('`')
-            repr_loc = self._add(repr(value))
-            unquote_loc = self._add('`')
-            loc = quote_loc.join(unquote_loc)
-
-            function_name, function_type = self.quote_function(value, self.expanded_from)
-            if function_name is None:
-                return asttyped.QuoteT(value=value, type=function_type, loc=loc)
-            else:
-                return asttyped.NameT(id=function_name, ctx=None, type=function_type, loc=loc)
+        elif inspect.isfunction(value) or inspect.ismethod(value) or \
+                isinstance(value, pytypes.BuiltinFunctionType):
+            if inspect.ismethod(value):
+                quoted_self = self.quote(value.__self__)
+                function_type = self.quote_function(value.__func__, self.expanded_from)
+                method_type = types.TMethod(quoted_self.type, function_type)
+
+                dot_loc = self._add('.')
+                name_loc = self._add(value.__func__.__name__)
+                loc = quoted_self.loc.join(name_loc)
+
+                return asttyped.QuoteT(value=value, type=method_type,
+                                       self_loc=quoted_self.loc, loc=loc)
+            else:
+                function_type = self.quote_function(value, self.expanded_from)
+
+                quote_loc = self._add('`')
+                repr_loc = self._add(repr(value))
+                unquote_loc = self._add('`')
+                loc = quote_loc.join(unquote_loc)
+
+                return asttyped.QuoteT(value=value, type=function_type, loc=loc)
         else:
             quote_loc = self._add('`')
             repr_loc = self._add(repr(value))
@@ -125,6 +133,35 @@
         if typ in self.type_map:
             instance_type, constructor_type = self.type_map[typ]
+
+            if hasattr(value, 'kernel_invariants') and \
+                    value.kernel_invariants != instance_type.constant_attributes:
+                attr_diff = value.kernel_invariants.difference(
+                    instance_type.constant_attributes)
+                if len(attr_diff) > 0:
+                    diag = diagnostic.Diagnostic("warning",
+                        "object {value} of type {typ} declares attribute(s) {attrs} as "
+                        "kernel invariant, but other objects of the same type do not; "
+                        "the invariant annotation on this object will be ignored",
+                        {"value": repr(value),
+                         "typ": types.TypePrinter().name(instance_type, max_depth=0),
+                         "attrs": ", ".join(["'{}'".format(attr) for attr in attr_diff])},
+                        loc)
+                    self.diagnostics.append(diag)
+                attr_diff = instance_type.constant_attributes.difference(
+                    value.kernel_invariants)
+                if len(attr_diff) > 0:
+                    diag = diagnostic.Diagnostic("warning",
+                        "object {value} of type {typ} does not declare attribute(s) {attrs} as "
+                        "kernel invariant, but other objects of the same type do; "
+                        "the invariant annotation on other objects will be ignored",
+                        {"value": repr(value),
+                         "typ": types.TypePrinter().name(instance_type, max_depth=0),
+                         "attrs": ", ".join(["'{}'".format(attr) for attr in attr_diff])},
+                        loc)
+                    self.diagnostics.append(diag)
+                value.kernel_invariants = value.kernel_invariants.intersection(
+                    instance_type.constant_attributes)
         else:
             if issubclass(typ, BaseException):
                 if hasattr(typ, 'artiq_builtin'):
@@ -139,13 +176,16 @@
                 instance_type = types.TInstance("{}.{}".format(typ.__module__, typ.__qualname__),
                                                 OrderedDict())
                 instance_type.attributes['__objectid__'] = builtins.TInt32()

                 constructor_type = types.TConstructor(instance_type)
                 constructor_type.attributes['__objectid__'] = builtins.TInt32()
                 instance_type.constructor = constructor_type

                 self.type_map[typ] = instance_type, constructor_type

+                if hasattr(value, 'kernel_invariants'):
+                    assert isinstance(value.kernel_invariants, set)
+                    instance_type.constant_attributes = value.kernel_invariants
+
         if isinstance(value, type):
             self.value_map[constructor_type].append((value, loc))
             return asttyped.QuoteT(value=value, type=constructor_type,
@@ -155,7 +195,7 @@
             return asttyped.QuoteT(value=value, type=instance_type,
                                    loc=loc)

-    def call(self, function_node, args, kwargs, callback=None):
+    def call(self, callee, args, kwargs, callback=None):
         """
         Construct an AST fragment calling a function specified by
         an AST node `function_node`, with given arguments.
@@ -164,11 +204,11 @@
             callback_node = self.quote(callback)
             cb_begin_loc = self._add("(")

+        callee_node = self.quote(callee)
         arg_nodes = []
         kwarg_nodes = []
         kwarg_locs = []

-        name_loc = self._add(function_node.name)
         begin_loc = self._add("(")
         for index, arg in enumerate(args):
             arg_nodes.append(self.quote(arg))
@@ -189,9 +229,7 @@
             cb_end_loc = self._add(")")

         node = asttyped.CallT(
-            func=asttyped.NameT(id=function_node.name, ctx=None,
-                                type=function_node.signature_type,
-                                loc=name_loc),
+            func=callee_node,
             args=arg_nodes,
             keywords=[ast.keyword(arg=kw, value=value,
                                   arg_loc=arg_loc, equals_loc=equals_loc,
@@ -201,7 +239,7 @@
             starargs=None, kwargs=None,
             type=types.TVar(), iodelay=None, arg_exprs={},
             begin_loc=begin_loc, end_loc=end_loc, star_loc=None, dstar_loc=None,
-            loc=name_loc.join(end_loc))
+            loc=callee_node.loc.join(end_loc))

         if callback is not None:
             node = asttyped.CallT(
@@ -213,19 +251,6 @@
         return node

-    def assign_local(self, var_name, value):
-        name_loc = self._add(var_name)
-        _ = self._add(" ")
-        equals_loc = self._add("=")
-        _ = self._add(" ")
-        value_node = self.quote(value)
-
-        var_node = asttyped.NameT(id=var_name, ctx=None, type=value_node.type,
-                                  loc=name_loc)
-
-        return ast.Assign(targets=[var_node], value=value_node,
-                          op_locs=[equals_loc], loc=name_loc.join(value_node.loc))
-
     def assign_attribute(self, obj, attr_name, value):
         obj_node = self.quote(obj)
         dot_loc = self._add(".")
@@ -259,6 +284,35 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
         self.host_environment = host_environment
         self.quote = quote

+    def visit_quoted_function(self, node, function):
+        extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
+        extractor.visit(node)
+
+        # We quote the defaults so they end up in the global data in LLVM IR.
+        # This way there is no "life before main", i.e. they do not have to be
+        # constructed before the main translated call executes; but the Python
+        # semantics is kept.
+        defaults = function.__defaults__ or ()
+        quoted_defaults = []
+        for default, default_node in zip(defaults, node.args.defaults):
+            quoted_defaults.append(self.quote(default, default_node.loc))
+        node.args.defaults = quoted_defaults
+
+        node = asttyped.QuotedFunctionDefT(
+            typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
+            signature_type=types.TVar(), return_type=types.TVar(),
+            name=node.name, args=node.args, returns=node.returns,
+            body=node.body, decorator_list=node.decorator_list,
+            keyword_loc=node.keyword_loc, name_loc=node.name_loc,
+            arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
+            loc=node.loc)
+
+        try:
+            self.env_stack.append(node.typing_env)
+            return self.generic_visit(node)
+        finally:
+            self.env_stack.pop()
+
     def visit_Name(self, node):
         typ = super()._try_find_name(node.id)
         if typ is not None:
@@ -268,7 +322,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
         else:
             # Try to find this value in the host environment and quote it.
             if node.id == "print":
-                return self.quote(coredevice_print, node.loc)
+                return self.quote(print, node.loc)
             elif node.id in self.host_environment:
                 return self.quote(self.host_environment[node.id], node.loc)
             else:
@@ -371,24 +425,18 @@ class StitchingInferencer(Inferencer):
                 attr_value_type = builtins.TList(builtins.TInt64())

             if attr_value_type is None:
-                # Slow path. We don't know what exactly is the attribute value,
-                # so we quote it only for the error message that may possibly result.
-                ast = self.quote(attr_value, object_loc.expanded_from)
-
-                def proxy_diagnostic(diag):
-                    note = diagnostic.Diagnostic("note",
-                        "while inferring a type for an attribute '{attr}' of a host object",
-                        {"attr": attr_name},
-                        loc)
-                    diag.notes.append(note)
-
-                    self.engine.process(diag)
-
-                proxy_engine = diagnostic.Engine()
-                proxy_engine.process = proxy_diagnostic
-                Inferencer(engine=proxy_engine).visit(ast)
-                IntMonomorphizer(engine=proxy_engine).visit(ast)
-                attr_value_type = ast.type
+                note = diagnostic.Diagnostic("note",
+                    "while inferring a type for an attribute '{attr}' of a host object",
+                    {"attr": attr_name},
+                    loc)
+                with self.engine.context(note):
+                    # Slow path. We don't know what exactly is the attribute value,
+                    # so we quote it only for the error message that may possibly result.
+                    ast = self.quote(attr_value, object_loc.expanded_from)
+                    Inferencer(engine=self.engine).visit(ast)
+                    IntMonomorphizer(engine=self.engine).visit(ast)
+                    attr_value_type = ast.type

         return attributes, attr_value_type
@@ -413,9 +461,8 @@
         if attr_name not in attributes:
             # We just figured out what the type should be. Add it.
             attributes[attr_name] = attr_value_type
-        elif not types.is_rpc_function(attr_value_type):
+        else:
             # Does this conflict with an earlier guess?
-            # RPC function types are exempt because RPCs are dynamically typed.
             try:
                 attributes[attr_name].unify(attr_value_type)
             except types.UnificationError as e:
@@ -431,6 +478,16 @@
         super()._unify_attribute(result_type, value_node, attr_name, attr_loc, loc)

+    def visit_QuoteT(self, node):
+        if inspect.ismethod(node.value):
+            if types.is_rpc(types.get_method_function(node.type)):
+                return
+            self._unify_method_self(method_type=node.type,
+                                    attr_name=node.value.__func__.__name__,
+                                    attr_loc=None,
+                                    loc=node.loc,
+                                    self_loc=node.self_loc)
+
 class TypedtreeHasher(algorithm.Visitor):
     def generic_visit(self, node):
         def freeze(obj):
@@ -465,18 +522,16 @@ class Stitcher:
         self.functions = {}

-        self.function_map = {}
         self.object_map = ObjectMap()
         self.type_map = {}
         self.value_map = defaultdict(lambda: [])

     def stitch_call(self, function, args, kwargs, callback=None):
-        function_node = self._quote_embedded_function(function)
-        self.typedtree.append(function_node)
-
         # We synthesize source code for the initial call so that
         # diagnostics would have something meaningful to display to the user.
         synthesizer = self._synthesizer(self._function_loc(function.artiq_embedded.function))
-        call_node = synthesizer.call(function_node, args, kwargs, callback)
+        call_node = synthesizer.call(function, args, kwargs, callback)
         synthesizer.finalize()
         self.typedtree.append(call_node)
@@ -496,9 +551,13 @@
                 break
             old_typedtree_hash = typedtree_hash

-        # For every host class we embed, add an appropriate constructor
-        # as a global. This is necessary for method lookup, which uses
-        # the getconstructor instruction.
+        # When we have an excess of type information, sometimes we can infer every type
+        # in the AST without discovering every referenced attribute of host objects, so
+        # do one last pass unconditionally.
+        inferencer.visit(self.typedtree)
+
+        # For every host class we embed, fill in the function slots
+        # with their corresponding closures.
         for instance_type, constructor_type in list(self.type_map.values()):
             # Do we have any direct reference to a constructor?
             if len(self.value_map[constructor_type]) > 0:
@@ -509,13 +568,6 @@
             instance, _instance_loc = self.value_map[instance_type][0]
             constructor = type(instance)

-            self.globals[constructor_type.name] = constructor_type
-
-            synthesizer = self._synthesizer()
-            ast = synthesizer.assign_local(constructor_type.name, constructor)
-            synthesizer.finalize()
-
-            self._inject(ast)
for attr in constructor_type.attributes: for attr in constructor_type.attributes:
if types.is_function(constructor_type.attributes[attr]): if types.is_function(constructor_type.attributes[attr]):
synthesizer = self._synthesizer() synthesizer = self._synthesizer()
@ -523,7 +575,6 @@ class Stitcher:
getattr(constructor, attr)) getattr(constructor, attr))
synthesizer.finalize() synthesizer.finalize()
self._inject(ast) self._inject(ast)
# After we have found all functions, synthesize a module to hold them. # After we have found all functions, synthesize a module to hold them.
source_buffer = source.Buffer("", "<synthesized>") source_buffer = source.Buffer("", "<synthesized>")
self.typedtree = asttyped.ModuleT( self.typedtree = asttyped.ModuleT(
@ -541,7 +592,7 @@ class Stitcher:
value_map=self.value_map, value_map=self.value_map,
quote_function=self._quote_function) quote_function=self._quote_function)
def _quote_embedded_function(self, function): def _quote_embedded_function(self, function, flags):
if not hasattr(function, "artiq_embedded"): if not hasattr(function, "artiq_embedded"):
raise ValueError("{} is not an embedded function".format(repr(function))) raise ValueError("{} is not an embedded function".format(repr(function)))
@ -577,33 +628,39 @@ class Stitcher:
# Mangle the name, since we put everything into a single module. # Mangle the name, since we put everything into a single module.
function_node.name = "{}.{}".format(module_name, function.__qualname__) function_node.name = "{}.{}".format(module_name, function.__qualname__)
# Normally, LocalExtractor would populate the typing environment # Record the function in the function map so that LLVM IR generator
# of the module with the function name. However, since we run # can handle quoting it.
# ASTTypedRewriter on the function node directly, we need to do it self.function_map[function] = function_node.name
# explicitly.
function_type = types.TVar()
self.globals[function_node.name] = function_type
# Memoize the function before typing it to handle recursive # Memoize the function type before typing it to handle recursive
# invocations. # invocations.
self.functions[function] = function_node.name, function_type self.functions[function] = types.TVar()
# Rewrite into typed form. # Rewrite into typed form.
asttyped_rewriter = StitchingASTTypedRewriter( asttyped_rewriter = StitchingASTTypedRewriter(
engine=self.engine, prelude=self.prelude, engine=self.engine, prelude=self.prelude,
globals=self.globals, host_environment=host_environment, globals=self.globals, host_environment=host_environment,
quote=self._quote) quote=self._quote)
return asttyped_rewriter.visit(function_node) function_node = asttyped_rewriter.visit_quoted_function(function_node, embedded_function)
function_node.flags = flags
# Add it into our typedtree so that it gets inferenced and codegen'd.
self._inject(function_node)
# Tie the typing knot.
self.functions[function].unify(function_node.signature_type)
return function_node
def _function_loc(self, function): def _function_loc(self, function):
filename = function.__code__.co_filename filename = function.__code__.co_filename
line = function.__code__.co_firstlineno line = function.__code__.co_firstlineno
name = function.__code__.co_name name = function.__code__.co_name
source_line = linecache.getline(filename, line) source_line = linecache.getline(filename, line).lstrip()
while source_line.lstrip().startswith("@"): while source_line.startswith("@") or source_line == "":
line += 1 line += 1
source_line = linecache.getline(filename, line) source_line = linecache.getline(filename, line).lstrip()
if "<lambda>" in function.__qualname__: if "<lambda>" in function.__qualname__:
column = 0 # can't get column of lambda column = 0 # can't get column of lambda
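The loop above walks forward with linecache until it leaves the decorator (and, after this change, blank) lines, so the reported location points at the def statement rather than at a decorator line. A small self-contained sketch of the same idea (it assumes the function is actually defined in the named file):

    import linecache

    def def_line(filename, first_lineno):
        # Skip decorator and blank lines so the location points at `def`.
        line = first_lineno
        text = linecache.getline(filename, line).lstrip()
        while text.startswith("@") or text == "":
            line += 1
            text = linecache.getline(filename, line).lstrip()
        return line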
@ -653,59 +710,44 @@ class Stitcher:
notes=self._call_site_note(loc, is_syscall)) notes=self._call_site_note(loc, is_syscall))
self.engine.process(diag) self.engine.process(diag)
elif param.default is not inspect.Parameter.empty: elif param.default is not inspect.Parameter.empty:
# Try and infer the type from the default value.
# This is tricky, because the default value might not have
# a well-defined type in APython.
# In this case, we bail out, but mention why we do it.
ast = self._quote(param.default, None)

def proxy_diagnostic(diag):
    note = diagnostic.Diagnostic("note",
        "expanded from here while trying to infer a type for an"
        " unannotated optional argument '{argument}' from its default value",
        {"argument": param.name},
        self._function_loc(function))
    diag.notes.append(note)

    note = self._call_site_note(loc, is_syscall)
    if note:
        diag.notes += note

    self.engine.process(diag)

proxy_engine = diagnostic.Engine()
proxy_engine.process = proxy_diagnostic
Inferencer(engine=proxy_engine).visit(ast)
IntMonomorphizer(engine=proxy_engine).visit(ast)

return ast.type

notes = []
notes.append(diagnostic.Diagnostic("note",
    "expanded from here while trying to infer a type for an"
    " unannotated optional argument '{argument}' from its default value",
    {"argument": param.name},
    self._function_loc(function)))
if loc is not None:
    notes.append(self._call_site_note(loc, is_syscall))

with self.engine.context(*notes):
    # Try and infer the type from the default value.
    # This is tricky, because the default value might not have
    # a well-defined type in APython.
    # In this case, we bail out, but mention why we do it.
    ast = self._quote(param.default, None)
    Inferencer(engine=self.engine).visit(ast)
    IntMonomorphizer(engine=self.engine).visit(ast)
    return ast.type
else: else:
# Let the rest of the program decide. # Let the rest of the program decide.
return types.TVar() return types.TVar()
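Both _quote_syscall and _type_of_param lean on inspect.signature to classify host function parameters: positional-or-keyword versus everything else, required versus defaulted. A standalone illustration of that classification (ordinary Python, not the Stitcher API):

    import inspect

    def classify_parameters(fn):
        required, optional, rejected = [], [], []
        for param in inspect.signature(fn).parameters.values():
            if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
                rejected.append(param.name)    # *args, keyword-only, **kwargs
            elif param.default is inspect.Parameter.empty:
                required.append(param.name)
            else:
                optional.append(param.name)
        return required, optional, rejected

    def example(a, b=2, *rest, c=3, **kw):
        pass

    print(classify_parameters(example))    # (['a'], ['b'], ['rest', 'c', 'kw'])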
def _quote_foreign_function(self, function, loc, syscall): def _quote_syscall(self, function, loc):
signature = inspect.signature(function) signature = inspect.signature(function)
arg_types = OrderedDict() arg_types = OrderedDict()
optarg_types = OrderedDict() optarg_types = OrderedDict()
for param in signature.parameters.values(): for param in signature.parameters.values():
if param.kind not in (inspect.Parameter.POSITIONAL_ONLY, if param.kind != inspect.Parameter.POSITIONAL_OR_KEYWORD:
inspect.Parameter.POSITIONAL_OR_KEYWORD): diag = diagnostic.Diagnostic("error",
# We pretend we don't see *args, kwpostargs=..., **kwargs. "system calls must only use positional arguments; '{argument}' isn't",
# Since every method can be still invoked without any arguments {"argument": param.name},
# going into *args and the slots after it, this is always safe, self._function_loc(function),
# if sometimes constraining. notes=self._call_site_note(loc, is_syscall=True))
# self.engine.process(diag)
# Accepting POSITIONAL_ONLY is OK, because the compiler
# desugars the keyword arguments into positional ones internally.
continue
if param.default is inspect.Parameter.empty: if param.default is inspect.Parameter.empty:
arg_types[param.name] = self._type_of_param(function, loc, param, arg_types[param.name] = self._type_of_param(function, loc, param, is_syscall=True)
is_syscall=syscall is not None)
elif syscall is None:
optarg_types[param.name] = self._type_of_param(function, loc, param,
is_syscall=False)
else: else:
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
"system call argument '{argument}' must not have a default value", "system call argument '{argument}' must not have a default value",
@ -716,10 +758,8 @@ class Stitcher:
if signature.return_annotation is not inspect.Signature.empty: if signature.return_annotation is not inspect.Signature.empty:
ret_type = self._extract_annot(function, signature.return_annotation, ret_type = self._extract_annot(function, signature.return_annotation,
"return type", loc, is_syscall=syscall is not None) "return type", loc, is_syscall=True)
elif syscall is None: else:
ret_type = builtins.TNone()
else: # syscall is not None
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
"system call must have a return type annotation", {}, "system call must have a return type annotation", {},
self._function_loc(function), self._function_loc(function),
@ -727,21 +767,35 @@ class Stitcher:
self.engine.process(diag) self.engine.process(diag)
ret_type = types.TVar() ret_type = types.TVar()
if syscall is None:
    function_type = types.TRPCFunction(arg_types, optarg_types, ret_type,
                                       service=self.object_map.store(function))

function_type = types.TCFunction(arg_types, ret_type,
                                 name=function.artiq_embedded.syscall,
                                 flags=function.artiq_embedded.flags)
self.functions[function] = function_type
return function_type
def _quote_rpc(self, callee, loc):
ret_type = builtins.TNone()
if isinstance(callee, pytypes.BuiltinFunctionType):
pass
elif isinstance(callee, pytypes.FunctionType) or isinstance(callee, pytypes.MethodType):
if isinstance(callee, pytypes.FunctionType):
signature = inspect.signature(callee)
else:
# inspect bug?
signature = inspect.signature(callee.__func__)
if signature.return_annotation is not inspect.Signature.empty:
ret_type = self._extract_annot(callee, signature.return_annotation,
"return type", loc, is_syscall=False)
else:
    function_type = types.TCFunction(arg_types, ret_type,
                                     name=syscall)

self.functions[function] = None, function_type

return None, function_type

else:
    assert False

function_type = types.TRPC(ret_type, service=self.object_map.store(callee))
self.functions[callee] = function_type

return function_type
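For an RPC only the return type matters to the kernel, and it is taken from the host annotation when one is present; bound methods go through __func__ as shown above. A small sketch of that extraction using only the standard library (the Logger class is just an example):

    import inspect

    def rpc_return_annotation(callee):
        # Builtins expose no useful signature; treat them as returning None.
        if inspect.isbuiltin(callee):
            return None
        fn = callee.__func__ if inspect.ismethod(callee) else callee
        annotation = inspect.signature(fn).return_annotation
        return None if annotation is inspect.Signature.empty else annotation

    class Logger:
        def flush(self) -> bool:
            return True

    print(rpc_return_annotation(Logger().flush))    # <class 'bool'>
    print(rpc_return_annotation(print))             # None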
def _quote_function(self, function, loc): def _quote_function(self, function, loc):
if function in self.functions: if function not in self.functions:
result = self.functions[function]
else:
if hasattr(function, "artiq_embedded"): if hasattr(function, "artiq_embedded"):
if function.artiq_embedded.function is not None: if function.artiq_embedded.function is not None:
if function.__name__ == "<lambda>": if function.__name__ == "<lambda>":
@ -766,37 +820,30 @@ class Stitcher:
notes=[note]) notes=[note])
self.engine.process(diag) self.engine.process(diag)
# Insert the typed AST for the new function and restart inference. self._quote_embedded_function(function,
# It doesn't really matter where we insert as long as it is before flags=function.artiq_embedded.flags)
# the final call.
function_node = self._quote_embedded_function(function)
self._inject(function_node)
result = function_node.name, self.globals[function_node.name]
elif function.artiq_embedded.syscall is not None: elif function.artiq_embedded.syscall is not None:
# Insert a storage-less global whose type instructs the compiler # Insert a storage-less global whose type instructs the compiler
# to perform a system call instead of a regular call. # to perform a system call instead of a regular call.
result = self._quote_foreign_function(function, loc, self._quote_syscall(function, loc)
syscall=function.artiq_embedded.syscall)
elif function.artiq_embedded.forbidden is not None: elif function.artiq_embedded.forbidden is not None:
diag = diagnostic.Diagnostic("fatal", diag = diagnostic.Diagnostic("fatal",
"this function cannot be called as an RPC", {}, "this function cannot be called as an RPC", {},
self._function_loc(function), self._function_loc(function),
notes=self._call_site_note(loc, is_syscall=True)) notes=self._call_site_note(loc, is_syscall=False))
self.engine.process(diag) self.engine.process(diag)
else: else:
assert False assert False
else: else:
# Insert a storage-less global whose type instructs the compiler self._quote_rpc(function, loc)
# to perform an RPC instead of a regular call.
result = self._quote_foreign_function(function, loc, syscall=None)
function_name, function_type = result return self.functions[function]
if types.is_rpc_function(function_type):
function_type = types.instantiate(function_type)
return function_name, function_type
def _quote(self, value, loc): def _quote(self, value, loc):
synthesizer = self._synthesizer(loc) synthesizer = self._synthesizer(loc)
node = synthesizer.quote(value) node = synthesizer.quote(value)
synthesizer.finalize() synthesizer.finalize()
if len(synthesizer.diagnostics) > 0:
for warning in synthesizer.diagnostics:
self.engine.process(warning)
return node return node


@ -23,12 +23,19 @@ def is_basic_block(typ):
return isinstance(typ, TBasicBlock) return isinstance(typ, TBasicBlock)
class TOption(types.TMono): class TOption(types.TMono):
def __init__(self, inner): def __init__(self, value):
super().__init__("option", {"inner": inner}) super().__init__("option", {"value": value})
def is_option(typ): def is_option(typ):
return isinstance(typ, TOption) return isinstance(typ, TOption)
class TKeyword(types.TMono):
def __init__(self, value):
super().__init__("keyword", {"value": value})
def is_keyword(typ):
return isinstance(typ, TKeyword)
class TExceptionTypeInfo(types.TMono): class TExceptionTypeInfo(types.TMono):
def __init__(self): def __init__(self):
super().__init__("exntypeinfo") super().__init__("exntypeinfo")
@ -423,6 +430,12 @@ class Function:
:ivar is_internal: :ivar is_internal:
(bool) if True, the function should not be accessible from outside (bool) if True, the function should not be accessible from outside
the module it is contained in the module it is contained in
:ivar is_cold:
(bool) if True, the function should be considered rarely called
:ivar is_generated:
(bool) if True, the function will not appear in backtraces
:ivar flags: (set of str) Code generation flags.
Flag ``fast-math`` is the equivalent of gcc's ``-ffast-math``.
""" """
def __init__(self, typ, name, arguments, loc=None): def __init__(self, typ, name, arguments, loc=None):
@ -431,13 +444,16 @@ class Function:
self.next_name = 1 self.next_name = 1
self.set_arguments(arguments) self.set_arguments(arguments)
self.is_internal = False self.is_internal = False
self.is_cold = False
self.is_generated = False
self.flags = {}
def _remove_name(self, name): def _remove_name(self, name):
self.names.remove(name) self.names.remove(name)
def _add_name(self, base_name): def _add_name(self, base_name):
if base_name == "": if base_name == "":
name = "v.{}".format(self.next_name) name = "UNN.{}".format(self.next_name)
self.next_name += 1 self.next_name += 1
elif base_name in self.names: elif base_name in self.names:
name = "{}.{}".format(base_name, self.next_name) name = "{}.{}".format(base_name, self.next_name)
@ -647,38 +663,6 @@ class SetLocal(Instruction):
def value(self): def value(self):
return self.operands[1] return self.operands[1]
class GetConstructor(Instruction):
"""
An instruction that loads a local variable with the given type
from an environment, possibly going through multiple levels of indirection.
:ivar var_name: (string) variable name
"""
"""
:param env: (:class:`Value`) local environment
:param var_name: (string) local variable name
:param var_type: (:class:`types.Type`) local variable type
"""
def __init__(self, env, var_name, var_type, name=""):
assert isinstance(env, Value)
assert isinstance(env.type, TEnvironment)
assert isinstance(var_name, str)
assert isinstance(var_type, types.Type)
super().__init__([env], var_type, name)
self.var_name = var_name
def copy(self, mapper):
self_copy = super().copy(mapper)
self_copy.var_name = self.var_name
return self_copy
def opcode(self):
return "getconstructor({})".format(repr(self.var_name))
def environment(self):
return self.operands[0]
class GetAttr(Instruction): class GetAttr(Instruction):
""" """
An instruction that loads an attribute from an object,
@ -697,8 +681,12 @@ class GetAttr(Instruction):
if isinstance(attr, int): if isinstance(attr, int):
assert isinstance(obj.type, types.TTuple) assert isinstance(obj.type, types.TTuple)
typ = obj.type.elts[attr] typ = obj.type.elts[attr]
else: elif attr in obj.type.attributes:
typ = obj.type.attributes[attr] typ = obj.type.attributes[attr]
else:
typ = obj.type.constructor.attributes[attr]
if types.is_function(typ) or types.is_rpc(typ):
typ = types.TMethod(obj.type, typ)
super().__init__([obj], typ, name) super().__init__([obj], typ, name)
self.attr = attr self.attr = attr
@ -897,9 +885,11 @@ class Builtin(Instruction):
""" """
:param op: (string) operation name :param op: (string) operation name
""" """
def __init__(self, op, operands, typ, name=""): def __init__(self, op, operands, typ, name=None):
assert isinstance(op, str) assert isinstance(op, str)
for operand in operands: assert isinstance(operand, Value) for operand in operands: assert isinstance(operand, Value)
if name is None:
name = "BLT.{}".format(op)
super().__init__(operands, typ, name) super().__init__(operands, typ, name)
self.op = op self.op = op
@ -948,6 +938,8 @@ class Call(Instruction):
iodelay expressions for values of arguments iodelay expressions for values of arguments
:ivar static_target_function: (:class:`Function` or None) :ivar static_target_function: (:class:`Function` or None)
statically resolved callee statically resolved callee
:ivar is_cold: (bool)
the callee function is cold
""" """
""" """
@ -964,6 +956,7 @@ class Call(Instruction):
super().__init__([func] + args, func.type.ret, name) super().__init__([func] + args, func.type.ret, name)
self.arg_exprs = arg_exprs self.arg_exprs = arg_exprs
self.static_target_function = None self.static_target_function = None
self.is_cold = False
def copy(self, mapper): def copy(self, mapper):
self_copy = super().copy(mapper) self_copy = super().copy(mapper)
@ -1212,6 +1205,8 @@ class Invoke(Terminator):
iodelay expressions for values of arguments iodelay expressions for values of arguments
:ivar static_target_function: (:class:`Function` or None) :ivar static_target_function: (:class:`Function` or None)
statically resolved callee statically resolved callee
:ivar is_cold: (bool)
the callee function is cold
""" """
""" """
@ -1232,6 +1227,7 @@ class Invoke(Terminator):
super().__init__([func] + args + [normal, exn], func.type.ret, name) super().__init__([func] + args + [normal, exn], func.type.ret, name)
self.arg_exprs = arg_exprs self.arg_exprs = arg_exprs
self.static_target_function = None self.static_target_function = None
self.is_cold = False
def copy(self, mapper): def copy(self, mapper):
self_copy = super().copy(mapper) self_copy = super().copy(mapper)
@ -1329,7 +1325,7 @@ class Delay(Terminator):
:param target: (:class:`BasicBlock`) branch target :param target: (:class:`BasicBlock`) branch target
""" """
def __init__(self, interval, decomposition, target, name=""): def __init__(self, interval, decomposition, target, name=""):
assert isinstance(decomposition, Call) or \ assert isinstance(decomposition, Call) or isinstance(decomposition, Invoke) or \
isinstance(decomposition, Builtin) and decomposition.op in ("delay", "delay_mu") isinstance(decomposition, Builtin) and decomposition.op in ("delay", "delay_mu")
assert isinstance(target, BasicBlock) assert isinstance(target, BasicBlock)
super().__init__([decomposition, target], builtins.TNone(), name) super().__init__([decomposition, target], builtins.TNone(), name)


@ -19,6 +19,7 @@ class Source:
else: else:
self.engine = engine self.engine = engine
self.function_map = {}
self.object_map = None self.object_map = None
self.type_map = {} self.type_map = {}
@ -45,6 +46,7 @@ class Source:
class Module: class Module:
def __init__(self, src, ref_period=1e-6): def __init__(self, src, ref_period=1e-6):
self.engine = src.engine self.engine = src.engine
self.function_map = src.function_map
self.object_map = src.object_map self.object_map = src.object_map
self.type_map = src.type_map self.type_map = src.type_map
@ -54,6 +56,7 @@ class Module:
escape_validator = validators.EscapeValidator(engine=self.engine) escape_validator = validators.EscapeValidator(engine=self.engine)
iodelay_estimator = transforms.IODelayEstimator(engine=self.engine, iodelay_estimator = transforms.IODelayEstimator(engine=self.engine,
ref_period=ref_period) ref_period=ref_period)
constness = analyses.Constness(engine=self.engine)
artiq_ir_generator = transforms.ARTIQIRGenerator(engine=self.engine, artiq_ir_generator = transforms.ARTIQIRGenerator(engine=self.engine,
module_name=src.name, module_name=src.name,
ref_period=ref_period) ref_period=ref_period)
@ -69,6 +72,7 @@ class Module:
monomorphism_validator.visit(src.typedtree) monomorphism_validator.visit(src.typedtree)
escape_validator.visit(src.typedtree) escape_validator.visit(src.typedtree)
iodelay_estimator.visit_fixpoint(src.typedtree) iodelay_estimator.visit_fixpoint(src.typedtree)
constness.visit(src.typedtree)
devirtualization.visit(src.typedtree) devirtualization.visit(src.typedtree)
self.artiq_ir = artiq_ir_generator.visit(src.typedtree) self.artiq_ir = artiq_ir_generator.visit(src.typedtree)
artiq_ir_generator.annotate_calls(devirtualization) artiq_ir_generator.annotate_calls(devirtualization)
@ -80,7 +84,7 @@ class Module:
"""Compile the module to LLVM IR for the specified target.""" """Compile the module to LLVM IR for the specified target."""
llvm_ir_generator = transforms.LLVMIRGenerator( llvm_ir_generator = transforms.LLVMIRGenerator(
engine=self.engine, module_name=self.name, target=target, engine=self.engine, module_name=self.name, target=target,
object_map=self.object_map, type_map=self.type_map) function_map=self.function_map, object_map=self.object_map, type_map=self.type_map)
return llvm_ir_generator.process(self.artiq_ir, attribute_writeback=True) return llvm_ir_generator.process(self.artiq_ir, attribute_writeback=True)
def entry_point(self): def entry_point(self):


@ -36,9 +36,7 @@ def globals():
"watchdog": builtins.fn_watchdog(), "watchdog": builtins.fn_watchdog(),
# ARTIQ time management functions # ARTIQ time management functions
"now": builtins.fn_now(),
"delay": builtins.fn_delay(), "delay": builtins.fn_delay(),
"at": builtins.fn_at(),
"now_mu": builtins.fn_now_mu(), "now_mu": builtins.fn_now_mu(),
"delay_mu": builtins.fn_delay_mu(), "delay_mu": builtins.fn_delay_mu(),
"at_mu": builtins.fn_at_mu(), "at_mu": builtins.fn_at_mu(),


@ -46,12 +46,14 @@ class RunTool:
def _dump(target, kind, suffix, content): def _dump(target, kind, suffix, content):
if target is not None: if target is not None:
print("====== {} DUMP ======".format(kind.upper()), file=sys.stderr) print("====== {} DUMP ======".format(kind.upper()), file=sys.stderr)
content_bytes = bytes(content(), 'utf-8') content_value = content()
if isinstance(content_value, str):
content_value = bytes(content_value, 'utf-8')
if target == "": if target == "":
file = tempfile.NamedTemporaryFile(suffix=suffix, delete=False) file = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
else: else:
file = open(target + suffix, "wb") file = open(target + suffix, "wb")
file.write(content_bytes) file.write(content_value)
file.close() file.close()
print("{} dumped as {}".format(kind, file.name), file=sys.stderr) print("{} dumped as {}".format(kind, file.name), file=sys.stderr)
@ -79,6 +81,48 @@ class Target:
def __init__(self): def __init__(self):
self.llcontext = ll.Context() self.llcontext = ll.Context()
def target_machine(self):
lltarget = llvm.Target.from_triple(self.triple)
llmachine = lltarget.create_target_machine(
features=",".join(["+{}".format(f) for f in self.features]),
reloc="pic", codemodel="default")
llmachine.set_verbose(True)
return llmachine
def optimize(self, llmodule):
llmachine = self.target_machine()
llpassmgr = llvm.create_module_pass_manager()
llmachine.target_data.add_pass(llpassmgr)
llmachine.add_analysis_passes(llpassmgr)
# Register our alias analysis passes.
llpassmgr.add_basic_alias_analysis_pass()
llpassmgr.add_type_based_alias_analysis_pass()
# Start by cleaning up after our codegen and exposing as much
# information to LLVM as possible.
llpassmgr.add_constant_merge_pass()
llpassmgr.add_cfg_simplification_pass()
llpassmgr.add_instruction_combining_pass()
llpassmgr.add_sroa_pass()
llpassmgr.add_dead_code_elimination_pass()
llpassmgr.add_function_attrs_pass()
llpassmgr.add_global_optimizer_pass()
# Now, actually optimize the code.
llpassmgr.add_function_inlining_pass(275)
llpassmgr.add_ipsccp_pass()
llpassmgr.add_instruction_combining_pass()
llpassmgr.add_gvn_pass()
llpassmgr.add_cfg_simplification_pass()
llpassmgr.add_licm_pass()
# Clean up after optimizing.
llpassmgr.add_dead_arg_elimination_pass()
llpassmgr.add_global_dce_pass()
llpassmgr.run(llmodule)
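The explicit pass list above replaces the generic PassManagerBuilder pipeline that is removed further down. For orientation, a minimal llvmlite sketch of running a module pass manager with a few of the same cleanup passes over a toy module (exact pass availability depends on the llvmlite version):

    import llvmlite.binding as llvm

    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()

    llmod = llvm.parse_assembly(r"""
    define i32 @add1(i32 %x) {
      %y = add i32 %x, 1
      ret i32 %y
    }
    """)
    llmod.verify()

    llpassmgr = llvm.create_module_pass_manager()
    llpassmgr.add_instruction_combining_pass()
    llpassmgr.add_cfg_simplification_pass()
    llpassmgr.add_dead_code_elimination_pass()
    llpassmgr.run(llmod)

    print(llmod)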
def compile(self, module): def compile(self, module):
"""Compile the module to a relocatable object for this target.""" """Compile the module to a relocatable object for this target."""
@ -102,14 +146,7 @@ class Target:
_dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll", _dump(os.getenv("ARTIQ_DUMP_UNOPT_LLVM"), "LLVM IR (generated)", "_unopt.ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
llpassmgrbuilder = llvm.create_pass_manager_builder() self.optimize(llparsedmod)
llpassmgrbuilder.opt_level = 2 # -O2
llpassmgrbuilder.size_level = 1 # -Os
llpassmgrbuilder.inlining_threshold = 75 # -Os threshold
llpassmgr = llvm.create_module_pass_manager()
llpassmgrbuilder.populate(llpassmgr)
llpassmgr.run(llparsedmod)
_dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll", _dump(os.getenv("ARTIQ_DUMP_LLVM"), "LLVM IR (optimized)", ".ll",
lambda: str(llparsedmod)) lambda: str(llparsedmod))
@ -117,10 +154,7 @@ class Target:
return llparsedmod return llparsedmod
def assemble(self, llmodule): def assemble(self, llmodule):
lltarget = llvm.Target.from_triple(self.triple) llmachine = self.target_machine()
llmachine = lltarget.create_target_machine(
features=",".join(["+{}".format(f) for f in self.features]),
reloc="pic", codemodel="default")
_dump(os.getenv("ARTIQ_DUMP_ASM"), "Assembly", ".s", _dump(os.getenv("ARTIQ_DUMP_ASM"), "Assembly", ".s",
lambda: llmachine.emit_assembly(llmodule)) lambda: llmachine.emit_assembly(llmodule))
@ -194,6 +228,7 @@ class NativeTarget(Target):
class OR1KTarget(Target): class OR1KTarget(Target):
triple = "or1k-linux" triple = "or1k-linux"
data_layout = "E-m:e-p:32:32-i64:32-f64:32-v64:32-v128:32-a:0:32-n32" data_layout = "E-m:e-p:32:32-i8:8:8-i16:16:16-i64:32:32-" \
"f64:32:32-v64:32:32-v128:32:32-a0:0:32-n32"
features = ["mul", "div", "ffl1", "cmov", "addc"] features = ["mul", "div", "ffl1", "cmov", "addc"]
print_function = "core_log" print_function = "core_log"


@ -3,8 +3,14 @@ import sys, os
from artiq.master.databases import DeviceDB from artiq.master.databases import DeviceDB
from artiq.master.worker_db import DeviceManager from artiq.master.worker_db import DeviceManager
import artiq.coredevice.core
from artiq.coredevice.core import Core, CompileError from artiq.coredevice.core import Core, CompileError
def _render_diagnostic(diagnostic, colored):
return "\n".join(diagnostic.render(only_line=True))
artiq.coredevice.core._render_diagnostic = _render_diagnostic
def main(): def main():
if len(sys.argv) > 1 and sys.argv[1] == "+diag": if len(sys.argv) > 1 and sys.argv[1] == "+diag":
del sys.argv[1] del sys.argv[1]
@ -35,7 +41,6 @@ def main():
print(core.comm.get_log()) print(core.comm.get_log())
core.comm.clear_log() core.comm.clear_log()
except CompileError as error: except CompileError as error:
print("\n".join(error.__cause__.diagnostic.render(only_line=True)))
if not diag: if not diag:
exit(1) exit(1)


@ -28,7 +28,7 @@ def main():
llmachine = llvm.Target.from_triple(target.triple).create_target_machine() llmachine = llvm.Target.from_triple(target.triple).create_target_machine()
lljit = llvm.create_mcjit_compiler(llparsedmod, llmachine) lljit = llvm.create_mcjit_compiler(llparsedmod, llmachine)
llmain = lljit.get_pointer_to_global(llparsedmod.get_function(llmod.name + ".__modinit__")) llmain = lljit.get_function_address(llmod.name + ".__modinit__")
ctypes.CFUNCTYPE(None)(llmain)() ctypes.CFUNCTYPE(None)(llmain)()
if __name__ == "__main__": if __name__ == "__main__":


@ -32,12 +32,16 @@ def main():
experiment = testcase_vars["Benchmark"](dmgr) experiment = testcase_vars["Benchmark"](dmgr)
stitcher = Stitcher(core=experiment.core, dmgr=dmgr) stitcher = Stitcher(core=experiment.core, dmgr=dmgr)
stitcher.stitch_call(experiment.run, (experiment,), {}) stitcher.stitch_call(experiment.run, (), {})
stitcher.finalize() stitcher.finalize()
return stitcher return stitcher
stitcher = embed() stitcher = embed()
module = Module(stitcher) module = Module(stitcher)
target = OR1KTarget()
llvm_ir = target.compile(module)
elf_obj = target.assemble(llvm_ir)
elf_shlib = target.link([elf_obj], init_fn=module.entry_point())
benchmark(lambda: embed(), benchmark(lambda: embed(),
"ARTIQ embedding") "ARTIQ embedding")
@ -45,8 +49,17 @@ def main():
benchmark(lambda: Module(stitcher), benchmark(lambda: Module(stitcher),
"ARTIQ transforms and validators") "ARTIQ transforms and validators")
benchmark(lambda: OR1KTarget().compile_and_link([module]), benchmark(lambda: target.compile(module),
"LLVM optimization and linking") "LLVM optimizations")
benchmark(lambda: target.assemble(llvm_ir),
"LLVM machine code emission")
benchmark(lambda: target.link([elf_obj], init_fn=module.entry_point()),
"Linking")
benchmark(lambda: target.strip(elf_shlib),
"Stripping debug information")
if __name__ == "__main__": if __name__ == "__main__":
main() main()


@ -224,7 +224,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
finally: finally:
self.current_class = old_class self.current_class = old_class
def visit_function(self, node, is_lambda, is_internal): def visit_function(self, node, is_lambda=False, is_internal=False, is_quoted=False,
flags={}):
if is_lambda: if is_lambda:
name = "lambda@{}:{}".format(node.loc.line(), node.loc.column()) name = "lambda@{}:{}".format(node.loc.line(), node.loc.column())
typ = node.type.find() typ = node.type.find()
@ -234,41 +235,50 @@ class ARTIQIRGenerator(algorithm.Visitor):
try: try:
defaults = [] defaults = []
for arg_name, default_node in zip(typ.optargs, node.args.defaults):
    default = self.visit(default_node)
    env_default_name = \
        self.current_env.type.add("default$" + arg_name, default.type)
    self.append(ir.SetLocal(self.current_env, env_default_name, default))
    defaults.append(env_default_name)

if not is_quoted:
    for arg_name, default_node in zip(typ.optargs, node.args.defaults):
        default = self.visit(default_node)
        env_default_name = \
            self.current_env.type.add("$default." + arg_name, default.type)
        self.append(ir.SetLocal(self.current_env, env_default_name, default))

        def codegen_default(env_default_name):
            return lambda: self.append(ir.GetLocal(self.current_env, env_default_name))
        defaults.append(codegen_default(env_default_name))
else:
    for default_node in node.args.defaults:
        def codegen_default(default_node):
            return lambda: self.visit(default_node)
        defaults.append(codegen_default(default_node))
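The codegen_default wrapper functions exist to pin down the loop variable: a bare lambda: self.visit(default_node) would close over the variable itself and see only its final value after the loop finishes. The standard late-binding pitfall in miniature:

    # Late binding: every closure sees the final value of i.
    late = [lambda: i for i in range(3)]
    print([f() for f in late])      # [2, 2, 2]

    # Factory function (what codegen_default does): each closure gets its own cell.
    def make(i):
        return lambda: i
    bound = [make(i) for i in range(3)]
    print([f() for f in bound])     # [0, 1, 2]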
old_name, self.name = self.name, self.name + [name] old_name, self.name = self.name, self.name + [name]
env_arg = ir.EnvironmentArgument(self.current_env.type, "outerenv") env_arg = ir.EnvironmentArgument(self.current_env.type, "ARG.ENV")
old_args, self.current_args = self.current_args, {} old_args, self.current_args = self.current_args, {}
args = [] args = []
for arg_name in typ.args: for arg_name in typ.args:
arg = ir.Argument(typ.args[arg_name], "arg." + arg_name) arg = ir.Argument(typ.args[arg_name], "ARG." + arg_name)
self.current_args[arg_name] = arg self.current_args[arg_name] = arg
args.append(arg) args.append(arg)
optargs = [] optargs = []
for arg_name in typ.optargs: for arg_name in typ.optargs:
arg = ir.Argument(ir.TOption(typ.optargs[arg_name]), "arg." + arg_name) arg = ir.Argument(ir.TOption(typ.optargs[arg_name]), "ARG." + arg_name)
self.current_args[arg_name] = arg self.current_args[arg_name] = arg
optargs.append(arg) optargs.append(arg)
func = ir.Function(typ, ".".join(self.name), [env_arg] + args + optargs, func = ir.Function(typ, ".".join(self.name), [env_arg] + args + optargs,
loc=node.lambda_loc if is_lambda else node.keyword_loc) loc=node.lambda_loc if is_lambda else node.keyword_loc)
func.is_internal = is_internal func.is_internal = is_internal
func.flags = flags
self.functions.append(func) self.functions.append(func)
old_func, self.current_function = self.current_function, func old_func, self.current_function = self.current_function, func
if not is_lambda: if not is_lambda:
self.function_map[node] = func self.function_map[node] = func
entry = self.add_block() entry = self.add_block("entry")
old_block, self.current_block = self.current_block, entry old_block, self.current_block = self.current_block, entry
old_globals, self.current_globals = self.current_globals, node.globals_in_scope old_globals, self.current_globals = self.current_globals, node.globals_in_scope
@ -279,22 +289,23 @@ class ARTIQIRGenerator(algorithm.Visitor):
if var not in node.globals_in_scope} if var not in node.globals_in_scope}
env_type = ir.TEnvironment(name=func.name, env_type = ir.TEnvironment(name=func.name,
vars=env_without_globals, outer=self.current_env.type) vars=env_without_globals, outer=self.current_env.type)
env = self.append(ir.Alloc([], env_type, name="env")) env = self.append(ir.Alloc([], env_type, name="ENV"))
old_env, self.current_env = self.current_env, env old_env, self.current_env = self.current_env, env
if not is_lambda: if not is_lambda:
priv_env_type = ir.TEnvironment(name=func.name + ".priv", priv_env_type = ir.TEnvironment(name="{}.private".format(func.name),
vars={ "$return": typ.ret }) vars={ "$return": typ.ret })
priv_env = self.append(ir.Alloc([], priv_env_type, name="privenv")) priv_env = self.append(ir.Alloc([], priv_env_type, name="PRV"))
old_priv_env, self.current_private_env = self.current_private_env, priv_env old_priv_env, self.current_private_env = self.current_private_env, priv_env
self.append(ir.SetLocal(env, "$outer", env_arg)) self.append(ir.SetLocal(env, "$outer", env_arg))
for index, arg_name in enumerate(typ.args): for index, arg_name in enumerate(typ.args):
self.append(ir.SetLocal(env, arg_name, args[index])) self.append(ir.SetLocal(env, arg_name, args[index]))
for index, (arg_name, env_default_name) in enumerate(zip(typ.optargs, defaults)): for index, (arg_name, codegen_default) in enumerate(zip(typ.optargs, defaults)):
default = self.append(ir.GetLocal(self.current_env, env_default_name)) default = codegen_default()
value = self.append(ir.Builtin("unwrap_or", [optargs[index], default], value = self.append(ir.Builtin("unwrap_or", [optargs[index], default],
typ.optargs[arg_name])) typ.optargs[arg_name],
name="DEF.{}".format(arg_name)))
self.append(ir.SetLocal(env, arg_name, value)) self.append(ir.SetLocal(env, arg_name, value))
result = self.visit(node.body) result = self.visit(node.body)
@ -319,13 +330,15 @@ class ARTIQIRGenerator(algorithm.Visitor):
return self.append(ir.Closure(func, self.current_env)) return self.append(ir.Closure(func, self.current_env))
def visit_FunctionDefT(self, node, in_class=None): def visit_FunctionDefT(self, node):
func = self.visit_function(node, is_lambda=False, func = self.visit_function(node, is_internal=len(self.name) > 0)
is_internal=len(self.name) > 0 or '.' in node.name) if self.current_class is None:
if in_class is None:
self._set_local(node.name, func) self._set_local(node.name, func)
else: else:
self.append(ir.SetAttr(in_class, node.name, func)) self.append(ir.SetAttr(self.current_class, node.name, func))
def visit_QuotedFunctionDefT(self, node):
self.visit_function(node, is_internal=True, is_quoted=True, flags=node.flags)
def visit_Return(self, node): def visit_Return(self, node):
if node.value is None: if node.value is None:
@ -400,18 +413,18 @@ class ARTIQIRGenerator(algorithm.Visitor):
cond = self.coerce_to_bool(cond) cond = self.coerce_to_bool(cond)
head = self.current_block head = self.current_block
if_true = self.add_block() if_true = self.add_block("if.body")
self.current_block = if_true self.current_block = if_true
self.visit(node.body) self.visit(node.body)
post_if_true = self.current_block post_if_true = self.current_block
if any(node.orelse): if any(node.orelse):
if_false = self.add_block() if_false = self.add_block("if.else")
self.current_block = if_false self.current_block = if_false
self.visit(node.orelse) self.visit(node.orelse)
post_if_false = self.current_block post_if_false = self.current_block
tail = self.add_block() tail = self.add_block("if.tail")
self.current_block = tail self.current_block = tail
if not post_if_true.is_terminated(): if not post_if_true.is_terminated():
post_if_true.append(ir.Branch(tail)) post_if_true.append(ir.Branch(tail))
@ -498,9 +511,9 @@ class ARTIQIRGenerator(algorithm.Visitor):
head = self.add_block("for.head") head = self.add_block("for.head")
self.append(ir.Branch(head)) self.append(ir.Branch(head))
self.current_block = head self.current_block = head
phi = self.append(ir.Phi(length.type)) phi = self.append(ir.Phi(length.type, name="IND"))
phi.add_incoming(ir.Constant(0, phi.type), prehead) phi.add_incoming(ir.Constant(0, phi.type), prehead)
cond = self.append(ir.Compare(ast.Lt(loc=None), phi, length)) cond = self.append(ir.Compare(ast.Lt(loc=None), phi, length, name="CMP"))
break_block = self.add_block("for.break") break_block = self.add_block("for.break")
old_break, self.break_target = self.break_target, break_block old_break, self.break_target = self.break_target, break_block
@ -509,7 +522,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
old_continue, self.continue_target = self.continue_target, continue_block old_continue, self.continue_target = self.continue_target, continue_block
self.current_block = continue_block self.current_block = continue_block
updated_index = self.append(ir.Arith(ast.Add(loc=None), phi, ir.Constant(1, phi.type))) updated_index = self.append(ir.Arith(ast.Add(loc=None), phi, ir.Constant(1, phi.type),
name="IND.new"))
phi.add_incoming(updated_index, continue_block) phi.add_incoming(updated_index, continue_block)
self.append(ir.Branch(head)) self.append(ir.Branch(head))
@ -563,9 +577,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_block = raise_proxy self.current_block = raise_proxy
if exn is not None: if exn is not None:
if loc is None: assert loc is not None
loc = self.current_loc
loc_file = ir.Constant(loc.source_buffer.name, builtins.TStr()) loc_file = ir.Constant(loc.source_buffer.name, builtins.TStr())
loc_line = ir.Constant(loc.line(), builtins.TInt32()) loc_line = ir.Constant(loc.line(), builtins.TInt32())
loc_column = ir.Constant(loc.column(), builtins.TInt32()) loc_column = ir.Constant(loc.column(), builtins.TInt32())
@ -587,7 +599,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.append(ir.Reraise()) self.append(ir.Reraise())
def visit_Raise(self, node): def visit_Raise(self, node):
self.raise_exn(self.visit(node.exc)) self.raise_exn(self.visit(node.exc), loc=self.current_loc)
def visit_Try(self, node): def visit_Try(self, node):
dispatcher = self.add_block("try.dispatch") dispatcher = self.add_block("try.dispatch")
@ -596,13 +608,13 @@ class ARTIQIRGenerator(algorithm.Visitor):
# k for continuation # k for continuation
final_suffix = ".try@{}:{}".format(node.loc.line(), node.loc.column()) final_suffix = ".try@{}:{}".format(node.loc.line(), node.loc.column())
final_env_type = ir.TEnvironment(name=self.current_function.name + final_suffix, final_env_type = ir.TEnvironment(name=self.current_function.name + final_suffix,
vars={ "$k": ir.TBasicBlock() }) vars={ "$cont": ir.TBasicBlock() })
final_state = self.append(ir.Alloc([], final_env_type)) final_state = self.append(ir.Alloc([], final_env_type))
final_targets = [] final_targets = []
final_paths = [] final_paths = []
def final_branch(target, block): def final_branch(target, block):
block.append(ir.SetLocal(final_state, "$k", target)) block.append(ir.SetLocal(final_state, "$cont", target))
final_targets.append(target) final_targets.append(target)
final_paths.append(block) final_paths.append(block)
@ -695,19 +707,19 @@ class ARTIQIRGenerator(algorithm.Visitor):
block.append(ir.Branch(finalizer)) block.append(ir.Branch(finalizer))
if not body.is_terminated(): if not body.is_terminated():
body.append(ir.SetLocal(final_state, "$k", tail)) body.append(ir.SetLocal(final_state, "$cont", tail))
body.append(ir.Branch(finalizer)) body.append(ir.Branch(finalizer))
cleanup.append(ir.SetLocal(final_state, "$k", reraise)) cleanup.append(ir.SetLocal(final_state, "$cont", reraise))
cleanup.append(ir.Branch(finalizer)) cleanup.append(ir.Branch(finalizer))
for handler, post_handler in handlers: for handler, post_handler in handlers:
if not post_handler.is_terminated(): if not post_handler.is_terminated():
post_handler.append(ir.SetLocal(final_state, "$k", tail)) post_handler.append(ir.SetLocal(final_state, "$cont", tail))
post_handler.append(ir.Branch(finalizer)) post_handler.append(ir.Branch(finalizer))
if not post_finalizer.is_terminated(): if not post_finalizer.is_terminated():
dest = post_finalizer.append(ir.GetLocal(final_state, "$k")) dest = post_finalizer.append(ir.GetLocal(final_state, "$cont"))
post_finalizer.append(ir.IndirectBranch(dest, final_targets)) post_finalizer.append(ir.IndirectBranch(dest, final_targets))
else: else:
if not body.is_terminated(): if not body.is_terminated():
@ -752,7 +764,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
heads, tails = [], [] heads, tails = [], []
for stmt in node.body: for stmt in node.body:
self.current_block = self.add_block() self.current_block = self.add_block("interleave.branch")
heads.append(self.current_block) heads.append(self.current_block)
self.visit(stmt) self.visit(stmt)
tails.append(self.current_block) tails.append(self.current_block)
@ -760,7 +772,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
for head in heads: for head in heads:
interleave.add_destination(head) interleave.add_destination(head)
self.current_block = self.add_block() self.current_block = self.add_block("interleave.tail")
for tail in tails: for tail in tails:
if not tail.is_terminated(): if not tail.is_terminated():
tail.append(ir.Branch(self.current_block)) tail.append(ir.Branch(self.current_block))
@ -772,7 +784,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
for stmt in node.body: for stmt in node.body:
self.append(ir.Builtin("at_mu", [start_mu], builtins.TNone())) self.append(ir.Builtin("at_mu", [start_mu], builtins.TNone()))
block = self.add_block() block = self.add_block("parallel.branch")
if self.current_block.is_terminated(): if self.current_block.is_terminated():
self.warn_unreachable(stmt[0]) self.warn_unreachable(stmt[0])
else: else:
@ -806,8 +818,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.append(ir.Builtin("watchdog_clear", [watchdog_id], builtins.TNone()))) self.append(ir.Builtin("watchdog_clear", [watchdog_id], builtins.TNone())))
else: # user-defined context manager else: # user-defined context manager
context_mgr = self.visit(context_expr_node) context_mgr = self.visit(context_expr_node)
enter_fn = self._get_attribute(context_mgr, '__enter__') enter_fn = self.append(ir.GetAttr(context_mgr, '__enter__'))
exit_fn = self._get_attribute(context_mgr, '__exit__') exit_fn = self.append(ir.GetAttr(context_mgr, '__exit__'))
try: try:
self.current_assign = self._user_call(enter_fn, [], {}) self.current_assign = self._user_call(enter_fn, [], {})
@ -836,17 +848,17 @@ class ARTIQIRGenerator(algorithm.Visitor):
cond = self.visit(node.test) cond = self.visit(node.test)
head = self.current_block head = self.current_block
if_true = self.add_block() if_true = self.add_block("ifexp.body")
self.current_block = if_true self.current_block = if_true
true_result = self.visit(node.body) true_result = self.visit(node.body)
post_if_true = self.current_block post_if_true = self.current_block
if_false = self.add_block() if_false = self.add_block("ifexp.else")
self.current_block = if_false self.current_block = if_false
false_result = self.visit(node.orelse) false_result = self.visit(node.orelse)
post_if_false = self.current_block post_if_false = self.current_block
tail = self.add_block() tail = self.add_block("ifexp.tail")
self.current_block = tail self.current_block = tail
if not post_if_true.is_terminated(): if not post_if_true.is_terminated():
@ -880,10 +892,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
if self.current_class is not None and \ if self.current_class is not None and \
name in self.current_class.type.attributes: name in self.current_class.type.attributes:
return self.append(ir.GetAttr(self.current_class, name, return self.append(ir.GetAttr(self.current_class, name,
name="local." + name)) name="FLD." + name))
return self.append(ir.GetLocal(self._env_for(name), name, return self.append(ir.GetLocal(self._env_for(name), name,
name="local." + name)) name="LOC." + name))
def _set_local(self, name, value): def _set_local(self, name, value):
if self.current_class is not None and \ if self.current_class is not None and \
@ -900,26 +912,6 @@ class ARTIQIRGenerator(algorithm.Visitor):
else: else:
return self._set_local(node.id, self.current_assign) return self._set_local(node.id, self.current_assign)
def _get_attribute(self, obj, attr_name):
if attr_name not in obj.type.find().attributes:
# A class attribute. Get the constructor (class object) and
# extract the attribute from it.
constr_type = obj.type.constructor
constr = self.append(ir.GetConstructor(self._env_for(constr_type.name),
constr_type.name, constr_type,
name="constructor." + constr_type.name))
if types.is_function(constr.type.attributes[attr_name]):
# A method. Construct a method object instead.
func = self.append(ir.GetAttr(constr, attr_name))
return self.append(ir.Alloc([func, obj],
types.TMethod(obj.type, func.type)))
else:
obj = constr
return self.append(ir.GetAttr(obj, attr_name,
name="{}.{}".format(_readable_name(obj), attr_name)))
def visit_AttributeT(self, node): def visit_AttributeT(self, node):
try: try:
old_assign, self.current_assign = self.current_assign, None old_assign, self.current_assign = self.current_assign, None
@ -928,13 +920,61 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.current_assign = old_assign self.current_assign = old_assign
if self.current_assign is None: if self.current_assign is None:
return self._get_attribute(obj, node.attr)
elif types.is_rpc_function(self.current_assign.type):
    # RPC functions are just type-level markers
    return self.append(ir.Builtin("nop", [], builtins.TNone()))
else:
    return self.append(ir.SetAttr(obj, node.attr, self.current_assign))

return self.append(ir.GetAttr(obj, node.attr,
    name="{}.FLD.{}".format(_readable_name(obj), node.attr)))
else:
    return self.append(ir.SetAttr(obj, node.attr, self.current_assign))
def _make_check(self, cond, exn_gen, loc=None, params=[]):
if loc is None:
loc = self.current_loc
try:
name = "check:{}:{}".format(loc.line(), loc.column())
args = [ir.EnvironmentArgument(self.current_env.type, "ARG.ENV")] + \
[ir.Argument(param.type, "ARG.{}".format(index))
for index, param in enumerate(params)]
typ = types.TFunction(OrderedDict([("arg{}".format(index), param.type)
for index, param in enumerate(params)]),
OrderedDict(),
builtins.TNone())
func = ir.Function(typ, ".".join(self.name + [name]), args, loc=loc)
func.is_internal = True
func.is_cold = True
func.is_generated = True
self.functions.append(func)
old_func, self.current_function = self.current_function, func
entry = self.add_block("entry")
old_block, self.current_block = self.current_block, entry
old_final_branch, self.final_branch = self.final_branch, None
old_unwind, self.unwind_target = self.unwind_target, None
self.raise_exn(exn_gen(*args[1:]), loc=loc)
finally:
self.current_function = old_func
self.current_block = old_block
self.final_branch = old_final_branch
self.unwind_target = old_unwind
# cond: bool Value, condition
# exn_gen: lambda()->exn Value, exception if condition not true
cond_block = self.current_block
self.current_block = body_block = self.add_block("check.body")
closure = self.append(ir.Closure(func, ir.Constant(None, ir.TEnvironment("check", {}))))
if self.unwind_target is None:
insn = self.append(ir.Call(closure, params, {}))
else:
after_invoke = self.add_block("check.invoke")
insn = self.append(ir.Invoke(closure, params, {}, after_invoke, self.unwind_target))
self.current_block = after_invoke
insn.is_cold = True
self.append(ir.Unreachable())
self.current_block = tail_block = self.add_block("check.tail")
cond_block.append(ir.BranchIf(cond, tail_block, body_block))
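_make_check now moves the raise into a separate internal function that is marked cold and generated, so each runtime check in the hot path reduces to a comparison plus a branch to a rarely-taken call. A plain-Python analogy of the emitted shape (illustrative only, not compiler output):

    def _raise_index_error(index, length):
        # Out-of-line "cold" path: only reached when the check fails.
        raise IndexError("index {0} out of bounds 0:{1}".format(index, length))

    def checked_get(lst, index):
        if not (0 <= index < len(lst)):          # cheap inline predicate
            _raise_index_error(index, len(lst))  # corresponds to the check.body call
        return lst[index]

    print(checked_get([10, 20, 30], 1))          # 20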
def _map_index(self, length, index, one_past_the_end=False, loc=None): def _map_index(self, length, index, one_past_the_end=False, loc=None):
lt_0 = self.append(ir.Compare(ast.Lt(loc=None), lt_0 = self.append(ir.Compare(ast.Lt(loc=None),
index, ir.Constant(0, index.type))) index, ir.Constant(0, index.type)))
@ -948,47 +988,35 @@ class ARTIQIRGenerator(algorithm.Visitor):
ir.Constant(False, builtins.TBool()))) ir.Constant(False, builtins.TBool())))
head = self.current_block head = self.current_block
self.current_block = out_of_bounds_block = self.add_block()
exn = self.alloc_exn(builtins.TException("IndexError"),
    ir.Constant("index {0} out of bounds 0:{1}", builtins.TStr()),
    index, length)
self.raise_exn(exn, loc=loc)

self.current_block = in_bounds_block = self.add_block()
head.append(ir.BranchIf(in_bounds, in_bounds_block, out_of_bounds_block))

self._make_check(
    in_bounds,
    lambda index, length: self.alloc_exn(builtins.TException("IndexError"),
        ir.Constant("index {0} out of bounds 0:{1}", builtins.TStr()),
        index, length),
    params=[index, length],
    loc=loc)

return mapped_index

def _make_check(self, cond, exn_gen, loc=None):
    # cond: bool Value, condition
    # exn_gen: lambda()->exn Value, exception if condition not true
    cond_block = self.current_block

    self.current_block = body_block = self.add_block()
    self.raise_exn(exn_gen(), loc=loc)

    self.current_block = tail_block = self.add_block()
    cond_block.append(ir.BranchIf(cond, tail_block, body_block))

def _make_loop(self, init, cond_gen, body_gen):

def _make_loop(self, init, cond_gen, body_gen, name="loop"):
# init: 'iter Value, initial loop variable value # init: 'iter Value, initial loop variable value
# cond_gen: lambda('iter Value)->bool Value, loop condition # cond_gen: lambda('iter Value)->bool Value, loop condition
# body_gen: lambda('iter Value)->'iter Value, loop body, # body_gen: lambda('iter Value)->'iter Value, loop body,
# returns next loop variable value # returns next loop variable value
init_block = self.current_block init_block = self.current_block
self.current_block = head_block = self.add_block() self.current_block = head_block = self.add_block("{}.head".format(name))
init_block.append(ir.Branch(head_block)) init_block.append(ir.Branch(head_block))
phi = self.append(ir.Phi(init.type)) phi = self.append(ir.Phi(init.type))
phi.add_incoming(init, init_block) phi.add_incoming(init, init_block)
cond = cond_gen(phi) cond = cond_gen(phi)
self.current_block = body_block = self.add_block() self.current_block = body_block = self.add_block("{}.body".format(name))
body = body_gen(phi) body = body_gen(phi)
self.append(ir.Branch(head_block)) self.append(ir.Branch(head_block))
phi.add_incoming(body, self.current_block) phi.add_incoming(body, self.current_block)
self.current_block = tail_block = self.add_block() self.current_block = tail_block = self.add_block("{}.tail".format(name))
head_block.append(ir.BranchIf(cond, body_block, tail_block)) head_block.append(ir.BranchIf(cond, body_block, tail_block))
return head_block, body_block, tail_block return head_block, body_block, tail_block
@ -1072,10 +1100,11 @@ class ARTIQIRGenerator(algorithm.Visitor):
name="slice.size")) name="slice.size"))
self._make_check( self._make_check(
self.append(ir.Compare(ast.LtE(loc=None), slice_size, length)), self.append(ir.Compare(ast.LtE(loc=None), slice_size, length)),
lambda: self.alloc_exn(builtins.TException("ValueError"), lambda slice_size, length: self.alloc_exn(builtins.TException("ValueError"),
ir.Constant("slice size {0} is larger than iterable length {1}", ir.Constant("slice size {0} is larger than iterable length {1}",
builtins.TStr()), builtins.TStr()),
slice_size, length), slice_size, length),
params=[slice_size, length],
loc=node.slice.loc) loc=node.slice.loc)
if self.current_assign is None: if self.current_assign is None:
@ -1090,7 +1119,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
prehead = self.current_block prehead = self.current_block
head = self.current_block = self.add_block() head = self.current_block = self.add_block("slice.head")
prehead.append(ir.Branch(head)) prehead.append(ir.Branch(head))
index = self.append(ir.Phi(node.slice.type, index = self.append(ir.Phi(node.slice.type,
@ -1105,7 +1134,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
bounded_down = self.append(ir.Compare(ast.Gt(loc=None), index, mapped_stop_index)) bounded_down = self.append(ir.Compare(ast.Gt(loc=None), index, mapped_stop_index))
within_bounds = self.append(ir.Select(counting_up, bounded_up, bounded_down)) within_bounds = self.append(ir.Select(counting_up, bounded_up, bounded_down))
body = self.current_block = self.add_block() body = self.current_block = self.add_block("slice.body")
if self.current_assign is None: if self.current_assign is None:
elem = self.iterable_get(value, index) elem = self.iterable_get(value, index)
@ -1121,7 +1150,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
other_index.add_incoming(next_other_index, body) other_index.add_incoming(next_other_index, body)
self.append(ir.Branch(head)) self.append(ir.Branch(head))
tail = self.current_block = self.add_block() tail = self.current_block = self.add_block("slice.tail")
head.append(ir.BranchIf(within_bounds, body, tail)) head.append(ir.BranchIf(within_bounds, body, tail))
if self.current_assign is None: if self.current_assign is None:
@ -1155,9 +1184,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
self._make_check( self._make_check(
self.append(ir.Compare(ast.Eq(loc=None), length, self.append(ir.Compare(ast.Eq(loc=None), length,
ir.Constant(len(node.elts), self._size_type))), ir.Constant(len(node.elts), self._size_type))),
lambda: self.alloc_exn(builtins.TException("ValueError"), lambda length: self.alloc_exn(builtins.TException("ValueError"),
ir.Constant("list must be {0} elements long to decompose", builtins.TStr()), ir.Constant("list must be {0} elements long to decompose", builtins.TStr()),
length)) length),
params=[length])
for index, elt_node in enumerate(node.elts): for index, elt_node in enumerate(node.elts):
elt = self.append(ir.GetElem(self.current_assign, elt = self.append(ir.GetElem(self.current_assign,
@ -1215,7 +1245,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
value_tail = self.current_block value_tail = self.current_block
blocks.append((value, value_head, value_tail)) blocks.append((value, value_head, value_tail))
self.current_block = self.add_block() self.current_block = self.add_block("boolop.seq")
tail = self.current_block tail = self.current_block
phi = self.append(ir.Phi(node.type)) phi = self.append(ir.Phi(node.type))
@ -1384,26 +1414,26 @@ class ARTIQIRGenerator(algorithm.Visitor):
# If the length is the same, compare element-by-element # If the length is the same, compare element-by-element
# and break when the comparison result is false # and break when the comparison result is false
loop_head = self.add_block() loop_head = self.add_block("compare.head")
self.current_block = loop_head self.current_block = loop_head
index_phi = self.append(ir.Phi(self._size_type)) index_phi = self.append(ir.Phi(self._size_type))
index_phi.add_incoming(ir.Constant(0, self._size_type), head) index_phi.add_incoming(ir.Constant(0, self._size_type), head)
loop_cond = self.append(ir.Compare(ast.Lt(loc=None), index_phi, lhs_length)) loop_cond = self.append(ir.Compare(ast.Lt(loc=None), index_phi, lhs_length))
loop_body = self.add_block() loop_body = self.add_block("compare.body")
self.current_block = loop_body self.current_block = loop_body
lhs_elt = self.append(ir.GetElem(lhs, index_phi)) lhs_elt = self.append(ir.GetElem(lhs, index_phi))
rhs_elt = self.append(ir.GetElem(rhs, index_phi)) rhs_elt = self.append(ir.GetElem(rhs, index_phi))
body_result = self.polymorphic_compare_pair(op, lhs_elt, rhs_elt) body_result = self.polymorphic_compare_pair(op, lhs_elt, rhs_elt)
loop_body2 = self.add_block() loop_body2 = self.add_block("compare.body2")
self.current_block = loop_body2 self.current_block = loop_body2
index_next = self.append(ir.Arith(ast.Add(loc=None), index_phi, index_next = self.append(ir.Arith(ast.Add(loc=None), index_phi,
ir.Constant(1, self._size_type))) ir.Constant(1, self._size_type)))
self.append(ir.Branch(loop_head)) self.append(ir.Branch(loop_head))
index_phi.add_incoming(index_next, loop_body2) index_phi.add_incoming(index_next, loop_body2)
tail = self.add_block() tail = self.add_block("compare.tail")
self.current_block = tail self.current_block = tail
phi = self.append(ir.Phi(builtins.TBool())) phi = self.append(ir.Phi(builtins.TBool()))
head.append(ir.BranchIf(eq_length, loop_head, tail)) head.append(ir.BranchIf(eq_length, loop_head, tail))
@ -1449,14 +1479,14 @@ class ARTIQIRGenerator(algorithm.Visitor):
elt = self.iterable_get(haystack, index) elt = self.iterable_get(haystack, index)
cmp_result = self.polymorphic_compare_pair(ast.Eq(loc=None), needle, elt) cmp_result = self.polymorphic_compare_pair(ast.Eq(loc=None), needle, elt)
loop_body2 = self.add_block() loop_body2 = self.add_block("compare.body")
self.current_block = loop_body2 self.current_block = loop_body2
return self.append(ir.Arith(ast.Add(loc=None), index, return self.append(ir.Arith(ast.Add(loc=None), index,
ir.Constant(1, length.type))) ir.Constant(1, length.type)))
loop_head, loop_body, loop_tail = \ loop_head, loop_body, loop_tail = \
self._make_loop(ir.Constant(0, length.type), self._make_loop(ir.Constant(0, length.type),
lambda index: self.append(ir.Compare(ast.Lt(loc=None), index, length)), lambda index: self.append(ir.Compare(ast.Lt(loc=None), index, length)),
body_gen) body_gen, name="compare")
loop_body.append(ir.BranchIf(cmp_result, loop_tail, loop_body2)) loop_body.append(ir.BranchIf(cmp_result, loop_tail, loop_body2))
phi = loop_tail.prepend(ir.Phi(builtins.TBool())) phi = loop_tail.prepend(ir.Phi(builtins.TBool()))
@ -1497,7 +1527,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
result_tail = self.current_block result_tail = self.current_block
blocks.append((result, result_head, result_tail)) blocks.append((result, result_head, result_tail))
self.current_block = self.add_block() self.current_block = self.add_block("compare.seq")
lhs = rhs lhs = rhs
tail = self.current_block tail = self.current_block
@ -1639,21 +1669,14 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.polymorphic_print([self.visit(prefix)], self.polymorphic_print([self.visit(prefix)],
separator=" ", suffix="\x1E", as_rtio=True) separator=" ", suffix="\x1E", as_rtio=True)
self.polymorphic_print([self.visit(arg) for arg in args], self.polymorphic_print([self.visit(arg) for arg in args],
separator=" ", suffix="\n", as_rtio=True) separator=" ", suffix="\n\x1D", as_rtio=True)
return ir.Constant(None, builtins.TNone()) return ir.Constant(None, builtins.TNone())
elif types.is_builtin(typ, "now"): elif types.is_builtin(typ, "delay"):
if len(node.args) == 0 and len(node.keywords) == 0:
now_mu = self.append(ir.Builtin("now_mu", [], builtins.TInt64()))
now_mu_float = self.append(ir.Coerce(now_mu, builtins.TFloat()))
return self.append(ir.Arith(ast.Mult(loc=None), now_mu_float, self.ref_period))
else:
assert False
elif types.is_builtin(typ, "delay") or types.is_builtin(typ, "at"):
if len(node.args) == 1 and len(node.keywords) == 0: if len(node.args) == 1 and len(node.keywords) == 0:
arg = self.visit(node.args[0]) arg = self.visit(node.args[0])
arg_mu_float = self.append(ir.Arith(ast.Div(loc=None), arg, self.ref_period)) arg_mu_float = self.append(ir.Arith(ast.Div(loc=None), arg, self.ref_period))
arg_mu = self.append(ir.Coerce(arg_mu_float, builtins.TInt64())) arg_mu = self.append(ir.Coerce(arg_mu_float, builtins.TInt64()))
return self.append(ir.Builtin(typ.name + "_mu", [arg_mu], builtins.TNone())) return self.append(ir.Builtin("delay_mu", [arg_mu], builtins.TNone()))
else: else:
assert False assert False
elif types.is_builtin(typ, "now_mu") or types.is_builtin(typ, "delay_mu") \ elif types.is_builtin(typ, "now_mu") or types.is_builtin(typ, "delay_mu") \
@ -1686,58 +1709,71 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.engine.process(diag) self.engine.process(diag)
def _user_call(self, callee, positional, keywords, arg_exprs={}): def _user_call(self, callee, positional, keywords, arg_exprs={}):
if types.is_function(callee.type): if types.is_function(callee.type) or types.is_rpc(callee.type):
func = callee func = callee
self_arg = None self_arg = None
fn_typ = callee.type fn_typ = callee.type
offset = 0 offset = 0
elif types.is_method(callee.type): elif types.is_method(callee.type):
func = self.append(ir.GetAttr(callee, "__func__")) func = self.append(ir.GetAttr(callee, "__func__",
self_arg = self.append(ir.GetAttr(callee, "__self__")) name="{}.ENV".format(callee.name)))
self_arg = self.append(ir.GetAttr(callee, "__self__",
name="{}.SLF".format(callee.name)))
fn_typ = types.get_method_function(callee.type) fn_typ = types.get_method_function(callee.type)
offset = 1 offset = 1
else: else:
assert False assert False
args = [None] * (len(fn_typ.args) + len(fn_typ.optargs)) if types.is_rpc(fn_typ):
if self_arg is None:
for index, arg in enumerate(positional): args = positional
if index + offset < len(fn_typ.args):
args[index + offset] = arg
else: else:
args[index + offset] = self.append(ir.Alloc([arg], ir.TOption(arg.type))) args = [self_arg] + positional
for keyword in keywords: for keyword in keywords:
arg = keywords[keyword] arg = keywords[keyword]
if keyword in fn_typ.args: args.append(self.append(ir.Alloc([ir.Constant(keyword, builtins.TStr()), arg],
for index, arg_name in enumerate(fn_typ.args): ir.TKeyword(arg.type))))
if keyword == arg_name: else:
assert args[index] is None args = [None] * (len(fn_typ.args) + len(fn_typ.optargs))
args[index] = arg
break
elif keyword in fn_typ.optargs:
for index, optarg_name in enumerate(fn_typ.optargs):
if keyword == optarg_name:
assert args[len(fn_typ.args) + index] is None
args[len(fn_typ.args) + index] = \
self.append(ir.Alloc([arg], ir.TOption(arg.type)))
break
for index, optarg_name in enumerate(fn_typ.optargs): for index, arg in enumerate(positional):
if args[len(fn_typ.args) + index] is None: if index + offset < len(fn_typ.args):
args[len(fn_typ.args) + index] = \ args[index + offset] = arg
self.append(ir.Alloc([], ir.TOption(fn_typ.optargs[optarg_name]))) else:
args[index + offset] = self.append(ir.Alloc([arg], ir.TOption(arg.type)))
if self_arg is not None: for keyword in keywords:
assert args[0] is None arg = keywords[keyword]
args[0] = self_arg if keyword in fn_typ.args:
for index, arg_name in enumerate(fn_typ.args):
if keyword == arg_name:
assert args[index] is None
args[index] = arg
break
elif keyword in fn_typ.optargs:
for index, optarg_name in enumerate(fn_typ.optargs):
if keyword == optarg_name:
assert args[len(fn_typ.args) + index] is None
args[len(fn_typ.args) + index] = \
self.append(ir.Alloc([arg], ir.TOption(arg.type)))
break
assert None not in args for index, optarg_name in enumerate(fn_typ.optargs):
if args[len(fn_typ.args) + index] is None:
args[len(fn_typ.args) + index] = \
self.append(ir.Alloc([], ir.TOption(fn_typ.optargs[optarg_name])))
if self_arg is not None:
assert args[0] is None
args[0] = self_arg
assert None not in args
if self.unwind_target is None: if self.unwind_target is None:
insn = self.append(ir.Call(func, args, arg_exprs)) insn = self.append(ir.Call(func, args, arg_exprs))
else: else:
after_invoke = self.add_block() after_invoke = self.add_block("invoke")
insn = self.append(ir.Invoke(func, args, arg_exprs, insn = self.append(ir.Invoke(func, args, arg_exprs,
after_invoke, self.unwind_target)) after_invoke, self.unwind_target))
self.current_block = after_invoke self.current_block = after_invoke
@ -1752,7 +1788,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
if node.iodelay is not None and not iodelay.is_const(node.iodelay, 0): if node.iodelay is not None and not iodelay.is_const(node.iodelay, 0):
before_delay = self.current_block before_delay = self.current_block
during_delay = self.add_block() during_delay = self.add_block("delay.head")
before_delay.append(ir.Branch(during_delay)) before_delay.append(ir.Branch(during_delay))
self.current_block = during_delay self.current_block = during_delay
@ -1766,7 +1802,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.method_map[(attr_node.value.type.find(), attr_node.attr)].append(insn) self.method_map[(attr_node.value.type.find(), attr_node.attr)].append(insn)
if node.iodelay is not None and not iodelay.is_const(node.iodelay, 0): if node.iodelay is not None and not iodelay.is_const(node.iodelay, 0):
after_delay = self.add_block() after_delay = self.add_block("delay.tail")
self.append(ir.Delay(node.iodelay, insn, after_delay)) self.append(ir.Delay(node.iodelay, insn, after_delay))
self.current_block = after_delay self.current_block = after_delay
@ -1801,7 +1837,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
assert_subexprs = self.current_assert_subexprs = [] assert_subexprs = self.current_assert_subexprs = []
init = self.current_block init = self.current_block
prehead = self.current_block = self.add_block() prehead = self.current_block = self.add_block("assert.prehead")
cond = self.visit(node.test) cond = self.visit(node.test)
head = self.current_block head = self.current_block
finally: finally:
@ -1813,7 +1849,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
init.append(ir.SetLocal(assert_env, subexpr_name, empty)) init.append(ir.SetLocal(assert_env, subexpr_name, empty))
init.append(ir.Branch(prehead)) init.append(ir.Branch(prehead))
if_failed = self.current_block = self.add_block() if_failed = self.current_block = self.add_block("assert.fail")
if node.msg: if node.msg:
explanation = node.msg.s explanation = node.msg.s
@ -1831,7 +1867,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
subexpr_cond = self.append(ir.Builtin("is_some", [subexpr_value_opt], subexpr_cond = self.append(ir.Builtin("is_some", [subexpr_value_opt],
builtins.TBool())) builtins.TBool()))
subexpr_body = self.current_block = self.add_block() subexpr_body = self.current_block = self.add_block("assert.subexpr.body")
self.append(ir.Builtin("printf", [ self.append(ir.Builtin("printf", [
ir.Constant(" (%s) = ", builtins.TStr()), ir.Constant(" (%s) = ", builtins.TStr()),
ir.Constant(subexpr_node.loc.source(), builtins.TStr()) ir.Constant(subexpr_node.loc.source(), builtins.TStr())
@ -1841,14 +1877,14 @@ class ARTIQIRGenerator(algorithm.Visitor):
self.polymorphic_print([subexpr_value], separator="", suffix="\n") self.polymorphic_print([subexpr_value], separator="", suffix="\n")
subexpr_postbody = self.current_block subexpr_postbody = self.current_block
subexpr_tail = self.current_block = self.add_block() subexpr_tail = self.current_block = self.add_block("assert.subexpr.tail")
self.append(ir.Branch(subexpr_tail), block=subexpr_postbody) self.append(ir.Branch(subexpr_tail), block=subexpr_postbody)
self.append(ir.BranchIf(subexpr_cond, subexpr_body, subexpr_tail), block=subexpr_head) self.append(ir.BranchIf(subexpr_cond, subexpr_body, subexpr_tail), block=subexpr_head)
self.append(ir.Builtin("abort", [], builtins.TNone())) self.append(ir.Builtin("abort", [], builtins.TNone()))
self.append(ir.Unreachable()) self.append(ir.Unreachable())
tail = self.current_block = self.add_block() tail = self.current_block = self.add_block("assert.tail")
self.append(ir.BranchIf(cond, tail, if_failed), block=head) self.append(ir.BranchIf(cond, tail, if_failed), block=head)
def polymorphic_print(self, values, separator, suffix="", as_repr=False, as_rtio=False): def polymorphic_print(self, values, separator, suffix="", as_repr=False, as_rtio=False):
@ -1922,10 +1958,10 @@ class ARTIQIRGenerator(algorithm.Visitor):
is_last = self.append(ir.Compare(ast.Lt(loc=None), index, last)) is_last = self.append(ir.Compare(ast.Lt(loc=None), index, last))
head = self.current_block head = self.current_block
if_last = self.current_block = self.add_block() if_last = self.current_block = self.add_block("print.comma")
printf(", ") printf(", ")
tail = self.current_block = self.add_block() tail = self.current_block = self.add_block("print.tail")
if_last.append(ir.Branch(tail)) if_last.append(ir.Branch(tail))
head.append(ir.BranchIf(is_last, if_last, tail)) head.append(ir.BranchIf(is_last, if_last, tail))


@ -190,16 +190,16 @@ class ASTTypedRewriter(algorithm.Transformer):
self.in_class = None self.in_class = None
def _try_find_name(self, name): def _try_find_name(self, name):
for typing_env in reversed(self.env_stack):
if name in typing_env:
return typing_env[name]
def _find_name(self, name, loc):
if self.in_class is not None: if self.in_class is not None:
typ = self.in_class.constructor_type.attributes.get(name) typ = self.in_class.constructor_type.attributes.get(name)
if typ is not None: if typ is not None:
return typ return typ
for typing_env in reversed(self.env_stack):
if name in typing_env:
return typing_env[name]
def _find_name(self, name, loc):
typ = self._try_find_name(name) typ = self._try_find_name(name)
if typ is not None: if typ is not None:
return typ return typ
@ -229,9 +229,11 @@ class ASTTypedRewriter(algorithm.Transformer):
extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine) extractor = LocalExtractor(env_stack=self.env_stack, engine=self.engine)
extractor.visit(node) extractor.visit(node)
signature_type = self._find_name(node.name, node.name_loc)
node = asttyped.FunctionDefT( node = asttyped.FunctionDefT(
typing_env=extractor.typing_env, globals_in_scope=extractor.global_, typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
signature_type=self._find_name(node.name, node.name_loc), return_type=types.TVar(), signature_type=signature_type, return_type=types.TVar(),
name=node.name, args=node.args, returns=node.returns, name=node.name, args=node.args, returns=node.returns,
body=node.body, decorator_list=node.decorator_list, body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc, keyword_loc=node.keyword_loc, name_loc=node.name_loc,


@ -32,9 +32,8 @@ class DeadCodeEliminator:
# a diagnostic for reads of uninitialized locals, and
# it also has to run after the interleaver, but interleaver
# doesn't like to work with IR before DCE.
-if isinstance(insn, (ir.Phi, ir.Alloc, ir.GetConstructor,
-ir.GetAttr, ir.GetElem, ir.Coerce, ir.Arith,
-ir.Compare, ir.Closure, ir.Select, ir.Quote)) \
+if isinstance(insn, (ir.Phi, ir.Alloc, ir.GetAttr, ir.GetElem, ir.Coerce,
+ir.Arith, ir.Compare, ir.Select, ir.Quote, ir.Closure)) \
and not any(insn.uses):
insn.erase()
modified = True


@ -92,6 +92,41 @@ class Inferencer(algorithm.Visitor):
attr_name=node.attr, attr_loc=node.attr_loc, attr_name=node.attr, attr_loc=node.attr_loc,
loc=node.loc) loc=node.loc)
def _unify_method_self(self, method_type, attr_name, attr_loc, loc, self_loc):
self_type = types.get_method_self(method_type)
function_type = types.get_method_function(method_type)
if len(function_type.args) < 1:
diag = diagnostic.Diagnostic("error",
"function '{attr}{type}' of class '{class}' cannot accept a self argument",
{"attr": attr_name, "type": types.TypePrinter().name(function_type),
"class": self_type.name},
loc)
self.engine.process(diag)
else:
def makenotes(printer, typea, typeb, loca, locb):
if attr_loc is None:
msgb = "reference to an instance with a method '{attr}{typeb}'"
else:
msgb = "reference to a method '{attr}{typeb}'"
return [
diagnostic.Diagnostic("note",
"expression of type {typea}",
{"typea": printer.name(typea)},
loca),
diagnostic.Diagnostic("note",
msgb,
{"attr": attr_name,
"typeb": printer.name(function_type)},
locb)
]
self._unify(self_type, list(function_type.args.values())[0],
self_loc, loc,
makenotes=makenotes,
when=" while inferring the type for self argument")
def _unify_attribute(self, result_type, value_node, attr_name, attr_loc, loc):
object_type = value_node.type.find() object_type = value_node.type.find()
if not types.is_var(object_type): if not types.is_var(object_type):
@ -109,51 +144,18 @@ class Inferencer(algorithm.Visitor):
] ]
attr_type = object_type.attributes[attr_name] attr_type = object_type.attributes[attr_name]
if types.is_rpc_function(attr_type):
attr_type = types.instantiate(attr_type)
self._unify(result_type, attr_type, loc, None, self._unify(result_type, attr_type, loc, None,
makenotes=makenotes, when=" for attribute '{}'".format(attr_name)) makenotes=makenotes, when=" for attribute '{}'".format(attr_name))
elif types.is_instance(object_type) and \ elif types.is_instance(object_type) and \
attr_name in object_type.constructor.attributes: attr_name in object_type.constructor.attributes:
attr_type = object_type.constructor.attributes[attr_name].find() attr_type = object_type.constructor.attributes[attr_name].find()
if types.is_rpc_function(attr_type):
attr_type = types.instantiate(attr_type)
if types.is_function(attr_type): if types.is_function(attr_type):
# Convert to a method. # Convert to a method.
if len(attr_type.args) < 1: attr_type = types.TMethod(object_type, attr_type)
diag = diagnostic.Diagnostic("error", self._unify_method_self(attr_type, attr_name, attr_loc, loc, value_node.loc)
"function '{attr}{type}' of class '{class}' cannot accept a self argument", elif types.is_rpc(attr_type):
{"attr": attr_name, "type": types.TypePrinter().name(attr_type), # Convert to a method. We don't have to bother typechecking
"class": object_type.name}, # the self argument, since for RPCs anything goes.
loc)
self.engine.process(diag)
return
else:
def makenotes(printer, typea, typeb, loca, locb):
if attr_loc is None:
msgb = "reference to an instance with a method '{attr}{typeb}'"
else:
msgb = "reference to a method '{attr}{typeb}'"
return [
diagnostic.Diagnostic("note",
"expression of type {typea}",
{"typea": printer.name(typea)},
loca),
diagnostic.Diagnostic("note",
msgb,
{"attr": attr_name,
"typeb": printer.name(attr_type)},
locb)
]
self._unify(object_type, list(attr_type.args.values())[0],
value_node.loc, loc,
makenotes=makenotes,
when=" while inferring the type for self argument")
attr_type = types.TMethod(object_type, attr_type) attr_type = types.TMethod(object_type, attr_type)
if not types.is_var(attr_type): if not types.is_var(attr_type):
@ -405,7 +407,7 @@ class Inferencer(algorithm.Visitor):
return return
elif builtins.is_list(left.type) or builtins.is_list(right.type): elif builtins.is_list(left.type) or builtins.is_list(right.type):
list_, other = self._order_by_pred(builtins.is_list, left, right) list_, other = self._order_by_pred(builtins.is_list, left, right)
if not builtins.is_int(other.type): if not builtins.is_int(other.type) and not types.is_var(other.type):
printer = types.TypePrinter() printer = types.TypePrinter()
note1 = diagnostic.Diagnostic("note", note1 = diagnostic.Diagnostic("note",
"list operand of type {typea}", "list operand of type {typea}",
@ -846,6 +848,10 @@ class Inferencer(algorithm.Visitor):
# An user-defined class. # An user-defined class.
self._unify(node.type, typ.find().instance, self._unify(node.type, typ.find().instance,
node.loc, None) node.loc, None)
elif types.is_builtin(typ, "kernel"):
# Ignored.
self._unify(node.type, builtins.TNone(),
node.loc, None)
else: else:
assert False assert False
@ -867,6 +873,10 @@ class Inferencer(algorithm.Visitor):
return # not enough info yet return # not enough info yet
elif types.is_builtin(typ): elif types.is_builtin(typ):
return self.visit_builtin_call(node) return self.visit_builtin_call(node)
elif types.is_rpc(typ):
self._unify(node.type, typ.ret,
node.loc, None)
return
elif not (types.is_function(typ) or types.is_method(typ)): elif not (types.is_function(typ) or types.is_method(typ)):
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
"cannot call this expression of type {type}", "cannot call this expression of type {type}",
@ -884,6 +894,12 @@ class Inferencer(algorithm.Visitor):
typ = types.get_method_function(typ) typ = types.get_method_function(typ)
if types.is_var(typ): if types.is_var(typ):
return # not enough info yet return # not enough info yet
elif types.is_rpc(typ):
self._unify(node.type, typ.ret,
node.loc, None)
return
elif typ.arity() == 0:
return # error elsewhere
typ_arity = typ.arity() - 1 typ_arity = typ.arity() - 1
typ_args = OrderedDict(list(typ.args.items())[1:]) typ_args = OrderedDict(list(typ.args.items())[1:])
@ -1188,7 +1204,9 @@ class Inferencer(algorithm.Visitor):
def visit_FunctionDefT(self, node): def visit_FunctionDefT(self, node):
for index, decorator in enumerate(node.decorator_list): for index, decorator in enumerate(node.decorator_list):
if types.is_builtin(decorator.type, "kernel"): if types.is_builtin(decorator.type, "kernel") or \
isinstance(decorator, asttyped.CallT) and \
types.is_builtin(decorator.func.type, "kernel"):
continue continue
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",
@ -1227,6 +1245,8 @@ class Inferencer(algorithm.Visitor):
self._unify(node.signature_type, signature_type, self._unify(node.signature_type, signature_type,
node.name_loc, None) node.name_loc, None)
visit_QuotedFunctionDefT = visit_FunctionDefT
def visit_ClassDefT(self, node): def visit_ClassDefT(self, node):
if any(node.decorator_list): if any(node.decorator_list):
diag = diagnostic.Diagnostic("error", diag = diagnostic.Diagnostic("error",


@ -142,6 +142,8 @@ class IODelayEstimator(algorithm.Visitor):
body = node.body body = node.body
self.visit_function(node.args, body, node.signature_type.find(), node.loc) self.visit_function(node.args, body, node.signature_type.find(), node.loc)
visit_QuotedFunctionDefT = visit_FunctionDefT
def visit_LambdaT(self, node): def visit_LambdaT(self, node):
self.visit_function(node.args, node.body, node.type.find(), node.loc) self.visit_function(node.args, node.body, node.type.find(), node.loc)
@ -278,7 +280,7 @@ class IODelayEstimator(algorithm.Visitor):
context="as an argument for delay_mu()") context="as an argument for delay_mu()")
call_delay = value call_delay = value
elif not types.is_builtin(typ): elif not types.is_builtin(typ):
if types.is_function(typ): if types.is_function(typ) or types.is_rpc(typ):
offset = 0 offset = 0
elif types.is_method(typ): elif types.is_method(typ):
offset = 1 offset = 1
@ -286,35 +288,38 @@ class IODelayEstimator(algorithm.Visitor):
else: else:
assert False assert False
delay = typ.find().delay.find() if types.is_rpc(typ):
if types.is_var(delay): call_delay = iodelay.Const(0)
raise _UnknownDelay()
elif delay.is_indeterminate():
note = diagnostic.Diagnostic("note",
"function called here", {},
node.loc)
cause = delay.cause
cause = diagnostic.Diagnostic(cause.level, cause.reason, cause.arguments,
cause.location, cause.highlights,
cause.notes + [note])
raise _IndeterminateDelay(cause)
elif delay.is_fixed():
args = {}
for kw_node in node.keywords:
args[kw_node.arg] = kw_node.value
for arg_name, arg_node in zip(list(typ.args)[offset:], node.args):
args[arg_name] = arg_node
free_vars = delay.duration.free_vars()
node.arg_exprs = {
arg: self.evaluate(args[arg], abort=abort,
context="in the expression for argument '{}' "
"that affects I/O delay".format(arg))
for arg in free_vars
}
call_delay = delay.duration.fold(node.arg_exprs)
else: else:
assert False delay = typ.find().delay.find()
if types.is_var(delay):
raise _UnknownDelay()
elif delay.is_indeterminate():
note = diagnostic.Diagnostic("note",
"function called here", {},
node.loc)
cause = delay.cause
cause = diagnostic.Diagnostic(cause.level, cause.reason, cause.arguments,
cause.location, cause.highlights,
cause.notes + [note])
raise _IndeterminateDelay(cause)
elif delay.is_fixed():
args = {}
for kw_node in node.keywords:
args[kw_node.arg] = kw_node.value
for arg_name, arg_node in zip(list(typ.args)[offset:], node.args):
args[arg_name] = arg_node
free_vars = delay.duration.free_vars()
node.arg_exprs = {
arg: self.evaluate(args[arg], abort=abort,
context="in the expression for argument '{}' "
"that affects I/O delay".format(arg))
for arg in free_vars
}
call_delay = delay.duration.fold(node.arg_exprs)
else:
assert False
else: else:
call_delay = iodelay.Const(0) call_delay = iodelay.Const(0)

File diff suppressed because it is too large.


@ -94,9 +94,6 @@ class TVar(Type):
else: else:
return self.find().fold(accum, fn) return self.find().fold(accum, fn)
def map(self, fn):
return fn(self)
def __repr__(self): def __repr__(self):
if self.parent is self: if self.parent is self:
return "<artiq.compiler.types.TVar %d>" % id(self) return "<artiq.compiler.types.TVar %d>" % id(self)
@ -141,21 +138,6 @@ class TMono(Type):
accum = self.params[param].fold(accum, fn) accum = self.params[param].fold(accum, fn)
return fn(accum, self) return fn(accum, self)
def map(self, fn):
params = OrderedDict()
for param in self.params:
params[param] = self.params[param].map(fn)
attributes = OrderedDict()
for attr in self.attributes:
attributes[attr] = self.attributes[attr].map(fn)
self_copy = self.__class__.__new__(self.__class__)
self_copy.name = self.name
self_copy.params = params
self_copy.attributes = attributes
return fn(self_copy)
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params)) return "artiq.compiler.types.TMono(%s, %s)" % (repr(self.name), repr(self.params))
@ -202,9 +184,6 @@ class TTuple(Type):
accum = elt.fold(accum, fn) accum = elt.fold(accum, fn)
return fn(accum, self) return fn(accum, self)
def map(self, fn):
return fn(TTuple(list(map(lambda elt: elt.map(fn), self.elts))))
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.TTuple(%s)" % repr(self.elts) return "artiq.compiler.types.TTuple(%s)" % repr(self.elts)
@ -276,23 +255,6 @@ class TFunction(Type):
accum = self.ret.fold(accum, fn) accum = self.ret.fold(accum, fn)
return fn(accum, self) return fn(accum, self)
def _map_args(self, fn):
args = OrderedDict()
for arg in self.args:
args[arg] = self.args[arg].map(fn)
optargs = OrderedDict()
for optarg in self.optargs:
optargs[optarg] = self.optargs[optarg].map(fn)
return args, optargs, self.ret.map(fn)
def map(self, fn):
args, optargs, ret = self._map_args(fn)
self_copy = TFunction(args, optargs, ret)
self_copy.delay = self.delay.map(fn)
return fn(self_copy)
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.TFunction({}, {}, {})".format( return "artiq.compiler.types.TFunction({}, {}, {})".format(
repr(self.args), repr(self.optargs), repr(self.ret)) repr(self.args), repr(self.optargs), repr(self.ret))
@ -308,48 +270,27 @@ class TFunction(Type):
def __hash__(self): def __hash__(self):
return hash((_freeze(self.args), _freeze(self.optargs), self.ret)) return hash((_freeze(self.args), _freeze(self.optargs), self.ret))
class TRPCFunction(TFunction):
"""
A function type of a remote function.
:ivar service: (int) RPC service number
"""
attributes = OrderedDict()
def __init__(self, args, optargs, ret, service):
super().__init__(args, optargs, ret)
self.service = service
self.delay = TFixedDelay(iodelay.Const(0))
def unify(self, other):
if isinstance(other, TRPCFunction) and \
self.service == other.service:
super().unify(other)
elif isinstance(other, TVar):
other.unify(self)
else:
raise UnificationError(self, other)
def map(self, fn):
args, optargs, ret = self._map_args(fn)
self_copy = TRPCFunction(args, optargs, ret, self.service)
self_copy.delay = self.delay.map(fn)
return fn(self_copy)
class TCFunction(TFunction): class TCFunction(TFunction):
""" """
A function type of a runtime-provided C function. A function type of a runtime-provided C function.
:ivar name: (str) C function name :ivar name: (str) C function name
:ivar flags: (set of str) C function flags.
Flag ``nounwind`` means the function never raises an exception.
Flag ``nowrite`` means the function never writes any memory
that the ARTIQ Python code can observe.
""" """
attributes = OrderedDict() attributes = OrderedDict()
def __init__(self, args, ret, name): def __init__(self, args, ret, name, flags={}):
assert isinstance(flags, set)
for flag in flags:
assert flag in {'nounwind', 'nowrite'}
super().__init__(args, OrderedDict(), ret) super().__init__(args, OrderedDict(), ret)
self.name = name self.name = name
self.delay = TFixedDelay(iodelay.Const(0)) self.delay = TFixedDelay(iodelay.Const(0))
self.flags = flags
def unify(self, other): def unify(self, other):
if isinstance(other, TCFunction) and \ if isinstance(other, TCFunction) and \
@ -360,11 +301,49 @@ class TCFunction(TFunction):
else: else:
raise UnificationError(self, other) raise UnificationError(self, other)
-def map(self, fn):
-args, _optargs, ret = self._map_args(fn)
-self_copy = TCFunction(args, ret, self.name)
-self_copy.delay = self.delay.map(fn)
-return fn(self_copy)
+class TRPC(Type):
+"""
+A type of a remote call.
+
+:ivar ret: (:class:`Type`)
+return type
+:ivar service: (int) RPC service number
+"""
attributes = OrderedDict()
def __init__(self, ret, service):
assert isinstance(ret, Type)
self.ret, self.service = ret, service
def find(self):
return self
def unify(self, other):
if isinstance(other, TRPC) and \
self.service == other.service:
self.ret.unify(other.ret)
elif isinstance(other, TVar):
other.unify(self)
else:
raise UnificationError(self, other)
def fold(self, accum, fn):
accum = self.ret.fold(accum, fn)
return fn(accum, self)
def __repr__(self):
return "artiq.compiler.types.TRPC({})".format(repr(self.ret))
def __eq__(self, other):
return isinstance(other, TRPC) and \
self.service == other.service
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash(self.service)
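A small sketch of how unification behaves for this type; the service numbers and return types are illustrative, and builtins is assumed to be artiq.compiler.builtins:

from artiq.compiler import types, builtins

a = types.TRPC(builtins.TInt64(), service=3)
b = types.TRPC(types.TVar(), service=3)
a.unify(b)    # ok: same service, so only the return types are unified
c = types.TRPC(builtins.TInt64(), service=4)
# a.unify(c) raises UnificationError because the service numbers differ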
class TBuiltin(Type):
""" """
@ -387,9 +366,6 @@ class TBuiltin(Type):
def fold(self, accum, fn): def fold(self, accum, fn):
return fn(accum, self) return fn(accum, self)
def map(self, fn):
return fn(self)
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name)) return "artiq.compiler.types.{}({})".format(type(self).__name__, repr(self.name))
@ -443,6 +419,7 @@ class TInstance(TMono):
assert isinstance(attributes, OrderedDict) assert isinstance(attributes, OrderedDict)
super().__init__(name) super().__init__(name)
self.attributes = attributes self.attributes = attributes
self.constant_attributes = set()
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.TInstance({}, {})".format( return "artiq.compiler.types.TInstance({}, {})".format(
@ -481,9 +458,6 @@ class TValue(Type):
def fold(self, accum, fn): def fold(self, accum, fn):
return fn(accum, self) return fn(accum, self)
def map(self, fn):
return fn(self)
def __repr__(self): def __repr__(self):
return "artiq.compiler.types.TValue(%s)" % repr(self.value) return "artiq.compiler.types.TValue(%s)" % repr(self.value)
@ -534,10 +508,6 @@ class TDelay(Type):
# delay types do not participate in folding # delay types do not participate in folding
pass pass
def map(self, fn):
# or mapping
return self
def __eq__(self, other): def __eq__(self, other):
return isinstance(other, TDelay) and \ return isinstance(other, TDelay) and \
(self.duration == other.duration and \ (self.duration == other.duration and \
@ -561,18 +531,6 @@ def TFixedDelay(duration):
return TDelay(duration, None) return TDelay(duration, None)
def instantiate(typ):
tvar_map = dict()
def mapper(typ):
typ = typ.find()
if is_var(typ):
if typ not in tvar_map:
tvar_map[typ] = TVar()
return tvar_map[typ]
return typ
return typ.map(mapper)
def is_var(typ): def is_var(typ):
return isinstance(typ.find(), TVar) return isinstance(typ.find(), TVar)
@ -607,8 +565,8 @@ def _is_pointer(typ):
def is_function(typ): def is_function(typ):
return isinstance(typ.find(), TFunction) return isinstance(typ.find(), TFunction)
def is_rpc_function(typ): def is_rpc(typ):
return isinstance(typ.find(), TRPCFunction) return isinstance(typ.find(), TRPC)
def is_c_function(typ, name=None): def is_c_function(typ, name=None):
typ = typ.find() typ = typ.find()
@ -723,7 +681,7 @@ class TypePrinter(object):
return "(%s,)" % self.name(typ.elts[0], depth + 1) return "(%s,)" % self.name(typ.elts[0], depth + 1)
else: else:
return "(%s)" % ", ".join([self.name(typ, depth + 1) for typ in typ.elts]) return "(%s)" % ", ".join([self.name(typ, depth + 1) for typ in typ.elts])
elif isinstance(typ, (TFunction, TRPCFunction, TCFunction)): elif isinstance(typ, (TFunction, TCFunction)):
args = [] args = []
args += [ "%s:%s" % (arg, self.name(typ.args[arg], depth + 1)) args += [ "%s:%s" % (arg, self.name(typ.args[arg], depth + 1))
for arg in typ.args] for arg in typ.args]
@ -737,12 +695,12 @@ class TypePrinter(object):
elif not (delay.is_fixed() and iodelay.is_zero(delay.duration)): elif not (delay.is_fixed() and iodelay.is_zero(delay.duration)):
signature += " " + self.name(delay, depth + 1) signature += " " + self.name(delay, depth + 1)
if isinstance(typ, TRPCFunction):
return "[rpc #{}]{}".format(typ.service, signature)
if isinstance(typ, TCFunction): if isinstance(typ, TCFunction):
return "[ffi {}]{}".format(repr(typ.name), signature) return "[ffi {}]{}".format(repr(typ.name), signature)
elif isinstance(typ, TFunction): elif isinstance(typ, TFunction):
return signature return signature
elif isinstance(typ, TRPC):
return "[rpc #{}](...)->{}".format(typ.service, self.name(typ.ret, depth + 1))
elif isinstance(typ, TBuiltinFunction): elif isinstance(typ, TBuiltinFunction):
return "<function {}>".format(typ.name) return "<function {}>".format(typ.name)
elif isinstance(typ, (TConstructor, TExceptionConstructor)): elif isinstance(typ, (TConstructor, TExceptionConstructor)):


@ -277,6 +277,8 @@ class EscapeValidator(algorithm.Visitor):
self.visit_in_region(node, Region(node.loc), node.typing_env, self.visit_in_region(node, Region(node.loc), node.typing_env,
args={ arg.arg: Argument(arg.loc) for arg in node.args.args }) args={ arg.arg: Argument(arg.loc) for arg in node.args.args })
visit_QuotedFunctionDefT = visit_FunctionDefT
def visit_ClassDefT(self, node): def visit_ClassDefT(self, node):
self.youngest_env[node.name] = self.youngest_region self.youngest_env[node.name] = self.youngest_region
self.visit_in_region(node, Region(node.loc), node.constructor_type.attributes) self.visit_in_region(node, Region(node.loc), node.constructor_type.attributes)


@ -24,11 +24,13 @@ class MonomorphismValidator(algorithm.Visitor):
node.name_loc, notes=[note]) node.name_loc, notes=[note])
self.engine.process(diag) self.engine.process(diag)
visit_QuotedFunctionDefT = visit_FunctionDefT
def generic_visit(self, node): def generic_visit(self, node):
super().generic_visit(node) super().generic_visit(node)
if isinstance(node, asttyped.commontyped): if isinstance(node, asttyped.commontyped):
if types.is_polymorphic(node.type) and not types.is_rpc_function(node.type): if types.is_polymorphic(node.type):
note = diagnostic.Diagnostic("note", note = diagnostic.Diagnostic("note",
"the expression has type {type}", "the expression has type {type}",
{"type": types.TypePrinter().name(node.type)}, {"type": types.TypePrinter().name(node.type)},


@ -109,8 +109,8 @@ class VCDManager:
self.codes = vcd_codes()
self.current_time = None

-def set_timescale_ns(self, timescale):
-self.out.write("$timescale {}ns $end\n".format(timescale))
+def set_timescale_ps(self, timescale):
+self.out.write("$timescale {}ps $end\n".format(round(timescale)))

def get_channel(self, name, width):
code = next(self.codes)

@ -263,7 +263,7 @@ def _extract_log_chars(data):
n = data >> 24
data = (data << 8) & 0xffffffff
if not n:
-return r
+continue
r += chr(n)
return r
@ -278,11 +278,9 @@ class LogHandler:
def process_message(self, message):
if isinstance(message, OutputMessage):
-message_payload = _extract_log_chars(message.data)
-self.current_entry += message_payload
-if len(message_payload) < 4:
-channel_name, log_message = \
-self.current_entry.split(":", maxsplit=1)
+self.current_entry += _extract_log_chars(message.data)
+if len(self.current_entry) > 1 and self.current_entry[-1] == "\x1D":
+channel_name, log_message = self.current_entry[:-1].split("\x1E", maxsplit=1)
vcd_value = ""
for c in log_message:
vcd_value += "{:08b}".format(ord(c))
@ -296,10 +294,9 @@ def get_vcd_log_channels(log_channel, messages):
for message in messages:
if (isinstance(message, OutputMessage)
and message.channel == log_channel):
-message_payload = _extract_log_chars(message.data)
-log_entry += message_payload
-if len(message_payload) < 4:
-channel_name, log_message = log_entry.split("\x1E", maxsplit=1)
+log_entry += _extract_log_chars(message.data)
+if len(log_entry) > 1 and log_entry[-1] == "\x1D":
+channel_name, log_message = log_entry[:-1].split("\x1E", maxsplit=1)
l = len(log_message)
if channel_name in vcd_log_channels:
if vcd_log_channels[channel_name] < l:
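Both parsers above expect the framing produced by the rtio_log lowering earlier in this changeset: channel name, a 0x1E separator, the message, then a trailing 0x1D terminator, delivered up to four characters per analyzer word. A standalone illustration with made-up channel and message strings:

entry = "dds0\x1Eset 100 MHz\x1D"   # reassembled from _extract_log_chars() output
assert entry[-1] == "\x1D"
channel_name, log_message = entry[:-1].split("\x1E", maxsplit=1)
assert (channel_name, log_message) == ("dds0", "set 100 MHz")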
@ -370,7 +367,7 @@ def decoded_dump_to_vcd(fileobj, devices, dump):
vcd_manager = VCDManager(fileobj) vcd_manager = VCDManager(fileobj)
ref_period = get_ref_period(devices) ref_period = get_ref_period(devices)
if ref_period is not None: if ref_period is not None:
vcd_manager.set_timescale_ns(ref_period*1e9) vcd_manager.set_timescale_ps(ref_period*1e12)
else: else:
logger.warning("unable to determine core device ref_period") logger.warning("unable to determine core device ref_period")
ref_period = 1e-9 # guess ref_period = 1e-9 # guess


@ -2,11 +2,11 @@ from artiq.language.core import *
from artiq.language.types import *

-@syscall
+@syscall(flags={"nounwind", "nowrite"})
def cache_get(key: TStr) -> TList(TInt32):
raise NotImplementedError("syscall not simulated")

-@syscall
+@syscall(flags={"nowrite"})
def cache_put(key: TStr, value: TList(TInt32)) -> TNone:
raise NotImplementedError("syscall not simulated")


@ -3,8 +3,10 @@ import logging
import traceback import traceback
from enum import Enum from enum import Enum
from fractions import Fraction from fractions import Fraction
from collections import namedtuple
from artiq.coredevice import exceptions from artiq.coredevice import exceptions
from artiq.language.core import int as wrapping_int
from artiq import __version__ as software_version from artiq import __version__ as software_version
@ -61,6 +63,9 @@ class RPCReturnValueError(ValueError):
pass pass
RPCKeyword = namedtuple('RPCKeyword', ['name', 'value'])
class CommGeneric: class CommGeneric:
def __init__(self): def __init__(self):
self._read_type = self._write_type = None self._read_type = self._write_type = None
@ -228,7 +233,8 @@ class CommGeneric:
raise UnsupportedDevice("Unsupported runtime ID: {}" raise UnsupportedDevice("Unsupported runtime ID: {}"
.format(runtime_id)) .format(runtime_id))
gateware_version = self._read_chunk(self._read_length).decode("utf-8") gateware_version = self._read_chunk(self._read_length).decode("utf-8")
if gateware_version != software_version: if gateware_version != software_version and \
gateware_version + ".dirty" != software_version:
logger.warning("Mismatch between gateware (%s) " logger.warning("Mismatch between gateware (%s) "
"and software (%s) versions", "and software (%s) versions",
gateware_version, software_version) gateware_version, software_version)
@ -297,7 +303,6 @@ class CommGeneric:
logger.debug("running kernel") logger.debug("running kernel")
_rpc_sentinel = object() _rpc_sentinel = object()
_rpc_undefined = object()
# See session.c:{send,receive}_rpc_value and llvm_ir_generator.py:_rpc_tag. # See session.c:{send,receive}_rpc_value and llvm_ir_generator.py:_rpc_tag.
def _receive_rpc_value(self, object_map): def _receive_rpc_value(self, object_map):
@ -312,9 +317,9 @@ class CommGeneric:
elif tag == "b": elif tag == "b":
return bool(self._read_int8()) return bool(self._read_int8())
elif tag == "i": elif tag == "i":
return self._read_int32() return wrapping_int(self._read_int32(), 32)
elif tag == "I": elif tag == "I":
return self._read_int64() return wrapping_int(self._read_int64(), 64)
elif tag == "f": elif tag == "f":
return self._read_float64() return self._read_float64()
elif tag == "F": elif tag == "F":
@ -331,27 +336,23 @@ class CommGeneric:
stop = self._receive_rpc_value(object_map) stop = self._receive_rpc_value(object_map)
step = self._receive_rpc_value(object_map) step = self._receive_rpc_value(object_map)
return range(start, stop, step) return range(start, stop, step)
elif tag == "o": elif tag == "k":
present = self._read_int8() name = self._read_string()
if present: value = self._receive_rpc_value(object_map)
return self._receive_rpc_value(object_map) return RPCKeyword(name, value)
else:
return self._rpc_undefined
elif tag == "O": elif tag == "O":
return object_map.retrieve(self._read_int32()) return object_map.retrieve(self._read_int32())
else: else:
raise IOError("Unknown RPC value tag: {}".format(repr(tag))) raise IOError("Unknown RPC value tag: {}".format(repr(tag)))
def _receive_rpc_args(self, object_map, defaults): def _receive_rpc_args(self, object_map):
args = [] args, kwargs = [], {}
default_arg_num = 0
while True: while True:
value = self._receive_rpc_value(object_map) value = self._receive_rpc_value(object_map)
if value is self._rpc_sentinel: if value is self._rpc_sentinel:
return args return args, kwargs
elif value is self._rpc_undefined: elif isinstance(value, RPCKeyword):
args.append(defaults[default_arg_num]) kwargs[value.name] = value.value
default_arg_num += 1
else: else:
args.append(value) args.append(value)
@ -442,13 +443,13 @@ class CommGeneric:
else: else:
service = object_map.retrieve(service_id) service = object_map.retrieve(service_id)
arguments = self._receive_rpc_args(object_map, service.__defaults__) args, kwargs = self._receive_rpc_args(object_map)
return_tags = self._read_bytes() return_tags = self._read_bytes()
logger.debug("rpc service: [%d]%r %r -> %s", service_id, service, arguments, return_tags) logger.debug("rpc service: [%d]%r %r %r -> %s", service_id, service, args, kwargs, return_tags)
try: try:
result = service(*arguments) result = service(*args, **kwargs)
logger.debug("rpc service: %d %r == %r", service_id, arguments, result) logger.debug("rpc service: %d %r %r == %r", service_id, args, kwargs, result)
if service_id != 0: if service_id != 0:
self._write_header(_H2DMsgType.RPC_REPLY) self._write_header(_H2DMsgType.RPC_REPLY)
@ -456,7 +457,7 @@ class CommGeneric:
self._send_rpc_value(bytearray(return_tags), result, result, service) self._send_rpc_value(bytearray(return_tags), result, result, service)
self._write_flush() self._write_flush()
except Exception as exn: except Exception as exn:
logger.debug("rpc service: %d %r ! %r", service_id, arguments, exn) logger.debug("rpc service: %d %r %r ! %r", service_id, args, kwargs, exn)
self._write_header(_H2DMsgType.RPC_EXCEPTION) self._write_header(_H2DMsgType.RPC_EXCEPTION)
@ -485,7 +486,13 @@ class CommGeneric:
for index in range(3): for index in range(3):
self._write_int64(0) self._write_int64(0)
(_, (filename, line, function, _), ) = traceback.extract_tb(exn.__traceback__, 2) tb = traceback.extract_tb(exn.__traceback__, 2)
if len(tb) == 2:
(_, (filename, line, function, _), ) = tb
elif len(tb) == 1:
((filename, line, function, _), ) = tb
else:
assert False
self._write_string(filename) self._write_string(filename)
self._write_int32(line) self._write_int32(line)
self._write_int32(-1) # column not known self._write_int32(-1) # column not known


@ -37,7 +37,7 @@ class CompileError(Exception):
return "\n" + _render_diagnostic(self.diagnostic, colored=colors_supported) return "\n" + _render_diagnostic(self.diagnostic, colored=colors_supported)
@syscall @syscall(flags={"nounwind", "nowrite"})
def rtio_get_counter() -> TInt64: def rtio_get_counter() -> TInt64:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
@ -58,6 +58,12 @@ class Core:
factor). factor).
:param comm_device: name of the device used for communications. :param comm_device: name of the device used for communications.
""" """
kernel_invariants = {
"core", "ref_period", "coarse_ref_period", "ref_multiplier",
"external_clock",
}
def __init__(self, dmgr, ref_period, external_clock=False, def __init__(self, dmgr, ref_period, external_clock=False,
ref_multiplier=8, comm_device="comm"): ref_multiplier=8, comm_device="comm"):
self.ref_period = ref_period self.ref_period = ref_period


@ -10,25 +10,27 @@ PHASE_MODE_ABSOLUTE = 1
PHASE_MODE_TRACKING = 2 PHASE_MODE_TRACKING = 2
@syscall @syscall(flags={"nowrite"})
def dds_init(time_mu: TInt64, bus_channel: TInt32, channel: TInt32) -> TNone: def dds_init(time_mu: TInt64, bus_channel: TInt32, channel: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
@syscall @syscall(flags={"nowrite"})
def dds_set(time_mu: TInt64, bus_channel: TInt32, channel: TInt32, ftw: TInt32, def dds_set(time_mu: TInt64, bus_channel: TInt32, channel: TInt32, ftw: TInt32,
pow: TInt32, phase_mode: TInt32, amplitude: TInt32) -> TNone: pow: TInt32, phase_mode: TInt32, amplitude: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
@syscall @syscall(flags={"nowrite"})
def dds_batch_enter(time_mu: TInt64) -> TNone: def dds_batch_enter(time_mu: TInt64) -> TNone:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
@syscall @syscall(flags={"nowrite"})
def dds_batch_exit() -> TNone: def dds_batch_exit() -> TNone:
raise NotImplementedError("syscall not simulated") raise NotImplementedError("syscall not simulated")
class _BatchContextManager: class _BatchContextManager:
kernel_invariants = {"core", "core_dds"}
def __init__(self, core_dds): def __init__(self, core_dds):
self.core_dds = core_dds self.core_dds = core_dds
self.core = self.core_dds.core self.core = self.core_dds.core
@ -50,6 +52,9 @@ class CoreDDS:
:param sysclk: DDS system frequency. The DDS system clock must be a :param sysclk: DDS system frequency. The DDS system clock must be a
phase-locked multiple of the RTIO clock. phase-locked multiple of the RTIO clock.
""" """
kernel_invariants = {"core", "sysclk", "batch"}
def __init__(self, dmgr, sysclk, core_device="core"): def __init__(self, dmgr, sysclk, core_device="core"):
self.core = dmgr.get(core_device) self.core = dmgr.get(core_device)
self.sysclk = sysclk self.sysclk = sysclk
@ -60,8 +65,8 @@ class CoreDDS:
"""Starts a DDS command batch. All DDS commands are buffered """Starts a DDS command batch. All DDS commands are buffered
after this call, until ``batch_exit`` is called. after this call, until ``batch_exit`` is called.
The time of execution of the DDS commands is the time of entering the The time of execution of the DDS commands is the time cursor position
batch (as closely as hardware permits).""" when the batch is entered."""
dds_batch_enter(now_mu()) dds_batch_enter(now_mu())
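For reference, a usage sketch of the batch context manager that wraps this call; the device names dds0/dds1, the frequencies and the MHz unit are assumptions, not part of this changeset:

@kernel
def set_both(self):
    with self.core_dds.batch:
        # both updates execute at the time cursor position captured
        # when the batch was entered
        self.dds0.set(100*MHz)
        self.dds1.set(120*MHz)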
@kernel
@ -79,9 +84,16 @@ class _DDSGeneric:
This class should not be used directly, instead, use the chip-specific
drivers such as ``AD9858`` and ``AD9914``.

+The time cursor is not modified by any function in this class.
+
:param bus: name of the DDS bus device that this DDS is connected to.
:param channel: channel number of the DDS device to control.
"""
+kernel_invariants = {
+"core", "core_dds", "bus_channel", "channel", "pow_width"
+}

def __init__(self, dmgr, bus_channel, channel, core_dds_device="core_dds"):
self.core_dds = dmgr.get(core_dds_device)
self.core = self.core_dds.core
@ -89,48 +101,54 @@ class _DDSGeneric:
self.channel = channel self.channel = channel
self.phase_mode = PHASE_MODE_CONTINUOUS self.phase_mode = PHASE_MODE_CONTINUOUS
@portable @portable(flags=["fast-math"])
def frequency_to_ftw(self, frequency): def frequency_to_ftw(self, frequency):
"""Returns the frequency tuning word corresponding to the given """Returns the frequency tuning word corresponding to the given
frequency. frequency.
""" """
return round(int(2, width=64)**32*frequency/self.core_dds.sysclk) return round(int(2, width=64)**32*frequency/self.core_dds.sysclk)
@portable @portable(flags=["fast-math"])
def ftw_to_frequency(self, ftw): def ftw_to_frequency(self, ftw):
"""Returns the frequency corresponding to the given frequency tuning """Returns the frequency corresponding to the given frequency tuning
word. word.
""" """
return ftw*self.core_dds.sysclk/int(2, width=64)**32 return ftw*self.core_dds.sysclk/int(2, width=64)**32
@portable @portable(flags=["fast-math"])
def turns_to_pow(self, turns): def turns_to_pow(self, turns):
"""Returns the phase offset word corresponding to the given phase """Returns the phase offset word corresponding to the given phase
in turns.""" in turns."""
return round(turns*2**self.pow_width) return round(turns*2**self.pow_width)
@portable @portable(flags=["fast-math"])
def pow_to_turns(self, pow): def pow_to_turns(self, pow):
"""Returns the phase in turns corresponding to the given phase offset """Returns the phase in turns corresponding to the given phase offset
word.""" word."""
return pow/2**self.pow_width return pow/2**self.pow_width
@portable @portable(flags=["fast-math"])
def amplitude_to_asf(self, amplitude): def amplitude_to_asf(self, amplitude):
"""Returns amplitude scale factor corresponding to given amplitude.""" """Returns amplitude scale factor corresponding to given amplitude."""
return round(amplitude*0x0fff) return round(amplitude*0x0fff)
-@portable
+@portable(flags=["fast-math"])
def asf_to_amplitude(self, asf):
"""Returns the amplitude corresponding to the given amplitude scale
factor."""
-return round(amplitude*0x0fff)
+return asf/0x0fff
@kernel @kernel
def init(self):
"""Resets and initializes the DDS channel.

-The runtime does this for all channels upon core device startup."""
+This needs to be done for each DDS channel before it can be used, and
+it is recommended to use the startup kernel for this.
+
+This function cannot be used in a batch; the correct way of
+initializing multiple DDS channels is to call this function
+sequentially with a delay between the calls. 2ms provides a good
+timing margin."""
dds_init(now_mu(), self.bus_channel, self.channel)
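A sketch of the recommended startup-kernel pattern from the docstring above; the device names are assumptions, delay() is the builtin handled by the compiler, and ms is assumed to come from artiq.language.units:

@kernel
def startup_kernel(self):
    self.dds0.init()
    delay(2*ms)
    self.dds1.init()
    delay(2*ms)
    self.dds2.init()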
@kernel
@ -163,6 +181,9 @@ class _DDSGeneric:
chip and can be retrieved via the ``pow_width`` attribute. The amplitude chip and can be retrieved via the ``pow_width`` attribute. The amplitude
width is 12. width is 12.
The "frequency update" pulse is sent to the DDS with a fixed latency
with respect to the current position of the time cursor.
:param frequency: frequency to generate. :param frequency: frequency to generate.
:param phase: adds an offset, in turns, to the phase. :param phase: adds an offset, in turns, to the phase.
:param phase_mode: if specified, overrides the default phase mode set :param phase_mode: if specified, overrides the default phase mode set


@ -38,7 +38,7 @@ class CoreException:
elif address == last_address:
formatted_address = " (inlined)"
else:
-formatted_address = " (RA=0x{:x})".format(address)
+formatted_address = " (RA=+0x{:x})".format(address)
last_address = address
filename = filename.replace(artiq_dir, "<artiq>")


@@ -3,27 +3,27 @@ from artiq.language.types import TBool, TInt32, TNone
 from artiq.coredevice.exceptions import I2CError


-@syscall
+@syscall(flags={"nowrite"})
 def i2c_init(busno: TInt32) -> TNone:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nounwind", "nowrite"})
 def i2c_start(busno: TInt32) -> TNone:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nounwind", "nowrite"})
 def i2c_stop(busno: TInt32) -> TNone:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nounwind", "nowrite"})
 def i2c_write(busno: TInt32, b: TInt32) -> TBool:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nounwind", "nowrite"})
 def i2c_read(busno: TInt32, ack: TBool) -> TInt32:
     raise NotImplementedError("syscall not simulated")

@@ -108,5 +108,10 @@ class TCA6424A:
         A bit set to 1 means the TTL is an output.
         """

+        outputs_le = (
+            ((outputs & 0xff0000) >> 16) |
+            (outputs & 0x00ff00) |
+            (outputs & 0x0000ff) << 16)
+
         self._write24(0x8c, 0)  # set all directions to output
-        self._write24(0x84, output)  # set levels
+        self._write24(0x84, outputs_le)  # set levels
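The added outputs_le computation swaps the outer bytes of the 24-bit word before it is written to the expander, presumably to match the order in which _write24 pushes bytes onto the I2C bus. A standalone check of the arithmetic (the function name below is ours, not the driver's):

def to_le24(outputs):
    # Same byte swap as the TCA6424A.set() change above: byte 2 moves to
    # byte 0 and byte 0 to byte 2, the middle byte stays in place.
    return (((outputs & 0xff0000) >> 16) |
            (outputs & 0x00ff00) |
            (outputs & 0x0000ff) << 16)

assert hex(to_le24(0xAABBCC)) == "0xccbbaa"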

View File

@@ -2,17 +2,17 @@ from artiq.language.core import syscall
 from artiq.language.types import TInt64, TInt32, TNone


-@syscall
+@syscall(flags={"nowrite"})
 def rtio_output(time_mu: TInt64, channel: TInt32, addr: TInt32, data: TInt32
                 ) -> TNone:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nowrite"})
 def rtio_input_timestamp(timeout_mu: TInt64, channel: TInt32) -> TInt64:
     raise NotImplementedError("syscall not simulated")


-@syscall
+@syscall(flags={"nowrite"})
 def rtio_input_data(channel: TInt32) -> TInt32:
     raise NotImplementedError("syscall not simulated")

View File

@@ -38,6 +38,21 @@ class SPIMaster:
     * If desired, :meth:`write` ``data`` queuing the next
       (possibly chained) transfer.

+    **Notes**:
+
+    * In order to chain a transfer onto an in-flight transfer without
+      deasserting ``cs`` in between, the second :meth:`write` needs to
+      happen strictly later than ``2*ref_period_mu`` (two coarse RTIO
+      cycles) but strictly earlier than ``xfer_period_mu + write_period_mu``
+      after the first. Note that :meth:`write` already applies a delay of
+      ``xfer_period_mu + write_period_mu``.
+    * A full transfer takes ``write_period_mu + xfer_period_mu``.
+    * Chained transfers can happen every ``xfer_period_mu``.
+    * Read data is available every ``xfer_period_mu`` starting
+      a bit after xfer_period_mu (depending on ``clk_phase``).
+    * As a consequence, in order to chain transfers together, new data must
+      be written before the pending transfer's read data becomes available.
+
     :param channel: RTIO channel number of the SPI bus to control.
     """
     def __init__(self, dmgr, channel, core_device="core"):
@@ -48,13 +63,6 @@ class SPIMaster:
         self.write_period_mu = int(0, 64)
         self.read_period_mu = int(0, 64)
         self.xfer_period_mu = int(0, 64)
-        # A full transfer takes write_period_mu + xfer_period_mu.
-        # Chained transfers can happen every xfer_period_mu.
-        # The second transfer of a chain can be written 2*ref_period_mu
-        # after the first. Read data is available every xfer_period_mu starting
-        # a bit after xfer_period_mu (depending on clk_phase).
-        # To chain transfers together, new data must be written before
-        # pending transfer's read data becomes available.

     @portable
     def frequency_to_div(self, f):
@@ -196,6 +204,7 @@ class SPIMaster:
           deasserting ``cs`` in between. Once a transfer completes,
           the previous transfer's read data is available in the
           ``data`` register.
+        * For bit alignment and bit ordering see :meth:`set_config`.

         This method advances the timeline by the duration of the SPI transfer.
         If a transfer is to be chained, the timeline needs to be rewound.
@@ -207,6 +216,8 @@ class SPIMaster:
     def read_async(self):
         """Trigger an asynchronous read from the ``data`` register.

+        For bit alignment and bit ordering see :meth:`set_config`.
+
         Reads always finish in two cycles.

         Every data register read triggered by a :meth:`read_async`
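The notes moved into the docstring above pin down the chaining window. The following kernel fragment is a hedged sketch of one way to satisfy it, not part of the diff: the device name "spi0", the data words, and the use of a negative delay_mu() to rewind the cursor are our assumptions, and the bus is assumed to have been configured (clock divider, transfer length, chip select) beforehand.

from artiq.experiment import *


class ChainedSPI(EnvExperiment):
    """Sketch of two back-to-back SPI writes without deasserting cs."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("spi0")   # placeholder SPIMaster channel

    @kernel
    def run(self):
        self.core.break_realtime()
        self.spi0.write(0x12345678)
        # write() advanced the cursor by xfer_period_mu + write_period_mu;
        # rewinding by write_period_mu places the second write xfer_period_mu
        # after the first, inside the (2*ref_period_mu,
        # xfer_period_mu + write_period_mu) window quoted above.
        delay_mu(-self.spi0.write_period_mu)
        self.spi0.write(0x0badcafe)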

View File

@@ -10,6 +10,8 @@ class TTLOut:
     :param channel: channel number
     """
+    kernel_invariants = {"core", "channel"}
+
     def __init__(self, dmgr, channel, core_device="core"):
         self.core = dmgr.get(core_device)
         self.channel = channel
@@ -35,18 +37,26 @@ class TTLOut:
     @kernel
     def on(self):
-        """Sets the output to a logic high state."""
+        """Sets the output to a logic high state at the current position
+        of the time cursor.
+
+        The time cursor is not modified by this function."""
         self.set_o(True)

     @kernel
     def off(self):
-        """Set the output to a logic low state."""
+        """Set the output to a logic low state at the current position
+        of the time cursor.
+
+        The time cursor is not modified by this function."""
         self.set_o(False)

     @kernel
     def pulse_mu(self, duration):
         """Pulse the output high for the specified duration
-        (in machine units)."""
+        (in machine units).
+
+        The time cursor is advanced by the specified duration."""
         self.on()
         delay_mu(duration)
         self.off()
@@ -54,7 +64,9 @@ class TTLOut:
     @kernel
     def pulse(self, duration):
         """Pulse the output high for the specified duration
-        (in seconds)."""
+        (in seconds).
+
+        The time cursor is advanced by the specified duration."""
         self.on()
         delay(duration)
         self.off()
@@ -82,6 +94,8 @@ class TTLInOut:
     :param channel: channel number
     """
+    kernel_invariants = {"core", "channel"}
+
     def __init__(self, dmgr, channel, core_device="core"):
         self.core = dmgr.get(core_device)
         self.channel = channel
@@ -96,7 +110,8 @@ class TTLInOut:
     @kernel
     def output(self):
-        """Set the direction to output.
+        """Set the direction to output at the current position of the time
+        cursor.

         There must be a delay of at least one RTIO clock cycle before any
         other command can be issued."""
@@ -104,7 +119,8 @@ class TTLInOut:
     @kernel
     def input(self):
-        """Set the direction to input.
+        """Set the direction to input at the current position of the time
+        cursor.

         There must be a delay of at least one RTIO clock cycle before any
         other command can be issued."""
@@ -124,22 +140,30 @@ class TTLInOut:
     @kernel
     def on(self):
-        """Set the output to a logic high state.
+        """Set the output to a logic high state at the current position of the
+        time cursor.

-        The channel must be in output mode."""
+        The channel must be in output mode.
+
+        The time cursor is not modified by this function."""
         self.set_o(True)

     @kernel
     def off(self):
-        """Set the output to a logic low state.
+        """Set the output to a logic low state at the current position of the
+        time cursor.

-        The channel must be in output mode."""
+        The channel must be in output mode.
+
+        The time cursor is not modified by this function."""
         self.set_o(False)

     @kernel
     def pulse_mu(self, duration):
         """Pulses the output high for the specified duration
-        (in machine units)."""
+        (in machine units).
+
+        The time cursor is advanced by the specified duration."""
         self.on()
         delay_mu(duration)
         self.off()
@@ -147,7 +171,9 @@ class TTLInOut:
     @kernel
     def pulse(self, duration):
         """Pulses the output high for the specified duration
-        (in seconds)."""
+        (in seconds).
+
+        The time cursor is advanced by the specified duration."""
         self.on()
         delay(duration)
         self.off()
@@ -160,7 +186,9 @@ class TTLInOut:
     @kernel
     def gate_rising_mu(self, duration):
         """Register rising edge events for the specified duration
-        (in machine units)."""
+        (in machine units).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(1)
         delay_mu(duration)
         self._set_sensitivity(0)
@@ -168,7 +196,9 @@ class TTLInOut:
     @kernel
     def gate_falling_mu(self, duration):
         """Register falling edge events for the specified duration
-        (in machine units)."""
+        (in machine units).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(2)
         delay_mu(duration)
         self._set_sensitivity(0)
@@ -176,7 +206,9 @@ class TTLInOut:
     @kernel
     def gate_both_mu(self, duration):
         """Register both rising and falling edge events for the specified
-        duration (in machine units)."""
+        duration (in machine units).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(3)
         delay_mu(duration)
         self._set_sensitivity(0)
@@ -184,7 +216,9 @@ class TTLInOut:
     @kernel
     def gate_rising(self, duration):
         """Register rising edge events for the specified duration
-        (in seconds)."""
+        (in seconds).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(1)
         delay(duration)
         self._set_sensitivity(0)
@@ -192,7 +226,9 @@ class TTLInOut:
     @kernel
     def gate_falling(self, duration):
         """Register falling edge events for the specified duration
-        (in seconds)."""
+        (in seconds).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(2)
         delay(duration)
         self._set_sensitivity(0)
@@ -200,7 +236,9 @@ class TTLInOut:
     @kernel
     def gate_both(self, duration):
         """Register both rising and falling edge events for the specified
-        duration (in seconds)."""
+        duration (in seconds).
+
+        The time cursor is advanced by the specified duration."""
         self._set_sensitivity(3)
         delay(duration)
         self._set_sensitivity(0)
@@ -208,7 +246,9 @@ class TTLInOut:
     @kernel
     def count(self):
         """Poll the RTIO input during all the previously programmed gate
-        openings, and returns the number of registered events."""
+        openings, and returns the number of registered events.
+
+        This function does not interact with the time cursor."""
         count = 0
         while rtio_input_timestamp(self.i_previous_timestamp, self.channel) >= 0:
             count += 1
@@ -220,7 +260,8 @@ class TTLInOut:
         units), according to the gating.

         If the gate is permanently closed, returns a negative value.
-        """
+
+        This function does not interact with the time cursor."""
         return rtio_input_timestamp(self.i_previous_timestamp, self.channel)

@@ -230,8 +271,12 @@ class TTLClockGen:
     This should be used with TTL channels that have a clock generator
     built into the gateware (not compatible with regular TTL channels).

+    The time cursor is not modified by any function in this class.
+
     :param channel: channel number
     """
+    kernel_invariants = {"core", "channel", "acc_width"}
+
     def __init__(self, dmgr, channel, core_device="core"):
         self.core = dmgr.get(core_device)
         self.channel = channel
@@ -256,7 +301,8 @@ class TTLClockGen:
     @kernel
     def set_mu(self, frequency):
-        """Set the frequency of the clock, in machine units.
+        """Set the frequency of the clock, in machine units, at the current
+        position of the time cursor.

         This also sets the phase, as the time of the first generated rising
         edge corresponds to the time of the call.
@@ -270,8 +316,7 @@ class TTLClockGen:
         Due to the way the clock generator operates, frequency tuning words
         that are not powers of two cause jitter of one RTIO clock cycle at the
-        output.
-        """
+        output."""
         rtio_output(now_mu(), self.channel, 0, frequency)
         self.previous_timestamp = now_mu()
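The gating docstrings above spell out how each call moves (or does not move) the time cursor. A minimal sketch of the usual gate-then-count pattern, not part of the diff; "ttl0" is a placeholder TTLInOut channel and the 1 µs delay is simply a comfortable margin over the required single RTIO clock cycle.

from artiq.experiment import *


class CountRisingEdges(EnvExperiment):
    """Sketch: gate a TTL input and count the registered rising edges."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl0")   # placeholder TTLInOut channel

    @kernel
    def run(self):
        self.core.break_realtime()
        self.ttl0.input()
        delay(1*us)                   # direction changes need >= 1 RTIO cycle
        self.ttl0.gate_rising(10*ms)  # advances the cursor by 10 ms
        n = self.ttl0.count()         # drains the events; cursor untouched
        print(n)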

View File

@@ -73,7 +73,7 @@ class Lda:
     or by installing the libhidapi-libusb0 binary package on Debian-like OS.

     On Windows you should put hidapi.dll shared library in the
-    artiq\devices\lda folder.
+    artiq\\\\devices\\\\lda folder.

    """
    _vendor_id = 0x041f

View File

@@ -272,7 +272,7 @@
     "\n",
     "with h5py.File(glob.glob(f)[0]) as f:\n",
     "    print(\"available datasets\", list(f))\n",
-    "    assert np.allclose(f[\"flopping_f_brightness\"], d)"
+    "    assert np.allclose(f[\"datasets/flopping_f_brightness\"], d)"
    ]
   },
   {
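The notebook change above reflects that the results HDF5 file now stores experiment datasets under a "datasets" group. A short standalone sketch of reading such a file back, not part of the diff; the file path is a made-up placeholder.

import h5py

# Results written by an experiment live under the "datasets" group.
with h5py.File("results/000000123-FloppingF.h5", "r") as f:
    print("available datasets:", list(f["datasets"]))
    brightness = f["datasets"]["flopping_f_brightness"][()]
    print(brightness)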

View File

@@ -187,7 +187,7 @@
     "arguments": {
         "pdq2_devices": ["qc_q1_0", "qc_q1_1", "qc_q1_2", "qc_q1_3"],
         "trigger_device": "ttl2",
-        "frame_devices": ["ttl3", "ttl4", "ttl5"]
+        "frame_devices": ["ttl4", "ttl5", "ttl6"]
     }
 },

View File

@@ -22,7 +22,8 @@ class SubComponent2(HasEnvironment):
     def build(self):
         self.setattr_argument("sc2_boolean", BooleanValue(False),
                               "Transporter")
-        self.setattr_argument("sc2_scan", Scannable(default=NoScan(325)),
+        self.setattr_argument("sc2_scan", Scannable(
+                                  default=LinearScan(200, 300, 49)),
                               "Transporter")
         self.setattr_argument("sc2_enum", EnumerationValue(["3", "4", "5"]),
                               "Transporter")
@@ -37,10 +38,15 @@ class SubComponent2(HasEnvironment):

 class ArgumentsDemo(EnvExperiment):
     def build(self):
-        self.setattr_argument("pyon_value", PYONValue(None))
+        # change the "foo" dataset and click the "recompute argument"
+        # buttons.
+        self.setattr_argument("pyon_value",
+                              PYONValue(self.get_dataset("foo", default=42)))
         self.setattr_argument("number", NumberValue(42e-6,
                               unit="us", scale=1e-6,
                               ndecimals=4))
+        self.setattr_argument("integer", NumberValue(42,
+                              step=1, ndecimals=0))
         self.setattr_argument("string", StringValue("Hello World"))
         self.setattr_argument("scan", Scannable(global_max=400,
                                                 default=NoScan(325),
@@ -62,7 +68,8 @@ class ArgumentsDemo(EnvExperiment):
         print(self.pyon_value)
         print(self.boolean)
         print(self.enum)
-        print(self.number)
+        print(self.number, type(self.number))
+        print(self.integer, type(self.integer))
         print(self.string)
         for i in self.scan:
             print(i)

View File

@@ -13,8 +13,8 @@ class PhotonHistogram(EnvExperiment):
         self.setattr_device("bdd_sw")
         self.setattr_device("pmt")

-        self.setattr_argument("nbins", NumberValue(100))
-        self.setattr_argument("repeats", NumberValue(100))
+        self.setattr_argument("nbins", NumberValue(100, ndecimals=0, step=1))
+        self.setattr_argument("repeats", NumberValue(100, ndecimals=0, step=1))

         self.setattr_dataset("cool_f", 230*MHz)
         self.setattr_dataset("detect_f", 220*MHz)
@@ -54,6 +54,7 @@ class PhotonHistogram(EnvExperiment):
         total = 0

         for i in range(self.repeats):
+            delay(0.5*ms)
             n = self.cool_detect()
             if n >= self.nbins:
                 n = self.nbins - 1

View File

@@ -66,6 +66,7 @@ class Transport(EnvExperiment):
     @kernel
     def one(self):
+        delay(1*ms)
         self.cool()
         self.transport()
         return self.detect()

View File

@@ -38,19 +38,19 @@ class FloppingF(EnvExperiment):
     def run(self):
         l = len(self.frequency_scan)
-        frequency = self.set_dataset("flopping_f_frequency",
-                                     np.full(l, np.nan),
-                                     broadcast=True, save=False)
-        brightness = self.set_dataset("flopping_f_brightness",
-                                      np.full(l, np.nan),
-                                      broadcast=True)
+        self.set_dataset("flopping_f_frequency",
+                         np.full(l, np.nan),
+                         broadcast=True, save=False)
+        self.set_dataset("flopping_f_brightness",
+                         np.full(l, np.nan),
+                         broadcast=True)
         self.set_dataset("flopping_f_fit", np.full(l, np.nan),
                          broadcast=True, save=False)

         for i, f in enumerate(self.frequency_scan):
             m_brightness = model(f, self.F0) + self.noise_amplitude*random.random()
-            frequency[i] = f
-            brightness[i] = m_brightness
+            self.mutate_dataset("flopping_f_frequency", i, f)
+            self.mutate_dataset("flopping_f_brightness", i, m_brightness)
             time.sleep(0.1)
         self.scheduler.submit(self.scheduler.pipeline_name, self.scheduler.expid,
                               self.scheduler.priority, time.time() + 20, False)
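The change above replaces in-place writes to the array returned by set_dataset with explicit mutate_dataset calls, which is what publishes each point to subscribers. A minimal standalone sketch of that pattern, not part of the diff; the dataset key "live_curve" is a placeholder.

import numpy as np

from artiq.experiment import *


class LiveCurve(EnvExperiment):
    """Sketch of the set_dataset/mutate_dataset pattern used above."""
    def run(self):
        n = 100
        # Create a broadcast dataset filled with NaN placeholders...
        self.set_dataset("live_curve", np.full(n, np.nan), broadcast=True)
        for i in range(n):
            # ...then publish one point at a time so subscribers (applets,
            # the dashboard) see each update as it happens.
            self.mutate_dataset("live_curve", i, i ** 2)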

View File

@@ -20,16 +20,15 @@ class Histograms(EnvExperiment):
         xs = np.empty(npoints)
         xs.fill(np.nan)
-        xs = self.set_dataset("hd_xs", xs,
-                              broadcast=True, save=False)
+        self.set_dataset("hd_xs", xs,
+                         broadcast=True, save=False)

-        counts = np.empty((npoints, nbins))
-        counts = self.set_dataset("hd_counts", counts,
-                                  broadcast=True, save=False)
+        self.set_dataset("hd_counts", np.empty((npoints, nbins)),
+                         broadcast=True, save=False)

         for i in range(npoints):
             histogram, _ = np.histogram(np.random.normal(i, size=1000),
                                         bin_boundaries)
-            counts[i] = histogram
-            xs[i] = i % 8
+            self.mutate_dataset("hd_counts", i, histogram)
+            self.mutate_dataset("hd_xs", i, i % 8)
             sleep(0.3)

View File

@@ -42,4 +42,4 @@ class AluminumSpectroscopy(EnvExperiment):
                     break
             if photon_count < self.photon_limit_low:
                 state_0_count += 1
-        return state_0_count
+        print(state_0_count)

View File

@@ -96,6 +96,8 @@ def get_argparser():
     parser_scan_repos = subparsers.add_parser(
         "scan-repository", help="trigger a repository (re)scan")
+    parser_scan_repos.add_argument("--async", action="store_true",
+                                   help="trigger scan and return immediately")
     parser_scan_repos.add_argument("revision", default=None, nargs="?",
                                    help="use a specific repository revision "
                                         "(defaults to head)")
@@ -159,7 +161,10 @@ def _action_scan_devices(remote, args):

 def _action_scan_repository(remote, args):
-    remote.scan_repository(args.revision)
+    if args.async:
+        remote.scan_repository_async(args.revision)
+    else:
+        remote.scan_repository(args.revision)


 def _action_ls(remote, args):

View File

@@ -5,6 +5,7 @@ import argparse
 import os
 import subprocess
 import tempfile
+import shutil

 from artiq import __artiq_dir__ as artiq_dir
 from artiq.frontend.bit2bin import bit2bin
@@ -38,6 +39,8 @@ Prerequisites:
                         help="target board, default: %(default)s")
     parser.add_argument("-m", "--adapter", default="clock",
                         help="target adapter, default: %(default)s")
+    parser.add_argument("--target-file", default=None,
+                        help="use alternative OpenOCD target file")
     parser.add_argument("-f", "--storage", help="write file to storage area")
     parser.add_argument("-d", "--dir", help="look for files in this directory")
     parser.add_argument("ACTION", nargs="*",
@@ -72,6 +75,16 @@ def main():
     if opts.dir is None:
         opts.dir = os.path.join(artiq_dir, "binaries",
                                 "{}-{}".format(opts.target, opts.adapter))
+    if not os.path.exists(opts.dir):
+        raise SystemExit("Binaries directory '{}' does not exist"
+                         .format(opts.dir))
+
+    scripts_path = ["share", "openocd", "scripts"]
+    if os.name == "nt":
+        scripts_path.insert(0, "Library")
+    scripts_path = os.path.abspath(os.path.join(
+        os.path.dirname(shutil.which("openocd")),
+        "..", *scripts_path))

     conv = False
@@ -85,7 +98,7 @@ def main():
                   "/usr/local/share/migen", "/usr/share/migen"]:
             proxy_ = os.path.join(p, proxy_base)
             if os.access(proxy_, os.R_OK):
-                proxy = "jtagspi_init 0 {}".format(proxy_)
+                proxy = "jtagspi_init 0 {{{}}}".format(proxy_)
                 break
         if not proxy:
             raise SystemExit(
@@ -94,22 +107,22 @@ def main():
         elif action == "gateware":
             bin = os.path.join(opts.dir, "top.bin")
             if not os.access(bin, os.R_OK):
-                bin = tempfile.mkstemp()[1]
+                bin_handle, bin = tempfile.mkstemp()
                 bit = os.path.join(opts.dir, "top.bit")
                 conv = True
-            prog.append("jtagspi_program {} 0x{:x}".format(
+            prog.append("jtagspi_program {{{}}} 0x{:x}".format(
                 bin, config["gateware"]))
         elif action == "bios":
-            prog.append("jtagspi_program {} 0x{:x}".format(
+            prog.append("jtagspi_program {{{}}} 0x{:x}".format(
                 os.path.join(opts.dir, "bios.bin"), config["bios"]))
         elif action == "runtime":
-            prog.append("jtagspi_program {} 0x{:x}".format(
+            prog.append("jtagspi_program {{{}}} 0x{:x}".format(
                 os.path.join(opts.dir, "runtime.fbi"), config["runtime"]))
         elif action == "storage":
-            prog.append("jtagspi_program {} 0x{:x}".format(
+            prog.append("jtagspi_program {{{}}} 0x{:x}".format(
                 opts.storage, config["storage"]))
         elif action == "load":
-            prog.append("pld load 0 {}".format(
+            prog.append("pld load 0 {{{}}}".format(
                 os.path.join(opts.dir, "top.bit")))
         elif action == "start":
             prog.append(config["start"])
@@ -118,10 +131,15 @@ def main():
     prog.append("exit")
     try:
         if conv:
-            bit2bin(bit, bin)
+            bit2bin(bit, bin_handle)
+        if opts.target_file is None:
+            target_file = os.path.join("board", opts.target + ".cfg")
+        else:
+            target_file = opts.target_file
         subprocess.check_call([
             "openocd",
-            "-f", os.path.join("board", opts.target + ".cfg"),
+            "-s", scripts_path,
+            "-f", target_file,
            "-c", "; ".join(prog),
         ])
     finally:
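The repeated {} to {{{}}} change above wraps each file path in Tcl braces before it is handed to OpenOCD, which keeps backslashes and spaces in Windows paths from being reinterpreted by OpenOCD's Tcl layer. A quick standalone check of what the format string actually emits (the path and flash offset below are made up for illustration):

path = r"C:\Program Files\artiq\binaries\top.bin"

old_cmd = "jtagspi_program {} 0x{:x}".format(path, 0xb00000)
new_cmd = "jtagspi_program {{{}}} 0x{:x}".format(path, 0xb00000)

print(old_cmd)  # jtagspi_program C:\Program Files\artiq\binaries\top.bin 0xb00000
print(new_cmd)  # jtagspi_program {C:\Program Files\artiq\binaries\top.bin} 0xb00000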

View File

@ -37,12 +37,18 @@ def get_argparser():
class MainWindow(QtWidgets.QMainWindow): class MainWindow(QtWidgets.QMainWindow):
def __init__(self, server): def __init__(self, server):
QtWidgets.QMainWindow.__init__(self) QtWidgets.QMainWindow.__init__(self)
icon = QtGui.QIcon(os.path.join(artiq_dir, "gui", "logo.svg")) icon = QtGui.QIcon(os.path.join(artiq_dir, "gui", "logo.svg"))
self.setWindowIcon(icon) self.setWindowIcon(icon)
self.setWindowTitle("ARTIQ - {}".format(server)) self.setWindowTitle("ARTIQ - {}".format(server))
qfm = QtGui.QFontMetrics(self.font())
self.resize(140*qfm.averageCharWidth(), 38*qfm.lineSpacing())
self.exit_request = asyncio.Event() self.exit_request = asyncio.Event()
def closeEvent(self, *args): def closeEvent(self, event):
event.ignore()
self.exit_request.set() self.exit_request.set()
def save_state(self): def save_state(self):
@ -127,37 +133,37 @@ def main():
sub_clients["explist_status"], sub_clients["explist_status"],
rpc_clients["schedule"], rpc_clients["schedule"],
rpc_clients["experiment_db"]) rpc_clients["experiment_db"])
smgr.register(d_explorer)
d_datasets = datasets.DatasetsDock(sub_clients["datasets"], d_datasets = datasets.DatasetsDock(sub_clients["datasets"],
rpc_clients["dataset_db"]) rpc_clients["dataset_db"])
smgr.register(d_datasets)
d_applets = applets.AppletsDock(main_window, sub_clients["datasets"]) d_applets = applets.AppletsDock(main_window, sub_clients["datasets"])
atexit_register_coroutine(d_applets.stop) atexit_register_coroutine(d_applets.stop)
smgr.register(d_applets) smgr.register(d_applets)
if os.name != "nt": d_ttl_dds = moninj.MonInj()
d_ttl_dds = moninj.MonInj() loop.run_until_complete(d_ttl_dds.start(args.server, args.port_notify))
loop.run_until_complete(d_ttl_dds.start(args.server, args.port_notify)) atexit_register_coroutine(d_ttl_dds.stop)
atexit_register_coroutine(d_ttl_dds.stop)
d_schedule = schedule.ScheduleDock( d_schedule = schedule.ScheduleDock(
status_bar, rpc_clients["schedule"], sub_clients["schedule"]) status_bar, rpc_clients["schedule"], sub_clients["schedule"])
smgr.register(d_schedule)
logmgr = log.LogDockManager(main_window, sub_clients["log"]) logmgr = log.LogDockManager(main_window, sub_clients["log"])
smgr.register(logmgr) smgr.register(logmgr)
# lay out docks # lay out docks
if os.name != "nt": right_docks = [
main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, d_ttl_dds.dds_dock) d_explorer, d_shortcuts,
main_window.tabifyDockWidget(d_ttl_dds.dds_dock, d_ttl_dds.ttl_dock) d_ttl_dds.ttl_dock, d_ttl_dds.dds_dock,
main_window.tabifyDockWidget(d_ttl_dds.ttl_dock, d_applets) d_datasets, d_applets
main_window.tabifyDockWidget(d_applets, d_datasets) ]
else: main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, right_docks[0])
main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, d_applets) for d1, d2 in zip(right_docks, right_docks[1:]):
main_window.tabifyDockWidget(d_applets, d_datasets) main_window.tabifyDockWidget(d1, d2)
main_window.tabifyDockWidget(d_datasets, d_shortcuts)
main_window.addDockWidget(QtCore.Qt.BottomDockWidgetArea, d_schedule) main_window.addDockWidget(QtCore.Qt.BottomDockWidgetArea, d_schedule)
main_window.addDockWidget(QtCore.Qt.LeftDockWidgetArea, d_explorer)
# load/initialize state # load/initialize state
if os.name == "nt": if os.name == "nt":
@ -172,7 +178,7 @@ def main():
# create first log dock if not already in state # create first log dock if not already in state
d_log0 = logmgr.first_log_dock() d_log0 = logmgr.first_log_dock()
if d_log0 is not None: if d_log0 is not None:
main_window.tabifyDockWidget(d_shortcuts, d_log0) main_window.tabifyDockWidget(d_schedule, d_log0)
# run # run
main_window.show() main_window.show()

View File

@@ -63,30 +63,30 @@ def main():
     dataset_db = DatasetDB(args.dataset_db)
     dataset_db.start()
     atexit_register_coroutine(dataset_db.stop)

+    worker_handlers = dict()
+
     if args.git:
         repo_backend = GitBackend(args.repository)
     else:
         repo_backend = FilesystemBackend(args.repository)
-    experiment_db = ExperimentDB(repo_backend, device_db.get_device_db)
+    experiment_db = ExperimentDB(repo_backend, worker_handlers)
     atexit.register(experiment_db.close)
-    experiment_db.scan_repository_async()

-    worker_handlers = {
+    scheduler = Scheduler(RIDCounter(), worker_handlers, experiment_db)
+    scheduler.start()
+    atexit_register_coroutine(scheduler.stop)
+
+    worker_handlers.update({
         "get_device_db": device_db.get_device_db,
         "get_device": device_db.get,
         "get_dataset": dataset_db.get,
-        "update_dataset": dataset_db.update
-    }
-    scheduler = Scheduler(RIDCounter(), worker_handlers, experiment_db)
-    worker_handlers.update({
+        "update_dataset": dataset_db.update,
         "scheduler_submit": scheduler.submit,
         "scheduler_delete": scheduler.delete,
         "scheduler_request_termination": scheduler.request_termination,
         "scheduler_get_status": scheduler.get_status
     })
-    scheduler.start()
-    atexit_register_coroutine(scheduler.stop)
+    experiment_db.scan_repository_async()

     bind = bind_address_from_args(args)
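The reordering above works because worker_handlers is now created first as an empty dict, handed by reference to ExperimentDB and Scheduler, and only filled in afterwards: both consumers keep seeing handlers registered later. A tiny standalone demonstration of that aliasing (names are hypothetical, not ARTIQ APIs):

# The dict is passed by reference, so consumers constructed early see
# handlers that are added later.
class Consumer:
    def __init__(self, handlers):
        self.handlers = handlers     # keeps a reference, not a copy

    def call(self, name):
        return self.handlers[name]()

handlers = dict()
consumer = Consumer(handlers)        # constructed before any handler exists
handlers["get_answer"] = lambda: 42  # registered afterwards
print(consumer.call("get_answer"))   # -> 42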

View File

@@ -5,7 +5,6 @@ import textwrap
 import sys
 import traceback
 import numpy as np  # Needed to use numpy in RPC call arguments on cmd line
-import readline  # This makes input() nicer
 import pprint

 from artiq.protocols.pc_rpc import AutoTarget, Client, RemoteError
@@ -87,6 +86,12 @@ def call_method(remote, method_name, args):

 def interactive(remote):
+    try:
+        import readline  # This makes input() nicer
+    except ImportError:
+        print("Warning: readline not available. "
+              "Install it to add line editing capabilities.")
+
     while True:
         try:
             cmd = input("({}) ".format(remote.get_selected_target()))

View File

@ -32,38 +32,38 @@ def flip32(data):
def bit2bin(bit, bin, flip=False): def bit2bin(bit, bin, flip=False):
bitfile = open(bit, "rb") with open(bit, "rb") as bitfile:
l, = struct.unpack(">H", bitfile.read(2))
if l != 9:
raise ValueError("Missing <0009> header, not a bit file")
l, = struct.unpack(">H", bitfile.read(2)) bitfile.read(l)
if l != 9: d = bitfile.read(*struct.unpack(">H", bitfile.read(2)))
raise ValueError("Missing <0009> header, not a bit file") if d != b"a":
raise ValueError("Missing <a> header, not a bit file")
bitfile.read(l) d = bitfile.read(*struct.unpack(">H", bitfile.read(2)))
d = bitfile.read(*struct.unpack(">H", bitfile.read(2))) print("Design name:", d)
if d != b"a":
raise ValueError("Missing <a> header, not a bit file")
d = bitfile.read(*struct.unpack(">H", bitfile.read(2))) while True:
print("Design name:", d) key = bitfile.read(1)
if not key:
while True: break
key = bitfile.read(1) if key in b"bcd":
if not key: d = bitfile.read(*struct.unpack(">H", bitfile.read(2)))
break name = {b"b": "Partname", b"c": "Date", b"d": "Time"}[key]
if key in b"bcd": print(name, d)
d = bitfile.read(*struct.unpack(">H", bitfile.read(2))) elif key == b"e":
name = {b"b": "Partname", b"c": "Date", b"d": "Time"}[key] l, = struct.unpack(">I", bitfile.read(4))
print(name, d) print("found binary data length:", l)
elif key == b"e": d = bitfile.read(l)
l, = struct.unpack(">I", bitfile.read(4)) if flip:
print("found binary data length:", l) d = flip32(d)
d = bitfile.read(l) with open(bin, "wb") as f:
if flip: f.write(d)
d = flip32(d) else:
open(bin, "wb").write(d) d = bitfile.read(*struct.unpack(">H", bitfile.read(2)))
else: print("Unexpected key: ", key, d)
d = bitfile.read(*struct.unpack(">H", bitfile.read(2)))
print("Unexpected key: ", key, d)
if __name__ == "__main__": if __name__ == "__main__":

View File

@@ -36,7 +36,7 @@ fmc_adapter_io = [
      Subsignal("wr_n", Pins("LPC:LA24_P")),
      Subsignal("rd_n", Pins("LPC:LA25_N")),
      Subsignal("rst", Pins("LPC:LA25_P")),
-     IOStandard("LVTTL")),
+     IOStandard("LVTTL"), Misc("DRIVE=24")),

     ("i2c_fmc", 0,
      Subsignal("scl", Pins("LPC:IIC_SCL")),

View File

@ -8,9 +8,8 @@ __all__ = ["fmc_adapter_io"]
ttl_pins = [ ttl_pins = [
"LA00_CC_P", "LA02_P", "LA00_CC_N", "LA02_N", "LA01_CC_P", "LA01_CC_N", "LA06_P", "LA06_N", "LA00_CC_P", "LA02_P", "LA00_CC_N", "LA02_N", "LA01_CC_P", "LA01_CC_N", "LA06_P", "LA06_N",
"LA05_P", "LA05_N", "LA10_P", "LA09_P", "LA10_N", "LA09_N", "LA13_P", "LA14_P", "LA13_N", "LA05_P", "LA05_N", "LA10_P", "LA09_P", "LA10_N", "LA09_N", "LA13_P", "LA14_P",
"LA14_N", "LA17_CC_P", "LA17_CC_N", "LA18_CC_P", "LA18_CC_N", "LA23_P", "LA23_N", "LA27_P", "LA13_N", "LA14_N", "LA17_CC_P", "LA17_CC_N"
"LA26_P", "LA27_N", "LA26_N"
] ]
@ -18,7 +17,8 @@ def get_fmc_adapter_io():
ttl = itertools.count() ttl = itertools.count()
dds = itertools.count() dds = itertools.count()
i2c_fmc = itertools.count() i2c_fmc = itertools.count()
clk_m2c = itertools.count() spi = itertools.count()
clkout = itertools.count()
r = [] r = []
for connector in "LPC", "HPC": for connector in "LPC", "HPC":
@ -43,22 +43,29 @@ def get_fmc_adapter_io():
Subsignal("wr_n", FPins("FMC:LA24_P")), Subsignal("wr_n", FPins("FMC:LA24_P")),
Subsignal("rd_n", FPins("FMC:LA25_N")), Subsignal("rd_n", FPins("FMC:LA25_N")),
Subsignal("rst", FPins("FMC:LA25_P")), Subsignal("rst", FPins("FMC:LA25_P")),
IOStandard("LVTTL")), IOStandard("LVTTL"), Misc("DRIVE=24")),
("i2c_fmc", next(i2c_fmc), ("i2c_fmc", next(i2c_fmc),
Subsignal("scl", FPins("FMC:IIC_SCL")), Subsignal("scl", FPins("FMC:IIC_SCL")),
Subsignal("sda", FPins("FMC:IIC_SDA")), Subsignal("sda", FPins("FMC:IIC_SDA")),
IOStandard("LVCMOS25")), IOStandard("LVCMOS25")),
("clk_m2c", next(clk_m2c), ("clkout", next(clkout), FPins("FMC:CLK1_M2C_P"),
Subsignal("p", FPins("FMC:CLK0_M2C_P")), IOStandard("LVTTL")),
Subsignal("n", FPins("FMC:CLK0_M2C_N")),
IOStandard("LVDS")),
("clk_m2c", next(clk_m2c), ("spi", next(spi),
Subsignal("p", FPins("FMC:CLK1_M2C_P")), Subsignal("clk", FPins("FMC:LA18_CC_P")),
Subsignal("n", FPins("FMC:CLK1_M2C_N")), Subsignal("mosi", FPins("FMC:LA18_CC_N")),
IOStandard("LVDS")), Subsignal("miso", FPins("FMC:LA23_P")),
Subsignal("cs_n", FPins("FMC:LA23_N")),
IOStandard("LVTTL")),
("spi", next(spi),
Subsignal("clk", FPins("FMC:LA27_P")),
Subsignal("mosi", FPins("FMC:LA26_P")),
Subsignal("miso", FPins("FMC:LA27_N")),
Subsignal("cs_n", FPins("FMC:LA26_N")),
IOStandard("LVTTL")),
] ]
return r return r

View File

@@ -6,7 +6,7 @@ from artiq.gateware.rtio.phy.wishbone import RT2WB

 class _AD9xxx(Module):
     def __init__(self, ftw_base, pads, nchannels, onehot=False, **kwargs):
-        self.submodules._ll = ClockDomainsRenamer("rio")(
+        self.submodules._ll = ClockDomainsRenamer("rio_phy")(
             ad9xxx.AD9xxx(pads, **kwargs))
         self.submodules._rt2wb = RT2WB(len(pads.a)+1, self._ll.bus)
         self.rtlink = self._rt2wb.rtlink

View File

@@ -6,7 +6,7 @@ from artiq.gateware.rtio.phy.wishbone import RT2WB

 class SPIMaster(Module):
     def __init__(self, pads, **kwargs):
-        self.submodules._ll = ClockDomainsRenamer("rio")(
+        self.submodules._ll = ClockDomainsRenamer("rio_phy")(
             SPIMasterWB(pads, **kwargs))
         self.submodules._rt2wb = RT2WB(2, self._ll.bus)
         self.rtlink = self._rt2wb.rtlink

View File

@@ -125,7 +125,7 @@ class SPIMachine(Module):
                     NextState("WAIT"),
                 )
             ).Else(
-                self.reg.shift.eq(1),
+                self.reg.shift.eq(~self.start),
                 NextState("SETUP"),
             )
         )

View File

@ -1,6 +1,7 @@
#!/usr/bin/env python3.5 #!/usr/bin/env python3.5
import argparse import argparse
import sys
from migen import * from migen import *
from migen.genlib.resetsync import AsyncResetSynchronizer from migen.genlib.resetsync import AsyncResetSynchronizer
@ -33,7 +34,7 @@ class _RTIOCRG(Module, AutoCSR):
# 10 MHz when using 125MHz input # 10 MHz when using 125MHz input
self.clock_domains.cd_ext_clkout = ClockDomain(reset_less=True) self.clock_domains.cd_ext_clkout = ClockDomain(reset_less=True)
ext_clkout = platform.request("user_sma_gpio_p") ext_clkout = platform.request("user_sma_gpio_p_33")
self.sync.ext_clkout += ext_clkout.eq(~ext_clkout) self.sync.ext_clkout += ext_clkout.eq(~ext_clkout)
@ -79,6 +80,16 @@ class _RTIOCRG(Module, AutoCSR):
] ]
# The default user SMA voltage on KC705 is 2.5V, and the Migen platform
# follows this default. But since the SMAs are on the same bank as the DDS,
# which is set to 3.3V by reprogramming the KC705 power ICs, we need to
# redefine them here.
_sma33_io = [
("user_sma_gpio_p_33", 0, Pins("Y23"), IOStandard("LVCMOS33")),
("user_sma_gpio_n_33", 0, Pins("Y24"), IOStandard("LVCMOS33")),
]
_ams101_dac = [ _ams101_dac = [
("ams101_dac", 0, ("ams101_dac", 0,
Subsignal("ldac", Pins("XADC:GPIO0")), Subsignal("ldac", Pins("XADC:GPIO0")),
@ -131,6 +142,7 @@ class _NIST_Ions(MiniSoC, AMPSoC):
self.platform.request("user_led", 0), self.platform.request("user_led", 0),
self.platform.request("user_led", 1))) self.platform.request("user_led", 1)))
self.platform.add_extension(_sma33_io)
self.platform.add_extension(_ams101_dac) self.platform.add_extension(_ams101_dac)
i2c = self.platform.request("i2c") i2c = self.platform.request("i2c")
@ -192,7 +204,7 @@ class NIST_QC1(_NIST_Ions):
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy)) rtio_channels.append(rtio.Channel.from_phy(phy))
phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n")) phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n_33"))
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512)) rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
phy = ttl_simple.Output(platform.request("user_led", 2)) phy = ttl_simple.Output(platform.request("user_led", 2))
@ -248,7 +260,7 @@ class NIST_CLOCK(_NIST_Ions):
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512)) rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n")) phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n_33"))
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512)) rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
@ -300,7 +312,7 @@ class NIST_CLOCK(_NIST_Ions):
class NIST_QC2(_NIST_Ions): class NIST_QC2(_NIST_Ions):
""" """
NIST QC2 hardware, as used in Quantum I and Quantum II, with new backplane NIST QC2 hardware, as used in Quantum I and Quantum II, with new backplane
and 24 DDS channels. and 24 DDS channels. Two backplanes are used.
""" """
def __init__(self, cpu_type="or1k", **kwargs): def __init__(self, cpu_type="or1k", **kwargs):
_NIST_Ions.__init__(self, cpu_type, **kwargs) _NIST_Ions.__init__(self, cpu_type, **kwargs)
@ -310,36 +322,52 @@ class NIST_QC2(_NIST_Ions):
rtio_channels = [] rtio_channels = []
clock_generators = [] clock_generators = []
for backplane_offset in 0, 28:
# TTL0-23 are In+Out capable
for i in range(24):
phy = ttl_serdes_7series.Inout_8X(
platform.request("ttl", backplane_offset+i))
self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
# TTL24-26 are output only
for i in range(24, 27):
phy = ttl_serdes_7series.Output_8X(
platform.request("ttl", backplane_offset+i))
self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy))
# TTL27 is for the clock generator
phy = ttl_simple.ClockGen(
platform.request("ttl", backplane_offset+27))
self.submodules += phy
clock_generators.append(rtio.Channel.from_phy(phy))
phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n")) # All TTL channels are In+Out capable
for i in range(40):
phy = ttl_serdes_7series.Inout_8X(
platform.request("ttl", i))
self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
# CLK0, CLK1 are for clock generators, on backplane SMP connectors
for i in range(2):
phy = ttl_simple.ClockGen(
platform.request("clkout", i))
self.submodules += phy
clock_generators.append(rtio.Channel.from_phy(phy))
# user SMA on KC705 board
phy = ttl_serdes_7series.Inout_8X(platform.request("user_sma_gpio_n_33"))
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512)) rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512))
phy = ttl_simple.Output(platform.request("user_led", 2)) phy = ttl_simple.Output(platform.request("user_led", 2))
self.submodules += phy self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy)) rtio_channels.append(rtio.Channel.from_phy(phy))
# AMS101 DAC on KC705 XADC header - optional
ams101_dac = self.platform.request("ams101_dac", 0)
phy = ttl_simple.Output(ams101_dac.ldac)
self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(phy))
self.config["RTIO_REGULAR_TTL_COUNT"] = len(rtio_channels) self.config["RTIO_REGULAR_TTL_COUNT"] = len(rtio_channels)
# add clock generators after RTIO_REGULAR_TTL_COUNT # add clock generators after RTIO_REGULAR_TTL_COUNT
rtio_channels += clock_generators rtio_channels += clock_generators
phy = spi.SPIMaster(ams101_dac)
self.submodules += phy
self.config["RTIO_FIRST_SPI_CHANNEL"] = len(rtio_channels)
rtio_channels.append(rtio.Channel.from_phy(
phy, ofifo_depth=4, ififo_depth=4))
for i in range(4):
phy = spi.SPIMaster(self.platform.request("spi", i))
self.submodules += phy
rtio_channels.append(rtio.Channel.from_phy(
phy, ofifo_depth=128, ififo_depth=128))
self.config["RTIO_FIRST_DDS_CHANNEL"] = len(rtio_channels) self.config["RTIO_FIRST_DDS_CHANNEL"] = len(rtio_channels)
self.config["RTIO_DDS_COUNT"] = 2 self.config["RTIO_DDS_COUNT"] = 2
self.config["DDS_CHANNELS_PER_BUS"] = 12 self.config["DDS_CHANNELS_PER_BUS"] = 12

View File

@@ -153,6 +153,7 @@ class NIST_QC1(BaseSoC, AMPSoC):
         platform = self.platform

+        platform.toolchain.map_opt += " -t 40"
         platform.toolchain.bitgen_opt += " -g compress"
         platform.toolchain.ise_commands += """
trce -v 12 -fastpaths -tsi {build_name}.tsi -o {build_name}.twr {build_name}.ncd {build_name}.pcf
@@ -176,7 +177,7 @@ trce -v 12 -fastpaths -tsi {build_name}.tsi -o {build_name}.twr {build_name}.ncd
             phy = ttl_serdes_spartan6.Inout_4X(platform.request("pmt", i),
                                                self.rtio_crg.rtiox4_stb)
             self.submodules += phy
-            rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=512,
+            rtio_channels.append(rtio.Channel.from_phy(phy, ififo_depth=64,
                                                        ofifo_depth=4))

         # the last TTL is used for ClockGen
@@ -191,7 +192,7 @@ trce -v 12 -fastpaths -tsi {build_name}.tsi -o {build_name}.twr {build_name}.ncd
             phy = ttl_simple.Output(platform.request("ttl", i))
             self.submodules += phy
-            rtio_channels.append(rtio.Channel.from_phy(phy, ofifo_depth=256))
+            rtio_channels.append(rtio.Channel.from_phy(phy, ofifo_depth=64))

         phy = ttl_simple.Output(platform.request("ext_led", 0))
         self.submodules += phy

View File

@ -3,6 +3,7 @@ import asyncio
import sys import sys
import shlex import shlex
from functools import partial from functools import partial
from itertools import count
from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5 import QtCore, QtGui, QtWidgets
@ -89,7 +90,10 @@ class _AppletDock(QDockWidgetCloseDetect):
def __init__(self, datasets_sub, uid, name, command): def __init__(self, datasets_sub, uid, name, command):
QDockWidgetCloseDetect.__init__(self, "Applet: " + name) QDockWidgetCloseDetect.__init__(self, "Applet: " + name)
self.setObjectName("applet" + str(uid)) self.setObjectName("applet" + str(uid))
self.setMinimumSize(QtCore.QSize(100, 100))
qfm = QtGui.QFontMetrics(self.font())
self.setMinimumSize(20*qfm.averageCharWidth(), 5*qfm.lineSpacing())
self.resize(40*qfm.averageCharWidth(), 10*qfm.lineSpacing())
self.datasets_sub = datasets_sub self.datasets_sub = datasets_sub
self.applet_name = name self.applet_name = name
@ -164,7 +168,6 @@ class _AppletDock(QDockWidgetCloseDetect):
self.starting_stopping = False self.starting_stopping = False
if delete_self: if delete_self:
self.setParent(None)
self.deleteLater() self.deleteLater()
async def restart(self): async def restart(self):
@@ -215,7 +218,7 @@ class AppletsDock(QtWidgets.QDockWidget):
         self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
         new_action = QtWidgets.QAction("New applet", self.table)
-        new_action.triggered.connect(self.new)
+        new_action.triggered.connect(lambda: self.new())
         self.table.addAction(new_action)
         templates_menu = QtWidgets.QMenu()
         for name, template in _templates:
@@ -284,8 +287,8 @@ class AppletsDock(QtWidgets.QDockWidget):
     def new(self, uid=None):
         if uid is None:
-            uid = next(iter(set(range(len(self.applet_uids) + 1))
-                            - self.applet_uids))
+            uid = next(i for i in count() if i not in self.applet_uids)
+        assert uid not in self.applet_uids, uid
         self.applet_uids.add(uid)

         row = self.table.rowCount()
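The uid allocation above now scans itertools.count for the first integer not currently in use, instead of building a set difference. A standalone check of the new scheme (the "used" set is a stand-in for self.applet_uids):

from itertools import count

used = {0, 1, 3}

# Allocation scheme from the change above: the smallest non-negative
# integer that is not currently in use.
uid = next(i for i in count() if i not in used)
assert uid == 2 and uid not in used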
@ -326,7 +329,6 @@ class AppletsDock(QtWidgets.QDockWidget):
self.applet_uids.remove(item.applet_uid) self.applet_uids.remove(item.applet_uid)
self.table.removeRow(row) self.table.removeRow(row)
async def stop(self): async def stop(self):
for row in range(self.table.rowCount()): for row in range(self.table.rowCount()):
dock = self.table.item(row, 0).applet_dock dock = self.table.item(row, 0).applet_dock

View File

@@ -47,8 +47,6 @@ class DatasetsDock(QtWidgets.QDockWidget):
         self.table = QtWidgets.QTreeView()
         self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
         self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection)
-        self.table.header().setSectionResizeMode(
-            QtWidgets.QHeaderView.ResizeToContents)
         grid.addWidget(self.table, 1, 0)

         self.table.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
@@ -79,3 +77,9 @@ class DatasetsDock(QtWidgets.QDockWidget):
         key = self.table_model.index_to_key(idx)
         if key is not None:
             asyncio.ensure_future(self.dataset_ctl.delete(key))
+
+    def save_state(self):
+        return bytes(self.table.header().saveState())
+
+    def restore_state(self, state):
+        self.table.header().restoreState(QtCore.QByteArray(state))

View File

@ -5,7 +5,6 @@ from PyQt5 import QtCore, QtGui, QtWidgets
from artiq.gui.tools import LayoutWidget, disable_scroll_wheel from artiq.gui.tools import LayoutWidget, disable_scroll_wheel
from artiq.gui.scanwidget import ScanWidget from artiq.gui.scanwidget import ScanWidget
from artiq.gui.scientific_spinbox import ScientificSpinBox
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -139,26 +138,26 @@ class _RangeScan(LayoutWidget):
scale = procdesc["scale"] scale = procdesc["scale"]
def apply_properties(spinbox): def apply_properties(widget):
spinbox.setDecimals(procdesc["ndecimals"]) widget.setDecimals(procdesc["ndecimals"])
if procdesc["global_min"] is not None: if procdesc["global_min"] is not None:
spinbox.setMinimum(procdesc["global_min"]/scale) widget.setMinimum(procdesc["global_min"]/scale)
else: else:
spinbox.setMinimum(float("-inf")) widget.setMinimum(float("-inf"))
if procdesc["global_max"] is not None: if procdesc["global_max"] is not None:
spinbox.setMaximum(procdesc["global_max"]/scale) widget.setMaximum(procdesc["global_max"]/scale)
else: else:
spinbox.setMaximum(float("inf")) widget.setMaximum(float("inf"))
if procdesc["global_step"] is not None: if procdesc["global_step"] is not None:
spinbox.setSingleStep(procdesc["global_step"]/scale) widget.setSingleStep(procdesc["global_step"]/scale)
if procdesc["unit"]: if procdesc["unit"]:
spinbox.setSuffix(" " + procdesc["unit"]) widget.setSuffix(" " + procdesc["unit"])
scanner = ScanWidget() scanner = ScanWidget()
disable_scroll_wheel(scanner) disable_scroll_wheel(scanner)
self.addWidget(scanner, 0, 0, -1, 1) self.addWidget(scanner, 0, 0, -1, 1)
start = ScientificSpinBox() start = QtWidgets.QDoubleSpinBox()
start.setStyleSheet("QDoubleSpinBox {color:blue}") start.setStyleSheet("QDoubleSpinBox {color:blue}")
start.setMinimumSize(110, 0) start.setMinimumSize(110, 0)
start.setSizePolicy(QtWidgets.QSizePolicy( start.setSizePolicy(QtWidgets.QSizePolicy(
@ -168,38 +167,47 @@ class _RangeScan(LayoutWidget):
npoints = QtWidgets.QSpinBox() npoints = QtWidgets.QSpinBox()
npoints.setMinimum(1) npoints.setMinimum(1)
npoints.setMaximum((1 << 31) - 1)
disable_scroll_wheel(npoints) disable_scroll_wheel(npoints)
self.addWidget(npoints, 1, 1) self.addWidget(npoints, 1, 1)
stop = ScientificSpinBox() stop = QtWidgets.QDoubleSpinBox()
stop.setStyleSheet("QDoubleSpinBox {color:red}") stop.setStyleSheet("QDoubleSpinBox {color:red}")
stop.setMinimumSize(110, 0) stop.setMinimumSize(110, 0)
disable_scroll_wheel(stop) disable_scroll_wheel(stop)
self.addWidget(stop, 2, 1) self.addWidget(stop, 2, 1)
apply_properties(start)
apply_properties(stop)
apply_properties(scanner)
def update_start(value): def update_start(value):
state["start"] = value*scale state["start"] = value*scale
scanner.setStart(value) scanner.setStart(value)
if start.value() != value:
start.setValue(value)
def update_stop(value): def update_stop(value):
state["stop"] = value*scale state["stop"] = value*scale
scanner.setStop(value) scanner.setStop(value)
if stop.value() != value:
stop.setValue(value)
def update_npoints(value): def update_npoints(value):
state["npoints"] = value state["npoints"] = value
scanner.setNum(value) scanner.setNum(value)
if npoints.value() != value:
npoints.setValue(value)
scanner.startChanged.connect(start.setValue) scanner.startChanged.connect(update_start)
scanner.numChanged.connect(npoints.setValue) scanner.numChanged.connect(update_npoints)
scanner.stopChanged.connect(stop.setValue) scanner.stopChanged.connect(update_stop)
start.valueChanged.connect(update_start) start.valueChanged.connect(update_start)
npoints.valueChanged.connect(update_npoints) npoints.valueChanged.connect(update_npoints)
stop.valueChanged.connect(update_stop) stop.valueChanged.connect(update_stop)
scanner.setStart(state["start"]/scale) scanner.setStart(state["start"]/scale)
scanner.setNum(state["npoints"]) scanner.setNum(state["npoints"])
scanner.setStop(state["stop"]/scale) scanner.setStop(state["stop"]/scale)
apply_properties(start)
apply_properties(stop)
class _ExplicitScan(LayoutWidget): class _ExplicitScan(LayoutWidget):
@@ -210,14 +218,14 @@ class _ExplicitScan(LayoutWidget):
         self.addWidget(QtWidgets.QLabel("Sequence:"), 0, 0)
         self.addWidget(self.value, 0, 1)

-        float_regexp = "[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?"
+        float_regexp = r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)"
         regexp = "(float)?( +float)* *".replace("float", float_regexp)
-        self.value.setValidator(QtGui.QRegExpValidator(QtCore.QRegExp(regexp),
-                                                       self.value))
+        self.value.setValidator(QtGui.QRegExpValidator(QtCore.QRegExp(regexp)))
         self.value.setText(" ".join([str(x) for x in state["sequence"]]))

         def update(text):
-            state["sequence"] = [float(x) for x in text.split()]
+            if self.value.hasAcceptableInput():
+                state["sequence"] = [float(x) for x in text.split()]

         self.value.textEdited.connect(update)
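The new raw-string pattern above accepts plain integers, decimals with either side of the dot omitted, and exponents. A quick standalone check with Python's re module (QRegExp semantics for this simple pattern should agree, though that is an assumption):

import re

float_regexp = r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)"
sequence_regexp = "(float)?( +float)* *".replace("float", float_regexp)

for text in ["1 2.5 .5 -3e-6", "1.", "+2E3  4"]:
    assert re.fullmatch(sequence_regexp, text), text

# The old pattern was not a raw string and rejected forms like "1."
# (digits followed by a bare dot).
assert not re.fullmatch(r"[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?", "1.")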

View File

@@ -113,6 +113,7 @@ class _ArgumentEditor(QtWidgets.QTreeWidget):
             except:
                 logger.error("Could not recompute argument '%s' of '%s'",
                              name, self.expurl, exc_info=True)
+                return
             argument = self.manager.get_submission_arguments(self.expurl)[name]
             procdesc = arginfo[name][0]
@@ -301,6 +302,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
             except:
                 logger.error("Could not recompute arguments of '%s'",
                              self.expurl, exc_info=True)
+                return
             self.manager.initialize_submission_arguments(self.expurl, arginfo)
             self.argeditor.deleteLater()

View File

@@ -61,7 +61,7 @@ class _OpenFileDialog(QtWidgets.QDialog):
         except:
             logger.error("Failed to list directory '%s'",
                          self.explorer.current_directory, exc_info=True)
-            self.explorer.current_directory = ""
+            return
         for name in sorted(contents, key=lambda x: (x[-1] not in "\\/", x)):
             if name[-1] in "\\/":
                 icon = QtWidgets.QStyle.SP_DirIcon
@@ -296,3 +296,11 @@ class ExplorerDock(QtWidgets.QDockWidget):

     def update_cur_rev(self, cur_rev):
         self.revision.setText(cur_rev)
+
+    def save_state(self):
+        return {
+            "current_directory": self.current_directory
+        }
+
+    def restore_state(self, state):
+        self.current_directory = state["current_directory"]


@@ -10,20 +10,24 @@ from artiq.gui.tools import (LayoutWidget, log_level_to_name,
                              QDockWidgetCloseDetect)


-def _make_wrappable(row, width=30):
-    level, source, time, msg = row
-    msg = re.sub("(\\S{{{}}})".format(width), "\\1\u200b", msg)
-    return [level, source, time, msg]
+class ModelItem:
+    def __init__(self, parent, row):
+        self.parent = parent
+        self.row = row
+        self.children_by_row = []


-class Model(QtCore.QAbstractTableModel):
+class Model(QtCore.QAbstractItemModel):
     def __init__(self, init):
         QtCore.QAbstractTableModel.__init__(self)
         self.headers = ["Source", "Message"]
+        self.children_by_row = []

-        self.entries = list(map(_make_wrappable, init))
+        self.entries = []
         self.pending_entries = []
+        for entry in init:
+            self.append(entry)
         self.depth = 1000
         timer = QtCore.QTimer(self)
         timer.timeout.connect(self.timer_tick)
@@ -44,7 +48,11 @@ class Model(QtCore.QAbstractTableModel):
         return None

     def rowCount(self, parent):
-        return len(self.entries)
+        if parent.isValid():
+            item = parent.internalPointer()
+            return len(item.children_by_row)
+        else:
+            return len(self.entries)

     def columnCount(self, parent):
         return len(self.headers)
@@ -53,15 +61,9 @@ class Model(QtCore.QAbstractTableModel):
         pass

     def append(self, v):
-        self.pending_entries.append(_make_wrappable(v))
-
-    def insertRows(self, position, rows=1, index=QtCore.QModelIndex()):
-        self.beginInsertRows(QtCore.QModelIndex(), position, position+rows-1)
-        self.endInsertRows()
-
-    def removeRows(self, position, rows=1, index=QtCore.QModelIndex()):
-        self.beginRemoveRows(QtCore.QModelIndex(), position, position+rows-1)
-        self.endRemoveRows()
+        severity, source, timestamp, message = v
+        self.pending_entries.append((severity, source, timestamp,
+                                     message.splitlines()))

     def timer_tick(self):
         if not self.pending_entries:
@@ -69,45 +71,86 @@ class Model(QtCore.QAbstractTableModel):
         nrows = len(self.entries)
         records = self.pending_entries
         self.pending_entries = []
+        self.beginInsertRows(QtCore.QModelIndex(), nrows, nrows+len(records)-1)
         self.entries.extend(records)
-        self.insertRows(nrows, len(records))
+        for rec in records:
+            item = ModelItem(self, len(self.children_by_row))
+            self.children_by_row.append(item)
+            for i in range(len(rec[3])-1):
+                item.children_by_row.append(ModelItem(item, i))
+        self.endInsertRows()

         if len(self.entries) > self.depth:
             start = len(self.entries) - self.depth
+            self.beginRemoveRows(QtCore.QModelIndex(), 0, start-1)
             self.entries = self.entries[start:]
-            self.removeRows(0, start)
+            self.children_by_row = self.children_by_row[start:]
+            for child in self.children_by_row:
+                child.row -= start
+            self.endRemoveRows()
+
+    def index(self, row, column, parent):
+        if parent.isValid():
+            parent_item = parent.internalPointer()
+            return self.createIndex(row, column,
+                                    parent_item.children_by_row[row])
+        else:
+            return self.createIndex(row, column, self.children_by_row[row])
+
+    def parent(self, index):
+        if index.isValid():
+            parent = index.internalPointer().parent
+            if parent is self:
+                return QtCore.QModelIndex()
+            else:
+                return self.createIndex(parent.row, 0, parent)
+        else:
+            return QtCore.QModelIndex()

     def data(self, index, role):
-        if index.isValid():
-            if (role == QtCore.Qt.FontRole
-                    and index.column() == 1):
-                return self.fixed_font
-            elif role == QtCore.Qt.TextAlignmentRole:
-                return QtCore.Qt.AlignLeft | QtCore.Qt.AlignTop
-            elif role == QtCore.Qt.BackgroundRole:
-                level = self.entries[index.row()][0]
-                if level >= logging.ERROR:
-                    return self.error_bg
-                elif level >= logging.WARNING:
-                    return self.warning_bg
-                else:
-                    return self.white
-            elif role == QtCore.Qt.ForegroundRole:
-                level = self.entries[index.row()][0]
-                if level <= logging.DEBUG:
-                    return self.debug_fg
-                else:
-                    return self.black
-            elif role == QtCore.Qt.DisplayRole:
-                v = self.entries[index.row()]
-                column = index.column()
+        if not index.isValid():
+            return
+
+        item = index.internalPointer()
+        if item.parent is self:
+            msgnum = item.row
+        else:
+            msgnum = item.parent.row
+
+        if role == QtCore.Qt.FontRole and index.column() == 1:
+            return self.fixed_font
+        elif role == QtCore.Qt.BackgroundRole:
+            level = self.entries[msgnum][0]
+            if level >= logging.ERROR:
+                return self.error_bg
+            elif level >= logging.WARNING:
+                return self.warning_bg
+            else:
+                return self.white
+        elif role == QtCore.Qt.ForegroundRole:
+            level = self.entries[msgnum][0]
+            if level <= logging.DEBUG:
+                return self.debug_fg
+            else:
+                return self.black
+        elif role == QtCore.Qt.DisplayRole:
+            v = self.entries[msgnum]
+            column = index.column()
+            if item.parent is self:
                 if column == 0:
                     return v[1]
                 else:
-                    return v[3]
-            elif role == QtCore.Qt.ToolTipRole:
-                v = self.entries[index.row()]
-                return (log_level_to_name(v[0]) + ", " +
-                        time.strftime("%m/%d %H:%M:%S", time.localtime(v[2])))
+                    return v[3][0]
+            else:
+                if column == 0:
+                    return ""
+                else:
+                    return v[3][item.row+1]
+        elif role == QtCore.Qt.ToolTipRole:
+            v = self.entries[msgnum]
+            return (log_level_to_name(v[0]) + ", " +
+                    time.strftime("%m/%d %H:%M:%S", time.localtime(v[2])))
 class _LogFilterProxyModel(QtCore.QSortFilterProxyModel):
@@ -118,14 +161,19 @@ class _LogFilterProxyModel(QtCore.QSortFilterProxyModel):
     def filterAcceptsRow(self, sourceRow, sourceParent):
         model = self.sourceModel()

+        if sourceParent.isValid():
+            parent_item = sourceParent.internalPointer()
+            msgnum = parent_item.row
+        else:
+            msgnum = sourceRow
+
-        accepted_level = model.entries[sourceRow][0] >= self.min_level
+        accepted_level = model.entries[msgnum][0] >= self.min_level

         if self.freetext:
-            data_source = model.entries[sourceRow][1]
-            data_message = model.entries[sourceRow][3]
+            data_source = model.entries[msgnum][1]
+            data_message = model.entries[msgnum][3]
             accepted_freetext = (self.freetext in data_source
-                                 or self.freetext in data_message)
+                                 or any(self.freetext in m for m in data_message))
         else:
             accepted_freetext = True
@@ -176,26 +224,30 @@ class _LogDock(QDockWidgetCloseDetect):
         grid.addWidget(newdock, 0, 4)
         grid.layout.setColumnStretch(2, 1)

-        self.log = QtWidgets.QTableView()
+        self.log = QtWidgets.QTreeView()
         self.log.setSelectionMode(QtWidgets.QAbstractItemView.NoSelection)
-        self.log.horizontalHeader().setSectionResizeMode(
-            QtWidgets.QHeaderView.ResizeToContents)
-        self.log.horizontalHeader().setStretchLastSection(True)
-        self.log.verticalHeader().setSectionResizeMode(
-            QtWidgets.QHeaderView.ResizeToContents)
-        self.log.verticalHeader().hide()
         self.log.setHorizontalScrollMode(
             QtWidgets.QAbstractItemView.ScrollPerPixel)
         self.log.setVerticalScrollMode(
             QtWidgets.QAbstractItemView.ScrollPerPixel)
-        self.log.setShowGrid(False)
-        self.log.setTextElideMode(QtCore.Qt.ElideNone)
+        self.log.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
         grid.addWidget(self.log, 1, 0, colspan=5)
         self.scroll_at_bottom = False
         self.scroll_value = 0

+        # If Qt worked correctly, this would be nice to have. Alas, resizeSections
+        # is broken when the horizontal scrollbar is enabled.
+        # self.log.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
+        # sizeheader_action = QtWidgets.QAction("Resize header", self.log)
+        # sizeheader_action.triggered.connect(
+        #     lambda: self.log.header().resizeSections(QtWidgets.QHeaderView.ResizeToContents))
+        # self.log.addAction(sizeheader_action)
+
         log_sub.add_setmodel_callback(self.set_model)
+        cw = QtGui.QFontMetrics(self.font()).averageCharWidth()
+        self.log.header().resizeSection(0, 26*cw)
     def filter_level_changed(self):
         if not hasattr(self, "table_model_filter"):
             return
@@ -219,21 +271,13 @@ class _LogDock(QDockWidgetCloseDetect):
         if self.scroll_at_bottom:
             self.log.scrollToBottom()

-        # HACK:
-        # If we don't do this, after we first add some rows, the "Time"
-        # column gets undersized and the text in it gets wrapped.
-        # We can call self.log.resizeColumnsToContents(), which fixes
-        # that problem, but now the message column is too large and
-        # a horizontal scrollbar appears.
-        # This is almost certainly a Qt layout bug.
-        self.log.horizontalHeader().reset()
-
     # HACK:
     # Qt intermittently likes to scroll back to the top when rows are removed.
     # Work around this by restoring the scrollbar to the previously memorized
     # position, after the removal.
     # Note that this works because _LogModel always does the insertion right
     # before the removal.
+    # TODO: check if this is still required after moving to QTreeView
     def rows_removed(self):
         if self.scroll_at_bottom:
             self.log.scrollToBottom()
@@ -257,7 +301,8 @@ class _LogDock(QDockWidgetCloseDetect):
     def save_state(self):
         return {
             "min_level_idx": self.filter_level.currentIndex(),
-            "freetext_filter": self.filter_freetext.text()
+            "freetext_filter": self.filter_freetext.text(),
+            "header": bytes(self.log.header().saveState())
         }

     def restore_state(self, state):
@@ -279,6 +324,13 @@ class _LogDock(QDockWidgetCloseDetect):
         # manually here, unlike for the combobox.
         self.filter_freetext_changed()

+        try:
+            header = state["header"]
+        except KeyError:
+            pass
+        else:
+            self.log.header().restoreState(QtCore.QByteArray(header))


 class LogDockManager:
     def __init__(self, main_window, log_sub):
@@ -304,7 +356,6 @@ class LogDockManager:
     def on_dock_closed(self, name):
         dock = self.docks[name]
-        dock.setParent(None)
         dock.deleteLater()
         del self.docks[name]
         self.update_closable()
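A minimal hedged sketch of the header-state persistence pattern used here (PyQt5; the function and dictionary names are illustrative, not part of the diff):

from PyQt5 import QtCore, QtWidgets

def save_view_state(view: QtWidgets.QTreeView) -> dict:
    # QHeaderView.saveState() returns a QByteArray; store plain bytes so it
    # can round-trip through PYON/JSON-style state files.
    return {"header": bytes(view.header().saveState())}

def restore_view_state(view: QtWidgets.QTreeView, state: dict) -> None:
    try:
        header = state["header"]
    except KeyError:
        return  # older state files may not have the key
    view.header().restoreState(QtCore.QByteArray(header))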


@ -55,8 +55,9 @@ class DictSyncModel(QtCore.QAbstractTableModel):
def __init__(self, headers, init): def __init__(self, headers, init):
self.headers = headers self.headers = headers
self.backing_store = init self.backing_store = init
self.row_to_key = sorted(self.backing_store.keys(), self.row_to_key = sorted(
key=lambda k: self.sort_key(k, self.backing_store[k])) self.backing_store.keys(),
key=lambda k: self.sort_key(k, self.backing_store[k]))
QtCore.QAbstractTableModel.__init__(self) QtCore.QAbstractTableModel.__init__(self)
def rowCount(self, parent): def rowCount(self, parent):
@ -73,8 +74,8 @@ class DictSyncModel(QtCore.QAbstractTableModel):
return self.convert(k, self.backing_store[k], index.column()) return self.convert(k, self.backing_store[k], index.column())
def headerData(self, col, orientation, role): def headerData(self, col, orientation, role):
if (orientation == QtCore.Qt.Horizontal if (orientation == QtCore.Qt.Horizontal and
and role == QtCore.Qt.DisplayRole): role == QtCore.Qt.DisplayRole):
return self.headers[col] return self.headers[col]
return None return None
@ -84,8 +85,8 @@ class DictSyncModel(QtCore.QAbstractTableModel):
while lo < hi: while lo < hi:
mid = (lo + hi)//2 mid = (lo + hi)//2
if (self.sort_key(self.row_to_key[mid], if (self.sort_key(self.row_to_key[mid],
self.backing_store[self.row_to_key[mid]]) self.backing_store[self.row_to_key[mid]]) <
< self.sort_key(k, v)): self.sort_key(k, v)):
lo = mid + 1 lo = mid + 1
else: else:
hi = mid hi = mid
@ -152,8 +153,8 @@ class ListSyncModel(QtCore.QAbstractTableModel):
index.column()) index.column())
def headerData(self, col, orientation, role): def headerData(self, col, orientation, role):
if (orientation == QtCore.Qt.Horizontal if (orientation == QtCore.Qt.Horizontal and
and role == QtCore.Qt.DisplayRole): role == QtCore.Qt.DisplayRole):
return self.headers[col] return self.headers[col]
return None return None
@ -204,8 +205,8 @@ class _DictSyncTreeSepItem:
self.is_node = False self.is_node = False
def __repr__(self): def __repr__(self):
return ("<DictSyncTreeSepItem {}, row={}, nchildren={}>" return ("<DictSyncTreeSepItem {}, row={}, nchildren={}>".
.format(self.name, self.row, len(self.children_by_row))) format(self.name, self.row, len(self.children_by_row)))
def _bisect_item(a, name): def _bisect_item(a, name):
@ -246,19 +247,30 @@ class DictSyncTreeSepModel(QtCore.QAbstractItemModel):
return len(self.headers) return len(self.headers)
def headerData(self, col, orientation, role): def headerData(self, col, orientation, role):
if (orientation == QtCore.Qt.Horizontal if (orientation == QtCore.Qt.Horizontal and
and role == QtCore.Qt.DisplayRole): role == QtCore.Qt.DisplayRole):
return self.headers[col] return self.headers[col]
return None return None
def index(self, row, column, parent): def index(self, row, column, parent):
if column >= len(self.headers):
return QtCore.QModelIndex()
if parent.isValid(): if parent.isValid():
parent_item = parent.internalPointer() parent_item = parent.internalPointer()
return self.createIndex(row, column, try:
parent_item.children_by_row[row]) child = parent_item.children_by_row[row]
except IndexError:
# This can happen when the last row is selected
# and then deleted; Qt will attempt to select
# the non-existent next one.
return QtCore.QModelIndex()
return self.createIndex(row, column, child)
else: else:
return self.createIndex(row, column, try:
self.children_by_row[row]) child = self.children_by_row[row]
except IndexError:
return QtCore.QModelIndex()
return self.createIndex(row, column, child)
def _index_item(self, item): def _index_item(self, item):
if item is self: if item is self:
@ -290,7 +302,7 @@ class DictSyncTreeSepModel(QtCore.QAbstractItemModel):
next_item.row += 1 next_item.row += 1
name_dict[name] = item name_dict[name] = item
self.endInsertRows() self.endInsertRows()
return item return item
def __setitem__(self, k, v): def __setitem__(self, k, v):


@ -1,8 +1,8 @@
import asyncio import asyncio
import threading
import logging import logging
import socket import socket
import struct import struct
from operator import itemgetter
from PyQt5 import QtCore, QtWidgets, QtGui from PyQt5 import QtCore, QtWidgets, QtGui
@ -130,6 +130,9 @@ class _TTLWidget(_MoninjWidget):
else: else:
self._expctl_action.setChecked(True) self._expctl_action.setChecked(True)
def sort_key(self):
return self.channel
class _DDSWidget(_MoninjWidget): class _DDSWidget(_MoninjWidget):
def __init__(self, bus_channel, channel, sysclk, title): def __init__(self, bus_channel, channel, sysclk, title):
@ -162,6 +165,9 @@ class _DDSWidget(_MoninjWidget):
self._value.setText("<font size=\"6\">{:.7f} MHz</font>" self._value.setText("<font size=\"6\">{:.7f} MHz</font>"
.format(float(frequency)/1e6)) .format(float(frequency)/1e6))
def sort_key(self):
return (self.bus_channel, self.channel)
class _DeviceManager: class _DeviceManager:
def __init__(self, send_to_device, init): def __init__(self, send_to_device, init):
@ -247,7 +253,7 @@ class _MonInjDock(QtWidgets.QDockWidget):
grid_widget = QtWidgets.QWidget() grid_widget = QtWidgets.QWidget()
grid_widget.setLayout(grid) grid_widget.setLayout(grid)
for _, w in sorted(widgets, key=itemgetter(0)): for _, w in sorted(widgets, key=lambda i: i[1].sort_key()):
grid.addWidget(w) grid.addWidget(w)
scroll_area.setWidgetResizable(True) scroll_area.setWidgetResizable(True)
@@ -261,34 +267,49 @@ class MonInj(TaskObject):
         self.subscriber = Subscriber("devices", self.init_devices)
         self.dm = None
-        self.transport = None
+        self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+        self.socket.bind(("", 0))
+        # Never ceasing to disappoint, asyncio has an issue about UDP
+        # not being supported on Windows (ProactorEventLoop) open since 2014.
+        self.loop = asyncio.get_event_loop()
+        self.thread = threading.Thread(target=self.receiver_thread,
+                                       daemon=True)
+        self.thread.start()

     async def start(self, server, port):
-        loop = asyncio.get_event_loop()
-        await loop.create_datagram_endpoint(lambda: self,
-                                            family=socket.AF_INET)
+        await self.subscriber.connect(server, port)
         try:
-            await self.subscriber.connect(server, port)
-            try:
-                TaskObject.start(self)
-            except:
-                await self.subscriber.close()
-                raise
+            TaskObject.start(self)
         except:
-            self.transport.close()
+            await self.subscriber.close()
             raise

     async def stop(self):
         await TaskObject.stop(self)
         await self.subscriber.close()
-        if self.transport is not None:
-            self.transport.close()
-            self.transport = None
+        try:
+            # This is required to make recvfrom terminate in the thread.
+            # On Linux, this raises "OSError: Transport endpoint is not
+            # connected", but still has the intended effect.
+            self.socket.shutdown(socket.SHUT_RDWR)
+        except OSError:
+            pass
+        self.socket.close()
+        self.thread.join()

-    def connection_made(self, transport):
-        self.transport = transport
+    def receiver_thread(self):
+        while True:
+            try:
+                data, addr = self.socket.recvfrom(2048)
+            except OSError:
+                # Windows does this when the socket is terminated
+                break
+            if addr is None:
+                # Linux does this when the socket is terminated
+                break
+            self.loop.call_soon_threadsafe(self.datagram_received, data)

-    def datagram_received(self, data, addr):
+    def datagram_received(self, data):
         if self.dm is None:
             logger.debug("received datagram, but device manager "
                          "is not present yet")
@@ -318,12 +339,6 @@ class MonInj(TaskObject):
         except:
             logger.warning("failed to process datagram", exc_info=True)

-    def error_received(self, exc):
-        logger.warning("datagram endpoint error")
-
-    def connection_lost(self, exc):
-        self.transport = None
-
     def send_to_device(self, data):
         if self.dm is None:
             logger.debug("cannot sent to device yet, no device manager")
@@ -331,11 +346,13 @@ class MonInj(TaskObject):
         ca = self.dm.get_core_addr()
         logger.debug("core device address: %s", ca)
         if ca is None:
-            logger.warning("could not find core device address")
-        elif self.transport is None:
-            logger.warning("datagram endpoint not available")
+            logger.error("could not find core device address")
         else:
-            self.transport.sendto(data, (ca, 3250))
+            try:
+                self.socket.sendto(data, (ca, 3250))
+            except:
+                logger.debug("could not send to device",
+                             exc_info=True)

     async def _do(self):
         while True:
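A generic hedged sketch (class name, callback name and the 2048-byte buffer are assumptions) of the pattern above: a blocking recvfrom loop in a daemon thread that hands datagrams back to the asyncio event loop:

import asyncio
import socket
import threading

class UdpReceiver:
    def __init__(self, on_datagram):
        self.on_datagram = on_datagram
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(("", 0))  # any free port
        self.loop = asyncio.get_event_loop()
        self.thread = threading.Thread(target=self._receive, daemon=True)
        self.thread.start()

    def _receive(self):
        while True:
            try:
                data, addr = self.sock.recvfrom(2048)
            except OSError:
                break  # socket was shut down
            # Hand the datagram to the event loop thread.
            self.loop.call_soon_threadsafe(self.on_datagram, data)

    def close(self):
        try:
            self.sock.shutdown(socket.SHUT_RDWR)  # unblocks recvfrom
        except OSError:
            pass
        self.sock.close()
        self.thread.join()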


@ -14,11 +14,12 @@ class ScanWidget(QtWidgets.QWidget):
stopChanged = QtCore.pyqtSignal(float) stopChanged = QtCore.pyqtSignal(float)
numChanged = QtCore.pyqtSignal(int) numChanged = QtCore.pyqtSignal(int)
def __init__(self, zoomFactor=1.05, zoomMargin=.1, dynamicRange=1e9): def __init__(self):
QtWidgets.QWidget.__init__(self) QtWidgets.QWidget.__init__(self)
self.zoomMargin = zoomMargin self.zoomMargin = .1
self.dynamicRange = dynamicRange self.zoomFactor = 1.05
self.zoomFactor = zoomFactor self.dynamicRange = 1e9
self.suffix = ""
self.ticker = Ticker() self.ticker = Ticker()
@ -40,6 +41,7 @@ class ScanWidget(QtWidgets.QWidget):
qfm.lineSpacing()) qfm.lineSpacing())
self._start, self._stop, self._num = None, None, None self._start, self._stop, self._num = None, None, None
self._min, self._max = float("-inf"), float("inf")
self._axisView = None self._axisView = None
self._offset, self._drag, self._rubber = None, None, None self._offset, self._drag, self._rubber = None, None, None
@ -68,7 +70,15 @@ class ScanWidget(QtWidgets.QWidget):
left = self.width()/2 - center*scale left = self.width()/2 - center*scale
self._setView(left, scale) self._setView(left, scale)
def _clamp(self, v):
if v is None:
return None
v = max(self._min, v)
v = min(self._max, v)
return v
def setStart(self, val): def setStart(self, val):
val = self._clamp(val)
if self._start == val: if self._start == val:
return return
self._start = val self._start = val
@ -76,6 +86,7 @@ class ScanWidget(QtWidgets.QWidget):
self.startChanged.emit(val) self.startChanged.emit(val)
def setStop(self, val): def setStop(self, val):
val = self._clamp(val)
if self._stop == val: if self._stop == val:
return return
self._stop = val self._stop = val
@ -89,6 +100,31 @@ class ScanWidget(QtWidgets.QWidget):
self.update() self.update()
self.numChanged.emit(val) self.numChanged.emit(val)
def setMinimum(self, v):
self._min = v
self.setStart(self._start)
self.setStop(self._stop)
def setMaximum(self, v):
self._max = v
self.setStart(self._start)
self.setStop(self._stop)
def setDecimals(self, n):
# TODO
# the axis should always use compressed notation is useful
# do not:
# self.ticker.precision = n
pass
def setSingleStep(self, v):
# TODO
# use this (and maybe decimals) to snap to "nice" values when dragging
pass
def setSuffix(self, v):
self.suffix = v
def viewRange(self): def viewRange(self):
center = (self._stop + self._start)/2 center = (self._stop + self._start)/2
scale = self.width()*(1 - 2*self.zoomMargin) scale = self.width()*(1 - 2*self.zoomMargin)
@ -207,8 +243,11 @@ class ScanWidget(QtWidgets.QWidget):
ticks, prefix, labels = self.ticker(self._pixelToAxis(0), ticks, prefix, labels = self.ticker(self._pixelToAxis(0),
self._pixelToAxis(self.width())) self._pixelToAxis(self.width()))
painter.drawText(0, 0, prefix) rect = QtCore.QRect(0, 0, self.width(), lineSpacing)
painter.translate(0, lineSpacing) painter.drawText(rect, QtCore.Qt.AlignLeft, prefix)
painter.drawText(rect, QtCore.Qt.AlignRight, self.suffix)
painter.translate(0, lineSpacing + ascent)
for t, l in zip(ticks, labels): for t, l in zip(ticks, labels):
t = self._axisToPixel(t) t = self._axisToPixel(t)
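A tiny hedged sketch (pure Python, function name invented) of the clamping behaviour introduced above, where unset values pass through and everything else is bounded by the current minimum and maximum:

def clamp(v, lo=float("-inf"), hi=float("inf")):
    # None is passed through untouched, mirroring the unset-value case above.
    if v is None:
        return None
    return min(max(v, lo), hi)

print(clamp(5.0, lo=0.0, hi=1.0))   # 1.0
print(clamp(-3.0, lo=0.0, hi=1.0))  # 0.0
print(clamp(None))                  # None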


@ -2,7 +2,7 @@ import asyncio
import time import time
from functools import partial from functools import partial
from PyQt5 import QtCore, QtWidgets from PyQt5 import QtCore, QtWidgets, QtGui
from artiq.gui.models import DictSyncModel from artiq.gui.models import DictSyncModel
from artiq.tools import elide from artiq.tools import elide
@ -67,8 +67,6 @@ class ScheduleDock(QtWidgets.QDockWidget):
self.table = QtWidgets.QTableView() self.table = QtWidgets.QTableView()
self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows) self.table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection) self.table.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection)
self.table.horizontalHeader().setSectionResizeMode(
QtWidgets.QHeaderView.ResizeToContents)
self.table.verticalHeader().setSectionResizeMode( self.table.verticalHeader().setSectionResizeMode(
QtWidgets.QHeaderView.ResizeToContents) QtWidgets.QHeaderView.ResizeToContents)
self.table.verticalHeader().hide() self.table.verticalHeader().hide()
@ -89,17 +87,20 @@ class ScheduleDock(QtWidgets.QDockWidget):
self.table_model = Model(dict()) self.table_model = Model(dict())
schedule_sub.add_setmodel_callback(self.set_model) schedule_sub.add_setmodel_callback(self.set_model)
def rows_inserted_after(self): cw = QtGui.QFontMetrics(self.font()).averageCharWidth()
# HACK: h = self.table.horizontalHeader()
# workaround the usual Qt layout bug when the first row is inserted h.resizeSection(0, 7*cw)
# (columns are undersized if an experiment with a due date is scheduled h.resizeSection(1, 12*cw)
# and the schedule was empty) h.resizeSection(2, 16*cw)
self.table.horizontalHeader().reset() h.resizeSection(3, 6*cw)
h.resizeSection(4, 16*cw)
h.resizeSection(5, 30*cw)
h.resizeSection(6, 20*cw)
h.resizeSection(7, 20*cw)
def set_model(self, model): def set_model(self, model):
self.table_model = model self.table_model = model
self.table.setModel(self.table_model) self.table.setModel(self.table_model)
self.table_model.rowsInserted.connect(self.rows_inserted_after)
async def delete(self, rid, graceful): async def delete(self, rid, graceful):
if graceful: if graceful:
@ -118,3 +119,9 @@ class ScheduleDock(QtWidgets.QDockWidget):
msg = "Deleted RID {}".format(rid) msg = "Deleted RID {}".format(rid)
self.status_bar.showMessage(msg) self.status_bar.showMessage(msg)
asyncio.ensure_future(self.delete(rid, graceful)) asyncio.ensure_future(self.delete(rid, graceful))
def save_state(self):
return bytes(self.table.horizontalHeader().saveState())
def restore_state(self, state):
self.table.horizontalHeader().restoreState(QtCore.QByteArray(state))


@ -1,71 +0,0 @@
import re
from PyQt5 import QtGui, QtWidgets
# after
# http://jdreaver.com/posts/2014-07-28-scientific-notation-spin-box-pyside.html
_inf = float("inf")
# Regular expression to find floats. Match groups are the whole string, the
# whole coefficient, the decimal part of the coefficient, and the exponent
# part.
_float_re = re.compile(r"(([+-]?\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?)")
def valid_float_string(string):
match = _float_re.search(string)
if match:
return match.groups()[0] == string
return False
class FloatValidator(QtGui.QValidator):
def validate(self, string, position):
if valid_float_string(string):
return self.Acceptable, string, position
if string == "" or string[position-1] in "eE.-+":
return self.Intermediate, string, position
return self.Invalid, string, position
def fixup(self, text):
match = _float_re.search(text)
if match:
return match.groups()[0]
return ""
class ScientificSpinBox(QtWidgets.QDoubleSpinBox):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setMinimum(-_inf)
self.setMaximum(_inf)
self.validator = FloatValidator()
self.setDecimals(20)
def validate(self, text, position):
return self.validator.validate(text, position)
def fixup(self, text):
return self.validator.fixup(text)
def valueFromText(self, text):
return float(text)
def textFromValue(self, value):
return format_float(value)
def stepBy(self, steps):
text = self.cleanText()
groups = _float_re.search(text).groups()
decimal = float(groups[1])
decimal += steps
new_string = "{:g}".format(decimal) + (groups[3] if groups[3] else "")
self.lineEdit().setText(new_string)
def format_float(value):
"""Modified form of the 'g' format specifier."""
string = "{:g}".format(value)
string = string.replace("e+", "e")
string = re.sub("e(-?)0*(\d+)", r"e\1\2", string)
return string


@@ -71,8 +71,14 @@ class StateManager(TaskObject):
     async def _do(self):
         try:
-            while True:
-                await asyncio.sleep(self.autosave_period)
-                self.save()
-        finally:
-            self.save()
+            try:
+                while True:
+                    await asyncio.sleep(self.autosave_period)
+                    self.save()
+            finally:
+                self.save()
+        except asyncio.CancelledError:
+            pass
+        except:
+            logger.error("Uncaught exception attempting to save state",
+                         exc_info=True)
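A self-contained sketch (asyncio only; the `save` callable and the period are stand-ins) of the pattern above: save on every tick, save once more on the way out, and treat cancellation as a normal exit:

import asyncio
import logging

logger = logging.getLogger(__name__)

async def autosave(save, period=30.0):
    try:
        try:
            while True:
                await asyncio.sleep(period)
                save()
        finally:
            save()  # one last save on shutdown or cancellation
    except asyncio.CancelledError:
        pass  # normal termination path
    except Exception:
        logger.error("Uncaught exception attempting to save state",
                     exc_info=True)

# Usage: task = asyncio.ensure_future(autosave(state.save)); later, task.cancel()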


@@ -96,13 +96,15 @@ class Ticker:
         """
         # this is after the matplotlib ScalarFormatter
         # without any i18n
-        significand, exponent = "{:1.10e}".format(v).split("e")
-        significand = significand.rstrip("0").rstrip(".")
-        exponent_sign = exponent[0].replace("+", "")
+        v = "{:.15e}".format(v)
+        if "e" not in v:
+            return v  # short number, inf, NaN, -inf
+        mantissa, exponent = v.split("e")
+        mantissa = mantissa.rstrip("0").rstrip(".")
+        exponent_sign = exponent[0].lstrip("+")
         exponent = exponent[1:].lstrip("0")
-        s = "{:s}e{:s}{:s}".format(significand, exponent_sign,
-                                   exponent).rstrip("e")
-        return self.fix_minus(s)
+        return "{:s}e{:s}{:s}".format(mantissa, exponent_sign,
+                                      exponent).rstrip("e")

     def prefix(self, offset, magnitude):
         """
@@ -115,7 +117,7 @@ class Ticker:
             prefix += self.compact_exponential(offset) + " + "
         if magnitude != 1.:
             prefix += self.compact_exponential(magnitude) + " × "
-        return prefix
+        return self.fix_minus(prefix)

     def __call__(self, a, b):
         """


@@ -163,9 +163,9 @@ def round(value, width=32):

 _ARTIQEmbeddedInfo = namedtuple("_ARTIQEmbeddedInfo",
-                                "core_name function syscall forbidden")
+                                "core_name function syscall forbidden flags")


-def kernel(arg):
+def kernel(arg=None, flags={}):
     """
     This decorator marks an object's method for execution on the core
     device.
@@ -192,13 +192,17 @@ def kernel(arg):
                 return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
             run_on_core.artiq_embedded = _ARTIQEmbeddedInfo(
                 core_name=arg, function=function, syscall=None,
-                forbidden=False)
+                forbidden=False, flags=set(flags))
             return run_on_core
         return inner_decorator
+    elif arg is None:
+        def inner_decorator(function):
+            return kernel(function, flags)
+        return inner_decorator
     else:
-        return kernel("core")(arg)
+        return kernel("core", flags)(arg)


-def portable(function):
+def portable(arg=None, flags={}):
     """
     This decorator marks a function for execution on the same device as its
     caller.
@@ -208,12 +212,17 @@ def portable(function):
     core device). A decorated function called from a kernel will be executed
     on the core device (no RPC).
     """
-    function.artiq_embedded = \
-        _ARTIQEmbeddedInfo(core_name=None, function=function, syscall=None,
-                           forbidden=False)
-    return function
+    if arg is None:
+        def inner_decorator(function):
+            return portable(function, flags)
+        return inner_decorator
+    else:
+        arg.artiq_embedded = \
+            _ARTIQEmbeddedInfo(core_name=None, function=arg, syscall=None,
+                               forbidden=False, flags=set(flags))
+        return arg


-def syscall(arg):
+def syscall(arg=None, flags={}):
     """
     This decorator marks a function as a system call. When executed on a core
     device, a C function with the provided name (or the same name as
@@ -228,9 +237,14 @@ def syscall(arg):
         def inner_decorator(function):
             function.artiq_embedded = \
                 _ARTIQEmbeddedInfo(core_name=None, function=None,
-                                   syscall=function.__name__, forbidden=False)
+                                   syscall=function.__name__, forbidden=False,
+                                   flags=set(flags))
             return function
         return inner_decorator
+    elif arg is None:
+        def inner_decorator(function):
+            return syscall(function.__name__, flags)(function)
+        return inner_decorator
     else:
         return syscall(arg.__name__)(arg)

@@ -241,7 +255,7 @@ def host_only(function):
     """
     function.artiq_embedded = \
         _ARTIQEmbeddedInfo(core_name=None, function=None, syscall=None,
-                           forbidden=True)
+                           forbidden=True, flags={})
     return function
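The `arg=None, flags={}` signatures above follow the usual "decorator usable with or without arguments" pattern; a generic hedged sketch of that pattern (names invented, unrelated to the ARTIQ API):

def tagged(arg=None, flags={}):
    """Attach a set of flags to a function; usable as @tagged or @tagged(flags={...})."""
    if callable(arg):
        # Used directly: @tagged
        arg._flags = set(flags)
        return arg
    def inner_decorator(function):
        # Used with arguments: @tagged(flags={"fast-math"})
        function._flags = set(flags)
        return function
    return inner_decorator

@tagged
def f():
    pass

@tagged(flags={"fast-math"})
def g():
    pass

print(f._flags, g._flags)  # set() {'fast-math'}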


@ -78,12 +78,23 @@ class EnumerationValue(_SimpleArgProcessor):
class NumberValue(_SimpleArgProcessor): class NumberValue(_SimpleArgProcessor):
"""An argument that can take a numerical value (typically floating point). """An argument that can take a numerical value.
:param unit: A string representing the unit of the value, for user If ndecimals = 0, scale = 1 and step is integer, then it returns
interface (UI) purposes. an integer value. Otherwise, it returns a floating point value.
:param scale: The scale of value for UI purposes. The displayed value is The simplest way to represent an integer argument is
divided by the scale. ``NumberValue(step=1, ndecimals=0)``.
For arguments with units, use both the unit parameter (a string for
display) and the scale parameter (a numerical scale for experiments).
For example, ``NumberValue(1, unit="ms", scale=1*ms)`` will display as
1 ms in the GUI window because of the unit setting, and appear as the
numerical value 0.001 in the code because of the scale setting.
:param unit: A string representing the unit of the value, for display
purposes only.
:param scale: A numerical scaling factor by which the displayed value is
multiplied when referenced in the experiment.
:param step: The step with which the value should be modified by up/down :param step: The step with which the value should be modified by up/down
buttons in a UI. The default is the scale divided by 10. buttons in a UI. The default is the scale divided by 10.
:param min: The minimum value of the argument. :param min: The minimum value of the argument.
@ -94,7 +105,8 @@ class NumberValue(_SimpleArgProcessor):
step=None, min=None, max=None, ndecimals=2): step=None, min=None, max=None, ndecimals=2):
if step is None: if step is None:
step = scale/10.0 step = scale/10.0
_SimpleArgProcessor.__init__(self, default) if default is not NoDefault:
self.default_value = default
self.unit = unit self.unit = unit
self.scale = scale self.scale = scale
self.step = step self.step = step
@ -102,8 +114,29 @@ class NumberValue(_SimpleArgProcessor):
self.max = max self.max = max
self.ndecimals = ndecimals self.ndecimals = ndecimals
def _is_int(self):
return (self.ndecimals == 0
and int(self.step) == self.step
and self.scale == 1)
def default(self):
if not hasattr(self, "default_value"):
raise DefaultMissing
if self._is_int():
return int(self.default_value)
else:
return float(self.default_value)
def process(self, x):
if self._is_int():
return int(x)
else:
return float(x)
def describe(self): def describe(self):
d = _SimpleArgProcessor.describe(self) d = {"ty": self.__class__.__name__}
if hasattr(self, "default_value"):
d["default"] = self.default_value
d["unit"] = self.unit d["unit"] = self.unit
d["scale"] = self.scale d["scale"] = self.scale
d["step"] = self.step d["step"] = self.step
@ -119,8 +152,8 @@ class StringValue(_SimpleArgProcessor):
class HasEnvironment: class HasEnvironment:
"""Provides methods to manage the environment of an experiment (devices, """Provides methods to manage the environment of an experiment (arguments,
parameters, results, arguments).""" devices, datasets)."""
def __init__(self, device_mgr=None, dataset_mgr=None, *, parent=None, def __init__(self, device_mgr=None, dataset_mgr=None, *, parent=None,
default_arg_none=False, enable_processors=False, **kwargs): default_arg_none=False, enable_processors=False, **kwargs):
self.requested_args = OrderedDict() self.requested_args = OrderedDict()
@ -143,11 +176,14 @@ class HasEnvironment:
def build(self): def build(self):
"""Must be implemented by the user to request arguments. """Must be implemented by the user to request arguments.
Other initialization steps such as requesting devices and parameters Other initialization steps such as requesting devices may also be
or initializing real-time results may also be performed here. performed here.
When the repository is scanned, any requested devices and parameters When the repository is scanned, any requested devices and arguments
are set to ``None``.""" are set to ``None``.
Datasets are read-only in this method.
"""
raise NotImplementedError raise NotImplementedError
def managers(self): def managers(self):
@ -164,6 +200,8 @@ class HasEnvironment:
def get_argument(self, key, processor=None, group=None): def get_argument(self, key, processor=None, group=None):
"""Retrieves and returns the value of an argument. """Retrieves and returns the value of an argument.
This function should only be called from ``build``.
:param key: Name of the argument. :param key: Name of the argument.
:param processor: A description of how to process the argument, such :param processor: A description of how to process the argument, such
as instances of ``BooleanValue`` and ``NumberValue``. as instances of ``BooleanValue`` and ``NumberValue``.
@ -195,13 +233,19 @@ class HasEnvironment:
def setattr_argument(self, key, processor=None, group=None): def setattr_argument(self, key, processor=None, group=None):
"""Sets an argument as attribute. The names of the argument and of the """Sets an argument as attribute. The names of the argument and of the
attribute are the same.""" attribute are the same.
The key is added to the instance's kernel invariants."""
setattr(self, key, self.get_argument(key, processor, group)) setattr(self, key, self.get_argument(key, processor, group))
kernel_invariants = getattr(self, "kernel_invariants", set())
self.kernel_invariants = kernel_invariants | {key}
def get_device_db(self): def get_device_db(self):
"""Returns the full contents of the device database.""" """Returns the full contents of the device database."""
if self.__parent is not None: if self.__parent is not None:
return self.__parent.get_device_db() return self.__parent.get_device_db()
if self.__device_mgr is None:
raise ValueError("Device manager not present")
return self.__device_mgr.get_device_db() return self.__device_mgr.get_device_db()
def get_device(self, key): def get_device(self, key):
@ -214,20 +258,22 @@ class HasEnvironment:
def setattr_device(self, key): def setattr_device(self, key):
"""Sets a device driver as attribute. The names of the device driver """Sets a device driver as attribute. The names of the device driver
and of the attribute are the same.""" and of the attribute are the same.
The key is added to the instance's kernel invariants."""
setattr(self, key, self.get_device(key)) setattr(self, key, self.get_device(key))
kernel_invariants = getattr(self, "kernel_invariants", set())
self.kernel_invariants = kernel_invariants | {key}
def set_dataset(self, key, value, def set_dataset(self, key, value,
broadcast=False, persist=False, save=True): broadcast=False, persist=False, save=True):
"""Sets the contents and handling modes of a dataset. """Sets the contents and handling modes of a dataset.
If the dataset is broadcasted, it must be PYON-serializable. Datasets must be scalars (``bool``, ``int``, ``float`` or NumPy scalar)
If the dataset is saved, it must be a scalar (``bool``, ``int``, or NumPy arrays.
``float`` or NumPy scalar) or a NumPy array.
:param broadcast: the data is sent in real-time to the master, which :param broadcast: the data is sent in real-time to the master, which
dispatches it. Returns a Notifier that can be used to mutate the dispatches it.
dataset.
:param persist: the master should store the data on-disk. Implies :param persist: the master should store the data on-disk. Implies
broadcast. broadcast.
:param save: the data is saved into the local storage of the current :param save: the data is saved into the local storage of the current
@ -238,7 +284,25 @@ class HasEnvironment:
return return
if self.__dataset_mgr is None: if self.__dataset_mgr is None:
raise ValueError("Dataset manager not present") raise ValueError("Dataset manager not present")
return self.__dataset_mgr.set(key, value, broadcast, persist, save) self.__dataset_mgr.set(key, value, broadcast, persist, save)
def mutate_dataset(self, key, index, value):
"""Mutate an existing dataset at the given index (e.g. set a value at
a given position in a NumPy array)
If the dataset was created in broadcast mode, the modification is
immediately transmitted.
If the index is a tuple of integers, it is interpreted as
``slice(*index)``.
If the index is a tuple of tuples, each sub-tuple is interpreted
as ``slice(*sub_tuple)`` (multi-dimensional slicing)."""
if self.__parent is not None:
self.__parent.mutate_dataset(key, index, value)
return
if self.__dataset_mgr is None:
raise ValueError("Dataset manager not present")
self.__dataset_mgr.mutate(key, index, value)
def get_dataset(self, key, default=NoDefault): def get_dataset(self, key, default=NoDefault):
"""Returns the contents of a dataset. """Returns the contents of a dataset.
@ -269,7 +333,7 @@ class HasEnvironment:
class Experiment: class Experiment:
"""Base class for experiments. """Base class for top-level experiments.
Deriving from this class enables automatic experiment discovery in Deriving from this class enables automatic experiment discovery in
Python modules. Python modules.
@ -315,15 +379,15 @@ class Experiment:
class EnvExperiment(Experiment, HasEnvironment): class EnvExperiment(Experiment, HasEnvironment):
"""Base class for experiments that use the ``HasEnvironment`` environment """Base class for top-level experiments that use the ``HasEnvironment``
manager. environment manager.
Most experiment should derive from this class.""" Most experiment should derive from this class."""
pass pass
def is_experiment(o): def is_experiment(o):
"""Checks if a Python object is an instantiable user experiment.""" """Checks if a Python object is a top-level experiment class."""
return (isclass(o) return (isclass(o)
and issubclass(o, Experiment) and issubclass(o, Experiment)
and o is not Experiment and o is not Experiment
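For illustration, a standalone sketch (a hypothetical helper, not the ARTIQ NumberValue class itself) of the integer-detection rule described in the docstring above: ndecimals == 0, an integer step, and scale == 1 yield an int, anything else a float:

def coerce_number(x, ndecimals=2, step=0.1, scale=1.0):
    is_int = (ndecimals == 0 and int(step) == step and scale == 1)
    return int(x) if is_int else float(x)

print(coerce_number(3, ndecimals=0, step=1, scale=1))     # 3 (int)
print(coerce_number(3, ndecimals=0, step=1, scale=1e-3))  # 3.0 (scaled arguments stay float)
print(coerce_number("2.5"))                               # 2.5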


@ -14,11 +14,9 @@ Iterating multiple times on the same scan object is possible, with the scan
yielding the same values each time. Iterating concurrently on the yielding the same values each time. Iterating concurrently on the
same scan object (e.g. via nested loops) is also supported, and the same scan object (e.g. via nested loops) is also supported, and the
iterators are independent from each other. iterators are independent from each other.
Scan objects are supported both on the host and the core device.
""" """
from random import Random, shuffle import random
import inspect import inspect
from artiq.language.core import * from artiq.language.core import *
@ -89,12 +87,16 @@ class LinearScan(ScanObject):
class RandomScan(ScanObject): class RandomScan(ScanObject):
"""A scan object that yields a fixed number of randomly ordered evenly """A scan object that yields a fixed number of randomly ordered evenly
spaced values in a range.""" spaced values in a range."""
def __init__(self, start, stop, npoints, seed=0): def __init__(self, start, stop, npoints, seed=None):
self.start = start self.start = start
self.stop = stop self.stop = stop
self.npoints = npoints self.npoints = npoints
self.sequence = list(LinearScan(start, stop, npoints)) self.sequence = list(LinearScan(start, stop, npoints))
shuffle(self.sequence, Random(seed).random) if seed is None:
rf = random.random
else:
rf = Random(seed).random
random.shuffle(self.sequence, rf)
@portable @portable
def __iter__(self): def __iter__(self):
@ -137,6 +139,9 @@ class Scannable:
"""An argument (as defined in :class:`artiq.language.environment`) that """An argument (as defined in :class:`artiq.language.environment`) that
takes a scan object. takes a scan object.
For arguments with units, use both the unit parameter (a string for
display) and the scale parameter (a numerical scale for experiments).
:param global_min: The minimum value taken by the scanned variable, common :param global_min: The minimum value taken by the scanned variable, common
to all scan modes. The user interface takes this value to set the to all scan modes. The user interface takes this value to set the
range of its input widgets. range of its input widgets.
@ -145,9 +150,9 @@ class Scannable:
up/down buttons in a user interface. The default is the scale divided up/down buttons in a user interface. The default is the scale divided
by 10. by 10.
:param unit: A string representing the unit of the scanned variable, for :param unit: A string representing the unit of the scanned variable, for
user interface (UI) purposes. display purposes only.
:param scale: The scale of value for UI purposes. The displayed value is :param scale: A numerical scaling factor by which the displayed values
divided by the scale. are multiplied when referenced in the experiment.
:param ndecimals: The number of decimals a UI should use. :param ndecimals: The number of decimals a UI should use.
""" """
def __init__(self, default=NoDefault, unit="", scale=1.0, def __init__(self, default=NoDefault, unit="", scale=1.0,
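A hedged sketch (plain `random`; values invented) of the seeding behaviour introduced for RandomScan above: no seed gives a fresh ordering on each run, while an explicit seed makes the shuffled sequence reproducible:

import random
from random import Random

def shuffled_points(start, stop, npoints, seed=None):
    step = (stop - start) / (npoints - 1)
    sequence = [start + i*step for i in range(npoints)]
    rf = random.random if seed is None else Random(seed).random
    # Two-argument shuffle, as in the code above (Python of that era; removed in 3.11).
    random.shuffle(sequence, rf)
    return sequence

print(shuffled_points(0.0, 1.0, 5, seed=0))  # same order every call
print(shuffled_points(0.0, 1.0, 5))          # order changes between calls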


@ -14,8 +14,8 @@ logger = logging.getLogger(__name__)
async def _get_repository_entries(entry_dict, async def _get_repository_entries(entry_dict,
root, filename, get_device_db): root, filename, worker_handlers):
worker = Worker({"get_device_db": get_device_db}) worker = Worker(worker_handlers)
try: try:
description = await worker.examine("scan", os.path.join(root, filename)) description = await worker.examine("scan", os.path.join(root, filename))
except: except:
@ -31,12 +31,14 @@ async def _get_repository_entries(entry_dict,
"name (%s)", name) "name (%s)", name)
name = name.replace("/", "_") name = name.replace("/", "_")
if name in entry_dict: if name in entry_dict:
logger.warning("Duplicate experiment name: '%s'", name)
basename = name basename = name
i = 1 i = 1
while name in entry_dict: while name in entry_dict:
name = basename + str(i) name = basename + str(i)
i += 1 i += 1
logger.warning("Duplicate experiment name: '%s'\n"
"Renaming class '%s' in '%s' to '%s'",
basename, class_name, filename, name)
entry = { entry = {
"file": filename, "file": filename,
"class_name": class_name, "class_name": class_name,
@ -45,7 +47,7 @@ async def _get_repository_entries(entry_dict,
entry_dict[name] = entry entry_dict[name] = entry
async def _scan_experiments(root, get_device_db, subdir=""): async def _scan_experiments(root, worker_handlers, subdir=""):
entry_dict = dict() entry_dict = dict()
for de in os.scandir(os.path.join(root, subdir)): for de in os.scandir(os.path.join(root, subdir)):
if de.name.startswith("."): if de.name.startswith("."):
@ -54,13 +56,13 @@ async def _scan_experiments(root, get_device_db, subdir=""):
filename = os.path.join(subdir, de.name) filename = os.path.join(subdir, de.name)
try: try:
await _get_repository_entries( await _get_repository_entries(
entry_dict, root, filename, get_device_db) entry_dict, root, filename, worker_handlers)
except Exception as exc: except Exception as exc:
logger.warning("Skipping file '%s'", filename, logger.warning("Skipping file '%s'", filename,
exc_info=not isinstance(exc, WorkerInternalException)) exc_info=not isinstance(exc, WorkerInternalException))
if de.is_dir(): if de.is_dir():
subentries = await _scan_experiments( subentries = await _scan_experiments(
root, get_device_db, root, worker_handlers,
os.path.join(subdir, de.name)) os.path.join(subdir, de.name))
entries = {de.name + "/" + k: v for k, v in subentries.items()} entries = {de.name + "/" + k: v for k, v in subentries.items()}
entry_dict.update(entries) entry_dict.update(entries)
@ -77,9 +79,9 @@ def _sync_explist(target, source):
class ExperimentDB: class ExperimentDB:
def __init__(self, repo_backend, get_device_db_fn): def __init__(self, repo_backend, worker_handlers):
self.repo_backend = repo_backend self.repo_backend = repo_backend
self.get_device_db_fn = get_device_db_fn self.worker_handlers = worker_handlers
self.cur_rev = self.repo_backend.get_head_rev() self.cur_rev = self.repo_backend.get_head_rev()
self.repo_backend.request_rev(self.cur_rev) self.repo_backend.request_rev(self.cur_rev)
@ -107,7 +109,7 @@ class ExperimentDB:
self.repo_backend.release_rev(self.cur_rev) self.repo_backend.release_rev(self.cur_rev)
self.cur_rev = new_cur_rev self.cur_rev = new_cur_rev
self.status["cur_rev"] = new_cur_rev self.status["cur_rev"] = new_cur_rev
new_explist = await _scan_experiments(wd, self.get_device_db_fn) new_explist = await _scan_experiments(wd, self.worker_handlers)
_sync_explist(self.explist, new_explist) _sync_explist(self.explist, new_explist)
finally: finally:
@ -123,7 +125,7 @@ class ExperimentDB:
revision = self.cur_rev revision = self.cur_rev
wd, _ = self.repo_backend.request_rev(revision) wd, _ = self.repo_backend.request_rev(revision)
filename = os.path.join(wd, filename) filename = os.path.join(wd, filename)
worker = Worker({"get_device_db": self.get_device_db_fn}) worker = Worker(self.worker_handlers)
try: try:
description = await worker.examine("examine", filename) description = await worker.examine("examine", filename)
finally: finally:


@@ -41,7 +41,7 @@ def log_worker_exception():

 class Worker:
-    def __init__(self, handlers=dict(), send_timeout=2.0):
+    def __init__(self, handlers=dict(), send_timeout=10.0):
         self.handlers = handlers
         self.send_timeout = send_timeout


@ -1,14 +1,11 @@
from operator import setitem
from collections import OrderedDict from collections import OrderedDict
import importlib import importlib
import logging import logging
import os import os
import tempfile import tempfile
import time
import re import re
import numpy as np
import h5py
from artiq.protocols.sync_struct import Notifier from artiq.protocols.sync_struct import Notifier
from artiq.protocols.pc_rpc import AutoTarget, Client, BestEffortClient from artiq.protocols.pc_rpc import AutoTarget, Client, BestEffortClient
@ -44,7 +41,8 @@ class RIDCounter:
def _update_cache(self, rid): def _update_cache(self, rid):
contents = str(rid) + "\n" contents = str(rid) + "\n"
directory = os.path.abspath(os.path.dirname(self.cache_filename)) directory = os.path.abspath(os.path.dirname(self.cache_filename))
with tempfile.NamedTemporaryFile("w", dir=directory, delete=False) as f: with tempfile.NamedTemporaryFile("w", dir=directory, delete=False
) as f:
f.write(contents) f.write(contents)
tmpname = f.name tmpname = f.name
os.replace(tmpname, self.cache_filename) os.replace(tmpname, self.cache_filename)
@ -68,7 +66,7 @@ class RIDCounter:
except: except:
continue continue
minute_folders = filter(lambda x: re.fullmatch('\d\d-\d\d', x), minute_folders = filter(lambda x: re.fullmatch('\d\d-\d\d', x),
minute_folders) minute_folders)
for mf in minute_folders: for mf in minute_folders:
minute_path = os.path.join(day_path, mf) minute_path = os.path.join(day_path, mf)
try: try:
@ -148,65 +146,6 @@ class DeviceManager:
self.active_devices.clear() self.active_devices.clear()
def get_hdf5_output(start_time, rid, name):
dirname = os.path.join("results",
time.strftime("%Y-%m-%d", start_time),
time.strftime("%H-%M", start_time))
filename = "{:09}-{}.h5".format(rid, name)
os.makedirs(dirname, exist_ok=True)
return h5py.File(os.path.join(dirname, filename), "w")
_type_to_hdf5 = {
int: h5py.h5t.STD_I64BE,
float: h5py.h5t.IEEE_F64BE,
np.int8: h5py.h5t.STD_I8BE,
np.int16: h5py.h5t.STD_I16BE,
np.int32: h5py.h5t.STD_I32BE,
np.int64: h5py.h5t.STD_I64BE,
np.uint8: h5py.h5t.STD_U8BE,
np.uint16: h5py.h5t.STD_U16BE,
np.uint32: h5py.h5t.STD_U32BE,
np.uint64: h5py.h5t.STD_U64BE,
np.float16: h5py.h5t.IEEE_F16BE,
np.float32: h5py.h5t.IEEE_F32BE,
np.float64: h5py.h5t.IEEE_F64BE
}
def result_dict_to_hdf5(f, rd):
for name, data in rd.items():
flag = None
# beware: isinstance(True/False, int) == True
if isinstance(data, bool):
data = np.int8(data)
flag = "py_bool"
elif isinstance(data, int):
data = np.int64(data)
flag = "py_int"
if isinstance(data, np.ndarray):
dataset = f.create_dataset(name, data=data)
else:
ty = type(data)
if ty is str:
ty_h5 = "S{}".format(len(data))
data = data.encode()
else:
try:
ty_h5 = _type_to_hdf5[ty]
except KeyError:
raise TypeError("Type {} is not supported for HDF5 output"
.format(ty)) from None
dataset = f.create_dataset(name, (), ty_h5)
dataset[()] = data
if flag is not None:
dataset.attrs[flag] = np.int8(1)
class DatasetManager: class DatasetManager:
def __init__(self, ddb): def __init__(self, ddb):
self.broadcast = Notifier(dict()) self.broadcast = Notifier(dict())
@ -218,20 +157,33 @@ class DatasetManager:
def set(self, key, value, broadcast=False, persist=False, save=True): def set(self, key, value, broadcast=False, persist=False, save=True):
if persist: if persist:
broadcast = True broadcast = True
r = None
if broadcast: if broadcast:
self.broadcast[key] = (persist, value) self.broadcast[key] = persist, value
r = self.broadcast[key][1]
if save: if save:
self.local[key] = value self.local[key] = value
return r
def mutate(self, key, index, value):
target = None
if key in self.local:
target = self.local[key]
if key in self.broadcast.read:
target = self.broadcast[key][1]
if target is None:
raise KeyError("Cannot mutate non-existing dataset")
if isinstance(index, tuple):
if isinstance(index[0], tuple):
index = tuple(slice(*e) for e in index)
else:
index = slice(*index)
setitem(target, index, value)
def get(self, key): def get(self, key):
try: if key in self.local:
return self.local[key] return self.local[key]
except KeyError: else:
pass return self.ddb.get(key)
return self.ddb.get(key)
def write_hdf5(self, f): def write_hdf5(self, f):
result_dict_to_hdf5(f, self.local) for k, v in self.local.items():
f[k] = v
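To make the index handling in mutate() above concrete, a small standalone sketch (NumPy array and indices are invented) of how a tuple becomes a slice, and a tuple of tuples a multi-dimensional slice:

import numpy as np
from operator import setitem

def mutate(target, index, value):
    if isinstance(index, tuple):
        if isinstance(index[0], tuple):
            index = tuple(slice(*e) for e in index)  # multi-dimensional slicing
        else:
            index = slice(*index)                    # 1D slice(start, stop[, step])
    setitem(target, index, value)

a = np.zeros(6)
mutate(a, (1, 4), 7.0)          # a[1:4] = 7.0
b = np.zeros((3, 3))
mutate(b, ((0, 2), (0, 2)), 1)  # b[0:2, 0:2] = 1
print(a, b, sep="\n")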


@ -5,10 +5,12 @@ import logging
import traceback import traceback
from collections import OrderedDict from collections import OrderedDict
import h5py
import artiq import artiq
from artiq.protocols import pipe_ipc, pyon from artiq.protocols import pipe_ipc, pyon
from artiq.tools import multiline_log_config, file_import from artiq.tools import multiline_log_config, file_import
from artiq.master.worker_db import DeviceManager, DatasetManager, get_hdf5_output from artiq.master.worker_db import DeviceManager, DatasetManager
from artiq.language.environment import is_experiment from artiq.language.environment import is_experiment
from artiq.language.core import set_watchdog_factory, TerminationRequested from artiq.language.core import set_watchdog_factory, TerminationRequested
from artiq.coredevice.core import CompileError, host_only, _render_diagnostic from artiq.coredevice.core import CompileError, host_only, _render_diagnostic
@ -17,6 +19,7 @@ from artiq import __version__ as artiq_version
ipc = None ipc = None
def get_object(): def get_object():
line = ipc.readline().decode() line = ipc.readline().decode()
return pyon.decode(line) return pyon.decode(line)
@ -92,7 +95,7 @@ class Scheduler:
delete = staticmethod(make_parent_action("scheduler_delete")) delete = staticmethod(make_parent_action("scheduler_delete"))
request_termination = staticmethod( request_termination = staticmethod(
make_parent_action("scheduler_request_termination")) make_parent_action("scheduler_request_termination"))
get_status = staticmethod(make_parent_action("scheduler_get_status")) get_status = staticmethod(make_parent_action("scheduler_get_status"))
def set_run_info(self, rid, pipeline_name, expid, priority): def set_run_info(self, rid, pipeline_name, expid, priority):
self.rid = rid self.rid = rid
@ -124,14 +127,6 @@ class ExamineDeviceMgr:
return None return None
class DummyDatasetMgr:
def set(key, value, broadcast=False, persist=False, save=True):
return None
def get(key):
pass
def examine(device_mgr, dataset_mgr, file): def examine(device_mgr, dataset_mgr, file):
module = file_import(file) module = file_import(file)
for class_name, exp_class in module.__dict__.items(): for class_name, exp_class in module.__dict__.items():
@ -153,12 +148,6 @@ def examine(device_mgr, dataset_mgr, file):
register_experiment(class_name, name, arginfo) register_experiment(class_name, name, arginfo)
def string_to_hdf5(f, key, value):
dtype = "S{}".format(len(value))
dataset = f.create_dataset(key, (), dtype)
dataset[()] = value.encode()
def setup_diagnostics(experiment_file, repository_path): def setup_diagnostics(experiment_file, repository_path):
def render_diagnostic(self, diagnostic): def render_diagnostic(self, diagnostic):
message = "While compiling {}\n".format(experiment_file) + \ message = "While compiling {}\n".format(experiment_file) + \
@@ -181,7 +170,8 @@ def setup_diagnostics(experiment_file, repository_path):
     # putting inherently local objects (the diagnostic engine) into
     # global slots, and there isn't any point in making it prettier by
     # wrapping it in layers of indirection.
-    artiq.coredevice.core._DiagnosticEngine.render_diagnostic = render_diagnostic
+    artiq.coredevice.core._DiagnosticEngine.render_diagnostic = \
+        render_diagnostic


 def main():
@@ -220,6 +210,11 @@ def main():
                 exp = get_exp(experiment_file, expid["class_name"])
                 device_mgr.virtual_devices["scheduler"].set_run_info(
                     rid, obj["pipeline_name"], expid, obj["priority"])
+                dirname = os.path.join("results",
+                                       time.strftime("%Y-%m-%d", start_time),
+                                       time.strftime("%H-%M", start_time))
+                os.makedirs(dirname, exist_ok=True)
+                os.chdir(dirname)
                 exp_inst = exp(
                     device_mgr, dataset_mgr, enable_processors=True,
                     **expid["arguments"])
@@ -234,17 +229,16 @@
                 exp_inst.analyze()
                 put_object({"action": "completed"})
             elif action == "write_results":
-                f = get_hdf5_output(start_time, rid, exp.__name__)
-                try:
-                    dataset_mgr.write_hdf5(f)
-                    string_to_hdf5(f, "artiq_version", artiq_version)
-                    if "repo_rev" in expid:
-                        string_to_hdf5(f, "repo_rev", expid["repo_rev"])
-                finally:
-                    f.close()
+                filename = "{:09}-{}.h5".format(rid, exp.__name__)
+                with h5py.File(filename, "w") as f:
+                    dataset_mgr.write_hdf5(f.create_group("datasets"))
+                    f["artiq_version"] = artiq_version
+                    f["rid"] = rid
+                    f["start_time"] = int(time.mktime(start_time))
+                    f["expid"] = pyon.encode(expid)
                 put_object({"action": "completed"})
             elif action == "examine":
-                examine(ExamineDeviceMgr, DummyDatasetMgr, obj["file"])
+                examine(ExamineDeviceMgr, ParentDatasetDB, obj["file"])
                 put_object({"action": "completed"})
             elif action == "terminate":
                 break
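Together with the per-run working directory set up in the previous hunk, write_results now leaves a file at results/<date>/<time>/<rid>-<class>.h5 containing a "datasets" group plus a few flat metadata entries. A hedged sketch of reading one back (the file name is a made-up example):

    import h5py
    from artiq.protocols import pyon

    with h5py.File("000000042-MyExperiment.h5", "r") as f:
        print(f["artiq_version"][()], f["rid"][()], f["start_time"][()])
        expid = f["expid"][()]
        if isinstance(expid, bytes):   # h5py may return stored strings as bytes
            expid = expid.decode()
        print(pyon.decode(expid))
        for name, dataset in f["datasets"].items():
            print(name, dataset[()])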
@@ -255,7 +249,7 @@
         short_exc_info = type(exc).__name__
         exc_str = str(exc)
         if exc_str:
-            short_exc_info += ": " + exc_str
+            short_exc_info += ": " + exc_str.splitlines()[0]
         lines = ["Terminating with exception ("+short_exc_info+")\n"]
         if hasattr(exc, "artiq_core_exception"):
             lines.append(str(exc.artiq_core_exception))


@@ -155,3 +155,12 @@ if sys.version_info[:3] == (3, 5, 1):
     from asyncio import proactor_events
     proactor_events._ProactorBaseWritePipeTransport._loop_writing = _loop_writing
+
+
+if sys.version_info[:3] == (3, 5, 2):
+    import asyncio
+
+    # See https://github.com/m-labs/artiq/issues/506
+    def _ipaddr_info(host, port, family, type, proto):
+        return None
+    asyncio.base_events._ipaddr_info = _ipaddr_info


@@ -2,6 +2,7 @@ import asyncio
 import logging
 import re

+from artiq.monkey_patches import *
 from artiq.protocols.asyncio_server import AsyncioServer
 from artiq.tools import TaskObject, MultilineFormatter


@@ -20,6 +20,7 @@ import logging
 import inspect
 from operator import itemgetter

+from artiq.monkey_patches import *
 from artiq.protocols import pyon
 from artiq.protocols.asyncio_server import AsyncioServer as _AsyncioServer
@@ -99,6 +100,8 @@ class Client:
     in the middle of a RPC can break subsequent RPCs (from the same
     client).
     """

+    kernel_invariants = set()
+
     def __init__(self, host, port, target_name=AutoTarget, timeout=None):
         self.__socket = socket.create_connection((host, port), timeout)
@@ -186,6 +189,8 @@ class AsyncioClient:
     Concurrent access from different asyncio tasks is supported; all calls
     use a single lock.
     """

+    kernel_invariants = set()
+
     def __init__(self):
         self.__lock = asyncio.Lock()
         self.__reader = None
@@ -285,6 +290,8 @@ class BestEffortClient:
     :param retry: Amount of time to wait between retries when reconnecting
         in the background.
     """

+    kernel_invariants = set()
+
     def __init__(self, host, port, target_name,
                  firstcon_timeout=0.5, retry=5.0):
         self.__host = host
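The empty kernel_invariants class attribute added to the three client classes follows the ARTIQ convention of declaring which attributes never change after construction, presumably so that code which inspects the attribute can treat RPC clients like any other device object. A hedged sketch of the convention with a hypothetical driver class:

    # Hypothetical driver: "channel" and "scale" never change after __init__.
    class MyDriver:
        kernel_invariants = {"channel", "scale"}

        def __init__(self, channel, scale):
            self.channel = channel
            self.scale = scale

    # Generic inspection code can treat any object uniformly; an RPC client
    # with kernel_invariants = set() simply declares nothing invariant.
    def invariant_attributes(obj):
        return {name: getattr(obj, name)
                for name in getattr(obj, "kernel_invariants", set())}

    print(invariant_attributes(MyDriver(channel=3, scale=0.5)))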
@@ -518,9 +525,13 @@ class Server(_AsyncioServer):
                     else:
                         raise ValueError("Unknown action: {}"
                                          .format(obj["action"]))
-                except Exception:
+                except Exception as exc:
+                    short_exc_info = type(exc).__name__
+                    exc_str = str(exc)
+                    if exc_str:
+                        short_exc_info += ": " + exc_str.splitlines()[0]
                     obj = {"status": "failed",
-                           "message": traceback.format_exc()}
+                           "message": short_exc_info + "\n" + traceback.format_exc()}
                 line = pyon.encode(obj) + "\n"
                 writer.write(line.encode())
         except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError):
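The same first-line summary trick used in the worker above now prefixes the traceback sent back to RPC clients. A standalone sketch of the formatting, runnable outside of ARTIQ:

    import traceback

    def summarize(exc):
        short = type(exc).__name__
        text = str(exc)
        if text:
            short += ": " + text.splitlines()[0]   # keep only the first line
        return short

    try:
        raise ValueError("bad value\nfollowed by a long multi-line explanation")
    except Exception as exc:
        print(summarize(exc) + "\n" + traceback.format_exc())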


@@ -22,7 +22,7 @@ class _BaseIO:
 if os.name != "nt":
     async def _fds_to_asyncio(rfd, wfd, loop):
-        reader = asyncio.StreamReader(loop=loop)
+        reader = asyncio.StreamReader(loop=loop, limit=4*1024*1024)
         reader_protocol = asyncio.StreamReaderProtocol(reader, loop=loop)

         rf = open(rfd, "rb", 0)
         rt, _ = await loop.connect_read_pipe(lambda: reader_protocol, rf)
@@ -128,7 +128,7 @@ else:  # windows
         loop = asyncio.get_event_loop()

         def factory():
-            reader = asyncio.StreamReader(loop=loop)
+            reader = asyncio.StreamReader(loop=loop, limit=4*1024*1024)
             protocol = asyncio.StreamReaderProtocol(reader,
                                                     self._child_connected,
                                                     loop=loop)
@@ -189,7 +189,7 @@ else:  # windows
     async def connect(self):
         loop = asyncio.get_event_loop()
-        self.reader = asyncio.StreamReader(loop=loop)
+        self.reader = asyncio.StreamReader(loop=loop, limit=4*1024*1024)
         reader_protocol = asyncio.StreamReaderProtocol(
             self.reader, loop=loop)
         transport, _ = await loop.create_pipe_connection(
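All three readers are now built with limit=4*1024*1024 because asyncio's default StreamReader limit is 64 KiB, which makes readline() fail on very long pyon lines, presumably large datasets serialized over the worker pipe. A minimal sketch of the same construction on a POSIX pipe file descriptor, mirroring the non-Windows branch above:

    import asyncio

    async def pipe_reader(rfd, loop):
        # Same pattern as _fds_to_asyncio above, with the raised line limit.
        reader = asyncio.StreamReader(loop=loop, limit=4*1024*1024)
        protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
        await loop.connect_read_pipe(lambda: protocol, open(rfd, "rb", 0))
        return await reader.readline()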


@@ -13,7 +13,7 @@ objects. Its main features are:
 The main rationale for this new custom serializer (instead of using JSON) is
 that JSON does not support Numpy and more generally cannot be extended with
 other data types while keeping a concise syntax. Here we can use the Python
-function call syntax to mark special data types.
+function call syntax to express special data types.
 """
@@ -24,8 +24,10 @@ import os
 import tempfile

 import numpy
 from ..language.core import int as wrapping_int

 _encode_map = {
     type(None): "none",
     bool: "bool",
@@ -37,6 +39,7 @@ _encode_map = {
     list: "list",
     set: "set",
     dict: "dict",
+    slice: "slice",
     wrapping_int: "number",
     Fraction: "fraction",
     OrderedDict: "ordereddict",
@@ -125,6 +128,9 @@ class _Encoder:
         r += "}"
         return r

+    def encode_slice(self, x):
+        return repr(x)
+
     def encode_fraction(self, x):
         return "Fraction({}, {})".format(self.encode(x.numerator),
                                          self.encode(x.denominator))
@@ -176,6 +182,7 @@ _eval_dict = {
     "null": None,
     "false": False,
     "true": True,
+    "slice": slice,
     "int": wrapping_int,
     "Fraction": Fraction,


@@ -14,6 +14,7 @@ import asyncio
 from operator import getitem
 from functools import partial

+from artiq.monkey_patches import *
 from artiq.protocols import pyon
 from artiq.protocols.asyncio_server import AsyncioServer

Some files were not shown because too many files have changed in this diff.