forked from M-Labs/artiq

Compare commits


69 Commits

Author SHA1 Message Date
Sebastien Bourdeauducq 351a682ebd ad9912: fix ftw width docstring 2020-02-27 02:11:56 +08:00
Sebastien Bourdeauducq abd3c2a0db firmware: expose fmod to kernels. Closes #1417 2020-01-10 14:33:32 +08:00
Robert Jördens b0661ec055 comm_analyzer: don't assume every message has data
close #1377
2019-10-28 15:36:45 +01:00
Sebastien Bourdeauducq c60ea58e8d examples/kc705: fix dds_test 2019-10-17 07:37:29 +08:00
Sebastien Bourdeauducq aeb114adfb coreanalyzer: AD9914 fixes (#1376) 2019-10-17 07:37:25 +08:00
Charles Baynham 80c0aa5bfa pyon: Handle inf in decoding 2019-09-12 09:46:58 +08:00
Sebastien Bourdeauducq 8c88ce8169 artiq_run: fix expid class_name inconsistency 2019-09-09 15:17:02 +08:00
Sebastien Bourdeauducq 7cf25f8539 artiq_client: python 3.7 compatibility 2019-08-05 13:44:24 +08:00
Sebastien Bourdeauducq dd78c0e53d support overriding versioneer 2019-08-05 13:36:09 +08:00
Sebastien Bourdeauducq 23832ca445 add MAJOR_VERSION 2019-08-05 13:35:43 +08:00
Sebastien Bourdeauducq 5fe37cad30 dashboard: work around disappearing TTL/DDS panel bug. Closes #1307 2019-06-14 11:33:04 +08:00
David Nadlinger 2183c06154 firmware: Fix kernel RPC handling of zero-size values (e.g. empty arrays) 2019-04-11 11:29:58 +08:00
David Nadlinger 8b69babe8a firmware: Make "unexpected reply from kernel CPU" log messages unique
This makes it easier to localize issues based on the log output.
2019-04-11 11:29:51 +08:00
David Nadlinger 3e49da3d39 coredevice: Add test for recent kernel RPC fixes
This covers all three (de)serialisation fixes.
2019-04-11 11:29:45 +08:00
David Nadlinger ff1eb4858a compiler: Fix crash in escape analysis for assigning string literals 2019-04-11 11:29:39 +08:00
David Nadlinger 56e14bfac7 compiler: Fix comparison of tuples of lists 2019-04-11 11:29:34 +08:00
David Nadlinger 2cbb2b970c compiler: Fix comparison of nested lists 2019-04-11 11:29:27 +08:00
David Nadlinger debf64d1c2 firmware: Fix kernel RPC strings size (memory corruption)
Test case to follow.
2019-04-11 11:29:21 +08:00
David Nadlinger cb326aca70 firmware: Fix kernel RPC tuple size calculation (memory corruption)
Test case to follow.
2019-04-11 11:29:16 +08:00
David Nadlinger b89a67ef9d coredevice: Fix host-side serialization of (nested) lists
Test case to follow.
2019-04-11 11:29:09 +08:00
David Nadlinger 0c5fa373d5 compiler: Quote tuples as TTuple values
Previously, they would end up being TInstances,
rendering them useless.
2019-04-11 11:29:05 +08:00
David Nadlinger 01213e2381 gui: Fix crash when quickly opening/closing applets
Quickly closing/reopening applets (e.g. quickly clicking the checkbox
on an entire folder of applets) would previously lead to an occasional
KeyError on the self.dock_to_item access in on_dock_closed, as close()
would be invoked more than once.

The geometry/checked state handling can potentially be cleaned up
further, but at least this avoids the crash.
2019-03-11 20:39:22 +08:00
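A minimal sketch of the guard implied by the message above, reusing the dock_to_item/on_dock_closed names it mentions; the actual dashboard fix may differ:

    class AppletsDock:
        def __init__(self):
            self.dock_to_item = {}

        def on_dock_closed(self, dock):
            # pop() tolerates close() being invoked more than once for the
            # same dock, whereas self.dock_to_item[dock] raises KeyError on
            # the second call.
            item = self.dock_to_item.pop(dock, None)
            if item is None:
                return
            # ... tear down the applet associated with item ...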
Sebastien Bourdeauducq 6113ecb199 kasli: add nrc variant 2019-03-11 16:27:52 +08:00
Astro 9bead085c7 manual: fix wavedrom extension json syntax
The leading empty line seems to be required by Sphinx 1.8.3.

The arguments must be strict JSON when prerendering for a target that is
not "html". Browser JSON parsing may have been more lenient.

Signed-off-by: Stephan Maka <stephan@spaceboyz.net>
2019-02-21 11:28:11 +08:00
Robert Jördens b53d874c6a dashboard: reconnect to core moninj
* handle disconnects like core device address changes and do a
  disconnect/connect iteration
* after connection failure wait 10 seconds and try again
* this addresses the slight regression from release-2
  to release-3 where the moninj protocol was made stateful
  (#838 and #1125)
* it would be much better to fix smoltcp/runtime to not lose the
  connection under pressure (#1125)
* the crashes reported in #838 look more like a race condition
* master disconnects still require dashboard restarts

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-02-20 10:06:32 +01:00
whitequark 61f6619cb0 compiler: monomorphize casts first, but more carefully.
This reverts 425cd7851, which broke the use of casts to define
integer width.

Instead of it, two steps are taken:
  * First, literals are monomorphized, leading to a predictable result.
  * Second, casts are monomorphized, in a top-down way, i.e.
    consider the expression `int64(round(x))`. If round() was visited
    first, the intermediate precision would be 32-bit, which is
    clearly undesirable. Therefore, contextual rules should take
    priority over non-contextual ones.

Fixes #1252.
2019-02-07 20:57:44 +08:00
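A kernel-side illustration of the ordering issue described above (a sketch only; int64 comes from numpy and the experiment itself is a placeholder):

    from artiq.experiment import *
    from numpy import int64

    class CastWidth(EnvExperiment):
        def build(self):
            self.setattr_device("core")

        @kernel
        def run(self):
            x = 2.5
            # The outer cast is contextual: it should force a 64-bit result.
            # If round() were monomorphized first, the intermediate would be
            # 32-bit and the int64() cast would come too late.
            delay_mu(int64(round(x)))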
David Nadlinger 04952cb230 conda: Require recent aiohttp
artiq_influxdb doesn't work with aiohttp 0.17 (anymore), as the
ClientSession API changed. I have not looked up precisely which
is the first version that works, but 3.x has been out for almost
a year and is available on the Anaconda/conda-forge channels.
2019-02-07 10:41:48 +08:00
Robert Jördens 162ce813ea ad53xx: ignore F3 (reserved)
Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-01-24 15:50:25 +01:00
Sebastien Bourdeauducq 1555fcf4ab ad9912: fix imports 2019-01-23 19:09:51 +08:00
Sebastien Bourdeauducq bcff533899 kasli_tester: fix grabber test 2019-01-23 17:59:58 +08:00
Sebastien Bourdeauducq 139e87de36 firmware: fix compilation error with more than 1 Grabber 2019-01-22 19:47:17 +08:00
Robert Jördens 51ee9122f2 ad9912: add some slack after init 2019-01-21 18:12:12 +01:00
Sebastien Bourdeauducq 76f63b29ba ad9910: add precision about tune_io_update_delay/tune_sync_delay order 2019-01-21 19:38:03 +08:00
Sebastien Bourdeauducq 222825d3a8 urukul: fix typos 2019-01-21 19:37:33 +08:00
Sebastien Bourdeauducq 9bbb67c340 kasli_tester: skip Zotino test when no Zotino is present 2019-01-21 18:12:03 +08:00
Sebastien Bourdeauducq 715dac5798 kasli: add Berkeley variant 2019-01-21 17:45:42 +08:00
Sebastien Bourdeauducq f0e4862d16 kasli_tester: skip Grabber test when no Grabber is present 2019-01-21 17:45:38 +08:00
David Nadlinger 4933686dd0 master/scheduler: Fix misleading indentation [nfc] 2019-01-21 10:25:01 +08:00
David Nadlinger 6969c889d8 manual: Minor grammar fixes 2019-01-21 10:24:11 +08:00
Robert Jördens 4f5f0538b7 doc/installing: cleanup and fixes
* fix broken and old URLs to anaconda/miniconda
* append conda-forge, do not prepend it (consistent with conda-forge
  instructions and does not blindly prefer packages in conda-forge over
  packages in defaults)
* shorten m-labs repo
2019-01-16 12:44:12 +01:00
Sebastien Bourdeauducq ea39160006 monkey_patches: disable for Python >= 3.6.7
3.6.7 and later are fixed
3.7.0 is incompatible with the monkey patches and they do more harm than good
3.7.1 and later are fixed
2019-01-15 20:34:31 +08:00
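A sketch of the version gate those notes imply; the actual module may structure the check differently:

    import sys

    def patches_needed():
        # 3.6.7+ and 3.7.1+ ship the upstream fixes, and 3.7.0 breaks with
        # the patches applied, so only interpreters older than 3.6.7 get them.
        return sys.version_info < (3, 6, 7)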
whitequark 56315203a9 compiler: first monomorphize ints, then casts.
Fixes #1242.
2019-01-13 11:53:58 +08:00
whitequark ed6d927f6c Improve Python 3.7 compatibility.
async is now a full (non-contextual) keyword.

There are two more instances:
   - artiq/frontend/artiq_client.py
   - artiq/devices/thorlabs_tcube/driver.py

It is not immediately clear how to fix those, so they are left for
later work.
2019-01-13 11:53:58 +08:00
David Nadlinger cc811429f4 coredevice.ad9910: Fix phase tracking ref_time passing
This is difficult to test without hardware mocks or some
form of phase readback, but the symptom was that e.g.
`self.dds.set(…, ref_time=now_mu() - 1)` would fail
periodically, that is, whenever bit 32 of the timestamp
would be set (which would be turned into the sign bit).

This is a fairly sinister issue, and is probably a compiler
bug of some sort (either accepts-invalid or wrong type inference).
2019-01-12 17:24:03 +00:00
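A plain-Python illustration of the symptom (made-up numbers): once the timestamp grows past 2**31, squeezing it through a 32-bit parameter turns the high bit into a sign bit.

    now = (1 << 31) + 12345          # 64-bit timestamp value past 2**31
    low32 = now & 0xFFFFFFFF
    signed = low32 - (1 << 32) if low32 >= 1 << 31 else low32
    print(signed)                    # negative: ref_time arrives corrupted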
Robert Jördens 5b2e7cb7f3 test_ad9910: relax tests
* tune_sync_delay: the opposite IO_UPDATE to SYNC_CLK alignment may not be perfectly
mis-aligned
* set_mu_speed: seems to be slower on the buildbot

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-01-09 18:34:33 +01:00
Robert Jördens 41a816fd16 ad9910: fix RTIO fine timestamp nudging
Previously the TSC was truncated to an even number of coarse RTIO periods
before doing the settings SPI transfer. Afterwards the IO update pulse would
introduce at least one but less than two RTIO cycles. Ultimately the RTIO TSC
was truncated again to even. If the SPI transfer takes an odd number of RTIO
periods, a subsequent transfer could collide.

close #1229

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-01-09 18:34:20 +01:00
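A rough sketch of the alignment arithmetic involved, assuming the 8-machine-unit coarse RTIO period used in the ad9910 hunks further down; the values are illustrative only:

    FINE = 8                      # machine units per coarse RTIO cycle (assumed)

    def align_one(t):
        return t & ~(FINE - 1)    # new behaviour: truncate to one coarse cycle

    def align_two(t):
        return t & ~(2*FINE - 1)  # old behaviour: truncate to an even cycle

    t = 1000011
    print(align_one(t), align_two(t))   # 1000008 vs. 1000000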
Robert Jördens 51239ebdb6 ad9910: add more slack in tune_sync_delay
close #1235

Signed-off-by: Robert Jördens <rj@quartiq.de>
2019-01-09 18:34:03 +01:00
Sebastien Bourdeauducq 0303267d7e satman: wait for CPLL/QPLL lock after setting drtio_transceiver::stable_clkin 2019-01-07 17:09:35 +08:00
Sebastien Bourdeauducq 0ec01790fe firmware: fix not(has_spiflash) build 2019-01-05 23:41:50 +08:00
Sebastien Bourdeauducq f23d4e2762 update copyright year 2019-01-05 15:03:30 +08:00
Sebastien Bourdeauducq 6020b9c2f1 manual: update firmware/gateware build/flashing instructions. Closes #1223 2019-01-05 12:39:07 +08:00
Drew 9b081737d4 artiq_flash: change docs from old `-m` arg to `-V` (#1224) (#1227)
The `-m` argument is deprecated; changed to the newer `-V` argument.
Closes #1224

Signed-off-by: Drew Risinger <drewrisinger@users.noreply.github.com>
2019-01-05 10:23:47 +08:00
Drew Risinger 17fa5026a5 pyon: fix grammar in module docstring.
Signed-off-by: Drew Risinger <drewrisinger@users.noreply.github.com>
2019-01-05 10:23:47 +08:00
Drew 3312a033eb Docs: instructions to check if in plugdev group 2019-01-05 10:23:47 +08:00
Sebastien Bourdeauducq a53065ffc8 manual: move to correct directory for building rust crates. Closes #1222 2018-12-21 10:37:22 +08:00
Drew 8eee5dd414 tdr.py: typo (#1220) 2018-12-19 02:47:34 +08:00
Sebastien Bourdeauducq 371dda4a16 sync_struct: handle TimeoutError as subscriber disconnection. Closes #1215 2018-12-13 07:15:07 +08:00
Robert Jördens b1ea02ddf0 eem: name the servo submodule
This allows the migen namer to derive names for the ADC return clock
domain in the case of multiple SUServos

close #1201

Signed-off-by: Robert Jördens <rj@quartiq.de>
2018-12-11 11:37:36 +01:00
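For context on the Migen mechanism involved: attaching a submodule through a named attribute is what gives the namer a handle to derive signal and clock-domain names from. A generic sketch, not the actual eem.py code:

    from migen import Module

    class Servo(Module):
        pass

    class Top(Module):
        def __init__(self):
            # Named: derived signals/clock domains get "servo0_..." names.
            self.submodules.servo0 = Servo()
            # Anonymous: the namer must invent a name, which becomes ambiguous
            # once several SUServos are present.
            self.submodules += Servo()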
Sebastien Bourdeauducq e4ddfb303c Revert "firmware/ksupport: Update `cfg(not(has_rtio))` stub signatures"
release-4 does not have DRTIO switching nor the new RTIO CSR interface.

This reverts commit a7d7e91188.
2018-12-11 16:47:34 +08:00
David Nadlinger a7d7e91188 firmware/ksupport: Update `cfg(not(has_rtio))` stub signatures
This fixes up 8caea0e6d3,
but it is unclear whether anyone even uses a `not(has_rtio)`
configuration at this point.
2018-12-11 09:30:49 +01:00
Kaifeng 34d0f592f1 kasli_tester: add support for windows platform. (#1204) 2018-12-05 21:07:55 +08:00
Sebastien Bourdeauducq 7358262ab3 test_loopback_gate_timing: fix lat_offset 2018-12-02 20:52:54 +08:00
Sebastien Bourdeauducq 6fdfd6103f ctlmgr: do not raise exceptions in Controllers.__setitem__. Closes #1198 2018-12-01 18:06:53 +08:00
Sebastien Bourdeauducq e3624ca86d test_loopback_gate_timing: print input timing for debugging 2018-12-01 18:06:22 +08:00
Sebastien Bourdeauducq d6cc10b43c grabber: work around windows numpy int peculiarity (same as a81c12de9) 2018-11-30 18:41:31 +08:00
Sebastien Bourdeauducq 35f331f82c language: fix syscall arg handling 2018-11-30 17:59:42 +08:00
Sebastien Bourdeauducq a3c5cdeb86 manual/drtio: update output internal description (SED, 'destination' switching terminology) 2018-11-26 18:36:59 +08:00
Robert Jördens cc61ee9139 urukul: work around windows numpy int peculiarity
"OverflowError: Python int too large to convert to C long" otherwise

opticlock#74

Signed-off-by: Robert Jördens <rj@quartiq.de>
2018-11-25 16:58:10 +01:00
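The workaround pattern visible in the urukul and grabber hunks below is int32(int64(value)); a safe-to-run equivalent that shows the wrapping explicitly (the exact warning/overflow behaviour of a direct int32() call depends on the numpy version and platform):

    from numpy import int32, int64

    att = 0xFFFFFFFF                    # 32-bit register image as a Python int
    att_reg = int64(att).astype(int32)  # keep the low 32 bits, no OverflowError
    print(att_reg)                      # -1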
Sebastien Bourdeauducq 74371399de RELEASE_NOTES: remove ARTIQ-5 entry 2018-11-20 18:39:27 +08:00
62 changed files with 1176 additions and 265 deletions

MAJOR_VERSION (new file)
View File

@@ -0,0 +1 @@
+4

View File

@@ -29,7 +29,7 @@ Website: https://m-labs.hk/artiq
 License
 =======
-Copyright (C) 2014-2018 M-Labs Limited.
+Copyright (C) 2014-2019 M-Labs Limited.
 ARTIQ is free software: you can redistribute it and/or modify
 it under the terms of the GNU Lesser General Public License as published by

View File

@@ -3,13 +3,6 @@
 Release notes
 =============
-ARTIQ-5
--------
-5.0
-***
 ARTIQ-4
 -------

View File

@@ -481,6 +481,12 @@ def get_versions():
     # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
     # case we can only use expanded keywords.
+    override = os.getenv("VERSIONEER_OVERRIDE")
+    if override:
+        return {"version": override, "full-revisionid": None,
+                "dirty": None,
+                "error": None, "date": None}
     cfg = get_config()
     verbose = cfg.verbose

View File

@@ -221,6 +221,17 @@ class ASTSynthesizer:
             return asttyped.ListT(elts=elts, ctx=None, type=builtins.TList(),
                                   begin_loc=begin_loc, end_loc=end_loc,
                                   loc=begin_loc.join(end_loc))
+        elif isinstance(value, tuple):
+            begin_loc = self._add("(")
+            elts = []
+            for index, elt in enumerate(value):
+                elts.append(self.quote(elt))
+                self._add(", ")
+            end_loc = self._add(")")
+            return asttyped.TupleT(elts=elts, ctx=None,
+                                   type=types.TTuple([e.type for e in elts]),
+                                   begin_loc=begin_loc, end_loc=end_loc,
+                                   loc=begin_loc.join(end_loc))
         elif isinstance(value, numpy.ndarray):
             begin_loc = self._add("numpy.array([")
             elts = []
@@ -1019,7 +1030,7 @@ class Stitcher:
         function_type = types.TRPC(ret_type,
                                    service=self.embedding_map.store_object(host_function),
-                                   async=is_async)
+                                   is_async=is_async)
         self.functions[function] = function_type
         return function_type

View File

@@ -66,8 +66,8 @@ class Module:
         interleaver = transforms.Interleaver(engine=self.engine)
         invariant_detection = analyses.InvariantDetection(engine=self.engine)
-        cast_monomorphizer.visit(src.typedtree)
         int_monomorphizer.visit(src.typedtree)
+        cast_monomorphizer.visit(src.typedtree)
         inferencer.visit(src.typedtree)
         monomorphism_validator.visit(src.typedtree)
         escape_validator.visit(src.typedtree)

View File

@@ -1426,7 +1426,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             for index in range(len(lhs.type.elts)):
                 lhs_elt = self.append(ir.GetAttr(lhs, index))
                 rhs_elt = self.append(ir.GetAttr(rhs, index))
-                elt_result = self.append(ir.Compare(op, lhs_elt, rhs_elt))
+                elt_result = self.polymorphic_compare_pair(op, lhs_elt, rhs_elt)
                 if result is None:
                     result = elt_result
                 else:
@@ -1453,6 +1453,7 @@ class ARTIQIRGenerator(algorithm.Visitor):
             lhs_elt = self.append(ir.GetElem(lhs, index_phi))
             rhs_elt = self.append(ir.GetElem(rhs, index_phi))
             body_result = self.polymorphic_compare_pair(op, lhs_elt, rhs_elt)
+            body_end = self.current_block
             loop_body2 = self.add_block("compare.body2")
             self.current_block = loop_body2
@@ -1468,8 +1469,8 @@ class ARTIQIRGenerator(algorithm.Visitor):
             phi.add_incoming(compare_length, head)
             loop_head.append(ir.BranchIf(loop_cond, loop_body, tail))
             phi.add_incoming(ir.Constant(True, builtins.TBool()), loop_head)
-            loop_body.append(ir.BranchIf(body_result, loop_body2, tail))
-            phi.add_incoming(body_result, loop_body)
+            body_end.append(ir.BranchIf(body_result, loop_body2, tail))
+            phi.add_incoming(body_result, body_end)
             if isinstance(op, ast.NotEq):
                 result = self.append(ir.Select(phi,

View File

@@ -11,13 +11,12 @@ class CastMonomorphizer(algorithm.Visitor):
         self.engine = engine

     def visit_CallT(self, node):
-        self.generic_visit(node)
-
         if (types.is_builtin(node.func.type, "int") or
                 types.is_builtin(node.func.type, "int32") or
                 types.is_builtin(node.func.type, "int64")):
             typ = node.type.find()
             if (not types.is_var(typ["width"]) and
+                    len(node.args) == 1 and
                     builtins.is_int(node.args[0].type) and
                     types.is_var(node.args[0].type.find()["width"])):
                 if isinstance(node.args[0], asttyped.BinOpT):
@@ -29,3 +28,20 @@ class CastMonomorphizer(algorithm.Visitor):
                 node.args[0].type.unify(typ)

+        if types.is_builtin(node.func.type, "int") or \
+                types.is_builtin(node.func.type, "round"):
+            typ = node.type.find()
+            if types.is_var(typ["width"]):
+                typ["width"].unify(types.TValue(32))
+
+        self.generic_visit(node)
+
+    def visit_CoerceT(self, node):
+        if isinstance(node.value, asttyped.NumT) and \
+                builtins.is_int(node.type) and \
+                builtins.is_int(node.value.type) and \
+                not types.is_var(node.type["width"]) and \
+                types.is_var(node.value.type["width"]):
+            node.value.type.unify(node.type)
+
+        self.generic_visit(node)

View File

@@ -26,22 +26,3 @@ class IntMonomorphizer(algorithm.Visitor):
                 return
             node.type["width"].unify(types.TValue(width))
-
-    def visit_CallT(self, node):
-        self.generic_visit(node)
-
-        if types.is_builtin(node.func.type, "int") or \
-                types.is_builtin(node.func.type, "round"):
-            typ = node.type.find()
-            if types.is_var(typ["width"]):
-                typ["width"].unify(types.TValue(32))
-
-    def visit_CoerceT(self, node):
-        if isinstance(node.value, asttyped.NumT) and \
-                builtins.is_int(node.type) and \
-                builtins.is_int(node.value.type) and \
-                not types.is_var(node.type["width"]) and \
-                types.is_var(node.value.type["width"]):
-            node.value.type.unify(node.type)
-
-        self.generic_visit(node)

View File

@@ -1332,7 +1332,7 @@ class LLVMIRGenerator:
             llargptr = self.llbuilder.gep(llargs, [ll.Constant(lli32, index)])
             self.llbuilder.store(llargslot, llargptr)

-        if fun_type.async:
+        if fun_type.is_async:
             self.llbuilder.call(self.llbuiltin("rpc_send_async"),
                                 [llservice, lltagptr, llargs])
         else:
@@ -1342,7 +1342,7 @@ class LLVMIRGenerator:
         # Don't waste stack space on saved arguments.
         self.llbuilder.call(self.llbuiltin("llvm.stackrestore"), [llstackptr])

-        if fun_type.async:
+        if fun_type.is_async:
             # If this RPC is called using an `invoke` ARTIQ IR instruction, there will be
             # no other instructions in this basic block. Since this RPC is async, it cannot
             # possibly raise an exception, so add an explicit jump to the normal successor.
@@ -1556,6 +1556,11 @@ class LLVMIRGenerator:
             lleltsptr = llglobal.bitcast(lleltsary.type.element.as_pointer())
             llconst = ll.Constant(llty, [lleltsptr, ll.Constant(lli32, len(llelts))])
             return llconst
+        elif types.is_tuple(typ):
+            assert isinstance(value, tuple), fail_msg
+            llelts = [self._quote(v, t, lambda: path() + [str(i)])
+                      for i, (v, t) in enumerate(zip(value, typ.elts))]
+            return ll.Constant(llty, llelts)
         elif types.is_rpc(typ) or types.is_c_function(typ) or types.is_builtin_function(typ):
             # RPC, C and builtin functions have no runtime representation.
             return ll.Constant(llty, ll.Undefined)

View File

@@ -311,14 +311,14 @@ class TRPC(Type):
     :ivar ret: (:class:`Type`)
         return type
     :ivar service: (int) RPC service number
-    :ivar async: (bool) whether the RPC blocks until return
+    :ivar is_async: (bool) whether the RPC blocks until return
     """

     attributes = OrderedDict()

-    def __init__(self, ret, service, async=False):
+    def __init__(self, ret, service, is_async=False):
         assert isinstance(ret, Type)
-        self.ret, self.service, self.async = ret, service, async
+        self.ret, self.service, self.is_async = ret, service, is_async

     def find(self):
         return self
@@ -326,7 +326,7 @@ class TRPC(Type):
     def unify(self, other):
         if isinstance(other, TRPC) and \
                 self.service == other.service and \
-                self.async == other.async:
+                self.is_async == other.is_async:
             self.ret.unify(other.ret)
         elif isinstance(other, TVar):
             other.unify(self)
@@ -343,7 +343,7 @@ class TRPC(Type):
     def __eq__(self, other):
         return isinstance(other, TRPC) and \
                 self.service == other.service and \
-                self.async == other.async
+                self.is_async == other.is_async

     def __ne__(self, other):
         return not (self == other)
@@ -742,7 +742,7 @@ class TypePrinter(object):
             return signature
         elif isinstance(typ, TRPC):
             return "[rpc{} #{}](...)->{}".format(typ.service,
-                                                 " async" if typ.async else "",
+                                                 " async" if typ.is_async else "",
                                                  self.name(typ.ret, depth + 1))
         elif isinstance(typ, TBuiltinFunction):
             return "<function {}>".format(typ.name)

View File

@@ -51,10 +51,6 @@ class Region:
                (other.range.begin_pos <= self.range.begin_pos <= other.range.end_pos and \
                 self.range.end_pos > other.range.end_pos)

-    def contract(self, other):
-        if not self.range:
-            self.range = other.range
-
     def outlives(lhs, rhs):
         if not isinstance(lhs, Region): # lhs lives nonlexically
             return True
@@ -69,8 +65,11 @@ class Region:

 class RegionOf(algorithm.Visitor):
     """
-    Visit an expression and return the list of regions that must
-    be alive for the expression to execute.
+    Visit an expression and return the region that must be alive for the
+    expression to execute.
+
+    For expressions involving multiple regions, the shortest-lived one is
+    returned.
     """

     def __init__(self, env_stack, youngest_region):
@@ -301,17 +300,20 @@ class EscapeValidator(algorithm.Visitor):
     def visit_assignment(self, target, value):
         value_region = self._region_of(value)

-        # If this is a variable, we might need to contract the live range.
-        if isinstance(value_region, Region):
-            for name in self._names_of(target):
-                region = self._region_of(name)
-                if isinstance(region, Region):
-                    region.contract(value_region)
-
         # If we assign to an attribute of a quoted value, there will be no names
         # in the assignment lhs.
         target_names = self._names_of(target) or []

+        # Adopt the value region for any variables declared on the lhs.
+        for name in target_names:
+            region = self._region_of(name)
+            if isinstance(region, Region) and not region.present():
+                # Find the name's environment to overwrite the region.
+                for env in self.env_stack[::-1]:
+                    if name.id in env:
+                        env[name.id] = value_region
+                        break
+
         # The assigned value should outlive the assignee
         target_regions = [self._region_of(name) for name in target_names]
         for target_region in target_regions:

View File

@@ -176,7 +176,7 @@ class AD53xx:
                        (AD53XX_CMD_SPECIAL | AD53XX_SPECIAL_CONTROL | 0b0010) << 8)
         if not blind:
             ctrl = self.read_reg(channel=0, op=AD53XX_READ_CONTROL)
-            if ctrl != 0b0010:
+            if (ctrl & 0b10111) != 0b00010:
                 raise ValueError("DAC CONTROL readback mismatch")
             delay(15*us)

View File

@@ -256,9 +256,11 @@ class AD9910:
         self.write32(_AD9910_REG_CFR1, 0x00000002 | (bits << 4))
         self.cpld.io_update.pulse(1*us)

+    # KLUDGE: ref_time default argument is explicitly marked int64() to avoid
+    # silent truncation of explicitly passed timestamps. (Compiler bug?)
     @kernel
     def set_mu(self, ftw, pow=0, asf=0x3fff, phase_mode=_PHASE_MODE_DEFAULT,
-               ref_time=-1, profile=0):
+               ref_time=int64(-1), profile=0):
         """Set profile 0 data in machine units.

         This uses machine units (FTW, POW, ASF). The frequency tuning word
@@ -285,8 +287,9 @@ class AD9910:
         """
         if phase_mode == _PHASE_MODE_DEFAULT:
             phase_mode = self.phase_mode
-        # Align to coarse RTIO which aligns SYNC_CLK
-        at_mu(now_mu() & ~0xf)
+        # Align to coarse RTIO which aligns SYNC_CLK. I.e. clear fine TSC
+        # This will not cause a collision or sequence error.
+        at_mu(now_mu() & ~7)
         if phase_mode != PHASE_MODE_CONTINUOUS:
             # Auto-clear phase accumulator on IO_UPDATE.
             # This is active already for the next IO_UPDATE
@@ -302,8 +305,8 @@ class AD9910:
             pow += dt*ftw*self.sysclk_per_mu >> 16
         self.write64(_AD9910_REG_PROFILE0 + profile, (asf << 16) | pow, ftw)
         delay_mu(int64(self.io_update_delay))
-        self.cpld.io_update.pulse_mu(8)  # assumes 8 mu > t_SYSCLK
-        at_mu(now_mu() & ~0xf)
+        self.cpld.io_update.pulse_mu(8)  # assumes 8 mu > t_SYN_CCLK
+        at_mu(now_mu() & ~7)  # clear fine TSC again
         if phase_mode != PHASE_MODE_CONTINUOUS:
             self.write32(_AD9910_REG_CFR1, 0x00000002)
             # future IO_UPDATE will activate
@@ -335,7 +338,7 @@ class AD9910:
     @kernel
     def set(self, frequency, phase=0.0, amplitude=1.0,
-            phase_mode=_PHASE_MODE_DEFAULT, ref_time=-1, profile=0):
+            phase_mode=_PHASE_MODE_DEFAULT, ref_time=int64(-1), profile=0):
         """Set profile 0 data in SI units.

         .. seealso:: :meth:`set_mu`
@@ -424,10 +427,12 @@ class AD9910:
         This method first locates a valid SYNC_IN delay at zero validation
         window size (setup/hold margin) by scanning around `search_seed`. It
         then looks for similar valid delays at successively larger validation
-        window sizes until none can be found. It then deacreses the validation
+        window sizes until none can be found. It then decreases the validation
         window a bit to provide some slack and stability and returns the
        optimal values.

+        This method and :meth:`tune_io_update_delay` can be run in any order.
+
         :param search_seed: Start value for valid SYNC_IN delay search.
             Defaults to 15 (half range).
         :return: Tuple of optimal delay and window size.
@@ -453,7 +458,7 @@ class AD9910:
             # integrate SMP_ERR statistics for a few hundred cycles
             delay(100*us)
             err = urukul_sta_smp_err(self.cpld.sta_read())
-            delay(40*us)  # slack
+            delay(100*us)  # slack
             if not (err >> (self.chip_select - 4)) & 1:
                 next_seed = in_delay
                 break
@@ -496,16 +501,17 @@ class AD9910:
         self.write32(_AD9910_REG_RAMP_RATE, 0x00010000)
         # dFTW = 1, (work around negative slope)
         self.write64(_AD9910_REG_RAMP_STEP, -1, 0)
-        # delay io_update after RTIO/2 edge
-        t = now_mu() + 0x10 & ~0xf
+        # delay io_update after RTIO edge
+        t = now_mu() + 8 & ~7
         at_mu(t + delay_start)
-        self.cpld.io_update.pulse_mu(32 - delay_start)  # realign
+        # assumes a maximum t_SYNC_CLK period
+        self.cpld.io_update.pulse_mu(16 - delay_start)  # realign
         # disable DRG autoclear and LRR on io_update
         self.write32(_AD9910_REG_CFR1, 0x00000002)
         # stop DRG
         self.write64(_AD9910_REG_RAMP_STEP, 0, 0)
         at_mu(t + 0x1000 + delay_stop)
-        self.cpld.io_update.pulse_mu(32 - delay_stop)  # realign
+        self.cpld.io_update.pulse_mu(16 - delay_stop)  # realign
         ftw = self.read32(_AD9910_REG_FTW)  # read out effective FTW
         delay(100*us)  # slack
         # disable DRG
@@ -525,6 +531,8 @@ class AD9910:
         This method assumes that the IO_UPDATE TTLOut device has one machine
         unit resolution (SERDES).

+        This method and :meth:`tune_sync_delay` can be run in any order.
+
         :return: Stable IO_UPDATE delay to be passed to the constructor
             :class:`AD9910` via the device database.
         """

View File

@@ -1,7 +1,7 @@
 from numpy import int32, int64

 from artiq.language.core import kernel, delay, portable
-from artiq.language.units import us, ns
+from artiq.language.units import ms, us, ns
 from artiq.coredevice.ad9912_reg import *

 from artiq.coredevice import spi2 as spi
@@ -107,6 +107,7 @@ class AD9912:
         # I_cp = 375 µA, VCO high range
         self.write(AD9912_PLLCFG, 0b00000101, length=1)
         self.cpld.io_update.pulse(2*us)
+        delay(1*ms)

     @kernel
     def set_att_mu(self, att):
@@ -135,7 +136,7 @@ class AD9912:
         After the SPI transfer, the shared IO update pin is pulsed to
         activate the data.

-        :param ftw: Frequency tuning word: 32 bit unsigned.
+        :param ftw: Frequency tuning word: 48 bit unsigned.
         :param pow: Phase tuning word: 16 bit unsigned.
         """
         # streaming transfer of FTW and POW

View File

@@ -360,8 +360,8 @@ class SPIMaster2Handler(WishboneHandler):
     def process_message(self, message):
         self.stb.set_value("1")
         self.stb.set_value("0")
-        data = message.data
         if isinstance(message, OutputMessage):
+            data = message.data
             address = message.address
             if address == 1:
                 logger.debug("SPI config @%d data=0x%08x",
@@ -462,7 +462,7 @@ def get_ref_period(devices):

 def get_dds_sysclk(devices):
     return get_single_device_argument(devices, "artiq.coredevice.ad9914",
-                                      ("ad9914",), "sysclk")
+                                      ("AD9914",), "sysclk")


 def create_channel_handlers(vcd_manager, devices, ref_period,
@@ -485,8 +485,7 @@ def create_channel_handlers(vcd_manager, devices, ref_period,
             if dds_bus_channel in channel_handlers:
                 dds_handler = channel_handlers[dds_bus_channel]
             else:
-                dds_handler = DDSHandler(vcd_manager, desc["class"],
-                                         dds_onehot_sel, dds_sysclk)
+                dds_handler = DDSHandler(vcd_manager, dds_onehot_sel, dds_sysclk)
                 channel_handlers[dds_bus_channel] = dds_handler
             dds_handler.add_dds_channel(name, dds_channel)
         if (desc["module"] == "artiq.coredevice.spi2" and

View File

@@ -307,7 +307,7 @@ class CommKernel:
             args.append(value)

     def _skip_rpc_value(self, tags):
-        tag = tags.pop(0)
+        tag = chr(tags.pop(0))
         if tag == "t":
             length = tags.pop(0)
             for _ in range(length):
@@ -403,7 +403,7 @@ class CommKernel:
             return msg

     def _serve_rpc(self, embedding_map):
-        async = self._read_bool()
+        is_async = self._read_bool()
         service_id = self._read_int32()
         args, kwargs = self._receive_rpc_args(embedding_map)
         return_tags = self._read_bytes()
@@ -413,9 +413,9 @@ class CommKernel:
         else:
             service = embedding_map.retrieve_object(service_id)
         logger.debug("rpc service: [%d]%r%s %r %r -> %s", service_id, service,
-                     (" (async)" if async else ""), args, kwargs, return_tags)
+                     (" (async)" if is_async else ""), args, kwargs, return_tags)

-        if async:
+        if is_async:
             service(*args, **kwargs)
             return

View File

@@ -23,7 +23,7 @@ class Grabber:
         count_width = min(31, 2*res_width + 16 - count_shift)
         # This value is inserted by the gateware to mark the start of a series of
         # ROI engine outputs for one video frame.
-        self.sentinel = int32(2**count_width)
+        self.sentinel = int32(int64(2**count_width))

     @kernel
     def setup_roi(self, n, x0, y0, x1, y1):

View File

@@ -1,7 +1,7 @@
 from artiq.language.core import kernel, delay, portable, at_mu, now_mu
 from artiq.language.units import us, ms

-from numpy import int32
+from numpy import int32, int64

 from artiq.coredevice import spi2 as spi

@@ -175,7 +175,7 @@ class CPLD:
         self.cfg_reg = urukul_cfg(rf_sw=rf_sw, led=0, profile=0,
                                   io_update=0, mask_nu=0, clk_sel=clk_sel,
                                   sync_sel=sync_sel, rst=0, io_rst=0)
-        self.att_reg = int32(att)
+        self.att_reg = int32(int64(att))
         self.sync_div = sync_div

     @kernel
@@ -327,7 +327,7 @@ class CPLD:
         and align it to the current RTIO timestamp.

         The SYNC_IN signal is derived from the coarse RTIO clock
-        and the divider must be a power of two two.
+        and the divider must be a power of two.
         Configure ``sync_sel == 0``.

         :param div: SYNC_IN frequency divider. Must be a power of two.

View File

@@ -243,7 +243,7 @@ def setup_from_ddb(ddb):
 class _DeviceManager:
     def __init__(self):
         self.core_addr = None
-        self.new_core_addr = asyncio.Event()
+        self.reconnect_core = asyncio.Event()
         self.core_connection = None
         self.core_connector_task = asyncio.ensure_future(self.core_connector())

@@ -268,7 +268,7 @@ class _DeviceManager:
         if core_addr != self.core_addr:
             self.core_addr = core_addr
-            self.new_core_addr.set()
+            self.reconnect_core.set()
         self.dds_sysclk = dds_sysclk

@@ -383,19 +383,25 @@ class _DeviceManager:
             widget.cur_override_level = bool(value)
             widget.refresh_display()

+    def disconnect_cb(self):
+        logger.error("lost connection to core device moninj")
+        self.reconnect_core.set()
+
     async def core_connector(self):
         while True:
-            await self.new_core_addr.wait()
-            self.new_core_addr.clear()
+            await self.reconnect_core.wait()
+            self.reconnect_core.clear()
             if self.core_connection is not None:
                 await self.core_connection.close()
                 self.core_connection = None
             new_core_connection = CommMonInj(self.monitor_cb, self.injection_status_cb,
-                lambda: logger.error("lost connection to core device moninj"))
+                self.disconnect_cb)
             try:
                 await new_core_connection.connect(self.core_addr, 1383)
             except:
                 logger.error("failed to connect to core device moninj", exc_info=True)
+                await asyncio.sleep(10.)
+                self.reconnect_core.set()
             else:
                 self.core_connection = new_core_connection
                 for ttl_channel in self.ttl_widgets.keys():

View File

@@ -178,13 +178,17 @@ class Controllers:
             raise ValueError

     def __setitem__(self, k, v):
-        if (isinstance(v, dict) and v["type"] == "controller" and
-                self.host_filter in get_ip_addresses(v["host"])):
-            v["command"] = v["command"].format(name=k,
-                                               bind=self.host_filter,
-                                               port=v["port"])
-            self.queue.put_nowait(("set", (k, v)))
-            self.active_or_queued.add(k)
+        try:
+            if (isinstance(v, dict) and v["type"] == "controller" and
+                    self.host_filter in get_ip_addresses(v["host"])):
+                v["command"] = v["command"].format(name=k,
+                                                   bind=self.host_filter,
+                                                   port=v["port"])
+                self.queue.put_nowait(("set", (k, v)))
+                self.active_or_queued.add(k)
+        except:
+            logger.error("Failed to process device database entry %s", k,
+                         exc_info=True)

     def __delitem__(self, k):
         if k in self.active_or_queued:

View File

@@ -0,0 +1,234 @@
core_addr = "kasli-1.lab.m-labs.hk"
device_db = {
"core": {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-9}
},
"core_log": {
"type": "controller",
"host": "::1",
"port": 1068,
"command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",
"class": "CoreCache"
},
"core_dma": {
"type": "local",
"module": "artiq.coredevice.dma",
"class": "CoreDMA"
},
"i2c_switch0": {
"type": "local",
"module": "artiq.coredevice.i2c",
"class": "PCA9548",
"arguments": {"address": 0xe0}
},
"i2c_switch1": {
"type": "local",
"module": "artiq.coredevice.i2c",
"class": "PCA9548",
"arguments": {"address": 0xe2}
},
}
device_db.update({
"ttl" + str(i): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut" if i < 4 else "TTLOut",
"arguments": {"channel": i},
} for i in range(16)
})
for j in range(3):
device_db.update({
"spi_urukul{}".format(j): {
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 16 + 7*j}
},
"ttl_urukul{}_sync".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLClockGen",
"arguments": {"channel": 17 + 7*j, "acc_width": 4}
},
"ttl_urukul{}_io_update".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 18 + 7*j}
},
"ttl_urukul{}_sw0".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 19 + 7*j}
},
"ttl_urukul{}_sw1".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 20 + 7*j}
},
"ttl_urukul{}_sw2".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 21 + 7*j}
},
"ttl_urukul{}_sw3".format(j): {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 22 + 7*j}
},
"urukul{}_cpld".format(j): {
"type": "local",
"module": "artiq.coredevice.urukul",
"class": "CPLD",
"arguments": {
"spi_device": "spi_urukul{}".format(j),
"sync_device": "ttl_urukul{}_sync".format(j),
"io_update_device": "ttl_urukul{}_io_update".format(j),
"refclk": 125e6,
"clk_sel": 2
}
}
})
device_db.update({
"urukul{}_ch{}".format(j, i): {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 32,
"chip_select": 4 + i,
"cpld_device": "urukul{}_cpld".format(j),
"sw_device": "ttl_urukul{}_sw{}".format(j, i)
}
} for i in range(4)
})
device_db.update(
spi_urukul3={
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 37}
},
ttl_urukul3_io_update={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 38}
},
ttl_urukul3_sw0={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 39}
},
ttl_urukul3_sw1={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 40}
},
ttl_urukul3_sw2={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 41}
},
ttl_urukul3_sw3={
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 42}
},
urukul3_cpld={
"type": "local",
"module": "artiq.coredevice.urukul",
"class": "CPLD",
"arguments": {
"spi_device": "spi_urukul3",
"io_update_device": "ttl_urukul3_io_update",
"refclk": 125e6,
"clk_sel": 0
}
}
)
for i in range(4):
device_db["urukul3_ch" + str(i)] = {
"type": "local",
"module": "artiq.coredevice.ad9912",
"class": "AD9912",
"arguments": {
"pll_n": 8,
"chip_select": 4 + i,
"cpld_device": "urukul3_cpld",
"sw_device": "ttl_urukul3_sw" + str(i)
}
}
device_db.update({
"spi_zotino0": {
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 43}
},
"ttl_zotino0_ldac": {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 44}
},
"ttl_zotino0_clr": {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 45}
},
"zotino0": {
"type": "local",
"module": "artiq.coredevice.zotino",
"class": "Zotino",
"arguments": {
"spi_device": "spi_zotino0",
"ldac_device": "ttl_zotino0_ldac",
"clr_device": "ttl_zotino0_clr"
}
}
})
device_db.update({
"led0": {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 46}
},
"led1": {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 47}
}
})

View File

@@ -0,0 +1,449 @@
# Autogenerated for the nrc variant
core_addr = "192.168.1.75"
device_db = {
"core": {
"type": "local",
"module": "artiq.coredevice.core",
"class": "Core",
"arguments": {"host": core_addr, "ref_period": 1e-09}
},
"core_log": {
"type": "controller",
"host": "::1",
"port": 1068,
"command": "aqctl_corelog -p {port} --bind {bind} " + core_addr
},
"core_cache": {
"type": "local",
"module": "artiq.coredevice.cache",
"class": "CoreCache"
},
"core_dma": {
"type": "local",
"module": "artiq.coredevice.dma",
"class": "CoreDMA"
},
"i2c_switch0": {
"type": "local",
"module": "artiq.coredevice.i2c",
"class": "PCA9548",
"arguments": {"address": 0xe0}
},
"i2c_switch1": {
"type": "local",
"module": "artiq.coredevice.i2c",
"class": "PCA9548",
"arguments": {"address": 0xe2}
},
}
# standalone peripherals
device_db["ttl0"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": 0x000000},
}
device_db["ttl1"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": 0x000001},
}
device_db["ttl2"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": 0x000002},
}
device_db["ttl3"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLInOut",
"arguments": {"channel": 0x000003},
}
device_db["ttl4"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000004},
}
device_db["ttl5"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000005},
}
device_db["ttl6"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000006},
}
device_db["ttl7"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000007},
}
device_db["ttl8"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000008},
}
device_db["ttl9"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000009},
}
device_db["ttl10"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000a},
}
device_db["ttl11"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000b},
}
device_db["ttl12"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000c},
}
device_db["ttl13"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000d},
}
device_db["ttl14"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000e},
}
device_db["ttl15"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00000f},
}
device_db["spi_urukul0"]={
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 0x000010}
}
device_db["ttl_urukul0_io_update"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000011}
}
device_db["ttl_urukul0_sw0"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000012}
}
device_db["ttl_urukul0_sw1"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000013}
}
device_db["ttl_urukul0_sw2"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000014}
}
device_db["ttl_urukul0_sw3"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000015}
}
device_db["urukul0_cpld"] = {
"type": "local",
"module": "artiq.coredevice.urukul",
"class": "CPLD",
"arguments": {
"spi_device": "spi_urukul0",
"sync_device": None,
"io_update_device": "ttl_urukul0_io_update",
"refclk": 125000000.0,
"clk_sel": 2
}
}
device_db["urukul0_ch0"] = {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 32,
"chip_select": 4,
"cpld_device": "urukul0_cpld",
"sw_device": "ttl_urukul0_sw0"
}
}
device_db["urukul0_ch1"] = {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 32,
"chip_select": 5,
"cpld_device": "urukul0_cpld",
"sw_device": "ttl_urukul0_sw1"
}
}
device_db["urukul0_ch2"] = {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 32,
"chip_select": 6,
"cpld_device": "urukul0_cpld",
"sw_device": "ttl_urukul0_sw2"
}
}
device_db["urukul0_ch3"] = {
"type": "local",
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {
"pll_n": 32,
"chip_select": 7,
"cpld_device": "urukul0_cpld",
"sw_device": "ttl_urukul0_sw3"
}
}
device_db["spi_urukul1"]={
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 0x000016}
}
device_db["ttl_urukul1_io_update"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000017}
}
device_db["ttl_urukul1_sw0"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000018}
}
device_db["ttl_urukul1_sw1"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000019}
}
device_db["ttl_urukul1_sw2"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00001a}
}
device_db["ttl_urukul1_sw3"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00001b}
}
device_db["urukul1_cpld"] = {
"type": "local",
"module": "artiq.coredevice.urukul",
"class": "CPLD",
"arguments": {
"spi_device": "spi_urukul1",
"sync_device": None,
"io_update_device": "ttl_urukul1_io_update",
"refclk": 125000000.0,
"clk_sel": 2
}
}
device_db["urukul1_ch0"] = {
"type": "local",
"module": "artiq.coredevice.ad9912",
"class": "AD9912",
"arguments": {
"pll_n": 8,
"chip_select": 4,
"cpld_device": "urukul1_cpld",
"sw_device": "ttl_urukul1_sw0"
}
}
device_db["urukul1_ch1"] = {
"type": "local",
"module": "artiq.coredevice.ad9912",
"class": "AD9912",
"arguments": {
"pll_n": 8,
"chip_select": 5,
"cpld_device": "urukul1_cpld",
"sw_device": "ttl_urukul1_sw1"
}
}
device_db["urukul1_ch2"] = {
"type": "local",
"module": "artiq.coredevice.ad9912",
"class": "AD9912",
"arguments": {
"pll_n": 8,
"chip_select": 6,
"cpld_device": "urukul1_cpld",
"sw_device": "ttl_urukul1_sw2"
}
}
device_db["urukul1_ch3"] = {
"type": "local",
"module": "artiq.coredevice.ad9912",
"class": "AD9912",
"arguments": {
"pll_n": 8,
"chip_select": 7,
"cpld_device": "urukul1_cpld",
"sw_device": "ttl_urukul1_sw3"
}
}
device_db["spi_sampler0_adc"] = {
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 0x00001c}
}
device_db["spi_sampler0_pgia"] = {
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 0x00001d}
}
device_db["spi_sampler0_cnv"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x00001e},
}
device_db["sampler0"] = {
"type": "local",
"module": "artiq.coredevice.sampler",
"class": "Sampler",
"arguments": {
"spi_adc_device": "spi_sampler0_adc",
"spi_pgia_device": "spi_sampler0_pgia",
"cnv_device": "spi_sampler0_cnv"
}
}
device_db["spi_zotino0"] = {
"type": "local",
"module": "artiq.coredevice.spi2",
"class": "SPIMaster",
"arguments": {"channel": 0x00001f}
}
device_db["ttl_zotino0_ldac"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000020}
}
device_db["ttl_zotino0_clr"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000021}
}
device_db["zotino0"] = {
"type": "local",
"module": "artiq.coredevice.zotino",
"class": "Zotino",
"arguments": {
"spi_device": "spi_zotino0",
"ldac_device": "ttl_zotino0_ldac",
"clr_device": "ttl_zotino0_clr"
}
}
device_db["led0"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000022}
}
device_db["led1"] = {
"type": "local",
"module": "artiq.coredevice.ttl",
"class": "TTLOut",
"arguments": {"channel": 0x000023}
}

View File

@@ -1,8 +1,12 @@
 import sys
+import os
 import select

 from artiq.experiment import *

+if os.name == "nt":
+    import msvcrt
+

 def chunker(seq, size):
     res = []
@@ -16,11 +20,17 @@ def chunker(seq, size):

 def is_enter_pressed() -> TBool:
-    if select.select([sys.stdin,], [], [], 0.0)[0]:
-        sys.stdin.read(1)
-        return True
+    if os.name == "nt":
+        if msvcrt.kbhit() and msvcrt.getch() == b"\r":
+            return True
+        else:
+            return False
     else:
-        return False
+        if select.select([sys.stdin, ], [], [], 0.0)[0]:
+            sys.stdin.read(1)
+            return True
+        else:
+            return False


 class KasliTester(EnvExperiment):
@@ -270,14 +280,15 @@ class KasliTester(EnvExperiment):
         zotino.load()

     def test_zotinos(self):
-        print("*** Testing Zotino DACs.")
-        print("Voltages:")
-        for card_n, (card_name, card_dev) in enumerate(self.zotinos):
-            voltages = [2*card_n + (-1)**i*0.1*(i//2+1) for i in range(32)]
-            print(card_name, " ".join(["{:.1f}".format(x) for x in voltages]))
-            self.set_zotino_voltages(card_dev, voltages)
-        print("Press ENTER when done.")
-        input()
+        if self.zotinos:
+            print("*** Testing Zotino DACs.")
+            print("Voltages:")
+            for card_n, (card_name, card_dev) in enumerate(self.zotinos):
+                voltages = [2*card_n + (-1)**i*0.1*(i//2+1) for i in range(32)]
+                print(card_name, " ".join(["{:.1f}".format(x) for x in voltages]))
+                self.set_zotino_voltages(card_dev, voltages)
+            print("Press ENTER when done.")
+            input()

     @kernel
     def grabber_capture(self, card_dev, rois):
@@ -297,21 +308,22 @@ class KasliTester(EnvExperiment):
             card_dev.input_mu(n)
             self.core.break_realtime()
         card_dev.gate_roi(0)
-        print("ROI sums: {}".format(n))
+        print("ROI sums:", n)

     def test_grabbers(self):
-        print("*** Testing Grabber Frame Grabbers.")
-        print("Activate the camera's frame grabber output, type 'g', press "
-              "ENTER, and trigger the camera.")
-        print("Just press ENTER to skip the test.")
-        if input().strip().lower() != "g":
-            print("skipping...")
-            return
-        rois = [[0, 0, 0, 2, 2], [1, 0, 0, 2048, 2048]]
-        print("ROIs: {}".format(rois))
-        for card_n, (card_name, card_dev) in enumerate(self.grabbers):
-            print(card_name)
-            self.grabber_capture(card_dev, rois)
+        if self.grabbers:
+            print("*** Testing Grabber Frame Grabbers.")
+            print("Activate the camera's frame grabber output, type 'g', press "
+                  "ENTER, and trigger the camera.")
+            print("Just press ENTER to skip the test.")
+            if input().strip().lower() != "g":
+                print("skipping...")
+                return
+            rois = [[0, 0, 0, 2, 2], [1, 0, 0, 2048, 2048]]
+            print("ROIs:", rois)
+            for card_n, (card_name, card_dev) in enumerate(self.grabbers):
+                print(card_name)
+                self.grabber_capture(card_dev, rois)

     def run(self):
         print("****** Kasli system tester ******")
View File

@@ -6,9 +6,9 @@ class DDSTest(EnvExperiment):
     def build(self):
         self.setattr_device("core")
-        self.setattr_device("dds0")
-        self.setattr_device("dds1")
-        self.setattr_device("dds2")
+        self.dds0 = self.get_device("ad9914dds0")
+        self.dds1 = self.get_device("ad9914dds1")
+        self.dds2 = self.get_device("ad9914dds2")
         self.setattr_device("ttl0")
         self.setattr_device("ttl1")
         self.setattr_device("ttl2")

View File

@@ -44,7 +44,7 @@ class TDR(EnvExperiment):
         try:
             self.many(n, self.core.seconds_to_mu(pulse))
         except PulseNotReceivedError:
-            print("to few edges: cable too long or wiring bad")
+            print("too few edges: cable too long or wiring bad")
         else:
             print(self.t)
             t_rise = mu_to_seconds(self.t[0], self.core)/n - latency

View File

@@ -207,7 +207,7 @@ pub extern fn main() -> i32 {
     println!(r"|_| |_|_|____/ \___/ \____|");
     println!("");
    println!("MiSoC Bootloader");
-    println!("Copyright (c) 2017-2018 M-Labs Limited");
+    println!("Copyright (c) 2017-2019 M-Labs Limited");
     println!("");
 
     if startup() {

View File

@@ -68,6 +68,7 @@ static mut API: &'static [(&'static str, *const ())] = &[
     api!(sqrt),
     api!(round),
     api!(floor),
+    api!(fmod),
 
     /* exceptions */
     api!(_Unwind_Resume = ::unwind::_Unwind_Resume),
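For context, the floating-point remainder operator in kernel code is what ends up calling fmod at run time, so exposing the symbol above makes expressions like the one in this illustrative sketch link correctly (experiment name and values are made up for the example):

from artiq.experiment import *

class FmodDemo(EnvExperiment):
    """Minimal sketch: float % float inside a kernel resolves to the fmod symbol above."""
    def build(self):
        self.setattr_device("core")

    @kernel
    def run(self):
        x = 7.5 % 2.0  # evaluated on the core device via fmod()
        print(x)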

View File

@@ -9,6 +9,7 @@ enum State {
     Watch
 }
 
+#[derive(Clone, Copy)]
 struct Info {
     state: State,
     frame_size: (u16, u16),

View File

@@ -7,7 +7,8 @@ pub enum Error {
     Truncated { offset: usize },
     InvalidSize { offset: usize, size: usize },
     MissingSeparator { offset: usize },
-    Utf8Error(str::Utf8Error)
+    Utf8Error(str::Utf8Error),
+    NoFlash,
 }
 
 impl fmt::Display for Error {
@@ -24,40 +25,13 @@ impl fmt::Display for Error {
             &Error::MissingSeparator { offset } =>
                 write!(f, "missing separator at offset {}", offset),
             &Error::Utf8Error(err) =>
-                write!(f, "{}", err)
+                write!(f, "{}", err),
+            &Error::NoFlash =>
+                write!(f, "flash memory is not present"),
         }
     }
 }
-
-struct FmtWrapper<'a> {
-    buf: &'a mut [u8],
-    offset: usize,
-}
-
-impl<'a> FmtWrapper<'a> {
-    fn new(buf: &'a mut [u8]) -> Self {
-        FmtWrapper {
-            buf: buf,
-            offset: 0,
-        }
-    }
-
-    fn contents(&self) -> &[u8] {
-        &self.buf[..self.offset]
-    }
-}
-
-impl<'a> fmt::Write for FmtWrapper<'a> {
-    fn write_str(&mut self, s: &str) -> fmt::Result {
-        let bytes = s.as_bytes();
-        let remainder = &mut self.buf[self.offset..];
-        let remainder = &mut remainder[..bytes.len()];
-        remainder.copy_from_slice(bytes);
-        self.offset += bytes.len();
-        Ok(())
-    }
-}
 
 #[cfg(has_spiflash)]
 mod imp {
     use core::str;
@@ -65,8 +39,37 @@ mod imp {
     use cache;
     use spiflash;
     use super::Error;
+    use core::fmt;
     use core::fmt::Write;
-    use super::FmtWrapper;
+
+    struct FmtWrapper<'a> {
+        buf: &'a mut [u8],
+        offset: usize,
+    }
+
+    impl<'a> FmtWrapper<'a> {
+        fn new(buf: &'a mut [u8]) -> Self {
+            FmtWrapper {
+                buf: buf,
+                offset: 0,
+            }
+        }
+
+        fn contents(&self) -> &[u8] {
+            &self.buf[..self.offset]
+        }
+    }
+
+    impl<'a> fmt::Write for FmtWrapper<'a> {
+        fn write_str(&mut self, s: &str) -> fmt::Result {
+            let bytes = s.as_bytes();
+            let remainder = &mut self.buf[self.offset..];
+            let remainder = &mut remainder[..bytes.len()];
+            remainder.copy_from_slice(bytes);
+            self.offset += bytes.len();
+            Ok(())
+        }
+    }
 
     // One flash sector immediately before the firmware.
     const ADDR: usize = ::mem::FLASH_BOOT_ADDRESS - spiflash::SECTOR_SIZE;
@@ -284,24 +287,26 @@ mod imp {
 
 #[cfg(not(has_spiflash))]
 mod imp {
+    use super::Error;
+
     pub fn read<F: FnOnce(Result<&[u8], Error>) -> R, R>(_key: &str, f: F) -> R {
-        f(Err(()))
+        f(Err(Error::NoFlash))
     }
 
     pub fn read_str<F: FnOnce(Result<&str, Error>) -> R, R>(_key: &str, f: F) -> R {
-        f(Err(()))
+        f(Err(Error::NoFlash))
     }
 
     pub fn write(_key: &str, _value: &[u8]) -> Result<(), Error> {
-        Err(())
+        Err(Error::NoFlash)
     }
 
     pub fn remove(_key: &str) -> Result<(), Error> {
-        Err(())
+        Err(Error::NoFlash)
     }
 
     pub fn erase() -> Result<(), Error> {
-        Err(())
+        Err(Error::NoFlash)
     }
 }

View File

@@ -255,13 +255,14 @@ mod tag {
             Tag::Int32 => 4,
             Tag::Int64 => 8,
             Tag::Float64 => 8,
-            Tag::String => 4,
+            Tag::String => 8,
             Tag::Bytes => 8,
             Tag::ByteArray => 8,
             Tag::Tuple(it, arity) => {
                 let mut size = 0;
+                let mut it = it.clone();
                 for _ in 0..arity {
-                    let tag = it.clone().next().expect("truncated tag");
+                    let tag = it.next().expect("truncated tag");
                     size += tag.size();
                 }
                 size

View File

@@ -223,7 +223,7 @@ unsafe fn kern_load(io: &Io, session: &mut Session, library: &[u8])
                 Err(Error::Load(format!("{}", error)))
             }
             other =>
-                unexpected!("unexpected reply from kernel CPU: {:?}", other)
+                unexpected!("unexpected kernel CPU reply to load request: {:?}", other)
         }
     })
 }
@@ -274,15 +280,22 @@ fn process_host_message(io: &Io,
             let slot = kern_recv(io, |reply| {
                 match reply {
                     &kern::RpcRecvRequest(slot) => Ok(slot),
-                    other => unexpected!("unexpected reply from kernel CPU: {:?}", other)
+                    other => unexpected!(
+                        "expected root value slot from kernel CPU, not {:?}", other)
                 }
             })?;
             rpc::recv_return(stream, &tag, slot, &|size| -> Result<_, Error<SchedError>> {
+                if size == 0 {
+                    // Don't try to allocate zero-length values, as RpcRecvReply(0) is
+                    // used to terminate the kernel-side receive loop.
+                    return Ok(0 as *mut ())
+                }
                 kern_send(io, &kern::RpcRecvReply(Ok(size)))?;
                 Ok(kern_recv(io, |reply| {
                     match reply {
                         &kern::RpcRecvRequest(slot) => Ok(slot),
-                        other => unexpected!("unexpected reply from kernel CPU: {:?}", other)
+                        other => unexpected!(
+                            "expected nested value slot from kernel CPU, not {:?}", other)
                     }
                 })?)
             })?;
@@ -301,8 +308,8 @@ fn process_host_message(io: &Io,
             kern_recv(io, |reply| {
                 match reply {
                     &kern::RpcRecvRequest(_) => Ok(()),
-                    other =>
-                        unexpected!("unexpected reply from kernel CPU: {:?}", other)
+                    other => unexpected!(
+                        "expected (ignored) root value slot from kernel CPU, not {:?}", other)
                 }
             })?;
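The zero-size guard above matters for RPCs whose return value occupies no storage at all. A hedged sketch of the situation, mirroring the _EmptyList regression test added further down in this change set (names are illustrative):

from artiq.experiment import *

class EmptyReturn(EnvExperiment):
    """Sketch: an RPC returning an empty list produces a zero-size allocation request."""
    def build(self):
        self.setattr_device("core")

    def host_values(self) -> TList(TInt32):
        return []  # zero bytes of payload; must not be confused with the end-of-loop marker

    @kernel
    def run(self):
        values = self.host_values()
        print(len(values))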

View File

@@ -302,6 +302,7 @@ pub extern fn main() -> i32 {
     unsafe {
         csr::drtio_transceiver::stable_clkin_write(1);
     }
+    clock::spin_us(1500); // wait for CPLL/QPLL lock
     init_rtio_crg();
 
     #[cfg(has_allaki_atts)]

View File

@@ -175,7 +175,7 @@ def _action_scan_devices(remote, args):
 
 def _action_scan_repository(remote, args):
-    if args.async:
+    if getattr(args, "async"):
         remote.scan_repository_async(args.revision)
     else:
         remote.scan_repository(args.revision)
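Background for this change: "async" became a hard keyword in Python 3.7, so the attribute that argparse creates for an "--async" flag can no longer be reached with dot syntax. A small standalone illustration:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--async", action="store_true")
args = parser.parse_args(["--async"])

# "args.async" is a SyntaxError on Python 3.7+; getattr() sidesteps the keyword.
print(getattr(args, "async"))  # True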

View File

@@ -213,6 +213,10 @@ def main():
     smgr.start()
     atexit_register_coroutine(smgr.stop)
 
+    # work around for https://github.com/m-labs/artiq/issues/1307
+    d_ttl_dds.ttl_dock.show()
+    d_ttl_dds.dds_dock.show()
+
     # create first log dock if not already in state
     d_log0 = logmgr.first_log_dock()
     if d_log0 is not None:

View File

@@ -175,7 +175,7 @@ def _build_experiment(device_mgr, dataset_mgr, args):
         file = getattr(module, "__file__")
     expid = {
         "file": file,
-        "experiment": args.experiment,
+        "class_name": args.experiment,
         "arguments": arguments
     }
     device_mgr.virtual_devices["scheduler"].expid = expid

View File

@@ -493,6 +493,7 @@ class SUServo(_EEM):
         sampler_pads = servo_pads.SamplerPads(target.platform, eem_sampler)
         urukul_pads = servo_pads.UrukulPads(
             target.platform, eem_urukul0, eem_urukul1)
+        target.submodules += sampler_pads, urukul_pads
         # timings in units of RTIO coarse period
         adc_p = servo.ADCParams(width=16, channels=8, lanes=4, t_cnvh=4,
                                 # account for SCK DDR to CONV latency
@@ -505,7 +506,9 @@ class SUServo(_EEM):
                              channels=adc_p.channels, clk=clk)
         su = servo.Servo(sampler_pads, urukul_pads, adc_p, iir_p, dds_p)
         su = ClockDomainsRenamer("rio_phy")(su)
-        target.submodules += sampler_pads, urukul_pads, su
+        # explicitly name the servo submodule to enable the migen namer to derive
+        # a name for the adc return clock domain
+        setattr(target.submodules, "suservo_eem{}".format(eems_sampler[0]), su)
 
         ctrls = [rtservo.RTServoCtrl(ctrl) for ctrl in su.iir.ctrl]
         target.submodules += ctrls

View File

@@ -532,6 +532,76 @@ class NUDT(_StandaloneBase):
         self.add_rtio(self.rtio_channels)
 
 
+class Berkeley(_StandaloneBase):
+    def __init__(self, hw_rev=None, **kwargs):
+        if hw_rev is None:
+            hw_rev = "v1.1"
+        _StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
+
+        self.config["SI5324_AS_SYNTHESIZER"] = None
+        # self.config["SI5324_EXT_REF"] = None
+        self.config["RTIO_FREQUENCY"] = "125.0"
+        if hw_rev == "v1.0":
+            # EEM clock fan-out from Si5324, not MMCX
+            self.comb += self.platform.request("clk_sel").eq(1)
+
+        self.rtio_channels = []
+        eem.DIO.add_std(self, 0,
+                        ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X)
+        eem.DIO.add_std(self, 1,
+                        ttl_serdes_7series.Output_8X, ttl_serdes_7series.Output_8X)
+        eem.Urukul.add_std(self, 2, 3, ttl_serdes_7series.Output_8X,
+                           ttl_simple.ClockGen)
+        eem.Urukul.add_std(self, 4, 5, ttl_serdes_7series.Output_8X,
+                           ttl_simple.ClockGen)
+        eem.Urukul.add_std(self, 6, 7, ttl_serdes_7series.Output_8X,
+                           ttl_simple.ClockGen)
+        eem.Urukul.add_std(self, 9, 8, ttl_serdes_7series.Output_8X)
+        eem.Zotino.add_std(self, 10, ttl_serdes_7series.Output_8X)
+
+        for i in (1, 2):
+            sfp_ctl = self.platform.request("sfp_ctl", i)
+            phy = ttl_simple.Output(sfp_ctl.led)
+            self.submodules += phy
+            self.rtio_channels.append(rtio.Channel.from_phy(phy))
+
+        self.config["HAS_RTIO_LOG"] = None
+        self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
+        self.rtio_channels.append(rtio.LogChannel())
+
+        self.add_rtio(self.rtio_channels)
+
+
+class NRC(_StandaloneBase):
+    def __init__(self, hw_rev=None, **kwargs):
+        if hw_rev is None:
+            hw_rev = "v1.1"
+        _StandaloneBase.__init__(self, hw_rev=hw_rev, **kwargs)
+
+        self.config["SI5324_AS_SYNTHESIZER"] = None
+        self.config["RTIO_FREQUENCY"] = "125.0"
+
+        self.rtio_channels = []
+        eem.DIO.add_std(self, 0,
+                        ttl_serdes_7series.InOut_8X, ttl_serdes_7series.Output_8X)
+        eem.DIO.add_std(self, 1,
+                        ttl_serdes_7series.Output_8X, ttl_serdes_7series.Output_8X)
+        eem.Urukul.add_std(self, 2, 3, ttl_serdes_7series.Output_8X)
+        eem.Urukul.add_std(self, 4, 5, ttl_serdes_7series.Output_8X)
+        eem.Sampler.add_std(self, 6, 7, ttl_serdes_7series.Output_8X)
+        eem.Zotino.add_std(self, 8, ttl_serdes_7series.Output_8X)
+
+        for i in (1, 2):
+            sfp_ctl = self.platform.request("sfp_ctl", i)
+            phy = ttl_simple.Output(sfp_ctl.led)
+            self.submodules += phy
+            self.rtio_channels.append(rtio.Channel.from_phy(phy))
+
+        self.config["HAS_RTIO_LOG"] = None
+        self.config["RTIO_LOG_CHANNEL"] = len(self.rtio_channels)
+        self.rtio_channels.append(rtio.LogChannel())
+
+        self.add_rtio(self.rtio_channels)
+
+
 class PTB(_StandaloneBase):
     """PTB Kasli variant
@@ -1125,7 +1195,7 @@ class VLBAISatellite(_SatelliteBase):
 VARIANTS = {cls.__name__.lower(): cls for cls in [
     Opticlock, SUServo, PTB, PTB2, HUB, LUH,
-    SYSU, MITLL, MITLL2, USTC, Tsinghua, Tsinghua2, WIPM, NUDT,
+    SYSU, MITLL, MITLL2, USTC, Tsinghua, Tsinghua2, WIPM, NUDT, Berkeley, NRC,
     VLBAIMaster, VLBAISatellite, Tester, Master, Satellite]}

View File

@@ -321,7 +321,6 @@ class AppletsDock(QtWidgets.QDockWidget):
         self.main_window = main_window
         self.datasets_sub = datasets_sub
-        self.dock_to_item = dict()
         self.applet_uids = set()
 
         self.table = QtWidgets.QTreeWidget()
@@ -414,12 +413,12 @@ class AppletsDock(QtWidgets.QDockWidget):
         finally:
             self.table.itemChanged.connect(self.item_changed)
 
-    def create(self, uid, name, spec):
-        dock = _AppletDock(self.datasets_sub, uid, name, spec)
+    def create(self, item, name, spec):
+        dock = _AppletDock(self.datasets_sub, item.applet_uid, name, spec)
         self.main_window.addDockWidget(QtCore.Qt.RightDockWidgetArea, dock)
         dock.setFloating(True)
         asyncio.ensure_future(dock.start())
-        dock.sigClosed.connect(partial(self.on_dock_closed, dock))
+        dock.sigClosed.connect(partial(self.on_dock_closed, item, dock))
         return dock
 
     def item_changed(self, item, column):
@@ -437,15 +436,15 @@ class AppletsDock(QtWidgets.QDockWidget):
                 if item.applet_dock is None:
                     name = item.text(0)
                     spec = self.get_spec(item)
-                    dock = self.create(item.applet_uid, name, spec)
+                    dock = self.create(item, name, spec)
                     item.applet_dock = dock
                     if item.applet_geometry is not None:
                         dock.restoreGeometry(item.applet_geometry)
                         # geometry is now handled by main window state
                         item.applet_geometry = None
-                    self.dock_to_item[dock] = item
             else:
                 dock = item.applet_dock
+                item.applet_dock = None
                 if dock is not None:
                     # This calls self.on_dock_closed
                     dock.close()
@@ -455,12 +454,9 @@ class AppletsDock(QtWidgets.QDockWidget):
         else:
             raise ValueError
 
-    def on_dock_closed(self, dock):
-        item = self.dock_to_item[dock]
-        item.applet_dock = None
+    def on_dock_closed(self, item, dock):
         item.applet_geometry = dock.saveGeometry()
         asyncio.ensure_future(dock.terminate())
-        del self.dock_to_item[dock]
         item.setCheckState(0, QtCore.Qt.Unchecked)
 
     def get_untitled(self):

View File

@@ -118,7 +118,7 @@ def syscall(arg=None, flags={}):
         def inner_decorator(function):
             function.artiq_embedded = \
                 _ARTIQEmbeddedInfo(core_name=None, portable=False, function=None,
-                                   syscall=function.__name__, forbidden=False,
+                                   syscall=arg, forbidden=False,
                                    flags=set(flags))
             return function
         return inner_decorator
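With this fix, the string passed to @syscall is the symbol actually linked against, independent of the Python function name. A hedged sketch (the symbol name below is hypothetical, not a real runtime function):

from artiq.language.core import syscall
from artiq.language.types import TInt32, TNone

@syscall("my_runtime_helper", flags={"nounwind", "nowrite"})
def runtime_helper(value: TInt32) -> TNone:
    raise NotImplementedError("syscall not simulated")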

View File

@@ -214,7 +214,7 @@ class PrepareStage(TaskObject):
                     if run.worker.closed.is_set():
                         break
                 if run.worker.closed.is_set():
-                        continue
+                    continue
             run.status = RunStatus.preparing
             try:
                 await run.build()

View File

@@ -5,7 +5,7 @@ import socket
 
 __all__ = []
 
-if sys.version_info[:3] >= (3, 5, 2):
+if sys.version_info[:3] >= (3, 5, 2) and sys.version_info[:3] <= (3, 6, 6):
     import asyncio
 
     # See https://github.com/m-labs/artiq/issues/506
View File

@ -1,5 +1,5 @@
""" """
This module provide serialization and deserialization functions for Python This module provides serialization and deserialization functions for Python
objects. Its main features are: objects. Its main features are:
* Human-readable format compatible with the Python syntax. * Human-readable format compatible with the Python syntax.
@ -193,6 +193,7 @@ _eval_dict = {
"null": None, "null": None,
"false": False, "false": False,
"true": True, "true": True,
"inf": numpy.inf,
"slice": slice, "slice": slice,
"nan": numpy.nan, "nan": numpy.nan,

View File

@@ -244,7 +244,7 @@ class Publisher(AsyncioServer):
                     await writer.drain()
                 finally:
                     self._recipients[notifier_name].remove(queue)
-        except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError):
+        except (ConnectionResetError, ConnectionAbortedError, BrokenPipeError, TimeoutError):
             # subscribers disconnecting are a normal occurrence
             pass
         finally:

View File

@@ -180,7 +180,7 @@ class AD9910Test(ExperimentCase):
         self.execute(AD9910Exp, "set_speed_mu")
         dt = self.dataset_mgr.get("dt")
         print(dt)
-        self.assertLess(dt, 10*us)
+        self.assertLess(dt, 11*us)
 
     def test_sync_window(self):
         self.execute(AD9910Exp, "sync_window")
@@ -201,8 +201,8 @@ class AD9910Test(ExperimentCase):
         n = max(bins2)
         # no edge at optimal delay
         self.assertEqual(bins2[(dly + 1) & 3], 0)
-        # edge at expected position
-        self.assertEqual(bins2[(dly + 3) & 3], n)
+        # many edges near expected position
+        self.assertGreater(bins2[(dly + 3) & 3], n*.9)
 
     def test_sw_readback(self):
         self.execute(AD9910Exp, "sw_readback")

View File

@@ -61,6 +61,9 @@ class RoundtripTest(ExperimentCase):
     def test_list_tuple(self):
         self.assertRoundtrip(([1, 2], [3, 4]))
 
+    def test_list_mixed_tuple(self):
+        self.assertRoundtrip([(0x12345678, [("foo", [0.0, 1.0], [0, 1])])])
+
 
 class _DefaultArg(EnvExperiment):
     def build(self):
@@ -344,7 +347,43 @@ class _ListTuple(EnvExperiment):
                 [numpy.int32(base_b + i) for i in range(n)]
 
 
+class _NestedTupleList(EnvExperiment):
+    def build(self):
+        self.setattr_device("core")
+        self.data = [(0x12345678, [("foo", [0.0, 1.0], [2, 3])]),
+                     (0x76543210, [("bar", [4.0, 5.0], [6, 7])])]
+
+    def get_data(self) -> TList(TTuple(
+            [TInt32, TList(TTuple([TStr, TList(TFloat), TList(TInt32)]))])):
+        return self.data
+
+    @kernel
+    def run(self):
+        a = self.get_data()
+        if a != self.data:
+            raise ValueError
+
+
+class _EmptyList(EnvExperiment):
+    def build(self):
+        self.setattr_device("core")
+
+    def get_empty(self) -> TList(TInt32):
+        return []
+
+    @kernel
+    def run(self):
+        a = self.get_empty()
+        if a != []:
+            raise ValueError
+
+
 class ListTupleTest(ExperimentCase):
     def test_list_tuple(self):
-        exp = self.create(_ListTuple)
-        exp.run()
+        self.create(_ListTuple).run()
+
+    def test_nested_tuple_list(self):
+        self.create(_NestedTupleList).run()
+
+    def test_empty_list(self):
+        self.create(_EmptyList).run()

View File

@@ -248,12 +248,13 @@ class LoopbackGateTiming(EnvExperiment):
         gate_end_mu = now_mu()
 
         # gateware latency offset between gate and input
-        lat_offset = 12*8
+        lat_offset = 11*8
         out_mu = gate_start_mu - loop_delay_mu + lat_offset
         at_mu(out_mu)
         self.loop_out.pulse_mu(24)
 
         in_mu = self.loop_in.timestamp_mu(gate_end_mu)
+        print("timings: ", gate_start_mu, in_mu - lat_offset, gate_end_mu)
         if in_mu < 0:
             raise PulseNotReceived()
         if not (gate_start_mu <= (in_mu - lat_offset) <= gate_end_mu):

View File

@@ -0,0 +1,9 @@
+# RUN: %python -m artiq.compiler.testbench.embedding %s
+
+from artiq.language.core import *
+
+values = (1, 2)
+
+@kernel
+def entrypoint():
+    assert values == (1, 2)

View File

@@ -9,3 +9,7 @@ def foo():
 @kernel
 def entrypoint():
     foo()
+
+    # Test reassigning strings.
+    a = "a"
+    b = a

View File

@@ -7,3 +7,10 @@ assert (x, y) == (1, 2)
 lst = [1, 2, 3]
 assert [x*x for x in lst] == [1, 4, 9]
+
+assert [0] == [0]
+assert [0] != [1]
+assert [[0]] == [[0]]
+assert [[0]] != [[1]]
+assert [[[0]]] == [[[0]]]
+assert [[[0]]] != [[[1]]]

View File

@@ -5,3 +5,9 @@ x, y = 2, 1
 x, y = y, x
 assert x == 1 and y == 2
 
 assert (1, 2) + (3.0,) == (1, 2, 3.0)
+
+assert (0,) == (0,)
+assert (0,) != (1,)
+assert ([0],) == ([0],)
+assert ([0],) != ([1],)

View File

@@ -0,0 +1,8 @@
+# RUN: %python -m artiq.compiler.testbench.signature %s >%t
+# RUN: OutputCheck %s --file-to-check=%t
+
+x = 0x100000000
+# CHECK-L: x: numpy.int64
+
+y = int32(x)
+# CHECK-L: y: numpy.int32

View File

@@ -40,7 +40,7 @@ requirements:
     - quamash
     - pyqtgraph 0.10.0
     - pygit2
-    - aiohttp
+    - aiohttp >=3
     - pythonparser >=1.1
     - levenshtein

View File

@@ -42,7 +42,7 @@ requirements:
     - quamash
     - pyqtgraph 0.10.0
     - pygit2
-    - aiohttp
+    - aiohttp >=3
     - levenshtein
 
 test:

View File

@@ -84,7 +84,7 @@ master_doc = 'index'
 
 # General information about the project.
 project = 'ARTIQ'
-copyright = '2014-2018, M-Labs Limited'
+copyright = '2014-2019, M-Labs Limited'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the

View File

@@ -56,7 +56,7 @@ Preparing the build environment for the core device
 ---------------------------------------------------
 
 These steps are required to generate code that can run on the core
-device. They are necessary both for building the MiSoC BIOS
+device. They are necessary both for building the firmware
 and the ARTIQ kernels.
 
 * Install required host packages: ::
@@ -117,6 +117,7 @@ and the ARTIQ kernels.
         $ sudo mkdir /usr/local/rust-or1k
         $ sudo chown $USER.$USER /usr/local/rust-or1k
         $ make install
+        $ cd ..
 
         $ destdir="/usr/local/rust-or1k/lib/rustlib/or1k-unknown-none/lib/"
         $ rustc="rustc --out-dir ${destdir} -L ${destdir} --target or1k-unknown-none -g -C target-feature=+mul,+div,+ffl1,+cmov,+addc -C opt-level=s --crate-type rlib"
@@ -225,6 +226,10 @@ These steps are required to generate gateware bitstream (``.bit``) files, build
 .. _build-target-binaries:
 
+* For Kasli::
+
+        $ python3 -m artiq.gateware.targets.kasli -V <your_variant>
+
 * For KC705::
 
         $ python3 -m artiq.gateware.targets.kc705 -V nist_clock # or nist_qc2
@@ -233,13 +238,9 @@ These steps are required to generate gateware bitstream (``.bit``) files, build
 .. _flash-target-binaries:
 
-* Then, gather the binaries and flash them: ::
-
-        $ mkdir binaries
-        $ cp misoc_nist_qcX_<board>/gateware/top.bit binaries
-        $ cp misoc_nist_qcX_<board>/software/bios/bios.bin binaries
-        $ cp misoc_nist_qcX_<board>/software/runtime/runtime.fbi binaries
-        $ artiq_flash -d binaries
+* Then, flash the binaries: ::
+
+        $ artiq_flash --srcbuild artiq_kasli -V <your_variant>
 
 * Check that the board boots by running a serial terminal program (you may need to press its FPGA reconfiguration button or power-cycle it to load the gateware bitstream that was newly written into the flash): ::

View File

@@ -74,7 +74,7 @@ As part of the DRTIO link initialization, a real-time packet is sent by the core
 RTIO outputs
 ++++++++++++
 
-Controlling a remote RTIO output involves placing the RTIO event into the FIFO of the remote device. The core device maintains a cache of the space available in each channel FIFO of the remote device. If, according to the cache, there is space available, then a packet containing the event information (timestamp, address, channel, data) is sent immediately and the cached value is decremented by one. If, according to the cache, no space is available, then the core device sends a request for the space available in the remote FIFO and updates the cache. The process repeats until at least one FIFO entry is available for the event, at which point a packet containing the event information is sent as before.
+Controlling a remote RTIO output involves placing the RTIO event into the buffer of the destination. The core device maintains a cache of the buffer space available in each destination. If, according to the cache, there is space available, then a packet containing the event information (timestamp, address, channel, data) is sent immediately and the cached value is decremented by one. If, according to the cache, no space is available, then the core device sends a request for the space available in the destination and updates the cache. The process repeats until at least one remote buffer entry is available for the event, at which point a packet containing the event information is sent as before.
 
 Detecting underflow conditions is the responsibility of the core device; should an underflow occur then no DRTIO packet is transmitted. Sequence errors are handled similarly.
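The buffer-space bookkeeping described above is essentially a credit scheme; a hedged pseudo-Python sketch, with function and field names that are illustrative rather than the actual firmware API:

def post_remote_event(dest, event):
    # dest.credits caches how many buffer entries the destination is believed to have free
    while dest.credits == 0:
        dest.credits = query_buffer_space(dest)  # ask the satellite, refresh the cache
    send_event_packet(dest, event)               # (timestamp, address, channel, data)
    dest.credits -= 1                            # one cached entry consumed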

View File

@@ -21,7 +21,7 @@ The conda package contains pre-built binaries that you can directly flash to yo
 Installing Anaconda or Miniconda
 --------------------------------
 
-You can either install Anaconda (choose Python 3.5) from https://store.continuum.io/cshop/anaconda/ or install the more minimalistic Miniconda (choose Python 3.5) from http://conda.pydata.org/miniconda.html
+You can either install Anaconda from https://www.anaconda.com/download/ or install the more minimalistic Miniconda from https://conda.io/miniconda.html
 
 After installing either Anaconda or Miniconda, open a new terminal (also known as command line, console, or shell and denoted here as lines starting with ``$``) and verify the following command works::
@@ -35,17 +35,17 @@ Installing the ARTIQ packages
 .. note::
     On a system with a pre-existing conda installation, it is recommended to update conda to the latest version prior to installing ARTIQ.
 
-First add the conda-forge repository containing ARTIQ dependencies to your conda configuration::
-
-    $ conda config --prepend channels http://conda.anaconda.org/conda-forge/label/main
-
-Then add the M-Labs ``main`` Anaconda package repository containing stable releases and release candidates::
+Add the M-Labs ``main`` Anaconda package repository containing stable releases and release candidates::
 
-    $ conda config --prepend channels http://conda.anaconda.org/m-labs/label/main
+    $ conda config --prepend channels m-labs
 
 .. note::
-    To use the development versions of ARTIQ, also add the ``dev`` label (http://conda.anaconda.org/m-labs/label/dev).
-    Development versions are built for every change and contain more features, but are not as well-tested and are more likely to contain more bugs or inconsistencies than the releases in the ``main`` label.
+    To use the development versions of ARTIQ, also add the ``dev`` label (m-labs/label/dev).
+    Development versions are built for every change and contain more features, but are not as well-tested and are more likely to contain more bugs or inconsistencies than the releases in the default ``main`` label.
+
+Add the conda-forge repository containing ARTIQ dependencies to your conda configuration::
+
+    $ conda config --add channels conda-forge
 
 Then prepare to create a new conda environment with the ARTIQ package and the matching binaries for your hardware:
 choose a suitable name for the environment, for example ``artiq-main`` if you intend to track the main label, ``artiq-3`` for the 3.x release series, or ``artiq-2016-04-01`` if you consider the environment a snapshot of ARTIQ on 2016-04-01.
@@ -117,7 +117,7 @@ Configuring OpenOCD
 Some additional steps are necessary to ensure that OpenOCD can communicate with the FPGA board.
 
-On Linux, first ensure that the current user belongs to the ``plugdev`` group. If it does not, run ``sudo adduser $USER plugdev`` and relogin. If you installed OpenOCD using conda and are using the conda environment ``artiq-main``, then execute the statements below. If you are using a different environment, you will have to replace ``artiq-main`` with the name of your environment::
+On Linux, first ensure that the current user belongs to the ``plugdev`` group (i.e. `plugdev` shown when you run `$ groups`). If it does not, run ``sudo adduser $USER plugdev`` and relogin. If you installed OpenOCD using conda and are using the conda environment ``artiq-main``, then execute the statements below. If you are using a different environment, you will have to replace ``artiq-main`` with the name of your environment::
 
     $ sudo cp ~/.conda/envs/artiq-main/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
     $ sudo udevadm trigger
@@ -147,7 +147,7 @@ Then, you can flash the board:
 * For the KC705 board (selecting the appropriate hardware peripheral)::
 
-    $ artiq_flash -t kc705 -m [nist_clock/nist_qc2]
+    $ artiq_flash -t kc705 -V [nist_clock/nist_qc2]
 
 The SW13 switches also need to be set to 00001.
@@ -162,10 +162,10 @@ This should be done after either installation method (conda or source).
 .. _flash-mac-ip-addr:
 
-* Set the MAC and IP address in the :ref:`core device configuration flash storage <core-device-flash-storage>` (see above for the ``-t`` and ``-m`` options to ``artiq_flash`` that may be required): ::
+* Set the MAC and IP address in the :ref:`core device configuration flash storage <core-device-flash-storage>` (see above for the ``-t`` and ``-V`` options to ``artiq_flash`` that may be required): ::
 
     $ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx
-    $ artiq_flash -t [board] -m [adapter] -f flash_storage.img storage start
+    $ artiq_flash -t [board] -V [adapter] -f flash_storage.img storage start
 
 * (optional) Flash the idle kernel

View File

@@ -27,4 +27,4 @@ Website: https://m-labs.hk/artiq
 `Cite ARTIQ <http://dx.doi.org/10.5281/zenodo.51303>`_ as ``Bourdeauducq, Sébastien et al. (2016). ARTIQ 1.0. Zenodo. 10.5281/zenodo.51303``.
 
-Copyright (C) 2014-2018 M-Labs Limited. Licensed under GNU LGPL version 3+.
+Copyright (C) 2014-2019 M-Labs Limited. Licensed under GNU LGPL version 3+.

View File

@@ -40,7 +40,7 @@ Experiment scheduling
 Basics
 ------
 
-To use hardware resources more efficiently, potentially compute-intensive pre-computation and analysis phases of other experiments is executed in parallel with the body of the current experiment that accesses the hardware.
+To use hardware resources more efficiently, potentially compute-intensive pre-computation and analysis phases of other experiments are executed in parallel with the body of the current experiment that accesses the hardware.
 
 .. seealso:: These steps are implemented in :class:`~artiq.language.environment.Experiment`. However, user-written experiments should usually derive from (sub-class) :class:`artiq.language.environment.EnvExperiment`.
@@ -83,7 +83,7 @@ Multiple pipelines
 
 Multiple pipelines can operate in parallel inside the same master. It is the responsibility of the user to ensure that experiments scheduled in one pipeline will never conflict with those of another pipeline over resources (e.g. same devices).
 
-Pipelines are identified by their name, and are automatically created (when an experiment is scheduled with a pipeline name that does not exist) and destroyed (when it runs empty).
+Pipelines are identified by their name, and are automatically created (when an experiment is scheduled with a pipeline name that does not exist) and destroyed (when they run empty).
 
 Git integration
View File

@@ -55,21 +55,22 @@ Then later, when the wall clock reaches the respective timestamps the RTIO gatew
 The following diagram shows what is going on at the different levels of the software and gateware stack (assuming one machine unit of time is 1 ns):
 
 .. wavedrom::
+
     {
-        signal: [
-            {name: 'kernel', wave: 'x32.3x', data: ['on()', 'delay(2*us)', 'off()'], node: '..A.XB'},
-            {name: 'now', wave: '2...2.', data: ['7000', '9000'], node: '..P..Q'},
+        "signal": [
+            {"name": "kernel", "wave": "x32.3x", "data": ["on()", "delay(2*us)", "off()"], "node": "..A.XB"},
+            {"name": "now", "wave": "2...2.", "data": ["7000", "9000"], "node": "..P..Q"},
             {},
-            {name: 'slack', wave: 'x2x.2x', data: ['4400', '5800']},
+            {"name": "slack", "wave": "x2x.2x", "data": ["4400", "5800"]},
             {},
-            {name: 'rtio_counter', wave: 'x2x|2x|2x2x', data: ['2600', '3200', '7000', '9000'], node: ' V.W'},
-            {name: 'ttl', wave: 'x1.0', node: ' R.S', phase: -6.5},
-            { node: ' T.U', phase: -6.5}
+            {"name": "rtio_counter", "wave": "x2x|2x|2x2x", "data": ["2600", "3200", "7000", "9000"], "node": " V.W"},
+            {"name": "ttl", "wave": "x1.0", "node": " R.S", "phase": -6.5},
+            { "node": " T.U", "phase": -6.5}
         ],
-        edge: [
-            'A~>R', 'P~>R', 'V~>R', 'B~>S', 'Q~>S', 'W~>S',
-            'R-T', 'S-U', 'T<->U 2µs'
-        ],
+        "edge": [
+            "A~>R", "P~>R", "V~>R", "B~>S", "Q~>S", "W~>S",
+            "R-T", "S-U", "T<->U 2µs"
+        ]
     }
 
 The sequence is exactly equivalent to::
@@ -95,19 +96,20 @@ The experiment attempts to handle the exception by moving the cursor forward and
         ttl.on()
 
 .. wavedrom::
+
     {
-        signal: [
-            {name: 'kernel', wave: 'x34..2.3x', data: ['on()', 'RTIOUnderflow', 'delay()', 'on()'], node: '..AB....C', phase: -3},
-            {name: 'now_mu', wave: '2.....2', data: ['t0', 't1'], node: '.D.....E', phase: -4},
+        "signal": [
+            {"name": "kernel", "wave": "x34..2.3x", "data": ["on()", "RTIOUnderflow", "delay()", "on()"], "node": "..AB....C", "phase": -3},
+            {"name": "now_mu", "wave": "2.....2", "data": ["t0", "t1"], "node": ".D.....E", "phase": -4},
             {},
-            {name: 'slack', wave: '2x....2', data: ['< 0', '> 0'], node: '.T', phase: -4},
+            {"name": "slack", "wave": "2x....2", "data": ["< 0", "> 0"], "node": ".T", "phase": -4},
             {},
-            {name: 'rtio_counter', wave: 'x2x.2x....2x2', data: ['t0', '> t0', '< t1', 't1'], node: '............P'},
-            {name: 'tll', wave: 'x...........1', node: '.R..........S', phase: -.5}
+            {"name": "rtio_counter", "wave": "x2x.2x....2x2", "data": ["t0", "> t0", "< t1", "t1"], "node": "............P"},
+            {"name": "tll", "wave": "x...........1", "node": ".R..........S", "phase": -0.5}
         ],
-        edge: [
-            'A-~>R forbidden', 'D-~>R', 'T-~B exception',
-            'C~>S allowed', 'E~>S', 'P~>S'
-        ]
+        "edge": [
+            "A-~>R forbidden", "D-~>R", "T-~B exception",
+            "C~>S allowed", "E~>S", "P~>S"
+        ]
     }
@@ -170,17 +172,18 @@ In these situations where ``count()`` leads to a synchronization of timeline cur
 Similar situations arise with methods such as :meth:`artiq.coredevice.ttl.TTLInOut.sample_get` and :meth:`artiq.coredevice.ttl.TTLInOut.watch_done`.
 
 .. wavedrom::
+
     {
-        signal: [
-            {name: 'kernel', wave: '3..5.|2.3..x..', data: ['gate_rising()', 'count()', 'delay()', 'pulse()'], node: '.A.B..C.ZD.E'},
-            {name: 'now_mu', wave: '2.2..|..2.2.', node: '.P.Q....XV.W'},
+        "signal": [
+            {"name": "kernel", "wave": "3..5.|2.3..x..", "data": ["gate_rising()", "count()", "delay()", "pulse()"], "node": ".A.B..C.ZD.E"},
+            {"name": "now_mu", "wave": "2.2..|..2.2.", "node": ".P.Q....XV.W"},
             {},
             {},
-            {name: 'input gate', wave: 'x1.0', node: '.T.U', phase: -2.5},
-            {name: 'output', wave: 'x1.0', node: '.R.S', phase: -10.5}
+            {"name": "input gate", "wave": "x1.0", "node": ".T.U", "phase": -2.5},
+            {"name": "output", "wave": "x1.0", "node": ".R.S", "phase": -10.5}
         ],
-        edge: [
-            'A~>T', 'P~>T', 'B~>U', 'Q~>U', 'U~>C', 'D~>R', 'E~>S', 'V~>R', 'W~>S'
-        ]
+        "edge": [
+            "A~>T", "P~>T", "B~>U", "Q~>U", "U~>C", "D~>R", "E~>S", "V~>R", "W~>S"
+        ]
     }
@@ -214,20 +217,21 @@ This is demonstrated in the following example where a pulse is split across two
 Here, ``run()`` calls ``k1()`` which exits leaving the cursor one second after the rising edge and ``k2()`` then submits a falling edge at that position.
 
 .. wavedrom::
+
     {
-        signal: [
-            {name: 'kernel', wave: '3.2..2..|3.', data: ['k1: on()', 'k1: delay(dt)', 'k1->k2 swap', 'k2: off()'], node: '..A........B'},
-            {name: 'now', wave: '2....2...|.', data: ['t', 't+dt'], node: '..P........Q'},
+        "signal": [
+            {"name": "kernel", "wave": "3.2..2..|3.", "data": ["k1: on()", "k1: delay(dt)", "k1->k2 swap", "k2: off()"], "node": "..A........B"},
+            {"name": "now", "wave": "2....2...|.", "data": ["t", "t+dt"], "node": "..P........Q"},
             {},
             {},
-            {name: 'rtio_counter', wave: 'x......|2xx|2', data: ['t', 't+dt'], node: '........V...W'},
-            {name: 'ttl', wave: 'x1...0', node: '.R...S', phase: -7.5},
-            { node: ' T...U', phase: -7.5}
+            {"name": "rtio_counter", "wave": "x......|2xx|2", "data": ["t", "t+dt"], "node": "........V...W"},
+            {"name": "ttl", "wave": "x1...0", "node": ".R...S", "phase": -7.5},
+            { "node": " T...U", "phase": -7.5}
         ],
-        edge: [
-            'A~>R', 'P~>R', 'V~>R', 'B~>S', 'Q~>S', 'W~>S',
-            'R-T', 'S-U', 'T<->U dt'
-        ],
+        "edge": [
+            "A~>R", "P~>R", "V~>R", "B~>S", "Q~>S", "W~>S",
+            "R-T", "S-U", "T<->U dt"
+        ]
     }
@@ -239,18 +243,19 @@ If a previous kernel sets timeline cursor far in the future this effectively loc
 When a kernel should wait until all the events on a particular channel have been executed, use the :meth:`artiq.coredevice.ttl.TTLOut.sync` method of a channel:
 
 .. wavedrom::
+
     {
-        signal: [
-            {name: 'kernel', wave: 'x3x.|5.|x', data: ['on()', 'sync()'], node: '..A.....Y'},
-            {name: 'now', wave: '2..', data: ['7000'], node: '..P'},
+        "signal": [
+            {"name": "kernel", "wave": "x3x.|5.|x", "data": ["on()", "sync()"], "node": "..A.....Y"},
+            {"name": "now", "wave": "2..", "data": ["7000"], "node": "..P"},
             {},
             {},
-            {name: 'rtio_counter', wave: 'x2x.|..2x', data: ['2000', '7000'], node: ' ....V'},
-            {name: 'ttl', wave: 'x1', node: ' R', phase: -6.5},
+            {"name": "rtio_counter", "wave": "x2x.|..2x", "data": ["2000", "7000"], "node": " ....V"},
+            {"name": "ttl", "wave": "x1", "node": " R", "phase": -6.5}
         ],
-        edge: [
-            'A~>R', 'P~>R', 'V~>R', 'V~>Y'
-        ],
+        "edge": [
+            "A~>R", "P~>R", "V~>R", "V~>Y"
+        ]
     }
 
 RTIO reset

View File

@@ -901,6 +901,12 @@ def get_versions():
     # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
     # case we can only use expanded keywords.
 
+    override = os.getenv("VERSIONEER_OVERRIDE")
+    if override:
+        return {"version": override, "full-revisionid": None,
+                "dirty": None,
+                "error": None, "date": None}
+
     cfg = get_config()
     verbose = cfg.verbose
@@ -1404,6 +1410,12 @@ def get_versions(verbose=False):
     Returns dict with two keys: 'version' and 'full'.
     """
+    override = os.getenv("VERSIONEER_OVERRIDE")
+    if override:
+        return {"version": override, "full-revisionid": None,
+                "dirty": None,
+                "error": None, "date": None}
+
     if "versioneer" in sys.modules:
         # see the discussion in cmdclass.py:get_cmdclass()
         del sys.modules["versioneer"]