It was possible to crash the dashboard by opening the context menu
before an applet entry had been selected for the first time (e.g.
immediately after startup) and selecting one of the Group CCB
actions, as the enable update slot would not have been run.
This broke after b8cd163978, but the code was invalid to start
with; it would previously have crashed the code generator had it
actually been compiled.
(Allowing implicit conversion to bool would be a separate debate.)
The previous code could never have worked as-is, as the result slot
went unused, and it tried to append the load instruction to the
block just terminated with the invoke.
GitHub: Fixes #1506, #1531.
Since we don't implement any integer-like operations for TBool
(addition, bitwise not, etc.), TBool is currently strictly equivalent
to neither builtin bool nor numpy.bool_, but the differences show up
as very obvious compiler errors (operation not supported) rather than
silently different runtime behaviour.
Just mapping both to TBool is thus a huge improvement over the
current behaviour (where numpy.False_ is a true-like object). In
the future, we could still implement more operations for TBool,
presumably following numpy.bool_ rather than the builtin type,
just like builtin integers get translated to the numpy-like
TInt{32,64}.
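A minimal sketch of the resulting behaviour, assuming a typical kernel
with a numpy.bool_ host attribute (names are illustrative):
```
import numpy as np
from artiq.experiment import EnvExperiment, kernel

class BoolFlagExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.flag_py = False            # builtin bool
        self.flag_np = np.bool_(False)  # numpy.bool_

    @kernel
    def run(self):
        # Previously, self.flag_np was embedded as an opaque (true-like)
        # host object; with both mapped to TBool, this branch is
        # correctly not taken.
        if self.flag_np or self.flag_py:
            pass
        # Integer-like operations remain unimplemented and fail with a
        # clear "operation not supported" compiler error, e.g.:
        #     x = self.flag_np + 1
```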
GitHub: Fixes #1275.
Previously, any type would be accepted for the test expression,
leading to internal errors in the code generator if the passed
value wasn't in fact a bool.
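For illustration, a hypothetical kernel showing the kind of code that is
now rejected with a type error instead of crashing the code generator:
```
from artiq.experiment import EnvExperiment, kernel

class TestExpressionExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def run(self):
        x = 1
        # Rejected at compile time now: the test expression must be bool.
        # if x:
        #     ...
        # An explicit comparison is required instead:
        if x != 0:
            pass
```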
array([...]), the constructor for NumPy arrays, currently has the
status of some weird kind of macro in ARTIQ Python, as it needs
to determine the number of dimensions in the resulting array
type, which is a fixed type parameter on which inference cannot
be performed.
This leads to an ambiguity for empty lists, which could contain
elements of arbitrary type, including other lists (which would
add to the number of dimensions).
Previously, I had chosen to make array([]) be of completely
indeterminate type for this reason. However, this differs from
how the call behaves in host NumPy, where it is a well-formed
call creating an empty 1D array (or 2D for array([[], []]), etc.).
This commit adds special matching for (recursive lists of) empty
ListT AST nodes to treat them as scalar dimensions, with the
element type still unknown.
This also happens to fix type inference for embedding empty 1D
NumPy arrays from host object attributes, although multi-dimensional
arrays will still require work (see GitHub #1633).
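For example, the special matching now gives the same shapes as the
equivalent host NumPy calls (a sketch; the element type of the empty
arrays is still inferred from later use):
```
from numpy import array

a = array([])           # empty 1D array, element type still unknown
b = array([[], []])     # empty 2D array (2 rows, 0 columns)
c = array([[1], [2]])   # ordinary 2x1 array, unaffected by this change
```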
GitHub: Fixes #1626.
Strided slicing of one-dimensional arrays (i.e. with non-trivial
steps) might have previously been working, but would have had
different semantics, as all slices were copies rather than a view
into the original data.
Fixing this in the future will require adding support for an index
stride field/tuple to our array representation (and all the
associated indexing logic).
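To illustrate the semantic difference with a host NumPy snippet (this is
why copy-based strided slices would be silently wrong rather than merely
slow):
```
import numpy as np

a = np.arange(6)
b = a[::2]      # in host NumPy, a strided slice is a view into a
b[0] = 99       # ...so this write is visible through a[0]
assert a[0] == 99
# A copy-based implementation would leave a[0] unchanged here, which is
# why strided slicing is rejected for now rather than being given
# subtly different semantics.
```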
GitHub: Fixes #1627.
This was a long-standing issue affecting both lists and
the new NumPy array implementation, just caused by the
generic inference passes not being run on the slice
subexpressions (and thus e.g. ints not being monomorphized).
GitHub: Fixes #1632.
Resolves the deprecation warning shown below. The following message is
logged when worker_impl.py:199 is run:
```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
logging.warn(message)
```
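The fix is to use the non-deprecated spelling (logging.warn is a
deprecated alias of logging.warning):
```
# before (deprecated alias, triggers the DeprecationWarning):
logging.warn(message)
# after:
logging.warning(message)
```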
recv() returns 0 instead of data if the socket has already
been closed. This is translated into a zero-length list on
the Python layer. Previously, the code would enter an
infinite loop if the socket was closed while attempting
to receive data.
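A minimal sketch of the corrected receive loop (names are illustrative,
not the exact driver code):
```
def read_exactly(sock, length):
    data = bytes()
    while len(data) < length:
        chunk = sock.recv(length - len(data))
        if not chunk:
            # recv() returning no data means the peer closed the socket;
            # previously, this case looped forever.
            raise ConnectionResetError("Core device connection closed")
        data += chunk
    return data
```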
This partially reverts commit b5e1bd3fa2,
which had removed keepalive. That, however, led to experiments
hanging forever if the core device had dropped the connection
(e.g. due to a kernel CPU panic, or the device being rebooted).
The chosen keepalive settings are fairly conservative (with the
10 s timeout) to avoid any possible interaction with smoltcp's
3 s ARP try interval (see GitHub issue #1150), even though this
should be a non-issue now due to the larger ARP cache.
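A sketch of the kind of keepalive configuration meant here, assuming
Linux-style TCP socket options; the idle/interval/count values shown
only illustrate the conservative settings described above:
```
import socket

def set_keepalive(sock, after_idle=10, interval=10, max_fails=3):
    # Enable TCP keepalive so a dead core device connection is detected
    # instead of hanging the experiment forever.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails)
```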
* KC705 master: user can no longer choose whether or not the SMA acts as the 2nd DRTIO channel; SFP and SMA now act as the 1st and 2nd channel respectively by default.
* KC705 satellite: user should now use `--sma` to enable using the SMA as the satellite channel; SFP acts as the satellite channel by default.
* Two DRTIO channels (i.e. satellite and repeater) are enabled by default.
* User can choose either the SFP or SMA as the satellite channel (by passing `--drtio-sat sfp` or `--drtio-sat sma` to the argparser), and the unchosen one becomes the repeater channel.
* One DRTIO master channel is enabled by default.
* User can set the SMA as the 2nd master channel (by passing `--drtio-sma` to the argparser).
* Multi-channel (i.e. with repeaters) on KC705 satellite is supported in principle but has not been implemented yet.
When run on the host, the `turns_to_pow` return type is numpy.int64.
Sensibly, the compiler does not attempt to convert `numpy.int64` to `int32`.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
This allows assert() to be used on Zynq, where abort() is not
currently implemented for kernels. Furthermore, this is arguably
the more natural implementation of assertions on all kernel targets
(i.e. where embedding into host Python is used), as it matches host
Python behavior, and the exception information actually makes it to
the user rather than leading to a ConnectionClosed error.
Since this does not implement printing of the subexpressions, I
left the old print+abort implementation as default for the time
being.
The lit/integration/instance.py diff isn't just a spurious change;
the exception-based assert implementation exposes a limitation in
the existing closure lifetime tracking algorithm (which is not
supposed to be what is tested there).
GitHub: Fixes #1539.
Jagged arrays are no longer silently inferred as dtype=object,
as per NEP-34.
The compiler ndarray (re)implementation is unchanged, so the
test still fails.
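For reference, the host-side behaviour under NEP-34 (a warning in
earlier NumPy releases, an error in recent ones):
```
import numpy as np

# Ragged nesting is no longer silently turned into dtype=object:
# recent NumPy raises ValueError here (older versions warned).
np.array([[1, 2], [3]])

# The explicit opt-in remains available:
np.array([[1, 2], [3]], dtype=object)
```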
The blind counter should be held in reset whenever the input is high,
not just when there is a rising edge (otherwise the counter runs down
during the main pulse and can then re-trigger on jitter from the falling edge).
* Repeat information about matching log2_width a few times
in the hope that people read it. #1518
* Pass through log2_width in kasli_generic json. closes #1481
* Check DAC value range. #1518
Added more tests and switched to normal RPC instead of async RPC.
Async RPC does not represent the real throughput, which is limited by
the hardware and the network. Normal RPC, which requires a response
from the remote, is closer to real use cases.
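For context, the two RPC flavours being compared look like this in an
experiment (a sketch with illustrative method names):
```
from artiq.experiment import EnvExperiment, kernel, rpc

class RPCTiming(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @rpc(flags={"async"})
    def sink_async(self, data):
        pass  # fire-and-forget: no response, so it overstates throughput

    @rpc
    def sink_sync(self, data):
        pass  # waits for the reply, closer to real use cases

    @kernel
    def run(self):
        self.sink_sync([0] * 128)
```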
Even though the code already used non-greedy wildcards before,
it would not find the shortest match, as earlier match starts
would still take precedence.
This could possibly be sped up a bit in CPython by doing
everything inside re using lookahead-assertion trickery, but the
current code is already imperceptibly fast for hundreds of
choices.
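A small illustration of the pitfall (host Python):
```
import re

# Non-greedy quantifiers only minimise the match length from a given
# starting position; re.search() still returns the match that starts
# earliest, not the shortest one overall.
assert re.search(r"a.*?b", "axxxb ab").group() == "axxxb"
```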
This generates rather more code than necessary, but has
the advantage of automatically handling incomplete
multi-dimensional subscripts which still leave arrays
behind.
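"Incomplete" subscripts here are ones that do not index away all
dimensions, e.g. (host NumPy notation):
```
import numpy as np

a = np.zeros((3, 4))
row = a[1]       # incomplete subscript: still an array (shape (4,))
x = a[1][2]      # chained subscripts eventually reach a scalar element
```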
Lists and arrays no longer have the same representation all
the way through codegen, as used to be the case.
This could/should be made more efficient later, eliding the
temporary copies.
Left generic transpose (shape order inversion) for now, as that
would be less ugly if we implement forwarding to Python function
bodies for array function implementations.
Needs a runtime test case.
LLVM will take care of optimising the loops. This was still
unnecessarily painful; implementing generics and implementing
this in ARTIQ Python looks very attractive right now.
Relies on the runtime to provide the necessary
(libm-compatible) functions.
The test is nifty, but a bit brittle; if this breaks in the
future because of optimizer changes, do not hesitate to convert
this into a more pedestrian test case.
So far, this is not exposed to the user beyond implicit conversions.
Note that all the implicit conversions, such as triggered by adding
arrays of mismatching types, or dividing integer arrays, are currently
emitted in a maximally inefficient way, where a temporary copy is first
made for the type conversion. The conversions would more sensibly be
implemented during the per-element operations to save on the extra
copies, but the current behaviour fell out of the rest of the IR
generator structure without extra changes.
Matches NumPy. Slicing a TList reallocates; this doesn't. Offsetting
couldn't be handled in the IR without introducing new semantics
(the Alloc kludge; could/should be made its own IR type).
Still needs support through all the rest of the compiler, and
support for higher-dimensional arrays.
Alternatively, we could always assume ndarrays of ndarrays
are rectangular (i.e. ban array/list element types), and
detect mismatch at runtime. This might turn out to be
preferable in order to be able to construct matrices from rows/columns.
`array()` is disallowed for no particularly good reason other than
numpy API compatibility.
* coredevice.ad9910: Add return type hints to conversion functions
* coredevice.ad9910: Make set_pow write correct number of bits
The AD9910 expects 16 bits. Thus, if writing 32 bits to the POW register, the chip would likely enter a locked-up state.
* coredevice.ad9910: Correct data alignment in write_16
Co-authored-by: Robert Jördens <rj@quartiq.de>
* coredevice.ad9910: Add function to read from 16 bit registers
Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
This reverts commits f8d1506922
and cf19c9512d.
While the commit just fixes a clear typo in the implementation,
it turns out the original algorithm isn't flexible enough to
capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function just forwarding to it (as seen in the
embedding tests) isn't anymore.
A separate issue is that this makes implementing functions
that take lists and return references to global data in user code
impossible, which central parts of the Oxford codebase rely on.
Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
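The problematic pattern, roughly as in the embedding tests (a sketch;
device and key names are illustrative):
```
from artiq.experiment import EnvExperiment, kernel

class CacheForward(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_cache")

    @kernel
    def get_calibration(self):
        # cache_get() itself is special-cased as returning data of
        # Global() lifetime, but this forwarding function is not
        # recognised as such by the original escape-analysis algorithm.
        return self.core_cache.get("calibration")

    @kernel
    def run(self):
        cal = self.get_calibration()
```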
* Never drive SCL or SDA high. They are specified to be open
collector/drain and pulled up by resistive pullups. Driving them
high fails miserably in a multi-master topology (e.g. with
a USB I2C interface). Actively driving high would only ever be
done to speed up the bus, but that's tricky and completely
unnecessary here.
* Make the handover states between the I2C protocol phases (start, stop,
restart, write, read) well defined. Add comments stressing those
pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
Before, the system would enter a boot loop when a panic occurred
while the kernel CPU was active (and panic_reset == 1), as
kernel::start() for the startup kernel would panic.
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.
This probably managed to go undetected for that long as any unrelated
changes to the pipeline (e.g. new submissions, or experiments pausing)
would also cause _get_run() to be re-evaluated.
Previously, a significant risk of losing experimental results would
be associated with long-running experiments, as any stray exceptions
while run()ing the experiment – for instance, due to infrequent
network glitches or hardware reliability issues – would cause no
HDF5 file to be written. This was especially troublesome as long
experiments would suffer from a higher probability of unanticipated
failures, while at the same time being more costly to re-take in
terms of wall-clock time.
Unanticipated uncaught exceptions like that were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.
This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
* ad9910: fix asf range
The ASF is a 14-bit word. The highest possible value is 0x3fff, not
0x3ffe. `int(round(1.0 * 0x3fff)) == 0x3fff`.
I don't remember or understand why this has been 0x3ffe since the beginning.
0x3fff was already used as a default in `set_mu()`.
Signed-off-by: Robert Jördens <rj@quartiq.de>
* RELEASE_NOTES: ad9910 asf scale change
Co-authored-by: David Nadlinger <code@klickverbot.at>
* Input validation and masking of SI -> mu conversions (closes #1446)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Update RELEASE_NOTES
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
* ad53xx: voltage_to_mu() validation & documentation (closes #1443, #1444)
The voltage input (float) is checked for validity (see the sketch after
these items). If we need more speed, we may want to check the DAC-code
for over/underflow instead.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx documentation: voltage_to_mu is only valid for 16-bit DACs
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: add voltage_to_mu method (closes #1341)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: improve voltage_to_mu performance
Integer comparison is faster than floating point math.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: voltage_to_mu method now uses attribute values
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Fixup RELEASE_NOTES.rst
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: documentation improvements
voltage_to_mu return value
14-bit DAC support
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
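As referenced above, a hypothetical sketch of voltage_to_mu() with input
validation, assuming an AD53xx-style 16-bit transfer function; the driver
itself defines the authoritative formula and limits:
```
def voltage_to_mu(voltage, offset_dacs=0x2000, vref=5.0):
    # Hypothetical sketch: validate the voltage input (float) first, then
    # convert. Checking the resulting DAC code for over/underflow instead
    # would be faster, as noted in the commit message above.
    v_min = -(offset_dacs * 0x4) / (1 << 16) * 4.0 * vref
    v_max = ((1 << 16) - 1 - offset_dacs * 0x4) / (1 << 16) * 4.0 * vref
    if not v_min <= voltage <= v_max:
        raise ValueError("Invalid DAC voltage!")
    return int(round((1 << 16) * (voltage / (4.0 * vref)) + offset_dacs * 0x4))
```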
Edges on pulses shorter than the RTIO period were missed because the
reference sample and the last sample of the serdes word are the same.
This change enables detection of edges on pulses as short as the
serdes UI (and shorter as long as the pulse still hits a serdes sample
aperture).
In any RTIO period, only the leading event corresponding to the first
edge whose slope matches the sensitivity is registered. If the channel is
within an RTIO period, or if it is sensitive only to one edge slope and
there are multiple pulses in an RTIO period, only the leading event is
seen. Thus this possibility of lost events is still there. Only the
conditions under which loss occurs are reduced.
In testing with the kasli-ptb6 variant, this also improves resource
usage (a couple hundred LUT) and timing (0.1 ns WNS).