This allows assert() to be used on Zynq, where abort() is not
currently implemented for kernels. Furthermore, this is arguably
the more natural implementation of assertions on all kernel targets
(i.e. where embedding into host Python is used), as it matches host
Python behavior, and the exception information actually makes it to
the user rather than leading to a ConnectionClosed error.
Since this does not implement printing of the subexpressions, I
left the old print+abort implementation as default for the time
being.
The lit/integration/instance.py diff isn't just a spurious change;
the exception-based assert implementation exposes a limitation in
the existing closure lifetime tracking algorithm (which is not
supposed to be what is tested there).
GitHub: Fixes #1539.
Jagged arrays are no longer silently inferred as dtype=object,
as per NEP-34.
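For reference, a minimal sketch of the NumPy behaviour referred to here (the exact outcome depends on the NumPy version: newer releases reject ragged input outright, older ones only emit a VisibleDeprecationWarning):

```python
import numpy as np

try:
    np.array([[1, 2], [3]])        # ragged: rejected under NEP 34
except Exception as e:             # (older NumPy versions only warn)
    print(type(e).__name__, e)

a = np.array([[1, 2], [3]], dtype=object)  # explicit opt-in still works
print(a.dtype)                             # object
```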
The compiler ndarray (re)implementation is unchanged, so the
test still fails.
The blind counter should be held in reset whenever the input is high,
not just when there is a rising edge (otherwise the counter runs down
during the main pulse and can then re-trigger on jitter from the falling edge).
* Repeat information about matching log2_width a few times
in the hope that people read it. #1518
* Pass through log2_width in kasli_generic json. closes #1481
* Check DAC value range. #1518
Add more tests and use normal RPC instead of async RPC.
Async RPC does not represent the real throughput, which is limited by the
hardware and the network. Normal RPC, which requires a response from the
remote, is closer to real use cases.
Even though the code already used non-greedy wildcards, it would
not find the shortest match, as earlier match starts would still
take precedence.
This could possibly be sped up a bit in CPython by doing
everything inside re using lookahead-assertion trickery, but the
current code is already imperceptibly fast for hundreds of
choices.
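A minimal CPython illustration of both points (the string and pattern here are made up):

```python
import re

s = "axxxb ab"

# search() returns the leftmost match; the non-greedy quantifier only
# shortens that particular match, so the overall-shortest "ab" is missed.
print(re.search(r"a.*?b", s).group())    # axxxb

# The lookahead trick: record a candidate match at every start position,
# then pick the shortest one.
candidates = [m.group(1) for m in re.finditer(r"(?=(a.*?b))", s)]
print(min(candidates, key=len))          # ab
```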
This generates rather more code than necessary, but has
the advantage of automatically handling incomplete
multi-dimensional subscripts which still leave arrays
behind.
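This is the NumPy indexing behaviour being mirrored: an incomplete subscript of a multi-dimensional array yields a lower-dimensional array rather than a scalar, e.g.:

```python
import numpy as np

a = np.zeros((2, 3, 4))
print(a[1].shape)       # (3, 4): incomplete subscript leaves an array behind
print(a[1, 2].shape)    # (4,)
print(a[1, 2, 3])       # a scalar only once every dimension is indexed
```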
Lists and arrays no longer have the same representation all
the way through codegen, as used to be the case.
This could/should be made more efficient later, eliding the
temporary copies.
Generic transpose (shape order inversion) is left for later, as it
would be less ugly to implement once forwarding to Python function
bodies is available for array function implementations.
Needs a runtime test case.
LLVM will take care of optimising the loops. This was still
unnecessarily painful; implementing generics and implementing
this in ARTIQ Python looks very attractive right now.
Relies on the runtime to provide the necessary
(libm-compatible) functions.
The test is nifty, but a bit brittle; if this breaks in the
future because of optimizer changes, do not hesitate to convert
this into a more pedestrian test case.
So far, this is not exposed to the user beyond implicit conversions.
Note that all the implicit conversions, such as triggered by adding
arrays of mismatching types, or dividing integer arrays, are currently
emitted in a maximally inefficient way, where a temporary copy is first
made for the type conversion. The conversions would more sensibly be
implemented during the per-element operations to save on the extra
copies, but the current behaviour fell out of the rest of the IR
generator structure without extra changes.
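For reference, these implicit conversions follow the usual NumPy promotion rules; in the current IR each such conversion materialises a converted temporary before the element-wise loop. A short sketch of the semantics being matched:

```python
import numpy as np

a = np.arange(3, dtype=np.int32)
b = np.ones(3, dtype=np.float64)

print((a + b).dtype)    # float64: the int32 operand is converted first
print((a / 2).dtype)    # float64: true division of an integer array
```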
Matches NumPy. Slicing a TList reallocates; this doesn't. Offsetting
couldn't be handled in the IR without introducing new semantics
(the Alloc kludge; it could/should be made its own IR type).
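The NumPy behaviour matched here is that a slice aliases the original buffer, unlike list slicing; a small sketch:

```python
import numpy as np

a = np.arange(5)
v = a[1:4]          # a view: no reallocation, aliases a's buffer
v[0] = 99
print(a[1])         # 99

l = list(range(5))
s = l[1:4]          # list slicing copies
s[0] = 99
print(l[1])         # still 1
```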
Still needs support through all the rest of the compiler, and
support for higher-dimensional arrays.
Alternatively, we could always assume ndarrays of ndarrays
are rectangular (i.e. ban array/list element types) and
detect mismatches at runtime. This might turn out to be
preferable, as it would allow constructing matrices from rows/columns.
`array()` is disallowed for no particularly good reason other than
NumPy API compatibility.
* coredevice.ad9910: Add return type hints to conversion functions
* coredevice.ad9910: Make set_pow write the correct number of bits
The AD9910 expects 16 bits; writing 32 bits to the POW register would likely leave the chip in a locked-up state (see the sketch after this list).
* coredevice.ad9910: Correct data alignment in write_16
Co-authored-by: Robert Jördens <rj@quartiq.de>
* coredevice.ad9910: Add function to read from 16 bit registers
Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
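A sketch of the intended framing, illustrative only (a hypothetical frame-building helper rather than the actual driver/SPI API):

```python
AD9910_REG_POW = 0x08   # phase offset word register, 16 bits wide

def pow_frame(pow_):
    # 24-bit SPI frame: 8-bit instruction byte followed by exactly
    # 16 data bits; the value is masked so only 16 bits are written.
    return (AD9910_REG_POW << 16) | (pow_ & 0xffff)

assert pow_frame(0x12345) == (AD9910_REG_POW << 16) | 0x2345
```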
This reverts commits f8d1506922
and cf19c9512d.
While the commit just fixes a clear typo in the implementation,
it turns out the original algorithm isn't flexible enough to
capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function that merely forwards to it (as seen in the
embedding tests) no longer is.
A separate issue is also that this makes implementing functions
that take lists and return references to global data in user code
impossible, which central parts of the Oxford codebase rely on.
Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
* Never drive SCL or SDA high. They are specified to be open
collector/drain and pulled up by resistive pullups. Driving
high fails miserably in a multi-master topology (e.g. with
a USB I2C interface). Actively driving the lines high would only
ever be done to speed up the bus, but that is tricky and completely
unnecessary here (see the sketch after this list).
* Make the handover states between the I2C protocol phases (start, stop,
restart, write, read) well defined. Add comments stressing those
pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
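A behavioural sketch of the open-drain discipline and the handover pre/postconditions, in plain Python with a made-up PullupLine model (not the firmware code):

```python
class PullupLine:
    """Open-drain line: either driven low or released to the pullup;
    it is never driven high."""
    def __init__(self):
        self.driven_low = False
    def release(self):
        self.driven_low = False
    def drive_low(self):
        self.driven_low = True
    @property
    def level(self):            # what the bus (and other masters) see
        return 0 if self.driven_low else 1

scl, sda = PullupLine(), PullupLine()

def start():
    # Precondition: SCL and SDA both released (high).
    sda.drive_low()             # SDA falls while SCL is high -> START
    scl.drive_low()
    # Postcondition: SCL low, SDA low.

def stop():
    # Precondition: SCL low.
    sda.drive_low()
    scl.release()
    if scl.level == 0:
        raise RuntimeError("SCL stuck low")
    sda.release()
    if sda.level == 0:
        raise RuntimeError("SDA arbitration failure")
    # Postcondition: SCL and SDA both released (high).

start()
stop()
```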
Before, the system would enter a boot loop when a panic occurred
while the kernel CPU was active (and panic_reset == 1), as
kernel::start() for the startup kernel would panic.
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.
This probably managed to go undetected for so long because any
unrelated change to the pipeline (e.g. new submissions, or experiments
pausing) would also cause _get_run() to be re-evaluated.
Previously, a significant risk of losing experimental results would
be associated with long-running experiments, as any stray exceptions
while run()ing the experiment – for instance, due to infrequent
network glitches or hardware reliability issues – would cause no
HDF5 file to be written. This was especially troublesome as long
experiments would suffer from a higher probability of unanticipated
failures, while at the same time being more costly to re-take in
terms of wall-clock time.
Unanticipated uncaught exceptions like that were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.
This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
* ad9910: fix asf range
The ASF is a 14-bit word. The highest possible value is 0x3fff, not
0x3ffe. `int(round(1.0 * 0x3fff)) == 0x3fff`.
I don't remember or understand why this has been 0x3ffe from the beginning.
0x3fff was already used as a default in `set_mu()`; see the sketch after
this change list.
Signed-off-by: Robert Jördens <rj@quartiq.de>
* RELEASE_NOTES: ad9910 asf scale change
Co-authored-by: David Nadlinger <code@klickverbot.at>
* Input validation and masking of SI -> mu conversions (closes #1446)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Update RELEASE_NOTES
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
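A sketch of the conversions described above, as stand-alone illustrative functions rather than the driver methods (validation for amplitude, masking for phase):

```python
def amplitude_to_asf(amplitude):
    # ASF is a 14-bit field: full scale is 0x3fff, and out-of-range
    # input raises instead of silently wrapping.
    asf = round(amplitude * 0x3fff)
    if not 0 <= asf <= 0x3fff:
        raise ValueError("Invalid AD9910 fractional amplitude")
    return asf

def turns_to_pow(turns):
    # POW is a 16-bit field; phase is periodic, so the value is masked.
    return round(turns * 0x10000) & 0xffff

assert amplitude_to_asf(1.0) == 0x3fff
assert turns_to_pow(1.0) == 0        # one full turn wraps back to zero
```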
* ad53xx: voltage_to_mu() validation & documentation (closes #1443, #1444)
The voltage input (float) is checked for validity. If we need more
speed, we may want to check the DAC-code for over/underflow instead.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx documentation: voltage_to_mu is only valid for 16-bit DACs
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: add voltage_to_mu method (closes #1341)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: improve voltage_to_mu performance
Integer comparison is faster than floating-point math (see the sketch after this list).
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: voltage_to_mu method now uses attribute values
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Fixup RELEASE_NOTES.rst
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: documentation improvements
voltage_to_mu return value
14-bit DAC support
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
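A sketch of the resulting conversion, stand-alone and with illustrative defaults for offset_dacs and vref (valid for the 16-bit variants only):

```python
def voltage_to_mu(voltage, offset_dacs=0x2000, vref=5.0):
    # Returns the 16-bit DAC code. The range check is done on the
    # integer code, which is cheaper than comparing the float input.
    code = int(round((1 << 16) * (voltage / (4.0 * vref)) + offset_dacs * 4))
    if code < 0 or code > 0xffff:
        raise ValueError("Invalid DAC voltage")
    return code

assert voltage_to_mu(0.0) == 0x8000   # mid-scale at the default offset
```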
Edges on pulses shorter than the RTIO period were missed because the
reference sample and the last sample of the serdes word are the same.
This change enables detection of edges on pulses as short as the
serdes UI (and shorter as long as the pulse still hits a serdes sample
aperture).
In any RTIO period, only the leading event, corresponding to the first
edge whose slope matches the sensitivity, is registered. If the channel
is sensitive to both rising and falling edges and the pulse is contained
within an RTIO period, or if it is sensitive to only one edge slope and
there are multiple pulses in an RTIO period, only the leading event is
seen. Thus the possibility of lost events is still there; only the
conditions under which loss occurs are reduced.
In testing with the kasli-ptb6 variant, this also improves resource
usage (a couple hundred LUT) and timing (0.1 ns WNS).
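A behavioural model of the sampling scheme described above (plain Python, not the gateware): the reference is the last sample of the previous serdes word, and only the leading matching edge in an RTIO period is reported.

```python
def leading_edge(prev_last, samples, rising, falling):
    # samples: serdes samples for one RTIO period, oldest first.
    ref = prev_last
    for i, s in enumerate(samples):
        if rising and ref == 0 and s == 1:
            return i
        if falling and ref == 1 and s == 0:
            return i
        ref = s
    return None

# A pulse one UI wide is detected ...
assert leading_edge(0, [0, 1, 0, 0], rising=True, falling=True) == 1
# ... but only the leading edge of a pulse within one period is reported.
assert leading_edge(0, [0, 1, 1, 0], rising=True, falling=True) == 1
```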
* This targets unreleased CPLD gateware (https://github.com/quartiq/mirny/issues/1)
* includes initial coredevice driver, eem shims, and kasli_generic tooling
* addresses the ARTIQ side of #1130
* Register abstraction to be written
Signed-off-by: Robert Jördens <rj@quartiq.de>