Previously, ttl_counter_0 and ttl_0 could be on completely different physical TTL output channels.
With this change, ttl_0_counter (note the changed key format) is always on the same channel as ttl_0.
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
We need to check whether our inference has reached a fixed point. This is
checked using a hash of the types in the AST, which is very slow. This patch
avoids computing the hash when we can be sure that the AST has definitely
changed, which is the case whenever we parse a new function.
For some simple programs with many functions, this can reduce the compile
time by up to ~30%.
According to PEP 484, a type hint can be a string literal for forward
references. With PEP 563, type hints are preserved in annotations in
string form.
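For illustration (not part of the patch), both forms that the annotation handling now needs to accept:
```
# PEP 563: with this future import, *all* annotations are stored as strings
from __future__ import annotations

import numpy

# PEP 484: a forward reference may also be written explicitly as a string literal
def to_mu(x: "numpy.int64") -> "numpy.int64":
    return x
```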
This breaks the internal dataset representation used by applets
and for saving to disk (``dataset_db.pyon``).
See ``test/test_dataset_db.py`` and ``test/test_datasets.py``
for examples.
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
Prior to this commit, `set_nco_phase()` set the phase of the DUC instead
of the NCO. Setting the phase of the NCO may be desirable to utilise the
auto-sync functionality of the double-buffered DAC-NCO settings.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
1. Clarify which features require additional configuration via the `dac`
constructor argument.
2. Document when DAC settings apply immediately and when they are staged.
3. Document how staged DAC settings may be applied.
4. Clarify the operation of `dac_sync`.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
When Phaser is powered on and `init()` is first called, enabling the
DAC mixer while leaving the NCO disabled causes malformed output.
This commit implements a workaround by making sure the NCO is enabled
before being set to the desired state.
This commit also avoids malformed output resulting from the following
procedure:
1. Operate Phaser with the DAC Mixer and NCO enabled
2. Set the NCO to a non-zero frequency
3. Disable the NCO in the device_db
4. Re-initialise Phaser
After this procedure, with CMIX disabled, incorrect output is produced.
To clear the fault, one must re-enable the NCO and write the NCO frequency
to zero before disabling the NCO again.
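A hedged sketch of that clearing sequence in terms of the coredevice driver (method names as in `artiq.coredevice.phaser`; the exact call ordering is an assumption):
```
# with the NCO (re-)enabled, e.g. via the device_db / dac constructor argument:
self.phaser.channel[0].set_nco_frequency(0.0)  # stage a zero NCO frequency
self.phaser.dac_sync()                         # apply the staged DAC-NCO settings
# only after this may the NCO be disabled again without leaving the output malformed
```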
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
The CMIX bits are bits 12-15 in register 0x0d. This has been checked
against the datasheet and verified on hardware. Until now, the bit for
CMIX1 was written to CMIX0. The CMIX0 bit was written to a reserved bit.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
In some use cases, a larger tunable range than is available via the DUC may
be needed. Some use cases may wish to combine the coarse mixer with the
DUC to extend the tunable range.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Currently, `init()` leaves a single oscillator at full scale. The phase
accumulator of this oscillator is held continuously cleared. Provided no
upconverting mechanism is active (DUC, CMIX, NCO), this produces a full-scale
DC voltage. The DC voltage is blocked by hardware capacitors. This behaviour
is not mentioned by the `init` documentation.
If one attempts to use any other oscillator without reducing the amplitude
of the oscillator enabled by `init`, there is significant clipping.
In the case that the NCO or CMIX are configured via the device_db
(as suggested in the docs), leaving the oscillator at full scale results in
full RF output power after calling `init()`. This may plausibly damage loads
driven by Phaser.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
The suitable PFD clock depends on the use case and will likely need
to be configured by some users. All things being equal, a higher PFD
clock is desirable as it results in lower local oscillator phase noise.
Phaser was designed around a maximum PFD clock of 62.5 MHz. In integer mode,
with no local oscillator frequency divisor set, a 62.5 MHz PFD clock results
in a 125 MHz local oscillator step size. Given the ±200 MHz range of the DUC
(more if using the DAC mixer), this step size will be acceptable to many.
This seems like the most appropriate default configuration as it should offer
the best phase-noise performance.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
`sif_sync` must be triggered to apply NCO frequency changes. To achieve
per-channel frequency tunability exceeding the range of the DUC, the NCO
frequency must be adjusted. User code will need to trigger `sif_sync` to
achieve this.
`sif_sync` can only be triggered if the bit was previously cleared. To avoid
this pitfall, the clearing of `sif_sync` is automated.
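A hedged conceptual sketch of the automation (register and bit names here are hypothetical placeholders, not the actual DAC register map):
```
# ensure the bit is clear first, so that setting it produces the required edge
cfg = self.dac_read(SIF_SYNC_REG) & ~SIF_SYNC_BIT
self.dac_write(SIF_SYNC_REG, cfg)
self.dac_write(SIF_SYNC_REG, cfg | SIF_SYNC_BIT)  # rising edge triggers the sync
```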
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Currently running `voltage_to_mu()` or `voltage_group_to_mu()` on the host will
convert all machine unit values to int64. This leads to issues when machine units
are returned from RPCs.
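A hedged illustration of the mismatch (values illustrative):
```
import numpy as np

mu = np.int64(0x8000)      # what the host-side conversion currently yields
# returning this from an RPC declared as a 32-bit integer then misbehaves;
# returning plain 32-bit integers avoids the mismatch
mu_ok = int(np.int32(mu))
```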
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
It was possible to crash the dashboard by opening the context menu
before an applet entry had been selected for the first time (e.g.
immediately after startup) and selecting one of the Group CCB
actions, as the enable update slot would not have been run.
This broke after b8cd163978, but
is invalid code to start with; this would have previously
crashed the code generator had the code actually been compiled.
(Allowing implicit conversion to bool would be a separate debate.)
The previous code could have never worked as-is, as the result slot
went unused, and it tried to append the load instruction to the
block just terminated with the invoke.
GitHub: Fixes #1506, #1531.
Since we don't implement any integer-like operations for TBool
(addition, bitwise not, etc.), TBool is currently neither
strictly equivalent to builtin bool nor numpy.bool_, but the
differences manifest as very obvious compiler errors (operation not
supported) rather than silently different runtime behaviour.
Just mapping both to TBool is thus a huge improvement over the
current behaviour (where numpy.False_ is a true-like object). In
the future, we could still implement more operations for TBool,
presumably following numpy.bool_ rather than the builtin type,
just like builtin integers get translated to the numpy-like
TInt{32,64}.
GitHub: Fixes #1275.
Previously, any type would be accepted for the test expression,
leading to internal errors in the code generator if the passed
value wasn't in fact a bool.
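For example, kernel code along these lines (hypothetical) now produces a proper diagnostic instead of an internal error:
```
from artiq.experiment import kernel

class Demo:
    @kernel
    def run(self):
        x = 1
        if x:  # test expression is an int, not a bool: rejected with a type error
            pass
```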
array([...]), the constructor for NumPy arrays, currently has the
status of some weird kind of macro in ARTIQ Python, as it needs
to determine the number of dimensions in the resulting array
type, which is a fixed type parameter on which inference cannot
be performed.
This leads to an ambiguity for empty lists, which could contain
elements of arbitrary type, including other lists (which would
add to the number of dimensions).
Previously, I had chosen to make array([]) be of completely
indeterminate type for this reason. However, this is different
to how the call behaves in host NumPy, where this is a well-formed
call creating an empty 1D array (or 2D for array([[], []]), etc.).
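For reference, the host NumPy behaviour being matched:
```
import numpy as np

print(np.array([]).shape)        # (0,)    -> well-formed empty 1D array
print(np.array([[], []]).shape)  # (2, 0)  -> 2D array with empty rows
```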
This commit adds special matching for (recursive lists of) empty
ListT AST nodes to treat them as scalar dimensions, with the
element type still unknown.
This also happens to fix type inference for embedding empty 1D
NumPy arrays from host object attributes, although multi-dimensional
arrays will still require work (see GitHub #1633).
GitHub: Fixes #1626.
Strided slicing of one-dimensional arrays (i.e. with non-trivial
steps) might have previously been working, but would have had
different semantics, as all slices were copies rather than a view
into the original data.
Fixing this in the future will require adding support for an index
stride field/tuple to our array representation (and all the
associated indexing logic).
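For comparison, the host NumPy semantics a future implementation would have to match:
```
import numpy as np

a = np.arange(6)
v = a[::2]          # a strided view, not a copy
v[0] = 99
assert a[0] == 99   # writes through the view are visible in the original array
```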
GitHub: Fixes #1627.
This was a long-standing issue affecting both lists and
the new NumPy array implementation, just caused by the
generic inference passes not being run on the slice
subexpressions (and thus e.g. ints not being monomorphized).
GitHub: Fixes #1632.
Resolves the error message shown below.
The following error message is shown when worker_impl.py:199 is run:
```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
logging.warn(message)
```
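The fix is to use the non-deprecated call:
```
logging.warning(message)  # instead of the deprecated logging.warn(message)
```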
recv() returns 0 instead of data if the socket has already
been closed. This is translated into a zero-length list on
the Python layer. Previously, the code would enter an
infinite loop if the socket was closed while attempting
to receive data.
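A hedged sketch of the corresponding check on the Python layer (names illustrative, not the exact patch):
```
data = sock.recv(4096)
if len(data) == 0:
    # zero-length read: the peer closed the socket; fail instead of looping forever
    raise ConnectionResetError("connection closed while receiving data")
buffer += data
```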
This partially reverts commit b5e1bd3fa2,
which had removed keepalive. This, however, led to experiments
hanging forever if the core device had dropped the connection
(e.g. due to a kernel CPU panic, or the device being rebooted).
The chosen keepalive settings are fairly conservative (with the
10 s timeout) to avoid any possible interaction with smoltcp's
3 s ARP try interval (see GitHub issue #1150), even though this
should be a non-issue now due to the larger ARP cache.
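For reference, comparably conservative keepalive settings on a Linux host look like this (standard socket options; the exact values in the patch may differ):
```
import socket

sock = socket.create_connection(("core-device", 1381))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)   # idle time before probing
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # interval between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop
```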
* KC705 master: the user can no longer choose whether or not the SMA acts as the 2nd DRTIO channel; the SFP and SMA now act as the 1st and 2nd channels respectively by default.
* KC705 satellite: the user should now pass `--sma` to enable the SMA as the satellite channel; the SFP acts as the satellite channel by default.
* Two DRTIO channels (i.e. satellite and repeater) are enabled by default.
* The user can choose either the SFP or SMA as the satellite channel (by passing `--drtio-sat sfp` or `--drtio-sat sma` to the argparser); the one not chosen becomes the repeater channel.
* One DRTIO master channel is enabled by default.
* The user can set the SMA as the 2nd master channel (by passing `--drtio-sma` to the argparser).
* Multi-channel operation (i.e. with repeaters) on the KC705 satellite is supported by the design but has not been implemented yet.
When run on the host, the `turns_to_pow` return type is numpy.int64.
Sensibly, the compiler does not attempt to convert `numpy.int64` to `int32`.
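A hedged illustration of how the mismatch surfaces (not the exact fix):
```
import numpy as np

pow_ = dds.turns_to_pow(0.25)  # on the host this evaluates to numpy.int64
pow_32 = np.int32(pow_)        # explicit coercion keeps host and kernel types consistent
```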
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
This allows assert() to be used on Zynq, where abort() is not
currently implemented for kernels. Furthermore, this is arguably
the more natural implementation of assertions on all kernel targets
(i.e. where embedding into host Python is used), as it matches host
Python behavior, and the exception information actually makes it to
the user rather than leading to a ConnectionClosed error.
Since this does not implement printing of the subexpressions, I
left the old print+abort implementation as default for the time
being.
The lit/integration/instance.py diff isn't just a spurious change;
the exception-based assert implementation exposes a limitation in
the existing closure lifetime tracking algorithm (which is not
supposed to be what is tested there).
GitHub: Fixes #1539.
Jagged arrays are no longer silently inferred as dtype=object,
as per NEP-34.
The compiler ndarray (re)implementation is unchanged, so the
test still fails.
The blind counter should be held in reset whenever the input is high,
not just when there is a rising edge (otherwise the counter runs down
during the main pulse and can then re-trigger on jitter from the falling edge).
* Repeat information about matching log2_width a few times
in the hope that people read it. #1518
* Pass through log2_width in kasli_generic json. closes #1481
* Check DAC value range. #1518
Add more tests and use normal RPCs instead of async RPCs.
Async RPCs do not represent the real throughput, which is limited by the
hardware and the network. A normal RPC, which requires a response from the
remote, is closer to real use cases.
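For reference, the two RPC flavours being compared (standard ARTIQ decorators; the method names are illustrative):
```
from artiq.experiment import rpc, TInt32

@rpc(flags={"async"})
def push_async(self, data):
    # fire-and-forget: the kernel never waits for a reply, so the measured rate
    # mostly reflects local buffering rather than the hardware and the network
    pass

@rpc
def push_sync(self, data) -> TInt32:
    # the kernel blocks until the host replies, as in typical use cases
    return 0
```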
Even though the code already used non-greedy wildcards before,
it would not find the shortest match, as earlier match starts
would still take precedence.
This could possibly be sped up a bit in CPython by doing
everything inside re using lookahead-assertion trickery, but the
current code is already imperceptibly fast for hundreds of
choices.
This generates rather more code than necessary, but has
the advantage of automatically handling incomplete
multi-dimensional subscripts which still leave arrays
behind.
Lists and arrays no longer have the same representation all
the way through codegen, as used to be the case.
This could/should be made more efficient later, eliding the
temporary copies.
Left generic transpose (shape order inversion) for now, as that
would be less ugly if we implement forwarding to Python function
bodies for array function implementations.
Needs a runtime test case.
LLVM will take care of optimising the loops. This was still
unnecessarily painful; implementing generics and implementing
this in ARTIQ Python looks very attractive right now.
Relies on the runtime to provide the necessary
(libm-compatible) functions.
The test is nifty, but a bit brittle; if this breaks in the
future because of optimizer changes, do not hesitate to convert
this into a more pedestrian test case.
So far, this is not exposed to the user beyond implicit conversions.
Note that all the implicit conversions, such as triggered by adding
arrays of mismatching types, or dividing integer arrays, are currently
emitted in a maximally inefficient way, where a temporary copy is first
made for the type conversion. The conversions would more sensibly be
implemented during the per-element operations to save on the extra
copies, but the current behaviour fell out of the rest of the IR
generator structure without extra changes.
Matches NumPy. Slicing a TList reallocates, this doesn't; offsetting
couldn't be handled in the IR without introducing new semantics
(the Alloc kludge; could/should be made its own IR type).
Still needs support through all the rest of the compiler, and
support for higher-dimensional arrays.
Alternatively, we could always assume ndarrays of ndarrays
are rectangular (i.e. ban array/list element types), and
detect mismatch at runtime. This might turn out to be
preferable in order to be able to construct matrices from rows/columns.
`array()` is disallowed for no particularly good reason other than
numpy API compatibility.
* coredevice.ad9910: Add return type hints to conversion functions
* coredevice.ad9910: Make set_pow write correct number of bits
The AD9910 expects 16 bits. Thus, if writing 32 bits to the POW register, the chip would likely enter a locked-up state.
* coredevice.ad9910: Correct data alignment in write_16
Co-authored-by: Robert Jördens <rj@quartiq.de>
* coredevice.ad9910: Add function to read from 16 bit registers
Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
This reverts commits f8d1506922
and cf19c9512d.
While the commit just fixes a clear typo in the implementation,
it turns out the original algorithm isn't flexible enough to
capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function just forwarding to it (as seen in the
embedding tests) isn't anymore.
A separate issue is also that this makes implementing functions
that take lists and return references to global data in user code
impossible, which central parts of the Oxford codebase rely on.
Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
* Never drive SCL or SDA high. They are specified to be open
collector/drain and pulled up by resistive pullups. Driving
high fails miserably in a multi-master topology (e.g. with
a USB I2C interface). It would only ever be implemented to
speed up the bus actively but that's tricky and completely
unnecessary here.
* Make the handover states between the I2C protocol phases (start, stop,
restart, write, read) well defined. Add comments stressing those
pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
Before, the system would enter a boot loop when a panic occurred
while the kernel CPU was active (and panic_reset == 1), as
kernel::start() for the startup kernel would panic.
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.
This probably managed to go undetected for that long as any unrelated
changes to the pipeline (e.g. new submissions, or experiments pausing)
would also cause _get_run() to be re-evaluated.
Previously, a significant risk of losing experimental results would
be associated with long-running experiments, as any stray exceptions
while run()ing the experiment – for instance, due to infrequent
network glitches or hardware reliability issues – would cause no
HDF5 file to be written. This was especially troublesome as long
experiments would suffer from a higher probability of unanticipated
failures, while at the same time being more costly to re-take in
terms of wall-clock time.
Unanticipated uncaught exceptions like that were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.
This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
* ad9910: fix asf range
The ASF is a 14-bit word. The highest possible value is 0x3fff, not
0x3ffe. `int(round(1.0 * 0x3fff)) == 0x3fff`.
I don't remember or understand why this has been 0x3ffe since the beginning.
0x3fff was already used as the default in `set_mu()`.
Signed-off-by: Robert Jördens <rj@quartiq.de>
* RELEASE_NOTES: ad9910 asf scale change
Co-authored-by: David Nadlinger <code@klickverbot.at>
* Input validation and masking of SI -> mu conversions (closes #1446)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Update RELEASE_NOTES
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
* ad53xx: voltage_to_mu() validation & documentation (closes #1443, #1444)
The voltage input (float) is checked for validity. If we need more
speed, we may want to check the DAC-code for over/underflow instead.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx documentation: voltage_to_mu is only valid for 16-bit DACs
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: add voltage_to_mu method (closes #1341)
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: improve voltage_to_mu performance
Integer comparison is faster than floating point math.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* AD53xx: voltage_to_mu method now uses attribute values
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* Fixup RELEASE_NOTES.rst
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
* ad53xx: documentation improvements
voltage_to_mu return value
14-bit DAC support
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Edges on pulses shorter than the RTIO period were missed because the
reference sample and the last sample of the serdes word are the same.
This change enables detection of edges on pulses as short as the
serdes UI (and shorter as long as the pulse still hits a serdes sample
aperture).
In any RTIO period, only the leading event corresponding to the first
edge with slope according to sensitivity is registered. If the channel is
sensitive to both rising and falling edges and if the pulse is contained
within an RTIO period, or if it is sensitive only to one edge slope and
there are multiple pulses in an RTIO period, only the leading event is
seen. Thus this possibility of lost events is still there. Only the
conditions under which loss occurs are reduced.
In testing with the kasli-ptb6 variant, this also improves resource
usage (a couple hundred LUT) and timing (0.1 ns WNS).
* This targets unreleased CPLD gateware (https://github.com/quartiq/mirny/issues/1)
* includes initial coredevice driver, eem shims, and kasli_generic tooling
* addresses the ARTIQ side of #1130
* Register abstraction to be written
Signed-off-by: Robert Jördens <rj@quartiq.de>
Interestingly enough, these actually seem to give a measurable
speedup (if small – about 1% improvement out of 6s whole-program
compile-time in one particular test case).
The previous implementation of is_mono() also had interesting
behaviour if `name` wasn't given; it would test only for the
presence of any keys specified via keyword arguments,
disregarding their values. Looking at uses across the current
ARTIQ codebase, I could neither find a case where this would
have actually been triggered, nor any rationale for it.
With the short-circuited implementation from this commit,
is_mono() now checks name/all of params against any specified
conditions.
This removes:
* host-side keepalive, which turns out not to be required
* custom connection timeout (the default is OK)
* SSH tunneling support (doesn't seem to be actually used anywhere)
This was mistakenly included in fb2b634c4a, and broke the test
case verifying that using None as an ARTIQ type annotation in fact
generates an error message.
With support for polymorphism (or type erasure on pointers to
member functions) being absent in the ARTIQ compiler, code
generation is vital to be able to implement abstractions that
work with user-provided lists/trees of objects with uniform
interfaces (e.g. a common base class, or duck typing), but
different concrete types.
@kernel_from_string has been in production use for exactly
this use case in Oxford for the better part of a year now
(various places in ndscan).
GitHub: Fixes #1089.
This will be displayed by GitHub below the directory listing, and was
inspired by observing new users disregard the examples/ tree entirely
(even though the experiments and device DBs within would have cleared
up their getting-started confusion) due to the perceived complexity
wall induced by the wealth of subdirectories.