Also factors out duplicate code for (de)serializing
elements of lists and ndarrays, and replaces the rounding
calculations by the well-known, much faster power-of-two-only
bit-twiddling version.
GitHub: Fixes #1934.
In particular, i64/double are actually supposed to be aligned
to their size on RISC-V (at least according to the ELF psABI),
though it is unclear to me whether this actually caused any
issues.
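For reference, the power-of-two-only rounding mentioned above looks like this in plain Python (an illustrative sketch, not the actual (de)serialization code):
```python
def align_up(offset, alignment):
    # Round offset up to the next multiple of alignment.
    # Only valid when alignment is a power of two, which allows replacing
    # division/modulo-based rounding with a single mask operation.
    assert alignment & (alignment - 1) == 0
    return (offset + alignment - 1) & ~(alignment - 1)

align_up(13, 8)  # 16, e.g. aligning an i64/double field to its 8-byte size
```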
`llvm-addr2line` is not included as part of the llvm binary package for Windows. This causes ARTIQ Python compilation issues when conda is not used (so the `llvm-tools` conda package, which currently provides `llvm-addr2line`, is not installed).
ARTIQ assumes that all exceptions raised by the kernel can be constructed with
a single string argument. This isn't always the case, especially for
exceptions that originated in Python and were propagated to the kernel over
RPC.
Without this change a mosek solver failure looks like:
```
ERROR root:logging_tools.py:41 Terminating with exception (TypeError: __init__() missing 1 required positional argument: 'msg')
Traceback (most recent call last):
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/master/worker_impl.py", line 540, in main
exp_inst.run()
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/test_tools/experiment.py", line 82, in wrapper
meth()
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/language/core.py", line 54, in run_on_core
return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/core.py", line 152, in run
self.comm.serve(embedding_map, symbolizer, demangler)
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 720, in serve
self._serve_exception(embedding_map, symbolizer, demangler)
File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 699, in _serve_exception
python_exn = python_exn_type(
TypeError: __init__() missing 1 required positional argument: 'msg'
```
With this change we get:
```
ERROR root:logging_tools.py:41 Terminating with exception (RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg'))
Core Device Traceback:
Traceback (most recent call first):
File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 280, in get_transport
return self.seq.solve()
File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 288, in artiq_worker_test_end_to_end.TransportTestScan.run(..., ...) (RA=+0x2e4)
self.seq.record(self.get_transport(1e-6 + 1e-7 * x))
mosek.Error(27): rescode.err_license_expired(1001): The license has expired.
End of Core Device Traceback
Traceback (most recent call last):
File "/home/mb/oxionics/artiq/artiq/master/worker_impl.py", line 540, in main
exp_inst.run()
File "/home/mb/oxionics/artiq/artiq/test_tools/experiment.py", line 82, in wrapper
meth()
File "/home/mb/oxionics/artiq/artiq/language/core.py", line 54, in run_on_core
return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
File "/home/mb/oxionics/artiq/artiq/coredevice/core.py", line 152, in run
self.comm.serve(embedding_map, symbolizer, demangler)
File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 732, in serve
self._serve_exception(embedding_map, symbolizer, demangler)
File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 714, in _serve_exception
raise python_exn
RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg')
```
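The fallback shown above is roughly the following shape (a simplified sketch, not the exact `_serve_exception` code):
```python
def reconstruct_exception(python_exn_type, message):
    # Try to rebuild the original exception type from its message; if its
    # constructor does not accept a single string argument, fall back to a
    # RuntimeError that still records the original type and the failure.
    try:
        return python_exn_type(message)
    except Exception as e:
        return RuntimeError(
            "Exception type={}, which couldn't be reconstructed ({})"
            .format(python_exn_type, e))
```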
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
* master: (25 commits)
flake: update rpi-1 host key
aqctl_moninj_proxy: clear listeners on disconnect
Add method to check if termination is requested (#811, #1932)
moninj: fix underflows by order of operation fix channel toggle
moninj: fix underflows for urukul freq set
Urukul monitoring (#1142, #1921)
moninj: make receive_task private again
moninj,corelog: fix/cleanup exception handling (#1897)
aqctl_corelog: enable keepalive, terminate on connection failure
Modify log for matching the style
Add log message when dashboard connected to proxy
Public receive_task for the use in proxy
applets.simple: Actually forward dataset_prefixes when using IPC
master: Fixup 32db6ff978 (argument_ui support)
Revert "add pull.yml (#1918)"
add pull.yml (#1918)
Allow experiments to specify a custom argument editor UI (#1916)
dashboard: Add submit/close hooks for custom argument editors
dashboard: Plumb through datasets client to ExperimentManager
dashboard: Add cmdline option to load plugins on startup
...
On the master/EnvExperiment side, the only addition is an optional
property `argument_ui` that is made accessible to the dashboard, e.g.
    class Example(EnvExperiment):
        argument_ui = "ndscan"

        def build(self):
            …
Clients – primarily artiq_dashboard, but in principle e.g. a
command-line UI could do the same – can then compare the value to a
list of well-known names and prefer any matching custom UI handlers.
On the dashboard side, this commit adds the mechanism to register
a custom argument editor for a given argument_ui string, i.e. the
widget that displays the parameter values within the wider
experiment UI shell with the submit button, pipeline parameters, and
so on. The registry remains empty by default and would be filled by
out-of-tree plugins such as ndscan.
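A minimal sketch of how such a registry lookup could work (names here are illustrative, not the dashboard's actual API):
```python
# Plugins (e.g. ndscan) would register their editor class for a well-known
# argument_ui string; unknown or absent values fall back to the stock editor.
argument_ui_classes = {}

def argument_editor_class(expdesc, default_cls):
    return argument_ui_classes.get(expdesc.get("argument_ui"), default_cls)
```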
The UI state readback is implemented somewhat defensively to avoid
needless disruptions to users when upgrading.
These are used by ndscan, as re-serialising the entire ndscan
parameter metadata tree, which can grow to be quite extensive,
on every single Qt change event is a bit excessive (and would
probably cause a bit of lag while typing for big experiments
on low-end machines).
This is analogous to the explist/schedule subscribers, and allows
custom argument editors (such as ndscan) to provide hints/defaults/…
from datasets once available.
This allows ndscan v0.3+ to use the IPC interface for efficiency;
previously, the non-upstreamed RID dataset namespace feature allowed
the applets to subscribe somewhat efficiently to the master
process via the socket interface.
When serialising a list of objects `_send_rpc_value` makes a copy of the
upcoming tags to pass repeatedly to the recursive call. It then uses
`_skip_rpc_value` to skip over the tags that should have been processed.
This didn't handle numpy arrays, so after processing a list of arrays it
got out of sync and failed.
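A toy illustration of the copy-then-skip pattern (not the real ARTIQ tag grammar or function signatures):
```python
def skip_value(tags):
    # Consume the tags describing one value. Containers ("l" = list,
    # "a" = ndarray) must also consume their element tags; forgetting the
    # ndarray case is exactly the desync this commit fixes.
    tag = tags.pop(0)
    if tag in ("l", "a"):
        skip_value(tags)

def send_value(tags, value, out):
    tag = tags.pop(0)
    if tag in ("l", "a"):
        for elem in value:
            send_value(tags[:], elem, out)  # fresh copy of element tags per element
        skip_value(tags)                    # then advance past them exactly once
    else:
        out.append((tag, value))

out = []
send_value(list("li"), [1, 2, 3], out)      # "l" = list of "i" = int
print(out)                                  # [('i', 1), ('i', 2), ('i', 3)]
```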
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
A KeyError was raised when trying to load default_state()
due to the missing key "seed" for "RangeScan" and "CenterScan" in the
state. Add {"seed": None} to resolve the bug.
nac3ld will not generate a PLT or its relocation section, so there might not be a pltrel in that case.
On the other hand, rebinding will not be limited to the symbols in the PLT when linked with nac3ld.
Thus the renaming.
Instead of automatically closing and draining the TcpStream in the Drop
implementation, expect the user to call TcpStream::close.
Add close calls to all users of TcpStream.
Document the requirement to call close on TcpListener::accept; this seems
to be the only way to get a new TcpStream at the moment.
DHCP is enabled by setting the `ip` config entry to "use_dhcp". Reusing this
config field rather than creating a new one means that there is no ambiguity
over which config field takes precedence.
Adds a thread to configure the interface based on DHCP events
Adds a `Dhcpv4Socket` as a wrapper around smoltcp's version
Formalises the storage of the IP addresses so that we can update one in
another module.
There's also a workaround for the first DHCP discover packet frequently
going missing.
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
Main changes:
Deal with interfaces now being generic over mediums, update interface name
and initialisation.
Interfaces now own their sockets, so we store a reference to the Interface
instead of the SocketSet in Scheduler and IO.
Sockets are no longer reference counted. We never called the function to
increase the socket's reference count, so now we just remove it where it
was previously released. This will result in the socket being dropped at
a different time, but I think that should be fine.
Tested firmware upload to the bootloader and spamming artiq_coremgmt log
calls to download the log from the firmware.
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
Remove the implementation of setting keepalive settings on sockets and use
the implementation from sipyco instead.
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
The representation of TList(T) is changed from `{T*, u32}` to
`{T*, u32}*`. The old representation forbids changing the length of a
list when the list is passed as a parameter into functions, as the
length is passed by value. The representation now matches with nac3.
We don't need to know whether there's an outer finally block;
that's already implicit in the current break and continue
target.
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
When we have a try inside a loop, we want to make sure any
finally blocks are executed by break and continue inside this try. But
this shouldn't pull finally blocks defined outside the loop into the
loop. This change resets the `outer_final` attribute when
visiting for and while loops so that this doesn't happen.
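An illustration of the case being fixed, assuming kernel semantics match CPython:
```python
def f():
    try:                          # outer finally: must not be pulled into the loop
        for i in range(3):
            try:
                if i == 1:
                    break         # break must run only the inner finally
            finally:
                print("inner finally")
    finally:
        print("outer finally")

f()  # prints "inner finally" twice, then "outer finally" once
```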
Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
* init() now also clears and resets more state, including the interpolators.
If not done, PLL unlocks/locks may lead to random interpolator state
on boot, to which the CICs react badly.
* Use and expose `t_frame`
* Clarify implementation state of `read()`
Otherwise, the exception message might be allocated on a stack, and will
become a dangling pointer when the exception is raised.
This will break some code that constructs exceptions with a function by
passing the message as a parameter because we cannot know if the parameter
is a constant. A way to mitigate this would be to defer this check to the
LLVM IR codegen stage, and do inlining first for those exception
allocation functions, but I am not sure if we will guarantee inlining
for certain functions, and whether this is really needed.
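An illustration of the resulting restriction in kernel code (hedged sketch; the class scaffolding is only for context):
```python
from artiq.experiment import EnvExperiment, kernel

class RaiseExamples(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    @kernel
    def ok(self):
        raise ValueError("constant message")  # accepted: literal known at compile time

    @kernel
    def rejected(self, msg):
        raise ValueError(msg)                  # rejected: msg may not be a constant
```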
Note that because we changed the exception representation from using string
names as exception identifiers to using integer IDs, we need to
initialize the embedding map in order to allocate the integer IDs. Also,
we can no longer print the exception names and messages from the kernel;
we will need the host to map exception IDs to names, and may need the
host to map string IDs to actual strings (messages can be static strings
in the firmware, or strings stored in the host only).
We now check for exception IDs for lit tests, which are fixed because we
preallocated all builtin exceptions.
Ported from:
M-Labs/artiq-zynq#162
This includes new API for exception handling, some refactoring to avoid
code duplication for exception structures, and modified protocols to
send nested exceptions and avoid string allocation.
Instead of removing basic blocks with no predecessor, we will now mark
and remove all blocks that are unreachable from the entry block. This
can handle loops that are dead code. This is needed as we will now
generate more complicated code for exception handling which the old dead
code eliminator failed to handle.
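A generic sketch of the new strategy (plain Python over an abstract CFG, not the ARTIQ compiler's actual IR classes):
```python
def remove_unreachable_blocks(entry, blocks, successors):
    # Keep only the blocks reachable from the entry block.
    reachable = set()
    stack = [entry]
    while stack:
        block = stack.pop()
        if block in reachable:
            continue
        reachable.add(block)
        stack.extend(successors(block))
    # Unlike removing only predecessor-less blocks, this also deletes dead
    # loops, whose blocks keep each other alive as mutual predecessors.
    return [b for b in blocks if b in reachable]
```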
Exceptions are now allocated in the runtime when we raise the exception,
and destroyed when we exit the catch block. Nested exceptions and try
blocks are now supported, and should behave the same as in CPython.
Exceptions raised in except blocks will now unwind through finally
blocks, matching the behavior in CPython. Reraise will now preserve the
backtrace.
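The CPython behaviour being matched, shown in plain host Python:
```python
def demo():
    try:
        try:
            raise ValueError("original")
        except ValueError:
            raise RuntimeError("raised in except")  # still unwinds through finally
        finally:
            print("finally runs")
    except RuntimeError as e:
        print("caught:", e)

demo()  # prints "finally runs", then "caught: raised in except"
```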
Phi block LLVM IR generation is modified to handle landingpads, for which
one ARTIQ IR block will map to multiple LLVM IR blocks.
Exception name is replaced by exception ID, which requires no
allocation. Other strings in the exception can now be 'host-only'
strings, each represented by a CSlice with len = usize::MAX and
ptr = key, to avoid the need for allocation when raising exceptions
through RPC.
* Revert "Merge pull request #1544 from airwoodix/dataset-compression"
This reverts commit 311a818a49, reversing
changes made to 7ffe4dc2e3.
* fix accidental revert of f42bea06a8
* coredevice: Change Urukul default single-tone profile to 7
This allows using the internal profile control in RAM modulation mode (which always starts to play back at profile 0) without competing for the content of the profile 0 register used in single tone mode.
Signed-off-by: Peter Drmota <peter.drmota@physics.ox.ac.uk>
* ad9910/set_mu: comment on caveats when setting register
* ad9910: avoid unnecessary write/param
Credit: Solution proposed by @pmldrmota in https://github.com/m-labs/artiq/pull/1584#issuecomment-987774353
* revert 1064fdff (`set_mu()` comments)
158a7be7 had addressed this issue.
Co-authored-by: occheung <dc@m-labs.hk>
LLVM 6 seemed not to mind the mismatch, but more recent
versions produce miscompilations without this.
Needs llvmlite support (GitHub: numba/llvmlite#702).
* coredevice.ad9910: Add set_cfr2 function and extend arguments of set_cfr1 and set_sync
* SUServo: Wrap CPLD and DDS devices in a list
* SUServo: Refactor [nfc]
Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: David Nadlinger <code@klickverbot.at>
Removed test cases that do not respect lifetime/scope constraint.
See discussion in artiq-zynq repo: M-Labs/artiq-zynq#119
Referred to the patch from @dnadlinger. 5faa30a837
In the original implementation, the `nowrite` flag literally means not writing memory at all.
Due to the usage of this flag on certain functions, it results in the same issues found in artiq-zynq after optimization passes. (M-Labs/artiq-zynq#119)
A fix written by @dnadlinger resolves this issue. (c1e46cc7c8)
The reason for the borrow stuff is explained in M-Labs/artiq-zynq#76 (artiq-zynq repo).
As for `cache_get()`, the compiler will perform stack allocation to pre-allocate the returned structure, and pass it to cache_get alongside the `key`.
However, ksupport fails to recognize the passed memory, so it will always assume the passed memory is the key.
ld.lld has a habit of not putting the headers under any load sections.
However, the headers are needed by libunwind to handle exceptions raised by the kernel.
Creating a PT_LOAD segment with FILEHDR and PHDRS solves this issue. The other PHDRS are also specified, as linkers (not limited to ld.lld) will not create additional unspecified headers even when necessary.
Previously we kept GNU Binutils because they are less of a pain to support
on Windoze - the source of so many problems - but with RISC-V we need to
update LLVM anyway.
Previously, ttl_counter_0 and ttl_0 could be on completely different physical TTL output channels.
With this change, ttl_0_counter (note the changed key format) is always on the same channel as ttl_0.
Signed-off-by: Leon Riesebos <leon.riesebos@duke.edu>
We need to check whether our inference reached a fixed point. This is checked
using a hash of the types in the AST, which is very slow. This patch
avoids computing the hash if we can make sure that the AST definitely
changed, which is when we parse a new function.
For some simple programs with many functions, this can significantly
reduce the compile time by up to ~30%.
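A hypothetical sketch of the check (the helper callables are assumptions, not the actual compiler API):
```python
def reached_fixed_point(run_pass, hash_types, state):
    # run_pass() returns True if a new function was parsed during the pass.
    if run_pass():
        return False                   # AST definitely changed; skip the slow hash
    new_hash = hash_types()            # the expensive part this patch avoids
    unchanged = (new_hash == state.get("hash"))
    state["hash"] = new_hash
    return unchanged
```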
According to PEP 484, a type hint can be a string literal for forward
references. With PEP 563, type hints are preserved in annotations in
string form.
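For example (host Python):
```python
class Node:
    # PEP 484 forward reference: the hint is a string literal because Node
    # is not fully defined yet at this point.
    def clone(self) -> "Node":
        return Node()

# With PEP 563 (from __future__ import annotations), *every* annotation is
# kept in string form like this, so the compiler has to resolve such strings.
```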
This breaks the internal dataset representation used by applets
and when saving to disk (``dataset_db.pyon``).
See ``test/test_dataset_db.py`` and ``test/test_datasets.py``
for examples.
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
Prior to this commit, `set_nco_phase()` set the phase of the DUC instead
of the NCO. Setting the phase of the NCO may be desirable to utilise the
auto-sync functionality of the double-buffered DAC-NCO settings.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
1. Clarify which features require additional configuration via the `dac`
constructor argument.
2. Document when DAC settings apply immediately/are staged.
3. Document how staged DAC settings may be applied.
4. Clarify operation of `dac_sync`.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
When Phaser is powered on and `init()` is first called, enabling the
DAC-mixer while leaving the NCO disabled causes malformed output.
This commit implements a workaround by making sure the NCO is enabled
before being set to the desired state.
This commit also avoids malformed output resulting from the following
procedure:
1. Operate Phaser with the DAC Mixer and NCO enabled
2. Set the NCO to a non-zero frequency
3. Disable the NCO in the device_db
4. Re-initialise Phaser
After this procedure, with CMIX disabled, incorrect output is produced.
To clear the fault one must re-enable the NCO and write the NCO frequency
to zero before disabling the NCO.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
The CMIX bits are bits 12-15 in register 0x0d. This has been checked
against the datasheet and verified on hardware. Until now, the bit for
CMIX1 was written to CMIX0. The CMIX0 bit was written to a reserved bit.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
In some use cases a larger tunable range than is available via the DUC may
be needed. Some use cases may wish to combine the coarse mixer with the
DUC to extend the tunable range.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Currently, `init()` leaves a single oscillator at full scale. The phase
accumulator of this oscillator is held continuously cleared. Provided no
upconverting mechanism is active (DUC, CMIX, NCO), this produces a full-scale
DC voltage. The DC voltage is blocked by hardware capacitors. This behaviour
is not mentioned by the `init` documentation.
If one attempts to use any other oscillator without reducing the amplitude
of the oscillator enabled by `init`, there is significant clipping.
In the case that the NCO or CMIX are configured via the device_db
(suggested in the docs), leaving the oscillator at full scale results in
full RF output power after calling `init()`. This may plausibly damage loads
driven by Phaser.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
The suitable PFD clock depends on the use case and will likely need
to be configured by some users. All things being equal, a higher PFD
clock is desirable as it results in lower local oscillator phase-noise.
Phaser was designed around a maximum PFD clock of 62.5 MHz. In integer mode,
with no local oscillator frequency divisor set, a 62.5 MHz PFD clock results
in a 125 MHz local oscillator step size. Given the +-200 MHz range of the DUC
(more if using the DAC mixer), this step size will be acceptable to many.
This seems like the most appropriate default configuration as it should offer
the best phase-noise performance.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
`sif_sync` must be triggered to apply NCO frequency changes. To achieve
per-channel frequency tunability exceeding the range of the DUC, the NCO
frequency must be adjusted. User code will need to trigger `sif_sync` to
achieve this.
`sif_sync` can only be triggered if the bit was cleared. To avoid this pitfall,
the clearing of `sif_sync` is automated.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
Currently running `voltage_to_mu()` or `voltage_group_to_mu()` on the host will
convert all machine unit values to int64. This leads to issues when machine units
are returned from RPCs.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
It was possible to crash the dashboard by opening the context menu
before an applet entry had been selected for the first time (e.g.
immediately after startup) and selecting one of the Group CCB
actions, as the enable update slot would not have been run.
This broke after b8cd163978, but
is invalid code to start with; this would have previously
crashed the code generator had the code actually been compiled.
(Allowing implicit conversion to bool would be a separate debate.)
The previous code could have never worked as-is, as the result slot
went unused, and it tried to append the load instruction to the
block just terminated with the invoke.
GitHub: Fixes #1506, #1531.
Since we don't implement any integer-like operations for TBool
(addition, bitwise not, etc.), TBool is currently neither
strictly equivalent to builtin bool nor numpy.bool_, but the
differences manifest through very obvious compiler errors (operation
not supported) rather than silently different runtime behaviour.
Just mapping both to TBool thus is a huge improvement over the
current behaviour (where numpy.False_ is a true-like object). In
the future, we could still implement more operations for TBool,
presumably following numpy.bool_ rather than the builtin type,
just like builtin integers get translated to the numpy-like
TInt{32,64}.
GitHub: Fixes #1275.
Previously, any type would be accepted for the test expression,
leading to internal errors in the code generator if the passed
value wasn't in fact a bool.
array([...]), the constructor for NumPy arrays, currently has the
status of some weird kind of macro in ARTIQ Python, as it needs
to determine the number of dimensions in the resulting array
type, which is a fixed type parameter on which inference cannot
be performed.
This leads to an ambiguity for empty lists, which could contain
elements of arbitrary type, including other lists (which would
add to the number of dimensions).
Previously, I had chosen to make array([]) to be of completely
indeterminate type for this reason. However, this is different
to how the call behaves in host NumPy, where this is a well-formed
call creating an empty 1D array (or 2D for array([[], []]), etc.).
This commit adds special matching for (recursive lists of) empty
ListT AST nodes to treat them as scalar dimensions, with the
element type still unknown.
This also happens to fix type inference for embedding empty 1D
NumPy arrays from host object attributes, although multi-dimensional
arrays will still require work (see GitHub #1633).
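For reference, the host NumPy behaviour this matches:
```python
import numpy as np

np.array([]).ndim        # 1: a well-formed empty 1D array, shape (0,)
np.array([[], []]).ndim  # 2: shape (2, 0)
```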
GitHub: Fixes #1626.
Strided slicing of one-dimensional arrays (i.e. with non-trivial
steps) might have previously been working, but would have had
different semantics, as all slices were copies rather than a view
into the original data.
Fixing this in the future will require adding support for an index
stride field/tuple to our array representation (and all the
associated indexing logic).
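For contrast, host NumPy's view semantics:
```python
import numpy as np

a = np.arange(6)
b = a[::2]       # in host NumPy this is a view into a...
b[0] = 42
print(a[0])      # ...so this prints 42; copy semantics would have left it 0
```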
GitHub: Fixes #1627.
This was a long-standing issue affecting both lists and
the new NumPy array implementation, just caused by the
generic inference passes not being run on the slice
subexpressions (and thus e.g. ints not being monomorphized).
GitHub: Fixes #1632.
Resolves the deprecation warning shown below.
The following message is logged when worker_impl.py:199 is run:
```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
logging.warn(message)
```
recv() returns 0 instead of data if the socket has already
been closed. This is translated into a zero-length list on
the Python layer. Previously, the code would enter an
infinite loop if the socket was closed while attempting
to receive data.
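A sketch of the fixed receive pattern (hypothetical helper, not the actual comm code):
```python
def read_exactly(sock, n):
    data = bytearray()
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:  # zero-length read: the peer closed the socket
            raise ConnectionResetError("connection closed while reading")
        data += chunk
    return bytes(data)
```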
This partially reverts commit b5e1bd3fa2,
which had removed keepalive. This, however, led to experiments
hanging forever if the core device had dropped the connection
(e.g. due to a kernel CPU panic, or the device being rebooted).
The chosen keepalive settings are fairly conservative (with the
10 s timeout) to avoid any possible interaction with smoltcp's
3 s ARP try interval (see GitHub issue #1150), even though this
should be a non-issue now due to the larger ARP cache.
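For illustration, TCP keepalive configured from Python on Linux; the exact values here are assumptions loosely matching the "conservative, ~10 s" description above:
```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)   # idle time before probing
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # interval between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before reset
```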
* KC705 master: user can no longer choose whether or not the SMA acts as the 2nd DRTIO channel; SFP and SMA now act as the 1st and 2nd channel respectively by default.
* KC705 satellite: user should now use `--sma` to enable using the SMA as the satellite channel; SFP acts as the satellite channel by default.
* Two DRTIO channels (i.e. satellite and repeater) are enabled by default.
* User can choose either the SFP or SMA as the satellite channel (by passing `--drtio-sat sfp` or `--drtio-sat sma` to the argparser), and the unchosen one becomes the repeater channel.
* One DRTIO master channel is enabled by default.
* User can set the SMA as the 2nd master channel (by passing `--drtio-sma` to the argparser).
* Multi-channel (i.e. with repeaters) on KC705 satellite is supported but has not been implemented yet.
When run on the host, the `turns_to_pow` return type is numpy.int64.
Sensibly, the compiler does not attempt to convert `numpy.int64` to `int32`.
Signed-off-by: Marius Weber <marius.weber@physics.ox.ac.uk>
This allows assert() to be used on Zynq, where abort() is not
currently implemented for kernels. Furthermore, this is arguably
the more natural implementation of assertions on all kernel targets
(i.e. where embedding into host Python is used), as it matches host
Python behavior, and the exception information actually makes it to
the user rather than leading to a ConnectionClosed error.
Since this does not implement printing of the subexpressions, I
left the old print+abort implementation as default for the time
being.
The lit/integration/instance.py diff isn't just a spurious change;
the exception-based assert implementation exposes a limitation in
the existing closure lifetime tracking algorithm (which is not
supposed to be what is tested there).
GitHub: Fixes#1539.
Jagged arrays are no longer silently inferred as dtype=object,
as per NEP-34.
The compiler ndarray (re)implementation is unchanged, so the
test still fails.
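For reference, the host NumPy behaviour under NEP-34:
```python
import numpy as np

explicit = np.array([[1, 2], [3]], dtype=object)  # still allowed: explicit dtype
try:
    implicit = np.array([[1, 2], [3]])            # warns or raises, depending on version
except ValueError as e:
    print("ragged array rejected:", e)
```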