On the master/EnvExperiment side, the only addition is an optional
property `argument_ui` that is made accessible to the dashboard, e.g.
```
class Example(EnvExperiment):
    argument_ui = "ndscan"

    def build(self):
        …
```
Clients – primarily artiq_dashboard, but in principle e.g. a
command-line UI could do the same – can then compare the value to a
list of well-known names and prefer any matching custom UI handlers.
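For illustration, the client-side selection could look roughly like the
following sketch; the function and parameter names here are made up and
are not the actual dashboard API:
```
# Hypothetical sketch of how a client might prefer a custom editor.
def select_argument_editor(experiment_class, registry, default_editor):
    # argument_ui is optional; fall back to the stock editor when it is
    # missing or names a UI this client does not know about.
    ui_name = getattr(experiment_class, "argument_ui", None)
    return registry.get(ui_name, default_editor)
```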
On the dashboard side, this commit adds the mechanism to register
a custom argument editor for a given argument_ui string, i.e. the
widget that displays the parameter values within the wider
experiment UI shell with the submit button, pipeline parameters, and
so on. The registry remains empty by default and would be filled by
out-of-tree plugins such as ndscan.
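A plugin could populate that registry along the following lines; this is
a hedged sketch with made-up names, not the actual registration API:
```
# Hypothetical sketch of an out-of-tree plugin filling the registry.
ARGUMENT_UI_CLASSES = {}  # maps argument_ui strings to editor widget classes

def register_argument_ui(name, editor_class):
    ARGUMENT_UI_CLASSES[name] = editor_class

# e.g. an ndscan dashboard plugin might call (class name illustrative):
# register_argument_ui("ndscan", ScanArgumentEditor)
```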
The UI state readback is implemented somewhat defensively to avoid
needless disruptions to users when upgrading.
* Revert "Merge pull request #1544 from airwoodix/dataset-compression"
This reverts commit 311a818a49, reversing
changes made to 7ffe4dc2e3.
* fix accidental revert of f42bea06a8
This breaks the internal dataset representation used by applets
and the format used when saving to disk (``dataset_db.pyon``).
See ``test/test_dataset_db.py`` and ``test/test_datasets.py``
for examples.
Signed-off-by: Etienne Wodey <wodey@iqo.uni-hannover.de>
Resolves the deprecation warning shown below, which is emitted when
worker_impl.py:199 is run:
```
WARNING:worker(RID,EXPERIMENT):py.warnings:/nix/store/77sw4p03cb7rdayx86agi4yqxh5wq46b-python3.7-artiq-5.7141.1b68906/lib/python3.7/site-packages/artiq/master/worker_impl.py:199: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
logging.warn(message)
```
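The fix is simply to use the non-deprecated spelling of the logging
helper, as the warning itself suggests:
```
import logging

message = "..."  # whatever was previously passed to logging.warn()
# logging.warn() is a deprecated alias; logging.warning() is the
# supported spelling and avoids the DeprecationWarning above.
logging.warning(message)
```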
See the test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.
This probably went undetected for so long because any unrelated
changes to the pipeline (e.g. new submissions, or experiments pausing)
would also cause _get_run() to be re-evaluated.
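In other words, the wakeup timeout has to be derived from the earliest
due date among all pending runs, not from the run that would win on
priority. A minimal sketch of that idea, with hypothetical names rather
than the actual scheduler code:
```
import time

def seconds_until_next_due(pending_runs):
    """Return how long the scheduler may sleep before a pending run is due.

    pending_runs is assumed to be an iterable of objects with a .due_date
    attribute (a UNIX timestamp, or None for "run as soon as possible").
    """
    due_dates = [run.due_date for run in pending_runs
                 if run.due_date is not None]
    if not due_dates:
        return None  # nothing with a scheduled time; no timeout needed
    return max(0.0, min(due_dates) - time.time())
```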
Previously, long-running experiments carried a significant risk of
losing experimental results, as any stray exceptions while run()ing
the experiment – for instance, due to infrequent network glitches or
hardware reliability issues – would cause no
HDF5 file to be written. This was especially troublesome as long
experiments would suffer from a higher probability of unanticipated
failures, while at the same time being more costly to re-take in
terms of wall-clock time.
Such unanticipated uncaught exceptions were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.
This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
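Conceptually, the change amounts to creating the results file before
run() and writing out whatever is available even when run() raises. A
much-simplified, h5py-based sketch of the idea – with a hypothetical
results() accessor standing in for the worker's dataset manager, not
the actual worker implementation:
```
import h5py

def run_with_partial_results(experiment, filename):
    # Create the HDF5 file as soon as run() starts, so a crash later on
    # no longer means losing everything gathered up to that point.
    with h5py.File(filename, "w") as f:
        group = f.create_group("datasets")
        try:
            experiment.run()
        finally:
            # results() is a hypothetical accessor; whatever it holds is
            # written out even if run() raised, then the file is closed.
            for key, value in experiment.results().items():
                group[key] = value
```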