This reverts commits f8d1506922
and cf19c9512d.
While one of the reverted commits merely fixes a clear typo in the
implementation, it turns out the original algorithm is not flexible
enough to capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function that simply forwards to it (as seen in the
embedding tests) no longer is.
A separate issue is that this also makes it impossible to implement,
in user code, functions that take lists and return references to
global data, a pattern that central parts of the Oxford codebase
rely on.
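For illustration, a hypothetical sketch of the forwarding pattern
that the reverted analysis rejected; the class, device names and key
are made up for this example, not taken from the actual tests:

```python
from artiq.experiment import EnvExperiment, kernel


class CacheForwarding(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_cache")

    @kernel
    def get_values(self, key):
        # core_cache.get() maps to cache_get(), which the compiler
        # special-cases as returning data of Global() lifetime. Under
        # the reverted commits, this thin wrapper was no longer
        # inferred to return long-lived data, so returning its result
        # was rejected.
        return self.core_cache.get(key)

    @kernel
    def run(self):
        values = self.get_values("calibration")
```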
Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
I contemplated putting this in the "Breaking changes" section, as
it might break user code that has so far only avoided being hit by
memory corruption from the use-after-free by chance (even though
it was always an accepts-illegal bug).
* Never drive SCL or SDA high. They are specified to be open
collector/drain and pulled up by resistive pullups. Driving them
high fails miserably in a multi-master topology (e.g. with
a USB I2C interface). Actively driving the lines high would only
ever be done to speed up the bus, but that is tricky and completely
unnecessary here (see the sketch after this list).
* Make the handover states between the I2C protocol phases (start, stop,
restart, write, read) well defined. Add comments stressing those
pre/postconditions.
* Add checks for SDA arbitration failures and stuck SCL.
* Remove wrong, misleading or redundant comments.
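As a minimal bit-bang sketch of the open-drain discipline and the
failure checks described above; the bus.sda_oe()/scl_oe()/sda_in()/
scl_in() helpers (output-enable meaning "drive low") are hypothetical
placeholders, not the actual runtime API:

```python
class I2CError(Exception):
    pass


def i2c_start(bus):
    # Precondition: bus idle, SCL and SDA both released (pulled high).
    # Postcondition: SCL driven low, ready for the first data bit.
    bus.sda_oe(True)   # low is the only level we ever drive;
    bus.delay()        # "high" always means releasing the line
    bus.scl_oe(True)
    bus.delay()


def i2c_write_bit(bus, value):
    # Pre/postcondition: SCL driven low.
    bus.sda_oe(not value)  # drive SDA low for 0, release it for 1
    bus.delay()
    bus.scl_oe(False)      # release SCL so the bit can be sampled
    bus.delay()
    if not bus.scl_in():
        # A real implementation would poll with a timeout to allow
        # clock stretching; here we only flag a stuck SCL.
        raise I2CError("SCL stuck low")
    if value and not bus.sda_in():
        # We released SDA but it reads back low: another master is
        # driving the bus, i.e. we lost arbitration.
        raise I2CError("SDA arbitration lost")
    bus.scl_oe(True)       # take SCL low again before the next bit
```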
Before, the system would enter a boot loop when a panic occurred
while the kernel CPU was active (and panic_reset == 1), as
kernel::start() for the startup kernel would panic.
See test case – previously, the highest-priority pending run would
be used to calculate the timeout, rather than the earliest one.
This probably managed to go undetected for so long because any
unrelated changes to the pipeline (e.g. new submissions, or
experiments pausing) would also cause _get_run() to be re-evaluated.
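A sketch of the difference (hypothetical, not the actual _get_run()
or scheduler code), assuming each pending run carries `priority` and
`due_date` attributes:

```python
def timeout_to_next_run(pending_runs, now):
    # Buggy behaviour: derive the timeout from the highest-priority
    # pending run, even if another run is due earlier.
    # best = max(pending_runs, key=lambda r: r.priority)
    # return max(0.0, best.due_date - now)

    # Fixed behaviour: use the earliest due date among all pending runs.
    earliest = min(r.due_date for r in pending_runs)
    return max(0.0, earliest - now)
```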
Previously, long-running experiments carried a significant risk of
losing experimental results, as any stray exception while run()ing
the experiment – for instance, due to infrequent network glitches
or hardware reliability issues – would cause no HDF5 file to be
written at all. This was especially troublesome as long experiments
suffer from a higher probability of unanticipated failures, while
at the same time being more costly to re-take in terms of
wall-clock time.
Unanticipated uncaught exceptions like that were enough of an issue
that several Oxford codebases had come up with their own half-baked
mitigation strategies, from swallowing all exceptions in run() by
convention, to always broadcasting all results to uniquely named
datasets such that the partial results could be recovered and written
to HDF5 by manually run recovery experiments.
This commit addresses the problem at its source, changing the worker
behaviour such that an HDF5 file is always written as soon as run()
starts.
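As a rough illustration of the idea only (not the actual worker
code, which routes datasets through the master), assuming a `run`
callable that writes results into the open h5py file as it goes:

```python
import h5py


def run_stage(run, results_path):
    # Create the HDF5 file before run() is entered, rather than only
    # after it has returned successfully; whatever datasets have been
    # written by then survive a stray exception during the experiment.
    with h5py.File(results_path, "w") as f:
        run(f)
```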
* ad9910: fix asf range
The ASF is a 14-bit word. The highest possible value is 0x3fff, not
0x3ffe. `int(round(1.0 * 0x3fff)) == 0x3fff`.
I do not remember or understand why this has been 0x3ffe since the
beginning. 0x3fff was already used as a default in `set_mu()`.
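For illustration, the corrected full-scale conversion (a sketch
following the arithmetic above, not necessarily the driver's exact
amplitude_to_asf() implementation):

```python
def amplitude_to_asf(amplitude):
    # ASF is a 14-bit word: full scale is 0x3fff, not 0x3ffe.
    asf = int(round(amplitude * 0x3fff))
    if not 0 <= asf <= 0x3fff:
        raise ValueError("Invalid AD9910 fractional amplitude!")
    return asf


assert amplitude_to_asf(1.0) == 0x3fff
```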
Signed-off-by: Robert Jördens <rj@quartiq.de>
* RELEASE_NOTES: ad9910 asf scale change
Co-authored-by: David Nadlinger <code@klickverbot.at>