Lists and arrays no longer share the same representation all
the way through codegen, as they previously did.
This could/should be made more efficient later, eliding the
temporary copies.
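At the source level, the split shows up where an array is constructed
from a list; a hedged host-Python sketch of the semantics (the
kernel-side representations themselves are internal to codegen):

```python
import numpy as np

xs = [1, 2, 3]      # list: lowered as roughly a (length, pointer) pair
a = np.array(xs)    # array: distinct representation in codegen; for now
                    # the elements are first copied into a temporary
```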
Left out generic transpose (shape order inversion) for now, as it
would be less ugly to implement once we support forwarding to Python
function bodies for array function implementations.
Needs a runtime test case.
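For reference, the deferred operation is a full shape-order inversion;
a minimal host-Python sketch (not compiler code, and not the eventual
forwarded implementation):

```python
import numpy as np

def generic_transpose(a):
    # Shape order inversion: element (i0, ..., ik) moves to (ik, ..., i0).
    out = np.empty(a.shape[::-1], dtype=a.dtype)
    for idx in np.ndindex(*a.shape):
        out[idx[::-1]] = a[idx]
    return out

a = np.arange(24).reshape(2, 3, 4)
assert (generic_transpose(a) == np.transpose(a)).all()
```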
LLVM will take care of optimising the loops. This was still
unnecessarily painful; implementing generics so that this can be
written in ARTIQ Python looks very attractive right now.
Relies on the runtime to provide the necessary
(libm-compatible) functions.
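A hedged example of the calls in question (standard NumPy spelling;
each elementwise call lowers to a loop over the runtime-provided
libm-compatible function):

```python
import numpy as np

a = np.array([0.0, 0.25, 1.0])
b = np.sin(a)    # per-element loop calling the runtime's sin()
c = np.sqrt(a)   # likewise for sqrt()
```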
The test is nifty, but a bit brittle; if this breaks in the
future because of optimizer changes, do not hesitate to convert
this into a more pedestrian test case.
So far, this is not exposed to the user beyond implicit conversions.
Note that all the implicit conversions, such as those triggered by
adding arrays of mismatching types or dividing integer arrays, are
currently emitted in a maximally inefficient way: a temporary copy is
first made for the type conversion. The conversions would more sensibly
be performed during the per-element operations to save on the extra
copies, but the current behaviour fell out of the rest of the IR
generator structure without extra changes.
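For example (host NumPy shown for the semantics; in kernel code each
promotion below currently materialises a converted temporary first):

```python
import numpy as np

ints = np.array([1, 2, 3])
floats = np.array([0.5, 1.5, 2.5])

s = ints + floats   # mismatching element types: in kernel code, ints is
                    # first converted to float via a temporary copy
q = ints / 2        # dividing an integer array: same story, operands are
                    # promoted to float before the per-element operation
```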
Matches NumPy. Slicing a TList reallocates; this doesn't. Offsetting
couldn't be handled in the IR without introducing new semantics
(hence the Alloc kludge, which could/should be made its own IR type).
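The contrast, illustrated with host Python/NumPy, whose semantics this
matches:

```python
import numpy as np

xs = [1, 2, 3, 4]
ys = xs[1:3]            # list slice: reallocates, ys is a fresh copy

a = np.array([1, 2, 3, 4])
b = a[1:3]              # array slice: offsets into the same storage
b[0] = 7
assert a[1] == 7        # writes through the slice are visible in a
```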
Still needs support through all the rest of the compiler, and
support for higher-dimensional arrays.
Alternatively, we could always assume ndarrays of ndarrays
are rectangular (i.e. ban array/list element types) and detect
mismatches at runtime. This might turn out to be preferable, as it
would allow matrices to be constructed from rows/columns.
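Under that alternative, construction would behave roughly as follows
(a hypothetical sketch, not current semantics):

```python
import numpy as np

m = np.array([[1, 2], [3, 4]])   # rectangular rows: fine, shape (2, 2)

# Ragged input such as the following would be rejected with a runtime
# shape-mismatch error rather than being typed as an array of arrays:
#   np.array([[1, 2], [3]])
```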
`array()` is disallowed for no particularly good reason other than
numpy API compatibility.
* coredevice.ad9910: Add return type hints to conversion functions
* coredevice.ad9910: Make set_pow write the correct number of bits
The AD9910 expects 16 bits; writing 32 bits to the POW register would likely leave the chip in a locked-up state (see the sketch after this list).
* coredevice.ad9910: Correct data alignment in write_16
Co-authored-by: Robert Jördens <rj@quartiq.de>
* coredevice.ad9910: Add function to read from 16 bit registers
Co-authored-by: drmota <peter.drmota@physics.ox.ac.uk>
Co-authored-by: Robert Jördens <rj@quartiq.de>
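A minimal sketch of the 16-bit POW scaling behind set_pow and the
conversion functions (illustrative host Python; the names mirror the
driver's turns_to_pow/pow_to_turns helpers, but this is not the driver
source):

```python
def turns_to_pow(turns: float) -> int:
    # The POW register is 16 bits, so one turn maps onto 2**16 codes.
    return round(turns * (1 << 16)) & 0xFFFF

def pow_to_turns(pow_: int) -> float:
    return pow_ / (1 << 16)

assert turns_to_pow(0.5) == 0x8000   # half a turn sets the MSB
```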
This reverts commits f8d1506922
and cf19c9512d.
While the commit just fixes a clear typo in the implementation,
it turns out the original algorithm isn't flexible enough to
capture functions that transitively return references to
long-lived data. For instance, while cache_get() is special-cased
in the compiler to be recognised as returning a value of Global()
lifetime, a function merely forwarding to it (as seen in the
embedding tests) is no longer recognised as such.
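The failing pattern, roughly (a hypothetical sketch in the spirit of
the embedding tests, not the actual test code; class and attribute
names are assumptions):

```python
from artiq.experiment import kernel

class CacheUser:
    @kernel
    def get_calibration(self, key):
        # cache_get() is special-cased to return Global()-lifetime data,
        # but under the reverted change this one-line forwarder was no
        # longer recognised as doing the same.
        return self.core_cache.get(key)
```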
A separate issue is that this makes it impossible to implement
user-code functions that take lists and return references to global
data, which central parts of the Oxford codebase rely on.
Just reverting for now to unblock master; a fix is easily designed,
but needs testing.
I contemplated putting this in the "Breaking changes" section,
as it might break user code that has only avoided being hit by
memory corruption from the use-after-free by chance (even
though it was always an accepts-illegal bug).