mirror of https://github.com/m-labs/artiq.git
cb3b811fd7
After this commit, the delay instruction (again) does not generate
any LLVM IR: all heavy lifting is relegated to the delay and delay_mu
intrinsics. When the interleave transform needs to adjust the global
timeline, it synthesizes a delay_mu intrinsic. This way,
the interleave transformation becomes composable, as the input and
the output IR invariants are the same.
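
A hedged sketch of the kind of kernel code this affects: `delay()` takes a duration in seconds and `delay_mu()` takes raw machine units, and both now lower to intrinsics instead of emitting LLVM IR at the call site. The experiment class, device name `ttl0`, and the `artiq.experiment` import path below are illustrative assumptions, not part of this commit.

```python
from artiq.experiment import *  # assumed import path; provides kernel, delay, delay_mu, us, ms

class TimelineExample(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl0")   # hypothetical TTL output channel

    @kernel
    def run(self):
        self.core.reset()
        self.ttl0.pulse(2*us)
        # delay() advances the timeline by a duration in seconds; after this
        # commit it lowers to the delay intrinsic rather than to inline IR.
        delay(1*ms)
        # delay_mu() advances the timeline by raw machine units; the
        # interleave transform synthesizes calls like this when it needs
        # to adjust the global timeline.
        delay_mu(8)
        self.ttl0.pulse(2*us)
```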
Also, code generation is adjusted so that a basic block is split off
not only after a delay call, but also before one; otherwise, e.g.,
code immediately at the beginning of a `with parallel:` branch
would have no choice but to execute after another branch has already
advanced the timeline.
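
A hedged sketch of the `with parallel:` situation described above, using hypothetical `ttl0`/`ttl1` devices: both branches should start at the same timeline position, so the code at the head of the second branch must get its own basic block rather than being forced to run after the first branch's delay.

```python
@kernel
def run(self):
    self.core.reset()
    with parallel:
        with sequential:
            self.ttl0.pulse(10*us)   # branch A: pulses and advances the timeline
        with sequential:
            # Branch B begins with a pulse that must start at the same
            # timeline position as branch A's pulse. Splitting a basic block
            # *before* a delay call (not only after it) gives this code its
            # own block, so the interleaver can schedule it at the start of
            # the parallel region instead of after branch A's delay.
            self.ttl1.pulse(10*us)
            delay(5*us)
            self.ttl1.pulse(10*us)
```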
This takes care of issue #1 described in
Tree contents at this commit:

codegen
devirtualization
embedding
exceptions
inferencer
integration
interleaving
iodelay
local_access
monomorphism
time
lit.cfg