This commit solves issue #2 described in 50e7b44; a function call
is now a valid decomposition for a delay instruction, and this
metadata is propagated when the interleaver converts delays.
However, the interleaver does not yet detect whether a called
function is itself compound, so the transformation is not yet
correct in that case.
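As a rough sketch of the metadata involved (class names here are
illustrative, not the compiler's actual IR classes), a delay records
its decomposition, which may now be a call, and the conversion keeps
that link:

    class Call:
        def __init__(self, function, args):
            self.function, self.args = function, args

    class Delay:
        def __init__(self, interval_mu, decomposition):
            self.interval_mu = interval_mu      # length of the delay
            self.decomposition = decomposition  # arithmetic insn or a Call

    def convert_delay(delay, new_interval_mu):
        # Hypothetical interleaver step: the interval changes, but the
        # decomposition is carried over, so a later pass can still see
        # that the delay came from a function call.
        return Delay(new_interval_mu, delay.decomposition)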
After this commit, the delay instruction (again) does not generate
any LLVM IR: all heavy lifting is relegated to the delay and delay_mu
intrinsics. When the interleave transform needs to adjust the global
timeline, it synthesizes a delay_mu intrinsic. This way,
the interleave transformation becomes composable, as the input and
the output IR invariants are the same.
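A minimal sketch of this invariant, assuming a simplified IR in which
a basic block is a plain list of instructions (the Builtin and
Constant classes below are illustrative, not the compiler's actual
API):

    class Constant:
        def __init__(self, value):
            self.value = value

    class Builtin:
        """A call to a compiler-known intrinsic such as delay_mu."""
        def __init__(self, name, operands):
            self.name, self.operands = name, operands

    def advance_timeline(block, delta_mu):
        # Instead of open-coding timeline arithmetic in LLVM IR, append
        # a delay_mu intrinsic; the output of the pass then satisfies
        # the same invariants as its input and the pass can be re-run.
        if delta_mu != 0:
            block.append(Builtin("delay_mu", [Constant(delta_mu)]))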
Also, code generation is adjusted so that a basic block is split off
not only after a delay call, but also before one; otherwise, e.g.,
code immediately at the beginning of a `with parallel:` branch
would have no choice but to execute after another branch has already
advanced the timeline.
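As an illustration, a split pass over a flat instruction list could
look like the following (a sketch only; is_delay is a hypothetical
predicate):

    def split_at_delays(instructions, is_delay):
        blocks, current = [], []
        for insn in instructions:
            if is_delay(insn):
                if current:              # split *before* the delay call
                    blocks.append(current)
                blocks.append([insn])    # the delay gets its own block
                current = []             # split *after* the delay call
            else:
                current.append(insn)
        if current:
            blocks.append(current)
        return blocks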
These changes take care of issue #1 described in 50e7b44 and are
a step toward solving issue #2.
The previous implementation was completely wrong: it always advanced
the global timeline by the same amount as the non-interleaved basic
block did.
The new implementation only advances the global timeline by
the difference between its current time and the virtual time of
the branch, which requires it to adjust the delay instructions.
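In pseudocode terms (variable names are illustrative):

    def timeline_adjustment(global_now_mu, branch_virtual_time_mu):
        # Advance the global timeline only by the gap between the time
        # already accumulated while interleaving and the virtual time
        # of the branch; a delay_mu for this delta is synthesized only
        # when the gap is nonzero.
        delta_mu = branch_virtual_time_mu - global_now_mu
        return max(delta_mu, 0)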
Previously, the delay expression was present in the IR twice: once
as the iodelay.Expr transformation-visible form, and once as regular
IR instructions, with the latter form being passed to the delay_mu
builtin and advancing the runtime timeline.
As a result of this change, this strategy is no longer valid:
we can meaningfully mutate the iodelay.Expr form but not the IR
instruction form. Thus, IR instructions are no longer generated for
delay expressions, and the LLVM lowering pass now has to lower
the iodelay.Expr objects as well.
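As a sketch of what lowering such an expression might involve, using
llvmlite over a toy tuple-based expression tree (the node shapes are
assumptions, not the real iodelay.Expr hierarchy):

    from llvmlite import ir

    i64 = ir.IntType(64)

    def lower_iodelay(builder, expr, env):
        """Recursively lower a toy iodelay-style expression to i64."""
        kind = expr[0]
        if kind == "const":    # ("const", 42)
            return ir.Constant(i64, expr[1])
        elif kind == "var":    # ("var", "n"): value from the codegen env
            return env[expr[1]]
        elif kind == "add":    # ("add", lhs, rhs)
            return builder.add(lower_iodelay(builder, expr[1], env),
                               lower_iodelay(builder, expr[2], env))
        else:
            raise NotImplementedError(kind)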
This scheme works OK for flat `with parallel:` expressions, but breaks down
outside of `with parallel:` or when calls are present. The reasons
it breaks down are as follows:
* Outside of `with parallel:`, delay() and delay_mu() must accept
any expression, but iodelay.Expr objects are not nearly expressive
enough. So, the IR instruction form must actually be kept as well.
* A delay instruction is currently inserted after a call to
a user-defined function; this delay instruction introduces
a point where basic block reordering is possible and also
provides the delay information. However, the callee knows nothing
about the context in which it is called, which means that
the runtime timeline is advanced twice. So, a new terminator
instruction must be added that combines the properties of delay
and call instructions (and another for delay and invoke), as
sketched below.
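A hypothetical shape for such terminators (names and fields are
assumptions, not the compiler's actual instruction classes):

    class DelayedCall:
        """Terminator with the properties of a delay and a call."""
        def __init__(self, function, args, interval, successor):
            self.function = function
            self.args = args
            self.interval = interval   # duration attributed to the callee
            self.successors = [successor]

    class DelayedInvoke(DelayedCall):
        """The invoke variant adds an exceptional successor."""
        def __init__(self, function, args, interval,
                     successor, exception_successor):
            super().__init__(function, args, interval, successor)
            self.successors.append(exception_successor)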
The delay instruction is just like a branch (discontinuity
in instruction flow), but it also carries metadata: how long
the execution of its basic block took. This metadata only
matters during inlining and interleaving, so we treat it here
as a mere branch.
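Concretely, lowering can then emit nothing but the control-flow edge,
as in this llvmlite-based sketch (delay_insn and the block map are
simplified stand-ins for the real codegen state):

    def lower_delay(builder, delay_insn, llvm_blocks):
        # The interval metadata is ignored at this stage; only the
        # branch to the successor basic block is emitted.
        builder.branch(llvm_blocks[delay_insn.target()])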