With statement type #23
It seems that the `with` statement is currently used for making a parallel context. There is a discussion about deep parallel that would make all statements within a parallel block parallel to each other, but would not affect statements within a function.

Should we require `expr1` in `with expr1 [as expr2]` to have a certain type / implement a certain interface? Like the `__enter__` and `__exit__` methods.

It seems to me that it is possible to separate the `with parallel` construct from the core compiler: implement `parallel()` as a context manager and implement `@kernel` as a decorator that would wrap the function with `with sequential()`. With this strategy, only deep parallel is possible (similar to how a simulation would implement this, probably).

Just an FYI, this is what we do in our simulation. See here, where we wrap the function call in a `sequential` statement.

I think this is the case now. See for example this kernel-compatible context. Though currently there are a few minor limitations: the `__call__` function is not supported, and I am not sure if we can return a value from `__enter__` for assigning to `as expr2`. It would be nice to have those for NAC3.

Thanks for the reference.
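To make the interface requirement concrete, here is a minimal sketch of a context manager that only implements `__enter__`/`__exit__` and returns a value from `__enter__` for the `as` target. All names are illustrative, not the actual NAC3/ARTIQ API:

```python
class DeviceGuard:
    """Minimal context manager: only __enter__/__exit__ are required.
    Names here are hypothetical, for illustration only."""

    def __init__(self, name, log):
        self.name = name
        self.log = log

    def __enter__(self):
        self.log.append(("enter", self.name))
        return self  # this value is what "as target" binds to

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on both normal exit and exception, like a finally block.
        self.log.append(("exit", self.name))
        return False  # do not suppress exceptions

log = []
with DeviceGuard("ttl0", log) as g:
    log.append(("body", g.name))
```

Type-checking `with expr1 as expr2` then amounts to requiring these two methods on the type of `expr1`, and giving `expr2` the return type of `__enter__`.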
I don't think we need to support `__call__` for this. From https://docs.python.org/3/reference/compound_stmts.html#with, the documentation mentioned:

But I think that we should be aware of the performance implication of this: having many `with` contexts would slow down exception processing, because the `__exit__()` method has to be called when we exit from the suite, similar to a `finally` block.

Note that as we don't allow dynamic types, we would probably ignore the requirement to pass the exception object as an argument to the `__exit__()` function.

That is actually true. I should elaborate a bit there. We normally do not construct the context object inside the kernel; instead we create it earlier on the host and just pass the object to the `with` statement (example here). Hence, we cannot use syntax such as `with self.context_object(some_setting=1) as foo` to configure the context and immediately use it. That would require support for the `__call__` function of that object. So this would be a nice gimmick, but is not crucial.

Fair point. We mainly use contexts for safety. Slower exception processing is totally fine as long as the exception control flow is handled correctly. Currently there are some issues with that.
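The host-constructed pattern being described can be sketched as follows (class and attribute names are hypothetical, not the actual ARTIQ API):

```python
class Gate:
    """Hypothetical context object, constructed on the host."""

    def __init__(self, setting=0):
        self.setting = setting
        self.entered = False

    def __enter__(self):
        self.entered = True
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.entered = False
        return False

class Experiment:
    def __init__(self):
        # Built on the host, before the kernel runs.
        self.context_object = Gate(setting=1)

    def run(self):
        # Kernel side: only __enter__/__exit__ are needed here.
        with self.context_object as foo:
            return foo.setting

# The unsupported pattern would be:
#     with self.context_object(some_setting=1) as foo: ...
# which would call __call__ on the pre-built object inside the kernel.
```

The kernel only ever enters and exits a pre-existing object, so `__call__` support is never exercised.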
I think that is also how it works now. As long as the `__exit__()` signature is the same as defined for Python, I am fine with it!

`with parallel` is a special construct; an ad-hoc implementation (outside the core compiler, of course) sounds fine to me. We don't need special support from the compiler other than being able to plug in such an ad-hoc implementation (which only needs to be notified of parallel/sequential contexts being entered and exited). Implementing the implicit top-level `with sequential` can simply be done in the timing management code (probably in Rust) and does not have to be an actual Python wrapper.

What ad-hoc implementation are you thinking about? I think we need to design the interface if we need some sort of plugin capability for the core compiler.
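The "only needs to be notified on enter/exit" idea could be as small as the following sketch, where `parallel` and `sequential` are plain context managers whose sole job is to forward events to the timing runtime (all names hypothetical; the event list stands in for calls into Rust timing code):

```python
class _NotifyingContext:
    """Hypothetical: a context manager that only notifies the runtime."""

    def __init__(self, runtime, kind):
        self.runtime = runtime
        self.kind = kind  # "parallel" or "sequential"

    def __enter__(self):
        self.runtime.append(("enter", self.kind))

    def __exit__(self, exc_type, exc_value, traceback):
        self.runtime.append(("exit", self.kind))
        return False

runtime_events = []  # stand-in for the Rust timing-management code
parallel = _NotifyingContext(runtime_events, "parallel")
sequential = _NotifyingContext(runtime_events, "sequential")

with parallel:
    with sequential:
        pass
```

Everything else (how parallel timing actually behaves) lives behind those notifications, outside the compiler.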
How exactly should we do this? Are you thinking about rewriting all calls to timing-related APIs within the `with parallel` block into calls to a parallel version of the API, and using the sequential version by default for those not in a `with parallel` block?

See https://github.com/m-labs/artiq/blob/master/artiq/sim/time.py
The compiler simply needs to provide a means to call enter_sequential/enter_parallel/exit.
The stack of SequentialTimeContext/ParallelTimeContext could be in a statically allocated buffer, with a predefined maximum number of nested contexts that can be entered.
By the way, deep parallel semantics will cause the delay function to be slower than in the current design.
...unless we move some of the implementation to gateware, which can be done for softcore systems (on Zynq it would actually make things slower due to the PS/PL latency).
And in the code I linked, this is done by initializing the stack accordingly.
https://github.com/m-labs/artiq/blob/master/artiq/sim/time.py#L29
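The statically allocated context stack might look roughly like this Python model of the sim/time.py design with a fixed capacity (`MAX_NESTING` and the exact class layout are assumptions, not the actual runtime code):

```python
MAX_NESTING = 16  # assumed predefined maximum nesting depth

class SequentialTimeContext:
    def __init__(self, current_time):
        self.current_time = current_time
        self.block_duration = 0.0

    def take_time(self, amount):
        self.current_time += amount
        self.block_duration += amount

class ParallelTimeContext(SequentialTimeContext):
    def take_time(self, amount):
        # In parallel, the block lasts as long as its longest statement.
        if amount > self.block_duration:
            self.block_duration = amount

class TimeManager:
    def __init__(self):
        # "Statically allocated": fixed capacity, preinitialized with the
        # implicit top-level sequential context.
        self.stack = [None] * MAX_NESTING
        self.stack[0] = SequentialTimeContext(0.0)
        self.depth = 1

    def enter(self, ctx_cls):
        if self.depth >= MAX_NESTING:
            raise RuntimeError("too many nested timing contexts")
        top = self.stack[self.depth - 1]
        self.stack[self.depth] = ctx_cls(top.current_time)
        self.depth += 1

    def exit(self):
        self.depth -= 1
        popped = self.stack[self.depth]
        self.stack[self.depth - 1].take_time(popped.block_duration)

    def take_time(self, amount):
        self.stack[self.depth - 1].take_time(amount)

tm = TimeManager()
tm.enter(ParallelTimeContext)
tm.take_time(3.0)
tm.take_time(5.0)  # parallel block lasts as long as its longest statement
tm.exit()
```

A Rust version would replace the list with a fixed-size array, avoiding any allocation at kernel runtime.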
Three questions:

`__enter__()` and `__exit__()`. ebb67eaeee/artiq/language/core.py (L212-L245)

`with sequential` block? I don't have much experience with these semantics.

Why?
I think for an ad-hoc implementation, the easiest way to do this is an AST rewrite in nac3embedded. No complicated interface is needed for the core compiler.

And I don't have a preference between implementing the shallow or deep parallel semantics. I mention the deep parallel semantics just because they should be possible to implement without any compiler-level support.

Although personally I do think the deep parallel semantics would make more sense...

Currently this is for simulation, and implements deep parallel. I propose to rewrite it in Rust and integrate it into nac3embedded and/or the runtime. An AST rewrite is one option; a simple API that makes the compiler emit Rust function calls on entering/leaving context managers with certain names is another.
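A minimal sketch of the AST-rewrite option, using CPython's `ast` module (the runtime call names `enter_parallel`/`exit_context` are hypothetical):

```python
import ast

class ParallelRewriter(ast.NodeTransformer):
    """Hypothetical sketch: turn
        with parallel: <body>
    into explicit runtime notifications
        enter_parallel(); try: <body> finally: exit_context()
    so the exit call also runs on exceptions."""

    def visit_With(self, node):
        self.generic_visit(node)
        if (len(node.items) == 1
                and isinstance(node.items[0].context_expr, ast.Name)
                and node.items[0].context_expr.id == "parallel"):
            enter = ast.Expr(ast.Call(ast.Name("enter_parallel", ast.Load()), [], []))
            exit_ = ast.Expr(ast.Call(ast.Name("exit_context", ast.Load()), [], []))
            return [enter, ast.Try(body=node.body, handlers=[],
                                   orelse=[], finalbody=[exit_])]
        return node

src = "with parallel:\n    delay(1)\n    delay(2)\n"
tree = ParallelRewriter().visit(ast.parse(src))
ast.fix_missing_locations(tree)
```

This keeps the core compiler unchanged: it only ever sees ordinary calls, not the `with parallel` construct.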
Isn't this just how the `with` block works in Python? Just require `expr` in `with expr` to implement the `__enter__(self)` and `__exit__(self)` functions; we can type-check this and do code generation for it. Is there a reason to provide a compiler API to handle this?

Sure, if we have sufficient support in the compiler for context managers and for global objects (like `parallel` and `sequential` would need to be), then we can use it. But a shortcut is possible.
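For reference, the code generation being discussed mirrors Python's own lowering of a `with` block; a rough sketch of the equivalent control flow, using the simplified `__exit__` arguments mentioned earlier (no dynamic exception object):

```python
class Tracker:
    """Hypothetical context manager used to observe the lowering."""

    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.events.append("exit")
        return False

def desugared_with(mgr, body):
    # Roughly the control flow the compiler would emit for
    # "with mgr as target: body(target)"; exception arguments
    # are simplified to None, as discussed above.
    target = mgr.__enter__()
    try:
        body(target)
    finally:
        mgr.__exit__(None, None, None)

t = Tracker()
desugared_with(t, lambda target: target.events.append("body"))
```

Given type-checked `__enter__`/`__exit__` methods, this lowering needs no special compiler API beyond ordinary method calls and a `finally`-style cleanup path.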