
doc: convert to 'with parallel'

Sebastien Bourdeauducq 2016-02-23 15:53:41 +08:00
parent 82f4fd1290
commit 5d41113dbc
3 changed files with 16 additions and 27 deletions

View File

@@ -156,23 +156,23 @@ The core device records the real-time IO waveforms into a circular buffer. It is
 Afterwards, the recorded data can be extracted and written to a VCD file using ``artiq_coreanalyzer -w rtio.vcd`` (see: :ref:`core-device-rtio-analyzer-tool`). VCD files can be viewed using third-party tools such as GtkWave.

-Interleave and sequential blocks
---------------------------------
+Parallel and sequential blocks
+------------------------------

-It is often necessary that several pulses overlap one another. This can be expressed through the use of ``with interleave`` constructs, in which all statements execute at the same time. The execution time of the ``interleave`` block is the execution time of its longest statement.
+It is often necessary that several pulses overlap one another. This can be expressed through the use of ``with parallel`` constructs, in which all statements execute at the same time. The execution time of the ``parallel`` block is the execution time of its longest statement.

 Try the following code and observe the generated pulses on a 2-channel oscilloscope or logic analyzer: ::

     for i in range(1000000):
-        with interleave:
+        with parallel:
             self.ttl0.pulse(2*us)
             self.ttl1.pulse(4*us)
         delay(4*us)

-Within a interleave block, some statements can be made sequential again using a ``with sequential`` construct. Observe the pulses generated by this code: ::
+Within a parallel block, some statements can be made sequential again using a ``with sequential`` construct. Observe the pulses generated by this code: ::

     for i in range(1000000):
-        with interleave:
+        with parallel:
             with sequential:
                 self.ttl0.pulse(2*us)
                 delay(1*us)
@@ -180,5 +180,5 @@ Within a interleave block, some statements can be made sequential again using a
             self.ttl1.pulse(4*us)
         delay(4*us)

-.. warning::
-    In its current implementation, ARTIQ only supports those pulse sequences that can be interleaved at compile time into a sequential series of on/off events. Combinations of ``interleave``/``sequential`` blocks that require multithreading (due to the parallel execution of long loops, complex algorithms, or algorithms that depend on external input) will cause the compiler to return an error.
+.. note::
+    Branches of a ``parallel`` block are executed one after another, with a reset of the internal RTIO time variable before moving to the next branch. If a branch takes a lot of CPU time, it may cause an underflow when the next branch begins its execution.
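
As a rough illustration of the behaviour described in the new note (a sketch under assumptions, not part of this commit): reusing the ``ttl0``/``ttl1`` devices from the snippet above inside a hypothetical ``EnvExperiment``, a first branch that keeps the CPU busy for a long time leaves the next branch's events in the past: ::

    from artiq.experiment import *

    class ParallelUnderflowSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")
            self.setattr_device("ttl1")

        @kernel
        def run(self):
            self.core.reset()
            delay(100*us)                 # some initial slack
            with parallel:
                # Branch 1: the RTIO time cursor advances 4*us per iteration
                # and stays ahead of the wall clock, but the CPU still spends
                # many milliseconds issuing these events.
                for i in range(10000):
                    self.ttl0.pulse(2*us)
                    delay(2*us)
                # Branch 2: the time cursor is reset to the parallel block's
                # entry time, which the wall clock has long since passed, so
                # this pulse is likely to raise RTIOUnderflow.
                self.ttl1.pulse(2*us)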

View File

@@ -117,10 +117,10 @@ dds.pulse(200*MHz, 11*us) # exactly 1 ms after trigger
 \footnotesize
 \begin{minted}[frame=leftline]{python}
 with sequential:
-    with interleave:
+    with parallel:
         a.pulse(100*MHz, 10*us)
         b.pulse(200*MHz, 20*us)
-    with interleave:
+    with parallel:
         c.pulse(300*MHz, 30*us)
         d.pulse(400*MHz, 20*us)
 \end{minted}
@@ -128,11 +128,10 @@ with sequential:
 \begin{itemize}
 \item Experiments are inherently parallel:
   simultaneous laser pulses, parallel cooling of ions in different trap zones
-\item \verb!interleave! and \verb!sequential! contexts with arbitrary nesting
+\item \verb!parallel! and \verb!sequential! contexts with arbitrary nesting
 \item \verb!a! and \verb!b! pulses both start at the same time
 \item \verb!c! and \verb!d! pulses both start when \verb!a! and \verb!b! are both done
   (after 20\,µs)
-\item Implemented by inlining, loop-unrolling, and interleaving
 \end{itemize}
 \end{frame}
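
The timing claims on this slide can be restated as a hedged sketch (TTL channels ``ttl_a``–``ttl_d`` stand in for the DDS channels ``a``–``d``; the class and device names are placeholders, not from the slides): ::

    from artiq.experiment import *

    class NestingTimingSketch(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            for name in ["ttl_a", "ttl_b", "ttl_c", "ttl_d"]:
                self.setattr_device(name)

        @kernel
        def run(self):
            self.core.reset()
            with sequential:
                with parallel:
                    self.ttl_a.pulse(10*us)   # starts at t0
                    self.ttl_b.pulse(20*us)   # also starts at t0
                # The time cursor now sits at t0 + 20 us (the end of the
                # longest branch), so the next block starts when both are done.
                with parallel:
                    self.ttl_c.pulse(30*us)   # starts at t0 + 20 us
                    self.ttl_d.pulse(20*us)   # starts at t0 + 20 us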
@@ -181,7 +180,7 @@ class Experiment:
     @kernel
     def run(self):
-        with interleave:
+        with parallel:
             self.ion1.cool(duration=10*us)
             self.ion2.cool(frequency=...)
             self.transporter.move(speed=...)
@@ -219,11 +218,6 @@ class Experiment:
 \frametitle{Kernel deployment to the core device}
 \footnotesize
 \begin{itemize}
-\item RPC and exception mappings are generated
-\item Constants and small kernels are inlined
-\item Small loops are unrolled
-\item Statements in interleave blocks are interleaved
-\item Time is converted to RTIO clock cycles
 \item The Python AST is converted to LLVM IR
 \item The LLVM IR is compiled to OpenRISC machine code
 \item The OpenRISC binary is sent to the core device

View File

@@ -133,10 +133,10 @@ dds.pulse(200*MHz, 11*us) # exactly 1 ms after trigger
 \footnotesize
 \begin{minted}[frame=leftline]{python}
 with sequential:
-    with interleave:
+    with parallel:
         a.pulse(100*MHz, 10*us)
         b.pulse(200*MHz, 20*us)
-    with interleave:
+    with parallel:
         c.pulse(300*MHz, 30*us)
         d.pulse(400*MHz, 20*us)
 \end{minted}
@@ -144,7 +144,7 @@ with sequential:
 \begin{itemize}
 \item Experiments are inherently parallel:
   simultaneous laser pulses, parallel cooling of ions in different trap zones
-\item \verb!interleave! and \verb!sequential! contexts with arbitrary nesting
+\item \verb!parallel! and \verb!sequential! contexts with arbitrary nesting
 \item \verb!a! and \verb!b! pulses both start at the same time
 \item \verb!c! and \verb!d! pulses both start when \verb!a! and \verb!b! are both done
   (after 20\,µs)
@@ -197,7 +197,7 @@ class Experiment:
     @kernel
     def run(self):
-        with interleave:
+        with parallel:
             self.ion1.cool(duration=10*us)
             self.ion2.cool(frequency=...)
             self.transporter.move(speed=...)
@@ -235,11 +235,6 @@ class Experiment:
 \frametitle{Kernel deployment to the core device}
 \footnotesize
 \begin{itemize}
-\item RPC and exception mappings are generated
-\item Constants and small kernels are inlined
-\item Small loops are unrolled
-\item Statements in interleave blocks are interleaved
-\item Time is converted to RTIO clock cycles
 \item The Python AST is converted to LLVM IR
 \item The LLVM IR is compiled to OpenRISC machine code
 \item The OpenRISC binary is sent to the core device