forked from M-Labs/artiq

Compare commits


107 Commits

Author SHA1 Message Date
abdul124 38c72fdab4 firmware/ksupport: restore exception id order 2024-08-23 16:25:49 +08:00
abdul124 1f1ae74a5f coredevice: add UnwrapNoneError class 2024-08-23 16:25:49 +08:00
abdul124 89f75c989c compiler: restore exception ids used 2024-08-23 16:25:49 +08:00
abdul124 70c2ca58af firmware/ksupport: improve comments and syscall name 2024-08-21 11:27:42 +08:00
abdul124 d389f8e25b coredevice/test: add unittests for exceptions 2024-08-21 11:27:42 +08:00
abdul124 de6f83b009 firmware/ksupport: add exception unittests 2024-08-21 11:27:42 +08:00
abdul124 d8386f2c2b coredevice/comm_kernel: map exceptions to correct names 2024-08-21 11:27:42 +08:00
abdul124 9591549b37 sync exception names and ids 2024-08-21 11:27:42 +08:00
Harry Ying e82cf94cae test_scheduler: remove the exact ordering assertion
Currently the exact ordering is compared in the `pending_priorities`
test, which is not necessary and is non-deterministic. This patch only
asserts that the middle-priority experiment is run before the high-priority
experiment scheduled in the future.

Signed-off-by: Kanyang Ying <lexugeyky@outlook.com>
2024-07-11 09:38:42 +02:00
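A minimal illustration of the assertion style described above (hypothetical names, not the actual test code): only the relative order of the two runs that matter is checked, leaving the rest of the ordering free to vary.
```
# Hypothetical observed completion order of three scheduled experiments.
completion_order = ["low_prio", "middle_prio", "high_prio_due_later"]

# Assert only the property the test cares about: the middle-priority run
# completes before the high-priority run scheduled in the future.
assert completion_order.index("middle_prio") < completion_order.index("high_prio_due_later")
```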
Simon Renblad 7e6a94bdff browser: fix ctrl scroll type error 2024-07-10 10:42:23 +02:00
architeuthidae 7f79dedea3 math_fns: Delete duplicated definition 2024-07-04 13:32:25 +02:00
Egor Savkin bb014d8187 Update dead links, partially remove unavailable resources
Signed-off-by: Egor Savkin <es@m-labs.hk>
2024-06-27 14:03:48 +08:00
Sébastien Bourdeauducq 3c33b22cb7 doc: fix conda URL 2024-06-27 13:11:49 +08:00
Florian Agbuya c1144079ab test_embedding: add int boundary test from 25168422a
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-06-07 14:12:03 +08:00
Florian Agbuya 25346780bf compiler: fix int boundary checks
Signed-off-by: Florian Agbuya <fa@m-labs.ph>
2024-06-07 14:12:03 +08:00
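The boundary fix above, spelled out as a short sketch (the real changes live in the compiler's embedding and monomorphization passes, visible in the diffs further down): -2**31 and 2**31-1 are themselves representable 32-bit values, so the width checks must use inclusive comparisons.
```
def int_width(n):
    # Inclusive bounds: INT32_MIN/MAX and INT64_MIN/MAX are valid values.
    if -2**31 <= n <= 2**31 - 1:
        return 32
    elif -2**63 <= n <= 2**63 - 1:
        return 64
    else:
        raise OverflowError("integer does not fit a 64-bit machine type")

assert int_width(-2**31) == 32       # previously rejected by the strict '<' check
assert int_width(2**31 - 1) == 32    # likewise
```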
mwojcik c81280174c remove cargo-xbuild, fix nix build 2024-06-06 11:02:48 +08:00
architeuthis 2b944822d3 Minor manual fixes 2024-06-03 15:12:39 +08:00
mwojcik 9569cfb263 adf5356: sync before setting muxout 2024-05-31 08:34:58 +08:00
Simon Renblad db79100a56 browser, dashboard: fix restore scrollbar state 2024-03-20 11:36:33 +08:00
morgan 254277578d ddb template: remove extra clk_div white space 2024-03-15 11:01:15 +08:00
morgan 02dfb3e5ac artiq_ddb_template: backport clk_div config fix
add pll_en to json schema and ddb_template
set CLK IN divided by 1 as default when bypassing AD9910 PLL
raise ValueError when bypassing AD9912 PLL
2024-02-16 16:17:50 +08:00
Florian Agbuya 329c5c1af8 scan: fix deprecated shuffle parameter in python 3.11 2024-01-04 10:41:44 +08:00
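For context on the shuffle fix above, a hedged sketch of the replacement pattern: Python 3.11 removed the optional second argument of random.shuffle, so shuffling with a dedicated RNG now goes through a random.Random instance.
```
import random

rng = random.Random(1234)   # RNG owned by the scan object (illustrative)
points = list(range(8))
rng.shuffle(points)         # random.shuffle(points, rng.random) raises TypeError on 3.11+
```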
Sebastien Bourdeauducq 2eddbadfef ad53xx: fix `load()` references in documentation 2023-10-16 07:56:15 +02:00
Jonathan Coates cc81464f53 Fix type annotations with mixed tuples
The type checker/inferer visits every node in an AST tree, including
function return annotations. This means for a function definition like

    def f() -> TTuple([TInt32, TBool]):
      ...

We attempt to type check the list [TInt32, TBool], which generates the
unification constraint builtins.TBool ~ builtins.TInt. This causes an
internal error due to compiler weirdness.

We can avoid this by just nulling-out the return annotation in the
embedding stage. The return type isn't actually used anywhere (it's
extracted via the inspect module instead), so this is entirely safe.

Arguments aren't affected by this, as we already nulled out the
annotation (see visit_arg in embedding.py).

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-09-27 10:27:05 +08:00
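A hedged sketch of the idea (not the actual embedding code, which nulls the annotation on the typed AST as shown in the embedding.py diff below): the host-side return annotation stays recoverable through inspect, so hiding it from annotation-driven type checking loses nothing.
```
import inspect

def drop_return_annotation(fn):
    # Record the annotation via inspect, then hide it from any later
    # annotation-driven type checking, mirroring the `returns=None` in the
    # quoted function definition.
    annotation = inspect.signature(fn).return_annotation
    fn.__annotations__.pop("return", None)
    return annotation
```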
David Nadlinger 3f27c76619 compiler: Fix crash on multiple types with the same name
The original fix in 21574bdfa9
was incomplete, as it only addressed the TInstance types, but
not their linked (typ.constructor) TConstructor instances.

This would (potentially among other issues) cause assertion
errors in llvm_ir_generator due to the wrong associated globals
being referenced; see added test case for an example that
previously caused such a crash.

Also modified the name collision detection from O(len(type_map))
(so quadratic overall in the number of custom types) to cache
names in sets for O(1) lookup.
2023-09-27 10:26:54 +08:00
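The renaming strategy described above, reduced to a hedged sketch (names are illustrative, not the real EmbeddingMap attributes): every name already handed out is kept in a set, so resolving a clash costs one O(1) membership test per candidate suffix instead of a scan over all stored types.
```
used_type_names = set()

def unique_type_name(base):
    name, suffix = base, 0
    while name in used_type_names:   # O(1) lookup per candidate
        suffix += 1
        name = "{}.{}".format(base, suffix)
    used_type_names.add(name)
    return name

assert unique_type_name("Foo") == "Foo"
assert unique_type_name("Foo") == "Foo.1"   # same class name defined twice
```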
Florian Agbuya 93edfebb7e mirny: fix minor doc formatting 2023-09-06 15:52:56 +08:00
Florian Agbuya 568ef2336c mirny: fix doc formatting 2023-09-05 17:00:46 +08:00
Florian Agbuya f6ce6bb806 i2c: fix doc formatting 2023-09-05 17:00:46 +08:00
Denis Ovchinnikov 21c6f57ce1 jsonschema: add kasli_soc HW revision v1.1 2023-07-27 11:05:13 +08:00
sven-oxionics 928ca50762 Fix panic when receiving empty strings in rpc calls
Receiving an empty string in an RPC call currently panics.

When `length` is zero, a call to the `alloc` function (as implemented in `artiq/firmware/runtime/session.rs`) returns a null pointer. Constructing a `CMutSlice` from a null pointer panics.
A `CMutSlice` consists of a pointer and the length. Rust's documentation of the `core::ptr` module states: "The canonical way to obtain a pointer that is valid for zero-sized accesses is `NonNull::dangling`."
This commit adds a check for the length of a string received in an RPC call. A memory allocation is performed only for lengths greater than zero; for zero-length strings, a dangling pointer is used.

Test plan:
Invoke the following experiment, which returns an empty string over RPC:
```
import artiq.experiment
from artiq.coredevice.core import Core
from artiq.experiment import kernel, rpc, TStr

class ReturnEmptyString(artiq.experiment.EnvExperiment):
    def build(self):
        self.core: Core = self.get_device("core")

    @kernel
    def run(self):
        x = self.do_rpc()
        print(x)

    @rpc
    def do_rpc(self) -> TStr:
        return ""
```

Signed-off-by: Sven Over (Oxford Ionics) <sven.over@oxionics.com>
2023-07-18 12:01:25 +08:00
Florian Agbuya 2c7b62254e gui/experiments: cast Qt timestamp to int preventing float type error 2023-07-14 16:34:09 +08:00
linuswck f10c876ed7
kasli-soc: fix GTX initialization
The changes are backported from PR2128.

Original problem: DRTIO cannot establish connections with satellites after updating
the IBUFDS_GTE2 parameters in commit d6704d30e9.

Description of Changes:
- CPLL Parameters are added.
- CEB signal of IBUFDS_GTE2 is asserted by NOT(OR(stable_clkin, GTX CPLL Locked)).
- Modify the GTX Init FSM so that the PLL Reset and GTX Reset are done in two separate states.
- Restarting the GTX module now only resets the GTX transceiver.
2023-07-13 16:37:14 +08:00
Sebastien Bourdeauducq 6fbfa12e88 gtx_7series_init: GTH -> GTX (NFC) 2023-07-10 11:36:28 +08:00
Sebastien Bourdeauducq ad06924daa flake: update dependencies 2023-07-10 11:32:31 +08:00
mwojcik 93e26cacdd rtio_clocking: remove confusing log message 2023-07-10 02:36:34 +00:00
mwojcik ff976754cf fix missing DIFF_TERM for Sampler and Mirny inputs 2023-06-02 17:21:32 +08:00
Sebastien Bourdeauducq d6704d30e9 fix IBUFDS_GTE2 parameters 2023-05-30 11:49:13 +08:00
Sebastien Bourdeauducq 1315654f0a update dependencies, misoc libunwind cleanup 2023-05-30 11:46:23 +08:00
Charles Baynham 3a9213d5eb worker: Wait until datasets are written before quitting
Avoids a race condition in worker_impl.py where HDF5 dataset saving was
cut off before it finished for large datasets.
2023-05-24 06:58:15 +08:00
Jonathan Coates c691560fd6 dma: fix off-by-one error in RawSlicer (#2090)
Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-05-23 11:16:24 +08:00
Jonathan Coates 0323946ffb Fix mismatched signatures for the wide interface
Lists are passed by reference from Python code, and so should be
&CSlice<_>, not CSlice<_>.

Signed-off-by: Jonathan Coates <jonathan.coates@oxionics.com>
2023-05-19 10:18:30 +08:00
Hartmann Michael (IFAG PSS SIS SCE QSE) 19059a3385 doc: conda installation notes 2023-05-12 17:45:22 +08:00
Sebastien Bourdeauducq dbd5a1765d flake: update dependencies 2023-05-01 09:35:38 +08:00
Sebastien Bourdeauducq 55c8ef588c fix default version string 2023-04-22 09:36:07 +08:00
Ikko Eltociear Ashimine 3acffe8b9f fix typo in comm_analyzer.py
error_occured -> error_occurred
occured -> occurred
2023-04-02 09:27:07 +08:00
Egor Savkin 1d45bed90a firmware: assume empty config records as removed (#2064)
This will return `KeyNotFound` for empty values, which are produced by the `remove` operation

Signed-off-by: Egor Savkin <es@m-labs.hk>
2023-03-13 18:19:27 +08:00
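An illustrative Python sketch of the behaviour described above (the real implementation is in the Rust firmware): an empty record, as left behind by the `remove` operation, is reported the same way as a missing key.
```
def config_read(store, key):
    value = store.get(key, b"")
    if len(value) == 0:
        # empty record == removed; report it like KeyNotFound
        raise KeyError("KeyNotFound: {}".format(key))
    return value

assert config_read({"ip": b"192.168.1.70"}, "ip") == b"192.168.1.70"
```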
Ikko Eltociear Ashimine 6bf3f53367 fix typo in developing_a_ndsp.rst
occurence -> occurrence
2023-03-11 18:32:49 +08:00
Sebastien Bourdeauducq 8a2ea578b8 flake: vivado 2022.2 2023-02-20 17:34:25 +08:00
Sebastien Bourdeauducq 81dbbd08b2 doc: duplicate nixConfig 2023-01-04 15:14:28 +08:00
David Nadlinger 1dd0d3432c firmware: Fix object references in tuples
Since 8740ec3dd, the alignment() information from
"run-time type information" (i.e. the Tag type) is also
used when sending tuples to the host.
2022-12-19 01:04:01 +00:00
David Nadlinger e0c7880c77 RELEASE_NOTES: Two typo/formatting fixes 2022-12-18 17:27:49 +00:00
David Nadlinger c811efd9a7 RELEASE_NOTES: Fix up punctuation 2022-12-18 17:27:44 +00:00
David Nadlinger 5f8eeb47bb firmware: Rename si5324 crystal_{ref -> as_ckin2} [nfc]
This would have made the issue in the pre-740543d4e code
much more obvious (the config option by itself does not
have any effect on the choice of active reference input).
2022-12-18 17:27:38 +00:00
David Nadlinger 1db3a42ad7 firmware: Fix Si5324 initialisation for satellites
Commit 740543d4e2 had unintentionally broken DRTIO
satellites, as si5324::setup is also used there. This
imports setup_si5324_as_synthesizer() from artiq-zynq,
where the input selection was already explicitly done.

GitHub: Fixes #2028.
2022-12-18 17:27:27 +00:00
SingularitySurfer ce57d6c346 implement pca9539 and runtime io-expander chip selection
better comments and address translation

fix spurious };

unwrap init in runtime and return err instead of panic

propagate error

del unnecessary use

Signed-off-by: SingularitySurfer <Norman_Krackow@gmx.de>
2022-12-14 22:47:20 +08:00
David Nadlinger fe32104185 master/scheduler: Unbreak submitting from repository
This is a fix-up to commit 2a58981822.
2022-12-14 09:13:55 +08:00
Egor Savkin 1c39ac8fb4 Scheduler: replace relative path with absolute
Signed-off-by: Egor Savkin <es@m-labs.hk>
2022-12-09 21:51:18 +08:00
Egor Savkin 19824cefba worker_impl: do not write results without rid (#2020) 2022-12-09 16:21:11 +08:00
David Nadlinger 8f3d06a515 runtime/rtio_clocking: Deduplicate/document input selection [nfc] 2022-12-05 10:11:48 +08:00
David Nadlinger 0775ae1c19 firmware: Fix Kasli v2 runtime rtio_clock selection
SI5324_EXT_REF now only controls the (deprecated) fallbacks
for when the rtio_clock option is not set.
2022-12-05 10:11:11 +08:00
Egor Savkin 696418c2a9 browser: tolerate missing HDF5 metadata 2022-12-02 16:31:47 +08:00
David Nadlinger 520692073e firmware/runtime: Fix Ext0_Synth0_*to125 log messages 2022-12-02 10:49:07 +08:00
Egor Savkin 47581e0de9 browser: fix dummy device creation failure on analyze 2022-12-01 17:48:17 +08:00
Sebastien Bourdeauducq ec5c1b2478 Revert "language: check_unprocessed_arguments after constructing experiment"
This reverts commit d7240c17fc.
2022-11-30 07:44:57 +08:00
Nico Pulido d7240c17fc language: check_unprocessed_arguments after constructing experiment
Signed-off-by: Nico Pulido-Mateo <pulido@iqo.uni-hannover.de>
2022-11-29 19:06:31 +08:00
David Nadlinger 75d75cc13c compiler: Add missing sections to kernel linker script
This caused sporadic LoadFaults with LLD 14 and above, as they
happened to lay out the (not otherwise mentioned) GOT/PLT such
that they would overlap with the stack guard page.

LLD does support the --orphan-handling=error option, which
would be useful to avoid similar problems in the future, but
then we'd need to mention all the other misc sections
(symbol table, comments) in the linker script as well.

GitHub: Fixes #1975.
2022-11-25 09:50:18 +08:00
Etienne Wodey 079d57b54d ddb_template: propagate fastino log2_width setting
Signed-off-by: Etienne Wodey <etienne.wodey@aqt.eu>
2022-11-17 10:55:31 +08:00
David Nadlinger 3e7680e45b firmware/rpc_proto: Remove unnecessary cast [nfc] 2022-11-15 11:06:38 +08:00
David Nadlinger 60a2ff3799 firmware/rpc_proto: Fix typo breaking receiving of arrays
This was introduced in 8740ec3dd5.
2022-11-15 11:06:38 +08:00
David Nadlinger d422de387e firmware/rpc_proto: Fix size/alignment calculation for structs with tail padding
Also factors out duplicate code for (de)serializing
elements of lists and ndarrays, and replaces the rounding
calculations by the well-known, much faster power-of-two-only
bit-twiddling version.

GitHub: Fixes #1934.
2022-11-15 11:06:38 +08:00
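The "power-of-two-only bit-twiddling" round-up mentioned above, written out as a short Python sketch (the firmware does the same in Rust):
```
def round_up(size, align):
    # Valid only for power-of-two alignments, which is all the ABI requires.
    assert align & (align - 1) == 0
    return (size + align - 1) & ~(align - 1)

assert round_up(13, 8) == 16
assert round_up(16, 8) == 16
assert round_up(0, 8) == 0
```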
David Nadlinger d73915f904 firmware/ksupport: Include .gcc_except_table (LSDA)
For whatever reason, no language-specific unwind data
was generated for ksupport code so far, but rustc does
emit it for an upcoming refactoring.
2022-11-15 11:06:38 +08:00
David Nadlinger 1ddefaa42f firmware/ksupport: Document rpc_recv alignment requirements [nfc] 2022-11-15 11:06:38 +08:00
David Nadlinger 23a4db494f compiler: Extract maximum alignment from target data layout
In particular, i64/double are actually supposed to be aligned
to their size on RISC-V (at least according to the ELF psABI),
though it is unclear to me whether this actually caused any
issues.
2022-11-15 11:06:38 +08:00
David Nadlinger a83f330d74 compiler/targets: Fix refactoring leftover for native (host) target
It's unclear whether this actually caused any issues, or why this
wasn't done before (instead just setting the now-removed endianness
flag).
2022-11-15 11:06:06 +08:00
火焚 富良 484c88af24 compiler: fix const str/bytes handling (#1990) 2022-11-11 13:16:42 +08:00
Sebastien Bourdeauducq 694a3490c6 dashboard: restore connection/version message 2022-10-21 19:17:17 +08:00
Sebastien Bourdeauducq 10bf8704c1 dashboard: remove incorrect moninj proxy message 2022-10-21 19:13:30 +08:00
Fabian Schwartau ab8bb9ef31 Fixed two too low delay values in Phaser init
Signed-off-by: Fabian Schwartau <fabian@opencode.eu>
2022-10-19 22:21:47 +02:00
Sebastien Bourdeauducq de34aedfa3 flake: use nixpkgs llvmlite 2022-10-19 19:38:15 +08:00
wlph17 0119577c33 flake: set Nix Qt environment variables in development shell
allows applets to run standalone via ``python -m ...`` without requiring the Nix Qt wrapper
2022-10-07 11:32:45 +08:00
Michael Birtwell ad13e2205d Improve exception reports when exception can't be reconstructed
ARTIQ assumes that all exceptions raised by the kernel can be constructed with
a single string argument. This isn't always the case, especially for
exceptions that originated in Python and were propagated to the kernel over
RPC.

Without this change, a Mosek solver failure looks like:
```
ERROR    root:logging_tools.py:41 Terminating with exception (TypeError: __init__() missing 1 required positional argument: 'msg')
Traceback (most recent call last):
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/master/worker_impl.py", line 540, in main
    exp_inst.run()
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/test_tools/experiment.py", line 82, in wrapper
    meth()
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/language/core.py", line 54, in run_on_core
    return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/core.py", line 152, in run
    self.comm.serve(embedding_map, symbolizer, demangler)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 720, in serve
    self._serve_exception(embedding_map, symbolizer, demangler)
  File "/home/mb/.cache/pypoetry/virtualenvs/ion-transport-1-b41LI0-py3.8/lib/python3.8/site-packages/artiq/coredevice/comm_kernel.py", line 699, in _serve_exception
    python_exn = python_exn_type(
TypeError: __init__() missing 1 required positional argument: 'msg'
```

With this change we get:
```
ERROR    root:logging_tools.py:41 Terminating with exception (RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg'))
Core Device Traceback:
Traceback (most recent call first):
  File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 280, in get_transport
    return self.seq.solve()
  File "/home/mb/oxionics/ion-transport/tests/test_end_to_end.py", line 288, in artiq_worker_test_end_to_end.TransportTestScan.run(..., ...) (RA=+0x2e4)
    self.seq.record(self.get_transport(1e-6 + 1e-7 * x))
mosek.Error(27): rescode.err_license_expired(1001): The license has expired.

End of Core Device Traceback
Traceback (most recent call last):
  File "/home/mb/oxionics/artiq/artiq/master/worker_impl.py", line 540, in main
    exp_inst.run()
  File "/home/mb/oxionics/artiq/artiq/test_tools/experiment.py", line 82, in wrapper
    meth()
  File "/home/mb/oxionics/artiq/artiq/language/core.py", line 54, in run_on_core
    return getattr(self, arg).run(run_on_core, ((self,) + k_args), k_kwargs)
  File "/home/mb/oxionics/artiq/artiq/coredevice/core.py", line 152, in run
    self.comm.serve(embedding_map, symbolizer, demangler)
  File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 732, in serve
    self._serve_exception(embedding_map, symbolizer, demangler)
  File "/home/mb/oxionics/artiq/artiq/coredevice/comm_kernel.py", line 714, in _serve_exception
    raise python_exn
RuntimeError: Exception type=<class 'mosek.Error'>, which couldn't be reconstructed (__init__() missing 1 required positional argument: 'msg')
```

Signed-off-by: Michael Birtwell <michael.birtwell@oxionics.com>
2022-09-26 20:26:44 +08:00
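The fallback described above, as a hedged standalone sketch (simplified from the comm_kernel.py hunk shown later in this diff): try to rebuild the original exception from a single message string, and degrade to a RuntimeError that still names the type when the constructor needs more arguments.
```
class NeedsTwoArgs(Exception):
    def __init__(self, msg, code):
        super().__init__(msg, code)

def reconstruct_exception(python_exn_type, message):
    try:
        return python_exn_type(message)
    except Exception as ex:
        return RuntimeError(
            "Exception type={}, which couldn't be reconstructed ({})".format(
                python_exn_type, ex))

assert isinstance(reconstruct_exception(ValueError, "bad value"), ValueError)
assert isinstance(reconstruct_exception(NeedsTwoArgs, "only one"), RuntimeError)
```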
mwojcik 40df2a6526 dashboard moninj: add tooltip for off button 2022-09-19 10:22:49 +08:00
mwojcik 247f10176a dashboard moninj: check if ad9910 was init 2022-09-19 10:22:49 +08:00
mwojcik 72b92f559d moninj: fix ad9914 behavior, comment cleanup 2022-09-19 10:22:49 +08:00
mwojcik 5581ae15ca moninj: dds inj: extract shared code
detect urukul already init in more than one way
detect ad9912 channel already init
2022-09-19 10:22:49 +08:00
mwojcik 3038639802 afws_client: fix argument order 2022-08-25 13:18:01 +08:00
fanmingyu212 efa514989c doc: updates artiq_flash syntax in developing.rst 2022-08-25 12:57:48 +08:00
Sebastien Bourdeauducq 9aa81e1234 versioneer: fix default 2022-08-18 14:35:43 +08:00
kk1050 ebe7348a92 dashboard: use break_realtime instead of reset for Urukul set freq (#1940) 2022-08-16 14:02:55 +08:00
Sebastien Bourdeauducq 5016004d26 dashboard: improve moninj logging 2022-08-12 13:42:21 +08:00
cc78078 bfab7a7422 kasli: relocate the SatelliteBase Error LED code (#1955) 2022-08-12 13:42:21 +08:00
cc78078 97cba3fd5f kasli: add Error LED to MasterBase and SatelliteBase 2022-08-11 15:08:34 +08:00
Deepskyhunter eba143a475 dashboard/moninj: make arguments a dict for DDS setters 2022-08-02 17:10:58 +08:00
Alex Wong Tat Hang 2e6ad950b7 gtx_7series: fix IBUFDS_GTE2 buffer parameters
Co-authored-by: topquark12 <aw@m-labs.hk>
2022-08-01 10:23:46 +08:00
Robert Jördens 30cb821197 manual/installing: fix conda channel url and note 2022-07-25 15:49:35 +02:00
Sebastien Bourdeauducq 560b7a5448 typo 2022-07-21 11:58:36 +08:00
Sebastien Bourdeauducq de6f44467f doc: specify release-7 branch in flake URLs 2022-07-09 12:27:02 +08:00
Sebastien Bourdeauducq 59dfb9e902 remove beta from version string 2022-07-08 18:20:23 +08:00
Sebastien Bourdeauducq 2e05a1bd0d Revert "Upgrade smoltcp 0.6.0 -> 0.8.0"
This reverts commit c60de48a30.
2022-07-08 17:58:13 +08:00
Sebastien Bourdeauducq d622fb8db7 Revert "DHCP support for core device firmware"
This reverts commit 6ffb1f83ee.
2022-07-08 17:58:00 +08:00
Sebastien Bourdeauducq c4a9fa78ee Revert "Prefer DHCP to the built-in static IPs"
This reverts commit 596b9a265c.
2022-07-08 17:57:25 +08:00
Sebastien Bourdeauducq 3adcf37625 Revert "Ensure that pending data is sent when closing sockets"
This reverts commit 73082d116f.
2022-07-08 17:56:33 +08:00
Sebastien Bourdeauducq 9941ee3d2a Revert "Use an Ipv4AddrConfig enum instead of the USE_DHCP constant"
This reverts commit 1fe59d27dc.
2022-07-08 17:56:20 +08:00
Sebastien Bourdeauducq 542a5f934f Revert "Require explicitly closing TcpStreams"
This reverts commit 671453938b.
2022-07-08 17:56:12 +08:00
Sebastien Bourdeauducq f1b2a7041a Revert "Centralise all uses of the IPv4 index in net_settings.rs"
This reverts commit 95378cf9c9.
2022-07-08 17:56:04 +08:00
Sebastien Bourdeauducq 93e82e2201 Revert "Use new ip_addr_storage module instead of net_settings"
This reverts commit 50dbda4f43.
2022-07-08 17:55:57 +08:00
Sebastien Bourdeauducq 7b72c9e915 remove WRPLL 2022-07-08 17:55:26 +08:00
94 changed files with 1475 additions and 3357 deletions

View File

@ -17,7 +17,7 @@ Highlights:
- Almazny mezzanine board for Mirny
- Phaser: improved documentation, exposed the DAC coarse mixer and ``sif_sync``, exposed upconverter calibration
and enabling/disabling of upconverter LO & RF outputs, added helpers to align Phaser updates to the
RTIO timeline (``get_next_frame_mu()``
RTIO timeline (``get_next_frame_mu()``).
- Urukul: ``get()``, ``get_mu()``, ``get_att()``, and ``get_att_mu()`` functions added for AD9910 and AD9912.
* Softcore targets now use the RISC-V architecture (VexRiscv) instead of OR1K (mor1kx).
* Gateware FPU is supported on KC705 and Kasli 2.0.
@ -67,9 +67,9 @@ Breaking changes:
generated for some configurations.
* Phaser: fixed coarse mixer frequency configuration
* Mirny: Added extra delays in ``ADF5356.sync()``. This avoids the need of an extra delay before
calling `ADF5356.init()`.
calling ``ADF5356.init()``.
* The deprecated ``set_dataset(..., save=...)`` is no longer supported.
* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accomodate support for PCA9547,
* The ``PCA9548`` I2C switch class was renamed to ``I2CSwitch``, to accommodate support for PCA9547,
and possibly other switches in future. Readback has been removed, and now only one channel per
switch is supported.

View File

@ -1,4 +1,4 @@
import os
def get_version():
return os.getenv("VERSIONEER_OVERRIDE", default="7.0.beta")
return os.getenv("VERSIONEER_OVERRIDE", default="7.0")

View File

@ -285,8 +285,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
state = self.argeditor.save_state()
self.argeditor.deleteLater()
self.argeditor = _ArgumentEditor(self)
self.argeditor.restore_state(state)
self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
self.argeditor.restore_state(state)
async def load_hdf5_task(self, filename=None):
if filename is None:
@ -406,7 +406,7 @@ class ExperimentsArea(QtWidgets.QMdiArea):
self.worker_handlers = {
"get_device_db": lambda: {},
"get_device": lambda k: {"type": "dummy"},
"get_device": lambda key, resolve_alias=False: {"type": "dummy"},
"get_dataset": self._ddb.get,
"update_dataset": self._ddb.update,
}

View File

@ -83,7 +83,7 @@ class ZoomIconView(QtWidgets.QListView):
w = self.iconSize().width()*self.zoom_step**(
ev.angleDelta().y()/120.)
if a <= w <= b:
self.setIconSize(QtCore.QSize(w, w*self.aspect))
self.setIconSize(QtCore.QSize(int(w), int(w*self.aspect)))
else:
QtWidgets.QListView.wheelEvent(self, ev)
@ -102,13 +102,14 @@ class Hdf5FileSystemModel(QtWidgets.QFileSystemModel):
h5 = open_h5(info)
if h5 is not None:
try:
expid = pyon.decode(h5["expid"][()])
start_time = datetime.fromtimestamp(h5["start_time"][()])
expid = pyon.decode(h5["expid"][()]) if "expid" in h5 else dict()
start_time = datetime.fromtimestamp(h5["start_time"][()]) if "start_time" in h5 else "<none>"
v = ("artiq_version: {}\nrepo_rev: {}\nfile: {}\n"
"class_name: {}\nrid: {}\nstart_time: {}").format(
h5["artiq_version"][()], expid["repo_rev"],
expid.get("file", "<none>"), expid["class_name"],
h5["rid"][()], start_time)
h5["artiq_version"][()] if "artiq_version" in h5 else "<none>",
expid.get("repo_rev", "<none>"),
expid.get("file", "<none>"), expid.get("class_name", "<none>"),
h5["rid"][()] if "rid" in h5 else "<none>", start_time)
return v
except:
logger.warning("unable to read metadata from %s",
@ -174,14 +175,14 @@ class FilesDock(QtWidgets.QDockWidget):
logger.debug("loading datasets from %s", info.filePath())
with f:
try:
expid = pyon.decode(f["expid"][()])
start_time = datetime.fromtimestamp(f["start_time"][()])
expid = pyon.decode(f["expid"][()]) if "expid" in f else dict()
start_time = datetime.fromtimestamp(f["start_time"][()]) if "start_time" in f else "<none>"
v = {
"artiq_version": f["artiq_version"][()],
"repo_rev": expid["repo_rev"],
"artiq_version": f["artiq_version"][()] if "artiq_version" in f else "<none>",
"repo_rev": expid.get("repo_rev", "<none>"),
"file": expid.get("file", "<none>"),
"class_name": expid["class_name"],
"rid": f["rid"][()],
"class_name": expid.get("class_name", "<none>"),
"rid": f["rid"][()] if "rid" in f else "<none>",
"start_time": start_time,
}
self.metadata_changed.emit(v)
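The defensive pattern used in the hunks above, in isolation (hedged sketch, not the browser code itself): every metadata field is treated as optional, so files missing expid, artiq_version, or rid still load.
```
def read_meta(h5_file, key, default="<none>"):
    # h5py-style lookup: fall back instead of raising KeyError on partial files
    return h5_file[key][()] if key in h5_file else default
```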

View File

@ -45,12 +45,23 @@ class EmbeddingMap:
self.object_forward_map = {}
self.object_reverse_map = {}
self.module_map = {}
# type_map connects the host Python `type` to the pair of associated
# `(TInstance, TConstructor)`s. The `used_…_names` sets cache the
# respective `.name`s for O(1) collision avoidance.
self.type_map = {}
self.used_instance_type_names = set()
self.used_constructor_type_names = set()
self.function_map = {}
self.str_forward_map = {}
self.str_reverse_map = {}
self.preallocate_runtime_exception_names(["RuntimeError",
# Keep this list of exceptions in sync with `EXCEPTION_ID_LOOKUP` in `artiq::firmware::ksupport::eh_artiq`
# The exceptions declared here must be defined in `artiq.coredevice.exceptions`
# Verify synchronization by running the test cases in `artiq.test.coredevice.test_exceptions`
self.preallocate_runtime_exception_names([
"0:RuntimeError",
"RTIOUnderflow",
"RTIOOverflow",
"RTIODestinationUnreachable",
@ -59,7 +70,9 @@ class EmbeddingMap:
"CacheError",
"SPIError",
"0:ZeroDivisionError",
"0:IndexError"])
"0:IndexError",
"UnwrapNoneError",
])
def preallocate_runtime_exception_names(self, names):
for i, name in enumerate(names):
@ -91,16 +104,6 @@ class EmbeddingMap:
# Types
def store_type(self, host_type, instance_type, constructor_type):
self._rename_type(instance_type)
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
def _rename_type(self, new_instance_type):
# Generally, user-defined types that have exact same name (which is to say, classes
# defined inside functions) do not pose a problem to the compiler. The two places which
# cannot handle this are:
@ -109,12 +112,29 @@ class EmbeddingMap:
# Since handling #2 requires renaming on ARTIQ side anyway, it's more straightforward
# to do it once when embedding (since non-embedded code cannot define classes in
# functions). Also, easier to debug.
n = 0
for host_type in self.type_map:
instance_type, constructor_type = self.type_map[host_type]
if instance_type.name == new_instance_type.name:
n += 1
new_instance_type.name = "{}.{}".format(new_instance_type.name, n)
suffix = 0
new_instance_name = instance_type.name
new_constructor_name = constructor_type.name
while True:
if (new_instance_name not in self.used_instance_type_names
and new_constructor_name not in self.used_constructor_type_names):
break
suffix += 1
new_instance_name = f"{instance_type.name}.{suffix}"
new_constructor_name = f"{constructor_type.name}.{suffix}"
self.used_instance_type_names.add(new_instance_name)
instance_type.name = new_instance_name
self.used_constructor_type_names.add(new_constructor_name)
constructor_type.name = new_constructor_name
self.type_map[host_type] = (instance_type, constructor_type)
def retrieve_type(self, host_type):
return self.type_map[host_type]
def has_type(self, host_type):
return host_type in self.type_map
def attribute_count(self):
count = 0
@ -532,7 +552,7 @@ class StitchingASTTypedRewriter(ASTTypedRewriter):
node = asttyped.QuotedFunctionDefT(
typing_env=extractor.typing_env, globals_in_scope=extractor.global_,
signature_type=types.TVar(), return_type=types.TVar(),
name=node.name, args=node.args, returns=node.returns,
name=node.name, args=node.args, returns=None,
body=node.body, decorator_list=node.decorator_list,
keyword_loc=node.keyword_loc, name_loc=node.name_loc,
arrow_loc=node.arrow_loc, colon_loc=node.colon_loc, at_locs=node.at_locs,
@ -644,9 +664,9 @@ class StitchingInferencer(Inferencer):
if elt.__class__ == float:
state |= IS_FLOAT
elif elt.__class__ == int:
if -2**31 < elt < 2**31-1:
if -2**31 <= elt <= 2**31-1:
state |= IS_INT32
elif -2**63 < elt < 2**63-1:
elif -2**63 <= elt <= 2**63-1:
state |= IS_INT64
else:
state = -1

View File

@ -33,9 +33,19 @@ SECTIONS
KEEP(*(.eh_frame_hdr))
} : text : eh_frame
.got :
{
*(.got)
} : text
.got.plt :
{
*(.got.plt)
} : text
.data :
{
*(.data)
*(.data .data.*)
} : data
.dynamic :
@ -51,6 +61,10 @@ SECTIONS
_end = .;
}
/* Kernel stack grows downward from end of memory, so put guard page after
* all the program contents. Note: This requires all loaded sections (at
* least those accessed) to be explicitly listed in the above!
*/
. = ALIGN(0x1000);
_sstack_guard = .;
}

View File

@ -61,22 +61,6 @@ unary_fp_runtime_calls = [
("cbrt", "cbrt"),
]
#: float -> float numpy.* math functions lowered to runtime calls.
unary_fp_runtime_calls = [
("tan", "tan"),
("arcsin", "asin"),
("arccos", "acos"),
("arctan", "atan"),
("sinh", "sinh"),
("cosh", "cosh"),
("tanh", "tanh"),
("arcsinh", "asinh"),
("arccosh", "acosh"),
("arctanh", "atanh"),
("expm1", "expm1"),
("cbrt", "cbrt"),
]
scipy_special_unary_runtime_calls = [
("erf", "erf"),
("erfc", "erfc"),

View File

@ -263,7 +263,7 @@ class NativeTarget(Target):
def __init__(self):
super().__init__()
self.triple = llvm.get_default_triple()
host_data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
self.data_layout = str(llvm.targets.Target.from_default_triple().create_target_machine().target_data)
class RV32IMATarget(Target):
triple = "riscv32-unknown-linux"

View File

@ -14,9 +14,9 @@ class IntMonomorphizer(algorithm.Visitor):
def visit_NumT(self, node):
if builtins.is_int(node.type):
if types.is_var(node.type["width"]):
if -2**31 < node.n < 2**31-1:
if -2**31 <= node.n <= 2**31-1:
width = 32
elif -2**63 < node.n < 2**63-1:
elif -2**63 <= node.n <= 2**63-1:
width = 64
else:
diag = diagnostic.Diagnostic("error",

View File

@ -177,6 +177,15 @@ class LLVMIRGenerator:
self.empty_metadata = self.llmodule.add_metadata([])
self.quote_fail_msg = None
# Maximum alignment required according to the target platform ABI. As this is
# not directly exposed by LLVM, just take the maximum across all the "big"
# elementary types we use. (Vector types, should we ever support them, are
# likely contenders for even larger alignment requirements.)
self.max_target_alignment = max(map(
lambda t: self.abi_layout_info.get_size_align(t)[1],
[lli64, lldouble, llptr]
))
def add_pred(self, pred, block):
if block not in self.llpred_map:
self.llpred_map[block] = set()
@ -335,8 +344,8 @@ class LLVMIRGenerator:
else:
value = const.value
llptr = self.llstr_of_str(const.value, linkage="private", unnamed_addr=True)
lllen = ll.Constant(lli32, len(const.value))
llptr = self.llstr_of_str(value, linkage="private", unnamed_addr=True)
lllen = ll.Constant(lli32, len(value))
return ll.Constant(llty, (llptr, lllen))
else:
assert False
@ -1529,7 +1538,7 @@ class LLVMIRGenerator:
self.llbuilder.position_at_end(llalloc)
llalloca = self.llbuilder.alloca(lli8, llsize, name="rpc.alloc")
llalloca.align = 4 # maximum alignment required by OR1K ABI
llalloca.align = self.max_target_alignment
llphi.add_incoming(llalloca, llalloc)
self.llbuilder.branch(llhead)

View File

@ -233,7 +233,7 @@ class AD53xx:
def write_gain_mu(self, channel, gain=0xffff):
"""Program the gain register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:).
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer.
:param gain: 16-bit gain register value (default: 0xffff)
@ -245,7 +245,7 @@ class AD53xx:
def write_offset_mu(self, channel, offset=0x8000):
"""Program the offset register for a DAC channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:).
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer.
:param offset: 16-bit offset register value (default: 0x8000)
@ -258,7 +258,7 @@ class AD53xx:
"""Program the DAC offset voltage for a channel.
An offset of +V can be used to trim out a DAC offset error of -V.
The DAC output is not updated until LDAC is pulsed (see :meth load:).
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer.
:param voltage: the offset voltage
@ -270,7 +270,7 @@ class AD53xx:
def write_dac_mu(self, channel, value):
"""Program the DAC input register for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:).
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer.
"""
self.bus.write(
@ -280,7 +280,7 @@ class AD53xx:
def write_dac(self, channel, voltage):
"""Program the DAC output voltage for a channel.
The DAC output is not updated until LDAC is pulsed (see :meth load:).
The DAC output is not updated until LDAC is pulsed (see :meth:`load`).
This method advances the timeline by the duration of one SPI transfer.
"""
self.write_dac_mu(channel, voltage_to_mu(voltage, self.offset_dacs,
@ -313,7 +313,7 @@ class AD53xx:
If no LDAC device was defined, the LDAC pulse is skipped.
See :meth load:.
See :meth:`load`.
:param values: list of DAC values to program
:param channels: list of DAC channels to program. If not specified,
@ -355,7 +355,7 @@ class AD53xx:
""" Two-point calibration of a DAC channel.
Programs the offset and gain register to trim out DAC errors. Does not
take effect until LDAC is pulsed (see :meth load:).
take effect until LDAC is pulsed (see :meth:`load`).
Calibration consists of measuring the DAC output voltage for a channel
with the DAC set to zero-scale (0x0000) and full-scale (0xffff).

View File

@ -80,10 +80,11 @@ class ADF5356:
:param blind: Do not attempt to verify presence.
"""
self.sync()
if not blind:
# MUXOUT = VDD
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 1)
self.sync()
self.write(self.regs[4])
delay(1000 * us)
if not self.read_muxout():
raise ValueError("MUXOUT not high")
@ -91,7 +92,7 @@ class ADF5356:
# MUXOUT = DGND
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 2)
self.sync()
self.write(self.regs[4])
delay(1000 * us)
if self.read_muxout():
raise ValueError("MUXOUT not low")
@ -99,8 +100,7 @@ class ADF5356:
# MUXOUT = digital lock-detect
self.regs[4] = ADF5356_REG4_MUXOUT_UPDATE(self.regs[4], 6)
else:
self.sync()
self.write(self.regs[4])
@kernel
def set_att(self, att):

View File

@ -102,15 +102,15 @@ def decode_dump(data):
# messages are big endian
parts = struct.unpack(endian + "IQbbb", data[:15])
(sent_bytes, total_byte_count,
error_occured, log_channel, dds_onehot_sel) = parts
error_occurred, log_channel, dds_onehot_sel) = parts
expected_len = sent_bytes + 15
if expected_len != len(data):
raise ValueError("analyzer dump has incorrect length "
"(got {}, expected {})".format(
len(data), expected_len))
if error_occured:
logger.warning("error occured within the analyzer, "
if error_occurred:
logger.warning("error occurred within the analyzer, "
"data may be corrupted")
if total_byte_count > sent_bytes:
logger.info("analyzer ring buffer has wrapped %d times",

View File

@ -3,6 +3,7 @@ import logging
import traceback
import numpy
import socket
import builtins
from enum import Enum
from fractions import Fraction
from collections import namedtuple
@ -449,12 +450,12 @@ class CommKernel:
self._write_bool(value)
elif tag == "i":
check(isinstance(value, (int, numpy.int32)) and
(-2**31 <= value < 2**31),
(-2**31 <= value <= 2**31-1),
lambda: "32-bit int")
self._write_int32(value)
elif tag == "I":
check(isinstance(value, (int, numpy.int32, numpy.int64)) and
(-2**63 <= value < 2**63),
(-2**63 <= value <= 2**63-1),
lambda: "64-bit int")
self._write_int64(value)
elif tag == "f":
@ -463,8 +464,8 @@ class CommKernel:
self._write_float64(value)
elif tag == "F":
check(isinstance(value, Fraction) and
(-2**63 <= value.numerator < 2**63) and
(-2**63 <= value.denominator < 2**63),
(-2**63 <= value.numerator <= 2**63-1) and
(-2**63 <= value.denominator <= 2**63-1),
lambda: "64-bit Fraction")
self._write_int64(value.numerator)
self._write_int64(value.denominator)
@ -600,9 +601,10 @@ class CommKernel:
self._write_int32(embedding_map.store_str(function))
else:
exn_type = type(exn)
if exn_type in (ZeroDivisionError, ValueError, IndexError, RuntimeError) or \
hasattr(exn, "artiq_builtin"):
name = "0:{}".format(exn_type.__name__)
if exn_type in builtins.__dict__.values():
name = "0:{}".format(exn_type.__qualname__)
elif hasattr(exn, "artiq_builtin"):
name = "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
else:
exn_id = embedding_map.store_object(exn_type)
name = "{}:{}.{}".format(exn_id,
@ -686,8 +688,14 @@ class CommKernel:
else:
python_exn_type = embedding_map.retrieve_object(core_exn.id)
try:
python_exn = python_exn_type(
nested_exceptions[-1][1].format(*nested_exceptions[0][2]))
except Exception as ex:
python_exn = RuntimeError(
f"Exception type={python_exn_type}, which couldn't be "
f"reconstructed ({ex})"
)
python_exn.artiq_core_exception = core_exn
raise python_exn

View File

@ -52,6 +52,9 @@ def rtio_get_destination_status(linkno: TInt32) -> TBool:
def rtio_get_counter() -> TInt64:
raise NotImplementedError("syscall not simulated")
@syscall
def test_exception_id_sync(id: TInt32) -> TNone:
raise NotImplementedError("syscall not simulated")
class Core:
"""Core device driver.

View File

@ -122,7 +122,7 @@
},
"hw_rev": {
"type": "string",
"enum": ["v1.0"]
"enum": ["v1.0", "v1.1"]
}
}
}
@ -302,6 +302,12 @@
"pll_n": {
"type": "integer"
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
},
@ -405,6 +411,12 @@
"type": "integer",
"default": 32
},
"pll_en": {
"type": "integer",
"minimum": 0,
"maximum": 1,
"default": 1
},
"pll_vco": {
"type": "integer"
}

View File

@ -6,12 +6,26 @@ import os
from artiq import __artiq_dir__ as artiq_dir
from artiq.coredevice.runtime import source_loader
"""
This file provides class definition for all the exceptions declared in `EmbeddingMap` in `artiq.compiler.embedding`
For Python builtin exceptions, use the `builtins` module
For ARTIQ specific exceptions, inherit from `Exception` class
"""
ZeroDivisionError = builtins.ZeroDivisionError
ValueError = builtins.ValueError
IndexError = builtins.IndexError
RuntimeError = builtins.RuntimeError
AssertionError = builtins.AssertionError
AttributeError = builtins.AttributeError
IndexError = builtins.IndexError
IOError = builtins.IOError
KeyError = builtins.KeyError
NotImplementedError = builtins.NotImplementedError
OverflowError = builtins.OverflowError
RuntimeError = builtins.RuntimeError
TimeoutError = builtins.TimeoutError
TypeError = builtins.TypeError
ValueError = builtins.ValueError
ZeroDivisionError = builtins.ZeroDivisionError
OSError = builtins.OSError
class CoreException:
@ -150,13 +164,17 @@ class DMAError(Exception):
class ClockFailure(Exception):
"""Raised when RTIO PLL has lost lock."""
artiq_builtin = True
class I2CError(Exception):
"""Raised when a I2C transaction fails."""
pass
artiq_builtin = True
class SPIError(Exception):
"""Raised when a SPI transaction fails."""
pass
artiq_builtin = True
class UnwrapNoneError(Exception):
"""Raised when unwrapping a none Option."""
artiq_builtin = True
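Hedged sketch of how the `artiq_builtin = True` marker above is consumed on the host side, based on the comm_kernel.py hunk earlier in this diff: Python built-ins and ARTIQ-builtin exceptions are encoded with the reserved id 0, while everything else gets an id from the embedding map.
```
import builtins

def exception_name(exn_type, embedding_map):
    if exn_type in builtins.__dict__.values():
        return "0:{}".format(exn_type.__qualname__)
    if getattr(exn_type, "artiq_builtin", False):
        return "0:{}.{}".format(exn_type.__module__, exn_type.__qualname__)
    # user-defined exception: numbered id assigned by the embedding map
    exn_id = embedding_map.store_object(exn_type)
    return "{}:{}.{}".format(exn_id, exn_type.__module__, exn_type.__qualname__)
```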

View File

@ -161,6 +161,7 @@ class I2CSwitch:
@kernel
def set(self, channel):
"""Enable one channel.
:param channel: channel number (0-7)
"""
i2c_switch_select(self.busno, self.address >> 1, 1 << channel)

View File

@ -183,7 +183,7 @@ class Almazny:
"""
Almazny (High frequency mezzanine board for Mirny)
:param host_mirny - Mirny device Almazny is connected to
:param host_mirny: Mirny device Almazny is connected to
"""
def __init__(self, dmgr, host_mirny):
@ -223,9 +223,10 @@ class Almazny:
def set_att(self, channel, att, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel - index of the register [0-3]
:param att_mu - attenuation setting in dBm [0-31.5]
:param rf_switch - rf switch (bool)
:param channel: index of the register [0-3]
:param att: attenuation setting in dBm [0-31.5]
:param rf_switch: rf switch (bool)
"""
self.set_att_mu(channel, self.att_to_mu(att), rf_switch)
@ -233,9 +234,10 @@ class Almazny:
def set_att_mu(self, channel, att_mu, rf_switch=True):
"""
Sets attenuators on chosen shift register (channel).
:param channel - index of the register [0-3]
:param att_mu - attenuation setting in machine units [0-63]
:param rf_switch - rf switch (bool)
:param channel: index of the register [0-3]
:param att_mu: attenuation setting in machine units [0-63]
:param rf_switch: rf switch (bool)
"""
self.channel_sw[channel] = 1 if rf_switch else 0
self.att_mu[channel] = att_mu
@ -245,7 +247,8 @@ class Almazny:
def output_toggle(self, oe):
"""
Toggles output on all shift registers on or off.
:param oe - toggle output enable (bool)
:param oe: toggle output enable (bool)
"""
self.output_enable = oe
cfg_reg = self.mirny_cpld.read_reg(1)

View File

@ -237,7 +237,7 @@ class Phaser:
for data in self.dac_mmap:
self.dac_write(data >> 16, data)
delay(40*us)
delay(120*us)
self.dac_sync()
delay(40*us)
@ -616,7 +616,7 @@ class Phaser:
.. note:: Synchronising the NCO clears the phase-accumulator
"""
config1f = self.dac_read(0x1f)
delay(.1*ms)
delay(.4*ms)
self.dac_write(0x1f, config1f & ~int32(1 << 1))
self.dac_write(0x1f, config1f | (1 << 1))

View File

@ -268,7 +268,7 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
datetime.setDate(QtCore.QDate.currentDate())
else:
datetime.setDateTime(QtCore.QDateTime.fromMSecsSinceEpoch(
scheduling["due_date"]*1000))
int(scheduling["due_date"]*1000)))
datetime_en.setChecked(scheduling["due_date"] is not None)
def update_datetime(dt):
@ -422,8 +422,8 @@ class _ExperimentDock(QtWidgets.QMdiSubWindow):
editor_class = self.manager.get_argument_editor_class(self.expurl)
self.argeditor = editor_class(self.manager, self, self.expurl)
self.argeditor.restore_state(argeditor_state)
self.layout.addWidget(self.argeditor, 0, 0, 1, 5)
self.argeditor.restore_state(argeditor_state)
def contextMenuEvent(self, event):
menu = QtWidgets.QMenu(self)

View File

@ -8,8 +8,11 @@ from PyQt5 import QtCore, QtWidgets, QtGui
from sipyco.sync_struct import Subscriber
from artiq.coredevice.comm_moninj import *
from artiq.coredevice.ad9910 import _AD9910_REG_PROFILE0, _AD9910_REG_PROFILE7, _AD9910_REG_FTW
from artiq.coredevice.ad9912_reg import AD9912_POW1
from artiq.coredevice.ad9910 import (
_AD9910_REG_PROFILE0, _AD9910_REG_PROFILE7,
_AD9910_REG_FTW, _AD9910_REG_CFR1
)
from artiq.coredevice.ad9912_reg import AD9912_POW1, AD9912_SER_CONF
from artiq.gui.tools import LayoutWidget
from artiq.gui.flowlayout import FlowLayout
@ -311,6 +314,10 @@ class _DDSWidget(QtWidgets.QFrame):
apply.clicked.connect(self.apply_changes)
if self.dds_model.is_urukul:
off_btn.clicked.connect(self.off_clicked)
off_btn.setToolTip(textwrap.dedent(
"""Note: If TTL RTIO sw for the channel is switched high,
this button will not disable the channel.
Use the TTL override instead."""))
self.value_edit.returnPressed.connect(lambda: self.apply_changes(None))
self.value_edit.escapePressedConnect(self.cancel_changes)
cancel.clicked.connect(self.cancel_changes)
@ -534,7 +541,7 @@ class _DeviceManager:
"log_level": logging.WARNING,
"content": content,
"class_name": class_name,
"arguments": []
"arguments": {}
}
scheduling = {
"pipeline_name": "main",
@ -549,33 +556,55 @@ class _DeviceManager:
scheduling["flush"])
logger.info("Submitted '%s', RID is %d", title, rid)
def dds_set_frequency(self, dds_channel, dds_model, freq):
def _dds_faux_injection(self, dds_channel, dds_model, action, title, log_msg):
# create kernel and fill it in and send-by-content
# initialize CPLD (if applicable)
if dds_model.is_urukul:
# urukuls need CPLD init and switch to on
# keep previous config if it was set already
cpld_dev = """self.setattr_device("core_cache")
self.setattr_device("{}")""".format(dds_model.cpld)
cpld_init = """cfg = self.core_cache.get("_{cpld}_cfg")
if len(cfg) > 0:
self.{cpld}.cfg_reg = cfg[0]
else:
# `sta`/`rf_sw`` variables are guaranteed for urukuls
# so {action} can use it
# if there's no RF enabled, CPLD may have not been initialized
# but if there is, it has been initialised - no need to do again
cpld_init = """delay(15*ms)
was_init = self.core_cache.get("_{cpld}_init")
sta = self.{cpld}.sta_read()
rf_sw = urukul_sta_rf_sw(sta)
if rf_sw == 0 and len(was_init) == 0:
delay(15*ms)
self.{cpld}.init()
self.core_cache.put("_{cpld}_cfg", [self.{cpld}.cfg_reg])
cfg = self.core_cache.get("_{cpld}_cfg")
self.core_cache.put("_{cpld}_init", [1])
""".format(cpld=dds_model.cpld)
cfg_sw = """self.{}.cfg_sw(True)
cfg[0] = self.{}.cfg_reg
""".format(dds_channel, dds_model.cpld)
else:
cpld_dev = ""
cpld_init = ""
cfg_sw = ""
# AD9912/9910: init channel (if uninitialized)
if dds_model.dds_type == "AD9912":
# 0xFF before init, 0x99 after
channel_init = """
if self.{dds_channel}.read({cfgreg}, length=1) == 0xFF:
delay(10*ms)
self.{dds_channel}.init()
""".format(dds_channel=dds_channel, cfgreg=AD9912_SER_CONF)
elif dds_model.dds_type == "AD9910":
# -1 before init, 2 after
channel_init = """
if self.{dds_channel}.read32({cfgreg}) == -1:
delay(10*ms)
self.{dds_channel}.init()
""".format(dds_channel=dds_channel, cfgreg=AD9912_SER_CONF)
else:
channel_init = "self.{dds_channel}.init()".format(dds_channel=dds_channel)
dds_exp = textwrap.dedent("""
from artiq.experiment import *
from artiq.coredevice.urukul import *
class SetDDS(EnvExperiment):
class {title}(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("{dds_channel}")
@ -583,55 +612,57 @@ class _DeviceManager:
@kernel
def run(self):
self.core.reset()
self.core.break_realtime()
{cpld_init}
delay(5*ms)
self.{dds_channel}.init()
self.{dds_channel}.set({freq})
{cfg_sw}
""".format(dds_channel=dds_channel, freq=freq,
delay(10*ms)
{channel_init}
delay(15*ms)
{action}
""".format(title=title, action=action,
dds_channel=dds_channel,
cpld_dev=cpld_dev, cpld_init=cpld_init,
cfg_sw=cfg_sw))
channel_init=channel_init))
asyncio.ensure_future(
self._submit_by_content(
dds_exp,
title,
log_msg))
def dds_set_frequency(self, dds_channel, dds_model, freq):
action = "self.{ch}.set({freq})".format(
freq=freq, ch=dds_channel)
if dds_model.is_urukul:
action += """
ch_no = self.{ch}.chip_select - 4
self.{cpld}.cfg_switches(rf_sw | 1 << ch_no)
""".format(ch=dds_channel, cpld=dds_model.cpld)
self._dds_faux_injection(
dds_channel,
dds_model,
action,
"SetDDS",
"Set DDS {} {}MHz".format(dds_channel, freq/1e6)))
"Set DDS {} {}MHz".format(dds_channel, freq/1e6))
def dds_channel_toggle(self, dds_channel, dds_model, sw=True):
# urukul only
toggle_exp = textwrap.dedent("""
from artiq.experiment import *
class ToggleDDS(EnvExperiment):
def build(self):
self.setattr_device("core")
self.setattr_device("{ch}")
self.setattr_device("core_cache")
self.setattr_device("{cpld}")
@kernel
def run(self):
self.core.reset()
cfg = self.core_cache.get("_{cpld}_cfg")
if len(cfg) > 0:
self.{cpld}.cfg_reg = cfg[0]
if sw:
switch = "| 1 << ch_no"
else:
delay(15*ms)
self.{cpld}.init()
self.core_cache.put("_{cpld}_cfg", [self.{cpld}.cfg_reg])
cfg = self.core_cache.get("_{cpld}_cfg")
delay(5*ms)
self.{ch}.init()
self.{ch}.cfg_sw({sw})
cfg[0] = self.{cpld}.cfg_reg
""".format(ch=dds_channel, cpld=dds_model.cpld, sw=sw))
asyncio.ensure_future(
self._submit_by_content(
toggle_exp,
switch = "& ~(1 << ch_no)"
action = """
ch_no = self.{dds_channel}.chip_select - 4
self.{cpld}.cfg_switches(rf_sw {switch})
""".format(
dds_channel=dds_channel,
cpld=dds_model.cpld,
switch=switch
)
self._dds_faux_injection(
dds_channel,
dds_model,
action,
"ToggleDDS",
"Toggle DDS {} {}".format(dds_channel, "on" if sw else "off"))
)
def setup_ttl_monitoring(self, enable, channel):
if self.mi_connection is not None:
@ -695,11 +726,11 @@ class _DeviceManager:
logger.info("cancelled connection to moninj")
break
except:
logger.error("failed to connect to moninj", exc_info=True)
logger.error("failed to connect to moninj. Is aqctl_moninj_proxy running?", exc_info=True)
await asyncio.sleep(10.)
self.reconnect_mi.set()
else:
logger.info("ARTIQ dashboard connected to moninj proxy (%s)",
logger.info("ARTIQ dashboard connected to moninj (%s)",
self.mi_addr)
self.mi_connection = new_mi_connection
for ttl_channel in self.ttl_widgets.keys():

View File

@ -1,10 +1,42 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]]
name = "aho-corasick"
version = "0.7.18"
name = "addr2line"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e37cfd5e7657ada45f742d6e99ca5788580b5c529dc78faf11ece6dc702656f"
checksum = "7c0929d69e78dd9bf5408269919fcbcaeb2e35e5d43e5815517cdc6a8e11a423"
dependencies = [
"cpp_demangle",
"fallible-iterator",
"gimli",
"object",
"rustc-demangle",
"smallvec",
]
[[package]]
name = "adler"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee2a4ec343196209d6594e19543ae87a39f96d5534d7174822a3ad825dd6ed7e"
[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "ahash"
version = "0.4.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0453232ace82dee0dd0b4c87a59bd90f7b53b314f3e0f61fe2ee7c8a16482289"
[[package]]
name = "aho-corasick"
version = "0.7.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7404febffaa47dac81aa44dba71523c9d069b1bdc50a77db41195149e17f68e5"
dependencies = [
"memchr",
]
@ -24,9 +56,9 @@ dependencies = [
[[package]]
name = "bit_field"
version = "0.10.1"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcb6dd1c2376d2e096796e234a70e17e94cc2d5d54ff8ce42b28cef1d0d359a4"
checksum = "dc827186963e592360843fb5ba4b973e145841266c1357f7180c43526f2e5b61"
[[package]]
name = "bitflags"
@ -66,12 +98,26 @@ dependencies = [
name = "bootloader"
version = "0.0.0"
dependencies = [
"addr2line",
"board_misoc",
"build_misoc",
"byteorder",
"cc",
"compiler_builtins",
"crc",
"dlmalloc",
"fortanix-sgx-abi",
"getopts",
"hashbrown",
"hermit-abi",
"libc 0.2.79",
"memchr",
"miniz_oxide 0.4.0",
"riscv",
"rustc-demangle",
"smoltcp",
"unicode-width",
"wasi",
]
[[package]]
@ -92,9 +138,9 @@ checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
[[package]]
name = "cc"
version = "1.0.70"
version = "1.0.60"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d26a6ce4b6a484fa3edb70f7efa6fc430fd2b87285fe8b84304fd0936faa0dc0"
checksum = "ef611cc68ff783f18535d77ddd080185275713d852c4f5cbb6122c462a7a825c"
[[package]]
name = "cfg-if"
@ -114,6 +160,15 @@ version = "0.1.39"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3748f82c7d366a0b4950257d19db685d4958d2fa27c6d164a3f069fec42b748b"
[[package]]
name = "cpp_demangle"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eeaa953eaad386a53111e47172c2fedba671e5684c8dd601a5f474f4f118710f"
dependencies = [
"cfg-if 1.0.0",
]
[[package]]
name = "crc"
version = "1.8.1"
@ -123,12 +178,30 @@ dependencies = [
"build_const",
]
[[package]]
name = "crc32fast"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a97769d94ddab943e4510d138150169a2758b5ef3eb191a9ee688de3e23ef7b3"
dependencies = [
"cfg-if 1.0.0",
]
[[package]]
name = "cslice"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0f8cb7306107e4b10e64994de6d3274bd08996a7c1322a27b86482392f96be0a"
[[package]]
name = "dlmalloc"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "332570860c2edf2d57914987bf9e24835425f75825086b6ba7d1e6a3e4f1f254"
dependencies = [
"libc 0.2.79",
]
[[package]]
name = "dyld"
version = "0.0.0"
@ -137,7 +210,6 @@ version = "0.0.0"
name = "eh"
version = "0.0.0"
dependencies = [
"compiler_builtins",
"cslice",
"libc 0.1.0",
"unwind",
@ -160,12 +232,71 @@ dependencies = [
"synstructure",
]
[[package]]
name = "fallible-iterator"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4443176a9f2c162692bd3d352d745ef9413eec5782a80d8fd6f8a1ac692a07f7"
[[package]]
name = "flate2"
version = "1.0.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f54427cfd1c7829e2a139fcefea601bf088ebca651d2bf53ebc600eac295dae"
dependencies = [
"crc32fast",
"miniz_oxide 0.7.3",
]
[[package]]
name = "fortanix-sgx-abi"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c56c422ef86062869b2d57ae87270608dc5929969dd130a6e248979cf4fb6ca6"
[[package]]
name = "fringe"
version = "1.2.1"
source = "git+https://git.m-labs.hk/M-Labs/libfringe.git?rev=3ecbe5#3ecbe53f7644b18ee46ebd5b2ca12c9cbceec43a"
dependencies = [
"libc 0.2.101",
"libc 0.2.79",
]
[[package]]
name = "getopts"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14dbbfd5c71d70241ecf9e6f13737f7b5ce823821063188d7e46c41d371eebd5"
dependencies = [
"unicode-width",
]
[[package]]
name = "gimli"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6503fe142514ca4799d4c26297c4248239fe8838d827db6bd6065c6ed29a6ce"
dependencies = [
"fallible-iterator",
"stable_deref_trait",
]
[[package]]
name = "hashbrown"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "00d63df3d41950fb462ed38308eea019113ad1508da725bbedcd0fa5a85ef5f7"
dependencies = [
"ahash",
]
[[package]]
name = "hermit-abi"
version = "0.1.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aca5565f760fb5b220e499d72710ed156fdb74e631659e99377d9ebfbd13ae8"
dependencies = [
"libc 0.2.79",
]
[[package]]
@ -206,9 +337,9 @@ version = "0.1.0"
[[package]]
name = "libc"
version = "0.2.101"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3cb00336871be5ed2c8ed44b60ae9959dc5b9f08539422ed43f09e34ecaeba21"
checksum = "2448f6066e80e3bfc792e9c98bf705b4b0fc6e8ef5b43e5889aff0eaa9c58743"
[[package]]
name = "log"
@ -241,16 +372,38 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c75de51135344a4f8ed3cfe2720dc27736f7711989703a0b43aadf3753c55577"
[[package]]
name = "managed"
version = "0.8.0"
name = "memchr"
version = "2.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ca88d725a0a943b096803bd34e73a4437208b6077654cc4ecb2947a5f91618d"
checksum = "0ee1c47aaa256ecabcaea351eae4a9b01ef39ed810004e298d2511ed284b1525"
[[package]]
name = "memchr"
version = "2.4.1"
name = "miniz_oxide"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a"
checksum = "be0f75932c1f6cfae3c04000e40114adf955636e19040f9c0a2c380702aa1c7f"
dependencies = [
"adler 0.2.3",
]
[[package]]
name = "miniz_oxide"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87dfd01fe195c66b572b37921ad8803d010623c0aca821bea2302239d155cdae"
dependencies = [
"adler 1.0.2",
]
[[package]]
name = "object"
version = "0.22.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d3b63360ec3cb337817c2dbd47ab4a0f170d285d8e5a2064600f3def1402397"
dependencies = [
"flate2",
"wasmparser",
]
[[package]]
name = "proto_artiq"
@ -274,9 +427,9 @@ checksum = "7a6e920b65c65f10b2ae65c831a81a073a89edd28c7cce89475bff467ab4167a"
[[package]]
name = "regex"
version = "1.5.4"
version = "1.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d07a8629359eb56f1e2fb1652bb04212c072a87ba68546a04065d525673ac461"
checksum = "2a26af418b574bd56588335b3a3659a65725d4e636eb1016c2f9e3b38c7cc759"
dependencies = [
"aho-corasick",
"memchr",
@ -285,9 +438,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.6.25"
version = "0.6.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f497285884f3fcff424ffc933e56d7cbca511def0c9831a7f9b5f6153e3cc89b"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
[[package]]
name = "riscv"
@ -327,13 +480,19 @@ dependencies = [
"io",
"log",
"logger_artiq",
"managed 0.7.2",
"managed",
"proto_artiq",
"riscv",
"smoltcp",
"unwind_backtrace",
]
[[package]]
name = "rustc-demangle"
version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e3bad0ee36814ca07d7968269dd4b7ec89ec2da10c4bb613928d3077083c232"
[[package]]
name = "rustc_version"
version = "0.2.3"
@ -370,16 +529,28 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
[[package]]
name = "smoltcp"
version = "0.8.0"
name = "smallvec"
version = "1.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2308a1657c8db1f5b4993bab4e620bdbe5623bd81f254cf60326767bb243237"
checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "smoltcp"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fe46639fd2ec79eadf8fe719f237a7a0bd4dac5d957f1ca5bbdbc1c3c39e53a"
dependencies = [
"bitflags",
"byteorder",
"managed 0.8.0",
"managed",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a8f112729512f8e442d81f95a8a7ddf2b7c6b8a1a6f509a95864142b30cab2d3"
[[package]]
name = "syn"
version = "0.11.11"
@ -410,6 +581,12 @@ dependencies = [
"syn",
]
[[package]]
name = "unicode-width"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9337591893a19b88d8d87f2cec1e73fad5cdfd10e5a6f349f498ad6ea2ffb1e3"
[[package]]
name = "unicode-xid"
version = "0.0.4"
@ -431,3 +608,15 @@ dependencies = [
"libc 0.1.0",
"unwind",
]
[[package]]
name = "wasi"
version = "0.9.0+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cccddf32554fecc6acb585f82a32a72e28b48f8c4c1883ddfeeeaa96f7d8e519"
[[package]]
name = "wasmparser"
version = "0.57.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32fddd575d477c6e9702484139cf9f23dcd554b06d185ed0f56c857dd3a47aa6"

View File

@ -13,8 +13,25 @@ path = "main.rs"
build_misoc = { path = "../libbuild_misoc" }
[dependencies]
byteorder = { version = "1.0", default-features = false }
byteorder = { version = "=1.4.3", default-features = false }
crc = { version = "1.7", default-features = false }
board_misoc = { path = "../libboard_misoc", features = ["uart_console", "smoltcp"] }
smoltcp = { version = "0.8.0", default-features = false, features = ["medium-ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp"] }
smoltcp = { version = "0.6.0", default-features = false, features = ["ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp"] }
riscv = { version = "0.6.0", features = ["inline-asm"] }
# exact version pins required when building with cargo build-std instead of cargo-xbuild under nix
[dev-dependencies]
getopts = "=0.2.21"
libc = "=0.2.79"
unicode-width = "=0.1.8"
addr2line = "=0.14.0"
memchr = "=2.3.4"
hashbrown = "=0.9.0"
miniz_oxide = "=0.4.0"
rustc-demangle = "=0.1.18"
hermit-abi = "=0.1.17"
dlmalloc = "=0.2.1"
wasi = "=0.9.0"
fortanix-sgx-abi = "=0.3.3"
cc = "=1.0.60"
compiler_builtins = "=0.1.39"

View File

@ -3,8 +3,6 @@ include $(MISOC_DIRECTORY)/software/common.mak
RUSTFLAGS += -Cpanic=abort
export XBUILD_SYSROOT_PATH=$(BUILDINC_DIRECTORY)/../sysroot
all:: bootloader.bin
.PHONY: $(RUSTOUT)/libbootloader.a

View File

@ -18,8 +18,6 @@ use board_misoc::slave_fpga;
use board_misoc::{clock, ethmac, net_settings};
use board_misoc::uart_console::Console;
use riscv::register::{mcause, mepc, mtval};
use smoltcp::iface::SocketStorage;
use smoltcp::wire::{HardwareAddress, IpAddress, Ipv4Address};
fn check_integrity() -> bool {
extern {
@ -398,9 +396,6 @@ fn network_boot() {
println!("Initializing network...");
// Assuming only one socket is ever needed by the bootloader.
// smoltcp reuses the listening socket once the connection is established.
let mut sockets = [SocketStorage::EMPTY];
let mut net_device = unsafe { ethmac::EthernetDevice::new() };
net_device.reset_phy_if_any();
@ -410,25 +405,22 @@ fn network_boot() {
let net_addresses = net_settings::get_adresses();
println!("Network addresses: {}", net_addresses);
let mut ip_addrs = [
IpCidr::new(IpAddress::Ipv4(Ipv4Address::UNSPECIFIED), 0),
IpCidr::new(net_addresses.ipv4_addr, 0),
IpCidr::new(net_addresses.ipv6_ll_addr, 0),
IpCidr::new(net_addresses.ipv6_ll_addr, 0)
];
if let net_settings::Ipv4AddrConfig::Static(ipv4) = net_addresses.ipv4_addr {
ip_addrs[0] = IpCidr::new(IpAddress::Ipv4(ipv4), 0);
}
let mut interface = match net_addresses.ipv6_addr {
Some(addr) => {
ip_addrs[2] = IpCidr::new(addr, 0);
smoltcp::iface::InterfaceBuilder::new(net_device, &mut sockets[..])
.hardware_addr(HardwareAddress::Ethernet(net_addresses.hardware_addr))
smoltcp::iface::EthernetInterfaceBuilder::new(net_device)
.ethernet_addr(net_addresses.hardware_addr)
.ip_addrs(&mut ip_addrs[..])
.neighbor_cache(neighbor_cache)
.finalize()
}
None =>
smoltcp::iface::InterfaceBuilder::new(net_device, &mut sockets[..])
.hardware_addr(HardwareAddress::Ethernet(net_addresses.hardware_addr))
smoltcp::iface::EthernetInterfaceBuilder::new(net_device)
.ethernet_addr(net_addresses.hardware_addr)
.ip_addrs(&mut ip_addrs[..2])
.neighbor_cache(neighbor_cache)
.finalize()
@ -437,10 +429,14 @@ fn network_boot() {
let mut rx_storage = [0; 4096];
let mut tx_storage = [0; 128];
let mut socket_set_entries: [_; 1] = Default::default();
let mut sockets =
smoltcp::socket::SocketSet::new(&mut socket_set_entries[..]);
let tcp_rx_buffer = smoltcp::socket::TcpSocketBuffer::new(&mut rx_storage[..]);
let tcp_tx_buffer = smoltcp::socket::TcpSocketBuffer::new(&mut tx_storage[..]);
let tcp_socket = smoltcp::socket::TcpSocket::new(tcp_rx_buffer, tcp_tx_buffer);
let tcp_handle = interface.add_socket(tcp_socket);
let tcp_handle = sockets.add(tcp_socket);
let mut net_conn = NetConn::new();
let mut boot_time = None;
@ -450,7 +446,7 @@ fn network_boot() {
loop {
let timestamp = clock::get_ms() as i64;
{
let socket = &mut *interface.get_socket::<smoltcp::socket::TcpSocket>(tcp_handle);
let socket = &mut *sockets.get::<smoltcp::socket::TcpSocket>(tcp_handle);
match boot_time {
None => {
@ -479,7 +475,7 @@ fn network_boot() {
}
}
match interface.poll(smoltcp::time::Instant::from_millis(timestamp)) {
match interface.poll(&mut sockets, smoltcp::time::Instant::from_millis(timestamp)) {
Ok(_) => (),
Err(smoltcp::Error::Unrecognized) => (),
Err(err) => println!("Network error: {}", err)

View File

@ -14,8 +14,6 @@ LDFLAGS += --eh-frame-hdr \
RUSTFLAGS += -Cpanic=unwind
export XBUILD_SYSROOT_PATH=$(BUILDINC_DIRECTORY)/../sysroot
all:: ksupport.elf
.PHONY: $(RUSTOUT)/libksupport.a
@ -26,7 +24,7 @@ $(RUSTOUT)/libksupport.a:
ksupport.elf: $(RUSTOUT)/libksupport.a glue.o
$(link) -T $(KSUPPORT_DIRECTORY)/ksupport.ld \
-lunwind-$(CPU)-elf -lprintf-float -lm
-lunwind-$(CPU)-libc -lprintf-float -lm
%.o: $(KSUPPORT_DIRECTORY)/%.c
$(compile)

View File

@ -167,4 +167,12 @@ static mut API: &'static [(&'static str, *const ())] = &[
api!(spi_set_config = ::nrt_bus::spi::set_config),
api!(spi_write = ::nrt_bus::spi::write),
api!(spi_read = ::nrt_bus::spi::read),
/*
* syscall for unit tests
* Used in `artiq.tests.coredevice.test_exceptions.ExceptionTest.test_raise_exceptions_kernel`
* This syscall checks that the exception IDs used in the Python `EmbeddingMap` (in `artiq.compiler.embedding`)
* match the `EXCEPTION_ID_LOOKUP` defined in the firmware (`artiq::firmware::ksupport::eh_artiq`)
*/
api!(test_exception_id_sync = ::eh_artiq::test_exception_id_sync)
];

View File

@ -333,6 +333,7 @@ extern fn stop_fn(_version: c_int,
}
}
// Must be kept in sync with `artiq.compiler.embedding`
static EXCEPTION_ID_LOOKUP: [(&str, u32); 11] = [
("RuntimeError", 0),
("RTIOUnderflow", 1),
@ -356,3 +357,30 @@ pub fn get_exception_id(name: &str) -> u32 {
unimplemented!("unallocated internal exception id")
}
/// Takes as input exception id from host
/// Generates a new exception with:
/// * `id` set to `exn_id`
/// * `message` set to corresponding exception name from `EXCEPTION_ID_LOOKUP`
///
/// The message is matched on the host to ensure the correct exception is being referred to
/// This test checks that the exception IDs for runtime errors stay synchronized between host and firmware
#[no_mangle]
#[unwind(allowed)]
pub extern fn test_exception_id_sync(exn_id: u32) {
let message = EXCEPTION_ID_LOOKUP
.iter()
.find_map(|&(name, id)| if id == exn_id { Some(name) } else { None })
.unwrap_or("unallocated internal exception id");
let exn = Exception {
id: exn_id,
file: file!().as_c_slice(),
line: 0,
column: 0,
function: "test_exception_id_sync".as_c_slice(),
message: message.as_c_slice(),
param: [0, 0, 0]
};
unsafe { raise(&exn) };
}
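A minimal illustration (editor's note, not part of the patch) of the round trip this syscall exercises, based on the table above: entry 1 of EXCEPTION_ID_LOOKUP is "RTIOUnderflow", so get_exception_id("RTIOUnderflow") returns 1 and test_exception_id_sync(1) raises an exception whose message is "RTIOUnderflow", which the host-side unittest then compares against its own EmbeddingMap.
// Editor's sketch only: the name <-> id mapping exercised by the test above.
fn demo_exception_id_round_trip() {
    assert_eq!(get_exception_id("RTIOUnderflow"), 1);
    // test_exception_id_sync(1) would unwind with an exception whose message
    // is "RTIOUnderflow", i.e. the name stored in EXCEPTION_ID_LOOKUP[1].
}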

View File

@ -60,6 +60,11 @@ SECTIONS
KEEP(*(.eh_frame_hdr))
} > ksupport :text :eh_frame
.gcc_except_table :
{
*(.gcc_except_table)
} > ksupport
.data :
{
*(.data .data.*)

View File

@ -156,6 +156,21 @@ extern fn rpc_send_async(service: u32, tag: &CSlice<u8>, data: *const *const ())
})
}
/// Receives the result from an RPC call into the given memory buffer.
///
/// To handle aggregate objects with an a priori unknown size and number of
/// sub-allocations (e.g. a list of lists of lists, where, at each level, the number of
/// elements is not statically known), this function needs to be called in a loop:
///
/// On the first call, `slot` should be a buffer of suitable size and alignment for
/// the top-level return value (e.g. in the case of a list, the pointer/length pair).
/// A return value of zero indicates that the value has been completely received.
/// As long as the return value is positive, another allocation with the given number of
/// bytes is needed, so the function should be called again with such a buffer (aligned
/// to the maximum required for any of the possible types according to the target ABI).
///
/// If the RPC call resulted in an exception, it is reconstructed and raised.
#[unwind(allowed)]
extern fn rpc_recv(slot: *mut ()) -> usize {
send(&RpcRecvRequest(slot));
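A sketch of the calling convention the doc comment describes (editor's illustration only; alloc_aligned is a hypothetical helper, not an API of this crate):
// Hypothetical caller loop for rpc_recv, following the doc comment above.
unsafe fn recv_rpc_return(top_level_slot: *mut ()) {
    // First call: pass storage sized and aligned for the top-level value
    // (e.g. the pointer/length pair of a list).
    let mut needed = rpc_recv(top_level_slot);
    while needed != 0 {
        // A nested allocation of `needed` bytes was requested; provide one,
        // aligned for the largest possible payload type, and call again.
        let chunk: *mut () = alloc_aligned(needed, 8);
        needed = rpc_recv(chunk);
    }
    // Zero return value: the value has been completely received (or an
    // exception was reconstructed and raised before reaching this point).
}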
@ -327,7 +342,7 @@ extern fn dma_record_output(target: i32, word: i32) {
}
#[unwind(aborts)]
extern fn dma_record_output_wide(target: i32, words: CSlice<i32>) {
extern fn dma_record_output_wide(target: i32, words: &CSlice<i32>) {
assert!(words.len() <= 16); // enforce the hardware limit
unsafe {

View File

@ -89,7 +89,7 @@ mod imp {
}
}
pub extern fn output_wide(target: i32, data: CSlice<i32>) {
pub extern fn output_wide(target: i32, data: &CSlice<i32>) {
unsafe {
csr::rtio::target_write(target as u32);
// writing target clears o_data
@ -235,7 +235,7 @@ mod imp {
unimplemented!("not(has_rtio)")
}
pub extern fn output_wide(_target: i32, _data: CSlice<i32>) {
pub extern fn output_wide(_target: i32, _data: &CSlice<i32>) {
unimplemented!("not(has_rtio)")
}

View File

@ -22,8 +22,6 @@ pub mod rpc_queue;
#[cfg(has_si5324)]
pub mod si5324;
#[cfg(has_wrpll)]
pub mod wrpll;
#[cfg(has_hmc830_7043)]
pub mod hmc830_7043;

View File

@ -28,7 +28,7 @@ pub struct FrequencySettings {
pub n31: u32,
pub n32: u32,
pub bwsel: u8,
pub crystal_ref: bool
pub crystal_as_ckin2: bool
}
pub enum Input {
@ -83,7 +83,7 @@ fn map_frequency_settings(settings: &FrequencySettings) -> Result<FrequencySetti
n31: settings.n31 - 1,
n32: settings.n32 - 1,
bwsel: settings.bwsel,
crystal_ref: settings.crystal_ref
crystal_as_ckin2: settings.crystal_as_ckin2
};
Ok(r)
}
@ -216,13 +216,14 @@ pub fn bypass(input: Input) -> Result<()> {
pub fn setup(settings: &FrequencySettings, input: Input) -> Result<()> {
let s = map_frequency_settings(settings)?;
let cksel_reg = match input {
Input::Ckin1 => 0b00,
Input::Ckin2 => 0b01,
};
init()?;
if settings.crystal_ref {
if settings.crystal_as_ckin2 {
write(0, read(0)? | 0x40)?; // FREE_RUN=1
}
write(2, (read(2)? & 0x0f) | (s.bwsel << 4))?;

View File

@ -1,538 +0,0 @@
use board_misoc::{csr, clock};
mod i2c {
use board_misoc::{csr, clock};
#[derive(Debug, Clone, Copy)]
pub enum Dcxo {
Main,
Helper
}
fn half_period() { clock::spin_us(1) }
const SDA_MASK: u8 = 2;
const SCL_MASK: u8 = 1;
fn sda_i(dcxo: Dcxo) -> bool {
let reg = match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_in_read() },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_in_read() },
};
reg & SDA_MASK != 0
}
fn sda_oe(dcxo: Dcxo, oe: bool) {
let reg = match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_oe_read() },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_oe_read() },
};
let reg = if oe { reg | SDA_MASK } else { reg & !SDA_MASK };
match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_oe_write(reg) },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_oe_write(reg) }
}
}
fn sda_o(dcxo: Dcxo, o: bool) {
let reg = match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_out_read() },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_out_read() },
};
let reg = if o { reg | SDA_MASK } else { reg & !SDA_MASK };
match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_out_write(reg) },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_out_write(reg) }
}
}
fn scl_oe(dcxo: Dcxo, oe: bool) {
let reg = match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_oe_read() },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_oe_read() },
};
let reg = if oe { reg | SCL_MASK } else { reg & !SCL_MASK };
match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_oe_write(reg) },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_oe_write(reg) }
}
}
fn scl_o(dcxo: Dcxo, o: bool) {
let reg = match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_out_read() },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_out_read() },
};
let reg = if o { reg | SCL_MASK } else { reg & !SCL_MASK };
match dcxo {
Dcxo::Main => unsafe { csr::wrpll::main_dcxo_gpio_out_write(reg) },
Dcxo::Helper => unsafe { csr::wrpll::helper_dcxo_gpio_out_write(reg) }
}
}
pub fn init(dcxo: Dcxo) -> Result<(), &'static str> {
// Set SCL as output, and high level
scl_o(dcxo, true);
scl_oe(dcxo, true);
// Prepare a zero level on SDA so that sda_oe pulls it down
sda_o(dcxo, false);
// Release SDA
sda_oe(dcxo, false);
// Check the I2C bus is ready
half_period();
half_period();
if !sda_i(dcxo) {
// Try toggling SCL a few times
for _bit in 0..8 {
scl_o(dcxo, false);
half_period();
scl_o(dcxo, true);
half_period();
}
}
if !sda_i(dcxo) {
return Err("SDA is stuck low and doesn't get unstuck");
}
Ok(())
}
pub fn start(dcxo: Dcxo) {
// Set SCL high then SDA low
scl_o(dcxo, true);
half_period();
sda_oe(dcxo, true);
half_period();
}
pub fn stop(dcxo: Dcxo) {
// First, make sure SCL is low, so that the target releases the SDA line
scl_o(dcxo, false);
half_period();
// Set SCL high then SDA high
sda_oe(dcxo, true);
scl_o(dcxo, true);
half_period();
sda_oe(dcxo, false);
half_period();
}
pub fn write(dcxo: Dcxo, data: u8) -> bool {
// MSB first
for bit in (0..8).rev() {
// Set SCL low and set our bit on SDA
scl_o(dcxo, false);
sda_oe(dcxo, data & (1 << bit) == 0);
half_period();
// Set SCL high ; data is shifted on the rising edge of SCL
scl_o(dcxo, true);
half_period();
}
// Check ack
// Set SCL low, then release SDA so that the I2C target can respond
scl_o(dcxo, false);
half_period();
sda_oe(dcxo, false);
// Set SCL high and check for ack
scl_o(dcxo, true);
half_period();
// returns true if acked (I2C target pulled SDA low)
!sda_i(dcxo)
}
pub fn read(dcxo: Dcxo, ack: bool) -> u8 {
// Set SCL low first, otherwise setting SDA as input may cause a transition
// on SDA with SCL high which will be interpreted as START/STOP condition.
scl_o(dcxo, false);
half_period(); // make sure SCL has settled low
sda_oe(dcxo, false);
let mut data: u8 = 0;
// MSB first
for bit in (0..8).rev() {
scl_o(dcxo, false);
half_period();
// Set SCL high and shift data
scl_o(dcxo, true);
half_period();
if sda_i(dcxo) { data |= 1 << bit }
}
// Send ack
// Set SCL low and pull SDA low when acking
scl_o(dcxo, false);
if ack { sda_oe(dcxo, true) }
half_period();
// then set SCL high
scl_o(dcxo, true);
half_period();
data
}
}
mod si549 {
use board_misoc::clock;
use super::i2c;
#[cfg(any(soc_platform = "metlino", soc_platform = "sayma_amc", soc_platform = "sayma_rtm"))]
pub const ADDRESS: u8 = 0x55;
#[cfg(soc_platform = "kasli")]
pub const ADDRESS: u8 = 0x67;
pub fn write(dcxo: i2c::Dcxo, reg: u8, val: u8) -> Result<(), &'static str> {
i2c::start(dcxo);
if !i2c::write(dcxo, ADDRESS << 1) {
return Err("Si549 failed to ack write address")
}
if !i2c::write(dcxo, reg) {
return Err("Si549 failed to ack register")
}
if !i2c::write(dcxo, val) {
return Err("Si549 failed to ack value")
}
i2c::stop(dcxo);
Ok(())
}
pub fn write_no_ack_value(dcxo: i2c::Dcxo, reg: u8, val: u8) -> Result<(), &'static str> {
i2c::start(dcxo);
if !i2c::write(dcxo, ADDRESS << 1) {
return Err("Si549 failed to ack write address")
}
if !i2c::write(dcxo, reg) {
return Err("Si549 failed to ack register")
}
i2c::write(dcxo, val);
i2c::stop(dcxo);
Ok(())
}
pub fn read(dcxo: i2c::Dcxo, reg: u8) -> Result<u8, &'static str> {
i2c::start(dcxo);
if !i2c::write(dcxo, ADDRESS << 1) {
return Err("Si549 failed to ack write address")
}
if !i2c::write(dcxo, reg) {
return Err("Si549 failed to ack register")
}
i2c::stop(dcxo);
i2c::start(dcxo);
if !i2c::write(dcxo, (ADDRESS << 1) | 1) {
return Err("Si549 failed to ack read address")
}
let val = i2c::read(dcxo, false);
i2c::stop(dcxo);
Ok(val)
}
pub fn program(dcxo: i2c::Dcxo, hsdiv: u16, lsdiv: u8, fbdiv: u64) -> Result<(), &'static str> {
i2c::init(dcxo)?;
write(dcxo, 255, 0x00)?; // PAGE
write_no_ack_value(dcxo, 7, 0x80)?; // RESET
clock::spin_us(100_000); // required? not specified in datasheet.
write(dcxo, 255, 0x00)?; // PAGE
write(dcxo, 69, 0x00)?; // Disable FCAL override.
// Note: Value 0x00 from Table 5.6 is inconsistent with Table 5.7,
// which shows bit 0 as reserved and =1.
write(dcxo, 17, 0x00)?; // Synchronously disable output
// The Si549 has no ID register, so we check that it responds correctly
// by writing values to a RAM-like register and reading them back.
for test_value in 0..255 {
write(dcxo, 23, test_value)?;
let readback = read(dcxo, 23)?;
if readback != test_value {
return Err("Si549 detection failed");
}
}
write(dcxo, 23, hsdiv as u8)?;
write(dcxo, 24, (hsdiv >> 8) as u8 | (lsdiv << 4))?;
write(dcxo, 26, fbdiv as u8)?;
write(dcxo, 27, (fbdiv >> 8) as u8)?;
write(dcxo, 28, (fbdiv >> 16) as u8)?;
write(dcxo, 29, (fbdiv >> 24) as u8)?;
write(dcxo, 30, (fbdiv >> 32) as u8)?;
write(dcxo, 31, (fbdiv >> 40) as u8)?;
write(dcxo, 7, 0x08)?; // Start FCAL
write(dcxo, 17, 0x01)?; // Synchronously enable output
Ok(())
}
// Si549 digital frequency trim ("all-digital PLL" register)
// ∆ f_out = adpll * 0.0001164e-6 (0.1164 ppb/lsb)
// max trim range is +- 950 ppm
pub fn set_adpll(dcxo: i2c::Dcxo, adpll: i32) -> Result<(), &'static str> {
write(dcxo, 231, adpll as u8)?;
write(dcxo, 232, (adpll >> 8) as u8)?;
write(dcxo, 233, (adpll >> 16) as u8)?;
clock::spin_us(100);
Ok(())
}
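A quick numeric check of the trim step quoted above (editor's note): at 0.1164 ppb per LSB, a +10 ppm shift needs about 85 911 LSB, which is the value used in diagnostics() below, and the ±950 ppm trim range corresponds to ADPLL_MAX in trim_dcxos().
    // Worked example (editor's illustration):
    //   +10 ppm  -> 10.0  / 0.0001164 ppm per LSB ≈    85 911 LSB  (cf. diagnostics())
    //   ±950 ppm -> 950.0 / 0.0001164 ppm per LSB ≈ 8 161 512 LSB  (cf. ADPLL_MAX)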
pub fn get_adpll(dcxo: i2c::Dcxo) -> Result<i32, &'static str> {
let b1 = read(dcxo, 231)? as i32;
let b2 = read(dcxo, 232)? as i32;
let b3 = read(dcxo, 233)? as i8 as i32;
Ok(b3 << 16 | b2 << 8 | b1)
}
}
// to do: load from gateware config
const DDMTD_COUNTER_N: u32 = 15;
const DDMTD_COUNTER_M: u32 = (1 << DDMTD_COUNTER_N);
const F_SYS: f64 = csr::CONFIG_CLOCK_FREQUENCY as f64;
const F_MAIN: f64 = 125.0e6;
const F_HELPER: f64 = F_MAIN * DDMTD_COUNTER_M as f64 / (DDMTD_COUNTER_M + 1) as f64;
const F_BEAT: f64 = F_MAIN - F_HELPER;
const TIME_STEP: f32 = 1./F_BEAT as f32;
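For concreteness (editor's note, derived only from the constants above), the beat-note parameters work out as follows:
// Editor's illustration: with DDMTD_COUNTER_N = 15 (M = 32768) and F_MAIN = 125 MHz,
//   F_HELPER  = 125 MHz * 32768/32769 ≈ 124.9962 MHz
//   F_BEAT    = F_MAIN / (M + 1)      ≈ 3.815 kHz
//   TIME_STEP = 1 / F_BEAT            ≈ 262 µs per DDMTD tag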
fn ddmtd_tag_to_s(mu: f32) -> f32 {
return (mu as f32)*TIME_STEP;
}
fn get_frequencies() -> (u32, u32, u32) {
unsafe {
csr::wrpll::frequency_counter_update_en_write(1);
// wait for at least one full update cycle (> 2 timer periods)
clock::spin_us(200_000);
csr::wrpll::frequency_counter_update_en_write(0);
let helper = csr::wrpll::frequency_counter_counter_helper_read();
let main = csr::wrpll::frequency_counter_counter_rtio_read();
let cdr = csr::wrpll::frequency_counter_counter_rtio_rx0_read();
(helper, main, cdr)
}
}
fn log_frequencies() -> (u32, u32, u32) {
let (f_helper, f_main, f_cdr) = get_frequencies();
let conv_khz = |f| 4*(f as u64)*(csr::CONFIG_CLOCK_FREQUENCY as u64)/(1000*(1 << 23));
info!("helper clock frequency: {}kHz ({})", conv_khz(f_helper), f_helper);
info!("main clock frequency: {}kHz ({})", conv_khz(f_main), f_main);
info!("CDR clock frequency: {}kHz ({})", conv_khz(f_cdr), f_cdr);
(f_helper, f_main, f_cdr)
}
fn get_tags() -> (i32, i32, u16, u16) {
unsafe {
csr::wrpll::tag_arm_write(1);
while csr::wrpll::tag_arm_read() != 0 {}
let main_diff = csr::wrpll::main_diff_tag_read() as i32;
let helper_diff = csr::wrpll::helper_diff_tag_read() as i32;
let ref_tag = csr::wrpll::ref_tag_read();
let main_tag = csr::wrpll::main_tag_read();
(main_diff, helper_diff, ref_tag, main_tag)
}
}
fn print_tags() {
const NUM_TAGS: usize = 30;
let mut main_diffs = [0; NUM_TAGS]; // input to main loop filter
let mut helper_diffs = [0; NUM_TAGS]; // input to helper loop filter
let mut ref_tags = [0; NUM_TAGS];
let mut main_tags = [0; NUM_TAGS];
let mut jitter = [0 as f32; NUM_TAGS];
for i in 0..NUM_TAGS {
let (main_diff, helper_diff, ref_tag, main_tag) = get_tags();
main_diffs[i] = main_diff;
helper_diffs[i] = helper_diff;
ref_tags[i] = ref_tag;
main_tags[i] = main_tag;
}
info!("DDMTD ref tags: {:?}", ref_tags);
info!("DDMTD main tags: {:?}", main_tags);
info!("DDMTD main diffs: {:?}", main_diffs);
info!("DDMTD helper diffs: {:?}", helper_diffs);
// look at the difference between the main DCXO and reference...
let t0 = main_diffs[0];
main_diffs.iter_mut().for_each(|x| *x -= t0);
// crude estimate of the average drift per tag interval across our sample set (assumes no unwrapping issues...)
let delta = main_diffs[main_diffs.len()-1] as f32 / (main_diffs.len()-1) as f32;
info!("detla: {:?} tags", delta);
let delta_f: f32 = delta/DDMTD_COUNTER_M as f32 * F_BEAT as f32;
info!("MAIN <-> ref frequency difference: {:?} Hz ({:?} ppm)", delta_f, delta_f/F_HELPER as f32 * 1e6);
jitter.iter_mut().enumerate().for_each(|(i, x)| *x = main_diffs[i] as f32 - delta*(i as f32));
info!("jitter: {:?} tags", jitter);
let var = jitter.iter().map(|x| x*x).fold(0 as f32, |acc, x| acc + x as f32) / NUM_TAGS as f32;
info!("variance: {:?} tags^2", var);
}
pub fn init() {
info!("initializing WR PLL...");
unsafe { csr::wrpll::helper_reset_write(1); }
unsafe {
csr::wrpll::helper_dcxo_i2c_address_write(si549::ADDRESS);
csr::wrpll::main_dcxo_i2c_address_write(si549::ADDRESS);
}
#[cfg(rtio_frequency = "125.0")]
let (h_hsdiv, h_lsdiv, h_fbdiv) = (0x05c, 0, 0x04b5badb98a);
#[cfg(rtio_frequency = "125.0")]
let (m_hsdiv, m_lsdiv, m_fbdiv) = (0x05c, 0, 0x04b5c447213);
si549::program(i2c::Dcxo::Main, m_hsdiv, m_lsdiv, m_fbdiv)
.expect("cannot initialize main Si549");
si549::program(i2c::Dcxo::Helper, h_hsdiv, h_lsdiv, h_fbdiv)
.expect("cannot initialize helper Si549");
// Si549 Settling Time for Large Frequency Change.
// Datasheet said 10ms but it lied.
clock::spin_us(50_000);
unsafe { csr::wrpll::helper_reset_write(0); }
clock::spin_us(1);
}
pub fn diagnostics() {
info!("WRPLL diagnostics...");
info!("Untrimmed oscillator frequencies:");
log_frequencies();
info!("Increase helper DCXO frequency by +10ppm (1.25kHz):");
si549::set_adpll(i2c::Dcxo::Helper, 85911).expect("ADPLL write failed");
// to do: add check on frequency?
log_frequencies();
}
fn trim_dcxos(f_helper: u32, f_main: u32, f_cdr: u32) -> Result<(i32, i32), &'static str> {
info!("Trimming oscillator frequencies...");
const DCXO_STEP: i64 = (1.0e6/0.0001164) as i64;
const ADPLL_MAX: i64 = (950.0/0.0001164) as i64;
const TIMER_WIDTH: u32 = 23;
const COUNTER_DIV: u32 = 2;
// how many counts we expect to measure
const SYS_COUNTS: i64 = (1 << (TIMER_WIDTH - COUNTER_DIV)) as i64;
const EXP_MAIN_COUNTS: i64 = ((SYS_COUNTS as f64) * (F_MAIN/F_SYS)) as i64;
const EXP_HELPER_COUNTS: i64 = ((SYS_COUNTS as f64) * (F_HELPER/F_SYS)) as i64;
// calibrate the SYS clock to the CDR clock and correct the measured counts
// assume frequency errors are small so we can make an additive correction
// positive error means sys clock is too fast
let sys_err: i64 = EXP_MAIN_COUNTS - (f_cdr as i64);
let main_err: i64 = EXP_MAIN_COUNTS - (f_main as i64) - sys_err;
let helper_err: i64 = EXP_HELPER_COUNTS - (f_helper as i64) - sys_err;
info!("sys count err {}", sys_err);
info!("main counts err {}", main_err);
info!("helper counts err {}", helper_err);
// calculate required adjustment to the ADPLL register see
// https://www.silabs.com/documents/public/data-sheets/si549-datasheet.pdf
// section 5.6
let helper_adpll: i64 = helper_err*DCXO_STEP/EXP_HELPER_COUNTS;
let main_adpll: i64 = main_err*DCXO_STEP/EXP_MAIN_COUNTS;
if helper_adpll.abs() > ADPLL_MAX {
return Err("helper DCXO offset too large");
}
if main_adpll.abs() > ADPLL_MAX {
return Err("main DCXO offset too large");
}
info!("ADPLL offsets: helper={} main={}", helper_adpll, main_adpll);
Ok((helper_adpll as i32, main_adpll as i32))
}
fn statistics(data: &[u16]) -> (f32, f32) {
let sum = data.iter().fold(0 as u32, |acc, x| acc + *x as u32);
let mean = sum as f32 / data.len() as f32;
let squared_sum = data.iter().fold(0 as u32, |acc, x| acc + (*x as u32).pow(2));
let variance = (squared_sum as f32 / data.len() as f32) - mean * mean; // Var[x] = E[x^2] - E[x]^2
return (mean, variance)
}
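A small worked check of this estimator (editor's illustration): for the samples [1, 2, 3] the mean is 2 and the population variance is E[x²] − mean² = 14/3 − 4 = 2/3.
// Editor's sketch: statistics(&[1, 2, 3]) -> mean = 2.0,
//                  variance = (1 + 4 + 9)/3 - 2.0*2.0 ≈ 0.667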
fn select_recovered_clock_int(rc: bool) -> Result<(), &'static str> {
info!("Untrimmed oscillator frequencies:");
let (f_helper, f_main, f_cdr) = log_frequencies();
if rc {
let (helper_adpll, main_adpll) = trim_dcxos(f_helper, f_main, f_cdr)?;
// to do: add assertion on max frequency shift here?
si549::set_adpll(i2c::Dcxo::Helper, helper_adpll).expect("ADPLL write failed");
si549::set_adpll(i2c::Dcxo::Main, main_adpll).expect("ADPLL write failed");
log_frequencies();
clock::spin_us(100_000); // TO DO: remove/reduce!
print_tags();
info!("increasing main DCXO by 1ppm (125Hz):");
si549::set_adpll(i2c::Dcxo::Main, main_adpll + 8591).expect("ADPLL write failed");
clock::spin_us(100_000);
print_tags();
si549::set_adpll(i2c::Dcxo::Main, main_adpll).expect("ADPLL write failed");
unsafe {
csr::wrpll::adpll_offset_helper_write(helper_adpll as u32);
csr::wrpll::adpll_offset_main_write(main_adpll as u32);
csr::wrpll::helper_dcxo_gpio_enable_write(0);
csr::wrpll::main_dcxo_gpio_enable_write(0);
csr::wrpll::helper_dcxo_errors_write(0xff);
csr::wrpll::main_dcxo_errors_write(0xff);
csr::wrpll::collector_reset_write(0);
}
clock::spin_us(1_000); // wait for the collector to produce meaningful output
unsafe {
csr::wrpll::filter_reset_write(0);
}
clock::spin_us(100_000);
print_tags();
// let mut tags = [0; 10];
// for i in 0..tags.len() {
// tags[i] = get_ddmtd_helper_tag();
// }
// info!("DDMTD helper tags: {:?}", tags);
unsafe {
csr::wrpll::filter_reset_write(1);
csr::wrpll::collector_reset_write(1);
}
clock::spin_us(50_000);
unsafe {
csr::wrpll::helper_dcxo_gpio_enable_write(1);
csr::wrpll::main_dcxo_gpio_enable_write(1);
}
unsafe {
info!("error {} {}",
csr::wrpll::helper_dcxo_errors_read(),
csr::wrpll::main_dcxo_errors_read());
}
info!("new ADPLL: {} {}",
si549::get_adpll(i2c::Dcxo::Helper)?,
si549::get_adpll(i2c::Dcxo::Main)?);
} else {
si549::set_adpll(i2c::Dcxo::Helper, 0).expect("ADPLL write failed");
si549::set_adpll(i2c::Dcxo::Main, 0).expect("ADPLL write failed");
}
Ok(())
}
pub fn select_recovered_clock(rc: bool) {
if rc {
info!("switching to recovered clock");
} else {
info!("switching to local XO clock");
}
match select_recovered_clock_int(rc) {
Ok(()) => info!("clock transition completed"),
Err(e) => error!("clock transition failed: {}", e)
}
}

View File

@ -15,7 +15,7 @@ build_misoc = { path = "../libbuild_misoc" }
[dependencies]
byteorder = { version = "1.0", default-features = false }
log = { version = "0.4", default-features = false, optional = true }
smoltcp = { version = "0.8.0", default-features = false, optional = true }
smoltcp = { version = "0.6.0", default-features = false, optional = true }
riscv = { version = "0.6.0", features = ["inline-asm"] }
[features]

View File

@ -163,7 +163,7 @@ mod imp {
while let Some(result) = iter.next() {
let (record_key, record_value) = result?;
if key.as_bytes() == record_key {
found = true;
found = !record_value.is_empty();
// last write wins
value = record_value
}

View File

@ -1,5 +1,14 @@
use i2c;
use csr;
use i2c;
// Only the bare minimum registers. Bits/IO connections equivalent between IC types.
struct Registers {
// PCA9539 equivalent register names in comments
iodira: u8, // Configuration Port 0
iodirb: u8, // Configuration Port 1
gpioa: u8, // Output Port 0
gpiob: u8, // Output Port 1
}
pub struct IoExpander {
busno: u8,
@ -9,15 +18,17 @@ pub struct IoExpander {
iodir: [u8; 2],
out_current: [u8; 2],
out_target: [u8; 2],
registers: Registers,
}
impl IoExpander {
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
pub fn new(index: u8) -> Self {
pub fn new(index: u8) -> Result<Self, &'static str> {
const VIRTUAL_LED_MAPPING0: [(u8, u8, u8); 2] = [(0, 0, 6), (1, 1, 6)];
const VIRTUAL_LED_MAPPING1: [(u8, u8, u8); 2] = [(2, 0, 6), (3, 1, 6)];
// Both expanders on SHARED I2C bus
match index {
let mut io_expander = match index {
0 => IoExpander {
busno: 0,
port: 11,
@ -26,6 +37,12 @@ impl IoExpander {
iodir: [0xff; 2],
out_current: [0; 2],
out_target: [0; 2],
registers: Registers {
iodira: 0x00,
iodirb: 0x01,
gpioa: 0x12,
gpiob: 0x13,
},
},
1 => IoExpander {
busno: 0,
@ -35,9 +52,33 @@ impl IoExpander {
iodir: [0xff; 2],
out_current: [0; 2],
out_target: [0; 2],
registers: Registers {
iodira: 0x00,
iodirb: 0x01,
gpioa: 0x12,
gpiob: 0x13,
},
_ => panic!("incorrect I/O expander index"),
},
_ => return Err("incorrect I/O expander index"),
};
if !io_expander.check_ack()? {
#[cfg(feature = "log")]
log::info!(
"MCP23017 io expander {} not found. Checking for PCA9539.",
index
);
io_expander.address += 0xa8; // translate to PCA9539 addresses (see schematic)
io_expander.registers = Registers {
iodira: 0x06,
iodirb: 0x07,
gpioa: 0x02,
gpiob: 0x03,
};
if !io_expander.check_ack()? {
return Err("Neither MCP23017 nor PCA9539 io expander found.");
};
}
Ok(io_expander)
}
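A brief sketch of the resulting call pattern (editor's note; the runtime below simply unwraps the Result):
// Editor's sketch: new() now probes for an MCP23017 and, failing that, a
// PCA9539, so construction is fallible.
// let mut expander = IoExpander::new(0)?;  // cf. main.rs: .unwrap()
// expander.init()?;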
#[cfg(soc_platform = "kasli")]
@ -57,9 +98,18 @@ impl IoExpander {
Ok(())
}
fn check_ack(&self) -> Result<bool, &'static str> {
// Check for ack from io expander
self.select()?;
i2c::start(self.busno)?;
let ack = i2c::write(self.busno, self.address)?;
i2c::stop(self.busno)?;
Ok(ack)
}
fn update_iodir(&self) -> Result<(), &'static str> {
self.write(0x00, self.iodir[0])?;
self.write(0x01, self.iodir[1])?;
self.write(self.registers.iodira, self.iodir[0])?;
self.write(self.registers.iodirb, self.iodir[1])?;
Ok(())
}
@ -72,9 +122,9 @@ impl IoExpander {
self.update_iodir()?;
self.out_current[0] = 0x00;
self.write(0x12, 0x00)?;
self.write(self.registers.gpioa, 0x00)?;
self.out_current[1] = 0x00;
self.write(0x13, 0x00)?;
self.write(self.registers.gpiob, 0x00)?;
Ok(())
}
@ -94,20 +144,18 @@ impl IoExpander {
pub fn service(&mut self) -> Result<(), &'static str> {
for (led, port, bit) in self.virtual_led_mapping.iter() {
let level = unsafe {
(csr::virtual_leds::status_read() >> led) & 1
};
let level = unsafe { (csr::virtual_leds::status_read() >> led) & 1 };
self.set(*port, *bit, level != 0);
}
if self.out_target != self.out_current {
self.select()?;
if self.out_target[0] != self.out_current[0] {
self.write(0x12, self.out_target[0])?;
self.write(self.registers.gpioa, self.out_target[0])?;
self.out_current[0] = self.out_target[0];
}
if self.out_target[1] != self.out_current[1] {
self.write(0x13, self.out_target[1])?;
self.write(self.registers.gpiob, self.out_target[1])?;
self.out_current[1] = self.out_target[1];
}
}

View File

@ -1,5 +1,6 @@
#![no_std]
#![feature(llvm_asm)]
#![feature(asm)]
extern crate byteorder;
#[cfg(feature = "log")]

View File

@ -1,43 +1,15 @@
use core::fmt;
use core::fmt::{Display, Formatter};
use core::str::FromStr;
use smoltcp::wire::{EthernetAddress, IpAddress, Ipv4Address};
use smoltcp::wire::{EthernetAddress, IpAddress};
use config;
#[cfg(soc_platform = "kasli")]
use i2c_eeprom;
pub enum Ipv4AddrConfig {
UseDhcp,
Static(Ipv4Address),
}
impl FromStr for Ipv4AddrConfig {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(if s == "use_dhcp" {
Ipv4AddrConfig::UseDhcp
} else {
Ipv4AddrConfig::Static(Ipv4Address::from_str(s)?)
})
}
}
impl Display for Ipv4AddrConfig {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
match self {
Ipv4AddrConfig::UseDhcp => write!(f, "use_dhcp"),
Ipv4AddrConfig::Static(ipv4) => write!(f, "{}", ipv4)
}
}
}
pub struct NetAddresses {
pub hardware_addr: EthernetAddress,
pub ipv4_addr: Ipv4AddrConfig,
pub ipv4_addr: IpAddress,
pub ipv6_ll_addr: IpAddress,
pub ipv6_addr: Option<IpAddress>
}
@ -79,7 +51,16 @@ pub fn get_adresses() -> NetAddresses {
let ipv4_addr;
match config::read_str("ip", |r| r.map(|s| s.parse())) {
Ok(Ok(addr)) => ipv4_addr = addr,
_ => ipv4_addr = Ipv4AddrConfig::UseDhcp,
_ => {
#[cfg(soc_platform = "kasli")]
{ ipv4_addr = IpAddress::v4(192, 168, 1, 70); }
#[cfg(soc_platform = "sayma_amc")]
{ ipv4_addr = IpAddress::v4(192, 168, 1, 60); }
#[cfg(soc_platform = "metlino")]
{ ipv4_addr = IpAddress::v4(192, 168, 1, 65); }
#[cfg(soc_platform = "kc705")]
{ ipv4_addr = IpAddress::v4(192, 168, 1, 50); }
}
}
let ipv6_ll_addr = IpAddress::v6(

View File

@ -11,4 +11,3 @@ path = "lib.rs"
cslice = { version = "0.3" }
libc = { path = "../libc" }
unwind = { path = "../libunwind" }
compiler_builtins = "=0.1.39" # A dependency of alloc, libeh doesn't need it

View File

@ -6,22 +6,81 @@ use io::{ProtoRead, Read, Write, ProtoWrite, Error};
use self::tag::{Tag, TagIterator, split_tag};
#[inline]
fn alignment_offset(alignment: isize, ptr: isize) -> isize {
(-ptr).rem_euclid(alignment)
fn round_up(val: usize, power_of_two: usize) -> usize {
assert!(power_of_two.is_power_of_two());
let max_rem = power_of_two - 1;
(val + max_rem) & (!max_rem)
}
#[inline]
unsafe fn round_up_mut<T>(ptr: *mut T, power_of_two: usize) -> *mut T {
round_up(ptr as usize, power_of_two) as *mut T
}
#[inline]
unsafe fn round_up_const<T>(ptr: *const T, power_of_two: usize) -> *const T {
round_up(ptr as usize, power_of_two) as *const T
}
#[inline]
unsafe fn align_ptr<T>(ptr: *const ()) -> *const T {
let alignment = core::mem::align_of::<T>() as isize;
let fix = alignment_offset(alignment as isize, ptr as isize);
((ptr as isize) + fix) as *const T
round_up_const(ptr, core::mem::align_of::<T>()) as *const T
}
#[inline]
unsafe fn align_ptr_mut<T>(ptr: *mut ()) -> *mut T {
let alignment = core::mem::align_of::<T>() as isize;
let fix = alignment_offset(alignment as isize, ptr as isize);
((ptr as isize) + fix) as *mut T
round_up_mut(ptr, core::mem::align_of::<T>()) as *mut T
}
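The bit trick in round_up may be easier to see with concrete numbers (editor's note): with power_of_two = 8, max_rem = 7 and !max_rem clears the low three bits, so already-aligned values pass through unchanged.
// Editor's illustration (not part of the patch):
//   round_up(13, 8) = (13 + 7) & !7 = 20 & !7 = 16
//   round_up(16, 8) = (16 + 7) & !7 = 23 & !7 = 16
//   round_up(0, 4)  = 0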
/// Reads (deserializes) `length` array or list elements of type `tag` from `reader`,
/// writing them into the buffer given by `storage`.
///
/// `alloc` is used for nested allocations (if elements themselves contain
/// lists/arrays), see [recv_value].
unsafe fn recv_elements<R, E>(
reader: &mut R,
tag: Tag,
length: usize,
storage: *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>,
) -> Result<(), E>
where
R: Read + ?Sized,
E: From<Error<R::ReadError>>,
{
// List of simple types are special-cased in the protocol for performance.
match tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(storage as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(storage as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(storage as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
let mut data = storage;
for _ in 0..length {
recv_value(reader, tag, &mut data, alloc)?
}
}
}
Ok(())
}
/// Reads (deserializes) a value of type `tag` from `reader`, writing the results to
/// the kernel-side buffer `data` (the passed pointer to which is incremented to point
/// past the just-received data). For nested allocations (lists/arrays), `alloc` is
/// invoked any number of times with the size of the required allocation as a parameter
/// (which is assumed to be correctly aligned for all payload types).
unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
alloc: &dyn Fn(usize) -> Result<*mut (), E>)
-> Result<(), E>
@ -53,105 +112,73 @@ unsafe fn recv_value<R, E>(reader: &mut R, tag: Tag, data: &mut *mut (),
Tag::String | Tag::Bytes | Tag::ByteArray => {
consume_value!(CMutSlice<u8>, |ptr| {
let length = reader.read_u32()? as usize;
if length > 0 {
*ptr = CMutSlice::new(alloc(length)? as *mut u8, length);
reader.read_exact((*ptr).as_mut())?;
} else {
*ptr = CMutSlice::new(core::ptr::NonNull::<u8>::dangling().as_ptr(), 0);
}
Ok(())
})
}
Tag::Tuple(it, arity) => {
*data = data.offset(alignment_offset(tag.alignment() as isize, *data as isize));
let alignment = tag.alignment();
*data = round_up_mut(*data, alignment);
let mut it = it.clone();
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
recv_value(reader, tag, data, alloc)?
}
// Take into account any tail padding (if element(s) with largest alignment
// are not at the end).
*data = round_up_mut(*data, alignment);
Ok(())
}
Tag::List(it) => {
#[repr(C)]
struct List { elements: *mut (), length: u32 }
consume_value!(*mut List, |ptr| {
struct List { elements: *mut (), length: usize }
consume_value!(*mut List, |ptr_to_list| {
let tag = it.clone().next().expect("truncated tag");
let padding = if let Tag::Int64 | Tag::Float64 = tag { 4 } else { 0 };
let length = reader.read_u32()? as usize;
let data = alloc(tag.size() * length + padding + 8)? as *mut u8;
*ptr = data as *mut List;
let ptr = data as *mut List;
let mut data = data.offset(8 + alignment_offset(tag.alignment() as isize, data as isize)) as *mut ();
(*ptr).length = length as u32;
(*ptr).elements = data;
match tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
for _ in 0..length {
recv_value(reader, tag, &mut data, alloc)?
}
}
}
Ok(())
// To avoid multiple kernel CPU roundtrips, use a single allocation for
// both the pointer/length List (slice) and the backing storage for the
// elements. We can assume that alloc() is aligned suitably, so just
// need to take into account any extra padding required.
// (Note: On RISC-V, there will never actually be any types with
// alignment larger than 8 bytes, so storage_offset == list_size and no
// extra padding is ever required.)
let list_size = 4 + 4;
let storage_offset = round_up(list_size, tag.alignment());
let storage_size = tag.size() * length;
let allocation = alloc(storage_offset + storage_size)? as *mut u8;
*ptr_to_list = allocation as *mut List;
let storage = allocation.offset(storage_offset as isize) as *mut ();
(**ptr_to_list).length = length;
(**ptr_to_list).elements = storage;
recv_elements(reader, tag, length, storage, alloc)
})
}
Tag::Array(it, num_dims) => {
consume_value!(*mut (), |buffer| {
let mut total_len: u32 = 1;
// Deserialize length along each dimension and compute total number of
// elements.
let mut total_len: usize = 1;
for _ in 0..num_dims {
let len = reader.read_u32()?;
let len = reader.read_u32()? as usize;
total_len *= len;
consume_value!(u32, |ptr| *ptr = len )
consume_value!(usize, |ptr| *ptr = len )
}
let length = total_len as usize;
// Allocate backing storage for elements; deserialize them.
let elt_tag = it.clone().next().expect("truncated tag");
let padding = if let Tag::Int64 | Tag::Float64 = tag { 4 } else { 0 };
let mut data = alloc(elt_tag.size() * length + padding)?;
data = data.offset(alignment_offset(tag.alignment() as isize, data as isize));
*buffer = data;
match elt_tag {
Tag::Bool => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length);
reader.read_exact(dest)?;
},
Tag::Int32 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 4);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i32, length);
NativeEndian::from_slice_i32(dest);
},
Tag::Int64 | Tag::Float64 => {
let dest = slice::from_raw_parts_mut(data as *mut u8, length * 8);
reader.read_exact(dest)?;
let dest = slice::from_raw_parts_mut(data as *mut i64, length);
NativeEndian::from_slice_i64(dest);
},
_ => {
for _ in 0..length {
recv_value(reader, elt_tag, &mut data, alloc)?
}
}
}
Ok(())
*buffer = alloc(elt_tag.size() * total_len)?;
recv_elements(reader, elt_tag, total_len, *buffer, alloc)
})
}
Tag::Range(it) => {
*data = data.offset(alignment_offset(tag.alignment() as isize, *data as isize));
*data = round_up_mut(*data, tag.alignment());
let tag = it.clone().next().expect("truncated tag");
recv_value(reader, tag, data, alloc)?;
recv_value(reader, tag, data, alloc)?;
@ -180,6 +207,36 @@ pub fn recv_return<R, E>(reader: &mut R, tag_bytes: &[u8], data: *mut (),
Ok(())
}
unsafe fn send_elements<W>(writer: &mut W, elt_tag: Tag, length: usize, data: *const ())
-> Result<(), Error<W::WriteError>>
where W: Write + ?Sized
{
writer.write_u8(elt_tag.as_u8())?;
match elt_tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
let mut data = data;
for _ in 0..length {
send_value(writer, elt_tag, &mut data)?;
}
}
}
Ok(())
}
unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
-> Result<(), Error<W::WriteError>>
where W: Write + ?Sized
@ -213,10 +270,13 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
Tag::Tuple(it, arity) => {
let mut it = it.clone();
writer.write_u8(arity)?;
let mut max_alignment = 0;
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
max_alignment = core::cmp::max(max_alignment, tag.alignment());
send_value(writer, tag, data)?
}
*data = round_up_const(*data, max_alignment);
Ok(())
}
Tag::List(it) => {
@ -226,30 +286,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
let length = (**ptr).length as usize;
writer.write_u32((**ptr).length)?;
let tag = it.clone().next().expect("truncated tag");
let mut data = (**ptr).elements;
writer.write_u8(tag.as_u8())?;
match tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
for _ in 0..length {
send_value(writer, tag, &mut data)?;
}
}
}
Ok(())
send_elements(writer, tag, length, (**ptr).elements)
})
}
Tag::Array(it, num_dims) => {
@ -265,30 +302,7 @@ unsafe fn send_value<W>(writer: &mut W, tag: Tag, data: &mut *const ())
})
}
let length = total_len as usize;
let mut data = *buffer;
writer.write_u8(elt_tag.as_u8())?;
match elt_tag {
// we cannot use NativeEndian::from_slice_i32 as the data is not mutable,
// and that is not needed as the data is already in native endian
Tag::Bool => {
let slice = slice::from_raw_parts(data as *const u8, length);
writer.write_all(slice)?;
},
Tag::Int32 => {
let slice = slice::from_raw_parts(data as *const u8, length * 4);
writer.write_all(slice)?;
},
Tag::Int64 | Tag::Float64 => {
let slice = slice::from_raw_parts(data as *const u8, length * 8);
writer.write_all(slice)?;
},
_ => {
for _ in 0..length {
send_value(writer, elt_tag, &mut data)?;
}
}
}
Ok(())
send_elements(writer, elt_tag, length, *buffer)
})
}
Tag::Range(it) => {
@ -349,7 +363,7 @@ pub fn send_args<W>(writer: &mut W, service: u32, tag_bytes: &[u8], data: *const
mod tag {
use core::fmt;
use super::alignment_offset;
use super::round_up;
pub fn split_tag(tag_bytes: &[u8]) -> (&[u8], &[u8]) {
let tag_separator =
@ -416,16 +430,18 @@ mod tag {
let it = it.clone();
it.take(3).map(|t| t.alignment()).max().unwrap()
}
// CSlice basically
Tag::Bytes | Tag::String | Tag::ByteArray | Tag::List(_) =>
// the ptr/length(s) pair is basically CSlice
Tag::Bytes | Tag::String | Tag::ByteArray | Tag::List(_) | Tag::Array(_, _) =>
core::mem::align_of::<CSlice<()>>(),
// array buffer is allocated, so no need for alignment first
Tag::Array(_, _) => 1,
// will not be sent from the host
_ => unreachable!("unexpected tag from host")
Tag::Keyword(_) => unreachable!("Tag::Keyword should not appear in composite types"),
Tag::Object => core::mem::align_of::<u32>(),
}
}
/// Returns the "alignment size" of a value with the type described by the tag
/// (in bytes), i.e. the stride between successive elements in a list/array of
/// the given type, or the offset from a struct element of this type to the
/// next field.
pub fn size(self) -> usize {
match self {
Tag::None => 0,
@ -438,12 +454,18 @@ mod tag {
Tag::ByteArray => 8,
Tag::Tuple(it, arity) => {
let mut size = 0;
let mut max_alignment = 0;
let mut it = it.clone();
for _ in 0..arity {
let tag = it.next().expect("truncated tag");
let alignment = tag.alignment();
max_alignment = core::cmp::max(max_alignment, alignment);
size = round_up(size, alignment);
size += tag.size();
size += alignment_offset(tag.alignment() as isize, size as isize) as usize;
}
// Take into account any tail padding (if element(s) with largest
// alignment are not at the end).
size = round_up(size, max_alignment);
size
}
Tag::List(_) => 8,
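A worked example of the tuple layout computed above (editor's illustration, assuming the usual 4-byte and 8-byte alignments for Int32 and Int64): for (Int32, Int64) the loop yields round_up(0, 4) + 4 = 4, then round_up(4, 8) + 8 = 16, and the tail round_up(16, 8) adds nothing; for (Int64, Int32) it yields 8 + 4 = 12, and the tail padding brings it to round_up(12, 8) = 16, matching the C struct layout on the target.
// Editor's illustration of Tag::Tuple size():
//   (Int32, Int64): round_up(0,4)+4 = 4 -> round_up(4,8)+8 = 16 -> tail round_up(16,8) = 16
//   (Int64, Int32): round_up(0,8)+8 = 8 -> round_up(8,4)+4 = 12 -> tail round_up(12,8) = 16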

View File

@ -17,7 +17,7 @@ failure = { version = "0.1", default-features = false }
failure_derive = { version = "0.1", default-features = false }
byteorder = { version = "1.0", default-features = false }
cslice = { version = "0.3" }
log = { version = "0.4", default-features = false }
log = { version = "=0.4.14", default-features = false }
managed = { version = "^0.7.1", default-features = false, features = ["alloc", "map"] }
eh = { path = "../libeh" }
unwind_backtrace = { path = "../libunwind_backtrace" }
@ -27,13 +27,9 @@ board_misoc = { path = "../libboard_misoc", features = ["uart_console", "smoltcp
logger_artiq = { path = "../liblogger_artiq" }
board_artiq = { path = "../libboard_artiq" }
proto_artiq = { path = "../libproto_artiq", features = ["log", "alloc"] }
smoltcp = { version = "0.6.0", default-features = false, features = ["alloc", "ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp"] }
riscv = { version = "0.6.0", features = ["inline-asm"] }
[dependencies.smoltcp]
version = "0.8.0"
default-features = false
features = ["alloc", "medium-ethernet", "proto-ipv4", "proto-ipv6", "socket-tcp", "socket-dhcpv4"]
[dependencies.fringe]
git = "https://git.m-labs.hk/M-Labs/libfringe.git"
rev = "3ecbe5"

View File

@ -10,8 +10,6 @@ LDFLAGS += \
RUSTFLAGS += -Cpanic=unwind
export XBUILD_SYSROOT_PATH=$(BUILDINC_DIRECTORY)/../sysroot
all:: runtime.bin runtime.fbi
.PHONY: $(RUSTOUT)/libruntime.a
@ -22,7 +20,7 @@ $(RUSTOUT)/libruntime.a:
runtime.elf: $(RUSTOUT)/libruntime.a ksupport_data.o
$(link) -T $(RUNTIME_DIRECTORY)/runtime.ld \
-lunwind-bare -m elf32lriscv
-lunwind-vexriscv-bare -m elf32lriscv
ksupport_data.o: ../ksupport/ksupport.elf
$(LD) -r -m elf32lriscv -b binary -o $@ $<

View File

@ -79,7 +79,5 @@ pub fn thread(io: Io) {
Ok(()) => (),
Err(err) => error!("analyzer aborted: {}", err)
}
stream.close().expect("analyzer: close socket")
}
}

View File

@ -1,59 +0,0 @@
use board_misoc::clock;
use sched;
use sched::Dhcpv4Socket;
use smoltcp::socket::Dhcpv4Event;
use smoltcp::wire::{Ipv4Address, Ipv4Cidr};
pub fn dhcp_thread(io: sched::Io) {
let mut socket = Dhcpv4Socket::new(&io);
let mut last_ip: Option<Ipv4Cidr> = None;
let mut done_reset = false;
let start_time = clock::get_ms();
loop {
// A significant amount of the time our first discover isn't received
// by the server. This is likely to be because the ethernet device isn't quite
// ready at the point that it is sent. The following makes recovery from
// that faster.
if !done_reset && last_ip.is_none() && start_time + 1000 < clock::get_ms() {
info!("Didn't get initial IP in first second. Assuming packet loss, trying a reset.");
socket.reset();
done_reset = true;
}
if let Some(event) = socket.poll() {
match event {
Dhcpv4Event::Configured(config) => {
// Only compare the ip address in the config with previous config because we
// ignore the rest of the config i.e. we don't do any DNS or require a default
// gateway.
let changed = if let Some(last_ip) = last_ip {
if config.address != last_ip {
info!("IP address changed {} -> {}", last_ip, config.address);
true
} else {
false
}
} else {
info!("Acquired IP address: None -> {}", config.address);
true
};
if changed {
last_ip = Some(config.address);
io.set_ipv4_address(&config.address);
}
}
Dhcpv4Event::Deconfigured => {
if let Some(ip) = last_ip {
info!("Lost IP address {} -> None", ip);
io.set_ipv4_address(&Ipv4Cidr::new(Ipv4Address::UNSPECIFIED, 0));
last_ip = None;
}
// We always get one of these events at the start, ignore that one
}
}
}
// We want to poll the DHCP socket after every poll of the interface,
// so we only do a minimal yield here.
io.relinquish().unwrap();
}
}

View File

@ -1,40 +0,0 @@
use smoltcp::iface::{Interface, InterfaceBuilder};
use smoltcp::phy::Device;
use smoltcp::wire::{IpAddress, IpCidr, Ipv4Address, Ipv4Cidr};
use board_misoc::net_settings::{Ipv4AddrConfig, NetAddresses};
const IPV4_INDEX: usize = 0;
const IPV6_LL_INDEX: usize = 1;
const IPV6_INDEX: usize = 2;
const IP_ADDRESS_STORAGE_SIZE: usize = 3;
pub trait InterfaceBuilderEx {
fn init_ip_addrs(self, net_addresses: &NetAddresses) -> Self;
}
impl<'a, DeviceT: for<'d> Device<'d>> InterfaceBuilderEx for InterfaceBuilder<'a, DeviceT> {
fn init_ip_addrs(self, net_addresses: &NetAddresses) -> Self {
let mut storage = [
IpCidr::new(IpAddress::Ipv4(Ipv4Address::UNSPECIFIED), 0); IP_ADDRESS_STORAGE_SIZE
];
if let Ipv4AddrConfig::Static(ipv4) = net_addresses.ipv4_addr {
storage[IPV4_INDEX] = IpCidr::new(IpAddress::Ipv4(ipv4), 0);
}
storage[IPV6_LL_INDEX] = IpCidr::new(net_addresses.ipv6_ll_addr, 0);
if let Some(ipv6) = net_addresses.ipv6_addr {
storage[IPV6_INDEX] = IpCidr::new(ipv6, 0);
}
self.ip_addrs(storage)
}
}
pub trait InterfaceEx {
fn update_ipv4_addr(&mut self, addr: &Ipv4Cidr);
}
impl<'a, DeviceT: for<'d> Device<'d>> InterfaceEx for Interface<'a, DeviceT> {
fn update_ipv4_addr(&mut self, addr: &Ipv4Cidr) {
self.update_ip_addrs(|storage| storage[IPV4_INDEX] = IpCidr::Ipv4(*addr))
}
}

View File

@ -27,12 +27,11 @@ extern crate riscv;
use core::cell::RefCell;
use core::convert::TryFrom;
use smoltcp::wire::HardwareAddress;
use smoltcp::wire::IpCidr;
use board_misoc::{csr, ident, clock, spiflash, config, net_settings, pmp, boot};
#[cfg(has_ethmac)]
use board_misoc::ethmac;
use board_misoc::net_settings::{Ipv4AddrConfig};
#[cfg(has_drtio)]
use board_artiq::drtioaux;
use board_artiq::drtio_routing;
@ -42,7 +41,6 @@ use proto_artiq::{mgmt_proto, moninj_proto, rpc_proto, session_proto, kernel_pro
use proto_artiq::analyzer_proto;
use riscv::register::{mcause, mepc, mtval};
use ip_addr_storage::InterfaceBuilderEx;
mod rtio_clocking;
mod rtio_mgt;
@ -60,8 +58,6 @@ mod session;
mod moninj;
#[cfg(has_rtio_analyzer)]
mod analyzer;
mod dhcp;
mod ip_addr_storage;
#[cfg(has_grabber)]
fn grabber_thread(io: sched::Io) {
@ -104,8 +100,8 @@ fn startup() {
let (mut io_expander0, mut io_expander1);
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
{
io_expander0 = board_misoc::io_expander::IoExpander::new(0);
io_expander1 = board_misoc::io_expander::IoExpander::new(1);
io_expander0 = board_misoc::io_expander::IoExpander::new(0).unwrap();
io_expander1 = board_misoc::io_expander::IoExpander::new(1).unwrap();
io_expander0.init().expect("I2C I/O expander #0 initialization failed");
io_expander1.init().expect("I2C I/O expander #1 initialization failed");
@ -127,35 +123,54 @@ fn startup() {
net_device.reset_phy_if_any();
let net_device = {
use smoltcp::phy::Tracer;
use smoltcp::time::Instant;
use smoltcp::wire::PrettyPrinter;
use smoltcp::wire::EthernetFrame;
// We can't create the function pointer as a separate variable here because the type of
// the packet argument Packet isn't accessible and Rust's type inference isn't sufficient
// to propagate into a local var.
match config::read_str("net_trace", |r| r.map(|s| s == "1")) {
Ok(true) => Tracer::new(net_device, |timestamp, packet| {
fn net_trace_writer(timestamp: Instant, printer: PrettyPrinter<EthernetFrame<&[u8]>>) {
print!("\x1b[37m[{:6}.{:03}s]\n{}\x1b[0m\n",
timestamp.secs(), timestamp.millis(), packet)
}),
_ => Tracer::new(net_device, |_, _| {}),
timestamp.secs(), timestamp.millis(), printer)
}
fn net_trace_silent(_timestamp: Instant, _printer: PrettyPrinter<EthernetFrame<&[u8]>>) {}
let net_trace_fn: fn(Instant, PrettyPrinter<EthernetFrame<&[u8]>>);
match config::read_str("net_trace", |r| r.map(|s| s == "1")) {
Ok(true) => net_trace_fn = net_trace_writer,
_ => net_trace_fn = net_trace_silent
}
smoltcp::phy::EthernetTracer::new(net_device, net_trace_fn)
};
let neighbor_cache =
smoltcp::iface::NeighborCache::new(alloc::collections::btree_map::BTreeMap::new());
let net_addresses = net_settings::get_adresses();
info!("network addresses: {}", net_addresses);
let use_dhcp = if matches!(net_addresses.ipv4_addr, Ipv4AddrConfig::UseDhcp) {
info!("Will try to acquire an IPv4 address with DHCP");
true
} else {
false
};
let interface = smoltcp::iface::InterfaceBuilder::new(net_device, vec![])
.hardware_addr(HardwareAddress::Ethernet(net_addresses.hardware_addr))
.init_ip_addrs(&net_addresses)
let mut interface = match net_addresses.ipv6_addr {
Some(addr) => {
let ip_addrs = [
IpCidr::new(net_addresses.ipv4_addr, 0),
IpCidr::new(net_addresses.ipv6_ll_addr, 0),
IpCidr::new(addr, 0)
];
smoltcp::iface::EthernetInterfaceBuilder::new(net_device)
.ethernet_addr(net_addresses.hardware_addr)
.ip_addrs(ip_addrs)
.neighbor_cache(neighbor_cache)
.finalize();
.finalize()
}
None => {
let ip_addrs = [
IpCidr::new(net_addresses.ipv4_addr, 0),
IpCidr::new(net_addresses.ipv6_ll_addr, 0)
];
smoltcp::iface::EthernetInterfaceBuilder::new(net_device)
.ethernet_addr(net_addresses.hardware_addr)
.ip_addrs(ip_addrs)
.neighbor_cache(neighbor_cache)
.finalize()
}
};
#[cfg(has_drtio)]
let drtio_routing_table = urc::Urc::new(RefCell::new(
@ -169,13 +184,9 @@ fn startup() {
drtio_routing::interconnect_disable_all();
let aux_mutex = sched::Mutex::new();
let mut scheduler = sched::Scheduler::new(interface);
let mut scheduler = sched::Scheduler::new();
let io = scheduler.io();
if use_dhcp {
io.spawn(4096, dhcp::dhcp_thread);
}
rtio_mgt::startup(&io, &aux_mutex, &drtio_routing_table, &up_destinations);
io.spawn(4096, mgmt::thread);
@ -200,7 +211,19 @@ fn startup() {
let mut net_stats = ethmac::EthernetStatistics::new();
loop {
scheduler.run();
scheduler.run_network();
{
let sockets = &mut *scheduler.sockets().borrow_mut();
loop {
let timestamp = smoltcp::time::Instant::from_millis(clock::get_ms() as i64);
match interface.poll(sockets, timestamp) {
Ok(true) => (),
Ok(false) => break,
Err(smoltcp::Error::Unrecognized) => (),
Err(err) => debug!("network error: {}", err)
}
}
}
if let Some(_net_stats_diff) = net_stats.update() {
debug!("ethernet mac:{}", ethmac::EthernetStatistics::new());

View File

@ -131,7 +131,6 @@ pub fn thread(io: Io) {
Err(Error::Io(IoError::UnexpectedEnd)) => (),
Err(err) => error!("aborted: {}", err)
}
stream.close().expect("mgmt: close socket");
});
}
}

View File

@ -214,7 +214,6 @@ pub fn thread(io: Io, aux_mutex: &Mutex, routing_table: &Urc<RefCell<drtio_routi
Ok(()) => {},
Err(err) => error!("moninj aborted: {}", err)
}
stream.close().expect("moninj: close socket");
});
}
}

View File

@ -103,7 +103,6 @@ pub mod crg {
#[cfg(not(has_rtio_clock_switch))]
pub fn init() -> bool {
info!("Using internal RTIO clock");
unsafe {
csr::rtio_crg::pll_reset_write(0);
}
@ -117,11 +116,23 @@ pub mod crg {
pub fn check() -> bool { true }
}
// Si5324 input to select for locking to an external clock (as opposed to
// a recovered link clock in DRTIO satellites, which is handled elsewhere).
#[cfg(all(si5324_as_synthesizer, soc_platform = "kasli", hw_rev = "v2.0"))]
const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin1;
#[cfg(all(si5324_as_synthesizer, soc_platform = "kasli", not(hw_rev = "v2.0")))]
const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin2;
#[cfg(all(si5324_as_synthesizer, soc_platform = "metlino"))]
const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin2;
#[cfg(all(si5324_as_synthesizer, soc_platform = "kc705"))]
const SI5324_EXT_INPUT: si5324::Input = si5324::Input::Ckin2;
#[cfg(si5324_as_synthesizer)]
fn setup_si5324_as_synthesizer(cfg: RtioClock) {
let si5324_settings = match cfg {
let (si5324_settings, si5324_ref_input) = match cfg {
RtioClock::Ext0_Synth0_10to125 => { // 125 MHz output from 10 MHz CLKINx reference, 504 Hz BW
info!("using 10MHz reference to make 125MHz RTIO clock with PLL");
(
si5324::FrequencySettings {
n1_hs : 10,
nc1_ls : 4,
@ -130,11 +141,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 6,
n32 : 6,
bwsel : 4,
crystal_ref: false
}
crystal_as_ckin2: false
},
SI5324_EXT_INPUT
)
},
RtioClock::Ext0_Synth0_100to125 => { // 125MHz output, from 100MHz CLKINx reference, 586 Hz loop bandwidth
info!("using 10MHz reference to make 125MHz RTIO clock with PLL");
info!("using 100MHz reference to make 125MHz RTIO clock with PLL");
(
si5324::FrequencySettings {
n1_hs : 10,
nc1_ls : 4,
@ -143,11 +157,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 52,
n32 : 52,
bwsel : 4,
crystal_ref: false
}
crystal_as_ckin2: false
},
SI5324_EXT_INPUT
)
},
RtioClock::Ext0_Synth0_125to125 => { // 125MHz output, from 125MHz CLKINx reference, 606 Hz loop bandwidth
info!("using 10MHz reference to make 125MHz RTIO clock with PLL");
info!("using 125MHz reference to make 125MHz RTIO clock with PLL");
(
si5324::FrequencySettings {
n1_hs : 5,
nc1_ls : 8,
@ -156,11 +173,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 63,
n32 : 63,
bwsel : 4,
crystal_ref: false
}
crystal_as_ckin2: false
},
SI5324_EXT_INPUT
)
},
RtioClock::Int_150 => { // 150MHz output, from crystal
info!("using internal 150MHz RTIO clock");
(
si5324::FrequencySettings {
n1_hs : 9,
nc1_ls : 4,
@ -169,11 +189,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 7139,
n32 : 7139,
bwsel : 3,
crystal_ref: true
}
crystal_as_ckin2: true
},
RtioClock::Int_100 => { // 100MHz output, from crystal. Also used as reference for Sayma HMC830.
si5324::Input::Ckin2
)
},
RtioClock::Int_100 => { // 100MHz output, from crystal
info!("using internal 100MHz RTIO clock");
(
si5324::FrequencySettings {
n1_hs : 9,
nc1_ls : 6,
@ -182,11 +205,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 7139,
n32 : 7139,
bwsel : 3,
crystal_ref: true
}
crystal_as_ckin2: true
},
si5324::Input::Ckin2
)
},
RtioClock::Int_125 => { // 125MHz output, from crystal, 7 Hz
info!("using internal 125MHz RTIO clock");
(
si5324::FrequencySettings {
n1_hs : 10,
nc1_ls : 4,
@ -195,11 +221,14 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 4565,
n32 : 4565,
bwsel : 4,
crystal_ref: true
}
}
crystal_as_ckin2: true
},
si5324::Input::Ckin2
)
},
_ => { // 125MHz output like above, default (if chosen option is not supported)
warn!("rtio_clock setting '{:?}' is not supported. Falling back to default internal 125MHz RTIO clock.", cfg);
(
si5324::FrequencySettings {
n1_hs : 10,
nc1_ls : 4,
@ -208,20 +237,12 @@ fn setup_si5324_as_synthesizer(cfg: RtioClock) {
n31 : 4565,
n32 : 4565,
bwsel : 4,
crystal_ref: true
}
crystal_as_ckin2: true
},
si5324::Input::Ckin2
)
}
};
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0", not(si5324_ext_ref)))]
let si5324_ref_input = si5324::Input::Ckin2;
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0", si5324_ext_ref))]
let si5324_ref_input = si5324::Input::Ckin1;
#[cfg(all(soc_platform = "kasli", not(hw_rev = "v2.0")))]
let si5324_ref_input = si5324::Input::Ckin2;
#[cfg(soc_platform = "metlino")]
let si5324_ref_input = si5324::Input::Ckin2;
#[cfg(soc_platform = "kc705")]
let si5324_ref_input = si5324::Input::Ckin2;
si5324::setup(&si5324_settings, si5324_ref_input).expect("cannot initialize Si5324");
}
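For orientation, a hedged sketch of the divider relation behind these FrequencySettings, assuming the standard Si5324 synthesis chain (the n2_hs/n2_ls values lie outside the hunks shown, so no worked numbers are given here):

f_{out} = f_{in} \cdot \frac{n2\_hs \cdot n2\_ls}{n3_x \cdot n1\_hs \cdot nc1\_ls}

where n3_x is n31 or n32 depending on the selected clock input, and f_in is the CKIN reference (or the crystal for the internal-clock settings).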
@ -229,18 +250,10 @@ pub fn init() {
let clock_cfg = get_rtio_clock_cfg();
#[cfg(si5324_as_synthesizer)]
{
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
let si5324_ext_input = si5324::Input::Ckin1;
#[cfg(all(soc_platform = "kasli", not(hw_rev = "v2.0")))]
let si5324_ext_input = si5324::Input::Ckin2;
#[cfg(soc_platform = "metlino")]
let si5324_ext_input = si5324::Input::Ckin2;
#[cfg(soc_platform = "kc705")]
let si5324_ext_input = si5324::Input::Ckin2;
match clock_cfg {
RtioClock::Ext0_Bypass => {
info!("using external RTIO clock with PLL bypass");
si5324::bypass(si5324_ext_input).expect("cannot bypass Si5324")
si5324::bypass(SI5324_EXT_INPUT).expect("cannot bypass Si5324")
},
_ => setup_si5324_as_synthesizer(clock_cfg),
}

View File

@ -2,21 +2,18 @@
use core::mem;
use core::result;
use core::cell::{Cell, RefCell, RefMut};
use core::cell::{Cell, RefCell};
use alloc::vec::Vec;
use fringe::OwnedStack;
use fringe::generator::{Generator, Yielder, State as GeneratorState};
use smoltcp::time::Duration;
use smoltcp::Error as NetworkError;
use smoltcp::wire::{IpEndpoint, Ipv4Cidr};
use smoltcp::iface::{Interface, SocketHandle};
use smoltcp::wire::IpEndpoint;
use smoltcp::socket::{SocketHandle, SocketRef};
use io::{Read, Write};
use board_misoc::clock;
use urc::Urc;
use board_misoc::ethmac::EthernetDevice;
use smoltcp::phy::Tracer;
use ip_addr_storage::InterfaceEx;
#[derive(Fail, Debug)]
pub enum Error {
@ -34,6 +31,8 @@ impl From<NetworkError> for Error {
}
}
type SocketSet = ::smoltcp::socket::SocketSet<'static, 'static, 'static>;
#[derive(Debug)]
struct WaitRequest {
event: Option<*mut dyn FnMut() -> bool>,
@ -60,7 +59,7 @@ impl Thread {
unsafe fn new<F>(io: &Io, stack_size: usize, f: F) -> ThreadHandle
where F: 'static + FnOnce(Io) + Send {
let spawned = io.spawned.clone();
let network = io.network.clone();
let sockets = io.sockets.clone();
// Add a 4k stack guard to the stack of any new threads
let stack = OwnedStack::new(stack_size + 4096);
@ -68,8 +67,8 @@ impl Thread {
generator: Generator::unsafe_new(stack, |yielder, _| {
f(Io {
yielder: Some(yielder),
spawned,
network
spawned: spawned,
sockets: sockets
})
}),
waiting_for: WaitRequest {
@ -116,21 +115,19 @@ impl ThreadHandle {
}
}
type Network = Interface<'static, Tracer<EthernetDevice>>;
pub struct Scheduler {
threads: Vec<ThreadHandle>,
spawned: Urc<RefCell<Vec<ThreadHandle>>>,
network: Urc<RefCell<Network>>,
sockets: Urc<RefCell<SocketSet>>,
run_idx: usize,
}
impl Scheduler {
pub fn new(network: Network) -> Scheduler {
pub fn new() -> Scheduler {
Scheduler {
threads: Vec::new(),
spawned: Urc::new(RefCell::new(Vec::new())),
network: Urc::new(RefCell::new(network)),
sockets: Urc::new(RefCell::new(SocketSet::new(Vec::new()))),
run_idx: 0,
}
}
@ -139,11 +136,13 @@ impl Scheduler {
Io {
yielder: None,
spawned: self.spawned.clone(),
network: self.network.clone()
sockets: self.sockets.clone()
}
}
pub fn run(&mut self) {
self.sockets.borrow_mut().prune();
self.threads.append(&mut *self.spawned.borrow_mut());
if self.threads.len() == 0 { return }
@ -189,17 +188,8 @@ impl Scheduler {
}
}
pub fn run_network(&mut self) {
let mut interface = self.network.borrow_mut();
loop {
let timestamp = smoltcp::time::Instant::from_millis(clock::get_ms() as i64);
match interface.poll(timestamp) {
Ok(true) => (),
Ok(false) => break,
Err(smoltcp::Error::Unrecognized) => (),
Err(err) => debug!("network error: {}", err)
}
}
pub fn sockets(&self) -> &RefCell<SocketSet> {
&*self.sockets
}
}
@ -207,7 +197,7 @@ impl Scheduler {
pub struct Io<'a> {
yielder: Option<&'a Yielder<WaitResult, WaitRequest>>,
spawned: Urc<RefCell<Vec<ThreadHandle>>>,
network: Urc<RefCell<Network>>,
sockets: Urc<RefCell<SocketSet>>,
}
impl<'a> Io<'a> {
@ -274,10 +264,6 @@ impl<'a> Io<'a> {
pub fn join(&self, handle: ThreadHandle) -> Result<(), Error> {
self.until(move || handle.terminated())
}
pub fn set_ipv4_address(&self, addr: &Ipv4Cidr) {
self.network.borrow_mut().update_ipv4_addr(addr)
}
}
#[derive(Clone)]
@ -305,10 +291,10 @@ impl<'a> Drop for MutexGuard<'a> {
macro_rules! until {
($socket:expr, $ty:ty, |$var:ident| $cond:expr) => ({
let (network, handle) = ($socket.io.network.clone(), $socket.handle);
let (sockets, handle) = ($socket.io.sockets.clone(), $socket.handle);
$socket.io.until(move || {
let mut network = network.borrow_mut();
let $var = network.get_socket::<$ty>(handle);
let mut sockets = sockets.borrow_mut();
let $var = sockets.get::<$ty>(handle);
$cond
})
})
@ -330,9 +316,9 @@ impl<'a> TcpListener<'a> {
fn new_lower(io: &'a Io<'a>, buffer_size: usize) -> SocketHandle {
let rx_buffer = vec![0; buffer_size];
let tx_buffer = vec![0; buffer_size];
io.network
io.sockets
.borrow_mut()
.add_socket(TcpSocketLower::new(
.add(TcpSocketLower::new(
TcpSocketBuffer::new(rx_buffer),
TcpSocketBuffer::new(tx_buffer)))
}
@ -347,9 +333,9 @@ impl<'a> TcpListener<'a> {
}
fn with_lower<F, R>(&self, f: F) -> R
where F: FnOnce(&mut TcpSocketLower) -> R {
let mut network = self.io.network.borrow_mut();
let result = f(network.get_socket(self.handle.get()));
where F: FnOnce(SocketRef<TcpSocketLower>) -> R {
let mut sockets = self.io.sockets.borrow_mut();
let result = f(sockets.get(self.handle.get()));
result
}
@ -367,7 +353,7 @@ impl<'a> TcpListener<'a> {
pub fn listen<T: Into<IpEndpoint>>(&self, endpoint: T) -> Result<(), Error> {
let endpoint = endpoint.into();
self.with_lower(|s| s.listen(endpoint))
self.with_lower(|mut s| s.listen(endpoint))
.map(|()| {
self.endpoint.set(endpoint);
()
@ -375,18 +361,14 @@ impl<'a> TcpListener<'a> {
.map_err(|err| err.into())
}
/// Accept a TCP connection
///
/// When the returned TcpStream is dropped it is immediately forgotten about. In order to
/// ensure that pending data is sent and the far end is notified, `close` must be called.
pub fn accept(&self) -> Result<TcpStream<'a>, Error> {
// We're waiting until at least one half of the connection becomes open.
// This handles the case where a remote socket immediately sends a FIN--
// that still counts as accepting even though nothing may be sent.
let (network, handle) = (self.io.network.clone(), self.handle.get());
let (sockets, handle) = (self.io.sockets.clone(), self.handle.get());
self.io.until(move || {
let mut network = network.borrow_mut();
let socket = network.get_socket::<TcpSocketLower>(handle);
let mut sockets = sockets.borrow_mut();
let socket = sockets.get::<TcpSocketLower>(handle);
socket.may_send() || socket.may_recv()
})?;
@ -403,14 +385,14 @@ impl<'a> TcpListener<'a> {
}
pub fn close(&self) {
self.with_lower(|s| s.close())
self.with_lower(|mut s| s.close())
}
}
impl<'a> Drop for TcpListener<'a> {
fn drop(&mut self) {
self.with_lower(|s| s.close());
self.io.network.borrow_mut().remove_socket(self.handle.get());
self.with_lower(|mut s| s.close());
self.io.sockets.borrow_mut().release(self.handle.get())
}
}
@ -434,9 +416,9 @@ impl<'a> TcpStream<'a> {
}
fn with_lower<F, R>(&self, f: F) -> R
where F: FnOnce(&mut TcpSocketLower) -> R {
let mut network = self.io.network.borrow_mut();
let result = f(network.get_socket(self.handle));
where F: FnOnce(SocketRef<TcpSocketLower>) -> R {
let mut sockets = self.io.sockets.borrow_mut();
let result = f(sockets.get(self.handle));
result
}
@ -473,7 +455,7 @@ impl<'a> TcpStream<'a> {
}
pub fn set_timeout(&self, value: Option<u64>) {
self.with_lower(|s| s.set_timeout(value.map(Duration::from_millis)))
self.with_lower(|mut s| s.set_timeout(value.map(Duration::from_millis)))
}
pub fn keep_alive(&self) -> Option<u64> {
@ -481,11 +463,11 @@ impl<'a> TcpStream<'a> {
}
pub fn set_keep_alive(&self, value: Option<u64>) {
self.with_lower(|s| s.set_keep_alive(value.map(Duration::from_millis)))
self.with_lower(|mut s| s.set_keep_alive(value.map(Duration::from_millis)))
}
pub fn close(&self) -> Result<(), Error> {
self.with_lower(|s| s.close());
self.with_lower(|mut s| s.close());
until!(self, TcpSocketLower, |s| !s.is_open())?;
// right now the socket may be in TIME-WAIT state. if we don't give it a chance to send
// a packet, and the user code executes a loop { s.listen(); s.read(); s.close(); }
@ -499,33 +481,23 @@ impl<'a> Read for TcpStream<'a> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::ReadError> {
// Only borrow the underlying socket for the span of the next statement.
let result = self.with_lower(|s| s.recv_slice(buf));
let result = self.with_lower(|mut s| s.recv_slice(buf));
match result {
// Slow path: we need to block until buffer is non-empty.
Ok(0) => {
until!(self, TcpSocketLower, |s| s.can_recv() || !s.may_recv())?;
match self.with_lower(|s| s.recv_slice(buf)) {
match self.with_lower(|mut s| s.recv_slice(buf)) {
Ok(length) => Ok(length),
Err(NetworkError::Finished) |
Err(NetworkError::Illegal) => Ok(0),
Err(e) => {
panic!("Unexpected error from smoltcp: {}", e);
}
_ => unreachable!()
}
}
// Fast path: we had data in buffer.
Ok(length) => Ok(length),
// We've received a fin.
Err(NetworkError::Finished) |
// Error path: the receive half of the socket is not open.
Err(NetworkError::Illegal) => Ok(0),
// No other error may be returned.
Err(e) => {
// This could return Err(Error::Network(e)) rather than panic,
// but I expect that'll just cause a panic later perhaps with
// less interesting context.
panic!("Unexpected error from smoltcp: {}", e);
}
Err(_) => unreachable!()
}
}
}
@ -536,12 +508,12 @@ impl<'a> Write for TcpStream<'a> {
fn write(&mut self, buf: &[u8]) -> Result<usize, Self::WriteError> {
// Only borrow the underlying socket for the span of the next statement.
let result = self.with_lower(|s| s.send_slice(buf));
let result = self.with_lower(|mut s| s.send_slice(buf));
match result {
// Slow path: we need to block until buffer is non-full.
Ok(0) => {
until!(self, TcpSocketLower, |s| s.can_send() || !s.may_send())?;
match self.with_lower(|s| s.send_slice(buf)) {
match self.with_lower(|mut s| s.send_slice(buf)) {
Ok(length) => Ok(length),
Err(NetworkError::Illegal) => Ok(0),
_ => unreachable!()
@ -568,62 +540,7 @@ impl<'a> Write for TcpStream<'a> {
impl<'a> Drop for TcpStream<'a> {
fn drop(&mut self) {
// There's no point calling the lower close here unless we also defer the removal of the
// socket from smoltcp until it's had a chance to process the event
let (unsent_bytes, is_open) = self.with_lower(
|s| (s.send_queue(), s.is_open())
);
if is_open {
warn!(
"Dropping open TcpStream in state {}, with {} unsent bytes",
self.with_lower(|s| s.state()), unsent_bytes
)
} else if unsent_bytes != 0 {
debug!("Dropping socket with {} bytes unsent", unsent_bytes)
}
self.io.network.borrow_mut().remove_socket(self.handle);
}
}
pub struct Dhcpv4Socket<'a> {
io: &'a Io<'a>,
handle: SocketHandle,
}
impl<'a> Dhcpv4Socket<'a> {
fn new_lower(io: &'a Io<'a>) -> SocketHandle {
let socket = smoltcp::socket::Dhcpv4Socket::new();
io.network.borrow_mut().add_socket(socket)
}
pub fn new(io: &'a Io<'a>) -> Self {
Self {
io,
handle: Self::new_lower(io)
}
}
fn lower(&mut self) -> RefMut<smoltcp::socket::Dhcpv4Socket> {
RefMut::map(
self.io.network.borrow_mut(),
|network| network.get_socket::<smoltcp::socket::Dhcpv4Socket>(self.handle),
)
}
pub fn poll(&mut self) -> Option<smoltcp::socket::Dhcpv4Event> {
self.lower().poll()
}
pub fn reset(&mut self) {
self.lower().reset()
}
}
impl<'a> Drop for Dhcpv4Socket<'a> {
fn drop(&mut self) {
let mut network = self.io.network.borrow_mut();
network.remove_socket(self.handle);
self.with_lower(|mut s| s.close());
self.io.sockets.borrow_mut().release(self.handle)
}
}

View File

@ -650,7 +650,6 @@ pub fn thread(io: Io, aux_mutex: &Mutex,
error!("session aborted: {}", err);
}
}
stream.close().expect("session: close socket");
});
}

View File

@ -5,8 +5,6 @@ LDFLAGS += -L../libbase
RUSTFLAGS += -Cpanic=abort
export XBUILD_SYSROOT_PATH=$(BUILDINC_DIRECTORY)/../sysroot
all:: satman.bin satman.fbi
.PHONY: $(RUSTOUT)/libsatman.a

View File

@ -12,8 +12,6 @@ use core::convert::TryFrom;
use board_misoc::{csr, ident, clock, uart_logger, i2c, pmp};
#[cfg(has_si5324)]
use board_artiq::si5324;
#[cfg(has_wrpll)]
use board_artiq::wrpll;
use board_artiq::{spi, drtioaux};
use board_artiq::drtio_routing;
#[cfg(has_hmc830_7043)]
@ -436,7 +434,7 @@ const SI5324_SETTINGS: si5324::FrequencySettings
n31 : 75,
n32 : 75,
bwsel : 4,
crystal_ref: true
crystal_as_ckin2: true
};
#[cfg(all(has_si5324, rtio_frequency = "125.0"))]
@ -449,7 +447,7 @@ const SI5324_SETTINGS: si5324::FrequencySettings
n31 : 63,
n32 : 63,
bwsel : 4,
crystal_ref: true
crystal_as_ckin2: true
};
#[cfg(all(has_si5324, rtio_frequency = "100.0"))]
@ -462,7 +460,7 @@ const SI5324_SETTINGS: si5324::FrequencySettings
n31 : 50,
n32 : 50,
bwsel : 4,
crystal_ref: true
crystal_as_ckin2: true
};
#[no_mangle]
@ -488,21 +486,10 @@ pub extern fn main() -> i32 {
let (mut io_expander0, mut io_expander1);
#[cfg(all(soc_platform = "kasli", hw_rev = "v2.0"))]
{
io_expander0 = board_misoc::io_expander::IoExpander::new(0);
io_expander1 = board_misoc::io_expander::IoExpander::new(1);
io_expander0 = board_misoc::io_expander::IoExpander::new(0).unwrap();
io_expander1 = board_misoc::io_expander::IoExpander::new(1).unwrap();
io_expander0.init().expect("I2C I/O expander #0 initialization failed");
io_expander1.init().expect("I2C I/O expander #1 initialization failed");
#[cfg(has_wrpll)]
{
io_expander0.set_oe(1, 1 << 7).unwrap();
io_expander0.set(1, 7, true);
io_expander0.service().unwrap();
io_expander1.set_oe(0, 1 << 7).unwrap();
io_expander1.set_oe(1, 1 << 7).unwrap();
io_expander1.set(0, 7, true);
io_expander1.set(1, 7, true);
io_expander1.service().unwrap();
}
// Actively drive TX_DISABLE to false on SFP0..3
io_expander0.set_oe(0, 1 << 1).unwrap();
@ -519,8 +506,6 @@ pub extern fn main() -> i32 {
#[cfg(has_si5324)]
si5324::setup(&SI5324_SETTINGS, si5324::Input::Ckin1).expect("cannot initialize Si5324");
#[cfg(has_wrpll)]
wrpll::init();
unsafe {
csr::drtio_transceiver::stable_clkin_write(1);
@ -530,8 +515,6 @@ pub extern fn main() -> i32 {
unsafe {
csr::drtio_transceiver::txenable_write(0xffffffffu32 as _);
}
#[cfg(has_wrpll)]
wrpll::diagnostics();
init_rtio_crg();
#[cfg(has_hmc830_7043)]
@ -582,8 +565,6 @@ pub extern fn main() -> i32 {
si5324::siphaser::select_recovered_clock(true).expect("failed to switch clocks");
si5324::siphaser::calibrate_skew().expect("failed to calibrate skew");
}
#[cfg(has_wrpll)]
wrpll::select_recovered_clock(true);
drtioaux::reset(0);
drtiosat_reset(false);
@ -632,7 +613,7 @@ pub extern fn main() -> i32 {
if is_up && !was_up {
/*
* One side of the JESD204 elastic buffer is clocked by the jitter filter
* (Si5324 or WRPLL), the other by the RTM.
* (Si5324), the other by the RTM.
* The elastic buffer can operate only when those two clocks are derived from
* the same oscillator.
* This is the case when either of those conditions is true:
@ -664,8 +645,6 @@ pub extern fn main() -> i32 {
info!("uplink is down, switching to local oscillator clock");
#[cfg(has_si5324)]
si5324::siphaser::select_recovered_clock(false).expect("failed to switch clocks");
#[cfg(has_wrpll)]
wrpll::select_recovered_clock(false);
}
}

View File

@ -248,9 +248,8 @@ def main():
server_description = server_name + " ({})".format(args.server)
else:
server_description = args.server
logging.info("ARTIQ dashboard version: %s",
artiq_version)
logging.info("ARTIQ dashboard connected to moninj_proxy (%s)", server_description)
logging.info("ARTIQ dashboard %s connected to master %s",
artiq_version, server_description)
# run
main_window.show()
loop.run_until_complete(main_window.exit_request.wait())

View File

@ -173,6 +173,11 @@ class PeripheralManager:
urukul_name = self.get_name("urukul")
synchronization = peripheral["synchronization"]
channel = count(0)
pll_en = peripheral["pll_en"]
dds = peripheral["dds"]
if dds == "ad9912" and not pll_en:
raise ValueError("PLL bypass is not supported on AD9912")
self.gen("""
device_db["eeprom_{name}"] = {{
"type": "local",
@ -231,14 +236,15 @@ class PeripheralManager:
"sync_device": {sync_device},
"io_update_device": "ttl_{name}_io_update",
"refclk": {refclk},
"clk_sel": {clk_sel}
"clk_sel": {clk_sel},
"clk_div": {clk_div}
}}
}}""",
name=urukul_name,
sync_device="\"ttl_{name}_sync\"".format(name=urukul_name) if synchronization else "None",
refclk=peripheral.get("refclk", self.master_description["rtio_frequency"]),
clk_sel=peripheral["clk_sel"])
dds = peripheral["dds"]
clk_sel=peripheral["clk_sel"],
clk_div= 0 if pll_en else 1)
pll_vco = peripheral.get("pll_vco")
for i in range(4):
if dds == "ad9910":
@ -248,6 +254,7 @@ class PeripheralManager:
"module": "artiq.coredevice.ad9910",
"class": "AD9910",
"arguments": {{
"pll_en": {pll_en},
"pll_n": {pll_n},
"chip_select": {chip_select},
"cpld_device": "{name}_cpld"{sw}{pll_vco}{sync_delay_seed}{io_update_delay}
@ -258,7 +265,7 @@ class PeripheralManager:
uchn=i,
sw=",\n \"sw_device\": \"ttl_{name}_sw{uchn}\"".format(name=urukul_name, uchn=i) if len(peripheral["ports"]) > 1 else "",
pll_vco=",\n \"pll_vco\": {}".format(pll_vco) if pll_vco is not None else "",
pll_n=peripheral.get("pll_n", 32),
pll_n=peripheral.get("pll_n", 32), pll_en=pll_en,
sync_delay_seed=",\n \"sync_delay_seed\": \"eeprom_{}:{}\"".format(urukul_name, 64 + 4*i) if synchronization else "",
io_update_delay=",\n \"io_update_delay\": \"eeprom_{}:{}\"".format(urukul_name, 64 + 4*i) if synchronization else "")
elif dds == "ad9912":
@ -552,10 +559,11 @@ class PeripheralManager:
"type": "local",
"module": "artiq.coredevice.fastino",
"class": "Fastino",
"arguments": {{"channel": 0x{channel:06x}}}
"arguments": {{"channel": 0x{channel:06x}, "log2_width": {log2_width}}}
}}""",
name=self.get_name("fastino"),
channel=rtio_offset)
channel=rtio_offset,
log2_width=peripheral["log2_width"])
return 1
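A minimal sketch (not part of the diff) of the pll_en rule added to process_urukul above; the helper function name and the example dicts are hypothetical, only the keys mirror those read by the generator:

# Mirrors the rule above: AD9912 must keep its PLL, AD9910 with the PLL
# bypassed is driven from CLK IN divided by 1, otherwise clk_div stays 0.
def urukul_clk_div(peripheral):
    pll_en = peripheral["pll_en"]
    if peripheral["dds"] == "ad9912" and not pll_en:
        raise ValueError("PLL bypass is not supported on AD9912")
    return 0 if pll_en else 1

print(urukul_clk_div({"dds": "ad9910", "pll_en": False}))  # 1
print(urukul_clk_div({"dds": "ad9910", "pll_en": True}))   # 0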
def process_phaser(self, rtio_offset, peripheral):

View File

@ -74,6 +74,8 @@ class GTX_20X(Module):
p_CPLL_REFCLK_DIV=1,
p_RXOUT_DIV=2,
p_TXOUT_DIV=2,
p_CPLL_INIT_CFG=0x00001E,
p_CPLL_LOCK_CFG=0x01C0,
i_CPLLRESET=cpllreset,
i_CPLLPD=cpllreset,
o_CPLLLOCK=cplllock,
@ -290,13 +292,15 @@ class GTX(Module, TransceiverInterface):
# # #
refclk = Signal()
stable_clkin_n = Signal()
stable_clkin = Signal()
self.specials += Instance("IBUFDS_GTE2",
i_CEB=stable_clkin_n,
i_CEB=~stable_clkin,
i_I=clock_pads.p,
i_IB=clock_pads.n,
o_O=refclk
)
o_O=refclk,
p_CLKCM_CFG="TRUE",
p_CLKRCV_TRST="TRUE",
p_CLKSWING_CFG=3)
rtio_tx_clk = Signal()
channel_interfaces = []
@ -323,7 +327,6 @@ class GTX(Module, TransceiverInterface):
TransceiverInterface.__init__(self, channel_interfaces)
for n, gtx in enumerate(self.gtxs):
self.comb += [
stable_clkin_n.eq(~self.stable_clkin.storage),
gtx.txenable.eq(self.txenable.storage[n])
]
@ -332,6 +335,9 @@ class GTX(Module, TransceiverInterface):
self.cd_rtio.clk.eq(self.gtxs[master].cd_rtio_tx.clk),
self.cd_rtio.rst.eq(reduce(or_, [gtx.cd_rtio_tx.rst for gtx in self.gtxs]))
]
self.comb += stable_clkin.eq(self.stable_clkin.storage | self.gtxs[0].tx_init.cplllock)
# Connect slave i's `rtio_rx` clock to `rtio_rxi` clock
for i in range(nchannels):
self.comb += [

View File

@ -108,9 +108,9 @@ class GTXInit(Module):
startup_fsm.act("INITIAL",
startup_timer.wait.eq(1),
If(startup_timer.done, NextState("RESET_ALL"))
If(startup_timer.done, NextState("RESET_PLL"))
)
startup_fsm.act("RESET_ALL",
startup_fsm.act("RESET_PLL",
gtXxreset.eq(1),
self.cpllreset.eq(1),
pll_reset_timer.wait.eq(1),
@ -118,19 +118,24 @@ class GTXInit(Module):
)
startup_fsm.act("RELEASE_PLL_RESET",
gtXxreset.eq(1),
If(cplllock, NextState("RELEASE_GTH_RESET"))
If(cplllock, NextState("RESET_GTX"))
)
startup_fsm.act("RESET_GTX",
gtXxreset.eq(1),
pll_reset_timer.wait.eq(1),
If(pll_reset_timer.done, NextState("RELEASE_GTX_RESET"))
)
# Release GTX reset and wait for GTX resetdone
# (from UG476, GTX is reset on falling edge
# of gttxreset)
if rx:
startup_fsm.act("RELEASE_GTH_RESET",
startup_fsm.act("RELEASE_GTX_RESET",
Xxuserrdy.eq(1),
cdr_stable_timer.wait.eq(1),
If(Xxresetdone & cdr_stable_timer.done, NextState("DELAY_ALIGN"))
)
else:
startup_fsm.act("RELEASE_GTH_RESET",
startup_fsm.act("RELEASE_GTX_RESET",
Xxuserrdy.eq(1),
If(Xxresetdone, NextState("DELAY_ALIGN"))
)
@ -227,7 +232,7 @@ class GTXInit(Module):
startup_fsm.act("READY",
Xxuserrdy.eq(1),
self.done.eq(1),
If(self.restart, NextState("RESET_ALL"))
If(self.restart, NextState("RESET_GTX"))
)

View File

@ -1,2 +0,0 @@
from artiq.gateware.drtio.wrpll.core import WRPLL
from artiq.gateware.drtio.wrpll.ddmtd import DDMTDSamplerExtFF, DDMTDSamplerGTP

View File

@ -1,156 +0,0 @@
from migen import *
from migen.genlib.resetsync import AsyncResetSynchronizer
from migen.genlib.cdc import MultiReg, PulseSynchronizer
from misoc.interconnect.csr import *
from artiq.gateware.drtio.wrpll.si549 import Si549
from artiq.gateware.drtio.wrpll.ddmtd import DDMTD, Collector
from artiq.gateware.drtio.wrpll import thls, filters
class FrequencyCounter(Module, AutoCSR):
def __init__(self, timer_width=23, counter_width=23, domains=["helper", "rtio", "rtio_rx0"]):
for domain in domains:
name = "counter_" + domain
counter = CSRStatus(counter_width, name=name)
setattr(self, name, counter)
self.update_en = CSRStorage()
timer = Signal(timer_width)
timer_tick = Signal()
self.sync += Cat(timer, timer_tick).eq(timer + 1)
for domain in domains:
sync_domain = getattr(self.sync, domain)
divider = Signal(2)
sync_domain += divider.eq(divider + 1)
divided = Signal()
divided.attr.add("no_retiming")
sync_domain += divided.eq(divider[-1])
divided_sys = Signal()
self.specials += MultiReg(divided, divided_sys)
divided_sys_r = Signal()
divided_tick = Signal()
self.sync += divided_sys_r.eq(divided_sys)
self.comb += divided_tick.eq(divided_sys & ~divided_sys_r)
counter = Signal(counter_width)
counter_csr = getattr(self, "counter_" + domain)
self.sync += [
If(timer_tick,
If(self.update_en.storage, counter_csr.status.eq(counter)),
counter.eq(0),
).Else(
If(divided_tick, counter.eq(counter + 1))
)
]
class WRPLL(Module, AutoCSR):
def __init__(self, helper_clk_pads, main_dcxo_i2c, helper_dxco_i2c, ddmtd_inputs, N=15):
self.helper_reset = CSRStorage(reset=1)
self.collector_reset = CSRStorage(reset=1)
self.filter_reset = CSRStorage(reset=1)
self.adpll_offset_helper = CSRStorage(24)
self.adpll_offset_main = CSRStorage(24)
self.tag_arm = CSR()
self.main_diff_tag = CSRStatus(32)
self.helper_diff_tag = CSRStatus(32)
self.ref_tag = CSRStatus(N)
self.main_tag = CSRStatus(N)
main_diff_tag_32 = Signal((32, True))
helper_diff_tag_32 = Signal((32, True))
self.comb += [
self.main_diff_tag.status.eq(main_diff_tag_32),
self.helper_diff_tag.status.eq(helper_diff_tag_32)
]
self.clock_domains.cd_helper = ClockDomain()
self.clock_domains.cd_collector = ClockDomain()
self.clock_domains.cd_filter = ClockDomain()
self.helper_reset.storage.attr.add("no_retiming")
self.filter_reset.storage.attr.add("no_retiming")
self.specials += Instance("IBUFGDS",
i_I=helper_clk_pads.p, i_IB=helper_clk_pads.n,
o_O=self.cd_helper.clk)
self.comb += [
self.cd_collector.clk.eq(self.cd_collector.clk),
self.cd_filter.clk.eq(self.cd_helper.clk),
]
self.specials += [
AsyncResetSynchronizer(self.cd_helper, self.helper_reset.storage),
AsyncResetSynchronizer(self.cd_collector, self.collector_reset.storage),
AsyncResetSynchronizer(self.cd_filter, self.filter_reset.storage)
]
self.submodules.helper_dcxo = Si549(helper_dxco_i2c)
self.submodules.main_dcxo = Si549(main_dcxo_i2c)
# for diagnostics and PLL initialization
self.submodules.frequency_counter = FrequencyCounter()
ddmtd_counter = Signal(N)
self.sync.helper += ddmtd_counter.eq(ddmtd_counter + 1)
self.submodules.ddmtd_ref = DDMTD(ddmtd_counter, ddmtd_inputs.rec_clk)
self.submodules.ddmtd_main = DDMTD(ddmtd_counter, ddmtd_inputs.main_xo)
collector_cd = ClockDomainsRenamer("collector")
filter_cd = ClockDomainsRenamer("filter")
self.submodules.collector = collector_cd(Collector(N))
self.submodules.filter_helper = filter_cd(
thls.make(filters.helper, data_width=48))
self.submodules.filter_main = filter_cd(
thls.make(filters.main, data_width=48))
self.comb += [
self.collector.tag_ref.eq(self.ddmtd_ref.h_tag),
self.collector.ref_stb.eq(self.ddmtd_ref.h_tag_update),
self.collector.tag_main.eq(self.ddmtd_main.h_tag),
self.collector.main_stb.eq(self.ddmtd_main.h_tag_update)
]
collector_stb_ps = PulseSynchronizer("helper", "sys")
self.submodules += collector_stb_ps
self.sync.helper += collector_stb_ps.i.eq(self.collector.out_stb)
collector_stb_sys = Signal()
self.sync += collector_stb_sys.eq(collector_stb_ps.o)
main_diff_tag_sys = Signal((N+2, True))
helper_diff_tag_sys = Signal((N+2, True))
ref_tag_sys = Signal(N)
main_tag_sys = Signal(N)
self.specials += MultiReg(self.collector.out_main, main_diff_tag_sys)
self.specials += MultiReg(self.collector.out_helper, helper_diff_tag_sys)
self.specials += MultiReg(self.collector.tag_ref, ref_tag_sys)
self.specials += MultiReg(self.collector.tag_main, main_tag_sys)
self.sync += [
If(self.tag_arm.re & self.tag_arm.r, self.tag_arm.w.eq(1)),
If(collector_stb_sys,
self.tag_arm.w.eq(0),
If(self.tag_arm.w,
main_diff_tag_32.eq(main_diff_tag_sys),
helper_diff_tag_32.eq(helper_diff_tag_sys),
self.ref_tag.status.eq(ref_tag_sys),
self.main_tag.status.eq(main_tag_sys)
)
)
]
self.comb += [
self.filter_helper.input.eq(self.collector.out_helper << 22),
self.filter_helper.input_stb.eq(self.collector.out_stb),
self.filter_main.input.eq(self.collector.out_main),
self.filter_main.input_stb.eq(self.collector.out_stb)
]
self.sync.helper += [
self.helper_dcxo.adpll_stb.eq(self.filter_helper.output_stb),
self.helper_dcxo.adpll.eq(self.filter_helper.output + self.adpll_offset_helper.storage),
self.main_dcxo.adpll_stb.eq(self.filter_main.output_stb),
self.main_dcxo.adpll.eq(self.filter_main.output + self.adpll_offset_main.storage)
]

View File

@ -1,221 +0,0 @@
from migen import *
from migen.genlib.cdc import PulseSynchronizer, MultiReg
from migen.genlib.fsm import FSM
from misoc.interconnect.csr import *
class DDMTDSamplerExtFF(Module):
def __init__(self, ddmtd_inputs):
self.rec_clk = Signal()
self.main_xo = Signal()
# # #
# TODO: s/h timing at FPGA pads
if hasattr(ddmtd_inputs, "rec_clk"):
rec_clk_1 = ddmtd_inputs.rec_clk
else:
rec_clk_1 = Signal()
self.specials += Instance("IBUFDS",
i_I=ddmtd_inputs.rec_clk_p, i_IB=ddmtd_inputs.rec_clk_n,
o_O=rec_clk_1)
if hasattr(ddmtd_inputs, "main_xo"):
main_xo_1 = ddmtd_inputs.main_xo
else:
main_xo_1 = Signal()
self.specials += Instance("IBUFDS",
i_I=ddmtd_inputs.main_xo_p, i_IB=ddmtd_inputs.main_xo_n,
o_O=main_xo_1)
self.specials += [
Instance("FD", i_C=ClockSignal("helper"),
i_D=rec_clk_1, o_Q=self.rec_clk,
attr={("IOB", "TRUE")}),
Instance("FD", i_C=ClockSignal("helper"),
i_D=main_xo_1, o_Q=self.main_xo,
attr={("IOB", "TRUE")}),
]
class DDMTDSamplerGTP(Module):
def __init__(self, gtp, main_xo_pads):
self.rec_clk = Signal()
self.main_xo = Signal()
# # #
# Getting the main XO signal from IBUFDS_GTE2 is problematic because
# the transceiver PLL craps out if an improper clock signal is applied,
# so we are disabling the buffer until the clock is stable.
main_xo_se = Signal()
rec_clk_1 = Signal()
main_xo_1 = Signal()
self.specials += [
Instance("IBUFDS",
i_I=main_xo_pads.p, i_IB=main_xo_pads.n,
o_O=main_xo_se),
Instance("FD", i_C=ClockSignal("helper"),
i_D=gtp.cd_rtio_rx0.clk, o_Q=rec_clk_1,
attr={("DONT_TOUCH", "TRUE")}),
Instance("FD", i_C=ClockSignal("helper"),
i_D=rec_clk_1, o_Q=self.rec_clk,
attr={("DONT_TOUCH", "TRUE")}),
Instance("FD", i_C=ClockSignal("helper"),
i_D=main_xo_se, o_Q=main_xo_1,
attr={("IOB", "TRUE")}),
Instance("FD", i_C=ClockSignal("helper"),
i_D=main_xo_1, o_Q=self.main_xo,
attr={("DONT_TOUCH", "TRUE")}),
]
class DDMTDDeglitcherFirstEdge(Module):
def __init__(self, input_signal, blind_period=128):
self.detect = Signal()
self.tag_correction = 0
rising = Signal()
input_signal_r = Signal()
self.sync.helper += [
input_signal_r.eq(input_signal),
rising.eq(input_signal & ~input_signal_r)
]
blind_counter = Signal(max=blind_period)
self.sync.helper += [
If(blind_counter != 0, blind_counter.eq(blind_counter - 1)),
If(input_signal_r, blind_counter.eq(blind_period - 1)),
self.detect.eq(rising & (blind_counter == 0))
]
class DDMTD(Module):
def __init__(self, counter, input_signal):
# in helper clock domain
self.h_tag = Signal(len(counter))
self.h_tag_update = Signal()
# # #
deglitcher = DDMTDDeglitcherFirstEdge(input_signal)
self.submodules += deglitcher
self.sync.helper += [
self.h_tag_update.eq(0),
If(deglitcher.detect,
self.h_tag_update.eq(1),
self.h_tag.eq(counter + deglitcher.tag_correction)
)
]
class Collector(Module):
"""Generates loop filter inputs from DDMTD outputs.
The input to the main DCXO lock loop filter is the difference between the
reference and main tags after unwrapping (see below).
The input to the helper DCXO lock loop filter is the difference between the
current reference tag and the previous reference tag after unwrapping.
When the WR PLL is locked, the following ideally (no noise/jitter) obtain:
- f_main = f_ref
- f_helper = f_ref * 2^N/(2^N+1)
- f_beat = f_ref - f_helper = f_ref / (2^N + 1) (cycle time is: dt=1/f_beat)
- the reference and main DCXO tags are equal to each other at every cycle
(the main DCXO lock drives this difference to 0)
- the reference and main DCXO tags both have the same value at each cycle
(the tag difference for each DDMTD is given by
f_helper*dt = f_helper/f_beat = 2^N, which causes the N-bit DDMTD counter
to wrap around and come back to its previous value)
Note that we currently lock the frequency of the helper DCXO to the
reference clock, not its phase. As a result, while the tag differences are
controlled, their absolute values are arbitrary. We could consider moving
the helper lock to a phase lock at some point in the future...
Since the DDMTD counter is only N bits, it is possible for tag values to
wrap around. This will happen frequently if the locked tags happen to be
near the edges of the counter, so that jitter can easily cause a phase wrap.
But, it can also easily happen during lock acquisition or other transients.
To avoid glitches in the output, we unwrap the tag differences. Currently
we do this in hardware, but we should consider extending the processor to
allow us to do it inside the filters. Since the processor uses wider
signals, this would significantly extend the overall glitch-free
range of the PLL and may aid lock acquisition.
"""
def __init__(self, N):
self.ref_stb = Signal()
self.main_stb = Signal()
self.tag_ref = Signal(N)
self.tag_main = Signal(N)
self.out_stb = Signal()
self.out_main = Signal((N+2, True))
self.out_helper = Signal((N+2, True))
self.out_tag_ref = Signal(N)
self.out_tag_main = Signal(N)
tag_ref_r = Signal(N)
tag_main_r = Signal(N)
main_tag_diff = Signal((N+2, True))
helper_tag_diff = Signal((N+2, True))
# # #
fsm = FSM(reset_state="IDLE")
self.submodules += fsm
fsm.act("IDLE",
NextValue(self.out_stb, 0),
If(self.ref_stb & self.main_stb,
NextValue(tag_ref_r, self.tag_ref),
NextValue(tag_main_r, self.tag_main),
NextState("DIFF")
).Elif(self.ref_stb,
NextValue(tag_ref_r, self.tag_ref),
NextState("WAITMAIN")
).Elif(self.main_stb,
NextValue(tag_main_r, self.tag_main),
NextState("WAITREF")
)
)
fsm.act("WAITREF",
If(self.ref_stb,
NextValue(tag_ref_r, self.tag_ref),
NextState("DIFF")
)
)
fsm.act("WAITMAIN",
If(self.main_stb,
NextValue(tag_main_r, self.tag_main),
NextState("DIFF")
)
)
fsm.act("DIFF",
NextValue(main_tag_diff, tag_main_r - tag_ref_r),
NextValue(helper_tag_diff, tag_ref_r - self.out_tag_ref),
NextState("UNWRAP")
)
fsm.act("UNWRAP",
If(main_tag_diff - self.out_main > 2**(N-1),
NextValue(main_tag_diff, main_tag_diff - 2**N)
).Elif(self.out_main - main_tag_diff > 2**(N-1),
NextValue(main_tag_diff, main_tag_diff + 2**N)
),
If(helper_tag_diff - self.out_helper > 2**(N-1),
NextValue(helper_tag_diff, helper_tag_diff - 2**N)
).Elif(self.out_helper - helper_tag_diff > 2**(N-1),
NextValue(helper_tag_diff, helper_tag_diff + 2**N)
),
NextState("OUTPUT")
)
fsm.act("OUTPUT",
NextValue(self.out_tag_ref, tag_ref_r),
NextValue(self.out_tag_main, tag_main_r),
NextValue(self.out_main, main_tag_diff),
NextValue(self.out_helper, helper_tag_diff),
NextValue(self.out_stb, 1),
NextState("IDLE")
)
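A plain-Python model (not part of the deleted file) of the UNWRAP step above, assuming only what the docstring and the FSM encode: successive N-bit tag differences are kept continuous by shifting the new value by ±2**N whenever it jumps more than half the counter range away from the previous output.

def unwrap(new_diff, prev_out, N):
    # Same comparison as the UNWRAP state: correct by one full counter period
    # when the apparent step exceeds half the range.
    if new_diff - prev_out > 2**(N - 1):
        new_diff -= 2**N
    elif prev_out - new_diff > 2**(N - 1):
        new_diff += 2**N
    return new_diff

# With N=4 (counter period 16), a raw difference that falls from 7 to -6
# is really a step of +3: the unwrapped value becomes 10.
assert unwrap(-6, 7, 4) == 10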

View File

@ -1,61 +0,0 @@
helper_xn1 = 0
helper_xn2 = 0
helper_yn0 = 0
helper_yn1 = 0
helper_yn2 = 0
helper_out = 0
main_xn1 = 0
main_xn2 = 0
main_yn0 = 0
main_yn1 = 0
main_yn2 = 0
def helper(tag_diff):
global helper_xn1, helper_xn2, helper_yn0, \
helper_yn1, helper_yn2, helper_out
helper_xn0 = 0 - tag_diff # *(2**22)
helper_yr = 4294967296
helper_yn2 = helper_yn1
helper_yn1 = helper_yn0
helper_yn0 = (284885690 * (helper_xn0
+ (217319150 * helper_xn1 >> 44)
- (17591968725108 * helper_xn2 >> 44)
) >> 44
) + (35184372088832*helper_yn1 >> 44) - helper_yn2
helper_xn2 = helper_xn1
helper_xn1 = helper_xn0
helper_out = 268435456*helper_yn0 >> 44
helper_out = min(helper_out, helper_yr)
helper_out = max(helper_out, 0 - helper_yr)
return helper_out
def main(main_xn0):
global main_xn1, main_xn2, main_yn0, main_yn1, main_yn2
main_yr = 4294967296
main_yn2 = main_yn1
main_yn1 = main_yn0
main_yn0 = (
((133450380908*(((35184372088832*main_xn0) >> 44) +
((17592186044417*main_xn1) >> 44))) >> 44) +
((29455872930889*main_yn1) >> 44) -
((12673794781453*main_yn2) >> 44))
main_xn2 = main_xn1
main_xn1 = main_xn0
main_yn0 = min(main_yn0, main_yr)
main_yn0 = max(main_yn0, 0 - main_yr)
return main_yn0
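For reference, the two routines above are second-order IIR (biquad) sections written in integer arithmetic so the soft processor in thls.py can execute them; up to the exact factoring used, each computes

y_n = b_0 x_n + b_1 x_{n-1} + b_2 x_{n-2} + a_1 y_{n-1} - a_2 y_{n-2}

with the coefficients held as 44-bit fixed point (every product is shifted right by 44) and the result clamped to ±4294967296, the helper_yr/main_yr bound of 2^{32}.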

View File

@ -1,340 +0,0 @@
from migen import *
from migen.genlib.fsm import *
from migen.genlib.cdc import MultiReg, PulseSynchronizer, BlindTransfer
from misoc.interconnect.csr import *
class I2CClockGen(Module):
def __init__(self, width):
self.load = Signal(width)
self.clk2x = Signal()
cnt = Signal.like(self.load)
self.comb += [
self.clk2x.eq(cnt == 0),
]
self.sync += [
If(self.clk2x,
cnt.eq(self.load),
).Else(
cnt.eq(cnt - 1),
)
]
class I2CMasterMachine(Module):
def __init__(self, clock_width):
self.scl = Signal(reset=1)
self.sda_o = Signal(reset=1)
self.sda_i = Signal()
self.submodules.cg = CEInserter()(I2CClockGen(clock_width))
self.start = Signal()
self.stop = Signal()
self.write = Signal()
self.ack = Signal()
self.data = Signal(8)
self.ready = Signal()
###
bits = Signal(4)
data = Signal(8)
fsm = CEInserter()(FSM("IDLE"))
self.submodules += fsm
fsm.act("IDLE",
self.ready.eq(1),
If(self.start,
NextState("START0"),
).Elif(self.stop,
NextState("STOP0"),
).Elif(self.write,
NextValue(bits, 8),
NextValue(data, self.data),
NextState("WRITE0")
)
)
fsm.act("START0",
NextValue(self.scl, 1),
NextState("START1")
)
fsm.act("START1",
NextValue(self.sda_o, 0),
NextState("IDLE")
)
fsm.act("STOP0",
NextValue(self.scl, 0),
NextState("STOP1")
)
fsm.act("STOP1",
NextValue(self.sda_o, 0),
NextState("STOP2")
)
fsm.act("STOP2",
NextValue(self.scl, 1),
NextState("STOP3")
)
fsm.act("STOP3",
NextValue(self.sda_o, 1),
NextState("IDLE")
)
fsm.act("WRITE0",
NextValue(self.scl, 0),
NextState("WRITE1")
)
fsm.act("WRITE1",
If(bits == 0,
NextValue(self.sda_o, 1),
NextState("READACK0"),
).Else(
NextValue(self.sda_o, data[7]),
NextState("WRITE2"),
)
)
fsm.act("WRITE2",
NextValue(self.scl, 1),
NextValue(data[1:], data[:-1]),
NextValue(bits, bits - 1),
NextState("WRITE0"),
)
fsm.act("READACK0",
NextValue(self.scl, 1),
NextState("READACK1"),
)
fsm.act("READACK1",
NextValue(self.ack, ~self.sda_i),
NextState("IDLE")
)
run = Signal()
idle = Signal()
self.comb += [
run.eq((self.start | self.stop | self.write) & self.ready),
idle.eq(~run & fsm.ongoing("IDLE")),
self.cg.ce.eq(~idle),
fsm.ce.eq(run | self.cg.clk2x),
]
class ADPLLProgrammer(Module):
def __init__(self):
self.i2c_divider = Signal(16)
self.i2c_address = Signal(7)
self.adpll = Signal(24)
self.stb = Signal()
self.busy = Signal()
self.nack = Signal()
self.scl = Signal()
self.sda_i = Signal()
self.sda_o = Signal()
self.scl.attr.add("no_retiming")
self.sda_o.attr.add("no_retiming")
# # #
master = I2CMasterMachine(16)
self.submodules += master
self.comb += [
master.cg.load.eq(self.i2c_divider),
self.scl.eq(master.scl),
master.sda_i.eq(self.sda_i),
self.sda_o.eq(master.sda_o)
]
fsm = FSM()
self.submodules += fsm
adpll = Signal.like(self.adpll)
fsm.act("IDLE",
If(self.stb,
NextValue(adpll, self.adpll),
NextState("START")
)
)
fsm.act("START",
master.start.eq(1),
If(master.ready, NextState("DEVADDRESS"))
)
fsm.act("DEVADDRESS",
master.data.eq(self.i2c_address << 1),
master.write.eq(1),
If(master.ready, NextState("REGADRESS"))
)
fsm.act("REGADRESS",
master.data.eq(231),
master.write.eq(1),
If(master.ready,
If(master.ack,
NextState("DATA0")
).Else(
self.nack.eq(1),
NextState("STOP")
)
)
)
fsm.act("DATA0",
master.data.eq(adpll[0:8]),
master.write.eq(1),
If(master.ready,
If(master.ack,
NextState("DATA1")
).Else(
self.nack.eq(1),
NextState("STOP")
)
)
)
fsm.act("DATA1",
master.data.eq(adpll[8:16]),
master.write.eq(1),
If(master.ready,
If(master.ack,
NextState("DATA2")
).Else(
self.nack.eq(1),
NextState("STOP")
)
)
)
fsm.act("DATA2",
master.data.eq(adpll[16:24]),
master.write.eq(1),
If(master.ready,
If(~master.ack, self.nack.eq(1)),
NextState("STOP")
)
)
fsm.act("STOP",
master.stop.eq(1),
If(master.ready,
If(~master.ack, self.nack.eq(1)),
NextState("IDLE")
)
)
self.comb += self.busy.eq(~fsm.ongoing("IDLE"))
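A hypothetical sketch (not part of the deleted file) of the I2C write burst the ADPLLProgrammer FSM above performs when stb is pulsed; the START/STOP conditions are generated by the FSM's START and STOP states, and the bytes go out least significant first:

def adpll_i2c_bytes(i2c_address, adpll):
    return [
        (i2c_address << 1) | 0,   # DEVADDRESS: 7-bit address, R/W# = 0 (write)
        231,                      # REGADRESS: register index used above
        adpll & 0xff,             # DATA0
        (adpll >> 8) & 0xff,      # DATA1
        (adpll >> 16) & 0xff,     # DATA2
    ]

adpll_i2c_bytes(0x55, 0x123456)  # -> [0xaa, 231, 0x56, 0x34, 0x12]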
def simulate_programmer():
from migen.sim.core import run_simulation
dut = ADPLLProgrammer()
def generator():
yield dut.i2c_divider.eq(4)
yield dut.i2c_address.eq(0x55)
yield
yield dut.adpll.eq(0x123456)
yield dut.stb.eq(1)
yield
yield dut.stb.eq(0)
yield
while (yield dut.busy):
yield
for _ in range(20):
yield
run_simulation(dut, generator(), vcd_name="tb.vcd")
class Si549(Module, AutoCSR):
def __init__(self, pads):
self.gpio_enable = CSRStorage(reset=1)
self.gpio_in = CSRStatus(2)
self.gpio_out = CSRStorage(2)
self.gpio_oe = CSRStorage(2)
self.i2c_divider = CSRStorage(16, reset=75)
self.i2c_address = CSRStorage(7)
self.errors = CSR(2)
# in helper clock domain
self.adpll = Signal(24)
self.adpll_stb = Signal()
# # #
programmer = ClockDomainsRenamer("helper")(ADPLLProgrammer())
self.submodules += programmer
self.i2c_divider.storage.attr.add("no_retiming")
self.i2c_address.storage.attr.add("no_retiming")
self.specials += [
MultiReg(self.i2c_divider.storage, programmer.i2c_divider, "helper"),
MultiReg(self.i2c_address.storage, programmer.i2c_address, "helper")
]
self.comb += [
programmer.adpll.eq(self.adpll),
programmer.stb.eq(self.adpll_stb)
]
self.gpio_enable.storage.attr.add("no_retiming")
self.gpio_out.storage.attr.add("no_retiming")
self.gpio_oe.storage.attr.add("no_retiming")
# SCL GPIO and mux
ts_scl = TSTriple(1)
self.specials += ts_scl.get_tristate(pads.scl)
status = Signal()
self.comb += self.gpio_in.status[0].eq(status)
self.specials += MultiReg(ts_scl.i, status)
self.comb += [
If(self.gpio_enable.storage,
ts_scl.o.eq(self.gpio_out.storage[0]),
ts_scl.oe.eq(self.gpio_oe.storage[0])
).Else(
ts_scl.o.eq(0),
ts_scl.oe.eq(~programmer.scl)
)
]
# SDA GPIO and mux
ts_sda = TSTriple(1)
self.specials += ts_sda.get_tristate(pads.sda)
status = Signal()
self.comb += self.gpio_in.status[1].eq(status)
self.specials += MultiReg(ts_sda.i, status)
self.comb += [
If(self.gpio_enable.storage,
ts_sda.o.eq(self.gpio_out.storage[1]),
ts_sda.oe.eq(self.gpio_oe.storage[1])
).Else(
ts_sda.o.eq(0),
ts_sda.oe.eq(~programmer.sda_o)
)
]
self.specials += MultiReg(ts_sda.i, programmer.sda_i, "helper")
# Error reporting
collision_cdc = BlindTransfer("helper", "sys")
self.submodules += collision_cdc
self.comb += collision_cdc.i.eq(programmer.stb & programmer.busy)
nack_cdc = PulseSynchronizer("helper", "sys")
self.submodules += nack_cdc
self.comb += nack_cdc.i.eq(programmer.nack)
for n, trig in enumerate([collision_cdc.o, nack_cdc.o]):
self.sync += [
If(self.errors.re & self.errors.r[n], self.errors.w[n].eq(0)),
If(trig, self.errors.w[n].eq(1))
]
if __name__ == "__main__":
simulate_programmer()

View File

@ -1,618 +0,0 @@
import inspect
import ast
from copy import copy
import operator
from functools import reduce
from collections import OrderedDict
from migen import *
from migen.genlib.fsm import *
class Isn:
def __init__(self, immediate=None, inputs=None, outputs=None):
if inputs is None:
inputs = []
if outputs is None:
outputs = []
self.immediate = immediate
self.inputs = inputs
self.outputs = outputs
def __repr__(self):
r = "<"
r += self.__class__.__name__
if self.immediate is not None:
r += " (" + str(self.immediate) + ")"
for inp in self.inputs:
r += " r" + str(inp)
if self.outputs:
r += " ->"
for outp in self.outputs:
r += " r" + str(outp)
r += ">"
return r
class NopIsn(Isn):
opcode = 0
class AddIsn(Isn):
opcode = 1
class SubIsn(Isn):
opcode = 2
class MulShiftIsn(Isn):
opcode = 3
# opcode = 4: MulShift with alternate shift
class MinIsn(Isn):
opcode = 5
class MaxIsn(Isn):
opcode = 6
class CopyIsn(Isn):
opcode = 7
class InputIsn(Isn):
opcode = 8
class OutputIsn(Isn):
opcode = 9
class EndIsn(Isn):
opcode = 10
class ASTCompiler:
def __init__(self):
self.program = []
self.data = []
self.next_ssa_reg = -1
self.constants = dict()
self.names = dict()
self.globals = OrderedDict()
def get_ssa_reg(self):
r = self.next_ssa_reg
self.next_ssa_reg -= 1
return r
def add_global(self, name):
if name not in self.globals:
r = len(self.data)
self.data.append(0)
self.names[name] = r
self.globals[name] = r
def input(self, name):
target = self.get_ssa_reg()
self.program.append(InputIsn(outputs=[target]))
self.names[name] = target
def emit(self, node):
if isinstance(node, ast.BinOp):
if isinstance(node.op, ast.RShift):
if not isinstance(node.left, ast.BinOp) or not isinstance(node.left.op, ast.Mult):
raise NotImplementedError
if not isinstance(node.right, ast.Num):
raise NotImplementedError
left = self.emit(node.left.left)
right = self.emit(node.left.right)
cons = lambda **kwargs: MulShiftIsn(immediate=node.right.n, **kwargs)
else:
left = self.emit(node.left)
right = self.emit(node.right)
if isinstance(node.op, ast.Add):
cons = AddIsn
elif isinstance(node.op, ast.Sub):
cons = SubIsn
elif isinstance(node.op, ast.Mult):
cons = lambda **kwargs: MulShiftIsn(immediate=0, **kwargs)
else:
raise NotImplementedError
output = self.get_ssa_reg()
self.program.append(cons(inputs=[left, right], outputs=[output]))
return output
elif isinstance(node, ast.Call):
if not isinstance(node.func, ast.Name):
raise NotImplementedError
funcname = node.func.id
if node.keywords:
raise NotImplementedError
inputs = [self.emit(x) for x in node.args]
if funcname == "min":
cons = MinIsn
elif funcname == "max":
cons = MaxIsn
else:
raise NotImplementedError
output = self.get_ssa_reg()
self.program.append(cons(inputs=inputs, outputs=[output]))
return output
elif isinstance(node, (ast.Num, ast.UnaryOp)):
if isinstance(node, ast.UnaryOp):
if not isinstance(node.operand, ast.Num):
raise NotImplementedError
if isinstance(node.op, ast.UAdd):
transform = lambda x: x
elif isinstance(node.op, ast.USub):
transform = operator.neg
elif isinstance(node.op, ast.Invert):
transform = operator.invert
else:
raise NotImplementedError
node = node.operand
else:
transform = lambda x: x
n = transform(node.n)
if n in self.constants:
return self.constants[n]
else:
r = len(self.data)
self.data.append(n)
self.constants[n] = r
return r
elif isinstance(node, ast.Name):
return self.names[node.id]
elif isinstance(node, ast.Assign):
output = self.emit(node.value)
for target in node.targets:
assert isinstance(target, ast.Name)
self.names[target.id] = output
elif isinstance(node, ast.Return):
value = self.emit(node.value)
self.program.append(OutputIsn(inputs=[value]))
elif isinstance(node, ast.Global):
pass
else:
raise NotImplementedError
class Processor:
def __init__(self, data_width=32, multiplier_stages=2):
self.data_width = data_width
self.multiplier_stages = multiplier_stages
self.multiplier_shifts = []
self.program_rom_size = None
self.data_ram_size = None
self.opcode_bits = 4
self.reg_bits = None
def get_instruction_latency(self, isn):
return {
AddIsn: 2,
SubIsn: 2,
MulShiftIsn: 1 + self.multiplier_stages,
MinIsn: 2,
MaxIsn: 2,
CopyIsn: 1,
InputIsn: 1
}[isn.__class__]
def encode_instruction(self, isn, exit):
opcode = isn.opcode
if isn.immediate is not None and not isinstance(isn, MulShiftIsn):
r0 = isn.immediate
if len(isn.inputs) >= 1:
r1 = isn.inputs[0]
else:
r1 = 0
else:
if len(isn.inputs) >= 1:
r0 = isn.inputs[0]
else:
r0 = 0
if len(isn.inputs) >= 2:
r1 = isn.inputs[1]
else:
r1 = 0
r = 0
for value, bits in ((exit, self.reg_bits), (r1, self.reg_bits), (r0, self.reg_bits), (opcode, self.opcode_bits)):
r <<= bits
r |= value
return r
def instruction_bits(self):
return 3*self.reg_bits + self.opcode_bits
def implement(self, program, data):
return ProcessorImpl(self, program, data)
class Scheduler:
def __init__(self, processor, reserved_data, program):
self.processor = processor
self.reserved_data = reserved_data
self.used_registers = set(range(self.reserved_data))
self.exits = dict()
self.program = program
self.remaining = copy(program)
self.output = []
def allocate_register(self):
r = min(set(range(max(self.used_registers) + 2)) - self.used_registers)
self.used_registers.add(r)
return r
def free_register(self, r):
assert r >= self.reserved_data
self.used_registers.discard(r)
def find_inputs(self, cycle, isn):
mapped_inputs = []
for inp in isn.inputs:
if inp >= 0:
mapped_inputs.append(inp)
else:
found = False
for i in range(cycle):
if i in self.exits:
r, rm = self.exits[i]
if r == inp:
mapped_inputs.append(rm)
found = True
break
if not found:
return None
return mapped_inputs
def schedule_one(self, isn):
cycle = len(self.output)
mapped_inputs = self.find_inputs(cycle, isn)
if mapped_inputs is None:
return False
if isn.outputs:
# check that exit slot is free
latency = self.processor.get_instruction_latency(isn)
exit = cycle + latency
if exit in self.exits:
return False
# avoid RAW hazard with global writeback
for output in isn.outputs:
if output >= 0:
for risn in self.remaining:
for inp in risn.inputs:
if inp == output:
return False
# Instruction can be scheduled
self.remaining.remove(isn)
for inp, minp in zip(isn.inputs, mapped_inputs):
can_free = inp < 0 and all(inp != rinp for risn in self.remaining for rinp in risn.inputs)
if can_free:
self.free_register(minp)
if isn.outputs:
assert len(isn.outputs) == 1
if isn.outputs[0] < 0:
output = self.allocate_register()
else:
output = isn.outputs[0]
self.exits[exit] = (isn.outputs[0], output)
self.output.append(isn.__class__(immediate=isn.immediate, inputs=mapped_inputs))
return True
def schedule(self):
while self.remaining:
success = False
for isn in self.remaining:
if self.schedule_one(isn):
success = True
break
if not success:
self.output.append(NopIsn())
self.output += [NopIsn()]*(max(self.exits.keys()) - len(self.output) + 1)
return self.output
class CompiledProgram:
def __init__(self, processor, program, exits, data, glbs):
self.processor = processor
self.program = program
self.exits = exits
self.data = data
self.globals = glbs
def pretty_print(self):
for cycle, isn in enumerate(self.program):
l = "{:4d} {:15}".format(cycle, str(isn))
if cycle in self.exits:
l += " -> r{}".format(self.exits[cycle])
print(l)
def dimension_processor(self):
self.processor.program_rom_size = len(self.program)
self.processor.data_ram_size = len(self.data)
self.processor.reg_bits = (self.processor.data_ram_size - 1).bit_length()
for isn in self.program:
if isinstance(isn, MulShiftIsn) and isn.immediate not in self.processor.multiplier_shifts:
self.processor.multiplier_shifts.append(isn.immediate)
def encode(self):
r = []
for i, isn in enumerate(self.program):
exit = self.exits.get(i, 0)
r.append(self.processor.encode_instruction(isn, exit))
return r
def compile(processor, function):
node = ast.parse(inspect.getsource(function))
assert isinstance(node, ast.Module)
assert len(node.body) == 1
node = node.body[0]
assert isinstance(node, ast.FunctionDef)
assert len(node.args.args) == 1
arg = node.args.args[0].arg
body = node.body
astcompiler = ASTCompiler()
for node in body:
if isinstance(node, ast.Global):
for name in node.names:
astcompiler.add_global(name)
arg_r = astcompiler.input(arg)
for node in body:
astcompiler.emit(node)
if isinstance(node, ast.Return):
break
for glbl, location in astcompiler.globals.items():
new_location = astcompiler.names[glbl]
if new_location != location:
astcompiler.program.append(CopyIsn(inputs=[new_location], outputs=[location]))
scheduler = Scheduler(processor, len(astcompiler.data), astcompiler.program)
scheduler.schedule()
program = copy(scheduler.output)
program.append(EndIsn())
max_reg = max(max(max(isn.inputs + [0]) for isn in program), max(v[1] for k, v in scheduler.exits.items()))
return CompiledProgram(
processor=processor,
program=program,
exits={k: v[1] for k, v in scheduler.exits.items()},
data=astcompiler.data + [0]*(max_reg - len(astcompiler.data) + 1),
glbs=astcompiler.globals)
class BaseUnit(Module):
def __init__(self, data_width):
self.stb_i = Signal()
self.i0 = Signal((data_width, True))
self.i1 = Signal((data_width, True))
self.stb_o = Signal()
self.o = Signal((data_width, True))
class NopUnit(BaseUnit):
pass
class OpUnit(BaseUnit):
def __init__(self, op, data_width, stages, op_data_width=None):
BaseUnit.__init__(self, data_width)
# work around Migen's mishandling of Verilog's cretinous operator width rules
if op_data_width is None:
op_data_width = data_width
if stages > 1:
# Vivado backward retiming for DSP does not work correctly if DSP inputs
# are not registered.
i0 = Signal.like(self.i0)
i1 = Signal.like(self.i1)
stb_i = Signal()
self.sync += [
i0.eq(self.i0),
i1.eq(self.i1),
stb_i.eq(self.stb_i)
]
output_stages = stages - 1
else:
i0, i1, stb_i = self.i0, self.i1, self.stb_i
output_stages = stages
o = Signal((op_data_width, True))
self.comb += o.eq(op(i0, i1))
stb_o = stb_i
for i in range(output_stages):
n_o = Signal((data_width, True))
if stages > 1:
n_o.attr.add(("retiming_backward", 1))
n_stb_o = Signal()
self.sync += [
n_o.eq(o),
n_stb_o.eq(stb_o)
]
o = n_o
stb_o = n_stb_o
self.comb += [
self.o.eq(o),
self.stb_o.eq(stb_o)
]
class SelectUnit(BaseUnit):
def __init__(self, op, data_width):
BaseUnit.__init__(self, data_width)
self.sync += [
self.stb_o.eq(self.stb_i),
If(op(self.i0, self.i1),
self.o.eq(self.i0)
).Else(
self.o.eq(self.i1)
)
]
class CopyUnit(BaseUnit):
def __init__(self, data_width):
BaseUnit.__init__(self, data_width)
self.comb += [
self.stb_o.eq(self.stb_i),
self.o.eq(self.i0)
]
class InputUnit(BaseUnit):
def __init__(self, data_width, input_stb, input):
BaseUnit.__init__(self, data_width)
self.buffer = Signal(data_width)
self.comb += [
self.stb_o.eq(self.stb_i),
self.o.eq(self.buffer)
]
class OutputUnit(BaseUnit):
def __init__(self, data_width, output_stb, output):
BaseUnit.__init__(self, data_width)
self.sync += [
output_stb.eq(self.stb_i),
output.eq(self.i0)
]
class ProcessorImpl(Module):
def __init__(self, pd, program, data):
self.input_stb = Signal()
self.input = Signal((pd.data_width, True))
self.output_stb = Signal()
self.output = Signal((pd.data_width, True))
self.busy = Signal()
# # #
program_mem = Memory(pd.instruction_bits(), pd.program_rom_size, init=program)
data_mem0 = Memory(pd.data_width, pd.data_ram_size, init=data)
data_mem1 = Memory(pd.data_width, pd.data_ram_size, init=data)
self.specials += program_mem, data_mem0, data_mem1
pc = Signal(pd.instruction_bits())
pc_next = Signal.like(pc)
pc_en = Signal()
self.sync += pc.eq(pc_next)
self.comb += [
If(pc_en,
pc_next.eq(pc + 1)
).Else(
pc_next.eq(0)
)
]
program_mem_port = program_mem.get_port()
self.specials += program_mem_port
self.comb += program_mem_port.adr.eq(pc_next)
s = 0
opcode = Signal(pd.opcode_bits)
self.comb += opcode.eq(program_mem_port.dat_r[s:s+pd.opcode_bits])
s += pd.opcode_bits
r0 = Signal(pd.reg_bits)
self.comb += r0.eq(program_mem_port.dat_r[s:s+pd.reg_bits])
s += pd.reg_bits
r1 = Signal(pd.reg_bits)
self.comb += r1.eq(program_mem_port.dat_r[s:s+pd.reg_bits])
s += pd.reg_bits
exit = Signal(pd.reg_bits)
self.comb += exit.eq(program_mem_port.dat_r[s:s+pd.reg_bits])
data_read_port0 = data_mem0.get_port()
data_read_port1 = data_mem1.get_port()
self.specials += data_read_port0, data_read_port1
self.comb += [
data_read_port0.adr.eq(r0),
data_read_port1.adr.eq(r1)
]
data_write_port = data_mem0.get_port(write_capable=True)
data_write_port_dup = data_mem1.get_port(write_capable=True)
self.specials += data_write_port, data_write_port_dup
self.comb += [
data_write_port_dup.we.eq(data_write_port.we),
data_write_port_dup.adr.eq(data_write_port.adr),
data_write_port_dup.dat_w.eq(data_write_port.dat_w),
data_write_port.adr.eq(exit)
]
nop = NopUnit(pd.data_width)
adder = OpUnit(operator.add, pd.data_width, 1)
subtractor = OpUnit(operator.sub, pd.data_width, 1)
if pd.multiplier_shifts:
if len(pd.multiplier_shifts) != 1:
raise NotImplementedError
multiplier = OpUnit(lambda a, b: a * b >> pd.multiplier_shifts[0],
pd.data_width, pd.multiplier_stages, op_data_width=2*pd.data_width)
else:
multiplier = NopUnit(pd.data_width)
minu = SelectUnit(operator.lt, pd.data_width)
maxu = SelectUnit(operator.gt, pd.data_width)
copier = CopyUnit(pd.data_width)
inu = InputUnit(pd.data_width, self.input_stb, self.input)
outu = OutputUnit(pd.data_width, self.output_stb, self.output)
units = [nop, adder, subtractor, multiplier, minu, maxu, copier, inu, outu]
self.submodules += units
for unit in units:
self.sync += unit.stb_i.eq(0)
self.comb += [
unit.i0.eq(data_read_port0.dat_r),
unit.i1.eq(data_read_port1.dat_r),
If(unit.stb_o,
data_write_port.we.eq(1),
data_write_port.dat_w.eq(unit.o)
)
]
decode_table = [
(NopIsn.opcode, nop),
(AddIsn.opcode, adder),
(SubIsn.opcode, subtractor),
(MulShiftIsn.opcode, multiplier),
(MulShiftIsn.opcode + 1, multiplier),
(MinIsn.opcode, minu),
(MaxIsn.opcode, maxu),
(CopyIsn.opcode, copier),
(InputIsn.opcode, inu),
(OutputIsn.opcode, outu)
]
for allocated_opcode, unit in decode_table:
self.sync += If(pc_en & (opcode == allocated_opcode), unit.stb_i.eq(1))
fsm = FSM()
self.submodules += fsm
fsm.act("IDLE",
pc_en.eq(0),
NextValue(inu.buffer, self.input),
If(self.input_stb, NextState("PROCESSING"))
)
fsm.act("PROCESSING",
self.busy.eq(1),
pc_en.eq(1),
If(opcode == EndIsn.opcode,
pc_en.eq(0),
NextState("IDLE")
)
)
def make(function, **kwargs):
proc = Processor(**kwargs)
cp = compile(proc, function)
cp.dimension_processor()
return proc.implement(cp.encode(), cp.data)
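As a usage sketch (mirroring the THLS unit test listed further down in this compare view; the names below are illustrative and not part of this file), ``make`` ties the pieces together: ``thls.compile`` lowers a restricted Python function onto the instruction set above, ``dimension_processor`` sizes the processor, and ``implement`` returns a ``ProcessorImpl`` exposing ``input``/``input_stb``/``output``/``output_stb``/``busy``:

    from artiq.gateware.drtio.wrpll import thls

    acc = 0

    def step(x):
        # only constructs supported by the instruction set above
        # (add/sub, multiply-shift, min/max, copy) may be used here
        global acc
        acc = acc + (x*4 >> 1)
        return acc

    # returns a Migen module; keyword arguments are forwarded to Processor()
    proc_impl = thls.make(step, data_width=32)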

View File

@ -259,25 +259,25 @@ class Sampler(_EEM):
ios = [
("sampler{}_adc_spi_p".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 0, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 1, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 1, "p")), Misc("DIFF_TERM=TRUE")),
iostandard(eem),
),
("sampler{}_adc_spi_n".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 0, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 1, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 1, "n")), Misc("DIFF_TERM=TRUE")),
iostandard(eem),
),
("sampler{}_pgia_spi_p".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 4, "p"))),
Subsignal("mosi", Pins(_eem_pin(eem, 5, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 6, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 6, "p")), Misc("DIFF_TERM=TRUE")),
Subsignal("cs_n", Pins(_eem_pin(eem, 7, "p"))),
iostandard(eem),
),
("sampler{}_pgia_spi_n".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 4, "n"))),
Subsignal("mosi", Pins(_eem_pin(eem, 5, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 6, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 6, "n")), Misc("DIFF_TERM=TRUE")),
Subsignal("cs_n", Pins(_eem_pin(eem, 7, "n"))),
iostandard(eem),
),
@ -633,14 +633,14 @@ class Mirny(_EEM):
("mirny{}_spi_p".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 0, "p"))),
Subsignal("mosi", Pins(_eem_pin(eem, 1, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 2, "p"))),
Subsignal("miso", Pins(_eem_pin(eem, 2, "p")), Misc("DIFF_TERM=TRUE")),
Subsignal("cs_n", Pins(_eem_pin(eem, 3, "p"))),
iostandard(eem),
),
("mirny{}_spi_n".format(eem), 0,
Subsignal("clk", Pins(_eem_pin(eem, 0, "n"))),
Subsignal("mosi", Pins(_eem_pin(eem, 1, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 2, "n"))),
Subsignal("miso", Pins(_eem_pin(eem, 2, "n")), Misc("DIFF_TERM=TRUE")),
Subsignal("cs_n", Pins(_eem_pin(eem, 3, "n"))),
iostandard(eem),
),

View File

@ -122,7 +122,7 @@ class RawSlicer(Module):
for i in range(out_size)})),
If(shift_buf, Case(self.source_consume,
{i: buf.eq(buf[i*g:])
for i in range(out_size)})),
for i in range(out_size + 1)})),
]
fsm = FSM(reset_state="FETCH")

View File

@ -21,7 +21,6 @@ from artiq.gateware.rtio.xilinx_clocking import RTIOClockMultiplier, fix_serdes_
from artiq.gateware import eem
from artiq.gateware.drtio.transceiver import gtp_7series
from artiq.gateware.drtio.siphaser import SiPhaser7Series
from artiq.gateware.drtio.wrpll import WRPLL, DDMTDSamplerGTP
from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer
from artiq.gateware.drtio import *
from artiq.build_soc import *
@ -282,6 +281,9 @@ class MasterBase(MiniSoC, AMPSoC):
platform = self.platform
if platform.hw_rev == "v2.0":
self.submodules.error_led = gpio.GPIOOut(Cat(
self.platform.request("error_led")))
self.csr_devices.append("error_led")
self.submodules += SMAClkinForward(platform)
i2c = self.platform.request("i2c")
@ -416,7 +418,10 @@ class MasterBase(MiniSoC, AMPSoC):
self.specials += Instance("IBUFDS_GTE2",
i_CEB=self.disable_cdr_clk_ibuf,
i_I=cdr_clk_clean.p, i_IB=cdr_clk_clean.n,
o_O=cdr_clk_clean_buf)
o_O=cdr_clk_clean_buf,
p_CLKCM_CFG="TRUE",
p_CLKRCV_TRST="TRUE",
p_CLKSWING_CFG=3)
# Note precisely the rules Xilinx made up:
# refclksel=0b001 GTREFCLK0 selected
# refclksel=0b010 GTREFCLK1 selected
@ -443,7 +448,7 @@ class SatelliteBase(BaseSoC):
}
mem_map.update(BaseSoC.mem_map)
def __init__(self, rtio_clk_freq=125e6, enable_sata=False, *, with_wrpll=False, gateware_identifier_str=None, hw_rev="v2.0", **kwargs):
def __init__(self, rtio_clk_freq=125e6, enable_sata=False, *, gateware_identifier_str=None, hw_rev="v2.0", **kwargs):
if hw_rev in ("v1.0", "v1.1"):
cpu_bus_width = 32
else:
@ -459,6 +464,11 @@ class SatelliteBase(BaseSoC):
platform = self.platform
if self.platform.hw_rev == "v2.0":
self.submodules.error_led = gpio.GPIOOut(Cat(
self.platform.request("error_led")))
self.csr_devices.append("error_led")
disable_cdr_clk_ibuf = Signal(reset=1)
disable_cdr_clk_ibuf.attr.add("no_retiming")
if self.platform.hw_rev == "v2.0":
@ -469,7 +479,10 @@ class SatelliteBase(BaseSoC):
self.specials += Instance("IBUFDS_GTE2",
i_CEB=disable_cdr_clk_ibuf,
i_I=cdr_clk_clean.p, i_IB=cdr_clk_clean.n,
o_O=cdr_clk_clean_buf)
o_O=cdr_clk_clean_buf,
p_CLKCM_CFG="TRUE",
p_CLKRCV_TRST="TRUE",
p_CLKSWING_CFG=3)
qpll_drtio_settings = QPLLSettings(
refclksel=0b001,
fbdiv=4,
@ -560,24 +573,7 @@ class SatelliteBase(BaseSoC):
rtio_clk_period = 1e9/rtio_clk_freq
self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6)
if with_wrpll:
self.submodules.wrpll_sampler = DDMTDSamplerGTP(
self.drtio_transceiver,
platform.request("cdr_clk_clean_fabric"))
helper_clk_pads = platform.request("ddmtd_helper_clk")
self.submodules.wrpll = WRPLL(
helper_clk_pads=helper_clk_pads,
main_dcxo_i2c=platform.request("ddmtd_main_dcxo_i2c"),
helper_dxco_i2c=platform.request("ddmtd_helper_dcxo_i2c"),
ddmtd_inputs=self.wrpll_sampler)
self.csr_devices.append("wrpll")
# note: do not use self.wrpll.cd_helper.clk; otherwise, vivado craps out with:
# critical warning: create_clock attempting to set clock on an unknown port/pin
# command: "create_clock -period 7.920000 -waveform {0.000000 3.960000} -name
# helper_clk [get_xlnx_outside_genome_inst_pin 20 0]
platform.add_period_constraint(helper_clk_pads.p, rtio_clk_period*0.99)
platform.add_false_path_constraints(self.crg.cd_sys.clk, helper_clk_pads.p)
else:
self.submodules.siphaser = SiPhaser7Series(
si5324_clkin=platform.request("cdr_clk") if platform.hw_rev == "v2.0"
else platform.request("si5324_clkin"),
@ -596,9 +592,6 @@ class SatelliteBase(BaseSoC):
platform.add_false_path_constraints(
self.crg.cd_sys.clk,
gtp.txoutclk, gtp.rxoutclk)
if with_wrpll:
platform.add_false_path_constraints(
helper_clk_pads.p, gtp.rxoutclk)
for gtp in self.drtio_transceiver.gtps[1:]:
platform.add_period_constraint(gtp.rxoutclk, rtio_clk_period)
platform.add_false_path_constraints(
@ -677,14 +670,11 @@ def main():
parser.add_argument("-V", "--variant", default="tester",
help="variant: {} (default: %(default)s)".format(
"/".join(sorted(VARIANTS.keys()))))
parser.add_argument("--with-wrpll", default=False, action="store_true")
parser.add_argument("--gateware-identifier-str", default=None,
help="Override ROM identifier")
args = parser.parse_args()
argdict = dict()
if args.with_wrpll:
argdict["with_wrpll"] = True
argdict["gateware_identifier_str"] = args.gateware_identifier_str
variant = args.variant.lower()

View File

@ -21,7 +21,6 @@ from artiq.gateware import fmcdio_vhdci_eem
from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_ultrascale, sawg
from artiq.gateware.drtio.transceiver import gth_ultrascale
from artiq.gateware.drtio.siphaser import SiPhaser7Series
from artiq.gateware.drtio.wrpll import WRPLL, DDMTDSamplerExtFF
from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer
from artiq.gateware.drtio import *
from artiq.build_soc import *
@ -54,7 +53,7 @@ class SatelliteBase(MiniSoC):
}
mem_map.update(MiniSoC.mem_map)
def __init__(self, rtio_clk_freq=125e6, identifier_suffix="", gateware_identifier_str=None, with_sfp=False, *, with_wrpll, **kwargs):
def __init__(self, rtio_clk_freq=125e6, identifier_suffix="", gateware_identifier_str=None, with_sfp=False, *, **kwargs):
MiniSoC.__init__(self,
cpu_type="vexriscv",
cpu_bus_width=64,
@ -69,10 +68,6 @@ class SatelliteBase(MiniSoC):
platform = self.platform
if with_wrpll:
clock_recout_pads = platform.request("ddmtd_rec_clk")
else:
clock_recout_pads = None
if with_sfp:
# Use SFP0 to connect to master (Kasli)
self.comb += platform.request("sfp_tx_disable", 0).eq(0)
@ -84,7 +79,7 @@ class SatelliteBase(MiniSoC):
data_pads=[drtio_uplink, platform.request("rtm_amc_link")],
sys_clk_freq=self.clk_freq,
rtio_clk_freq=rtio_clk_freq,
clock_recout_pads=clock_recout_pads)
clock_recout_pads=None)
self.csr_devices.append("drtio_transceiver")
self.submodules.rtio_tsc = rtio.TSC("sync", glbl_fine_ts_width=3)
@ -134,23 +129,6 @@ class SatelliteBase(MiniSoC):
rtio_clk_period = 1e9/rtio_clk_freq
self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6)
if with_wrpll:
self.comb += [
platform.request("filtered_clk_sel").eq(0),
platform.request("ddmtd_main_dcxo_oe").eq(1),
platform.request("ddmtd_helper_dcxo_oe").eq(1)
]
self.submodules.wrpll_sampler = DDMTDSamplerExtFF(
platform.request("ddmtd_inputs"))
self.submodules.wrpll = WRPLL(
helper_clk_pads=platform.request("ddmtd_helper_clk"),
main_dcxo_i2c=platform.request("ddmtd_main_dcxo_i2c"),
helper_dxco_i2c=platform.request("ddmtd_helper_dcxo_i2c"),
ddmtd_inputs=self.wrpll_sampler)
self.csr_devices.append("wrpll")
platform.add_period_constraint(self.wrpll.cd_helper.clk, rtio_clk_period*0.99)
platform.add_false_path_constraints(self.crg.cd_sys.clk, self.wrpll.cd_helper.clk)
else:
self.comb += platform.request("filtered_clk_sel").eq(1)
self.submodules.siphaser = SiPhaser7Series(
si5324_clkin=platform.request("si5324_clkin"),
@ -433,7 +411,6 @@ def main():
default="sawg",
help="Change type of signal generator. This is used exclusively for "
"development and debugging.")
parser.add_argument("--with-wrpll", default=False, action="store_true")
parser.add_argument("--gateware-identifier-str", default=None,
help="Override ROM identifier")
args = parser.parse_args()
@ -443,13 +420,11 @@ def main():
soc = Satellite(
with_sfp=args.sfp,
jdcg_type=args.jdcg_type,
with_wrpll=args.with_wrpll,
gateware_identifier_str=args.gateware_identifier_str,
**soc_sayma_amc_argdict(args))
elif variant == "simplesatellite":
soc = SimpleSatellite(
with_sfp=args.sfp,
with_wrpll=args.with_wrpll,
gateware_identifier_str=args.gateware_identifier_str,
**soc_sayma_amc_argdict(args))
else:

View File

@ -21,7 +21,6 @@ from artiq.gateware.rtio.phy import ttl_simple, ttl_serdes_7series
from artiq.gateware.rtio.xilinx_clocking import RTIOClockMultiplier, fix_serdes_timing_path
from artiq.gateware.drtio.transceiver import gtp_7series
from artiq.gateware.drtio.siphaser import SiPhaser7Series
from artiq.gateware.drtio.wrpll import WRPLL, DDMTDSamplerGTP
from artiq.gateware.drtio.rx_synchronizer import XilinxRXSynchronizer
from artiq.gateware.drtio import *
from artiq.build_soc import add_identifier
@ -34,7 +33,7 @@ class _SatelliteBase(BaseSoC):
}
mem_map.update(BaseSoC.mem_map)
def __init__(self, rtio_clk_freq, *, with_wrpll, gateware_identifier_str, **kwargs):
def __init__(self, rtio_clk_freq, *, gateware_identifier_str, **kwargs):
BaseSoC.__init__(self,
cpu_type="vexriscv",
cpu_bus_width=64,
@ -51,7 +50,10 @@ class _SatelliteBase(BaseSoC):
self.specials += Instance("IBUFDS_GTE2",
i_CEB=disable_cdrclkc_ibuf,
i_I=cdrclkc_clkout.p, i_IB=cdrclkc_clkout.n,
o_O=cdrclkc_clkout_buf)
o_O=cdrclkc_clkout_buf,
p_CLKCM_CFG="TRUE",
p_CLKRCV_TRST="TRUE",
p_CLKSWING_CFG=3)
qpll_drtio_settings = QPLLSettings(
refclksel=0b001,
fbdiv=4,
@ -96,25 +98,7 @@ class _SatelliteBase(BaseSoC):
gtp = self.drtio_transceiver.gtps[0]
rtio_clk_period = 1e9/rtio_clk_freq
self.config["RTIO_FREQUENCY"] = str(rtio_clk_freq/1e6)
if with_wrpll:
self.comb += [
platform.request("filtered_clk_sel").eq(0),
platform.request("ddmtd_main_dcxo_oe").eq(1),
platform.request("ddmtd_helper_dcxo_oe").eq(1)
]
self.submodules.wrpll_sampler = DDMTDSamplerGTP(
self.drtio_transceiver,
platform.request("cdr_clk_clean_fabric"))
self.submodules.wrpll = WRPLL(
helper_clk_pads=platform.request("ddmtd_helper_clk"),
main_dcxo_i2c=platform.request("ddmtd_main_dcxo_i2c"),
helper_dxco_i2c=platform.request("ddmtd_helper_dcxo_i2c"),
ddmtd_inputs=self.wrpll_sampler)
self.csr_devices.append("wrpll")
platform.add_period_constraint(self.wrpll.cd_helper.clk, rtio_clk_period*0.99)
platform.add_false_path_constraints(self.crg.cd_sys.clk, self.wrpll.cd_helper.clk)
platform.add_false_path_constraints(self.wrpll.cd_helper.clk, gtp.rxoutclk)
else:
self.comb += platform.request("filtered_clk_sel").eq(1)
self.submodules.siphaser = SiPhaser7Series(
si5324_clkin=platform.request("si5324_clkin"),
@ -258,7 +242,6 @@ def main():
soc_sayma_rtm_args(parser)
parser.add_argument("--rtio-clk-freq",
default=150, type=int, help="RTIO clock frequency in MHz")
parser.add_argument("--with-wrpll", default=False, action="store_true")
parser.add_argument("--gateware-identifier-str", default=None,
help="Override ROM identifier")
parser.set_defaults(output_dir=os.path.join("artiq_sayma", "rtm"))
@ -266,7 +249,6 @@ def main():
soc = Satellite(
rtio_clk_freq=1e6*args.rtio_clk_freq,
with_wrpll=args.with_wrpll,
gateware_identifier_str=args.gateware_identifier_str,
**soc_sayma_rtm_argdict(args))
builder = SatmanSoCBuilder(soc, **builder_argdict(args))

View File

@ -68,6 +68,7 @@ def do_dma(dut, address):
test_writes1 = [
(0x01, 0x23, 0x12, 0x33),
(0x901, 0x902, 0x11, 0xeeeeeeeeeeeeeefffffffffffffffffffffffffffffff28888177772736646717738388488),
(0x82, 0x289, 0x99, int.from_bytes(b"\xf0" * 64, "little")),
(0x81, 0x288, 0x88, 0x8888)
]

View File

@ -1,158 +0,0 @@
import unittest
import numpy as np
from migen import *
from artiq.gateware.drtio.wrpll.ddmtd import Collector
from artiq.gateware.drtio.wrpll import thls, filters
class HelperChainTB(Module):
def __init__(self, N):
self.tag_ref = Signal(N)
self.input_stb = Signal()
self.adpll = Signal((24, True))
self.out_stb = Signal()
###
self.submodules.collector = Collector(N)
self.submodules.loop_filter = thls.make(filters.helper, data_width=48)
self.comb += [
self.collector.tag_ref.eq(self.tag_ref),
self.collector.ref_stb.eq(self.input_stb),
self.collector.main_stb.eq(self.input_stb),
self.loop_filter.input.eq(self.collector.out_helper << 22),
self.loop_filter.input_stb.eq(self.collector.out_stb),
self.adpll.eq(self.loop_filter.output),
self.out_stb.eq(self.loop_filter.output_stb),
]
class TestDSP(unittest.TestCase):
def test_main_collector(self):
N = 2
collector = Collector(N=N)
# check collector phase unwrapping
tags = [(0, 0, 0),
(0, 1, 1),
(2, 1, -1),
(3, 1, -2),
(0, 1, -3),
(1, 1, -4),
(2, 1, -5),
(3, 1, -6),
(3, 3, -4),
(0, 0, -4),
(0, 1, -3),
(0, 2, -2),
(0, 3, -1),
(0, 0, 0)]
for i in range(10):
tags.append((i % (2**N), (i+1) % (2**N), 1))
def generator():
for tag_ref, tag_main, out in tags:
yield collector.tag_ref.eq(tag_ref)
yield collector.tag_main.eq(tag_main)
yield collector.main_stb.eq(1)
yield collector.ref_stb.eq(1)
yield
yield collector.main_stb.eq(0)
yield collector.ref_stb.eq(0)
while not (yield collector.out_stb):
yield
out_main = yield collector.out_main
self.assertEqual(out_main, out)
run_simulation(collector, generator())
def test_helper_collector(self):
N = 3
collector = Collector(N=N)
# check collector phase unwrapping
tags = [((2**N - 1 - tag) % (2**N), -1) for tag in range(20)]
tags += [((tags[-1][0] + 1 + tag) % (2**N), 1) for tag in range(20)]
tags += [((tags[-1][0] - 2 - 2*tag) % (2**N), -2) for tag in range(20)]
def generator():
for tag_ref, out in tags:
yield collector.tag_ref.eq(tag_ref)
yield collector.main_stb.eq(1)
yield collector.ref_stb.eq(1)
yield
yield collector.main_stb.eq(0)
yield collector.ref_stb.eq(0)
while not (yield collector.out_stb):
yield
out_helper = yield collector.out_helper
self.assertEqual(out_helper, out)
run_simulation(collector, generator())
# test helper collector + filter against output from MATLAB model
def test_helper_chain(self):
pll = HelperChainTB(15)
initial_helper_out = -8000
ref_tags = np.array([
24778, 16789, 8801, 814, 25596, 17612, 9628, 1646,
26433, 18453, 10474, 2496, 27287, 19311, 11337, 3364, 28160,
20190, 12221, 4253, 29054, 21088, 13124, 5161, 29966, 22005,
14045, 6087, 30897, 22940, 14985, 7031, 31847, 23895, 15944,
7995, 47, 24869, 16923, 8978, 1035, 25861, 17920, 9981,
2042, 26873, 18937, 11002, 3069, 27904, 19973, 12042, 4113,
28953, 21026, 13100, 5175, 30020, 22098, 14177, 6257, 31106,
23189, 15273, 7358, 32212, 24300, 16388, 8478, 569, 25429,
17522, 9617, 1712, 26577, 18675, 10774, 2875, 27745, 19848,
11951, 4056, 28930, 21038, 13147, 5256, 30135, 22247, 14361,
6475, 31359, 23476, 15595, 7714, 32603, 24725, 16847, 8971,
1096
])
adpll_sim = np.array([
8, 24, 41, 57, 74, 91, 107, 124, 140, 157, 173,
190, 206, 223, 239, 256, 273, 289, 306, 322, 339, 355,
372, 388, 405, 421, 438, 454, 471, 487, 504, 520, 537,
553, 570, 586, 603, 619, 636, 652, 668, 685, 701, 718,
734, 751, 767, 784, 800, 817, 833, 850, 866, 882, 899,
915, 932, 948, 965, 981, 998, 1014, 1030, 1047, 1063, 1080,
1096, 1112, 1129, 1145, 1162, 1178, 1194, 1211, 1227, 1244, 1260,
1276, 1293, 1309, 1326, 1342, 1358, 1375, 1391, 1407, 1424, 1440,
1457, 1473, 1489, 1506, 1522, 1538, 1555, 1571, 1587, 1604, 1620,
1636])
def sim():
yield pll.collector.out_helper.eq(initial_helper_out)
for ref_tag, adpll_matlab in zip(ref_tags, adpll_sim):
# feed collector
yield pll.tag_ref.eq(int(ref_tag))
yield pll.input_stb.eq(1)
yield
yield pll.input_stb.eq(0)
while not (yield pll.collector.out_stb):
yield
tag_diff = yield pll.collector.out_helper
while not (yield pll.loop_filter.output_stb):
yield
adpll_migen = yield pll.adpll
self.assertEqual(adpll_migen, adpll_matlab)
yield
run_simulation(pll, [sim()])

View File

@ -1,55 +0,0 @@
import unittest
from migen import *
from artiq.gateware.drtio.wrpll import thls
a = 0
def simple_test(x):
global a
a = a + (x*4 >> 1)
return a
class TestTHLS(unittest.TestCase):
def test_thls(self):
global a
proc = thls.Processor()
a = 0
cp = thls.compile(proc, simple_test)
print("Program:")
cp.pretty_print()
cp.dimension_processor()
print("Encoded program:", cp.encode())
proc_impl = proc.implement(cp.encode(), cp.data)
def send_values(values):
for value in values:
yield proc_impl.input.eq(value)
yield proc_impl.input_stb.eq(1)
yield
yield proc_impl.input.eq(0)
yield proc_impl.input_stb.eq(0)
yield
while (yield proc_impl.busy):
yield
@passive
def receive_values(callback):
while True:
while not (yield proc_impl.output_stb):
yield
callback((yield proc_impl.output))
yield
send_list = [42, 40, 10, 10]
receive_list = []
run_simulation(proc_impl, [send_values(send_list), receive_values(receive_list.append)])
print("Execution:", send_list, "->", receive_list)
a = 0
expected_list = [simple_test(x) for x in send_list]
self.assertEqual(receive_list, expected_list)

View File

@ -485,7 +485,7 @@ def is_experiment(o):
def is_public_experiment(o):
"""Checks if a Pyhton object is a top-level,
"""Checks if a Python object is a top-level,
non underscore-prefixed, experiment class.
"""
return is_experiment(o) and not o.__name__.startswith("_")

View File

@ -83,8 +83,7 @@ class RangeScan(ScanObject):
self.sequence = [i*dx + start for i in range(npoints)]
if randomize:
rng = random.Random(seed)
random.shuffle(self.sequence, rng.random)
random.Random(seed).shuffle(self.sequence)
def __iter__(self):
return iter(self.sequence)
@ -120,8 +119,7 @@ class CenterScan(ScanObject):
for i in range(n) for sign in [-1, 1]][1:]
if randomize:
rng = random.Random(seed)
random.shuffle(self.sequence, rng.random)
random.Random(seed).shuffle(self.sequence)
def __iter__(self):
return iter(self.sequence)
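For context on the change above: Python 3.11 removed the optional second (``random``) argument of ``random.shuffle`` (it had been deprecated since 3.9), so a seeded generator now has to perform the shuffle itself. A standalone illustration of the replacement pattern (not part of the patch):

    import random

    seq = list(range(8))
    # pre-3.11 form, no longer accepted: random.shuffle(seq, random.Random(1234).random)
    random.Random(1234).shuffle(seq)   # seeded, reproducible shuffle
    print(seq)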

View File

@ -1,6 +1,7 @@
import asyncio
import logging
import csv
import os.path
from enum import Enum
from time import time
@ -131,7 +132,8 @@ class RunPool:
writer.writerow([rid, start_time, expid["file"]])
def submit(self, expid, priority, due_date, flush, pipeline_name):
# mutates expid to insert head repository revision if None.
# mutates expid to insert head repository revision if None and
# replaces relative path with the absolute one.
# called through scheduler.
rid = self.ridc.get()
if "repo_rev" in expid:
@ -140,7 +142,10 @@ class RunPool:
wd, repo_msg = self.experiment_db.repo_backend.request_rev(
expid["repo_rev"])
else:
if "file" in expid:
expid["file"] = os.path.abspath(expid["file"])
wd, repo_msg = None, None
run = Run(rid, pipeline_name, wd, expid, priority, due_date, flush,
self, repo_msg=repo_msg)
if self.log_submissions is not None:
@ -425,7 +430,8 @@ class Scheduler:
When called through an experiment, the default values of
``pipeline_name``, ``expid`` and ``priority`` correspond to those of
the current run."""
# mutates expid to insert head repository revision if None
# mutates expid to insert head repository revision if None, and
# replaces relative file path with absolute one
if self._terminated:
return
try:

View File

@ -350,9 +350,13 @@ def main():
elif action == "analyze":
try:
exp_inst.analyze()
put_completed()
finally:
# browser's analyze shouldn't write results,
# since it doesn't run the experiment and cannot have rid
if rid is not None:
write_results()
put_completed()
elif action == "examine":
examine(ExamineDeviceMgr, ExamineDatasetMgr, obj["file"])
put_completed()

View File

@ -74,11 +74,17 @@ class RoundtripTest(ExperimentCase):
def test_object_list(self):
self.assertRoundtrip([object(), object()])
def test_object_tuple(self):
self.assertRoundtrip((False, object(), True, 0x12345678))
def test_list_tuple(self):
self.assertRoundtrip(([1, 2], [3, 4]))
def test_list_mixed_tuple(self):
self.assertRoundtrip([(0x12345678, [("foo", [0.0, 1.0], [0, 1])])])
self.assertRoundtrip([
(0x12345678, [("foo", [0.0, 1.0], [0, 1])]),
(0x23456789, [("bar", [2.0, 3.0], [2, 3])])])
self.assertRoundtrip([(0, 1.0, 0), (1, 1.5, 2), (2, 1.9, 4)])
def test_array_1d(self):
self.assertArrayRoundtrip(numpy.array([True, False]))
@ -520,19 +526,32 @@ class NumpyBoolTest(ExperimentCase):
class _Alignment(EnvExperiment):
def build(self):
self.setattr_device("core")
self.a = False
self.b = 1234.5678
self.c = True
self.d = True
self.e = 2345.6789
self.f = False
@rpc
def a_tuple(self) -> TList(TTuple([TBool, TFloat, TBool])):
return [(True, 1234.5678, True)]
def get_tuples(self) -> TList(TTuple([TBool, TFloat, TBool])):
return [(self.a, self.b, self.c), (self.d, self.e, self.f)]
@kernel
def run(self):
a, b, c = self.a_tuple()[0]
d, e, f = self.a_tuple()[0]
assert a == d
assert b == e
assert c == f
return 0
# Run two RPCs before checking to catch any obvious allocation size calculation
# issues (i.e. use of uninitialised stack memory).
tuples0 = self.get_tuples()
tuples1 = self.get_tuples()
for tuples in [tuples0, tuples1]:
a, b, c = tuples[0]
d, e, f = tuples[1]
assert a == self.a
assert b == self.b
assert c == self.c
assert d == self.d
assert e == self.e
assert f == self.f
class AlignmentTest(ExperimentCase):
@ -554,3 +573,29 @@ class NumpyQuotingTest(ExperimentCase):
def test_issue_1871(self):
"""Ensure numpy.array() does not break NumPy math functions"""
self.create(_NumpyQuoting).run()
class _IntBoundary(EnvExperiment):
def build(self):
self.setattr_device("core")
self.int32_min = numpy.iinfo(numpy.int32).min
self.int32_max = numpy.iinfo(numpy.int32).max
self.int64_min = numpy.iinfo(numpy.int64).min
self.int64_max = numpy.iinfo(numpy.int64).max
@kernel
def test_int32_bounds(self, min_val: TInt32, max_val: TInt32):
return min_val == self.int32_min and max_val == self.int32_max
@kernel
def test_int64_bounds(self, min_val: TInt64, max_val: TInt64):
return min_val == self.int64_min and max_val == self.int64_max
@kernel
def run(self):
self.test_int32_bounds(self.int32_min, self.int32_max)
self.test_int64_bounds(self.int64_min, self.int64_max)
class IntBoundaryTest(ExperimentCase):
def test_int_boundary(self):
self.create(_IntBoundary).run()

View File

@ -0,0 +1,59 @@
import unittest
import artiq.coredevice.exceptions as exceptions
from artiq.experiment import *
from artiq.test.hardware_testbench import ExperimentCase
from artiq.compiler.embedding import EmbeddingMap
from artiq.coredevice.core import test_exception_id_sync
"""
Test sync in exceptions raised between host and kernel
Check `artiq.compiler.embedding` and `artiq::firmware::ksupport::eh_artiq`
Considers the following two cases:
1) Exception raised on kernel and passed to host
2) Exception raised in a host function called from kernel
Ensures same exception is raised on both kernel and host in either case
"""
exception_names = EmbeddingMap().str_reverse_map
class _TestExceptionSync(EnvExperiment):
def build(self):
self.setattr_device("core")
@rpc
def _raise_exception_host(self, id):
exn = exception_names[id].split('.')[-1].split(':')[-1]
exn = getattr(exceptions, exn)
raise exn
@kernel
def raise_exception_host(self, id):
self._raise_exception_host(id)
@kernel
def raise_exception_kernel(self, id):
test_exception_id_sync(id)
class ExceptionTest(ExperimentCase):
def test_raise_exceptions_kernel(self):
exp = self.create(_TestExceptionSync)
for id, name in list(exception_names.items())[::-1]:
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_kernel(id)
self.assertEqual(str(ctx.exception).strip("'"), name)
def test_raise_exceptions_host(self):
exp = self.create(_TestExceptionSync)
for id, name in exception_names.items():
name = name.split('.')[-1].split(':')[-1]
with self.assertRaises(getattr(exceptions, name)) as ctx:
exp.raise_exception_host(id)

View File

@ -0,0 +1,51 @@
# RUN: %python -m artiq.compiler.testbench.embedding %s
from artiq.language.core import *
class InnerA:
def __init__(self, val):
self.val = val
@kernel
def run_once(self):
return self.val
class InnerB:
def __init__(self, val):
self.val = val
@kernel
def run_once(self):
return self.val
def make_runner(InnerCls, val):
class Runner:
def __init__(self):
self.inner = InnerCls(val)
@kernel
def run_once(self):
return self.inner.run_once()
return Runner()
class Parent:
def __init__(self):
self.a = make_runner(InnerA, 1)
self.b = make_runner(InnerB, 42.0)
@kernel
def run_once(self):
return self.a.run_once() + self.b.run_once()
parent = Parent()
@kernel
def entrypoint():
parent.run_once()

View File

@ -0,0 +1,16 @@
# RUN: %python -m artiq.compiler.testbench.embedding %s
from artiq.language.core import *
from artiq.language.types import *
@kernel
def consume_tuple(x: TTuple([TInt32, TBool])):
print(x)
@kernel
def return_tuple() -> TTuple([TInt32, TBool]):
return (123, False)
@kernel
def entrypoint():
consume_tuple(return_tuple())

View File

@ -141,132 +141,21 @@ class SchedulerCase(unittest.TestCase):
high_priority = 3
middle_priority = 2
low_priority = 1
# "late" is far in the future, beyond any reasonable test timeout.
late = time() + 100000
early = time() + 1
expect = [
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": low_priority,
"pipeline": "main",
"due_date": None,
"status": "pending",
"expid": expid_bg,
"flush": False
},
"key": 0
},
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": high_priority,
"pipeline": "main",
"due_date": late,
"status": "pending",
"expid": expid_empty,
"flush": False
},
"key": 1
},
{
"path": [],
"action": "setitem",
"value": {
"repo_msg": None,
"priority": middle_priority,
"pipeline": "main",
"due_date": early,
"status": "pending",
"expid": expid_empty,
"flush": False
},
"key": 2
},
{
"path": [0],
"action": "setitem",
"value": "preparing",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "prepare_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "running",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "preparing",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "prepare_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "paused",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "running",
"key": "status"
},
{
middle_priority_done = asyncio.Event()
def notify(mod):
# Watch for the "middle_priority" experiment's completion
if mod == {
"path": [2],
"action": "setitem",
"value": "run_done",
"key": "status"
},
{
"path": [0],
"action": "setitem",
"value": "running",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "analyzing",
"key": "status"
},
{
"path": [2],
"action": "setitem",
"value": "deleting",
"key": "status"
},
{
"path": [],
"action": "delitem",
"key": 2
},
]
done = asyncio.Event()
expect_idx = 0
def notify(mod):
nonlocal expect_idx
self.assertEqual(mod, expect[expect_idx])
expect_idx += 1
if expect_idx >= len(expect):
done.set()
}:
middle_priority_done.set()
scheduler.notifier.publish = notify
scheduler.start()
@ -275,7 +164,9 @@ class SchedulerCase(unittest.TestCase):
scheduler.submit("main", expid_empty, high_priority, late)
scheduler.submit("main", expid_empty, middle_priority, early)
loop.run_until_complete(done.wait())
# Check that the "middle_priority" experiment finishes. This would hang for a
# long time (until "late") if the due times were not being respected.
loop.run_until_complete(middle_priority_done.wait())
scheduler.notifier.publish = None
loop.run_until_complete(scheduler.stop())

View File

@ -9,14 +9,14 @@ Developing ARTIQ
The easiest way to obtain an ARTIQ development environment is via the Nix package manager on Linux. The Nix system is used on the `M-Labs Hydra server <https://nixbld.m-labs.hk/>`_ to build ARTIQ and its dependencies continuously; it ensures that all build instructions are up-to-date and allows binary packages to be used on developers' machines, in particular for large tools such as the Rust compiler.
ARTIQ itself does not depend on Nix, and it is also possible to compile everything from source (look into the ``flake.nix`` file and/or nixpkgs, and run the commands manually) - but Nix makes the process a lot easier.
* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
* Download Vivado from Xilinx and install it (by running the official installer in a FHS chroot environment if using NixOS; the ARTIQ flake provides such an environment, which can be entered with the command ``vivado-env``). If you do not want to write to ``/opt``, you can install it in a folder of your home directory. The "appropriate" Vivado version to use for building the bitstream can vary. Some versions contain bugs that lead to hidden or visible failures, others work fine. Refer to `Hydra <https://nixbld.m-labs.hk/>`_ and/or the ``flake.nix`` file from the ARTIQ repository in order to determine which version is used at M-Labs. If the Vivado GUI installer crashes, you may be able to work around the problem by running it in unattended mode with a command such as ``./xsetup -a XilinxEULA,3rdPartyEULA,WebTalkTerms -b Install -e 'Vitis Unified Software Platform' -l /opt/Xilinx/``.
* During the Vivado installation, uncheck ``Install cable drivers`` (they are not required as we use better and open source alternatives).
* Install the `Nix package manager <http://nixos.org/nix/>`_, version 2.4 or later. Prefer a single-user installation for simplicity.
* If you did not install Vivado in its default location ``/opt``, clone the ARTIQ Git repository and edit ``flake.nix`` accordingly.
* Enable flakes in Nix by e.g. adding ``experimental-features = nix-command flakes`` to ``nix.conf`` (for example ``~/.config/nix/nix.conf``).
* Enter the development shell by running ``nix develop git+https://github.com/m-labs/artiq.git``, or alternatively by cloning the ARTIQ Git repository and running ``nix develop`` at the root (where ``flake.nix`` is).
* Enter the development shell by running ``nix develop git+https://github.com/m-labs/artiq.git\?ref=release-7``, or alternatively by cloning the ARTIQ Git repository and running ``nix develop`` at the root (where ``flake.nix`` is).
* You can then build the firmware and gateware with a command such as ``$ python -m artiq.gateware.targets.kasli``. If you are using a JSON system description file, use ``$ python -m artiq.gateware.targets.kasli_generic file.json``.
* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli -V <your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the flake's development environment.
* Flash the binaries into the FPGA board with a command such as ``$ artiq_flash --srcbuild -d artiq_kasli/<your_variant>``. You need to configure OpenOCD as explained :ref:`in the user section <configuring-openocd>`. OpenOCD is already part of the flake's development environment.
* Check that the board boots and examine the UART messages by running a serial terminal program, e.g. ``$ flterm /dev/ttyUSB1`` (``flterm`` is part of MiSoC and installed in the flake's development environment). Leave the terminal running while you are flashing the board, so that you see the startup messages when the board boots immediately after flashing. You can also restart the board (without reflashing it) with ``$ artiq_flash start``.
* The communication parameters are 115200 8-N-1. Ensure that your user has access to the serial device (e.g. by adding the user account to the ``dialout`` group).

View File

@ -132,7 +132,7 @@ We suggest that you define a function ``get_argparser`` that returns the argumen
Logging
-------
For the debug, information and warning messages, use the ``logging`` Python module and print the log on the standard error output (the default setting). The logging level is by default "WARNING", meaning that only warning messages and more critical messages will get printed (and no debug nor information messages). By calling ``sipyco.common_args.verbosity_args`` with the parser as argument, you add support for the ``--verbose`` (``-v``) and ``--quiet`` (``-q``) arguments in the parser. Each occurence of ``-v`` (resp. ``-q``) in the arguments will increase (resp. decrease) the log level of the logging module. For instance, if only one ``-v`` is present in the arguments, then more messages (info, warning and above) will get printed. If only one ``-q`` is present in the arguments, then only errors and critical messages will get printed. If ``-qq`` is present in the arguments, then only critical messages will get printed, but no debug/info/warning/error.
For the debug, information and warning messages, use the ``logging`` Python module and print the log on the standard error output (the default setting). The logging level is by default "WARNING", meaning that only warning messages and more critical messages will get printed (and no debug nor information messages). By calling ``sipyco.common_args.verbosity_args`` with the parser as argument, you add support for the ``--verbose`` (``-v``) and ``--quiet`` (``-q``) arguments in the parser. Each occurrence of ``-v`` (resp. ``-q``) in the arguments will increase (resp. decrease) the log level of the logging module. For instance, if only one ``-v`` is present in the arguments, then more messages (info, warning and above) will get printed. If only one ``-q`` is present in the arguments, then only errors and critical messages will get printed. If ``-qq`` is present in the arguments, then only critical messages will get printed, but no debug/info/warning/error.
The program below exemplifies how to use logging: ::
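(The manual's own example program falls outside this diff hunk; as a stand-in, here is a minimal sketch. It assumes ``sipyco.common_args`` also provides ``init_logger_from_args``, the companion helper that applies the accumulated ``-v``/``-q`` count to the logging configuration.) ::

    import argparse
    import logging

    from sipyco.common_args import verbosity_args, init_logger_from_args

    logger = logging.getLogger(__name__)

    def get_argparser():
        parser = argparse.ArgumentParser(description="logging demo")
        verbosity_args(parser)           # adds -v / -q
        return parser

    def main():
        args = get_argparser().parse_args()
        init_logger_from_args(args)      # assumed helper, see note above
        logger.debug("shown with -v -v")
        logger.info("shown with -v")
        logger.warning("shown by default")

    if __name__ == "__main__":
        main()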

View File

@ -81,7 +81,7 @@ It will give you the ``/dev/ttyUSBxx`` (or the ``COMxx`` for Windows) device
names.
The ``hwid:`` field gives you the string you can pass via the ``hwgrep://``
feature of pyserial
`serial_for_url() <http://pyserial.sourceforge.net/pyserial_api.html#serial.serial_for_url>`_
`serial_for_url() <https://pythonhosted.org/pyserial/pyserial_api.html#serial.serial_for_url>`_
in order to open a serial device.
The preferred way to specify a serial device is to make use of the ``hwgrep://``

View File

@ -23,10 +23,10 @@ As a very first step, we will turn on a LED on the core device. Create a file ``
The central part of our code is our ``LED`` class, which derives from :class:`artiq.language.environment.EnvExperiment`. Among other features, :class:`~artiq.language.environment.EnvExperiment` calls our :meth:`~artiq.language.environment.Experiment.build` method and provides the :meth:`~artiq.language.environment.HasEnvironment.setattr_device` method that interfaces to the device database to create the appropriate device drivers and make those drivers accessible as ``self.core`` and ``self.led``. The :func:`~artiq.language.core.kernel` decorator (``@kernel``) tells the system that the :meth:`~artiq.language.environment.Experiment.run` method must be compiled for and executed on the core device (instead of being interpreted and executed as regular Python code on the host). The decorator uses ``self.core`` internally, which is why we request the core device using :meth:`~artiq.language.environment.HasEnvironment.setattr_device` like any other.
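To make this structure concrete, a minimal ``led.py`` along the lines described above (a sketch; the manual's actual listing may differ in details): ::

    from artiq.experiment import *

    class LED(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("led")

        @kernel
        def run(self):
            self.core.reset()
            self.led.on()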
Copy the file ``device_db.py`` (containing the device database) from the ``examples/master`` folder of ARTIQ into the same directory as ``led.py`` (alternatively, you can use the ``--device-db`` option of ``artiq_run``). You will probably want to set the IP address of the core device in ``device_db.py`` so that the computer can connect to it (it is the ``host`` parameter of the ``comm`` entry). See :ref:`device-db` for more information. The example device database is designed for the ``nist_clock`` hardware adapter on the KC705; see :ref:`board-ports` for RTIO channel assignments if you need to adapt the device database to a different hardware platform.
You will need to supply the correct device database for your core device; it is generated by a Python script typically called ``device_db.py`` (see also :ref:`device_db`). If you purchased a system from M-Labs, the device database is provided either on the USB stick or inside ~/artiq on the NUC; otherwise, you can also find examples in the ``examples`` folder of ARTIQ, sorted inside the corresponding subfolder for your core device. Copy ``device_db.py`` into the same directory as ``led.py`` (or use the ``--device-db`` option of ``artiq_run``). The field ``core_addr``, placed at the top of the file, needs to match the IP address of your core device so your computer can communicate with it. If you purchased a pre-assembled system it is normally already set correctly.
.. note::
To obtain the examples, you can find where the ARTIQ package is installed on your machine with: ::
To access the examples, you can find where the ARTIQ package is installed on your machine with: ::
python3 -c "import artiq; print(artiq.__path__[0])"

View File

@ -10,7 +10,7 @@ Starting your first experiment with the master
In the previous tutorial, we used the ``artiq_run`` utility to execute our experiments, which is a simple stand-alone tool that bypasses the ARTIQ management system. We will now see how to run an experiment using the master (the central program in the management system that schedules and executes experiments) and the dashboard (that connects to the master and controls it).
First, create a folder ``~/artiq-master`` and copy the file ``device_db.py`` (containing the device database) found in the ``examples/master`` directory from the ARTIQ sources. The master uses those files in the same way as ``artiq_run``.
First, create a folder ``~/artiq-master`` and copy your ``device_db.py`` into it (the file containing the device database, found either with your materials or in the ``examples`` subfolder for your core device, as described in :ref:`connecting-to-the-core-device`). The master uses this file in the same way as ``artiq_run``.
Then create a ``~/artiq-master/repository`` sub-folder to contain experiments. The master scans this ``repository`` folder to determine what experiments are available (the name of the folder can be changed using ``-r``).

View File

@ -14,14 +14,14 @@ In the current state of affairs, we recommend that Linux users install ARTIQ via
Installing via Nix (Linux)
--------------------------
First, install the Nix package manager. Some distributions provide a package for the Nix package manager, otherwise, it can be installed via the script on the `Nix website <http://nixos.org/nix/>`_. Make sure you get Nix version 2.4 or higher.
First, install the Nix package manager. Some distributions provide a package for the Nix package manager, otherwise, it can be installed via the script on the `Nix website <https://nixos.org/download/>`_. Make sure you get Nix version 2.4 or higher.
Once Nix is installed, enable Flakes: ::
$ mkdir -p ~/.config/nix
$ echo "experimental-features = nix-command flakes" > ~/.config/nix/nix.conf
The easiest way to obtain ARTIQ is then to install it into the user environment with ``$ nix profile install git+https://github.com/m-labs/artiq.git``. Answer "Yes" to the questions about setting Nix configuration options. This provides a minimal installation of ARTIQ where the usual commands (``artiq_master``, ``artiq_dashboard``, ``artiq_run``, etc.) are available.
The easiest way to obtain ARTIQ is then to install it into the user environment with ``$ nix profile install git+https://github.com/m-labs/artiq.git\?ref=release-7``. Answer "Yes" to the questions about setting Nix configuration options. This provides a minimal installation of ARTIQ where the usual commands (``artiq_master``, ``artiq_dashboard``, ``artiq_run``, etc.) are available.
This installation is however quite limited, as Nix creates a dedicated Python environment for the ARTIQ commands alone. This means that other useful Python packages that you may want (pandas, matplotlib, ...) are not available to them.
@ -30,8 +30,8 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
::
{
inputs.artiq.url = "git+https://github.com/m-labs/artiq.git";
inputs.extrapkg.url = "git+https://git.m-labs.hk/M-Labs/artiq-extrapkg.git";
inputs.artiq.url = "git+https://github.com/m-labs/artiq.git?ref=release-7";
inputs.extrapkg.url = "git+https://git.m-labs.hk/M-Labs/artiq-extrapkg.git?ref=release-7";
inputs.extrapkg.inputs.artiq.follows = "artiq";
outputs = { self, artiq, extrapkg }:
let
@ -79,6 +79,10 @@ Installing multiple packages and making them visible to the ARTIQ commands requi
];
};
};
nixConfig = { # work around https://github.com/NixOS/nix/issues/6771
extra-trusted-public-keys = "nixbld.m-labs.hk-1:5aSRVA5b320xbNvu30tqxVPXpld73bhtOeH6uAjRyHc=";
extra-substituters = "https://nixbld.m-labs.hk";
};
}
@ -96,7 +100,7 @@ Installing via Conda (Windows, Linux)
.. warning::
For Linux users, the Nix package manager is preferred, as it is more reliable and faster than Conda.
First, install `Anaconda <https://www.anaconda.com/distribution/>`_ or the more minimalistic `Miniconda <https://conda.io/en/latest/miniconda.html>`_.
First, install `Anaconda <https://www.anaconda.com/download>`_ or the more minimalistic `Miniconda <https://conda.io/en/latest/miniconda.html>`_.
After installing either Anaconda or Miniconda, open a new terminal (also known as command line, console, or shell and denoted here as lines starting with ``$``) and verify the following command works::
@ -108,12 +112,18 @@ Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Atte
Set up the Conda channel and install ARTIQ into a new Conda environment: ::
$ conda config --prepend channels https://conda.m-labs.hk/artiq-beta
$ conda config --prepend channels https://conda.m-labs.hk/artiq-legacy
$ conda config --append channels conda-forge
$ conda create -n artiq artiq
.. note::
If you do not need to flash boards, the ``artiq`` package is sufficient. The packages named ``artiq-board-*`` contain only firmware for the FPGA board, and you should not install them unless you are reflashing an FPGA board. Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with Conda. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages.
The board-specific files containing bitstream and firmware for the FPGA board can be obtained through AFWS, and are only required when flashing. Controllers for third-party devices (e.g. Thorlabs TCube, Lab Brick Digital Attenuator, etc.) that are not shipped with ARTIQ can also be installed with Conda. Browse `Hydra <https://nixbld.m-labs.hk/project/artiq>`_ or see the list of NDSPs in this manual to find the names of the corresponding packages.
.. note::
On Windows, if the last command that creates and installs the ARTIQ environment fails with an error similar to "seeking backwards is not allowed", try to re-run the command with admin rights.
.. note::
For commercial use you might need a license for Anaconda/Miniconda or for using the Anaconda package channel. `Miniforge <https://github.com/conda-forge/miniforge>`_ might be an alternative in a commercial environment as it does not include the Anaconda package channel by default. If you want to use Anaconda/Miniconda/Miniforge in a commercial environment, please check the license and the latest terms of service.
After the installation, activate the newly created environment by name. ::
@ -219,7 +229,7 @@ If you have an active firmware subscription with M-Labs or QUARTIQ, you can obta
Run the command::
$ afws_client [username] build [variant] [afws_directory]
$ afws_client [username] build [afws_directory] [variant]
Replace ``[username]`` with the login name that was given to you with the subscription, ``[variant]`` with the name of your system variant, and ``[afws_directory]`` with the name of an empty directory, which will be created by the command if it does not exist. Enter your password when prompted and wait for the build (if applicable) and download to finish. If you experience issues with the AFWS client, write to the helpdesk@ email.
@ -261,19 +271,13 @@ If you purchased a Kasli device from M-Labs, it usually comes with the IP addres
and then reboot the device (with ``artiq_flash start`` or a power cycle).
If the ip config field is not set, or set to "use_dhcp" then the device will attempt to obtain an IP address using
DHCP. If a static IP address is wanted, install OpenOCD as before, and flash the IP (and, if necessary, MAC) addresses
directly: ::
In other cases, install OpenOCD as before, and flash the IP (and, if necessary, MAC) addresses directly: ::
$ artiq_mkfs flash_storage.img -s mac xx:xx:xx:xx:xx:xx -s ip xx.xx.xx.xx
$ artiq_flash -t [board] -V [variant] -f flash_storage.img storage start
For Kasli devices, flashing a MAC address is not necessary as they can obtain it from their EEPROM.
If DHCP has been used the address can be found in the console output, which can be viewed using: ::
$ python -m misoc.tools.flterm /dev/ttyUSB2
Check that you can ping the device. If ping fails, check that the Ethernet link LED is ON - on Kasli, it is the LED next to the SFP0 connector. As a next step, look at the messages emitted on the UART during boot. Use a program such as flterm or PuTTY to connect to the device's serial port at 115200bps 8-N-1 and reboot the device. On Kasli, the serial port is on FTDI channel 2 with v1.1 hardware (with channel 0 being JTAG) and on FTDI channel 1 with v1.0 hardware.
If you want to use IPv6, the device also has a link-local address that corresponds to its EUI-64, and an additional arbitrary IPv6 address can be defined by using the ``ip6`` configuration key. All IPv4 and IPv6 addresses can be used at the same time.

View File

@ -3,28 +3,28 @@ List of available NDSPs
The following network device support packages are available for ARTIQ. If you would like to add yours to this list, just send us an email or a pull request.
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Equipment | Nix package | Conda package | Documentation | URL |
+=================================+===================================+==================================+=====================================================================================================+========================================================+
+=================================+===================================+==================================+==================================================================+========================================================+
| PDQ2 | Not available | Not available | `HTML <https://pdq.readthedocs.io>`_ | https://github.com/m-labs/pdq |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Lab Brick Digital Attenuator | ``lda`` | ``lda`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/lda-manual-html/latest/download/1>`_ | https://github.com/m-labs/lda |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Novatech 409B | ``novatech409b`` | ``novatech409b`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/novatech409b-manual-html/latest/download/1>`_ | https://github.com/m-labs/novatech409b |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Thorlabs T-Cubes | ``thorlabs_tcube`` | ``thorlabs_tcube`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/thorlabs_tcube-manual-html/latest/download/1>`_ | https://github.com/m-labs/thorlabs_tcube |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Korad KA3005P | ``korad_ka3005p`` | ``korad_ka3005p`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/korad_ka3005p-manual-html/latest/download/1>`_ | https://github.com/m-labs/korad_ka3005p |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Newfocus 8742 | ``newfocus8742`` | ``newfocus8742`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/newfocus8742-manual-html/latest/download/1>`_ | https://github.com/quartiq/newfocus8742 |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Lab Brick Digital Attenuator | ``lda`` | ``lda`` | Not available | https://github.com/m-labs/lda |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Novatech 409B | ``novatech409b`` | ``novatech409b`` | Not available | https://github.com/m-labs/novatech409b |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Thorlabs T-Cubes | ``thorlabs_tcube`` | ``thorlabs_tcube`` | Not available | https://github.com/m-labs/thorlabs_tcube |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Korad KA3005P | ``korad_ka3005p`` | ``korad_ka3005p`` | Not available | https://github.com/m-labs/korad_ka3005p |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Newfocus 8742 | ``newfocus8742`` | ``newfocus8742`` | Not available | https://github.com/quartiq/newfocus8742 |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Princeton Instruments PICam | Not available | Not available | Not available | https://github.com/quartiq/picam |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
| Anel HUT2 power distribution | ``hut2`` | ``hut2`` | `HTML <https://nixbld.m-labs.hk/job/artiq/full/hut2-manual-html/latest/download/1>`_ | https://github.com/quartiq/hut2 |
+---------------------------------+-----------------------------------+----------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------------------------+
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| Anel HUT2 power distribution | ``hut2`` | ``hut2`` | Not available | https://github.com/quartiq/hut2 |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| TOPTICA lasers | ``toptica-lasersdk-artiq`` | ``toptica-lasersdk-artiq`` | Not available | https://github.com/quartiq/lasersdk-artiq |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| HighFinesse wavemeters | ``highfinesse-net`` | ``highfinesse-net`` | Not available | https://github.com/quartiq/highfinesse-net |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
| InfluxDB database | Not available | Not available | `HTML <https://gitlab.com/charlesbaynham/artiq_influx_generic>`_ | https://gitlab.com/charlesbaynham/artiq_influx_generic |
+---------------------------------+-----------------------------------+----------------------------------+------------------------------------------------------------------+--------------------------------------------------------+
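
The packages listed above are standalone network device support packages (NDSPs): each ships a controller that is started separately and referenced from the experiment's device database. A minimal sketch of such a ``device_db.py`` entry for the Novatech 409B, assuming the package exposes its controller as ``aqctl_novatech409b`` and follows the usual sipyco ``-p {port} --bind {bind}`` conventions (the controller name, port number and flags here are assumptions; check the linked repository for the exact entry point):

    device_db = {
        "novatech409b": {
            "type": "controller",
            "best_effort": True,
            "host": "::1",
            "port": 3254,  # illustrative port, not an official assignment
            # command line assumed to follow the common sipyco controller options
            "command": "aqctl_novatech409b -p {port} --bind {bind}"
        }
    }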

Binary file not shown.

View File

@@ -2,6 +2,7 @@
"nodes": {
"artiq-comtools": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": [
"nixpkgs"
],
@@ -10,11 +11,11 @@
]
},
"locked": {
"lastModified": 1654007592,
"narHash": "sha256-vaDFhE1ItjqtIcinC/6RAJGbj44pxxMUEeQUa3FtgEE=",
"lastModified": 1717637438,
"narHash": "sha256-BXFidNm3Em8iChPGu1L0s2bY+f2yQ0VVid4MuOoTehw=",
"owner": "m-labs",
"repo": "artiq-comtools",
"rev": "cb73281154656ee8f74db1866859e31bf42755cd",
"rev": "78d27026efe76a13f7b4698a554f55811369ec4d",
"type": "github"
},
"original": {
@@ -23,14 +24,32 @@
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"mozilla-overlay": {
"flake": false,
"locked": {
"lastModified": 1657214286,
"narHash": "sha256-rO/4oymKXU09wG2bcTt4uthPCp1XsBZjxuCJo3yVXNs=",
"lastModified": 1704373101,
"narHash": "sha256-+gi59LRWRQmwROrmE1E2b3mtocwueCQqZ60CwLG+gbg=",
"owner": "mozilla",
"repo": "nixpkgs-mozilla",
"rev": "0508a66e28a5792fdfb126bbf4dec1029c2509e0",
"rev": "9b11a87c0cc54e308fa83aac5b4ee1816d5418a2",
"type": "github"
},
"original": {
@@ -41,11 +60,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1657123678,
"narHash": "sha256-cowVkScfUPlbBXUp08MeVk/wgm9E1zp1uC+9no2hZYw=",
"lastModified": 1685573264,
"narHash": "sha256-Zffu01pONhs/pqH07cjlF10NnMDLok8ix5Uk4rhOnZQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "316b762afdb9e142a803f29c49a88b4a47db80ee",
"rev": "380be19fbd2d9079f677978361792cb25e8a3635",
"type": "github"
},
"original": {
@@ -73,11 +92,11 @@
]
},
"locked": {
"lastModified": 1654830914,
"narHash": "sha256-tratXcWu6Dgzd0Qd9V6EMjuNlE9qDN1pKFhP+Gt0b64=",
"lastModified": 1717637367,
"narHash": "sha256-4mSm9wl5EMgzzrW6w86IDUevkEOT99FESHGcxcyQbD0=",
"owner": "m-labs",
"repo": "sipyco",
"rev": "58b0935f7ae47659abee5b5792fa594153328d6f",
"rev": "02b96ec2473a3c3d3c980899de2564ddce949dab",
"type": "github"
},
"original": {
@@ -89,11 +108,11 @@
"src-migen": {
"flake": false,
"locked": {
"lastModified": 1656649178,
"narHash": "sha256-A91sZRrprEuPOtIUVxm6wX5djac9wnNZQ4+cU1nvJPc=",
"lastModified": 1715484909,
"narHash": "sha256-4DCHBUBfc/VA+7NW2Hr0+JP4NnKPru2uVJyZjCCk0Ws=",
"owner": "m-labs",
"repo": "migen",
"rev": "0fb91737090fe45fd764ea3f71257a4c53c7a4ae",
"rev": "4790bb577681a8c3a8d226bc196a4e5deb39e4df",
"type": "github"
},
"original": {
@@ -105,11 +124,11 @@
"src-misoc": {
"flake": false,
"locked": {
"lastModified": 1649324486,
"narHash": "sha256-Mw/fQS3lHFvCm7L1k63joRkz5uyijQfywcOq+X2+o2s=",
"ref": "master",
"rev": "f1dc58d2b8c222ba41c25cee4301626625f46e43",
"revCount": 2420,
"lastModified": 1715647536,
"narHash": "sha256-q+USDcaKHABwW56Jzq8u94iGPWlyLXMyVt0j/Gyg+IE=",
"ref": "refs/heads/master",
"rev": "fea9de558c730bc394a5936094ae95bb9d6fa726",
"revCount": 2455,
"submodules": true,
"type": "git",
"url": "https://github.com/m-labs/misoc.git"
@@ -135,6 +154,21 @@
"repo": "pythonparser",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

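The lock file above pins every input to an exact revision via its "locked" section (rev, narHash, lastModified). A small sketch for listing those pins, assuming the file is saved as flake.lock in the current directory:

    import json

    with open("flake.lock") as f:
        lock = json.load(f)

    # Every input node except "root" carries a "locked" section with the
    # pinned revision and a Unix lastModified timestamp, as in the diff above.
    for name, node in lock["nodes"].items():
        locked = node.get("locked", {})
        if "rev" in locked:
            print(f"{name}: {locked['rev'][:7]} (lastModified {locked.get('lastModified')})")
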
View File

@@ -21,7 +21,7 @@
artiqVersionMajor = 7;
artiqVersionMinor = self.sourceInfo.revCount or 0;
artiqVersionId = self.sourceInfo.shortRev or "unknown";
artiqVersion = (builtins.toString artiqVersionMajor) + "." + (builtins.toString artiqVersionMinor) + "." + artiqVersionId + ".beta";
artiqVersion = (builtins.toString artiqVersionMajor) + "." + (builtins.toString artiqVersionMinor) + "." + artiqVersionId;
artiqRev = self.sourceInfo.rev or "unknown";
rustManifest = pkgs.fetchurl {
@@ -112,28 +112,6 @@
'';
};
llvmlite-new = pkgs.python3Packages.buildPythonPackage rec {
pname = "llvmlite";
version = "0.38.0";
src = pkgs.python3Packages.fetchPypi {
inherit pname;
version = "0.38.0";
sha256 = "qZ0WbM87EW87ntI7m3C6JBVkCpyXjzqqE/rUnFj0llw=";
};
nativeBuildInputs = [ pkgs.llvm_11 ];
# Disable static linking
# https://github.com/numba/llvmlite/issues/93
postPatch = ''
substituteInPlace ffi/Makefile.linux --replace "-static-libstdc++" ""
substituteInPlace llvmlite/tests/test_binding.py --replace "test_linux" "nope"
'';
# Set directory containing llvm-config binary
preConfigure = ''
export LLVM_CONFIG=${pkgs.llvm_11.dev}/bin/llvm-config
'';
doCheck = false; # FIXME
};
artiq = pkgs.python3Packages.buildPythonPackage rec {
pname = "artiq";
version = artiqVersion;
@@ -147,8 +125,8 @@
nativeBuildInputs = [ pkgs.qt5.wrapQtAppsHook ];
# keep llvm_x and lld_x in sync with llvmlite
propagatedBuildInputs = [ pkgs.llvm_11 pkgs.lld_11 llvmlite-new sipyco.packages.x86_64-linux.sipyco pythonparser artiq-comtools.packages.x86_64-linux.artiq-comtools ]
++ (with pkgs.python3Packages; [ pyqtgraph pygit2 numpy dateutil scipy prettytable pyserial python-Levenshtein h5py pyqt5 qasync tqdm ]);
propagatedBuildInputs = [ pkgs.llvm_11 pkgs.lld_11 sipyco.packages.x86_64-linux.sipyco pythonparser artiq-comtools.packages.x86_64-linux.artiq-comtools ]
++ (with pkgs.python3Packages; [ llvmlite pyqtgraph pygit2 numpy dateutil scipy prettytable pyserial python-Levenshtein h5py pyqt5 qasync tqdm ]);
dontWrapQtApps = true;
postFixup = ''
@@ -178,6 +156,7 @@
migen = pkgs.python3Packages.buildPythonPackage rec {
name = "migen";
src = src-migen;
format = "pyproject";
propagatedBuildInputs = [ pkgs.python3Packages.colorama ];
};
@@ -224,20 +203,6 @@
propagatedBuildInputs = with pkgs.python3Packages; [ pyserial prettytable msgpack migen ];
};
cargo-xbuild = rustPlatform.buildRustPackage rec {
pname = "cargo-xbuild";
version = "0.6.5";
src = pkgs.fetchFromGitHub {
owner = "rust-osdev";
repo = pname;
rev = "v${version}";
sha256 = "18djvygq9v8rmfchvi2hfj0i6fhn36m716vqndqnj56fiqviwxvf";
};
cargoSha256 = "13sj9j9kl6js75h9xq0yidxy63vixxm9q3f8jil6ymarml5wkhx8";
};
vivadoEnv = pkgs.buildFHSUserEnv {
name = "vivado-env";
targetPkgs = vivadoDeps;
@@ -246,7 +211,7 @@
vivado = pkgs.buildFHSUserEnv {
name = "vivado";
targetPkgs = vivadoDeps;
profile = "set -e; source /opt/Xilinx/Vivado/2021.2/settings64.sh";
profile = "set -e; source /opt/Xilinx/Vivado/2022.2/settings64.sh";
runScript = "vivado";
};
@@ -269,7 +234,6 @@
pkgs.lld_11
vivado
rustPlatform.cargoSetupHook
cargo-xbuild
];
buildPhase =
''
@@ -425,7 +389,6 @@
(pkgs.python3.withPackages(ps: with packages.x86_64-linux; [ migen misoc jesd204b artiq ps.paramiko ps.jsonschema microscope ]))
rustPlatform.rust.rustc
rustPlatform.rust.cargo
cargo-xbuild
pkgs.llvmPackages_11.clang-unwrapped
pkgs.llvm_11
pkgs.lld_11
@@ -442,6 +405,8 @@
];
shellHook = ''
export LIBARTIQ_SUPPORT=`libartiq-support`
export QT_PLUGIN_PATH=${pkgs.qt5.qtbase}/${pkgs.qt5.qtbase.dev.qtPluginPrefix}
export QML2_IMPORT_PATH=${pkgs.qt5.qtbase}/${pkgs.qt5.qtbase.dev.qtQmlPrefix}
'';
};

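The version change at the top of this file drops the ".beta" suffix: the ARTIQ version is now simply major.minor.id, where minor is the flake's revCount and id its short revision. A rough Python illustration with placeholder values (not taken from an actual checkout):

    artiq_version_major = 7
    artiq_version_minor = 1234        # placeholder for self.sourceInfo.revCount
    artiq_version_id = "abc1234"      # placeholder for self.sourceInfo.shortRev
    artiq_version = f"{artiq_version_major}.{artiq_version_minor}.{artiq_version_id}"
    print(artiq_version)              # "7.1234.abc1234", with no ".beta" suffix anymore
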
View File

@@ -11,7 +11,7 @@ def get_rev():
"""
def get_version():
return os.getenv("VERSIONEER_OVERRIDE", default="7.0.beta")
return os.getenv("VERSIONEER_OVERRIDE", default="7.0")
def get_rev():
return os.getenv("VERSIONEER_REV", default="unknown")
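
Both helpers above read the environment at call time, so the reported version and revision can be overridden without editing the file; an illustrative sketch:

    import os

    # Illustrative only: these variables are read by get_version()/get_rev() above.
    os.environ["VERSIONEER_OVERRIDE"] = "7.0.custom"  # get_version() -> "7.0.custom"
    os.environ["VERSIONEER_REV"] = "abc1234"          # get_rev() -> "abc1234"
    # Without the overrides, the defaults are now "7.0" and "unknown" respectively.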