Feature Request: EdgeCounters for NIST QC2 gateware #205
Several end users at NIST using the QC2 gateware on zc706 have requested EdgeCounters for some TTLs.
I tried to add this to the gateware target myself, but the build failed for reasons that weren't clear to me and I didn't have time to investigate further.
I think a reasonable solution is to add 4 EdgeCounter RTIO channels at the end of the RTIO channel list (to maintain device_db backward compatibility as much as possible), corresponding to the first 4 TTLInOut channels.
Four channels should be plenty - most experiments would only use one EdgeCounter, for counting a photomultiplier tube.
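For concreteness, here is a minimal sketch of what the appended `device_db` entries could look like - the device names and the base channel number are placeholders, not values from the actual QC2 device_db:

```python
# Sketch only: names and channel numbers are placeholders.
# Appending the counters after all existing channels keeps the current
# device_db entries (and their channel numbers) valid.
first_counter_channel = 0x38  # hypothetical channel of the first new counter

counter_entries = {
    f"ttl{i}_counter": {
        "type": "local",
        "module": "artiq.coredevice.edge_counter",
        "class": "EdgeCounter",
        "arguments": {"channel": first_counter_channel + i},
    }
    for i in range(4)
}
# device_db.update(counter_entries)  # merged into the existing device_db
```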
I made the changes, and it does compile, but I'd like you to test them out first before they're merged into the master branch here.
https://git.m-labs.hk/mwojcik/artiq-zynq/src/branch/qc2_edge_counters
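(For reference only, not the actual diff from that branch: this is roughly how edge-counter channels are typically appended in an ARTIQ gateware target, assuming the QC2 target keeps its TTLInOut phys and its channel list under names like `ttl_phys` and `rtio_channels` - both placeholders here.)

```python
# Hedged sketch, not the code from the linked branch.
from artiq.gateware import rtio
from artiq.gateware.rtio.phy import edge_counter

def append_edge_counters(target, ttl_phys, n=4):
    """Append one edge-counter RTIO channel per TTLInOut phy.

    Appending at the end of the channel list leaves all existing channel
    numbers, and therefore existing device_db entries, unchanged.
    """
    for phy in ttl_phys[:n]:
        counter = edge_counter.SimpleEdgeCounter(phy.input_state)
        target.submodules += counter
        target.rtio_channels.append(rtio.Channel.from_phy(counter))
```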
Build failed, but it seems like it may be something weird with my system. Freshly cloned from your fork, I get:
The detailed log seems to indicate something wrong with Vivado:
However, the file clearly exists:
On my system I have `/opt/Xilinx` symlinked to the actual install location. Could that be causing issues? I guess I can change this pretty easily in the build script to point to the actual Vivado location.

EDIT: I just verified that this is the same error I got when I tried to do this myself, and that my diff more or less matches yours. At least I knew what to do in principle, but this error is mysterious.
That said, building with the same command from artiq-zynq/master works fine, so my Vivado installation isn't completely broken.
Also, my nix-fu isn't completely up to scratch: I see that the Vivado location actually comes from `flake.nix` in the `artiq` repository. I'm not really sure how to add an entry to the local `flake.nix` to override this.

I would create a Vivado entry similar to the one in `artiq`, but with the changed location, as a variable in the local `flake.nix` - and then replace `artiqpkgs.vivado` (line 172) with your own package.

Yes - the symlink could be causing issues: nix builds derivations in a sandbox where access to the filesystem is blocked except for dependencies and sandbox paths.
Still failing with the same error after I exchange /opt/ for my path to Xilinx. See the attached diff for the changes I made (dummy path in the diff, but you get the picture). The real path is in /home/<user>/Xilinx.

Note, as I mentioned, that the symlinked structure worked fine when I was building from the `artiq-zynq` repository master branch.

Regardless, this seems to be an issue with my setup and is unrelated to the issue at hand - @mwojcik, if you attach the boot.bin binary I can get someone here at NIST to test the gateware.
@ljstephenson - Try the attached compiled nist_qc2_master - let me know if you need any other variant.
Sorry! I had this ready, but apparently .bin files (or files over 5 MB?) are not allowed; the upload gave no error, and I hadn't noticed that it did not attach. Here's the zip.
One of my colleagues has been trying to test the gateware and ran into the following problem: calling `fetch_count` on the counter hangs as if there were no input to fetch, despite an earlier call to `gate_rising`.

This is with master running software version 7.8123.3038639, on Windows 10. Other experiments ran normally.
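For context, a minimal sketch of the usage pattern that was hanging, assuming device names `ttl0` and `ttl0_counter` - the actual experiment code is not included in this thread:

```python
from artiq.experiment import EnvExperiment, kernel, ms

class CountRisingEdges(EnvExperiment):
    """Open a gate on an EdgeCounter and read back the number of edges."""
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl0")          # TTLInOut used as an input
        self.setattr_device("ttl0_counter")  # EdgeCounter on the same TTL

    @kernel
    def run(self):
        self.core.reset()
        self.ttl0.input()
        # Count rising edges during a 10 ms gate, then block until the
        # count for that gate is available. This fetch_count() call is
        # what was reported to hang.
        self.ttl0_counter.gate_rising(10*ms)
        n = self.ttl0_counter.fetch_count()
        print(n)
```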
I would like to follow up on this issue. Please let me know if we can provide more information to help resolve it.
That was my bad - an off-by-one error in the RTIO channel list in the `device_db`. Previously `ttl1_counter` would've yielded results from TTL0.

I tried that experiment code from @ljstephenson after fixing that problem and it doesn't hang anymore. After the pull request is merged, it should be available for you to download easily with AFWS; just remember to use the updated `device_db`.
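To make the off-by-one concrete (illustration only, with made-up channel numbers, assuming the counter attached to TTLi sits at `base + i` in the gateware channel list):

```python
base = 0x38  # made-up gateware channel of the counter attached to TTL0

# Mis-numbered device_db: every ttl<i>_counter pointed one channel too low,
# so ttl1_counter actually read the counter attached to TTL0.
buggy = {f"ttl{i}_counter": base + i - 1 for i in range(4)}

# Fixed device_db: each ttl<i>_counter points at its own counter channel.
fixed = {f"ttl{i}_counter": base + i for i in range(4)}
```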