Problems with DMA on Kasli-Soc #196

Open
opened 2022-07-15 22:11:57 +08:00 by mbirtwell · 28 comments

I've been having a play with a Kasli-SoC, and I'm having some trouble with DMA. It seems that whenever I try to play back a DMA recording containing 128 events or more it hangs. I guess this is to do with the FIFO size. In addition, if I try to play back substantially more events, say 250, have it hang, delete the experiment, then try a simple experiment that just writes to a channel, I get RTIO destination unreachable errors. I put a print at the start of rtio_csr::output and csr::rtio::o_status_read was returning 4 immediately. As far as I can tell that persisted until the Kasli-SoC was power cycled.
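For reference, a small sketch of how that status value can be decoded. The bit layout below (WAIT = 1, UNDERFLOW = 2, DESTINATION_UNREACHABLE = 4) is my reading of the mainline ARTIQ firmware and should be verified against the tree you are actually running:

```python
# Assumed RTIO output status bit layout (check against your firmware tree).
RTIO_O_STATUS_WAIT = 1 << 0                     # FIFO full, event not yet queued
RTIO_O_STATUS_UNDERFLOW = 1 << 1                # event timestamp already in the past
RTIO_O_STATUS_DESTINATION_UNREACHABLE = 1 << 2  # destination/link not reachable

def decode_o_status(status: int) -> list[str]:
    """Return the names of the flags set in an o_status read."""
    names = {
        RTIO_O_STATUS_WAIT: "WAIT",
        RTIO_O_STATUS_UNDERFLOW: "UNDERFLOW",
        RTIO_O_STATUS_DESTINATION_UNREACHABLE: "DESTINATION_UNREACHABLE",
    }
    return [name for bit, name in names.items() if status & bit]

print(decode_o_status(4))  # ['DESTINATION_UNREACHABLE']
```

Under that reading, a persistent value of 4 matches the destination unreachable errors described above.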

@sb10q can you guys reproduce this?

Owner

> I've been having a play with a Kasli-SoC, and I'm having some trouble with DMA. It seems that whenever I try and playback a DMA recording containing 128 events or more it hangs. I guess this is to do with the fifo size.

The kernel will hang until some events in the FIFO are consumed, then a new event is queued into the FIFO, and this repeats until everything has been pushed into the FIFO. The behavior should be the same as on Kasli. It is not exclusive to DMA either.
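In pseudo-Python, that submit behaviour is roughly the following (a sketch only — the real FIFO depth and draining live in gateware, and 128 here is just the threshold inferred above):

```python
import collections

FIFO_DEPTH = 128  # illustrative; matches the ~128-event threshold observed above

def play_back(events, fifo, consume_one):
    """Sketch of the blocking submit loop: when the FIFO is full the writer
    waits (o_status reports WAIT) until the hardware consumes an event,
    then keeps pushing until the whole trace has been queued."""
    for ev in events:
        while len(fifo) >= FIFO_DEPTH:
            consume_one(fifo)  # stands in for the gateware draining events
        fifo.append(ev)

fifo = collections.deque()
play_back(range(512), fifo, lambda f: f.popleft())
print(len(fifo))  # 128: everything queued, at most FIFO_DEPTH still pending
```

I.e. a full FIFO stalls submission temporarily rather than stopping it outright.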

> In addition if I try and playback substantially more events say 250, have it hang, delete the experiment then try a simple experiment that just writes to a channel I get RTIO destination unreachable errors.

So is this a DRTIO system? I tried a Kasli-SoC standalone and the pairing of a Kasli-SoC master and a Kasli v1.1 satellite. The DMA hangs as usual, but I don't see the error.

This is my experiment code:

```python
from artiq.experiment import *


class DMA(EnvExperiment):
    def build(self):
        ttl_device_name = "ttl8"
        self.setattr_device("core")
        self.setattr_device("core_dma")
        self.trace_name = "test_rtio"
        self.ttl = self.get_device(ttl_device_name)

    @kernel
    def record_many(self, n):
        with self.core_dma.record(self.trace_name):
            for i in range(n//2):
                delay(100*ms)
                self.ttl.on()
                delay(100*ms)
                self.ttl.off()

    @kernel
    def playback(self):
        handle = self.core_dma.get_handle(self.trace_name)
        self.core.break_realtime()
        self.core_dma.playback_handle(handle)

    @kernel
    def run(self):
        self.core.reset()
        self.record_many(512)
        self.playback()


class LED(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("ttl8")

    @kernel
    def run(self):
        self.ttl8.on()
```

I first run the DMA experiment, then it hangs. I then cancelled the experiment from the dashboard and ran the LED experiment.

Owner

> I get RTIO destination unreachable errors

Did you patch the capacitors on your board? The transceiver power supplies are unstable on the first batch of TS boards and need a capacitor replacement. Without it the DRTIO links fail to establish or are not stable. M-Labs boards should not be affected.

Author

(Sorry been on holiday)

> The kernel will hang until some events in the FIFO is consumed, then a new event will be queued into the FIFO, and repeat until everything is pushed into the FIFO. The behavior should be the same as Kasli. It is not an exclusive behavior of DMA either.

Yes, I understand that. When I say it's hung I mean that I've waited many orders of magnitude longer than I expect the DMA sequence playout to take and it's not finished.

> So is this a DRTIO system?

No, there's no DRTIO involved. This is a single Kasli-SoC.

> I first run the DMA experiment, then it hangs. I then cancelled the experiment from the dashboard and ran the LED experiment.

Is this an attempt to reproduce the RTIO unreachable errors? I'm a little bit concerned that I mis-reported the number of RTIO events that I had used to generate them. Would you mind trying with 10,000 events?

> Did you patch the capacitors on your board? The transceiver power supplies are unstable on the first batch of TS boards and need a capacitor replacement. Without it the DRTIO links fail to establish or are not stable. M-Labs boards should not be affected.

We did get them from Technosystems. We're just in the process of getting the capacitors and patching the boards. I'll try reproducing this again once we've done that.

Author

> > Did you patch the capacitors on your board? The transceiver power supplies are unstable on the first batch of TS boards and need a capacitor replacement. Without it the DRTIO links fail to establish or are not stable. M-Labs boards should not be affected.
>
> We did get them from Technosystems. We're just in the process of getting the capacitors and patching the boards. I'll try reproducing this again once we've done that.

We've now tried this with the boards patched and see the same misbehaviour.

> I first run the DMA experiment, then it hangs. I then cancelled the experiment from the dashboard and ran the LED experiment.

I'm sorry @occheung, I had not read your experiment closely enough before to spot the size of the delays you are using. I understand that the DMA playout blocks until all of the events have been queued in the FIFO. The difference between what I'm doing and what you are doing is that my DMA sequences should be played out in a few ms. E.g. if you change the delays in your experiments from 100ms to 30us, you'd expect the whole thing to play out in ~30ms. That's more representative of the kind of DMA sequence that I'm interested in. And this doesn't complete playout even if you wait many minutes.

I think I should be clear that when I say hanging I mean that it's taking so much longer than I think that it should take that I strongly suspect that it will never finish. In this case the DMA hanging is the primary bug that I'm reporting here, my observations about the unreachable errors are secondary and I mention them because I thought they might be helpful in diagnosing the underlying issue.
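To put a number on "so much longer than it should take", the expected playout time is simple arithmetic (using the 30 µs spacing suggested above as an example):

```python
# Expected playout duration of a DMA trace with n events at a fixed spacing.
def expected_playout_ms(n_events: int, spacing_us: float) -> float:
    return n_events * spacing_us / 1000.0

# 512 events spaced 30 us apart:
print(expected_playout_ms(512, 30.0))  # 15.36 -> tens of milliseconds
```

Even allowing generous overhead, that is tens of milliseconds; waiting many minutes without completion is what is being called a hang here.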

So, if we take this code as an example:

```python
from artiq.experiment import *
from artiq.coredevice.fastino import Fastino


class DMA(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_dma")
        self.trace_name = "test_rtio"
        # self.fastino: Fastino = self.get_device("fast4")
        self.fastino: Fastino = self.get_device("fastino0")

    @kernel
    def record_many(self, n):
        with self.core_dma.record(self.trace_name):
            for _ in range(n):
                delay(27 * us)
                self.fastino.set_dac_mu(0, 32500)

    @kernel
    def playback(self):
        handle = self.core_dma.get_handle(self.trace_name)
        print("Playing out DMA")
        start = self.core.get_rtio_counter_mu()
        self.core.break_realtime()
        self.core_dma.playback_handle(handle)
        end = self.core.get_rtio_counter_mu()
        print("Playout of DMA complete: ", end - start)

    @kernel
    def run(self):
        self.record_many(512)
        self.playback()
```
This runs on a Kasli v2.0 and prints:

```
print:Playout of DMA complete:  7144248
```

Which suggests that playout takes on the order of 8ms. On the Kasli-SoC this doesn't finish in the time that I'm willing to wait. If you do get this to work on the Kasli-SoC you'll likely encounter another bug that I'm about to raise.
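As a sanity check on that figure, converting machine units to wall time (assuming the usual 1 ns per machine unit; the actual ref period comes from the device_db):

```python
def mu_to_ms(mu: int, ref_period_ns: float = 1.0) -> float:
    # Assumes 1 ns machine-unit granularity unless overridden.
    return mu * ref_period_ns / 1e6

print(mu_to_ms(7144248))  # ~7.14 ms, i.e. on the order of the quoted 8 ms
```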

Author

I've raised the other issue that I was referring to as #201 (`get_rtio_counter_mu` always returns 0).

I also forgot to mention yesterday that I was reviewing the firmware code and I couldn't find anywhere that we add a sentinel event to the DMA buffer. In the Kasli v2.0 firmware we add a record with length == 0 and no other bytes [here](https://github.com/m-labs/artiq/blob/c800b6c8d3db08b11efaf761c6eb94af5a09d309/artiq/firmware/runtime/rtio_dma.rs#L44), which the gateware seems to require to signal the end of the DMA sequence ([here](https://github.com/m-labs/artiq/blob/8da924ec0fb5a1358831510b967753adc344936a/artiq/gateware/rtio/dma.py#L189)). I couldn't find a similar line in the Kasli Zynq firmware, so I added one:

```diff
diff --git a/src/runtime/src/kernel/dma.rs b/src/runtime/src/kernel/dma.rs
index a19d95f..9aa93d9 100644
--- a/src/runtime/src/kernel/dma.rs
+++ b/src/runtime/src/kernel/dma.rs
@@ -70,6 +70,7 @@ pub extern fn dma_record_stop(duration: i64) {
                        rtio::output_wide as *const ()).unwrap();

         let mut recorder = RECORDER.take().unwrap();
+        recorder.buffer.push(0);
         recorder.duration = duration;
         KERNEL_CHANNEL_1TO0.as_mut().unwrap().send(
             Message::DmaPutRequest(recorder)
```

This didn't change the behaviour for me. It seems likely that the byte following the buffer happens to be 0 anyway. But unless I've missed something we do need a change like this.
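To illustrate why that terminator matters, here is a toy model of the framing (not the real gateware format — real records also carry channel, timestamp, address and data — but the length prefix and the zero terminator are the point):

```python
def parse_dma_trace(buf: bytes) -> list[bytes]:
    """Toy parser for a length-prefixed DMA trace: each record begins with a
    length byte covering the whole record; a length byte of 0 ends the trace."""
    records, i = [], 0
    while True:
        length = buf[i]
        if length == 0:  # the sentinel dma_record_stop is expected to append
            return records
        records.append(buf[i:i + length])
        i += length

# Two 3-byte records followed by the 0 terminator:
trace = bytes([3, 0xAA, 0xBB, 3, 0xCC, 0xDD, 0])
print(len(parse_dma_trace(trace)))  # 2
```

Without the trailing 0 the engine reads whatever byte happens to follow the buffer, which is consistent with "that byte happens to be 0" masking the omission.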

esavkin was assigned by sb10q 2022-09-30 14:47:10 +08:00
Owner

I am trying to reproduce the issue, but I'm unable to make it hang. Moreover, I've run this code:

> So, if we take this code as an example:

and got nearly the same result as you mentioned for Kasli v2.0:

```
Playing out DMA
Playout of DMA complete:  7125224
```
Author

@esavkin, which firmware/gateware are you using? I'd like to try the same ones just in case anything has changed.

Author

I started out with the builds from https://nixbld.m-labs.hk/eval/4640, but I built my own from master when they didn't appear to be working.

Owner

> which firmware/gateware are you using?

current master's kasli_soc-master-jtag

Owner

@mbirtwell is this solved?

Author

I've not had a chance to retest this. I'd ideally like to use a build from hydra that's known to work for someone else when I do.

Owner

There are no hydra builds for Kasli-SoC, it's all AFWS.

Owner

Other than the demo master/satellite builds, but nobody uses those; they're mostly there just to check that the build doesn't break.

Author

So I've been using the master builds just to get going and do some benchmarking ahead of trying to use this in an actual system.

I've just done some more testing and it does in fact work for me now. I've tried to bisect where it starts working and I have:

| hydra build | result |
| ----------- | ------ |
| 4640 | broken |
| 4740 | broken |
| 4801 | broken |
| 4862 | works |
| 5027 | works |
| 5219 | works |

(Admittedly I was using my experiment from #201, but based on brief testing that seems to be a good proxy for these issues)

It looks kind of modal, as if something has changed to fix it, but the only commit between 4801 and 4862 is 4a4f7b0ddc, which updates the ARTIQ version and doesn't seem to have any relevant changes. The only dim possibility is a3ae82502c, but I think that's for a different board.

So yay it works. I'd love to know why it didn't work, why it does now, and why it won't break again. But I suppose you don't always get what you want.


I wanted to add onto this issue. We aren't using Kasli-SoC, but a ZC706 FPGA+Evaluation board. We also observe DMA hanging like @mbirtwell was (glad it's resolved for you!). The sequence plays but won't finish or raise any errors. We're using the zc706-nist_qc2-sd:134016 binary build.

We will try flashing to 138454 this afternoon or tomorrow and will let you know if that fixes the bug for us.

Owner

There seems to be some insane Vivado behavior if you connect a BRAM to the AXI HP ports directly, which is worked around in another project in 13a44fc185.
Someone should take a look at the netlist and see if there's any such connection with the ARTIQ DMA core.


By "someone should take a look" do you mean someone at M-Labs will do this? @sb10q


We finally got around to testing this with the latest build, zc706-nist_qc2-sd:138422, and the issue is persisting with no change in symptoms. Are there any other debugging steps we should take @sb10q ?


@sb10q I wanted to check back in about this and see if there is anything else we should try as the problem is persisting.


@sb10q This is still broken as of stable gateware build zc706-nist_qc2-sd:145490. It seems to work on beta build 145564, but that build causes the FPGA to disconnect from the host sporadically. Please let us know if there's anything we can do to help troubleshoot -- it's starting to limit what we can do experimentally.

Owner

I tried running this code on our zc706 with zc706-nist_qc2-jtag on the latest release-7 and master branches (https://nixbld.m-labs.hk/eval/5814 and https://nixbld.m-labs.hk/eval/5815 respectively):

```python
from artiq.experiment import *

class DMA(EnvExperiment):
    def build(self):
        self.setattr_device("core")
        self.setattr_device("core_dma")
        self.trace_name = "test_rtio"
        self.ttl = self.get_device("ttl0")

    @kernel
    def record_many(self, n):
        with self.core_dma.record(self.trace_name):
            for _ in range(n):
                delay(12*us)
                self.ttl.on()
                delay(13*us)
                self.ttl.off()

    @kernel
    def playback(self):
        handle = self.core_dma.get_handle(self.trace_name)
        print("Playing out DMA")
        start = self.core.get_rtio_counter_mu()
        self.core.break_realtime()
        self.core_dma.playback_handle(handle)
        end = self.core.get_rtio_counter_mu()
        print("Playout of DMA complete: ", end - start)

    @kernel
    def run(self):
        self.record_many(512)
        self.playback()
```

and got nearly the same result:

```bash
$ artiq_run dma_test.py
Playing out DMA
Playout of DMA complete:  9819152
```
Owner

I tried the same script from my previous comment on Kasli-SoC for firmware/gateware revision 0812f22423 (the commit just before the mentioned 4a4f7b0ddc), with the following configuration with real attached hardware (I changed ttl0 to ttl4 in the script):

```json
{
  "target": "kasli_soc",
  "min_artiq_version": "8.0",
  "variant": "test",
  "hw_rev": "v1.0",
  "base": "standalone",
  "core_addr": "192.168.1.42",
  "peripherals": [
    {
      "type": "dio",
      "board": "DIO_BNC",
      "ports": [0],
      "bank_direction_low": "input",
      "bank_direction_high": "output"
    }
  ]
}
```

and got the same result.
Please advise which exact configuration I should try, since I now suspect it could be some Vivado-related issue.

Owner

> Please advise which exact configuration should I try, since now I suspect it can be some Vivado related issue.

Please pay attention to the posts. They reference certain binary builds such as zc706-nist_qc2-sd:138422 and zc706-nist_qc2-sd:145490.

Owner

> certain binary builds such as zc706-nist_qc2-sd:138422 and zc706-nist_qc2-sd:145490.

I tried again builds 138422 and 145446 (same evaluation as 145490, but sd variant), downloaded from hydra. Still got the same results on my test script.

Owner

Possibly fixed by [ea9fe9b4e1](https://github.com/m-labs/artiq/commit/ea9fe9b4e1b43b4cbb4c79dea722e469540d2530).

@jfniedermeyer can you check?


Thanks for the update, @sb10q. We're about to travel for two weeks of conferences. We'll test it as soon as we're back.


@sb10q, using build 147779, which pulls in that PR, the issue persists with the same symptoms.

Reference: M-Labs/artiq-zynq#196