49: fix memory safety issue in ethernet interface (closes #33) r=jordens a=cjbe

The CPU is allowed to perform normal memory writes out of order. Here
the write to the OWN flag in the DMA descriptor (normal memory) could be
observed after the DMA tail pointer advance (in device memory, so not
itself reorderable). This meant the Ethernet DMA engine stalled when it
saw a descriptor it did not own, and only restarted and sent the packet
once the next packet was released.
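
The required ordering, as a minimal sketch (release_descriptor, own_word
and tail_ptr are illustrative names, not the actual driver identifiers):
mark the descriptor as owned by the DMA engine, issue a DSB, and only then
write the tail pointer doorbell.

// Hand a descriptor back to the DMA engine (illustrative sketch only).
fn release_descriptor(own_word: *mut u32, tail_ptr: *mut u32, desc_addr: u32) {
    const OWN: u32 = 1 << 31;
    unsafe {
        // Normal memory: mark the descriptor as owned by the DMA engine.
        own_word.write_volatile(own_word.read_volatile() | OWN);
        // Commit the descriptor update before the doorbell; without this
        // barrier the two stores may be observed in the opposite order.
        cortex_m::asm::dsb();
        // Device memory: advance the tail pointer so the DMA engine resumes.
        tail_ptr.write_volatile(desc_addr);
    }
}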

This fix will work as long as the CPU data cache is disabled. If we
want to enable the cache, the simplest method would be to mark SRAM3
as uncacheable via the MPU.
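
For reference, a rough sketch of such an MPU configuration, assuming an
STM32H743 (SRAM3 is 32 KiB at 0x3004_0000) and the cortex-m crate's raw
ARMv7-M MPU registers; the region number, base address and field encoding
below are assumptions for illustration, not code from this change.

fn mpu_mark_sram3_uncacheable(mpu: &mut cortex_m::peripheral::MPU) {
    const SRAM3_BASE: u32 = 0x3004_0000; // assumed SRAM3 base (STM32H743)
    unsafe {
        mpu.ctrl.write(0); // disable the MPU while reconfiguring
        mpu.rnr.write(0);  // select region 0
        mpu.rbar.write(SRAM3_BASE);
        // RASR: XN=1, AP=read/write (0b011), TEX=0b001 with C=0, B=0
        // (normal, non-cacheable), SIZE=14 (2^(14+1) = 32 KiB), ENABLE=1
        mpu.rasr.write((1 << 28) | (0b011 << 24) | (0b001 << 19) | (14 << 1) | 1);
        // re-enable the MPU, keeping the default map for unconfigured addresses
        mpu.ctrl.write((1 << 2) | 1); // PRIVDEFENA | ENABLE
    }
    cortex_m::asm::dsb();
    cortex_m::asm::isb();
}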

Co-authored-by: Chris Ballance <chris.ballance@physics.ox.ac.uk>
bors[bot] 2019-11-16 07:26:22 +00:00 committed by GitHub
commit 8045c19f53


@@ -245,7 +245,13 @@ impl RxRing {
let addr = &self.desc_buf[self.cur_desc] as *const _ as u32;
assert_eq!(addr & 0x3, 0);
let dma = unsafe { pac::Peripherals::steal().ETHERNET_DMA };
// Ensure changes to the descriptor (in particular, the OWN flag) are
// committed before DMA engine sees tail pointer store.
cortex_m::asm::dsb();
dma.dmacrx_dtpr.write(|w| unsafe { w.bits(addr) });
self.cur_desc = self.next_desc();
@@ -314,7 +320,13 @@ impl TxRing {
let addr = &self.desc_buf[self.cur_desc] as *const _ as u32;
assert_eq!(addr & 0x3, 0);
let dma = unsafe { pac::Peripherals::steal().ETHERNET_DMA };
// Ensure packet contents as well as changes to the descriptor have been
// committed before DMA engine sees the tail pointer store.
cortex_m::asm::dsb();
dma.dmactx_dtpr.write(|w| unsafe { w.bits(addr) });
}
}
@@ -490,6 +502,10 @@ impl Device {
});
eth_mtl.mtltx_qomr.modify(|_, w| w.ftq().set_bit());
// Ensure ring buffer descriptors have been set up in memory before
// enabling DMA engine.
cortex_m::asm::dsb();
// Manage DMA transmission and reception
eth_dma.dmactx_cr.modify(|_, w| w.st().set_bit());
eth_dma.dmacrx_cr.modify(|_, w| w.sr().set_bit());