Remove cortex-m dependencies for delay #2
Replaced cortex-m specific delay with `DelayUs` from `embedded-hal`.

What is the significance of b8094f84f3?

Revert the accidental change of example in eeeb162cc5.
😄 OK. What about eeeb162cc5, which uses `asm_delay` and `hal_delay` at different places? Is there an issue with using the same kind of timer for delaying for different purposes, e.g. setting SPI high/low, and delaying for 25 µs after writing ETHRST?

`SpiPort` needs to own a delay. The issue with using the delay from hal is that one of its components (`SYST`) is pretty much a singleton. If `SpiPort` takes away the delay, then delays in the main program can no longer be implemented using `Delay` from hal.

The `delay()` in `cortex-m` is totally reusable, as long as we know the `sysclk` frequency.

We should probably just not use the `Delay` from hal at all.
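For reference, reusing `cortex_m::asm::delay` only takes a cycle-count conversion. A minimal sketch, assuming a fixed 168 MHz sysclk (an example value; substitute the real clock):

```rust
// Busy-wait for roughly `us` microseconds on top of cortex_m::asm::delay.
// SYSCLK_HZ is an assumed example value and must match the actual core
// clock for the timing to hold.
const SYSCLK_HZ: u32 = 168_000_000;

fn delay_us(us: u32) {
    // Widen to u64 so the multiplication cannot overflow; round up by one.
    let cycles = (us as u64 * SYSCLK_HZ as u64 / 1_000_000) as u32 + 1;
    cortex_m::asm::delay(cycles);
}
```

Unlike hal's `Delay`, this borrows no peripheral, so the driver and the main program can both use it.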
from hal at all.@ -132,3 +140,3 @@
Ok(_) => {
// Disable chip select
- cortex_m::asm::delay(10_u32);
+ self.delay_us(1);
Wouldn't that significantly increase the delay, and noticeably slow things down?
This delay gets called when modifying a register.
Sending/receiving packets does require register reads/writes, but most of the SPI transactions should go to the SRAM buffer, given a large enough packet is sent.
When Dip comes back, I'd like to know the actual reason behind this delay between CS assertion/deassertion and the actual SPI transfer. Must the delay be in nanoseconds?
The delays are there to comply with the SPI specification of the ENC424J600 given in the datasheet (p. 148). However, the other functions that involve SPI transactions do not have such a delay, so the delay could be unnecessary.
(Note: however, if the delay is removed and the code is optimized, the CS pulse could become too short for the 16-bit read/write functions, as the 8-bit R/W is called back-to-back. I can only faintly recall this potential issue; please verify whether it is an issue at all.)
Anyway, these delays are only called after a complete SPI transaction, so extending them to 1 µs is fine; the delay runs only a fixed number of times per packet sent.
Agh! Finally found someone else complaining about a situation similar to ours: https://github.com/rust-embedded/embedded-hal/issues/264
I'm going to read through it and see what steps we can take.
@occheung Thanks for typing the feedback 😃
@ -0,0 +6,4 @@
+ frequency_ms: u32,
+ }
+ impl AsmDelay {
Do we really need to reinvent this? I would have thought there's already an STM32 crate that implements these embedded-hal delays.
Perhaps we can try this? https://crates.io/crates/asm-delay
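For scale, "reinventing" it amounts to roughly the following sketch against embedded-hal 0.2's blocking `DelayUs` trait (the struct layout here is an assumption, not this PR's actual code):

```rust
use embedded_hal::blocking::delay::DelayUs;

// Hand-rolled busy-wait delay provider. `sysclk_hz` must match the real
// core clock, or every delay will be proportionally off.
pub struct AsmDelay {
    sysclk_hz: u32,
}

impl AsmDelay {
    pub fn new(sysclk_hz: u32) -> Self {
        AsmDelay { sysclk_hz }
    }
}

impl DelayUs<u32> for AsmDelay {
    fn delay_us(&mut self, us: u32) {
        // Widen before multiplying so large `us` values cannot overflow.
        let cycles = (us as u64 * self.sysclk_hz as u64 / 1_000_000) as u32;
        cortex_m::asm::delay(cycles);
    }
}
```

The asm-delay crate linked above appears to package essentially this, which would save us from maintaining the conversion math ourselves.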
@ -81,14 +84,14 @@ impl <SPI: Transfer<u8>,
let econ2 = self.spi_port.read_reg_8b(spi::addrs::ECON2)?;
self.spi_port.write_reg_8b(spi::addrs::ECON2, 0x10 | (econ2 & 0b11101111))?;
// Wait for 25us
This comment is just an obvious English translation of the code. Remove.
It seems the embedded-hal delays have a resolution (µs) that is too coarse for this anyway.

What about simply putting a `delay_ns` callback in the `SpiPort` object, and then users of this library can set it to their own function based on `cortex_m::asm::delay`?

So, the reason I started using HAL's SysTick timer in 50c3003210 is that using CYCCNT isn't too accurate, e.g. for socket timeouts. I think it would be nice to keep that in our example, since I don't think polling the sockets requires nanosecond precision. However, I do see that embedded_hal's delay might not be suitable for interacting with the Ethernet controller. It looks plausible to me that `SpiEth` and `spi::SpiPort` absorb a "delay in nanoseconds" function in some way. For example, we can make `spi::SpiPort` take a function pointer when being init'ed. By doing so we could get rid of choosing between asm's and hal's delays completely. But one STM32-specific restriction still stands: there's only one SYST to serve one purpose at a time.

`asm::delay` doesn't use `SYST`, does it?

Ok. With a `delay_ns` closure, it should eliminate the need to own a delay structure inside the Ethernet driver as well.

No, `asm::delay` doesn't need to borrow the timer hardware, but the user of this crate needs to know what kind of function/closure they're going to pass to `SpiEth`/`spi::SpiPort`, i.e. whether it makes use of the timers or the Cortex-M instruction cycles. Either the user is responsible for that, or we restrict the type of the "delay" closure, which I'm not too sure how to do yet.

Is there a benefit to being able to specify a timer, or is it always going to be short delays done by `asm::delay` or a similar function?

To the best of my knowledge, hardware timers must produce more accurate timing than assembly, because one can use STM32 timers with interrupts to update a counter (e.g. a wall clock) accurately, while asm delays could be prolonged when interrupts are being serviced. That's why I think the driver crate itself doesn't need hardware timers, since we're just giving the controller hardware enough time to process data or switch between states, but our TCP example can keep using them.
@ -169,3 +181,3 @@
// Read MAC
let mut eth_mac_addr: [u8; 6] = [0; 6];
- spi_eth.read_from_mac(&mut eth_mac_addr);
+ spi_eth.read_from_mac(&mut eth_mac_addr).unwrap();
unrelated change
@ -1,5 +1,5 @@
use embedded_hal::{
- blocking::spi::Transfer,
+ blocking::{spi::Transfer},
not needed
@ -67,2 +69,3 @@
F: FnMut(u32) -> ()> SpiPort<SPI, NSS, F> {
// TODO: return as Result()
- pub fn new(spi: SPI, mut nss: NSS) -> Self {
+ pub fn new(spi: SPI, mut nss: NSS, f: F) -> Self {
why `f` and not `delay_ns`?
@ -97,1 +101,4 @@
+ pub fn delay_ns(time_ns: u32) {
+     cortex_m::asm::delay((time_ns*168_000_000)/1_000_000_000 + 1)
+ }
Where is this used?
Removed in 5bdfd21e93.
@ -155,2 +163,3 @@
clocks);
- enc424j600::SpiEth::new(spi_eth_port, spi1_nss)
+ let delay_ns_fp: fn(u32) -> () = |time_ns| {
fp?
I meant function pointer.
Changed to `delay_ns`.