
Time Synchronization with a High-performance CAN Controller

Hi.

Here’s the scenario - I need to synchronize the clocks on multiple nodes on a CAN bus. The uavcan.time.Synchronization protocol looks like the right approach.
To implement the protocol, it is necessary to determine precisely when the time master’s sync message has been delivered to the bus. With an SPI-connected CAN controller downstream of a buffered SPI controller, the only practical way to do this is via an interrupt that is raised when the CAN controller has finished sending the message.
But I only really want this interrupt to occur for uavcan.time.Synchronization messages - that way I can use the interrupt signal itself to latch a counter that is keeping track of system time - no software latency involved. The good news is that the Microchip MCP2518FD I’m using has 32 FIFOs that can be configured individually as TX or RX buffers, and each buffer’s priority and interrupt sources can be selected, as well.
If I designate one FIFO as a highest-priority TX FIFO, and enable the “TX buffer empty” interrupt for only that FIFO, I have exactly the setup I need.
Except that when I pop a frame out of UAVCAN (in the form of libcanard), the information that this is a uavcan.time.Synchronization message has been obscured, and I don’t know which FIFO to send it to. I can of course parse the extended ID to extract the message priority, but that relies on non-public features of the protocol.
Is there a better way to do this?
Thanks,

-Nick

It might make sense to manually create the uavcan.time.Synchronization frame and not send it through libcanard at all. The CAN ID and tail byte are pretty easy to construct, especially for single-frame transfers. The biggest thing to watch out for is managing the transfer-ID in the tail byte. Also, uavcan.time.Synchronization is sealed, so you don’t have to worry about anything changing on you.

libcanard is, after all, just a tool to make your life easier. It doesn’t hide the bitfields because they’re a private part of the specification; it hides them to make sure you don’t mess them up :slight_smile:

Specifically, you’d care about sections 4.2.1 and 4.2.2 of the Specification if you went this manual route.

1 Like

Hi David.

Thanks for the reply. On the face of it, there doesn’t seem to be much to recommend one approach over the other. But you should never underestimate the ability of a programmer to abuse your interfaces. :slightly_smiling_face:
I wonder if there would be any support for adding a user-definable field to the CanardTransfer and CanardFrame structs in libcanard, say a void* that would be ignored by the library other than to copy its value from a CanardTransfer to the resulting CanardFrames. How that would work for received frames isn’t obvious, though.

Thanks again,

-Nick

Hello Nick,

If you manage to implement the time synchronization protocol (either the libcanard one or your own version), could you share the typical results you obtain in terms of phase alignment and jitter?

I am looking for sub-millisecond performance with a real-time OS on microcontrollers, and I wonder if I should add a dedicated “IRQ line” to achieve it, or if the UAVCAN protocol is enough…

Thank you

Best regards

I can’t immediately back it up with data, but generally, sub-millisecond accuracy should not be a problem on a high-speed CAN bus. The critical factor is the accuracy of the TX timestamps you obtain at the publisher and the RX timestamps at the subscriber. Even if your hardware does not support hardware timestamping (STM32’s bxCAN, for example, doesn’t), in a typical application the worst-case IRQ latency for software timestamping can be kept under 100 microseconds, which effectively puts an upper bound on the worst-case error.

Hi @Jishin42.

I’ll post an update here when I get it working. Our application requires sub-microsecond synchronization between nodes, which is within the theoretical limits of the algorithm. Millisecond synchronization shouldn’t pose much of a problem.

-Nick

Hi @Jishin42.

I thought I’d update this thread, even though the work is not yet complete. Using an extended Kalman filter (EKF) to discipline the local clock, we have been able to reduce clock jitter to a couple of hundred microseconds without yet having eliminated software-related interrupt latency. Once we do, we expect to be able to tune the EKF to meet our sub-microsecond target, but that will have to wait for new hardware.

-Nick

1 Like

Thank you for the update, it looks promising