
Get actual time taken to transmit and receive CAN frame (from the hardware's perspective)

Hi All,
 
I would like to determine the time between when a frame is transmitted out of one XNET port and when it is received on another. For example, I have started a cyclic single-point output session on CAN1 that writes a frame every 100 ms, and a single-point input session to read the frames received on CAN2.

 

I would like to calculate exactly how long it took from the time the frame was sent out on one hardware interface (CAN1) until the time it was received by the other hardware interface (CAN2). I want this measurement to be completely independent of the XNET software APIs that write/read the frame; it must be the actual hardware time from transmission to reception.

 

Can anyone explain how I would achieve this? Do I need to use the hardware synchronization capability to get accurate measurements (it should be accurate to microseconds)?

Or can I just use the XNET Read (State Time Comm).vi to get the time the XNET hardware sent the first frame out, and use that as my T0?

 

I am using a PXI chassis with a PXIe-8840 embedded controller and a PXIe-8510 card as my XNET hardware.

 

Thanks

Message 1 of 13

What you are asking for still isn't clear.  It can be interpreted a few ways.

 

- The time from when the message starts being transmitted to when it is completely received (this can be calculated from the baud rate and payload size).
- The time for a bit to propagate down the wire (speed of light?).
- The time from the write command to the time the frame actually goes out (higher-priority messages may delay this one).
- The time from the write command to the time an acknowledgment comes back (priority could delay this, along with a corrupt message needing a retransmit).
- Or several others.

 

In all of these cases, I think the best approach is going to be to use DAQ hardware and read the bits as a buffered waveform, then look for the transitions and parse out the bits of the CAN frame as defined here. You can use buffered DAQ to read at a very high frequency and then post-process the waveform to find when the transmission of that ID starts and when the acknowledgement is seen. You may want to start with an oscilloscope that has CAN bus protocol support, then work on reading the same data with DAQ hardware.
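To make that idea concrete, here is a rough Python sketch of the post-processing step. Everything in it is an illustrative assumption (the tap point is presumed to idle high in the recessive state, like a transceiver's RXD pin), and a real decoder would also need to handle stuff bits, CRC checking, and so on:

```python
# Rough sketch: recover raw bit values from a buffered, oversampled DAQ
# capture of a CAN line. Assumes the signal idles high (recessive) and is
# pulled low for dominant bits, e.g. a transceiver RXD pin. Illustrative only.
import numpy as np

def waveform_to_bits(samples, sample_rate, baud, threshold=2.5, max_bits=160):
    """Digitize the waveform, find the first falling edge (start of frame),
    then sample one value per bit time at the middle of each bit."""
    digital = samples > threshold                    # True = recessive
    edges = np.where(np.diff(digital.astype(int)) == -1)[0]
    if edges.size == 0:
        return np.array([], dtype=int)               # no frame seen
    sof = edges[0] + 1                               # first dominant sample
    samples_per_bit = sample_rate / baud
    mid = sof + (np.arange(max_bits) + 0.5) * samples_per_bit
    idx = mid.astype(int)
    idx = idx[idx < samples.size]
    return digital[idx].astype(int)                  # 1 = recessive, 0 = dominant
```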

Message 2 of 13

I know we can calculate the time it should have taken, but we would like to measure what the actual time was. In other words, I don't care if the transmission of the message was delayed due to priority or any other reason; I just want an exact timestamp of when the message left the interface, to compare with the timestamp of the received message (which we already know from the read frame).

 

So my question is: using perhaps the echo functionality of the transmit bus, can I tell what time the message was transmitted from that bus? Something like this is explained in this post https://forums.ni.com/t5/Automotive-and-Embedded-Networks/XNET-Timestamp-and-Windows-Timestamp-Synch..., I just don't follow the implementation exactly. Perhaps you can shed some light?

 

Thanks

Message 3 of 13

When the baud rate is set, it is set. If the message was received, then the baud rate was consistent for the transmission of that message. This means a 500k baud rate will send 500,000 bits per second. This isn't theoretical but actual, and it can be measured with a scope; but again, it must hold if the data was read. If you receive the message and it isn't corrupt, then you know the transmission time, and you can calculate the payload size.

 

Lucky for you, there is already a VI for doing this, and there is going to be some variation due to the number of stuff bits that are needed. So for instance, with a baud rate of 500k, sending 8 bytes of payload on an extended ID, the time from when the transmission starts to when it ends will be between 262 and 310 microseconds, and a scope should confirm this. If the requirement is to prove that a transmission takes some amount of time, then measuring the baud rate and calculating the payload size should be sufficient. You know the baud rate, you know the payload, and you know the message was received; therefore it must have taken the amount of time calculated. This calculation was done using this VI, by the way:

 

<LabVIEW>\examples\CompactRIO\Module Specific\NI 985x\cRIO CAN Periodic Transmit\CAN Rate Calculator.vi
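For readers without the cRIO modules installed, the same calculation can be sketched in a few lines of Python. This is not the NI VI, just the idea behind it; the worst-case stuff-bit formula below is an upper bound, and exact figures vary slightly between references (this bound gives 320 us where the VI reports roughly 310 us):

```python
# Sketch of the CAN 2.0 frame-time calculation, not the NI CAN Rate
# Calculator.vi itself. Stuff bits depend on the data, so we return
# best- and worst-case durations including the 3-bit interframe space.

def can_frame_time_us(payload_bytes, baud, extended=True):
    data_bits = payload_bytes * 8
    # Fixed overhead: SOF, identifier (plus SRR/IDE for extended), RTR,
    # control field, 15-bit CRC, delimiters, ACK slot, 7-bit EOF.
    frame_bits = (64 if extended else 44) + data_bits
    # Bit stuffing applies only from SOF through the CRC sequence.
    stuffable_bits = (54 if extended else 34) + data_bits
    max_stuff = (stuffable_bits - 1) // 4        # worst-case upper bound
    bit_us = 1e6 / baud
    return ((frame_bits + 3) * bit_us,
            (frame_bits + 3 + max_stuff) * bit_us)

# Extended ID, 8-byte payload, 500 kbit/s -> (262.0, 320.0) microseconds.
print(can_frame_time_us(8, 500_000))
```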

 

Now if you are talking about the amount of time between the end of the transmission being sent and the end of the transmission being received, then I'd say that will be close to the speed of light depending on your medium, so roughly 300 million meters per second. At that point Heisenberg comes into play, and anything you use to measure it will have larger uncertainty than the signal itself. So with a 1 meter cable, roughly 0.000000003 seconds, or 3 nanoseconds.

 

I give these two answers because, again, you aren't being clear on what "the time the message left the interface" means. Do you mean the start of leaving, or the end of leaving? Same with the receive side.

Message 4 of 13

Thanks Hooovahh,

 

So if we can trust the calculated values, then I wonder why we need the XNET Connect Terminals.vi for synchronization? Where would it be useful?

 

Would you mind attaching that CAN Rate Calculator.vi? I don't have the cRIO modules installed.

 

Thank you for your prompt replies!

Message 5 of 13

Timing is a complex subject with quite a bit of nuance. CAN is not deterministic, and messages can be delayed for any number of reasons, such as loss of arbitration or bus errors.

 

XNET devices will attach a timestamp to a frame immediately after the acknowledge bit is transmitted or detected. These timestamps are only attached to frames read through input sessions. This allows us to define a specific instant in time when the frame has won arbitration and completed transmission without errors. If you want to know when the first bit of the frame was placed on the bus, the frame length will have to be calculated including stuff bits and the resulting time subtracted from the timestamp of the frame. Since we are working with hardware, the propagation time of the acknowledge bit from receiver to transmitter can usually be neglected.
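As a hedged illustration of that back-calculation (the frame parameters and the timestamp value below are assumptions for an extended-ID frame with 8 data bytes at 500 kbit/s, not outputs of the XNET API):

```python
# Estimate when the first bit of a frame hit the bus, given that XNET
# timestamps the frame at its acknowledge bit. All values illustrative.
BIT_US = 1e6 / 500_000          # 2 us per bit at 500 kbit/s

# Bits from SOF through the ACK slot: the 128 bits of an extended frame
# with 8 data bytes, minus the ACK delimiter (1) and EOF (7) after the ACK.
SOF_TO_ACK_BITS = 128 - 1 - 7   # = 120
MAX_STUFF = 29                  # worst-case stuff bits (data dependent)

ack_ts_us = 1_234_567.0         # hypothetical timestamp from an XNET read
sof_latest_us = ack_ts_us - SOF_TO_ACK_BITS * BIT_US
sof_earliest_us = ack_ts_us - (SOF_TO_ACK_BITS + MAX_STUFF) * BIT_US
print(f"first bit on the bus between {sof_earliest_us:.1f} and {sof_latest_us:.1f} us")
```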

 

XNET devices use device-driven DMA to move frame data from the host OS to the XNET firmware. The difference in time between calling XNET Write.vi and the first bit being transmitted on the bus is hardware dependent and difficult to measure. PXI/PCI hardware is very fast and the highest performing, on the order of tens of microseconds. Ethernet cDAQ is on the slow end of the spectrum due to its reliance on a non-deterministic TCP/IP transmission path and can be in the millisecond range. Once frame data has reached the XNET firmware, it will be transmitted as soon as the frame can win arbitration. Being completely implemented in hardware, the first bit of the frame will move through the XNET firmware and show up on the bus in well under a microsecond, which can usually be neglected.

 

For applications where high degrees of timing accuracy are required, there are additional details to consider. When a session is created, the hardware polls the host OS for the current time to populate the hardware timestamp register. This polling is performed asynchronously in software and results in a non-zero delta between the hardware timestamp register on the XNET device and the real-time clock on the host OS. The time delta is hardware dependent, with PXI/PCI typically being in the range of tens of microseconds. USB hardware is very close to that, but Ethernet cDAQ must accept the losses from the underlying TCP/IP bus path, which results in larger deltas. The XNET Connect Terminals.vi allows the hardware to start the session and increment its internal timestamp register synchronously with other devices. If the devices share a common time base and start time, the original time delta from session creation can be removed for very precise timestamp measurement.

 

 

Jeff L
National Instruments
Message 6 of 13

Thank you jefeL,

 

So what you're saying is that I can calculate the beginning of the frame write time (i.e., the moment the frame won arbitration) in reverse, by subtracting the calculated frame length, including stuff bits, from the timestamp of the received frame? Can you possibly attach an example of how to calculate the frame time?

 

Also, I am a little confused about the XNET Connect Terminals.vi that you mentioned for precise time measurements. I saw a post you made previously on this but was a little confused by it. So if I set the start trigger to be triggered by PXI_Trig0, and I receive a start trigger frame at time "X" and receive my actual frame at time "Y", what does the timestamp "X" of the start trigger frame tell me? And how does it relate to the timestamp "Y" of the actual received CAN data frame?

 

I have attached the VI you posted previously, with all the values I read saved as default values in the front panel controls. I am not sure why I am getting negative values. Basically, CAN1 is connected to CAN2 via a gateway that routes the message 0x100 from CAN1 directly to CAN2. The VI shows the results.

 

Thanks
Message 7 of 13
Solution
Accepted by topic author jkerrigan12

The attached example is meant to demonstrate concepts while configured in a loopback fashion, i.e., connecting CAN1 to CAN2 directly with a no-termination cable. The example uses the start trigger as a real-world physical event that can then be used as a reference point in time on which to base all subsequent timestamps. Both interfaces begin incrementing their internal hardware timestamp registers when the start trigger is detected, and they will not drift over time since they share a common time base. The problem is that both interfaces query the host OS for the current time in an asynchronous manner. As a consequence, the start trigger timestamp from CAN1 will be different from the start trigger timestamp of CAN2, even though both were started by the same physical signal. If we want to know how much delta is in the timestamps due to the asynchronous calls to the host OS, we can look at the "Relative Delta" indicator, which does not remove the asynchronous components.

 

To remove the asynchronous component, we subtract the start trigger timestamp from each subsequent timestamp, which results in a very accurate timestamp relative to when the start trigger occurred. Note that in the example the indicator showing the relative timestamp is called "xxx-T0 (Absolute)", which is a little confusing. Trx-T0 (Absolute) represents the delta in time between the start trigger and the ACK bit of the first frame received on the synchronized interface, CAN2. Since both interfaces started on the same physical trigger, we can perform the same calculation on the other interface and expect it to be very close. Ttx-T0 (Absolute) represents the same delta on the synchronizing interface, CAN1. Comparing the delta from each interface demonstrates that we have achieved tight synchronization, shown in the "Absolute Delta" indicator.
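In plain arithmetic, the correction described above looks like this (every timestamp below is a made-up stand-in for what the example VI actually reads):

```python
# Removing the asynchronous host-time component from XNET timestamps.
# All values are illustrative, in seconds.
t0_tx = 100.000050   # start-trigger timestamp as seen by CAN1
t0_rx = 100.000020   # start-trigger timestamp as seen by CAN2
t_tx  = 100.000950   # echo-frame timestamp on CAN1
t_rx  = 100.000918   # received-frame timestamp on CAN2

relative_delta = t0_tx - t0_rx      # asynchronous component ("Relative Delta")
ttx_minus_t0 = t_tx - t0_tx         # frame time relative to the trigger, CAN1
trx_minus_t0 = t_rx - t0_rx         # frame time relative to the trigger, CAN2
absolute_delta = ttx_minus_t0 - trx_minus_t0   # sync-corrected ("Absolute Delta")
print(relative_delta, ttx_minus_t0, trx_minus_t0, absolute_delta)
```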

 

It is important to note that the receiver actually drives the acknowledge bit, and in theory detects it first, thus applying its timestamp first. For this reason the example is configured to subtract the receiver's timestamp from the transmitter's timestamp, which should be the higher of the two when directly connected in a loopback. Once we introduce a gateway into the mix, this is no longer the case. The transmitter will receive an ACK from the gateway before the gateway can propagate that frame to the other CAN interface. This should explain the negative readings and higher deltas that you are seeing.

Jeff L
National Instruments
Message 8 of 13

Thanks JefeL,

 

This is making a little more sense to me now. So with my gateway connected between CAN1 and CAN2, am I correct in saying that the timestamp of the echo frame on the transmitting interface (CAN1) represents the time the gateway acknowledged (received) the frame, and the timestamp of the received frame on the receiving interface (CAN2) represents the time the gateway transmitted the frame?

 

So if I now want to figure out how long it takes to route a frame from CAN1 to CAN2 with a gateway connected in between (instead of doing a loopback), I can just use the absolute delta from this same example, is that correct?

 

Thank you for clarifying all this!

 

 

Message 9 of 13

You are correct. The timestamp of the echo frame on CAN1 represents the time the gateway acknowledged the initial frame transmission. The timestamp of the received frame on CAN2 is when the acknowledge bit was sent back to the gateway. Assuming no bus transmission errors, the frames will be identical, and both were timestamped at their acknowledgement. This allows us to compare the timestamps of each frame directly, because back-calculating the first bit would be the same for both frames. If we then remove the asynchronous components from each timestamp, the delta between them will be a very accurate representation of the total transmission time.
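As arithmetic, with illustrative numbers rather than values from your setup, the gateway routing time works out as:

```python
# Sync-corrected gateway routing latency: CAN2 receive timestamp minus
# CAN1 echo timestamp, each taken relative to its own start-trigger
# timestamp. All values are illustrative, in seconds.
t0_can1, t0_can2 = 5.000030, 5.000010   # start-trigger timestamps
echo_ts_can1 = 5.001200                  # gateway ACKed the original frame
recv_ts_can2 = 5.003950                  # gateway's copy ACKed on CAN2

routing_latency = (recv_ts_can2 - t0_can2) - (echo_ts_can1 - t0_can1)
print(f"gateway routing latency ~ {routing_latency * 1e6:.0f} us")
```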

Jeff L
National Instruments
Message 10 of 13