XNET Signals Lost


Hello Seniors,

 

In the past few days I have been trying a few things with XNET communication, and I have a few observations (and questions) that I would like more information about.

 

My System Setup:

  • cRIO-9067 with NI 9860 in Slot 8 (the other slots are populated by various DIO & AIO modules, but I am not using them in my project)
  • cDAQ-9174 with NI 9862 in Slot 1
  • NI USB-8473s for CAN

My Frame Information is as follows:

 

  • Application Protocol – None
  • Comment – None (blank)
  • ID – 1 to 125
  • Name – Frame_(ID)
  • Payload Length – 8 bytes
  • CAN Ext ID? – False
  • CAN Timing Type – Event Data
  • CAN Tx Time – 1 ms

Signal information:

  • Total number of signals – 1000
  • Size – 8 bits
  • Type – UI8

 

Attached you will find images that are as follows:

  • Create Session.jpg – image of the code used to create the session
  • Write(Tx).jpg – shows how XNET Write is used
  • Read(Rx).jpg – shows how XNET Read is used
  • XNet Monitor First 25.jpg – status in the XNET Bus Monitor of the first 25 frames
  • XNet Monitor Last 25.jpg – status in the XNET Bus Monitor of the last 25 frames
  • NI XNet CAN comm.jpg – status of the XNET Read VI used to read the CAN comm state

OBSERVATIONS:

  • I have observed that when I run my application, some of the frames simply drop over time, as you can see in the attached images. For example, the number of frames shown in each image is different, and the ones in light grey have dropped over time. Any thoughts as to why this can happen?
  • I have also observed during testing that, out of the 125 frames, the frames at the start generally drop out more often than the ones at the end. I could not understand why this is happening either.
  • Also, while testing we observed that there are no errors, as shown in the attached image.

A few more important things that might help, apart from the information stated above:

 

  1. Each frame has 8 signals (8 x 125 = 1000).
  2. We have two XNET sessions (one Tx on CAN1 and one Rx on CAN2); a sketch of this setup follows the list.
  3. The Tx write is called every second (1 Hz) with an array of 1000 DBL values passed to it.
  4. We create the database programmatically in ":memory:" for the 1000 signals and 125 frames.
  5. The CAN bus is configured at a 125 kbit/s baud rate.
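For reference, here is a minimal sketch of that Tx session using the NI-XNET C API (the actual application is in LabVIEW; the database alias, cluster name, and signal names below are hypothetical stand-ins, and the signal count is scaled down from 1000):

#include <stdio.h>
#include <nixnet.h>

#define NUM_SIGNALS 4   /* scaled down from the 1000 signals in the real application */

int main(void)
{
   nxSessionRef_t tx = 0;
   f64 values[NUM_SIGNALS] = {0};
   nxStatus_t status;

   /* One signal-output single-point session on CAN1. The third argument is a
      comma-separated list of <frame>.<signal> names from the database. */
   status = nxCreateSession("XNetDemo", "Cluster",
                            "Frame_1.Sig_1,Frame_1.Sig_2,Frame_2.Sig_1,Frame_2.Sig_2",
                            "CAN1", nxMode_SignalOutSinglePoint, &tx);
   if (nxSuccess != status) { printf("create failed: %d\n", (int)status); return 1; }

   for (int i = 0; i < 10; i++) {
      for (int s = 0; s < NUM_SIGNALS; s++)
         values[s] = (f64)(i + s);
      /* Each call replaces the pending value of every signal in the session;
         the frames themselves go out per their CAN Tx Time. */
      status = nxWriteSignalSinglePoint(tx, values, sizeof(values));
      if (nxSuccess != status) break;
      /* sleep ~1 s here to match the 1 Hz write rate described above */
   }

   nxClear(tx);
   return 0;
}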

Any help/suggestions on these findings would help me make progress.

Message 1 of 8

Hi All,

 

I have some new findings that I wanted to share with you.

 

  • Reduced the number of signals to 200, and the number of signals dropping on the XNET communication was zero. I ran the test for half an hour and observed no dropped signals.
  • Observed that if the number of signals is more than 260, one of the signals drops out; keeping the number below 250 seems acceptable for an XNET session.

 

Questions based on findings:

  • Is there a limit on the number of signals in a single XNET session? Is 256 the limit?
  • Can we have more than 256 signals if we divide them across multiple XNET sessions on the same XNET interface?

 

Thanks, I will keep on sharing my findings as I do my testing.

Message 2 of 8

I ran a quick test, and I could successfully transmit a maximum of 500 frames per interface with 8 signals per frame using the attached test. I simply used the Bus Monitor to receive the frames via an input stream session. I was using a cDAQ-9171 with a 9862 module; I would expect similar performance from the cRIO. What else is your application doing? Is your write loop actually executing at 1 ms?
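For anyone reproducing this, a rough C-API sketch of that receive side (an input stream session needs no signals, so ":memory:" plus an explicit baud rate should work; this mirrors the LabVIEW test rather than reproducing it exactly):

#include <stdio.h>
#include <nixnet.h>

int main(void)
{
   nxSessionRef_t rx = 0;
   u8 buffer[4096];
   u32 bytesRead = 0;
   u32 baud = 125000;

   if (nxSuccess != nxCreateSession(":memory:", "", "", "CAN2",
                                    nxMode_FrameInStream, &rx))
      return 1;

   /* No database means the interface baud rate must be set explicitly. */
   nxSetProperty(rx, nxPropSession_IntfBaudRate, sizeof(baud), &baud);

   /* Drain whatever frames have arrived; a 0.0 timeout returns immediately. */
   while (nxSuccess == nxReadFrame(rx, buffer, sizeof(buffer), 0.0, &bytesRead)) {
      if (bytesRead == 0) break;      /* nothing pending right now */
      /* buffer now holds one or more raw frames (nxFrameVar_t records);
         count or inspect them here, as the Bus Monitor does. */
   }

   nxClear(rx);
   return 0;
}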

 

Can you post a test case VI that demonstrates the behavior?

 

Jeff L
National Instruments
Message 3 of 8

Hello Jeff,

 

Thanks for your response.

 

I went through your code and found that the CAN transmit time selected in it is 1 second, which we would like to decrease to maybe 1 ms, as we want our XNET communication to be as fast as possible.

 

If I change that part of the code and run it on my system, I see the same kind of response in the XNET Bus Monitor.

 

Regarding our loop rate: it is actually running at 100 ms, not 1 ms.

 

Also, is there a way to check whether XNET signals are dropping? Since there is no error out, is there a property node or warning code we can use to handle this kind of situation?

 

The attached images show the code changes and the XNET Bus Monitor response.

 

If you feel that my understanding of transmit time is wrong, please guide me on that.

 

Thanks

Lakhvir

Message 4 of 8

It seems you are well beyond 100% bus load with a transmit time of 1 ms per frame. At 125 kbaud we can expect a real-world transmission rate of slightly less than 1 ms per frame; it will vary a little due to stuff bits and processing overhead. When you call XNET Write, the signals are placed in their frames and sent to a hardware buffer. Since we are working in single-point mode, the following call to XNET Write will overwrite the data in the hardware buffer, because not all of the frames have been sent out on the bus. At 100% bus load, some frames will simply never "get their turn" at the front of the hardware buffer. Try slowing down your overall loop rate to allow all of the data to be sent, or increasing your transmit time (which is interpreted as a minimum separation between transmissions). In my tests I can successfully send all the signals with a loop rate of 250 ms and a transmit time of 0.001 s; we simply have to provide enough time for the frames to transmit. The Bus Monitor also has an indicator for bus load, which is very useful for diagnosing these kinds of problems.
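To put rough numbers on that (a back-of-the-envelope sketch; exact bit counts depend on stuffing):

#include <stdio.h>

int main(void)
{
   const double baud      = 125000.0; /* bit/s */
   const double bitsMin   = 111.0;    /* standard 11-bit-ID frame, 8 data bytes, no stuff bits */
   const double bitsWorst = 135.0;    /* same frame with worst-case stuff bits */
   const int    frames    = 125;

   double tFrameMin = bitsMin   / baud;     /* ~0.89 ms: "slightly less than 1 ms" */
   double tFrameMax = bitsWorst / baud;     /* ~1.08 ms */
   double tBurst    = frames * tFrameMax;   /* ~135 ms to get all 125 frames out once */
   double loopFor90 = tBurst / 0.90;        /* ~150 ms loop period for <90% bus load */

   printf("per frame: %.2f to %.2f ms\n", tFrameMin * 1e3, tFrameMax * 1e3);
   printf("one full 125-frame update: %.0f ms\n", tBurst * 1e3);
   printf("loop period for <90%% bus load: >= %.0f ms\n", loopFor90 * 1e3);
   return 0;
}

So a 100 ms (let alone 1 ms) write loop can never drain all 125 frames at this baud rate, which is consistent with the drops described above.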

 

If you are concerned about losing signal data, you should use a queued or stream session type, as they use hardware buffers.

 

Jeff L
National Instruments
Message 5 of 8

Jeff,

As suggested, I tried running the application with a transmit time of 0.001 s and a loop rate of 250 ms, but I still see the same kind of response, i.e. signals dropping (we gradually stop receiving them in the NI-XNET Bus Monitor) over a period of time. Even with increased timings (a loop rate of 500 ms) I still see signals getting dropped. Is there any way to catch the signals that are dropped by XNET programmatically (from the XNET session), without the support of the XNET Bus Monitor?

Another point worth noting is that a few of the property nodes in the VI you shared do not exist on my system, although they appear under different names (the new names are "ByteOrder" and "InterfaceBaudRate", as you can see in the attached VI). I wonder whether the different behaviour between our systems is caused by different XNET versions. My XNET version is 15.0; I am curious to know which version you are using on your system.


You can try running this VI on your system and see whether the response is the same. I am running it on a cDAQ-9174 with an NI 9860 in Slot 4 and an NI 9862 in Slot 1.


I observed that there are a few signals that keep dropping even though they were fine when the application started. For example, Signal 35 dropped after 426 frames (refer to the attached XNET Bus Monitor snapshot).

In my second run I observed that, although my peak bus load was only 81%, I still had signals dropping. So the assumption that 100% bus load causes the signals to drop is not entirely true for that run. Also worth mentioning: when we ran our original tests on RT, our bus load was hardly 50%, and some signals were still randomly dropping whenever we had more than 256 signals.

Let me know whether your observations with the same VI are any different. Please refer to the attachments.

Regards

Lakhvir

Message 6 of 8

The best resource I can point you to is the XNET manual:

http://www.ni.com/pdf/manuals/372840k.pdf

 

It is quite long, but with judicious use of Ctrl-F you can find detailed information on the session types and how they handle data. Signal output sessions do not use software queues to store the values we write, so when you write the first 1000 signal values, the hardware places 125 frames into individual hardware buffers in the C Series module. If you write to a signal twice before the frame is sent on the bus, the old data will be overwritten. That is how single-point mode is expected to work, and that is what you are seeing.

 

When we start to write more frames to the hardware than the hardware can send out on the bus, the hardware buffers become full and data is lost. The firmware does not have any intelligence to prioritize what data to discard, so the lost frames are semi-random. Single-point mode does not provide any way to detect such data loss, so it is up to the user to build intelligence into their application to prevent bus overloading. Ideally, you want to stay below 90% bus load when using single-point mode, to allow all the frames to be transmitted before they are written again.

 

The queued and stream session types have a slightly higher cost in firmware performance; however, they provide more robust error detection when the data being written overflows the buffers.
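For illustration, a hedged C-API sketch of a queued frame output session; unlike single point, an overfull queue surfaces as a non-success status on the write. The database alias and frame name are hypothetical, and the raw-frame buffer handling is simplified:

#include <stdio.h>
#include <string.h>
#include <nixnet.h>

int main(void)
{
   nxSessionRef_t tx = 0;
   nxStatus_t status;

   status = nxCreateSession("XNetDemo", "Cluster", "Frame_1", "CAN1",
                            nxMode_FrameOutQueued, &tx);
   if (nxSuccess != status) return 1;

   /* Raw CAN frame: 16-byte header plus payload padded to 8 bytes = 24 bytes
      (assumed layout; see the manual's raw frame format section). */
   u8 buffer[24];
   memset(buffer, 0, sizeof(buffer));
   nxFrameVar_t *frame = (nxFrameVar_t *)buffer;
   frame->Identifier    = 1;
   frame->Type          = nxFrameType_CAN_Data;
   frame->PayloadLength = 8;

   /* With a finite timeout, the write waits for queue space; a non-success
      status here is the overflow detection that single point cannot give you. */
   status = nxWriteFrame(tx, buffer, sizeof(buffer), 1.0 /* seconds */);
   if (nxSuccess != status)
      printf("queued write failed or timed out: %d\n", (int)status);

   nxClear(tx);
   return 0;
}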

Jeff L
National Instruments
Message 7 of 8
Solution
Accepted by topic author lbains

Well,

 

After all the consultation and trial and error with different XNET sessions, the solution came from going through the release notes for XNET 15.0. Under "Debounce Property Issue" they state: "Frequent single point output frames may not be transmitted on the bus when the debounce time property is used."

 

This issue seems to have been resolved in XNET 15.5; I verified the same code under 15.5 and it worked.

 

So the solution to the problem posted above was updating to XNET 15.5 and also updating the firmware on the NI 9860 to match the new XNET driver.
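For anyone checking their installed driver before or after the upgrade, a small sketch reading the NI-XNET version through a system session (assuming the nxPropSys_* version properties in nixnet.h):

#include <stdio.h>
#include <nixnet.h>

int main(void)
{
   nxSessionRef_t sys = 0;
   u32 major = 0, minor = 0;

   if (nxSuccess != nxSystemOpen(&sys)) return 1;
   nxGetProperty(sys, nxPropSys_VerMajor, sizeof(major), &major);
   nxGetProperty(sys, nxPropSys_VerMinor, sizeof(minor), &minor);
   printf("NI-XNET driver version: %u.%u\n", (unsigned)major, (unsigned)minor);
   nxSystemClose(sys);
   return 0;
}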

 

Regards

Lakhvir

Message 8 of 8