06-27-2016 06:35 PM
Hello Seniors,
In the past few days I have been trying a few things with XNET communication, and I have a few observations (and questions) that I would like more information about.
My System Setup:
NI 9860 – Slot 8
Other slots are populated by different DIO & AIO modules but I am not using them in my project
My Frame Information is as follows:
Application Protocol – None
Comment – None (blank)
ID – 1 to 125
Name – Frame_(ID)
Payload length – 8
CAN. Ext ID? – False
CAN. Timing Type – Event Data
CAN. Tx Time – 1ms
Total no. of Signals – 1000
Size bit – 8
Type – UI8
Attached you will find images that are as follows:
OBSERVATIONS:
A few more important things that might help you, apart from the information stated above:
Any help/suggestions on these findings would help me make progress.
06-27-2016 06:49 PM
Hi All,
I have some new findings that I wanted to share with you.
Questions based on findings:
Thanks, I will keep on sharing my findings as I do my testing.
06-28-2016 12:48 PM
I ran a quick test and could successfully transmit the maximum of 500 frames per interface, with 8 signals per frame, using the attached test. I simply used the Bus Monitor to receive the frames via an input stream session. I was using a cDAQ-9171 with a 9862 module. I would expect similar performance from the cRIO. What else is your application doing? Is your write loop actually executing at 1 ms?
Can you post a test case VI that demonstrates the behavior?
06-28-2016 01:41 PM
Hello Jeff,
Thanks for your response.
I went through your code and found that the CAN transmit time selected in it is 1 second, which we would like to decrease to perhaps 1 ms, as we want our XNET communication to be as fast as possible.
When I changed that part of the code and ran it on my system, I saw the same kind of response in XNET Bus Monitor.
Regarding our loop rate: it is actually running at 100 ms, not 1 ms.
Also, is there a way to check whether XNET signals are being dropped? Since there is no error out, is there a property node or warning code we can use to handle these kinds of situations?
The attached images show the code changes and the XNET Bus Monitor response.
If you feel that my understanding of transmit time is wrong please guide me on that.
Thanks
Lakhvir
06-28-2016 03:05 PM
It seems you are well beyond 100% bus load with a transmit time of 1 ms per frame. At 125 kbaud we can expect a real-world transmission rate of slightly less than 1 ms per frame, varying a little due to stuff bits and processing overhead. When you call XNET Write, the signals are placed in their frames and sent to a hardware buffer. Since we are working in single-point mode, the following call to XNET Write will overwrite the data in the hardware buffer because not all of the frames have been sent out on the bus. At 100% bus load, some frames will simply never "get their turn" at the front of the hardware buffer. Try slowing down your overall loop rate to allow all of the data to be sent, or increasing your transmit time (interpreted as a minimum separation). In my tests I can successfully send all the signals with a loop rate of 250 ms and a transmit time of 0.001 s. We simply have to provide enough time for the frames to transmit. The Bus Monitor also has an indicator for bus load, which is very useful for diagnosing these kinds of problems.
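To see why 125 frames at a 1 ms transmit time overwhelm a 125 kbit/s bus, here is a rough back-of-the-envelope model. It is a sketch, not an NI-XNET tool: the frame-layout constants come from the classic CAN specification (11-bit ID, 8-byte payload), and the function names are my own. Without stuff bits a frame is 111 bits (about 0.89 ms on the wire, matching the "slightly less than 1 ms" above); with worst-case stuffing it grows to about 135 bits.

```python
# Rough CAN bus-load estimate for the setup in this thread.
# Assumptions: classic CAN, 11-bit IDs, 125 kbit/s, worst-case bit stuffing.

BAUD = 125_000  # bit/s

def frame_bits(payload_bytes=8, worst_case_stuffing=True):
    """Bits on the wire for one data frame, including interframe space."""
    # Fields subject to bit stuffing:
    # SOF(1) + ID(11) + RTR(1) + IDE(1) + r0(1) + DLC(4) + data + CRC(15)
    stuffed = 1 + 11 + 1 + 1 + 1 + 4 + 8 * payload_bytes + 15
    # Fields not subject to stuffing:
    # CRC delim(1) + ACK(1) + ACK delim(1) + EOF(7) + intermission(3)
    fixed = 1 + 1 + 1 + 7 + 3
    stuff = (stuffed - 1) // 4 if worst_case_stuffing else 0
    return stuffed + fixed + stuff

def bus_load(n_frames, period_s, payload_bytes=8):
    """Fraction of bus bandwidth needed to send n_frames every period_s."""
    bits_per_s = n_frames * frame_bits(payload_bytes) / period_s
    return bits_per_s / BAUD

print(frame_bits())          # worst-case bits per 8-byte frame (135)
print(bus_load(125, 0.001))  # 125 frames every 1 ms: load far above 100%
print(bus_load(125, 0.25))   # 125 frames every 250 ms: around 54% load
```

With a 1 ms period the required bandwidth is over a hundred times what the bus can carry, while a 250 ms loop keeps the load comfortably under the ~90% region where single-point mode behaves well.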
If you are concerned about losing signal data you should be using a queued or stream session type as they use hardware buffers.
06-28-2016 05:40 PM
Jeff,
As suggested, I tried running the application with a transmit time of 0.001 s and a loop rate of 250 ms, but I still see the same kind of response: signals dropping (i.e., we gradually stop receiving them in NI-XNET Bus Monitor) over a period of time. Even with increased timings (a loop rate of 500 ms) I still see signals getting dropped. Is there any way to catch the signals that are dropped by XNET programmatically (from the XNET session), without relying on the XNET Bus Monitor?
Another point worth noting is that a few of the property nodes in the VI you shared do not exist on my system, although they appear under different names (the new names are "ByteOrder" and "InterfaceBaudRate", as you can see in the attached VI). I was wondering whether the different behavior between our systems is caused by different XNET versions. My XNET version is 15.0; I am curious to know which version you are using on your system.
You can try running this VI on your system and see if the response is the same. I am running it on a cDAQ-9174 with an NI 9860 in slot 4 and an NI 9862 in slot 1.
I observed that a few signals keep dropping even though they were good when the application started. For example, Signal 35 dropped after 426 frames (refer to the attached XNET Bus Monitor snapshot).
In my second run I observed that although my peak bus load was only 81%, I still had signals dropping. So the assumption that a bus load of 100% causes the signals to drop does not entirely hold for this run. Also worth mentioning: when we ran our original tests on RT, our bus load was hardly 50%, and some signals were still randomly dropping if we had more than 256 signals.
Let me know if with the same vi your observations are any different. Please refer to the attachments.
Regards
Lakhvir
07-01-2016 10:35 AM
The best resource I can point you to is the XNET manual:
http://www.ni.com/pdf/manuals/372840k.pdf
It is quite long, but with judicious use of Ctrl-F you can find some detailed information on the session types and how they handle data. Signal output sessions do not use software queues to store the values we write, so when you write the first 1000 signal values, the hardware places 125 frames into individual hardware buffers in the C Series module. If you write to a signal twice before the frame is sent on the bus, the old data will be overwritten. That is how single-point mode is expected to work, and that is what you are seeing.
When we start to write more frames to the hardware than the hardware can send out on the bus, the hardware buffers will become full and data will be lost. The firmware does not have any intelligence to prioritize what data to discard, so the lost frames are semi-random. Single-point mode does not provide any way to detect such data loss, so it is up to the user to build intelligence into their application to prevent bus overloading. Ideally, you will want to stay below 90% bus load when using single-point mode, to allow all the frames to be transmitted before writing them again.
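The single-point behavior described above can be illustrated with a toy model. This is an illustration of the concept, not NI-XNET internals: each frame ID gets a one-deep slot that a new write overwrites, and the bus drains slots at a fixed per-frame rate (the constants and function names here are my own assumptions).

```python
# Toy model of single-point output: one-deep hardware slot per frame ID,
# overwritten on each write; the bus drains slots at a fixed frame rate.
# An illustration of the concept described above, not NI-XNET internals.

FRAME_TIME = 0.00108   # s per frame on the wire (worst case at 125 kbit/s)

def simulate(n_frames, loop_period_s, n_loops):
    """Return (frames sent, frame updates overwritten before being sent)."""
    sent = overwritten = 0
    pending = [False] * n_frames              # one-deep slot per frame ID
    for _ in range(n_loops):
        for i in range(n_frames):             # one write per frame per loop
            if pending[i]:
                overwritten += 1              # old value replaced unsent
            pending[i] = True
        drain = int(loop_period_s / FRAME_TIME)   # bus capacity this loop
        for i in range(n_frames):             # lower IDs win, as on CAN
            if pending[i] and drain > 0:
                pending[i] = False
                sent += 1
                drain -= 1
    return sent, overwritten

print(simulate(125, 0.001, 100))   # 1 ms loop: updates lost to overwrites
print(simulate(125, 0.25, 100))    # 250 ms loop: every update gets out
```

The model drains low-numbered IDs first, loosely mirroring CAN arbitration priority; with the fast loop the bus never keeps up and updates are silently overwritten, which matches the "no error out" behavior the thread is wrestling with.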
The queued and stream session types have a slightly higher cost in firmware performance; however, they do provide more robust error detection when the data being written overflows the buffers.
07-07-2016 11:39 AM
Well,
After all the consultation and trial and error with different XNET session types, the solution came from the release notes for XNET 15.0. Under "Debounce Property Issue", they state: "Frequent single point output frames may not be transmitted on the bus when the debounce time property is used."
This issue has been resolved in XNET 15.5, and I was able to verify that the same code works in 15.5.
So the solution to the problem posted above was updating to XNET 15.5 and also updating the firmware on the NI 9860, as required by the new XNET driver.
Regards
Lakhvir