05-04-2018 11:55 AM
I am really tired of communication and data-acquisition problems when I transfer sensor data from Target.vi to Host.vi.
Sometimes the program freezes. Sometimes data is lost in the transfer from Target to Host.
What I want is this: please tell me the best way to transfer sensor data from the CompactRIO to Host.vi so that my program does not freeze and no data is lost.
"""When doing this, please assume that I will collect approximately 100 sensor datas with sampling rate 5 Hz during 5 hours.""""
I do NOT want buffer problem, lossy data problem, CPU problem, Memory problem, frozen problem anymore.
Thank you for now
05-04-2018 12:08 PM
100 sensors at 5 Hz is well within the Scan Engine's range. You can pack the data into one big cluster and ship it down to the host with a stream.
05-04-2018 12:45 PM
How are you currently doing it? Shared Variables? (I hate those things).
In general, TCP/IP or Network Streams are the way to stream data between your systems. Personally, I tend to use TCP/IP with the STM library.
05-04-2018 01:16 PM
These problems usually happen during long-duration tests. I use a shared variable to transfer the 100 sensor values to the Host.vi.
For example, the while loop that I use to parse the sensor data coming from Target.vi slows down over time. There are no arrays, property nodes, or anything else that could slow the loop down; the only thing in it is a shared variable for parsing the sensor data.
Can a shared variable cause such a thing? I cannot understand the logic. It seems that the shared variable is causing the loop to freeze. Is that possible??
For speed and lossless communication, is Network Streams the best option??
05-04-2018 01:24 PM
@Gilfoyle wrote:
Can a shared variable cause such a thing? I cannot understand the logic. It seems that the shared variable is causing the loop to freeze. Is that possible??
I have seen Network Published Shared Variables (NPSV) do worse. They tend to be slow and problematic. And they work with UDP, which is a lossy network communication protocol. Yes, the NPSV is one of the few things in LabVIEW programming I claim are evil. The only time I legitimately found them useful was to pass the CLED exam, and I felt dirty using them for it.
@Gilfoyle wrote:
For speed and lossless communication, is Network Streams the best option??
Like I said before, TCP or Network Streams will both handle your data rate with no problem. It is just a question of which one you are more comfortable with.
05-04-2018 01:42 PM
Then the best way to transfer the sensor data is to use Network Streams.
Should I use the "Build Array" or "Merge Signals" function to combine the sensor data? (I mean I will use the "Write Multiple Elements to Stream" function, so I have to combine the data first. Which function is best for combining: Build Array, Merge Signals, Bundle, etc.?)
Should I use a while loop or a Timed Loop in Target.vi?
I want to get everything right on the Target side. It must be as good as possible in terms of memory usage and speed.
05-05-2018 12:04 PM
I also dislike the Dynamic Data Type; for that reason alone, use Build Array instead of Merge Signals. I have also used clusters.
Nothing here is time critical, so no need for a Timed Loop.
05-07-2018 03:09 PM
For data which must be lossless, such as regularly sampled measurement data or command/response data, use Network Streams. For tag data, where you only care about the latest value, use shared variables.
05-08-2018 03:03 PM
Sir, I tried Network Streams instead of the shared variable. The result did not change, so I guess there is a problem in my data-logging algorithm.
I log the sensors (approximately 105) to a text file in a while loop at a 5 Hz sampling rate. During long tests, this may be what causes the loop to slow down. I will try splitting the data across three different while loops, so each loop handles only 35 channels.
I will log all the data for 2 hours and observe what happens.
05-08-2018 03:21 PM
105 channels at 5 Hz is not a tremendous amount of overhead. Network streams should be able to handle that easily, but you will have to put a bit of effort into managing buffer overruns and underruns.
On the host side, there is no need to log that data at the acquisition rate, as long as you can buffer it fast enough to keep up with the incoming data, and log the contents of the buffer to disk at a rate which strikes a balance between minimizing risk of data loss and keeping disk activity to a minimum. You also don't want your disk writes to suspend other activity for so long that you force a buffer overrun on the read side.
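The buffering pattern described above can be sketched as a producer/consumer design. Since LabVIEW block diagrams can't be shown as text, here is a hedged Python stand-in (all names are illustrative, not an NI API): incoming scans land in a queue at the acquisition rate, and the logging side drains the queue in larger batches so disk writes happen far less often than 5 times per second.

```python
# Sketch of the host-side pattern: receive scans at 5 Hz into a buffer,
# but flush them to disk in larger batches. Hypothetical Python stand-in
# for the LabVIEW producer/consumer design described in the post.
import queue

def drain_batch(buf: "queue.Queue", max_items: int) -> list:
    """Pull up to max_items scans from the buffer without blocking."""
    batch = []
    while len(batch) < max_items:
        try:
            batch.append(buf.get_nowait())
        except queue.Empty:
            break  # buffer empty: return what we have so far
    return batch

if __name__ == "__main__":
    buf = queue.Queue()
    # Producer side: 25 incoming "scans" (5 s of data at 5 Hz).
    for i in range(25):
        buf.put([float(i)] * 105)   # one scan = 105 channel values
    # Consumer side: one disk write per batch of 10 scans, not per scan.
    writes = 0
    while not buf.empty():
        batch = drain_batch(buf, 10)
        # the Write to File call would go here, once per batch
        writes += 1
    print(writes)  # 3 batched writes instead of 25 single-scan writes
```

The batch size (10 here) is the knob the post mentions: larger batches mean fewer disk writes but more data at risk if the program dies, so pick it to balance the two.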
Also, make sure that you are only opening a reference to your logfile once, writing to it in a loop, and then only closing the reference when you're finished. TDMS data files will also be considerably faster than text files for logging. You might want to consider logging as TDMS, and then if you need a text file, performing a conversion as a post-processing step after you're finished with the acquisition.
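The open-once advice above maps to Open/Create File before the loop, Write to File inside it, and Close File after it in LabVIEW. As a hedged illustration of the same pattern in Python (the function and path here are hypothetical):

```python
# Open the log file ONCE, write inside the loop, close ONCE at the end,
# rather than opening and closing the file on every iteration.
def log_run(path: str, scans) -> int:
    """Write each scan as one tab-separated text line; return line count."""
    lines = 0
    with open(path, "w") as f:          # opened once, before the loop
        for scan in scans:              # the acquisition/logging loop
            f.write("\t".join(f"{v:.3f}" for v in scan) + "\n")
            lines += 1
    return lines                        # file closed once, on exit

# Example: ten scans of 105 channels produce ten lines in the file.
# n = log_run("run.txt", [[0.0] * 105] * 10)
```

Re-opening the file each iteration forces the OS to re-seek and flush every time, which is exactly the kind of per-iteration overhead that grows painful over a 5-hour run.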