12-18-2007 07:25 PM
Have you tried using a TDMS file? Those have very good write speeds in my experience (I believe saving is supposed to have deterministic timing) as long as you write data in big enough chunks, or use the NI_MinimumBufferSize property (see https://forums.ni.com/t5/DIAdem/Why-are-my-TDMS-files-so-large/m-p/561576?requireLogin=False for how to use it). The problem would be efficiently converting your clusters into something you can put into a TDMS file.
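[Editor's sketch: the chunking idea above can be illustrated outside LabVIEW. This Python class is not NI's TDMS API; it only demonstrates the buffering strategy that NI_MinimumBufferSize enables — batch many small records in memory and hit the disk with one large write. The chunk size is a made-up example value.]

```python
# Illustrative sketch (not NI's API): batching many small writes into
# large chunks, the same idea NI_MinimumBufferSize applies to TDMS.
class ChunkedLogger:
    """Accumulate records in memory and flush them in one large write."""

    def __init__(self, path, chunk_records=4096):  # chunk size is illustrative
        self._file = open(path, "wb")
        self._pending = []
        self._chunk_records = chunk_records

    def append(self, record: bytes):
        # Cheap in-memory append; the disk is not touched until the chunk fills.
        self._pending.append(record)
        if len(self._pending) >= self._chunk_records:
            self.flush()

    def flush(self):
        # One big sequential write instead of thousands of tiny ones.
        if self._pending:
            self._file.write(b"".join(self._pending))
            self._pending.clear()

    def close(self):
        self.flush()
        self._file.close()
```

The trade-off is latency for throughput: data sits in RAM until a chunk fills, so a crash loses at most one chunk, but the per-write overhead drops dramatically.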
12-19-2007 05:13 AM
Hi Matt,
I implemented TDMS, passing the converted cluster to the data input as a string. So far the result has been an improvement over all the other methods I tried. However, it's still not ideal: the test now fails 4-5 minutes in, whereas it used to fail 2-3 minutes in.
I also introduced the buffer-allocation property, and the outcome is a bit more erratic than without it. The system loses some time in between (presumably disk I/O), and then the timing is once again skewed.
Although the efforts don't seem to be in vain, given the improved duration, there must be some way to make the streaming transparent on the RT system.
Regards,
Ashm01
01-14-2008 04:48 AM
Sorry for the long hiatus.
(Been busy, replicating this on a smaller scale)
It has been observed that the hiccup occurs for one cycle due to the HD write. While there were no updates here, we modified our code, and:
Conclusion: the RT hiccups after 30 seconds for 10 ms and then resumes. Has anyone done simultaneous control and high-speed data logging on RT and encountered this behavior? As time goes on, the hiccups accumulate into a large skew.
Regards,
Ashm01
01-14-2008 03:35 PM
Have you separated your DAQ loop from your file I/O loop? Given a big enough buffer, and an average file-loop iteration time that's less than the DAQ loop time, you should be fine.
Here's a basic example:
https://www.ni.com/en/support/documentation/supplemental/18/real-time-fifo-frequently-asked-question...
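[Editor's sketch: the producer/consumer split Matt describes can be illustrated in Python, since LabVIEW diagrams can't be pasted as text. The bounded queue stands in for the RT FIFO; the iteration count, block size, and queue depth are all made-up example values.]

```python
import queue
import threading

def daq_loop(fifo, n_iterations):
    # Time-critical side: acquire and enqueue only; never touches the disk.
    for i in range(n_iterations):
        sample = bytes([i % 256]) * 64  # stand-in for one acquired block
        fifo.put(sample)                # the bounded queue plays the RT FIFO's role
    fifo.put(None)                      # sentinel: acquisition finished

def file_loop(fifo, path):
    # Low-priority side: drains the FIFO and does all the file I/O.
    with open(path, "wb") as f:
        while True:
            item = fifo.get()
            if item is None:
                break
            f.write(item)

def run_logging(path, n_iterations=100, depth=1024):
    fifo = queue.Queue(maxsize=depth)   # depth must absorb bursts while the disk stalls
    writer = threading.Thread(target=file_loop, args=(fifo, path))
    writer.start()
    daq_loop(fifo, n_iterations)
    writer.join()
```

The key property is that a slow disk write only deepens the FIFO; it never blocks the acquisition side, as long as the buffer is sized to ride out the worst-case stall.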
Matt W
01-14-2008 11:59 PM
Excellent advice from Matt. Keep in mind that the HD is a shared resource in your system. If your time-critical process needs to access the HD, and a lower priority process is using it, the time-critical process has to wait, resulting in priority inversion.
Matt's suggestion avoids this by allowing the time-critical process to transmit the data in a buffered manner to a background process to write it to disk.
01-21-2008 02:48 AM
Hello,
Thanks for all your input. As of now, I don't have any high-priority loops accessing the HD directly. As mentioned before, I am polling a few third-party communication cards, each with a 2 MB buffer. I try to poll them frequently so I don't lose much data to accumulation. However, the quantum of data is sporadic and bursty, so there is no steady rate at which to stream it to disk.
I tried the following experiments:
The most interesting was that I converted my write-to-disk portion into a subVI and made it time-critical. This seemed to correct the problem, but in theory it is simply bad practice.
There must be a better way than making the write thread high priority.
Any advice?
Regards,
Ashm01
01-21-2008 08:50 PM
I would guess that your other time-critical sections are starving the I/O thread enough to cause problems (perhaps you're polling too often, or something in them is eating more cycles than expected).
If you're doing string processing, that might be allocating and deallocating a lot, which can cause a slowdown. Since LabVIEW's memory manager is apparently single-threaded (at least according to http://zone.ni.com/devzone/cda/tut/p/id/4537), even low-priority processes could affect the higher-priority ones due to priority inversion.
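[Editor's sketch: the "avoid per-cycle allocation" point can be shown in Python. Instead of building a fresh string each iteration, the loop packs channel values into one buffer allocated up front, so the allocator is never invoked in steady state. The channel count and float width are made-up example values, not from the thread.]

```python
import struct

N_CHANNELS = 8                         # illustrative channel count
RECORD_BYTES = 4 * N_CHANNELS          # one 32-bit float per channel
record_buf = bytearray(RECORD_BYTES)   # allocated once, before the loop starts

def encode_record(values, out=record_buf):
    # pack_into writes into the existing buffer; no new allocation per call,
    # unlike concatenating strings, which allocates every iteration.
    struct.pack_into("<%df" % len(values), out, 0, *values)
    return out
```

The same preallocate-and-reuse discipline applies in LabVIEW: initialize arrays/strings to their maximum size before the loop and overwrite in place, rather than growing them each cycle.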
Setting the shared variables to single-process may help with the speed (assuming you haven't tried that already). Did you try the low-level shared RT FIFO as well?
Matt W