05-06-2025 01:58 PM
Hi everyone,
I'm currently working on a project that involves real-time data logging using LabVIEW and NI DAQ devices. While I've managed to set up basic data acquisition, I'm facing challenges with ensuring reliable and efficient data logging, especially when dealing with high-frequency signals. I'm particularly interested in:
- Optimal buffer sizes and memory management techniques.
- Strategies to prevent data loss during high-speed acquisition.
- Best practices for structuring the LabVIEW code for scalability and maintainability.
I would greatly appreciate any insights, experiences, or additional resources you could share to help me enhance the reliability and efficiency of my data logging setup.
I've come across several resources that have been helpful:
https://www.youtube.com/watch?v=GBhJk5Tnshc
https://www.theengineeringprojects.com/2023/08/introduction-to-labview.html
https://www.viewpointusa.com/labview/labview-data-acquisition/
https://www.m4-engineering.com/multi-sensor-data-using-labview/
05-06-2025 02:13 PM
The simplest way is to just configure DAQmx to stream it straight to a TDMS file: https://www.ni.com/en/support/documentation/supplemental/21/ni-daqmx-high-speed-streaming-to-disk.ht...
That'll save all of the data you capture. If you're trying to save subsets of data (e.g., monitoring continuously at 100 kHz for an hour, and need to save 100 ms of interesting content) you'll have to do it manually.
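Since LabVIEW diagrams can't be pasted here, a rough text sketch of that manual subset-capture idea (Python, purely illustrative): a ring buffer always holds the most recent 100 ms of samples, and when something interesting happens you snapshot it for logging. The 100 kHz rate and 100 ms window come from the example above; the threshold trigger is hypothetical.

```python
from collections import deque

SAMPLE_RATE = 100_000   # 100 kHz, as in the example above
PRETRIGGER_S = 0.1      # keep the last 100 ms before the trigger

# Ring buffer that always holds the most recent 100 ms of samples.
ring = deque(maxlen=int(SAMPLE_RATE * PRETRIGGER_S))

def process_chunk(chunk, threshold=5.0):
    """Feed each acquired chunk in; return the saved 100 ms window
    when a sample exceeds the (hypothetical) threshold, else None."""
    for sample in chunk:
        ring.append(sample)
        if sample > threshold:
            # Snapshot the interesting window for a logging loop to write.
            return list(ring)
    return None

# Simulated acquisition: quiet signal, then a chunk containing a spike.
quiet = [0.0] * 20_000
assert process_chunk(quiet) is None
window = process_chunk([0.0] * 5_000 + [7.5])
```

In LabVEW terms the deque plays the role of a lossy fixed-size queue (or a circular array maintained with Replace Array Subset); the snapshot would be enqueued to your file-writing loop.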
In general, though, you want a producer/consumer architecture. And you're allowed to have multiple of these in one program! For example, you can enqueue your data into multiple queues to be removed by multiple loops. You might have one loop for your UI, which takes your 100 kHz data and averages it out, updating your screen every 100 ms. Another loop might write each datapoint to disk. (I'd recommend TDMS files for storage, but you can use CSVs too; they're just a LOT bigger on disk.)
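That multi-queue fan-out can be sketched in text form (Python stand-in for the block diagram; the chunk contents and the averaging consumer are placeholders): one producer loop enqueues each chunk to every consumer queue, and each consumer runs at its own pace.

```python
import queue
import threading

ui_q = queue.Queue()    # feeds the display/averaging loop
log_q = queue.Queue()   # feeds the disk-writer loop

def producer(n_chunks):
    # Stand-in for the DAQmx read loop: each chunk is "100 ms" of data.
    for i in range(n_chunks):
        chunk = [float(i)] * 10  # placeholder samples
        # Fan the same data out to every consumer queue.
        ui_q.put(chunk)
        log_q.put(chunk)
    ui_q.put(None)   # sentinel: acquisition finished
    log_q.put(None)

averages, logged = [], []

def ui_consumer():
    # Averages each chunk down to one display value per update.
    while (chunk := ui_q.get()) is not None:
        averages.append(sum(chunk) / len(chunk))

def log_consumer():
    # Stand-in for the TDMS/CSV writer loop.
    while (chunk := log_q.get()) is not None:
        logged.extend(chunk)

threads = [threading.Thread(target=t)
           for t in (lambda: producer(5), ui_consumer, log_consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key property, same as in LabVIEW: the producer never blocks on the slow consumer, so a sluggish UI can't cause the acquisition loop to fall behind and overflow the DAQ buffer.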
Look up some of the project templates that ship with LabVIEW. IIRC one is "continuous measurement and logging" that might point you in the right direction.
05-06-2025 02:34 PM
Mostly just reiterating what Bert said...
1. For logging data, the DAQmx Configure Logging is by far the most efficient route to take. It easily beats out a Producer/Consumer for logging data to disk.
2. It is best to have a loop that does nothing but read from the DAQ. Use a queue or notifier to pass the data to other loops that may need the data. There should be no explicit wait in this loop.
3. There is a rule-of-thumb to read 100ms of data each iteration of your loop. Obviously, the number of samples you read will depend on your sample rate.
4. For most setups, do not worry about setting the buffer size. The default is usually larger than you will need, especially if you are following point 3 above.
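The 100 ms rule of thumb from point 3 is easy to turn into a helper (Python sketch; in LabVIEW this is just a multiply feeding the "number of samples per channel" input of DAQmx Read):

```python
def samples_per_read(sample_rate_hz, chunk_seconds=0.1):
    """Samples to request per read iteration, defaulting to the
    100 ms rule of thumb."""
    return int(sample_rate_hz * chunk_seconds)

# At 1 kHz, read 100 samples per iteration; at 1 MHz, read 100,000.
reads = [samples_per_read(1_000), samples_per_read(1_000_000)]
```

Reading a fixed time-slice like this also means the DAQmx Read call itself paces the loop, which is why point 2 says no explicit wait is needed.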
05-07-2025 09:07 AM
@crossrulz wrote:
3. There is a rule-of-thumb to read 100ms of data each iteration of your loop. Obviously, the number of samples you read will depend on your sample rate.
Slightly off topic, but this is the first I've heard of this. Is there some history behind it? Granted, I don't save my data to disk; instead I process it in chunks and send it out via TCP. So is it HD write speed, or something along those lines, that would suggest writing more frequently instead of in larger chunks? In my case the handshaking alone adds enough delay to justify 2-second chunks, from a network-performance standpoint, instead of more frequent smaller transfers. But I realize that writing to disk is apples to oranges compared to sending out network traffic.
05-07-2025 09:27 AM
@DHerron wrote:
@crossrulz wrote:
3. There is a rule-of-thumb to read 100ms of data each iteration of your loop. Obviously, the number of samples you read will depend on your sample rate.
Slightly off topic, but this is the first I've heard of this. Is there some history behind it? Granted, I don't save my data to disk (instead I process it in chunks and send it out via TCP), so is it HD write speed or something along those lines that would suggest you should write more frequently instead of in larger chunks? The handshaking alone adds enough delay to justify doing mine in 2-second chunks from a network performance aspect instead of more frequent smaller transfers, but I realize that writing to disk is apples to oranges when compared to sending out network traffic.
I can say from experience that it is easy to go down the rabbit hole of buffer size, samples to download, etc. Here are some things that I have found; they are corner cases and really only apply to high stream rates.
Below are some screenshots of a simulated instrument running at a high rate using the above two points. It has been running for the last 20 minutes without issue in Log and Read mode, but not saving the data.
The values I had were:
- Sample rate: 10 MS/s per channel, 8 channels
- N samples to read: 125,952 samples (~12 ms of data!)
- Buffer size: 80,609,280 samples (~8 s of data)
- File size: 20,152,320 samples (~2 s of data)
- File size is 1/4 of the buffer size
- Buffer size and file size are both multiples of N samples to read and of the disk sector size (512)
You can see that DAQmx can be efficient, as the CPU load is low.
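For what it's worth, the alignment relationships in those numbers check out arithmetically (Python, just verifying the multiples; the values are the ones posted above):

```python
SAMPLE_RATE = 10_000_000        # 10 MS/s per channel
N_SAMPLES_TO_READ = 125_952
BUFFER_SIZE = 80_609_280
FILE_SIZE = 20_152_320
SECTOR_SIZE = 512

# Each read covers ~12.6 ms at 10 MS/s.
read_ms = N_SAMPLES_TO_READ / SAMPLE_RATE * 1000

# File size is exactly 1/4 of the buffer size...
ratio = BUFFER_SIZE // FILE_SIZE

# ...and both are whole multiples of the read size and the sector size,
# so every read and every flush lands on a clean boundary.
aligned = all(x % N_SAMPLES_TO_READ == 0 and x % SECTOR_SIZE == 0
              for x in (BUFFER_SIZE, FILE_SIZE))
```

That whole-multiple alignment is presumably why this configuration streams with low CPU load: no read or disk write ever straddles a partial block.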
05-09-2025 06:58 AM - edited 05-09-2025 07:00 AM
Have you evaluated FlexLogger? There's even a free version. That product has already figured out the DAQ and logging pieces of the puzzle. And, an off-the-shelf product takes a lot of the burden of maintainability off your shoulders.