Guidelines for selecting Samples per Channel to achieve a good buffer size

Hello, I have an application that has been operating successfully for a few years. It uses a cDAQ chassis and runs on Windows. Intermittently we have hit buffer-overrun issues that cause the application to error out, and I would like to increase the buffer size to help avoid this. I'm hoping to get guidelines for setting "Samples per Channel" in the DAQmx Timing VI, in Continuous Samples mode, to increase the buffer.

 

More details: The application runs on cDAQ-9172/9189 and similar chassis and has the DAQmx Timing VI set up for Continuous Samples, but the "Samples per Channel" input is currently not wired. The "number of samples per channel" input of the DAQmx Read function is set as a function of the sample rate. This setup has proven robust except in a few rare instances. The Read VI is in a loop that executes 5 times per second.

 

To give a healthy margin on the buffer, to account for the moments when Windows turns its attention away from LabVIEW, should I set "Samples per Channel" in the DAQmx Timing VI to roughly 5x the quantity expected per loop? Given that modern computers have plenty of memory, and that this computer is dedicated solely to this task (all Office software etc. has been removed), is there a penalty for making the buffer too large?

 

Message 1 of 5

Related discussion - https://forums.ni.com/t5/LabVIEW/Sample-rate-vs-samples-per-channel/td-p/1537786

Santhosh
Soliton Technologies

Message 2 of 5

First things first: no, there's really no penalty for setting a larger buffer size in your call to DAQmx Timing when doing Continuous Sampling. I frequently advise users to set it to something more like 5 seconds' worth, even when they're nominally reading the newly accumulated contents every 0.1 seconds. On modern PCs, I don't give a second thought to buffer memory usage of 10 MB or less.
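Since LabVIEW code is graphical, here is the sizing arithmetic above as a plain-Python sketch. The 8-bytes-per-sample figure is an assumption (scaled analog input is typically stored as float64); the function name and 5-second default are illustrative, not from any NI API:

```python
BYTES_PER_SAMPLE = 8  # assumed: scaled analog input stored as float64

def buffer_plan(sample_rate_hz, num_channels, buffer_seconds=5.0):
    """Return (samples per channel, total bytes) for the requested buffer depth."""
    samples_per_channel = int(sample_rate_hz * buffer_seconds)
    total_bytes = samples_per_channel * num_channels * BYTES_PER_SAMPLE
    return samples_per_channel, total_bytes

# Example: 8 channels at 10 kHz with a 5 s buffer
spc, nbytes = buffer_plan(10_000, 8)
print(spc, nbytes / 1e6)  # 50000 samples/channel, 3.2 MB total
```

Even this fairly busy example lands well under the 10 MB figure mentioned above, which is why a generous buffer is usually a non-issue on a modern PC.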

 

Second, how are you controlling your loop timing and doing your reads?  One fairly common but subtle problem I've seen from users here is to *overconstrain* timing.  For example, supposing you were sampling at 10 kHz, it'd be an overconstraint if you both:

- called DAQmx Read and requested 2000 samples

- ran some kind of 200 msec timer in the loop

 

You should do one or the other, NOT both at once.  What can happen over long stretches of time is that the 200 msec timer will sometimes take 1-10 extra msec due to Windows OS reasons.  Each time that happens, the 200 msec worth of samples you read will be the oldest 200 out of 201-210 msec worth in the buffer.  Over time, that backlog can gradually accumulate to the point that you overflow the buffer.
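To make that accumulation concrete, here is a rough Python simulation of the overconstrained case. The jitter distribution (an extra 1-10 ms on roughly 1 in 10 iterations) is invented purely for illustration; real OS delays vary:

```python
# Simulated overconstraint: a software timer occasionally runs late, but
# DAQmx Read always removes a fixed 2000 samples. The residue never shrinks.
import random

random.seed(0)
SAMPLE_RATE = 10_000   # Hz
READ_SIZE = 2_000      # samples per read (nominally 200 ms worth)

backlog = 0            # samples sitting unread in the buffer
for _ in range(10_000):
    # ~1 in 10 iterations picks up 1-10 ms of OS-induced delay (illustrative)
    jitter_ms = random.choice([0.0] * 9 + [random.uniform(1, 10)])
    acquired = int(SAMPLE_RATE * (200 + jitter_ms) / 1000)
    backlog += acquired - READ_SIZE  # the read removes exactly READ_SIZE

print(backlog)  # strictly grows over time: this is the slow path to overflow
```

Because the acquisition clock never runs slow, the backlog is monotonically non-decreasing; with a finite buffer it is only a matter of time before an overflow error.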

 

If this doesn't make sense to you, post your code.  (First save it back to a previous version, 2019 or so.)

 

 

-Kevin P

Message 3 of 5

My general rule of thumb for DAQ applications is to leave Samples per Channel on the DAQmx Timing VI unwired and read 100 ms worth of data with each call to DAQmx Read.  The loop with the DAQmx Read should do nothing but read from the DAQ, maybe with a chart.  Otherwise, a queue is used to send the read data to another loop for processing and/or logging.  This lets the DAQmx Read set the loop rate (no additional waits) and makes sure nothing else is slowing it down.
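The queue-to-another-loop structure described above is the classic producer/consumer pattern. As a language-neutral sketch (in Python, since LabVIEW is graphical), with the read loop stubbed out by a placeholder chunk generator — all names here are illustrative:

```python
# Producer/consumer sketch: one loop only "reads", a queue hands the data
# to a separate logging/processing loop so nothing slows the reader down.
import queue
import threading

data_q = queue.Queue()

def acquisition_loop(n_reads, samples_per_read):
    # Stand-in for the DAQmx Read loop; the read itself would pace the loop.
    for i in range(n_reads):
        chunk = [float(i)] * samples_per_read  # placeholder for real samples
        data_q.put(chunk)
    data_q.put(None)  # sentinel: acquisition finished

def logging_loop(results):
    while True:
        chunk = data_q.get()
        if chunk is None:
            break
        results.append(len(chunk))  # stand-in for TDMS write / processing

logged = []
consumer = threading.Thread(target=logging_loop, args=(logged,))
consumer.start()
acquisition_loop(n_reads=10, samples_per_read=1000)  # 100 ms reads @ 10 kHz
consumer.join()
print(sum(logged))  # all 10000 samples handed off without blocking the reader
```

The key property is that the acquisition side only ever enqueues, so a slow disk or heavy processing step can never back up the DAQ buffer.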

 

If logging to a TDMS file, use DAQmx Configure Logging before starting the task, and the logging just happens behind the scenes.  This is more efficient than even a Producer/Consumer.

 

I have yet to have any problems with this setup.


Message 4 of 5

I usually use 4-8 seconds' worth of buffer.

What crossrulz said works well 99% of the time.  When you are pushing bus limits or high sample rates, there is one modification to make: when reading data, the number of samples should be an even multiple of the disk sector size.  Use the built-in logging features, but make that modification for Log and Read mode; Log Only mode requires it.
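As a sketch of that sector-alignment rule (in Python for illustration): round the per-channel read size up so each read's total byte count fills whole disk sectors. The 4096-byte sector size and the float64 sample width are assumptions — check your actual disk and data format:

```python
# Illustrative helper: round a DAQmx Read size up to a whole number of
# disk sectors for high-rate TDMS logging. Constants are assumptions.
import math

SECTOR_BYTES = 4096      # assumed sector size; commonly 512 or 4096
BYTES_PER_SAMPLE = 8     # assumed: scaled float64 analog input

def sector_aligned_read(samples_wanted, num_channels):
    """Smallest per-channel read size >= samples_wanted that fills whole sectors."""
    bytes_per_scan = num_channels * BYTES_PER_SAMPLE
    # smallest sample count whose total byte size is a multiple of the sector
    step = SECTOR_BYTES // math.gcd(SECTOR_BYTES, bytes_per_scan)
    return -(-samples_wanted // step) * step  # ceiling to a multiple of step

print(sector_aligned_read(1000, 3))  # -> 1024 (1024 * 3 ch * 8 B = 6 sectors)
```

Reading 1024 samples instead of 1000 here means every read lands on a sector boundary, which is what keeps the behind-the-scenes logging path fast.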

Message 5 of 5