In some code examples I've seen, some or all of the parameters of cfg_samp_clk_timing are first assigned to variables, which are then passed into the actual function call.
rate = 10000.0
source = ''
active_edge = Edge.RISING
sample_mode = AcquisitionType.FINITE
samps_per_chan = 1000
ai_timing = task.timing.cfg_samp_clk_timing(rate, source, active_edge, sample_mode, samps_per_chan)
I've noticed that when actually reading the samples, the usage is something like:
data = task.read(number_of_samples_per_channel=samps_per_chan)
But what does that actually do? Does it read a total of 1000 samples since samps_per_chan = 1000? If so, what is the point of setting up that value in cfg_samp_clk_timing?
Those "samples per channel" inputs for timing and reading have long been a source of confusion. You can find more background and some description by following links that start from this Idea Exchange posting I made. I'm not trying to justify any of the stuff below, just trying to describe it.
- when configuring timing for finite sampling, "samps per chan" will set the size of the task buffer on the PC.
- when configuring timing for continuous sampling, "samps per chan" only influences the size of the task buffer, and only if you specify something larger than the default size DAQmx would pick on its own. Otherwise it's ignored.
- when reading from a task, any explicitly passed non-negative "samps per chan" value will be the # samples retrieved from the task buffer on that particular read call
- when reading from a task and "samps per chan" is not specified (i.e., the default value of -1 is used), behavior varies DRASTICALLY for finite vs. continuous sampling.
With finite sampling it means "wait until all finite samples have been acquired and moved to the task buffer, then retrieve them all at once."
With continuous sampling it means "do NOT wait, just give me whatever previously-unread samples are presently available in the task buffer, even if that # is 0"
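The default-value behavior is easier to picture with a toy buffer model. To be clear, this is pure Python with no DAQmx involved at all -- just an illustration of the semantics described above, not the driver:

```python
# Toy model of the DAQmx task buffer, only to picture the read() defaults.
class ToyTask:
    def __init__(self, finite_total=None):
        self.finite_total = finite_total   # None means continuous sampling
        self.buffer = []                   # samples the "driver" has pushed so far

    def acquire(self, samples):
        """Pretend the device pushed some samples into the task buffer."""
        self.buffer.extend(samples)

    def read(self, samps_per_chan=-1):
        if samps_per_chan >= 0:
            # Explicit count: hand back exactly that many (a real read would
            # block until they're available; the toy assumes they already are).
            out, self.buffer = self.buffer[:samps_per_chan], self.buffer[samps_per_chan:]
            return out
        if self.finite_total is not None:
            # Finite + default (-1): wait for the whole acquisition, return it all.
            out, self.buffer = self.buffer[:self.finite_total], self.buffer[self.finite_total:]
            return out
        # Continuous + default (-1): no waiting, return whatever is there (maybe nothing).
        out, self.buffer = self.buffer, []
        return out

finite = ToyTask(finite_total=4)
finite.acquire([1, 2, 3, 4])
print(finite.read())    # all 4 samples at once -> [1, 2, 3, 4]

cont = ToyTask()
cont.acquire([10, 20])
print(cont.read())      # whatever was available -> [10, 20]
print(cont.read())      # nothing new has arrived yet -> []
```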
In your specific case with a finite sampling task, it'd be typical to simply read all samples at once after the acquisition is complete. But the parameter in the read call gives you the *option* to iteratively retrieve a smaller # samples per call, perhaps to give a live update of measurements before the entire acquisition has run to completion.
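In nidaqmx (Python) terms, that iterative option might look like the sketch below. The channel name "Dev1/ai0" and the chunk size are my assumptions, and this needs real NI hardware plus the NI-DAQmx driver to actually run:

```python
# Sketch: retrieve a finite acquisition in chunks for live updates.
# "Dev1/ai0" is an assumed channel name -- adjust to your own setup.
import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPS_TOTAL = 1000
CHUNK = 100  # 10 partial reads instead of one big read at the end

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(10000.0,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=SAMPS_TOTAL)
    task.start()
    acquired = []
    for _ in range(SAMPS_TOTAL // CHUNK):
        # Each call blocks until CHUNK new samples are in the task buffer.
        acquired.extend(task.read(number_of_samples_per_channel=CHUNK))
        print(f"live update: {len(acquired)}/{SAMPS_TOTAL} samples so far")
```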
In the case of continuous sampling, one *must* keep reading some # of samples iteratively to prevent buffer overflow as the driver is busy trying to continuously push data from the device *into* the task buffer. It's sometimes sensible to use the default value -1 to retrieve all available samples when the loop timing is controlled by some other means. It's more common to specify a specific # samples, which will have the side effect of pausing the loop until they're available, thus controlling loop timing with the read call itself.
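A continuous-sampling loop paced by the read call itself might look like this sketch (again assuming NI hardware with a hypothetical "Dev1" device; the rate, chunk size, and loop count are arbitrary illustrations):

```python
# Sketch: continuous sampling where the blocking read paces the loop.
# "Dev1/ai0" is an assumed channel name -- adjust to your own setup.
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 10000.0
CHUNK = 1000  # 1000 samples at 10 kHz -> each read returns roughly every 0.1 s

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    for _ in range(50):  # ~5 seconds of data
        # Blocks until CHUNK new samples exist, so the loop runs at ~10 Hz
        # and the buffer is drained fast enough to avoid an overflow error.
        chunk = task.read(number_of_samples_per_channel=CHUNK)
        print(f"got {len(chunk)} samples")  # replace with your own processing
```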
Thanks for clearing that up. Today we talked a bit about the topic at the office, and our best guess was that read() simply reads a certain # of samples from the buffer. This actually makes sense, but the documentation leaves much to be desired. 🙂