
DAQmxCfgSampClkTiming, question about sampsPerChanToAcquire parameter


I am working on a tool to do real-time analysis of acoustic data using the NI-DAQmx library. We intend to support a wide range of devices, rates, etc., but currently I am testing on a USB-6251. I began by adapting the ContAcq-IntClk.c sample to learn about the calls necessary to perform the operations we need, but it is not clear to me how the last parameter of DAQmxCfgSampClkTiming (sampsPerChanToAcquire) works.

 

In the help files there is a link to a topic, "How is buffer size determined?", which leads me to believe that it is merely a parameter that sets the size of the internal circular buffer. As I understand it, any size big enough to avoid overwrites before the buffer is read should work here, but I am observing that changing this parameter affects the period of the acquired waveform (larger buffers yield larger periods). Part of my trouble is that most of the examples use literal constants for buffer size (usually 1000) and sample rate (usually 10k), while our application needs to allow user-specified sampling rates and channel counts. So the question is: what should I pass to the function for this parameter, expressed as a function of the parameters to the other DAQ calls?
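
For concreteness, here is the shape of what I am attempting, with a placeholder sizing rule (roughly one second of data per channel) that I am not at all sure is right:

// Sketch of the call I am trying to get right. The "1 second of data per
// channel" sizing rule is my guess, not something I found documented.
// sampleRate comes from user input; since sampsPerChanToAcquire is per
// channel, I assume the channel count should not enter into it.
uInt64 sampsPerChan = (uInt64)sampleRate;   // ~1 s of data per channel?
DAQmxCfgSampClkTiming(taskHandle, "", sampleRate, DAQmx_Val_Rising,
                      DAQmx_Val_ContSamps, sampsPerChan);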

 

Due to the real-time nature of our application, I do not have as much control over how our buffers are sized as a simple example would (apologies), but some code snippets follow. I have explored the use of DAQmxCfgInputBuffer() as well, with no luck. I am fairly confident that every parameter is correct other than the one in question, and possibly the nSamples parameter of DAQmxRegisterEveryNSamplesEvent().

 

// setup
niShowError(DAQmxCreateTask("ReadSamples", &taskHandle));
niShowError(DAQmxGetDevAIPhysicalChans(p->devName.c_str(), chanName, chanStrLength));
niShowError(DAQmxCreateAIVoltageChan(taskHandle, chanStr.c_str(), "", differentialMode, -p->maxToMeasure, p->maxToMeasure, DAQmx_Val_Volts, NULL));
// The last parameter below (sampsPerChanToAcquire) is the one in question.
niShowError(DAQmxCfgSampClkTiming(taskHandle, "", p->sRate, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 100*p->nSamsPerBufTotal));
// Fire everyNCallback each time nSamsPerBufPerChan samples per channel have been acquired.
niShowError(DAQmxRegisterEveryNSamplesEvent(taskHandle, DAQmx_Val_Acquired_Into_Buffer, p->nSamsPerBufPerChan, 0, everyNCallback, this));
niShowError(DAQmxRegisterDoneEvent(taskHandle, 0, DoneCallback, NULL));

...

int32 CVICALLBACK everyNCallback(TaskHandle taskHandle, int32 everyNsamplesEventType, uInt32 nSamples, void *callbackData)
{
   ...
   // numSampsPerChan of -1 (DAQmx_Val_Auto) reads everything currently available.
   if(!niShowError(DAQmxReadBinaryI16(taskHandle, -1, timeout, DAQmx_Val_GroupByScanNumber, (SHORT*)(ni->hTempbuf[ix]), p->nSamsPerBufTotal, &read, NULL)))
   ...

 

 

Message 1 of 6

I can only offer a small tidbit because I'm not experienced in the C-based DAQmx calls.  I've done this stuff quite a bit from LabVIEW though, so I *can* tell you that DAQmx can be used the way you like.  Others will know the details better.

 

Yes, any size big enough to avoid overwrites should be fine. Under LabVIEW, there are also two different ways to choose or influence a buffer size. One is available when configuring a sample clock for continuous sampling, and DAQmx tends to treat it as a suggestion rather than a command. It will, for example, make your buffer bigger than requested if it thinks you didn't ask for enough.

 

The other way is likely analogous to your DAQmxCfgInputBuffer() call.  Under LabVIEW, the requested size is treated as a direct command, and the size will be exactly what you requested.  I expect similar behavior from the C-based API -- what problem did you see?
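
If I have the C names right, you can check this directly by reading back the size DAQmx actually allocated. Untested sketch on my part -- DAQmxGetBufInputBufSize is the property accessor I believe the C API provides:

// Untested sketch: compare the requested size against what DAQmx allocated.
uInt32 actualBufSize = 0;

// Route 1: the timing call treats the size as a suggestion and may enlarge it.
DAQmxCfgSampClkTiming(taskHandle, "", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 10000);
DAQmxGetBufInputBufSize(taskHandle, &actualBufSize);   // may come back larger than 10000
printf("after CfgSampClkTiming: %u samples/chan\n", (unsigned)actualBufSize);

// Route 2: the explicit buffer call is a direct command.
DAQmxCfgInputBuffer(taskHandle, 10000);
DAQmxGetBufInputBufSize(taskHandle, &actualBufSize);   // should be exactly 10000
printf("after CfgInputBuffer: %u samples/chan\n", (unsigned)actualBufSize);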

 

Finally, if your waveform period depends on your buffer size, I'd humbly suggest that's coming from a bug in your processing code.  It sounds like somewhere the code is taking a parameter related to # samples in a buffer and treating it as though it relates to # samples in a waveform period.  Maybe something with a cryptic or ambiguous name that's easier to mix up?

 

-Kevin P

Message 2 of 6

Thanks for responding. I am still experiencing trouble with the size of the internal buffer. To test it, I started from the ContAcq-IntClk.c demo and made the following changes:

 

1. DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandle,"",10000.0,DAQmx_Val_Rising,DAQmx_Val_ContSamps,testSizeVariable));

 

Set testSizeVariable to either 10,000 or 100,000, depending on which size you want to test.

 

2. DAQmxCfgInputBuffer(taskHandle, testSizeVariable); 

 

This function is called immediately after DAQmxCfgSampClkTiming.

 

3. Add file I/O

 

#include <fstream>
#include <iostream>
using namespace std;
ofstream out;

 

 

placed immediately above the definition of EveryNCallback and 

out.open("sampleLog.txt", fstream::out | fstream::app);
for(int i = 0; i < 1000; ++i)
    out << data[i] << endl;
out.close();

 placed immediately after DAQmxReadAnalogF64.
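
Putting steps 1-3 together, the relevant portion of the modified demo looks like this (condensed; everything else is unchanged from the sample):

// Steps 1 and 2: timing and buffer configuration, back to back.
DAQmxErrChk(DAQmxCfgSampClkTiming(taskHandle, "", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, testSizeVariable));
DAQmxErrChk(DAQmxCfgInputBuffer(taskHandle, testSizeVariable));

// Step 3: inside EveryNCallback, right after DAQmxReadAnalogF64 fills 'data'.
out.open("sampleLog.txt", fstream::out | fstream::app);
for(int i = 0; i < 1000; ++i)
    out << data[i] << endl;
out.close();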

 

I tested this using a simulated NI USB-6251.  Using an internal buffer size of 10,000 I obtain the following graph:

[Attached graph: 10k.jpg]

 

 

Now, if I set the buffer size to 100,000 with NO OTHER CHANGES TO THE CODE, I obtain this graph:

[Attached graph: 100k.jpg]

 

I cannot understand why this would happen, since theoretically the simulated board is sending the same signal in either case, but clearly the period of the waveform is equal to the size of the internal buffer.  Code used to test is attached.

Message 3 of 6

Seems my .cpp attachment may have failed? Attempting to attach as plain text.

Message 4 of 6

I'm definitely no expert on the C syntax, but the form of your function call to configure the timing and the buffer looks right to my very untrained eye.  Nevertheless, the data you posted leads me to suspect the syntax is wrong.

 

Here's what it looks like to me: when you increase the variable by a factor of 10, you read 1/10 as many cycles of an unchanging (or *is* it?) waveform signal.   It sure looks like the configuration is changing the *sample rate* by a factor of 10, either instead of or in addition to the buffer size.  You can test my theory by trying some other values for 'testSizeVariable', especially some that aren't integer multiples or divisors of the numbers you've tried previously.
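
If the C API exposes the same properties LabVIEW does, you could also read back what the task actually settled on after configuring. Untested sketch on my part, assuming DAQmxGetSampClkRate and DAQmxGetBufInputBufSize are the right accessors:

// Read back the coerced sample rate and the actual buffer size to see
// which one 'testSizeVariable' is really moving.
float64 actualRate = 0.0;
uInt32 actualBufSize = 0;
DAQmxErrChk(DAQmxCfgSampClkTiming(taskHandle, "", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, testSizeVariable));
DAQmxErrChk(DAQmxGetSampClkRate(taskHandle, &actualRate));
DAQmxErrChk(DAQmxGetBufInputBufSize(taskHandle, &actualBufSize));
printf("actual rate = %g Hz, actual buffer = %u samples/chan\n", actualRate, (unsigned)actualBufSize);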

 

There are only two other guesses I'd even think about checking into:

(1) Maybe this is an artifact of using a simulated device?  I don't know how DAQmx decides what to make a simulated AI signal look like, or how that decision may vary with your task config parameters.  So there's a chance that you see a difference because DAQmx is deciding to show you glimpses of two different sine waves.  Maybe they just decide to give you a signal that produces a fixed # of cycles in whatever buffer size you designate.  Thus with a bigger buffer and the same # of points captured, you see fewer of the sine cycles.

(2) This is pretty unlikely, and almost definitely impossible with simulated devices, but in the more general case if I saw something like this I would consider whether my data was the aliased result of a much higher frequency sine wave.  Again, I doubt this is the problem for you right now with simulated devices -- just file the idea away as a tip for a rainy day.

 

-Kevin P

Message 5 of 6
Solution
Accepted by topic author antisheep

This is actually a trait of how simulated devices work.  We load one period of a noisy sine wave across the full buffer, so the behavior you are seeing is expected.
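
Concretely, the simulated waveform's period works out to bufferSize / sampleRate: at your 10 kS/s rate, a 10,000-sample buffer gives a 1 s period and a 100,000-sample buffer gives a 10 s period, which is the 10x stretch you saw between the two graphs.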

Seth B.
Principal Test Engineer | National Instruments
Certified LabVIEW Architect
Certified TestStand Architect
Message 6 of 6