Linux Users


How to solve the delay in data acquisition?

Hi all,

Currently I am using the NI USB 2618 for 12-channel data acquisition on Mandriva 2007 Spring. Following the example "contAcquireNChan.c", I set

chan[] = "Dev1/ai16,Dev1/ai17,Dev1/ai18,Dev1/ai19,Dev1/ai20,Dev1/ai21,Dev1/ai0,Dev1/ai1,Dev1/ai2,Dev1/ai3,Dev1/ai4,Dev1/ai5";

clockSource[] = "OnboardClock";



It works well. However, if I increase sampleRate to 6000, which makes the effective rate of a sampling cycle around 50Hz, a significant delay arises between the data source and the sampled data.

Does this mean NI uses a buffer to store the sampled data, so that if my reading is too slow, the read call returns old data from the buffer instead of the up-to-date data?

What if I want only the up-to-date data and want to discard the old samples?

Many thanks~!

Below are my codes,


  printf("Creating Task...\n");
    DAQmxErrChk (DAQmxBaseCreateTask("",&taskHandle));
    DAQmxErrChk (DAQmxBaseCreateAIVoltageChan(taskHandle,chan,"",DAQmx_Val_Cfg_Default,min,max,DAQmx_Val_Volts,NULL));
    DAQmxErrChk (DAQmxBaseCfgSampClkTiming(taskHandle,clockSource,sampleRate,DAQmx_Val_Rising,DAQmx_Val_ContSamps,samplesPerChan));
    DAQmxErrChk (DAQmxBaseCfgInputBuffer(taskHandle,400000)); /* use a 400,000 samples-per-channel input buffer */
    DAQmxErrChk (DAQmxBaseStartTask(taskHandle));

  printf("begin to acquire data on NI Dev1\n");
    while (1) {
        DAQmxErrChk (DAQmxBaseReadAnalogF64(taskHandle,pointsToRead,timeout,DAQmx_Val_GroupByChannel,raw_data,bufferSize*NUM_OF_CHANNEL,&pointsRead,NULL));
        totalRead += pointsRead;

        /* ... data processing through semaphore and shared memory ... */
    }


Message 1 of 7

Hey GuoQing,

I was hoping someone more experienced in DAQ would answer your question, since I am not a DAQ expert.  Since this question isn't Linux specific you can probably find some answers on the regular NI discussion forums at

The real answer to your question is that the data is buffered.  It is much more efficient to let the USB device buffer the data and transfer it in large chunks to the computer; in fact, this is the only real way to achieve high data throughput.  The downside is that buffering the data also increases the latency between when the data was acquired and when you can process it.  If you want to minimize the latency, you would want to write your program to read only a single data point at a time, but this will greatly decrease the data throughput you can achieve.

If you are concerned about minimizing latency, I should mention that USB is not a good bus to use; PCI or PCIe would be better.  I should also mention that, depending on your latency requirements, you may need a real-time OS like LabVIEW RT, since Linux is not a real-time OS.  You may still be able to use Linux and your USB device, but you will need to determine the maximum acceptable latency for your system and consider the consequences of exceeding it.  For example, if your data acquisition system is controlling a robotic arm with a saw, two-second-old data might cause that saw blade to cut something it wasn't intended to cut.

Shawn Bohrer

National Instruments

Use NI products on Linux? Come join the NI Linux Users Community
Message 2 of 7

Hi GuoQing-

Shawn is correct; you may need to consider another hardware bus if low latency is an important consideration.  Regardless, I want to give some tips based on your code:


     DAQmxBaseCfgInputBuffer(): The input value specifies the number of samples per channel to allocate for the buffer, so you should specify how many samples per channel you want for your buffer backlog.  The driver will scale <number of samples> * <sample width> * <number of channels> internally to allocate a properly-sized buffer.



     DAQmxBaseReadAnalogF64(): As with the buffer configuration, the number of samples to read is specified per channel.  I don't see what value you specify for pointsToRead, but the call will return <pointsToRead> * <sample width> * <number of channels> bytes with each iteration, so you need to make sure your buffer is large enough to accommodate all of that data.  Also, you should know that it will block until the requested number of samples is available.  For example, since your sample rate is 1200 samples/sec, DAQmxBaseReadAnalogF64() can only execute every (pointsToRead / 1200) seconds.  This delay might contribute to what you perceive as latency in operation.

However, some buffering on the host is required to allow for larger throughput from the device as Shawn described.  So you may need to find a "sweet spot" for pointsToRead that allows the sample rate you need with an acceptable loop latency for blocking on the DAQmxBaseReadAnalogF64() call.

Hopefully this helps.

Tom W
National Instruments
Message 3 of 7

Hi Shawn and TomW,

Thank you so much for your suggestions. I will try them and report the progress shortly.

Best Regards,


Message 4 of 7

Hi TomW,

I have another question.

If the sample rate is 1200 samples/sec and the number of channels is 8, will DAQmxBaseReadAnalogF64() execute every (pointsToRead / 1200) seconds or every (pointsToRead / 1200) * 8 seconds?

Thanks in advance.


Message 5 of 7

Hi GuoQing-

The number of samples to read specified to DAQmxBaseRead... is always given in number of samples per channel, so the time to wait/read will always be <number of samples to read> / <sample rate>, regardless of the number of channels in the task.

Tom W
National Instruments
Message 6 of 7

I have a question about DAQ. I am using the Read function (1D, N Channels, N Samples); how do I give multiple channels as the input?


Message 7 of 7