


Huge latency in data acquisition

I have a datalogging system of approximately 120 channels and am experiencing a latency of around 4-5 seconds in the data (this can be seen on screen and when analysing results). Being rather new at this, I would welcome any suggestions on mistakes that newbies like us tend to make. The rig is representative of a fuel supply system and contains pumps, controllable valves, pressure/flow transducers, etc. Sampling rate is 10 Hz.
Using v7.1, NI-DAQmx hardware - PCI-6733/PCI-6229/PCI-6224*2. Channels: digital in = 66; digital out = 26; analogue in = 21; analogue out = 12.
Any help/ pointers are greatly appreciated.
0 Kudos
Message 1 of 10
Please explain how you are determining this latency.

One thing that newbies sometimes do is use the "simple" VIs too much. While simple to use, they do EVERYTHING, most of which doesn't need RE-DOing every sample.

In other words, you don't need to INITIALIZE a DAQ operation but once. You don't need to TERMINATE it but once. If you are initializing the DAQ every sample, then it has to go thru your 120 channels EVERY TIME, and configure the board EVERY TIME, and that's a great big waste.

You want to CONFIGURE your DAQ ONCE, then sample as many times as you need, then TERMINATE it ONCE.

If you want to fetch data periodically while continuing to acquire it, set your CONFIGURE options to CONTINUOUS ACQUISITION. You can then read data whenever you want, while it's still coming in.
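That configure-once / read-many / terminate-once structure can be sketched in plain Python. The function names here are hypothetical stand-ins for the driver calls, not the actual NI-DAQmx API:

```python
# Hypothetical stand-ins for the DAQ driver calls -- illustrative names,
# not the real NI-DAQmx API.
def configure_daq(channels):
    """Expensive board configuration: do this ONCE, before the loop."""
    return {"channels": channels, "configured": True}

def read_sample(task):
    """Cheap per-iteration read of one scan of all channels."""
    return [0.0] * len(task["channels"])

def terminate_daq(task):
    """Tear the task down ONCE, after the loop."""
    task["configured"] = False

# Configure ONCE, sample as many times as you need, terminate ONCE.
task = configure_daq(["ai0", "ai1", "ai2"])
samples = [read_sample(task) for _ in range(100)]
terminate_daq(task)
```

Calling `configure_daq` inside the loop is exactly the per-sample re-initialization described above; hoisting it out of the loop is the fix.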

I do 150+ channels at 600 Hz, your situation should be no problem for the hardware to handle.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.

Blog for (mostly LabVIEW) programmers: Tips And Tricks

0 Kudos
Message 2 of 10

Hi Paul,


What CoastalMaineBird has suggested is a likely reason for the latency in your readings. Placing the open and close functions inside the borders of a while loop means they are re-executed on every iteration, which creates overhead. The normal process model for reading is


Open > read > close


If all of this code sits inside the while loop it can be slow and introduce latency. If instead you open once, poll only the read operation inside the loop, and close afterwards - i.e. keep the open and close outside the while loop - it is much more efficient.


There are other ways you can programmatically create such overheads, and in most cases there are good programming styles/practices you can adopt to avoid running into these issues. It would help us to see how you are performing the acquisition, so please attach your VI so we can inspect the code.


Such programmatic style guidelines and good practices are discussed in our very well rated LabVIEW courses.


Kind Regards,



0 Kudos
Message 3 of 10
I would also like to plug the hundred or so DAQmx examples which ship with LV. While they don't cover a system with this many channels and this many different I/O types, they illustrate what the others have discussed about setting up the program.
0 Kudos
Message 4 of 10
Thanks for your replies - they have been useful. I have structured the code as suggested: open > read/write > close, with the read/write inside the while loop. I am writing to a data log file, and this certainly slows the loop down, but the main problem is that although the time stamps are incremental, the data clearly is not. For example, one of the test runs fills the tanks while the flow and level
are logged, and there are clear jumps in the data - almost like dropouts. I have developed a user interface which uses a number of graphs/plots, dials and sliders, but even when these are stripped out the jumps in the data still occur.

Thanks again for your help.

0 Kudos
Message 5 of 10
1... Perhaps you could post some code for us to look at.

2... Start with a version of your code that does nothing but the DAQ. Configure the thing ONCE, use a FOR loop to read the data, say 100 times. Each time, do NOTHING with the data except auto-index it into an output array. Close the DAQ when the loop is done, and graph some part of the data. See if the "jumps" in the data still show up. If they DO, then your data really is jumping, or your DAQ is not configured correctly to capture what you're feeding it. If they DON'T, then perhaps you're overloading the CPU with file writing, graphing, other processing, etc.
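Step 2 can be sketched in plain Python. Here `read_scan` is a hypothetical placeholder for the real DAQ read, and a smooth ramp stands in for real data:

```python
# Minimal diagnostic: an acquire-only loop auto-indexing into an array,
# then a check of sample-to-sample deltas for "jumps".
def read_scan(i):
    """Hypothetical stand-in for one DAQ read; returns a smooth ramp."""
    return float(i)

data = [read_scan(i) for i in range(100)]  # FOR loop: acquire, nothing else
deltas = [b - a for a, b in zip(data, data[1:])]
jump_threshold = 5.0
jumps = [i for i, d in enumerate(deltas) if abs(d) > jump_threshold]
# With clean data `jumps` is empty; real dropouts would show up as
# indices where the delta exceeds the threshold.
```

If the acquire-only version shows jumps, the problem is in the acquisition or its configuration; if not, look at the processing side.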

3... Use TOP, or if you're on Windows, the TASK MANAGER, to see where your CPU usage is. 120 channels at 10 Hz should be no sweat for a modern CPU. If your CPU usage is near 100%, then you're not organized as well as you could be.

One of the things I have done (when I needed to process BLOCKS of data, i.e. multiple samples) is have TWO separate functions in a WHILE loop: function #1 acquires data and posts it into a shift register (right side); function #2 retrieves data from the shift register (left side) and processes, saves, graphs it, whatever.


That lets you process data WHILE acquiring it. Acquiring the data doesn't require much CPU power, but it does require time spent waiting for the clock to tick over. The typical brute-force scheme says ACQUIRE, PROCESS, ACQUIRE, PROCESS, ACQUIRE, PROCESS. That means that while you're acquiring, you're NOT processing, and while you're processing, you're NOT acquiring. You can overlap those functions.
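The two-function overlap above can be sketched as a producer/consumer pair in Python, with a `queue.Queue` standing in for the shift register and trivial stand-ins for the acquisition and processing:

```python
import queue
import threading

q = queue.Queue()
N = 50
results = []

def producer():
    """Stand-in for the acquisition loop: posts each scan as it arrives."""
    for i in range(N):
        q.put(i)      # plays the role of the shift register / queue
    q.put(None)       # sentinel: acquisition finished

def consumer():
    """Processes, saves, graphs -- runs WHILE the producer keeps acquiring."""
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real processing

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

In LabVIEW terms, the queue decouples the acquisition loop from the processing loop, so a slow file write never stalls a read.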

Another thing to consider is letting NI-DAQ buffer the data for you; that's what it's there for. If you tell it to acquire 100 samples at 10 Hz, then it will do it. You can read them out and process them when time is available. It's perfectly valid to ask for one acquisition of 100 samples, then ask for 100 DAQ READS to process them one at a time.
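That buffered pattern can be sketched in Python, with a plain list standing in for the driver's buffer and `daq_read` as a hypothetical placeholder for the real read call:

```python
# One acquisition fills a 100-sample buffer; 100 individual reads then
# drain it whenever time is available. The list stands in for the
# driver's FIFO -- daq_read is NOT the real NI-DAQ call.
buffer = [i * 0.1 for i in range(100)]  # pretend the board acquired these at 10 Hz

def daq_read(buf, n):
    """Hypothetical read: pop the next n samples off the front of the buffer."""
    out = buf[:n]
    del buf[:n]
    return out

processed = []
while buffer:
    sample = daq_read(buffer, 1)   # 100 reads, one sample at a time
    processed.extend(sample)
```

The acquisition's own clock keeps the samples evenly spaced; the reads only have to keep up on average, not in lock-step.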


Steve Bird
Culverson Software - Elegant software that is a pleasure to use.

Blog for (mostly LabVIEW) programmers: Tips And Tricks

0 Kudos
Message 6 of 10
Thanks again for all replies so far.

Figuring that the overhead might have come from the rather complex front-end GUI, I stripped the code down to something somewhat more basic.

Playing about with the stripped-down code made absolutely no difference.

So I started to investigate the way in which the data was being sampled - the analogue input channels were set to 'continuous samples' so I changed this to 'hardware timed single point' and it seems to have solved the problem.

Any suggestions on what I have done and what difference this has made?

Thanks again.

0 Kudos
Message 7 of 10
Can you post your simplified code?
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.

Blog for (mostly LabVIEW) programmers: Tips And Tricks

0 Kudos
Message 8 of 10
I have found that DAQ latency comes from one of two sources: initializing the DAQ tasks (which should only be done once) or processing latencies. The processing latencies are resolved with architecture changes. DAQ should be in its own thread, and the data should be sent to a consumer thread for processing. Processing also includes file I/O and logging (this can kill a DAQ loop). The GUI should also be event driven and separated from the DAQ thread. This architecture allows for maximum responsiveness from your collection and a more balanced system.
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
LabVIEW 4.0-2013, RT, Vision, FPGA
0 Kudos
Message 9 of 10
Will post code on Monday now, as I have to do some other things just now.

Thanks anyhow.

0 Kudos
Message 10 of 10