Multifunction DAQ

Graph displaying latency: why?

Hi all,

I'm having a small problem with my experiment, which consists of: a laser providing an analog signal, a connector block with the laser input on ACH0, a cable from there to the NI-DAQ PCI card inside the computer, and finally LabVIEW 8.02. I wrote a very simple program to acquire the analog signal from the laser through the connector block. The card is a DAQ PCI-6281 and the connector block is a BNC-2090. The sampling frequency is 100 Hz. I previously had a problem with the card that led us to reset it to its factory parameters. The experiment works fairly well, and we can see the laser signal on the graph. BUT:

What we want is to observe the signal from the laser for 30 s. That works. The problem is: if we set the sampling frequency to 1000 Hz, the refresh rate on the graph is very good and we see the signal changing continuously. If we set the sampling frequency to 100 Hz (the one that interests us), the refresh rate is poor: the graph only updates once per second. To fix this, we changed the "number of samples to read" in the DAQ read function from 100 to 10 and left the sampling frequency at 100 Hz. That almost fixes the problem, but right at the beginning of the acquisition there is a sort of "speedup" of the signal on the graph, which becomes normal after about 1 s. We'd like to avoid this, of course!

Could someone please explain the reason for this, and how it could be fixed?

Thank you VERY much in advance.

Julien
Message 1 of 3
Hi Julianito,
 
It's easy to figure out why; it's harder to find a way to explain it simply 😉
OK, first, let's see what happens to our data during an acquisition in continuous mode.
1/ Data are acquired and stored in the FIFO (the card's onboard buffer) at the acquisition rate.
2/ Data are then transferred to another buffer in the computer's memory (RAM), the one you can size with DAQmx property nodes.
3/ When the system is ready (i.e. when the processor is free), the data are passed to LabVIEW's buffer (that's why it is called the double-buffer technique).
4/ LabVIEW displays them on your graph.
 
So, your data are displayed with a delay...
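(As an aside, the buffer from step 2 can also be sized explicitly. In LabVIEW that is a DAQmx Buffer property node; for readers using the text APIs, a minimal sketch with the DAQmx C library is below, with a placeholder size.)

#include <NIDAQmx.h>

/* Sketch only: explicitly size the host-side (RAM) buffer from step 2.
   In LabVIEW this is a DAQmx Buffer property node; in the C API it is
   DAQmxCfgInputBuffer. The 10,000-sample size is just a placeholder. */
static int32 size_host_buffer(TaskHandle task)
{
    /* Reserve room for 10,000 samples per channel in PC memory. */
    return DAQmxCfgInputBuffer(task, 10000);
}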
 
Secondly, let's run through it with your inputs: one channel, 1000 Hz, continuous, reading 100 points from the buffer each time and sending them to the graph (see the sketch further below).
 A --> The points pass through this whole chain and the graph refreshes once every 100 points in LabVIEW's buffer, i.e. every 100/1000 seconds. That is a refresh every 100 ms, too fast for the eye to notice any stepping.
 B --> The same at 100 Hz: the graph refreshes every 100/100 seconds, i.e. once per second...
 C --> The same again, but reading only 10 points from the buffer at a time: we get the same refresh period as in case A.
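To make the arithmetic concrete, here is a minimal continuous-acquisition sketch using the DAQmx C API (the LabVIEW VIs map onto these calls almost one-for-one). The device name "Dev1/ai0" and the voltage range are placeholders, not your actual settings; the point is simply that the display update period equals samples-per-read divided by sample rate.

#include <stdio.h>
#include <NIDAQmx.h>

/* Minimal continuous-acquisition sketch (DAQmx C API), error handling kept to
   a bare minimum. Each DAQmxReadAnalogF64 call returns once samplesPerRead new
   samples are available, so the update period is samplesPerRead / sampleRate:
   100/1000 Hz = 100 ms (case A), 100/100 Hz = 1 s (case B), 10/100 Hz = 100 ms (case C). */
int main(void)
{
    const double sampleRate     = 100.0;  /* Hz */
    const int32  samplesPerRead = 10;     /* points fetched per loop iteration */
    float64      data[10];
    int32        read = 0;
    TaskHandle   task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", sampleRate, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);

    for (int i = 0; i < 300; i++) {       /* 300 reads x 10 samples = 30 s at 100 Hz */
        if (DAQmxReadAnalogF64(task, samplesPerRead, 10.0,
                               DAQmx_Val_GroupByChannel, data,
                               samplesPerRead, &read, NULL) < 0)
            break;
        printf("got %d samples\n", (int)read);  /* one graph update in LabVIEW */
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}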
 
Your speedup at the beginning could be due to several things. Check the end of your task: are you deallocating resources correctly?
Are the values at the beginning correct? If they are not, maybe they are coming from the previous acquisition and had not been read yet.
 
Please check all this and keep us informed.
 
Regards,
BRAUD Jean-Philippe
Field Sales Engineer - Nord et Centre France
LabVIEW Certified Developer
Message 2 of 3
Basically, what is happening is that there is quite a bit of overhead in starting a task. It takes a little more time until everything finishes up and we start requesting data. So the first few reads have data immediately available in the buffer: they return almost instantaneously. After we dry those up, each further read actually has to wait until the data are available from the device (100 ms in both of your use cases), but from there on the reads come at a regular pace.

I agree that you should double-check that you're clearing all your tasks when you're done so you aren't leaking any (they'll go away when you close LabVIEW, but it's basically a memory leak until then). Also, I suggest using the "DAQmx Control Task" VI and explicitly committing before you start your task. This gets a lot of the overhead out of the way before you call start.
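In case it helps, here is a rough sketch of "commit before start" in the DAQmx C API; the "DAQmx Control Task" VI corresponds to DAQmxTaskControl, and the task handle is assumed to be already configured.

#include <NIDAQmx.h>

/* Sketch: reserve and program the hardware ahead of time, so that
   DAQmxStartTask only has to arm the acquisition. 'task' is assumed
   to be a fully configured AI task. */
static int32 commit_then_start(TaskHandle task)
{
    int32 err = DAQmxTaskControl(task, DAQmx_Val_Task_Commit);  /* pay the setup cost now */
    if (err < 0)
        return err;
    return DAQmxStartTask(task);  /* the start call itself is now much cheaper */
}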

If you're still seeing issues, one sure way to make this go away is to use a Start Trigger that isn't pulsed until after you start the AI task. I don't know how feasible that is for you, though.
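For completeness, a digital-edge start trigger is a single call in the C API (the DAQmx Trigger VI or property node in LabVIEW); "/Dev1/PFI0" below is only an example trigger line.

#include <NIDAQmx.h>

/* Sketch: make the AI task wait for a digital rising edge, so sampling does
   not begin until an external pulse arrives after the task has been started.
   "/Dev1/PFI0" is a placeholder trigger source. */
static int32 arm_with_start_trigger(TaskHandle task)
{
    return DAQmxCfgDigEdgeStartTrig(task, "/Dev1/PFI0", DAQmx_Val_Rising);
}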
------
Zach Hindes
NI R&D
Message 3 of 3