Hi Julianito,
It's easy to figure out why; the hard part is finding a way to explain it simply 😉
OK, first, let's see what happens to our data during an acquisition in continuous mode.
1/ Data are acquired and stored in the FIFO (the card's onboard buffer) at the acquisition rate.
2/ Data are transferred to another buffer in the computer's memory (RAM), the one whose size you can set with the DAQmx property nodes.
3/ When the system is ready (i.e. when the processor is free), it passes the data on to LabVIEW's buffer (that's why this is called the double-buffer technique).
4/ LabVIEW displays the data on your graph.
So your data are displayed with a delay...
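The four steps above can be sketched in plain Python (this is only an illustration of the data flow with made-up function names, not real DAQmx calls):

```python
from collections import deque

def acquire_into_fifo(fifo, samples):
    """Step 1: the card stores new samples in its onboard FIFO."""
    fifo.extend(samples)

def transfer_to_daqmx_buffer(fifo, daqmx_buffer):
    """Step 2: samples move from the FIFO to the RAM buffer."""
    while fifo:
        daqmx_buffer.append(fifo.popleft())

def read_into_labview(daqmx_buffer, points_per_read):
    """Step 3: when the CPU is free, LabVIEW reads one block of points."""
    return [daqmx_buffer.popleft()
            for _ in range(min(points_per_read, len(daqmx_buffer)))]

fifo, daqmx_buffer = deque(), deque()
acquire_into_fifo(fifo, range(1000))           # 1 s of data at 1000 Hz
transfer_to_daqmx_buffer(fifo, daqmx_buffer)
screen = read_into_labview(daqmx_buffer, 100)  # step 4 would plot this block
print(len(screen), len(daqmx_buffer))          # 100 points shown, 900 still waiting
```

The point of the sketch: each read only takes a block out of the middle buffer, so what you see on the graph always lags behind what the card has already acquired.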
Secondly, let's apply this to your inputs: one channel, 1000 Hz, continuous, reading 100 points from the buffer each time and sending them to the graph.
A --> The points pass through this whole chain, and the graph refreshes every 100 points taken from LabVIEW's buffer. So it refreshes every 100/1000 seconds, i.e. every 100 ms... too fast for the human eye to follow.
B --> The same at 100 Hz: the graph refreshes every 100/100 seconds, i.e. once per second...
C --> The same at 100 Hz, but reading only 10 points from the buffer each time: we get the same refresh rate as in case A (10/100 s = 100 ms).
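The arithmetic behind cases A, B and C boils down to one formula: refresh interval = points read per call / sample rate. A quick check in plain Python (the function name is mine, just for illustration):

```python
def refresh_interval_s(points_per_read, sample_rate_hz):
    """Seconds between graph updates: the time the card needs to
    produce one block of points at the given sample rate."""
    return points_per_read / sample_rate_hz

print(refresh_interval_s(100, 1000))  # case A: 0.1 s (100 ms)
print(refresh_interval_s(100, 100))   # case B: 1.0 s
print(refresh_interval_s(10, 100))    # case C: 0.1 s, same as case A
```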
Your speed-up at the beginning could be due to several things. Check the end of your task: are you deallocating resources correctly?
Are the "beginning values" correct? If they are not, maybe they come from the previous acquisition and haven't been read yet.
Please check all this and keep us informed.
Regards,
BRAUD Jean-Philippe
Field Sales Engineer - Nord et Centre France
LabVIEW Certified Developer