I don't have LV on my network PC, so I can't look at your attachment. However, the behavior you report doesn't surprise me. The key to understanding it is that when there's a delay caused by processing/writing, it's the *next* data acq time that gets reduced, not the previous one.
The reason is that the data acq buffer keeps being filled by the hardware even while you do your processing/writing. So when you next ask to read a set of data, the hardware already has a "head start," and the additional waiting time is reduced by the size of that head start. The hardware read keeps your software synchronized by completing at exact 1000 msec intervals.
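To make the head-start idea concrete, here's a rough timing sketch in plain Python (not LabVIEW; the 1000 samples/sec rate, 1000-sample reads, and delay values are just assumed numbers, not anything from your VI). The read only waits for whatever portion of the buffer the hardware hasn't already filled during the previous iteration's processing:

```python
import time

SAMPLE_RATE = 1000   # samples/sec (assumed)
READ_SIZE   = 1000   # samples per read, i.e. 1000 msec of data (assumed)

def acq_loop(processing_times_s):
    # Instant at which the hw buffer will next hold READ_SIZE fresh samples.
    next_ready = time.perf_counter()
    for proc in processing_times_s:
        next_ready += READ_SIZE / SAMPLE_RATE   # hw fills the buffer regardless of the software
        t0 = time.perf_counter()
        time.sleep(max(0.0, next_ready - t0))   # the read waits only for what's still missing
        acq_wait = time.perf_counter() - t0
        time.sleep(proc)                        # stand-in for the processing/writing step
        print(f"acq wait {acq_wait*1000:6.0f} ms   processing/writing {proc*1000:6.0f} ms")

acq_loop([0.2, 0.4, 0.0])   # artificial processing delays of 200, 400, 0 msec
```

With those delays the printed acq waits come out near 1000, 800, and 600 msec: each read's wait shrinks by however long the previous iteration spent processing, while the read completions themselves stay 1000 msec apart.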
I'd predict that the very first measurement of data acq time will be independent of your processing/writing time, and that you can always sum time intervals 2 + 3 + (next) 1 and get a constant. You can test this by adding small artificial delays in the processing/writing step, as in the sketch below.
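As a sanity check on that prediction, the same sort of Python sketch (again with made-up numbers, not your actual VI or hardware) can inject a few artificial delays and sum each iteration's processing/writing time with the following iteration's acq wait; every sum should land near the 1000 msec buffer period:

```python
import time

PERIOD = 1.0   # seconds of data per read (assumed: 1000 samples at 1000 S/s)

def measure(artificial_delays_s):
    next_ready = time.perf_counter()
    acq_waits = []
    for d in artificial_delays_s:
        next_ready += PERIOD                    # hw keeps filling during processing/writing
        t0 = time.perf_counter()
        time.sleep(max(0.0, next_ready - t0))   # buffered read
        acq_waits.append(time.perf_counter() - t0)
        time.sleep(d)                           # artificial processing/writing delay
    return acq_waits

delays = [0.1, 0.35, 0.05, 0.2]                 # made-up delays in seconds
waits = measure(delays)
for i in range(len(delays) - 1):
    total = (delays[i] + waits[i + 1]) * 1000
    print(f"processing[{i}] + acq_wait[{i+1}] = {total:.0f} ms")   # ~1000 ms each time
```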
Hope this helps,
-Kevin P.