I don't have LV on my network PC, so I can't look at your attachment. However, the behavior you report doesn't surprise me. The key to understanding it is to realize that when there's a delay caused by processing/writing, it's the *next* data acq time that is reduced, not the previous one.
The reason is that the data acq buffer keeps being filled by the hw even while you do your processing/writing. So when you next ask to read a set of data, the hw already has a "head start," and your additional waiting time is reduced by the size of that head start. Each hw-timed read keeps your software synchronized by completing at exact 1000 msec intervals.
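
Since I can't post a VI from here, here's a minimal sketch of the idea in Python -- purely a timing simulation, no actual DAQ calls. The 1 kHz rate and 1000-sample reads are assumptions I picked to match your 1000 msec loop:

    import time

    # Assumed numbers to mirror your setup: 1 kHz sample rate, 1000
    # samples per read, so each read spans exactly 1000 msec of hw time.
    SAMPLE_RATE = 1000          # samples/sec (assumption)
    SAMPLES_PER_READ = 1000     # samples per read (assumption)

    def read_chunk(acq_start, reads_done):
        # The hw clock never pauses: chunk N is complete at a fixed
        # absolute time. If we arrive late (because processing/writing
        # ran long), the remaining wait shrinks by that head start.
        ready_at = acq_start + (reads_done + 1) * SAMPLES_PER_READ / SAMPLE_RATE
        wait = ready_at - time.monotonic()
        if wait > 0:
            time.sleep(wait)

    acq_start = time.monotonic()
    for i in range(4):
        t0 = time.monotonic()
        read_chunk(acq_start, i)     # interval 1: data acq time
        t1 = time.monotonic()
        time.sleep(0.200)            # intervals 2+3: fake processing/writing
        t2 = time.monotonic()
        print(f"read {i}: acq {(t1 - t0) * 1000:6.1f} ms, "
              f"proc/write {(t2 - t1) * 1000:5.1f} ms")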
I'd predict that the very first measurement of data acq time will be independent of your processing/writing time, and that you can always sum time intervals 2 + 3 + (next) 1 and get a constant. You can test this by adding small artificial delays to the processing/writing step.
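
With the 200 msec artificial delay in the sketch above, the printout would look roughly like:

    read 0: acq 1000.0 ms, proc/write 200.0 ms
    read 1: acq  800.0 ms, proc/write 200.0 ms
    read 2: acq  800.0 ms, proc/write 200.0 ms

Only the first read waits the full 1000 msec; every later one waits about 1000 - 200 = 800 msec, so intervals 2 + 3 + (next) 1 stay pinned at 1000.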
Hope this helps,
CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW?
(Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).