I'm having trouble with a part of my code that writes data to a text file. Attached are a snapshot of the code and an example text file.
In this particular file I used a log interval of 1 second. As you can see, the point at which it writes the data shifts upwards, and it also seems to have some hiccups. I would like to write the data at exactly 1000 ms intervals. To illustrate:
At what point of the second it writes is not important, as long as it is consistent.
Do I have to subtract the while loop execution time before setting "dt" and so change the loop period for each iteration? If so, how? Any other solutions?
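In case it helps to pin down what I mean, here is the timing scheme I have in mind as a Python sketch (my actual code is LabVIEW, so the function name and structure here are just illustrative): instead of waiting a fixed 1000 ms each iteration, schedule against a grid of absolute deadlines so the loop's own execution time is subtracted automatically.

```python
import time

def run_timed_loop(period, iterations, work):
    """Fire work() on a fixed grid of absolute deadlines, so the
    loop's own execution time cannot accumulate into drift."""
    timestamps = []
    next_deadline = time.monotonic() + period
    for _ in range(iterations):
        work()                       # read the sensors, write the log line
        timestamps.append(time.monotonic())
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)    # sleep only what is left of the period
        next_deadline += period      # advance the grid, never "now + period"
    return timestamps
```

The key point is the last line: the next deadline is computed from the previous deadline, not from the current time, so a 100 us overshoot in one iteration does not push every later iteration back.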
Are you concerned with the first two readings? Or with the 100 us delta between all subsequent measurements? Or both?
I doubt that there is much you can do about the 100us delta.
What do the FALSE cases look like? Especially the nested one?
The write to file is a function of the OS as well as your LV code. You will probably never get the level of consistency you are seeking unless you are running under a real time OS.
Do you really want the DATA to be logged at precise intervals? You can take snapshots of the data in your loop, append a time stamp, and write them to a queue. In a parallel loop running at a less precise speed, write the data to the file at the convenience of the loop and the OS. (Producer/consumer design pattern.)
Value property nodes are slow because they require a switch to the UI thread. This may affect your timing also. Use wires if possible.
I am worried about the 100us delta between all subsequent measurements. It doesn't give me truly 1000ms intervals but rather something like 1001ms. When using one second precision in the log file it will skip one second occasionally.
Try introducing a bit more data flow into the code. Generate the timestamp as the very first thing you do inside the loop, with some data-flow around it to make sure nothing happens before it.
Timing more accurate than 1 ms is quite tricky to achieve on a non-RT OS.
You should expect some jitter using software timings, your average time delta should converge to 1 second eventually.
You would be better off following Lynn's (johnsold) advice. Take a snapshot of the data and append a timestamp. Enqueue it. Use a separate loop to dequeue and write the data. You don't have to write every ms. Since you have a timestamp for each data, you can write every second or every 10 seconds. You will never get a non real time OS to write to a file with ms precision.
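In text form, the pattern looks roughly like this (a Python stand-in for the two LabVIEW loops plus queue; the function and variable names are made up for illustration): the timed loop only snapshots data and enqueues it, and the file writing happens in a separate, untimed loop.

```python
import queue
import threading
import time

def log_with_producer_consumer(samples, period):
    """Producer/consumer sketch: a timed producer enqueues
    (timestamp, value) pairs; an untimed consumer 'writes' them."""
    data_q = queue.Queue()
    lines = []

    def consumer():
        while True:
            item = data_q.get()
            if item is None:            # sentinel tells the writer to stop
                break
            ts, value = item
            lines.append(f"{ts:.4f}\t{value}")   # real code: write to file

    writer = threading.Thread(target=consumer)
    writer.start()

    next_deadline = time.monotonic() + period
    for i in range(samples):
        data_q.put((time.time(), i))    # timestamp taken at sample time
        time.sleep(max(0.0, next_deadline - time.monotonic()))
        next_deadline += period

    data_q.put(None)                    # flush and stop the writer
    writer.join()
    return lines
```

Because the timestamp is captured in the producer, any delay in the file write no longer shows up in the logged times.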
Don't forget about errors when trying to represent fractions whose denominators are not a power of 2, since "0.1" cannot be represented exactly in binary.
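A quick demonstration in Python (the same IEEE 754 doubles apply in LabVIEW):

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so repeated addition drifts:
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)   # False: the accumulated error is visible
print(Decimal(0.1))   # the value actually stored for "0.1"
```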
When the exact time stamp matters, I use hardware-timed acquisitions and let DAQmx provide the time stamps.
If the WF time stamps aren't tight enough then use a precise clock source and do everything in multiples of the tick clock.
OK, so I did some fooling around with the producer/consumer model and got the exact same result:
2010-07-08 23:06:44,5293 229
2010-07-08 23:06:45,5294 229
2010-07-08 23:06:46,5295 229
2010-07-08 23:06:47,5296 229
Any other last suggestions? What is bothering me is that to the user of my software it will look like I missed a sample (because I only use second resolution in the logged time stamps). Every time it reaches xx,9999 it will look like this:
2010-07-08 23:37:31,9997 229
2010-07-08 23:37:32,9998 229
2010-07-08 23:37:33,9999 229
2010-07-08 23:37:35,0000 229
2010-07-08 23:37:36,0001 229
When the ms are not included it will of course just look like 23:37:34 is missing.
Someone must have addressed this problem before. It seems too consistent not to have a fix.
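One workaround I'm considering for the display side (Python sketch; `format_second` is a hypothetical helper, not anything from my VI): round the timestamp to the nearest whole second before formatting, instead of truncating, so the xx,9999 case logs as the next second and no second appears to be missing.

```python
def format_second(ts):
    """Round a float timestamp to the nearest whole second before
    display, so 33.9999 shows as 34 instead of the truncated 33."""
    return int(ts + 0.5)

print(format_second(33.9999))  # 34: fills the apparent gap
print(format_second(35.0001))  # 35
```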
It might help to go into 'VI Properties', Category 'Execution', and increase the Priority for the VI.
This will give tighter timing for your VI.
Warning - 'time critical priority (highest)' will make other apps in the OS seem sluggish or like they are locked up.