LabVIEW


Write to file at exact ms intervals

Hi

 

I'm having trouble with a part of my code that writes data to a text file. Attached are a snapshot of the code and an example text file.

 

In this particular file I used a log interval of 1 second. As you can see, the point in the second at which it writes the data drifts upwards, and there also seem to be some hiccups. I would like to write the data at exactly 1000 ms intervals. To illustrate:

 

13:30:32,0000
13:30:32,0000
13:30:33,0000
13:30:34,0000
13:30:35,0000

 

or

 

13:30:32,3423
13:30:32,3423
13:30:33,3423
13:30:34,3423
13:30:35,3423

 

At what point in the second it writes is not important, as long as it is consistent.

 

Do I have to subtract the while loop's execution time before setting "dt", and so adjust the loop period on each iteration? If so, how? Any other solutions?
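Since LabVIEW is graphical, the drift-compensation idea in the question can only be sketched in text here. Below is a rough Python sketch of the usual fix, with all names (`INTERVAL`, `elapsed`) purely illustrative: instead of waiting a fixed dt each iteration, wait until an absolute deadline computed from the loop's start time, so per-iteration overhead is absorbed rather than added onto the next period.

```python
import time

INTERVAL = 0.05  # stands in for the loop's "dt"; shortened here so the demo runs quickly

start = time.monotonic()
elapsed = []
for n in range(1, 4):
    # Sleep toward an absolute deadline (start + n * INTERVAL) rather than
    # sleeping a fixed dt, so this iteration's overhead does not accumulate
    # into the next period.
    deadline = start + n * INTERVAL
    time.sleep(max(0.0, deadline - time.monotonic()))
    elapsed.append(time.monotonic() - start)

print(elapsed)  # each entry lands at roughly n * INTERVAL; the error does not grow
```

In LabVIEW terms this corresponds roughly to using "Wait Until Next ms Multiple" inside the loop instead of a plain "Wait (ms)".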

 

Thanks,

John B.

Message 1 of 33

Are you concerned with the first two readings? Or with the 100 µs delta between all subsequent measurements? Or both?

 

I doubt that there is much you can do about the 100 µs delta.

 

What do the FALSE cases look like? Especially the nested one?

Message 2 of 33

The write to file is a function of the OS as well as of your LV code. You will probably never get the level of consistency you are seeking unless you are running under a real-time OS.

 

Do you really want the DATA to be logged at precise intervals? You can take snapshots of the data in your loop, append a time stamp, and write them to a queue. In a parallel loop running at a less precise speed, write the data to the file at the convenience of the loop and the OS (producer/consumer design pattern).
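The producer/consumer pattern described above can't be shown as LabVIEW wires here, so this is a rough text-language sketch in Python of the same idea. The names (`data_q`, `producer`, `consumer`, `log.txt`) and the fixed reading of 229 are illustrative, not from the original VI: the timed loop only samples, stamps, and enqueues, while a separate untimed loop does all file I/O.

```python
import queue
import threading
import time
from datetime import datetime

data_q = queue.Queue()

def producer(n_samples, interval):
    """Timed loop: snapshot the data, append a time stamp, enqueue. No file I/O."""
    start = time.monotonic()
    for i in range(1, n_samples + 1):
        # Sleep toward an absolute deadline so loop overhead does not accumulate.
        time.sleep(max(0.0, start + i * interval - time.monotonic()))
        reading = 229  # placeholder for the real measurement
        data_q.put((datetime.now(), reading))
    data_q.put(None)  # sentinel: tell the consumer to shut down

def consumer(path):
    """Untimed loop: dequeue and write whenever the loop and the OS allow."""
    with open(path, "w") as f:
        while (item := data_q.get()) is not None:
            stamp, value = item
            f.write(f"{stamp:%Y-%m-%d %H:%M:%S},{value}\n")

t = threading.Thread(target=producer, args=(3, 0.05))
t.start()
consumer("log.txt")
t.join()
```

The time stamps in the file reflect when each sample was *taken*, not when it was written, which is the whole point of decoupling the two loops.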

 

Value property nodes are slow because they require a switch to the UI thread.  This may affect your timing also.  Use wires if possible.

 

Lynn

Message 3 of 33

I am worried about the 100 µs delta between all subsequent measurements. It doesn't give me true 1000 ms intervals but rather something like 1001 ms. When using one-second precision in the log file, it will occasionally skip a second.

 

13:30:31
13:30:32
13:30:33
13:30:34
13:30:36

Message 4 of 33

Try introducing a bit more data flow into the code. Generate the timestamp as the very first thing inside the loop, with some data flow around it to make sure nothing executes before it.

 

Timing more accurate than 1 ms is quite tricky to achieve on a non-RT OS.

 

You should expect some jitter with software timing; your average time delta should eventually converge to 1 second.

Message 5 of 33

You would be better off following Lynn's (johnsold) advice. Take a snapshot of the data and append a timestamp. Enqueue it. Use a separate loop to dequeue and write the data. You don't have to write every ms. Since you have a timestamp for each data point, you can write every second or every 10 seconds. You will never get a non-real-time OS to write to a file with ms precision.
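To make the "write every second or every 10 seconds" point concrete, here is a small hypothetical sketch (file name, batch size, and record strings are all made up): because every record already carries its own time stamp, the consumer can buffer several dequeued records and touch the file only every few samples without losing any timing information.

```python
def flush_batch(f, batch):
    """Write all buffered records in one go, then clear the buffer."""
    f.writelines(batch)
    batch.clear()

# Stand-ins for timestamped records dequeued from the producer.
records = [f"2010-07-08 23:06:{44 + i},229\n" for i in range(5)]

BATCH_SIZE = 2
batch = []
with open("batched.txt", "w") as f:
    for rec in records:
        batch.append(rec)
        if len(batch) >= BATCH_SIZE:
            flush_batch(f, batch)
    if batch:  # flush any leftover records on shutdown
        flush_batch(f, batch)
```

The file ends up with every record intact; only the number of write operations changes.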

 

- tbob

Inventor of the WORM Global
Message 6 of 33

Thanks

 

I'll dig into queues then. What surprises me is that it's 1 ms late every time, never 1 ms early.

Message 7 of 33

Don't forget about errors when trying to represent fractions that are not a power of 2, since "0.1" cannot be represented exactly in binary.
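This binary-fraction point can be demonstrated in any language; here is a quick Python illustration (the variable names are made up). Repeatedly adding 0.1 accumulates representation error, while accumulating integer milliseconds and converting once does not.

```python
# 0.1 has no exact binary representation, so repeated addition drifts.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0)  # False: total is 0.9999999999999999

# Accumulating integer milliseconds and dividing once avoids the drift.
ms_total = sum(100 for _ in range(10))
print(ms_total / 1000 == 1.0)  # True
```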

 

When the exact time stamp matters, I use hardware-timed acquisition and let DAQmx provide the time stamps.

 

If the waveform (WF) time stamps aren't tight enough, then use a precise clock source and do everything in multiples of the tick clock.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 8 of 33

OK, so I did some fooling around with the producer/consumer model and got the exact same result:

 

2010-07-08    23:06:44,5293    229
2010-07-08    23:06:45,5294    229
2010-07-08    23:06:46,5295    229
2010-07-08    23:06:47,5296    229

 

Any other last suggestions? What bothers me is that to the user of my software it will look like I missed a sample (because I only use second resolution in the logged time stamps). Every time it rolls over at xx,9999 it will look like this:

 

2010-07-08    23:37:31,9997    229
2010-07-08    23:37:32,9998    229
2010-07-08    23:37:33,9999    229
2010-07-08    23:37:35,0000    229
2010-07-08    23:37:36,0001    229

 

When the ms are not included it will of course just look like 23:37:34 is missing.

 

Someone must have addressed this problem before. It seems too consistent not to have a fix.

Message 9 of 33

It might help to go into 'VI Properties', Category 'Execution', and increase the Priority for the VI.

 

This will give tighter timing for your VI.

 

Warning - 'time critical priority (highest)' will make other apps in the OS seem sluggish or like they are locked up.

 

Thanks,
Jim

Message 10 of 33