Why would my time stamp readings skip?

Hi,

I'm using LabVIEW 7.1.

I've created a VI data logger which reads temperatures from 8 RTDs and logs them against a time stamp (in seconds), which is converted to a double-precision floating-point numeric. The data is read continuously at 100 Hz, and a mean value of 100 readings is stored at a frequency specified by the user (normally 1 reading per RTD every 5 minutes). After a specified length of time (again dictated by the user), the test stops and the data is stored in a spreadsheet.

This all seems to work fine: the correct number of data points is stored, and the temperatures all look correct. However, the time stamp skips a time value from time to time. This is most strange, as the test takes the correct length of time and displays the correct time while running (a graph on the VI front panel shows what data is being collected); yet when the spreadsheet is viewed in Excel or opened in a simple display VI, the time jumps.

This shows itself in a different manner when using the same VI to log data at shorter intervals. If I log every 15 seconds, the time stamp in the spreadsheet holds the same value for several readings, then jumps to a greater value, which it maintains for a few readings before jumping again.

Does anyone have (a) any idea what I'm trying to explain, and if so, (b) any idea how I can correct it?

I need to log the time and date fairly accurately as I plan to run this alongside stand-alone data loggers.

If I need to attach a copy of my VI, please let me know and I will try to do so asap.

Many thanks for reading this.

Ellen
***Of all the things I've lost, I miss my mind the most!***
Message 1 of 10
This may be due to the way the data are saved. Check whether the double used to hold the time is being converted to a single just before writing to disk (that would produce time jumps of 256 s). Also have a look at the number of digits used when saving the data in ASCII format (you need 9 significant digits in exponential format to reach a 1 s resolution).
You may have to dig into the subVIs to find it.
The solution will then be to modify the subVIs so that the conversion is to the right format.
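To see the arithmetic outside LabVIEW, here is a quick sketch in Python (purely illustrative; the 3.2e9 figure is an assumed seconds-since-1904 value for a date around 2005):

    import numpy as np

    t = 3.2e9  # approx. seconds since 1904-01-01 for a date in 2005

    # A single-precision float carries a 24-bit significand, so its
    # spacing (ulp) near 3.2e9 is 2**(31 - 23) = 256 seconds.
    print(np.spacing(np.float32(t)))      # -> 256.0

    # Logging every 15 s therefore produces runs of identical time
    # stamps that jump by 256 s at a time:
    for k in range(0, 150, 15):
        print(np.float32(t + k))          # repeats, then jumps by 256

    # Too few ASCII digits quantizes the time in the same way:
    print("%.6e" % (t + 15))              # '3.200000e+09' -- the 15 s is lost
    print("%.15g" % (t + 15))             # '3200000015' -- 1 s resolution kept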

Hope this helps

CC
Chilly Charly    (aka CC)

Message 2 of 10
Hi Charly,

Thank you so much for replying as quickly as you did.

I figured that the DBL/SGL conversion was probably the problem about 5 minutes after posting; however, I couldn't find where it might be. It's so nice to have confirmation.

I'm trying to sort it now, and I think I've found the root of it. For anyone who views this later hoping to answer the same query: open up the 'Write to Spreadsheet File' subVI and change the representation of the elements in the 2D or 1D array (depending on which one you're using) from SGL to DBL, then save this as a subVI for later use. You will also need to do the same thing with 'Read From Spreadsheet File' should you wish to read the data back at a later date.

Ellen
Message 3 of 10
The timestamp is actually a 128-bit fixed-point number, with 64 bits of integer and 64 bits of fraction, representing the number of seconds since midnight, January 1, 1904, GMT. A double has only 53 bits of precision, so you lose a lot in the conversion. Unfortunately, you lose it where you need it: in the fraction, because the integer portion consumes a significant share of the double's resolution bits.

One way around the problem is to take a timestamp at the start of your data and subtract it from the succeeding timestamps. This removes most of the integer portion and gives you your resolution back where you need it. Unfortunately, the LabVIEW subtract primitive does not handle timestamps properly. However, there is a timestamp subtract buried in LabVIEW (used by the LabVIEW Measurement File Express VIs for the same reason) that will do what you want. You can find it in /vi.lib/waveform/TSOps.llb. Finally, you need to save 15 digits to get the full resolution of a double.
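To put rough numbers on this, here is a sketch in Python rather than LabVIEW (3.2e9 is an assumed seconds-since-1904 value):

    import math

    t0 = 3.2e9              # absolute seconds since 1904, held in a double
    # The spacing of a double near 3.2e9 is 2**(31 - 52), about half a
    # microsecond, so the nanosecond fraction is simply unrepresentable.
    print(math.ulp(t0))     # -> ~4.77e-07

    # Subtract the first timestamp and the magnitude stays small, so the
    # double's 53 bits land on the fraction where they're needed:
    dt = 12.000000001       # 12 s plus 1 ns, relative to the start
    print(math.ulp(dt))     # -> ~1.78e-15, nanoseconds easily resolved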

If you really want accuracy, you can type cast the timestamp into a cluster of four U32s and directly manipulate the integers. The TSOps.llb functions are examples of this. This will give you a lot of control, but is not a trivial solution.
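For illustration only, here is roughly what that typecast amounts to, sketched in Python (the helper name and the seconds/fraction arguments are mine, not anything from TSOps.llb):

    def timestamp_as_u32s(seconds, fraction):
        """Model a LabVIEW-style timestamp (64-bit integer seconds plus a
        64-bit binary fraction) as four U32s, most significant first."""
        frac_bits = int(fraction * 2**64)     # fraction -> 64-bit integer
        raw = (seconds << 64) | frac_bits     # the full 128-bit value
        return [(raw >> s) & 0xFFFFFFFF for s in (96, 64, 32, 0)]

    print(timestamp_as_u32s(3_200_000_000, 0.5))
    # -> [0, 3200000000, 2147483648, 0]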

Good luck.
Message 4 of 10
Just chiming in here -

This was the first time I'd seen these VIs buried in the vi.lib\waveform folder. I'm a little mystified about why they're necessary, and puzzled by your comment that the 'Subtract' primitive doesn't work with timestamps. I just checked my LabVIEW 7.1 installation and I get the following results:

If the minuend is a timestamp, and the subtrahend is (or is promoted to) a floating-point number of seconds, the result is a timestamp representing the original timestamp offset by the number of seconds.

If the minuend AND subtrahend are timestamps, the result is a double-precision float which is the difference in seconds.

Beyond that, I can't think of another valid subtraction operation - it's meaningless to subtract a timestamp from a double (and LV enforces this by displaying broken wires). The VIs in the vi.lib\waveform folder also include an addition operation which adds two timestamps, which also seems nonsensical to me (they're absolute time values - what does the result represent?). The addition primitive seems to handle all the other cases (and again LV forbids the addition primitive from adding two timestamps by displaying broken wires).
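To make those wiring rules concrete, here is a loose analogy in Python (my own toy model, nothing like LabVIEW's actual implementation):

    from fractions import Fraction

    class Timestamp:
        """Toy model of the timestamp wiring rules described above."""
        def __init__(self, seconds):
            self.seconds = Fraction(seconds)   # exact absolute time

        def __sub__(self, other):
            if isinstance(other, Timestamp):   # ts - ts -> DBL delta
                return float(self.seconds - other.seconds)
            return Timestamp(self.seconds - Fraction(other))  # ts - DBL -> ts

        def __add__(self, other):
            if isinstance(other, Timestamp):   # ts + ts: broken wire
                raise TypeError("adding two absolute times is meaningless")
            return Timestamp(self.seconds + Fraction(other))  # ts + DBL -> ts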

Am I missing something obvious?

Best regards,

Dave
David Boyd
Sr. Test Engineer
Abbott Labs
(lapsed) Certified LabVIEW Developer
Message 5 of 10
If you look at the diagram, you will notice that subtracting two timestamps puts coercion dots on the inputs to the subtract primitive. I always assumed this meant the timestamps were coerced to doubles before being subtracted. I was wrong. I checked, and the subtract does work correctly: you get a nanosecond difference if there is only a nanosecond difference between two "normal" timestamps (about three orders of magnitude finer than a double can resolve at that magnitude). The type change is still annoying, although I can see why it was done.

I disagree with your statement that it is meaningless to subtract a timestamp from a double. Seconds is seconds. If you have a timestamp represented by a double coming from some GPIB instrument and you want to subtract from it the timestamp from a DAQ device, that is a valid operation. You can easily work around the problem with type cast operators. I will freely admit that the relative accuracy of such timestamps is probably suspect unless a lot of care is taken.
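Here is a quick Python illustration of the distinction (a sketch; exact rational numbers stand in for the 128-bit fixed-point values):

    from fractions import Fraction

    # Two timestamps one nanosecond apart, ~101 years after 1904:
    a = Fraction(3_200_000_000) + Fraction(1, 10**9)
    b = Fraction(3_200_000_000)

    print(float(a - b))         # 1e-09: an exact subtract keeps the difference
    print(float(a) - float(b))  # 0.0:   coercing to double first loses it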
Message 6 of 10
Yes, I agree with you that the coercion dots are misleading. And I see what you're saying about subtracting a timestamp from a double: IF the double is assumed to represent 'seconds since 1904' (IOW, the prior method for representing timestamps), then the subtraction makes sense. Though I guess you could/should explicitly up-cast such a double to a true timestamp before wiring it as the minuend.

I'm still completely baffled by the library routine from NI which ADDS two timestamps by mucking around with their internals - there's no sense in adding two values which are supposed to represent absolute time. I guess it's only good for people who want to misuse the timestamp as a general purpose 128-bit fixed-point numeric container.

Best regards,

Dave
Message 7 of 10

@David Boyd wrote:
I guess it's only good for people who want to misuse the timestamp as a general purpose 128-bit fixed-point numeric container.

In rereading my own post, I realized that the above text might've been taken as offensive or insulting. I didn't mean it to sound so, and I want to apologize to you for giving that impression.

Just to be clear on my thinking: when NI introduced the timestamp datatype, they described it as suitable for one specific purpose, namely representing absolute times over a greater epoch than the existing floating-point representation while still providing nanosecond resolution. If I understand it correctly, time-math operations which require or produce delta-T values still best represent those delta-Ts with floating-point doubles, which have ample resolution for nanoseconds as long as the magnitudes of the deltas aren't spanning, I don't know, years or decades. This is why the NI libraries don't make sense to me: they're doing things with timestamps that aren't supposed to be done! Also, NI is free at any time to change the underlying representation of the timestamp datatype, which would not break the add and subtract primitive operations, but would seriously break the library VIs that depend on the square-peg-round-hole typecast to the cluster of U32s.
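(A quick Python sanity check on that: a decade-long delta-T held in a double resolves to roughly 60 ns, so nanosecond precision really does start to slip only at that scale.)

    import math

    decade = 10 * 365.25 * 86400   # a ten-year delta-T, ~3.16e8 s
    print(math.ulp(decade))        # -> ~5.96e-08 s, still under 0.1 us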

All the best,

Dave
Message 8 of 10
David, your answer lies in a bit of NI history. The low-level timestamp subroutines were originally written in LabVIEW 6.1 to support timestamps for the NI-HWS file format. This was a new file format at the time, and we felt that, even though LabVIEW did not yet support timestamps, it would in the future, and we would rather not change the HWS file format again if we could avoid it. They crept into vi.lib due to their use in the LVM save and store Express VIs.

When those VIs were being developed for LabVIEW 7.0, the primitives had not yet been modified to support timestamps properly, and it was unclear whether they would be for 7.0. The primitives did make it into LabVIEW 7.0, but the supporting VIs were never changed. This was due to inertia (why fix it if it works) and the fact that these VIs are still the only way to do an operation on two timestamps and get a timestamp back. They were never exposed in the palette for exactly the reason you mention: they are very dependent on the timestamp format, which may change.

Hope this clears things up for you a bit.

Damien Gray
Senior Software Engineer
National Instruments
Message 9 of 10
Thanks, Damien, for this excellent explanation. I hadn't realized till now that you work for NI. And clearly I've been adding more noise than signal to this thread!

All the best,

Dave
Message 10 of 10