Memory layout of timestamp?

Hi all,

I have a question about how LabVIEW stores Timestamps in memory. Based on the timestamp tutorial and the relevant manual page, I expect two sequential 64-bit integers in memory: the first holding whole seconds (I64) and the second holding fractional seconds (U64).


However, when I pass a Timestamp into MoveBlock, I see these in the opposite order (see attached screenshot). But if I pass it into the Type Cast node, I get them in the expected order.




Why do these two approaches give different results? I'm ultimately interested in passing the Timestamp into a Call Library Function Node (CLFN), hence checking with MoveBlock, but I want to make sure my observation is correct.




Message 1 of 5

Quite a few years ago, as I was "getting into" LabVIEW, I got curious about the LabVIEW TimeStamp and wrote a number of routines to "figure it out".  It appears to be saved as a 128-bit quantity, with the top 64 bits being an integer representing the "number of seconds", while the low 64 bits represent "fractions of a second".  What I suspect lies behind the difference between your two representations is the "endian" question.  You can easily show that if you generate an array of, say, 8 U8s running from 0 to 7, when you examine them as an array of U8, they will appear as 00, 01, 02, 03, 04, 05, 06, 07.  If you now Type Cast them to a U64, you'll see 0001020304050607, with the first element in the "most significant" position.
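The same experiment can be sketched in C (helper names are my own, and the comment about the native result assumes a little-endian x86 host):

```c
#include <stdint.h>
#include <string.h>

/* Interpret 8 bytes the way LabVIEW's Type Cast does: big-endian,
   i.e. the first byte in memory is the most significant. */
static uint64_t typecast_be(const uint8_t b[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | b[i];
    return v;
}

/* Interpret the same 8 bytes the way the CPU does natively (what
   MoveBlock would show): reuse the memory as-is. */
static uint64_t native_u64(const uint8_t b[8])
{
    uint64_t v;
    memcpy(&v, b, sizeof v);
    return v;
}
```

For the bytes 00..07, `typecast_be` yields 0x0001020304050607 as described above, while on a little-endian machine the native interpretation reverses the apparent byte order.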


Thus your "Cast" representation shows Seconds first, then the representation for Fractional Seconds.  I'm not at all sure what MoveBlock does, but the TypeCast method gives you the "right answer".  Note that if you use the "Convert to Dbl" function, you will get 3548591130.something seconds, demonstrating that the first of your TypeCast numbers is, indeed, seconds.


Bob Schor

Message 2 of 5

Type Cast will jigger your data and return everything in big-endian format; Intel processors are little-endian.  When NI describes the "memory layout", they are describing it at the LV level, meaning they describe what you see as the result of Type Cast (big-endian) and not what you see at the physical memory layer.  Back in the day there were more big-endian processors and the distinction was not important, but now the opposite is true, and Type Cast results should not be thought of as the physical memory layout, even though the default label of the output would lead a C programmer to think so.


Regardless, one way to directly answer the question "What is the memory layout of a timestamp?" is to use MoveBlock.  That is how you are going to see it inside your CLFN.  My preferred method is to use Flatten To String with 'native' byte order selected; same result.
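Assuming the native layout really is the little-endian 128-bit value discussed in this thread, the C side of a CLFN on an x86 machine could be sketched like this (struct and field names are my own, not an NI definition):

```c
#include <stdint.h>

/* Hypothetical sketch of a LabVIEW timestamp as seen from the C side
   of a Call Library Function Node on a little-endian (x86) machine.
   The fractional part sits at the lower address, matching what
   MoveBlock shows. */
typedef struct {
    uint64_t fraction; /* fractional seconds: low 64 bits of the 128-bit value */
    int64_t  seconds;  /* whole seconds since the LabVIEW epoch: high 64 bits */
} LVTimestamp;

/* Example accessor: combine the two words into seconds as a double.
   2^64 scales the fractional word into [0, 1). */
static double lv_timestamp_to_double(const LVTimestamp *ts)
{
    return (double)ts->seconds + (double)ts->fraction / 18446744073709551616.0;
}
```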

Message 3 of 5

Thanks for the responses! I agree with what you both say, but the thing that was confusing me was this: if LV uses two 64-bit integers to represent the Timestamp, then Type Cast should operate on each independently. I'm on a little-endian machine, so Type Cast (effectively) converts to big-endian, does the cast, then converts back to little-endian. In that case, the type casting should not change the order of the elements.


However, if LV stores the Timestamp internally as a 128-bit integer, as Bob suggested, then casting it to a cluster of two integers will change the order: the type cast converts to a BE 128-bit integer, then separates the high and low parts, then converts those parts back to LE, effectively swapping the elements. In terms of memory layout, this is the important difference between calling it a "128-bit integer" and "two 64-bit integers".
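That distinction can be made concrete in C (helper names are my own): byte-swapping two 64-bit words independently leaves each word in place, while byte-swapping one 128-bit quantity also exchanges the two halves, which is exactly the extra reordering step described above.

```c
#include <stdint.h>

/* Reverse the byte order of a single 64-bit word. */
static uint64_t bswap64(uint64_t v)
{
    v = ((v & 0x00FF00FF00FF00FFULL) << 8)  | ((v >> 8)  & 0x00FF00FF00FF00FFULL);
    v = ((v & 0x0000FFFF0000FFFFULL) << 16) | ((v >> 16) & 0x0000FFFF0000FFFFULL);
    return (v << 32) | (v >> 32);
}

/* A 128-bit quantity held as two 64-bit halves, low half first. */
typedef struct { uint64_t lo, hi; } u128;

/* "Two 64-bit integers": swap each word independently; the halves
   stay where they are. */
static u128 swap_as_two_u64(u128 x)
{
    return (u128){ bswap64(x.lo), bswap64(x.hi) };
}

/* "One 128-bit integer": a full 128-bit byte reversal also exchanges
   the halves -- the reordering seen between MoveBlock and Type Cast. */
static u128 swap_as_u128(u128 x)
{
    return (u128){ bswap64(x.hi), bswap64(x.lo) };
}
```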


Actually, now that I go back and check, the Timestamp tutorial explicitly states

> The LabVIEW timestamp is a 128-bit data type that represents absolute time

So it's all self-consistent, and I just have to remember that when NI talks about the effective memory interpretation, they are probably referring to the BE decomposition, since that's the default for flattened data in LV.




Message 4 of 5

One thing you have to remember is that LabVIEW ALWAYS attempts to represent its flattened data in a platform-independent manner. This is important since LabVIEW runs on both big-endian and little-endian machines. Yes, the big-endian systems nowadays are limited to the PowerPC-based CompactRIO systems running the VxWorks OS, but back in the days when Intel didn't control all desktop computers on earth, there was a strange company with a fruity symbol that also used PowerPC CPUs in all their machines, and their OS used the big-endian format even though the PowerPC CPU can theoretically run either way. Most likely it was decided that way because that company's previous computers used the Motorola 68K CPUs, which were true big-endian-only CPUs.


Basically, of all the platforms LabVIEW has ever run on (including the now obsolete SPARC, PA-RISC, and 68K), only the x86 and possibly the newer ARM-based ones are truly little-endian. Everything else used big-endian, but none of those, except the PPC-based ones in the older cRIO systems, are actively sold anymore.


Therefore all flattened data in LabVIEW is still in big-endian format and will remain so for a lot longer. After all, big-endian is officially the "network byte order", which is supposed to be the default for binary network protocols (unless the developer never worked on anything other than Intel CPUs and doesn't care about the rest of the world). Many large mainframe systems, before the PC took over the world, were also big-endian, so it was really Intel that tried to be different at first.


If LabVIEW flattened the timestamp as you first thought it should, then you couldn't flatten it on a PC, send it over TCP/IP to a PPC cRIO system, and still interpret it there as a valid timestamp.
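That cross-platform round trip can be sketched in C (hypothetical helpers, not NI code): writing both 64-bit words most-significant-byte-first produces the same 16 bytes regardless of the host's endianness, which is what makes the flattened timestamp portable.

```c
#include <stdint.h>

/* Write a 64-bit word most-significant-byte-first ("network" order),
   independent of the host CPU's endianness. */
static void put_u64_be(uint8_t out[8], uint64_t v)
{
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)(v >> (56 - 8 * i));
}

/* Flatten a timestamp's two words to 16 big-endian bytes: seconds
   first, then fraction, matching what Type Cast / Flatten To String
   show by default. */
static void flatten_timestamp(uint8_t out[16], int64_t seconds, uint64_t fraction)
{
    put_u64_be(out,     (uint64_t)seconds);
    put_u64_be(out + 8, fraction);
}
```

A PC and a PPC cRIO system both produce and consume this byte stream identically; only the unflatten step on each side has to convert to the local native layout.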

Rolf Kalbermatter
My Blog
Message 5 of 5