Problem with typecast function

Solved!

Hi everybody,

 

I have a problem with the Type Cast function in LabVIEW. I want to save my measured data in a binary format, so I use Type Cast to convert different data types to strings. These files should then be read in MATLAB and other software. Everything works correctly within LabVIEW, but not with the other software. For example, if I save a value of 1 (U16), I read a value of 256 in MATLAB instead. I guess this has something to do with the effects of mismatching described in the help, but how can I fix it? Can somebody help me? That would be really great!

 

Thank you very much!

Johannes

Message 1 of 5

Type Cast does not convert numeric values to a human-readable string. For example, if you wire a value of 123.8 (DBL) into it, the output is not the string "123.8". Rather, you get 8 bytes corresponding to the IEEE-754 representation of 123.8: the byte sequence 405E F333 3333 3333. At the other end you'd have to read 8 bytes and type cast them back to a DBL (64 bits). Is that what you are actually doing at the other end, or were you expecting to see the string "123.8"?
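For instance, here is a minimal MATLAB sketch of the reading end, assuming the 8 type-cast bytes were written to a hypothetical file data.bin:

% Read back one 8-byte IEEE-754 double written by LabVIEW's Type Cast.
% 'data.bin' is a placeholder name; the bytes are big-endian on disk.
fid = fopen('data.bin', 'r', 'ieee-be');
x = fread(fid, 1, 'double');   % x is 123.8 again
fclose(fid);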

 

If you are reading by bytes as opposed to by character, another issue you could be having is endianness. LabVIEW is big-endian; MATLAB, I believe, defaults to little-endian (the host order on Intel machines). That means the bytes will appear swapped. You can eliminate this by using the Flatten To String function, which lets you explicitly set the endianness of the output bytes.
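On the MATLAB side, if the file is already big-endian, you can also read in the default (host) byte order and swap afterwards; a sketch, assuming the hypothetical file data.bin holds raw U16 samples:

% Read U16 samples in host byte order, then undo the endianness mismatch.
fid = fopen('data.bin', 'r');
raw = fread(fid, inf, 'uint16=>uint16');   % keep the uint16 class
fclose(fid);
vals = swapbytes(raw);                     % 0x0100 -> 0x0001, i.e. 256 -> 1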

Message 2 of 5
Solution
Accepted by topic author joe25

if I save a value of 1 (U16), I read a value of 256 in MATLAB instead

 

You have a byte-order problem (as Mercurio said, "endianness").

 

A U16 takes 16 bits, or two bytes.

When sending two bytes from here to there, both sides must agree on which one comes first.

 

For a long time, the standard was "big-endian", i.e., the most significant byte came first.

 

Intel came along and went against the standard, i.e., "little-endian", or least significant byte first.

 

LabVIEW uses big-endian as its standard for data transfer (it originated on non-Intel, big-endian architectures).

 

If the two sides of the exchange do not agree on the method, then you will see the symptoms you describe.

 

If the two bytes are 00 01, one side will see them as 00 01 = 1, and the other side will see them as 01 00 = 256.
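You can reproduce this in MATLAB, where typecast uses the host byte order (little-endian on typical x86 machines):

% The same two bytes, interpreted with and without a byte swap.
bytes = uint8([0 1]);
typecast(bytes, 'uint16')          % 256 on a little-endian host
typecast(fliplr(bytes), 'uint16')  % 1, after swapping the bytes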

 

 

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


LinkedIn

Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 3 of 5

Thank you both for your fast reply!

 

I was able to figure this out by myself by changing the endianness in MATLAB to big endian. But this really puzzles me, because I had chosen little endian in the LabVIEW "Write to Binary File" function as well as in my MATLAB code. I expected my data to be converted to little endian automatically by that setting, but it isn't: I had to choose big endian in MATLAB to load the data correctly. I guess the Flatten To String function will fix this, but why doesn't Write to Binary File do it?

 

Anyone know?

 

Thanks!

Johannes

Message 4 of 5

I guess you wired a string (the output of the Type Cast) to the file write function. The "byte order" input only has an effect if you wire numeric data (such as your U16 values) directly to the function.

A string can contain anything (little-endian encoded values, big-endian encoded values, other string data, ...), so the "byte order" input won't (and should not!) affect a string wired to Write to Binary File.

The byte order only matters at the point where numeric data is flattened into a string.
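So, since the Type Cast output is already big-endian by the time it reaches Write to Binary File, the fix on the MATLAB side is to declare the byte order when reading; a sketch, again with a hypothetical file name:

% Telling fread the data is big-endian restores the original values;
% with the host (little-endian) order, a stored 1 would read as 256.
fid = fopen('data.bin', 'r');
v = fread(fid, inf, 'uint16', 0, 'ieee-be');
fclose(fid);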

 

Message 5 of 5