
How can I create binary files?

I'm having a problem: I have bytes that I need to store in a file, but if I use "Write To Spreadsheet File" my bytes are converted to text. For example, the byte 00000100 (decimal 4) is written as the ASCII character "4", so what ends up on disk is 00110100 (0x34), because LabVIEW converts each 8-bit value to its text representation. And if I use the binary file examples that ship with LabVIEW, they are restricted to 16-bit signed integers (I16) or single-precision floats (SGL), and LabVIEW changes my 00000100 into some other binary number. What can I do? Thank you.
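In other words, the symptom looks like this (a rough Python sketch, since LabVIEW code is graphical; the file names and byte value are invented):

```python
data = bytes([0b00000100])  # the raw byte, value 4

# Spreadsheet/text output: the value is formatted as the character "4",
# and "4" is stored as its ASCII code 0x34 (00110100).
with open("as_text.txt", "w") as f:
    f.write(str(data[0]))

# Binary output: the byte 0x04 (00000100) goes to disk unchanged.
with open("as_binary.bin", "wb") as f:
    f.write(data)
```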
Message 1 of 9
Carlos;

You take your bytes and put them in a byte array (U8), then use the "Byte Array To String" function to convert that array into a string, which you can then save to a file using the write string to file VI.
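For comparison, here is a minimal text-language sketch of the same idea in Python (only an analogy of the graphical LabVIEW code; the file name and array contents are invented):

```python
values = [4, 255, 16]               # the U8 byte array

data = bytes(values)                # analog of "Byte Array To String"

with open("data.bin", "wb") as f:   # "wb": the string is written as-is
    f.write(data)

with open("data.bin", "rb") as f:   # read back: the bytes are unchanged
    assert list(f.read()) == values
```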

Regards;
Enrique
www.vartortech.com
Message 2 of 9
As I said in your other post, you can write binary files of any data type. I think you're confused because LabVIEW only comes with two examples (Write to I16 and Write to SGL), but if you wire any data type to the Write File function, that data type will be written without any conversion. To make your own high-level VIs for binary read and write, modify one of the examples and just change the data types. For example, to modify Write to I16 for I8, change the data types on the front panel to I8 by right-clicking the numeric controls of the 1D and 2D arrays and selecting Representation. Then go into Write File+[I16].vi and do the same thing to the two front panel arrays there.
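To illustrate the "written without any conversion" point in text form, here is a hedged Python sketch (file names and values are invented; the data type you choose only changes the byte layout):

```python
import struct

values = [4, -3, 100]

# Same numbers, three on-disk layouts: the data type you "wire" to
# Write File decides the byte pattern; nothing else is converted.
# (">" = big-endian, which is LabVIEW's native byte order.)
with open("as_i8.bin", "wb") as f:
    f.write(struct.pack(">3b", *values))   # 1 byte per value  (I8)
with open("as_i16.bin", "wb") as f:
    f.write(struct.pack(">3h", *values))   # 2 bytes per value (I16)
with open("as_sgl.bin", "wb") as f:
    f.write(struct.pack(">3f", *values))   # 4 bytes per value (SGL)
```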
Message 3 of 9
Using LabVIEW 7.1, I have a preprocessed 3-channel SGL array (up to 500,000 scans) of bipolar data to be stored in I16 binary format, compatible with the high-speed data reader example. Simply type casting to I16 creates binary garbage. Does the SGL array have to be normalised to a nominal magnitude first, with polarity, scaling, offset etc. stored in the file header? Any suggestions or LabVIEW examples, please?
Message 4 of 9
Hello Ross,

SGL and I16 have completely different data ranges: I16 is valid from -32768 to +32767, while SGL covers magnitudes from (roughly) ±3e-45 up to ±3e+38...
So you will probably need to do your own scaling/normalizing before writing to an I16 file. The easiest approach could be to divide all numbers of a channel by the channel's maximum and multiply by 32767, scaling to the full I16 range...
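A minimal sketch of that normalization in Python/NumPy (the channel data is invented; the per-channel peak would have to be stored in the file header to undo the scaling later):

```python
import numpy as np

channel = np.array([0.02, -1.7, 0.9, 1.7], dtype=np.float32)  # one SGL channel

peak = np.abs(channel).max()   # keep this for the header (assumed != 0)
scaled = np.round(channel / peak * 32767).astype(np.int16)

# Later, to recover the (approximate) original values:
restored = scaled.astype(np.float32) / 32767 * peak
```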

Type Cast is not a conversion, it's just a different interpretation of the bit pattern!

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 5 of 9
If all your SGL array values are between -32768 and +32767, you can use the To Word Integer (I16) function in the Numeric → Conversion palette. It will round the input to the nearest integer and output it in I16 format. If the input value is out of range for an I16, it will coerce the output to the maximum or minimum I16 value. If you use scaling, you will have to unscale to get back the correct number.
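In rough Python terms, that convert-and-coerce behaviour looks like this (a sketch of the behaviour described above, not LabVIEW itself):

```python
def to_i16(x: float) -> int:
    """Round to the nearest integer and coerce into the I16 range."""
    return max(-32768, min(32767, round(x)))

to_i16(1.4)      # -> 1
to_i16(99999.0)  # -> 32767, coerced to the I16 maximum
```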
- tbob

Inventor of the WORM Global
Message 6 of 9

Thanks to GerdW and tbob for their suggestions - I tried GerdW's suggestion and it worked fine!

regards,

Ross

Message 7 of 9

I have solved my problem by scaling before the I16 conversion, but before I leave this thread, could someone explain this phenomenon? In my earlier conversion attempts, a two-dimensional SGL array (three waveforms on a graph) could be cast to I16, then cast back to SGL and again displayed graphically with no apparent distortion of the waveforms. The intermediate "I16" data seemed to be garbage when viewed graphically or in numeric indicators, transposed or not, yet it faithfully preserved all the data when cast back to SGL, without any scaling/unscaling applied. Why is it so? (That is, the "binary data" seemed unusable as I16, yet the data was not corrupted when cast back to SGL.)

Regards,

Ross

Message 8 of 9
Hello Ross,

As I said before: Type Cast is not a conversion, it's a different interpretation of the bit pattern!

You cast an SGL array to an I16 array. This gives you twice the array size (SGL takes 32 bits, I16 only 16), but the I16 array seems to contain "garbage". Casting back to SGL gives the original values again, as you didn't change any bits in this process (hopefully).
Now you wanted to display the data as I16, meaning you have to convert the data (in contrast to casting the data). You have to change the bit pattern behind the numbers to get a display of "non-garbage"... But that should be working by now 🙂
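A short Python illustration of cast vs. convert (the sample value is invented):

```python
import struct

x = 1.5  # one SGL value

# "Cast": reinterpret the same 32 bits as two I16s -> meaningless integers.
raw = struct.pack(">f", x)           # 4 bytes, big-endian like LabVIEW
as_i16 = struct.unpack(">2h", raw)   # (16320, 0) for 1.5: "garbage"

# Casting back restores the value exactly: no bit was changed.
assert struct.unpack(">f", struct.pack(">2h", *as_i16))[0] == x

# "Convert": change the bit pattern so the number itself survives.
as_int = round(x)                    # 2 -> a meaningful I16 value
```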


Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 9 of 9