LabVIEW


Generate and write Unicode characters to file

Solved!

The generated characters look OK (up to 0x00FF), but after writing them to a file the characters and their values are different. Also, the characters above 0x00FF do not come out properly.

Any idea?

 

Unicode.png
Characters.png

Message 1 of 10

When you do the conversion to U8, you guarantee that no value above 0x00FF can get to the file!

 

If you need multi-byte characters, you need to use multiple U8s.
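In text form (a Python sketch standing in for the block diagram, since the LabVIEW code in this thread is all images), the point about needing multiple U8s looks roughly like this; the value 0x0100 is just an example:

# A code point above 0xFF cannot fit in a single U8; it has to be split
# into several bytes. Splitting low byte first matches UTF-16LE order.
code_point = 0x0100                      # example value beyond the 8-bit range

low_byte = code_point & 0xFF             # 0x00
high_byte = (code_point >> 8) & 0xFF     # 0x01

two_u8s = bytes([low_byte, high_byte])   # b'\x00\x01'
assert two_u8s == chr(code_point).encode('utf-16-le')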

 

Lynn

Message 2 of 10

The byte order mark was missing when writing the file:

 

BOM.png
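For readers following along in text only (the snippets here are images), a rough Python equivalent of adding the byte order mark is sketched below; the file name and text are just placeholders:

text = 'ÿĀ'                                # includes a character above 0xFF
bom = b'\xff\xfe'                          # UTF-16LE byte order mark

with open('unicode_test.txt', 'wb') as f:  # placeholder file name
    f.write(bom + text.encode('utf-16-le'))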

 

But for characters above 0xFF I tried this option, which is not working:

 

new image.png

 

Any other option?

Message 3 of 10

What do you mean by "...is not working"? The code works. It produces a string with the same bytes, reversed, as the number sent to the U16 conversion. 
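As a text-only illustration of the byte order described here (Python in place of the block diagram; the value is just an example):

import struct

value = 0x0100
big_endian = struct.pack('>H', value)     # b'\x01\x00' -- the U16 in big-endian order
little_endian = struct.pack('<H', value)  # b'\x00\x01' -- same bytes, reversed (UTF-16LE order)
assert little_endian == big_endian[::-1]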

 

Please specify an input value and what you expect the output to be.

 

Lynn

Message 4 of 10

Still not able to generate these characters:

Missing.png

Message 5 of 10

Is this file you're writing going to be read as a text file that has Unicode in it?  If it is, you have to put a Unicode header in front of your data.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 6 of 10


 

This little snippet produces a file that, when opened in WordPad, contains the text shown in the constant on the block diagram.

Capture1.PNG

 


"Should be" isn't "Is" -Jay
Message 7 of 10
Solution
Accepted by V_T_S

You should probably give this page a thorough read if you intend to use Unicode in your application.  Here is a relevant excerpt:

 

ASCII technically only defines a 7-bit value and can accordingly represent 128 different characters, including control characters such as newline (0x0A) and carriage return (0x0D). However, ASCII characters in most applications, including LabVIEW, are stored as 8-bit values, which can represent 256 different characters. The additional 128 characters in this extended ASCII range are defined by the operating system code page, a.k.a. "Language for non-Unicode Programs".  For example, on a Western system, Windows defaults to the character set defined by the Windows-1252 code page. Windows-1252 is an extension of another commonly used encoding called ISO-8859-1.
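A small text illustration of the code-page point in that excerpt (Python rather than LabVIEW; the byte value is just an example):

raw = bytes([0x93])              # an extended-ASCII byte (above 0x7F)

print(raw.decode('cp1252'))      # '\u201c' -- a left curly quote under Windows-1252
print(raw.decode('latin-1'))     # '\x93'  -- an unprintable control code under ISO-8859-1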

 

Windows-1252 gives you characters up to 0xFF (ÿ) but nothing greater than 8 bits (e.g. no 0x0100).  By default, LabVIEW only supports these 8-bit characters, using multi-byte character strings whose interpretation is based on the current code page selected in the operating system.  You may turn on Unicode through the instructions provided in my first link (it is unsupported and can be a bit buggy on occasion...) to obtain multi-byte Unicode character support for characters not in the OS code page.
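In other words, the code page simply has no slot for code points above 0xFF. A quick sketch of that limit (Python used for illustration; the character is just an example):

ch = '\u0100'                    # Ā -- first code point past the 8-bit range

try:
    ch.encode('cp1252')          # no mapping in the Windows-1252 code page
except UnicodeEncodeError as err:
    print('not representable in Windows-1252:', err)

print(ch.encode('utf-8'))        # b'\xc4\x80' -- two bytes in UTF-8
print(ch.encode('utf-16-le'))    # b'\x00\x01' -- two bytes in UTF-16LE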

 

Unicode has multiple encodings, and the raw bits for a given character depend on the encoding used.  LabVIEW's limited Unicode support appears to use UTF-16LE (little-endian) encoding for anything displayed within the UI.  So to get the characters to show up on the UI, you must enable Unicode (instructions shown in my first link) and write the proper UTF-16LE codes:

 

UnicodeExample.png
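The byte layout that snippet relies on, shown in Python for anyone reading this as text (the characters are just examples):

text = '\u0100\u0102'                # Ā Ă
utf16le = text.encode('utf-16-le')   # each character becomes a 16-bit code, low byte first

print(utf16le)                       # b'\x00\x01\x02\x01'
print([hex(b) for b in utf16le])     # ['0x0', '0x1', '0x2', '0x1']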

 

UTF-8 is more common and thus easier to work with outside of LabVIEW (e.g. my version of Notepad++ evidently does not support UTF-16LE).  I usually end up using UTF-8-encoded strings for files and converting them to UTF-16LE for display in LabVIEW.

 

UnicodeExampleUTF8.png
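A rough text equivalent of that workflow, sketched in Python (the file name is a placeholder): keep the file in UTF-8 on disk and convert to UTF-16LE only for display.

with open('data_utf8.txt', 'w', encoding='utf-8') as f:
    f.write('\u0100\u0102')                   # store UTF-8 on disk (no BOM needed)

with open('data_utf8.txt', 'rb') as f:
    text = f.read().decode('utf-8')           # bytes on disk -> Unicode string

display_bytes = text.encode('utf-16-le')      # the form a LabVIEW UI-side string would carry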

 

The Unicode library in my first link has the necessary subVIs for converting between UTF-8 and "Unicode" (i.e. UTF-16LE).

 

 

Best Regards,

John Passiak
Message 8 of 10

Well... I learned something as well typing all of that out...

 

I had just neglected to write the byte order mark (BOM) when using the LabVIEW UTF-16LE.  When I include the BOM, the file works in Notepad and in Notepad++ (which detects the file format as UCS-2 Little Endian).

 

 

UnicodeExampleToFile.png
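For the text-only version of that fix (Python as a stand-in; the file name and characters are placeholders), the write-then-read round trip looks like this:

text = '\u0100 \u0204 \u0298'                 # a few characters above 0xFF

with open('unicode_out.txt', 'wb') as f:
    f.write(b'\xff\xfe' + text.encode('utf-16-le'))          # BOM + UTF-16LE payload

with open('unicode_out.txt', 'r', encoding='utf-16') as f:   # 'utf-16' consumes the BOM
    assert f.read() == text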

 

I still personally prefer UTF-8, as the BOM is not required.

 

 

 

Best Regards,

John Passiak
Message 9 of 10

I've used the undocumented LabVIEW Unicode features quite often.  They work fairly well, but they're kind of twitchy.  If you can tame the beasts, they will serve you well.

Bill
CLD
Message 10 of 10