LabVIEW


Will unicode be supported by LabVIEW?

Hi,

I found some threads concerning this issue, but I'm wondering whether NI has already started working on this feature.

For converting strings to BSTR, there is a hint that the MultiByteToWideChar() API function could be used. Is there an example of this DLL call available?

Regards,
Sunny
Message 1 of 15


@Sunny wrote:
Hi,

I found some threads concerning this issue, but I'm wondering whether NI has already started working on this feature.

For converting strings to BSTR, there is a hint that the MultiByteToWideChar() API function could be used. Is there an example of this DLL call available?

Regards,
Sunny




Search in vi.lib/registry.llb for the VI STR_ASCII-Unicode.vi. It converts LabVIEW strings into Unicode strings and uses the above-mentioned function.
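
For reference, this is roughly what a call to MultiByteToWideChar looks like in C (a minimal sketch with simplified buffer handling; the function name AnsiToWide is just for illustration):

#include <windows.h>
#include <stdlib.h>

/* Convert an ANSI/MBCS string (what LabVIEW uses on Windows) to UTF-16.
   Returns a newly allocated wide string; the caller frees it with free(). */
wchar_t *AnsiToWide(const char *src)
{
    /* A first call with no output buffer just returns the required length,
       including the terminating NUL because the source length is -1. */
    int len = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0);
    if (len <= 0)
        return NULL;

    wchar_t *dst = (wchar_t *)malloc(len * sizeof(wchar_t));
    if (dst != NULL && MultiByteToWideChar(CP_ACP, 0, src, -1, dst, len) == 0)
    {
        free(dst);
        dst = NULL;
    }
    return dst;
}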

Supporting Unicode directly in LabVIEW will be a very hard challenge for NI to implement. I'm not sure I would want to hold my breath for that.

Rolf Kalbermatter
My Blog
Message 2 of 15


@Sunny wrote:
Hi,

I found some threads concerning this issue, but I'm wondering whether NI has already started working on this feature.

For converting strings to BSTR, there is a hint that the MultiByteToWideChar() API function could be used. Is there an example of this DLL call available?

Regards,
Sunny




Just another note: a BSTR is not a simple Unicode string. It is really a pointer to the Unicode string part of a larger structure, which also contains a prepended (and invisible to a normal caller) 32-bit length value. There are special system functions in Windows to create (and destroy) BSTRs, such as SysAllocStringLen and SysFreeString in oleaut32.dll.
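
A minimal C sketch of that, assuming an ANSI source string (error handling kept short; AnsiToBSTR is just an illustrative name):

#include <windows.h>
#include <oleauto.h>

/* Create a BSTR from an ANSI/MBCS string. The caller must release it
   with SysFreeString() when done. */
BSTR AnsiToBSTR(const char *src)
{
    /* Length in UTF-16 code units, excluding the terminating NUL. */
    int len = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0) - 1;
    if (len < 0)
        return NULL;

    /* SysAllocStringLen allocates the length-prefixed buffer and
       adds the NUL terminator itself. */
    BSTR bstr = SysAllocStringLen(NULL, len);
    if (bstr != NULL)
        MultiByteToWideChar(CP_ACP, 0, src, -1, bstr, len + 1);
    return bstr;
}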

Rolf Kalbermatter
My Blog
Message 3 of 15
Sunny,

LV is already designed to support multiple languages, but we use the multi-byte character set (MBCS) encoding rather than UTF-16 (which is what most people mean when they say Unicode or talk about BSTRs). The primary reason is that LV is multi-platform. While Windows has embraced Unicode all the way down to the kernel, many other operating systems we work with have not.

However, if you want to work with BSTRs in LV, it may be possible to use the .NET interface in LV 7.x. I haven't tried much with it, but as a .NET fanatic, I'd be happy to look into it a bit more. .NET has a Marshal class that provides all sorts of conversions to and from BSTRs, and LV automatically converts between LV strings and .NET strings - so...

Please give me a couple of simple examples of where you might need BSTRs and we can see if the .NET layer can help out.
Message 4 of 15
Here is a library to convert LabVIEW strings into different Unicode representations. The BSTR pointer variant would be the preferred one if you need an explicit 32-bit pointer to pass to ActiveX nodes, for instance. The LabVIEW array variant is useful for passing to Call Library Nodes of all kinds. The creation VIs also support allocating a string with an explicit length, for passing to functions that fill in the buffer. No need to involve .NET for such things.
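
Just to illustrate the "explicit length" pattern for Call Library Nodes: a hypothetical C export of the kind such a pre-sized buffer would be passed to (GetDeviceName and its signature are made up for this example, not part of the attached library):

#include <windows.h>

/* Hypothetical DLL export that fills a caller-allocated UTF-16 buffer.
   'buf' and 'bufLen' would come from a LabVIEW array pre-sized on the
   diagram to the explicit length mentioned above. */
__declspec(dllexport) int __stdcall GetDeviceName(wchar_t *buf, int bufLen)
{
    const wchar_t name[] = L"Device 1";
    int needed = (int)(sizeof(name) / sizeof(name[0]));   /* incl. NUL */

    if (buf == NULL || bufLen < needed)
        return -1;                  /* buffer too small */
    lstrcpyW(buf, name);            /* copies the terminating NUL too */
    return 0;
}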

Rolf Kalbermatter
My Blog
Message 5 of 15
Spoilsport 🙂
Message 6 of 15
I have tried the STR_Unicode-ASCII.vi, but I get the message "system feature not enabled". Does anybody know how I can make it work? Another question: it seems the same function can be used to convert from UTF-8, and WideCharToMultiByte can be used for the opposite direction, but I can't get it to work. Any suggestions? (What I am trying to achieve is a conversion from a LabVIEW string to UTF-8 and vice versa.)
 
Martin
Message 7 of 15
On what Windows system is that? The WideChar functions, or more precisely UTF-16, are supported on every Windows NT system and higher. As to how UTF-8 would be supported I have no idea; I think UTF-8 makes no sense at all, as it could not represent more characters than the ASCII character set can.

Check out the documentation for those two functions on MSDN. As far as I can see, the idea is that you can only convert a WideChar string into UTF-8 and vice versa; there is no way of converting ASCII directly into UTF-8. WideChar in MS terms is, however, always a UTF-16 string.

However, you need to be wary about which flags you pass to the functions. Many combinations of flags and particular parameters will simply return an unsupported error. For instance, as MSDN says, you can only use CP_UTF8 if dwFlags is 0, and lpDefaultChar and lpUsedDefaultChar must both be NULL.
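
To make that concrete, here is a minimal C sketch of going from an ANSI string to UTF-8 via UTF-16 (AnsiToUtf8 is just an illustrative name; checks kept short):

#include <windows.h>
#include <stdlib.h>

/* Convert an ANSI/MBCS string to UTF-8 by going through UTF-16 first.
   Returns a malloc'd, NUL-terminated UTF-8 string, or NULL on failure. */
char *AnsiToUtf8(const char *src)
{
    /* Step 1: ANSI -> UTF-16 */
    int wlen = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0);
    if (wlen <= 0)
        return NULL;
    wchar_t *wide = (wchar_t *)malloc(wlen * sizeof(wchar_t));
    if (wide == NULL || MultiByteToWideChar(CP_ACP, 0, src, -1, wide, wlen) == 0)
    {
        free(wide);
        return NULL;
    }

    /* Step 2: UTF-16 -> UTF-8. dwFlags must be 0 and the last two
       parameters must be NULL when the code page is CP_UTF8. */
    int ulen = WideCharToMultiByte(CP_UTF8, 0, wide, -1, NULL, 0, NULL, NULL);
    char *utf8 = (ulen > 0) ? (char *)malloc(ulen) : NULL;
    if (utf8 != NULL && WideCharToMultiByte(CP_UTF8, 0, wide, -1, utf8, ulen, NULL, NULL) == 0)
    {
        free(utf8);
        utf8 = NULL;
    }
    free(wide);
    return utf8;
}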

Rolf Kalbermatter
My Blog

Message Edited by rolfk on 07-03-2005 04:36 PM
Message 8 of 15
Actually, both UTF-8 and UTF-16 are simply encoding schemes for the Unicode standard, whose character set needs up to 32 bits per character to store directly. Obviously no one is ready to start using 32-bit character strings; the memory hit would be terrible for 99% of applications. So UTF-8 and UTF-16 both encode the Unicode characters more compactly. UTF-8 is often selected when ASCII is the main character set, since ASCII maps to it directly and you only need the decode logic for non-ASCII characters. UTF-16 is better for most multi-language systems, as you rarely need to go through the decode logic at all (only rare or often extinct languages require more than a single 16-bit code unit).
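
As a small concrete example (plain C, not LabVIEW): the Euro sign U+20AC takes a single 16-bit code unit in UTF-16 but three bytes in UTF-8, while plain ASCII characters stay single bytes in UTF-8.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* U+20AC (Euro sign): one UTF-16 code unit, three UTF-8 bytes. */
    const wchar_t wide[] = L"\x20AC";
    char utf8[8];
    int n = WideCharToMultiByte(CP_UTF8, 0, wide, -1, utf8, sizeof(utf8), NULL, NULL);

    /* Expected output: "UTF-8 bytes: E2 82 AC 00" (the last byte is the NUL). */
    printf("UTF-8 bytes:");
    for (int i = 0; i < n; i++)
        printf(" %02X", (unsigned char)utf8[i]);
    printf("\n");
    return 0;
}
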
Message 9 of 15

Have you added UseUnicode=True to your LabVIEW.ini file in the National Instruments folder?
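
If I remember correctly, the token goes under the [LabVIEW] section of that ini file, for example:

[LabVIEW]
UseUnicode=True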

Message 10 of 15