Boolean to (0,1) function: why the I16 output?

Solved!

Hello,

 

Just out of pure curiosity, what is the reason that the Boolean to (0,1) function has an I16 output? Booleans are stored as U8 in LabVIEW nowadays. Is it for some kind of backward compatibility? As I remember, I read somewhere that in some old LabVIEW versions, Booleans were stored on 16 bits, not on 8 bits as they are now...

 

edit: and why does it need to be signed? The values 0 and 1 do not require a sign...

 

Example_VI_BD.png

Message 1 of 9

I would think that this goes back to LabVIEW 0.0 where memory was expensive and rare. A decision had to be made and maybe I16 was the most "universal" integer that limited coercion when combined with integers from other sources.

 

Many versions ago, almost all functions got an output configuration option, and it is almost inexcusable that "Boolean to (0,1)" was left out, because this is where I typically need it most. Wouldn't it be great to be able to configure the output representation here!?

 

See also my very old comment on this idea.

Message 2 of 9

I checked; it is the same in NXG, the output is I16.

Message 3 of 9

@Blokk wrote:

I checked; it is the same in NXG, the output is I16.


For backwards compatibility when importing "classic" VIs no doubt.

Message 4 of 9
Solution
Accepted by Blokk

Back in LabVIEW 2.x a boolean was a 16-bit value, unless it was an array of booleans, in which case the booleans were packed into an array of 16-bit integers internally. That meant an array of 16 booleans occupied a single 16-bit value, while 17 booleans occupied two 16-bit integers. The 16-bit boolean came from the Macintosh OS, where a boolean was defined as a short (16-bit integer).

 

Sixteen bits for storing a single bit is fairly wasteful, but on the Mac it had some advantages, since the 68000 architecture was a 32-bit architecture with an initially 16-bit data bus. On other platforms, however, the 16 bits had no advantage of any kind. In addition, the packed format for booleans was anything but ideal: iterating through a boolean array was relatively expensive, as the compiler had to add extra shift operations and checks to see when to move on to the next 16-bit value.

 

So in LabVIEW 4.0 they made the first of two fundamental datatype changes in LabVIEW's existing datatype system. A boolean got 8 bits, and an array of booleans became simply an array of 8-bit values without any packing. Wasteful for arrays? Yes, but much faster to iterate through! The typecode for the old boolean is still present in LabVIEW as 0x20, and the new 8-bit boolean got the typecode 0x21.
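
Since G diagrams don't paste into a forum post, here is a rough C sketch of that difference. It is an illustration only, not LabVIEW's actual internals, and the bit order in the packed case is an assumption:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration only, not LabVIEW internals: count the TRUE elements of a
   boolean array stored in the two layouts discussed above. */

/* LV 2.x style: booleans packed 16 per 16-bit word. Every element access
   needs a word lookup plus a shift and a mask (bit order assumed here). */
static int count_true_packed(const uint16_t *words, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        uint16_t word = words[i / 16];
        int      bit  = 15 - (i % 16);
        count += (word >> bit) & 1;
    }
    return count;
}

/* Post-change style: one byte per boolean, a plain indexed read suffices. */
static int count_true_bytes(const uint8_t *bools, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        count += (bools[i] != 0);
    return count;
}

int main(void)
{
    /* 17 booleans: elements 0, 15 and 16 are TRUE. */
    uint16_t packed[2] = { 0x8001, 0x8000 };   /* 2 words hold 17 elements */
    uint8_t  bytes[17] = { [0] = 1, [15] = 1, [16] = 1 };

    printf("%d %d\n", count_true_packed(packed, 17),
                      count_true_bytes(bytes, 17));   /* prints "3 3" */
    return 0;
}
```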

 

Boolean to Number wasn't changed to output an 8-bit value though, most likely out of compatibility concerns. If you had a boolean, converted it to a numeric and then used a Type Cast to turn it into a 16-bit enum, an 8-bit output on Boolean to Number would cause the conversion to always produce the first enum value on the output of the Type Cast, since Type Cast behaves in an (explainable but) funky way when the sizes of the types on its two sides don't match. There are likely other operations in LabVIEW where similar code breakage could have occurred if Boolean to Number had simply been changed to match the internal format. And back in LabVIEW 4.0 they had many more important problems to tackle than adding a configuration option for the primitive's output. :D
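
To make that Type Cast argument a bit more concrete outside of G, a small C sketch of the idea: the big-endian byte order matches LabVIEW's flattened data, but the fallback to the first enum value for too-short input is only an assumption chosen to mirror the symptom, not LabVIEW's documented Type Cast rule.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the compatibility argument above, not LabVIEW's actual Type
   Cast implementation. The "cast" reinterprets flattened big-endian bytes
   as a 16-bit enum; returning the first value when too few bytes arrive
   is an assumption made to mirror the described symptom. */

typedef enum { MODE_OFF = 0, MODE_ON = 1 } Mode16;

static Mode16 cast_bytes_to_enum16(const uint8_t *data, size_t len)
{
    if (len < 2)                 /* not enough bytes for a 16-bit enum  */
        return MODE_OFF;         /* falls back to the first enum value  */
    return (Mode16)((data[0] << 8) | data[1]);   /* big-endian 16 bits  */
}

int main(void)
{
    uint8_t as_i16[2] = { 0x00, 0x01 };  /* TRUE via an I16 output: works   */
    uint8_t as_u8[1]  = { 0x01 };        /* TRUE via a hypothetical U8 out  */

    printf("I16 path: %d\n", (int)cast_bytes_to_enum16(as_i16, sizeof as_i16));
    printf("U8 path:  %d\n", (int)cast_bytes_to_enum16(as_u8,  sizeof as_u8));
    return 0;
}
```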

Rolf Kalbermatter
My Blog
Message 5 of 9

@tyk007 wrote:

@Blokk wrote:

I checked; it is the same in NXG, the output is I16.


For backwards compatibility when importing "classic" VIs no doubt.


Sure, remove captions, variant labels, picture controls, units, etc., etc. (EDIT: not to mention a functional IDE.) But keep silly 20-year-old decisions for "compatibility".

 

Simple solution 1): make a legacy VI for compatibility, but give the new function a sensible output.

Simple solution 2): Give it a second input to provide the output type. Default can be I16.

 

BTW, what would be a sensible output? I am not sure whether I would prefer a U8 or an I32: U8 because it's what a Boolean is, I32 because it's the de facto integer choice.

Message 6 of 9

@rolfk wrote:

So in LabVIEW 4.0 they did the first of two fundamental datatype changes in the existing datatype system in LabVIEW.


Was it LV4? I think they changed it after LV4. The Type Cast can be set to "Convert 4.x Data", and then it will use a bit-packed array instead of one Boolean per byte. Not that it really matters...

Message 7 of 9

wiebe@CARYA wrote:

@rolfk wrote:

So in LabVIEW 4.0 they did the first of two fundamental datatype changes in the existing datatype system in LabVIEW.


Was it LV4? I think they changed it after LV4. The Type Cast can be set to "Convert 4.x Data", and then it will use a bit-packed array instead of one Boolean per byte. Not that it really matters...


I was just about to say the same thing. Maybe LV4 introduced boolean scalars as U8 (instead of I16), but the packing of boolean arrays was definitely eliminated after 4.x.
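
For anyone curious what the bit packing means for flattened data, here is a quick C sketch; the 8-bits-per-byte grouping and MSB-first bit order are assumptions for illustration, since the real 4.x format may well have packed into 16-bit words.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustration of "bit packed" versus "one Boolean per byte"; the byte
   grouping and MSB-first bit order are assumptions, not the documented
   4.x flatten format. */
static size_t pack_bools(const uint8_t *bools, size_t n, uint8_t *out)
{
    size_t out_len = (n + 7) / 8;          /* 8 booleans per packed byte */
    memset(out, 0, out_len);
    for (size_t i = 0; i < n; i++)
        if (bools[i])
            out[i / 8] |= (uint8_t)(0x80u >> (i % 8));
    return out_len;
}

int main(void)
{
    uint8_t bools[10] = { 1, 0, 1, 1, 0, 0, 0, 0, 1, 1 };
    uint8_t packed[2];
    size_t  len = pack_bools(bools, 10, packed);

    /* 10 one-byte booleans shrink to 2 packed bytes: 0xB0 0xC0 */
    printf("%zu bytes: 0x%02X 0x%02X\n", len, packed[0], packed[1]);
    return 0;
}
```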

Message 8 of 9

You guys are of course both right. The change was between 4.x and 5.0. I had the Flatten To and Unflatten From functions with the red 4.x text on them in mind, and I knew that it meant the old behaviour was the 4.x one, but somehow didn't quite make the mental move to also write the right thing.

The next change was between 7.x and 8.0. 

Rolf Kalbermatter
My Blog
Message 9 of 9