LabVIEW


Typecast numeric to cluster issue (bit pattern not preserved)

Hello Community,

 

When communicating with a device, I receive U32 packets and want to convert them to a defined structure. I created a typedef cluster that has 32 bits in total; the idea is to typecast the U32 to this cluster. The issue I observe is that the Typecast function does not preserve the bit pattern when there are booleans in the cluster. From what I can see, the booleans are not represented by a single bit.

 

In C++, I used to do this with a union, which allowed me to reinterpret the bits directly.

 

The goal is that if the structure changes, I only need to update the typedef in one place, so my whole code is updated accordingly.

 

I am fairly sure I have used this technique with previous versions of LabVIEW (LV2021).

 

Could you confirm if this is a known issue, and do you have any alternatives?

 

Please see the attached VI as an example.

 

I am using LV2024 with Windows 11.

Message 1 of 10

A LabVIEW boolean is really an 8-bit value. That may seem wasteful, but 8 bits is the minimum addressable unit on pretty much any CPU architecture of any importance today. So if you typecast a 32-bit integer into a cluster with booleans, it simply fills in the first 4 booleans, interpreting each byte as one boolean and making it true if that byte is anything but 0. It has been like this since LabVIEW 5.0. Before that, a LabVIEW boolean was a 16-bit integer (an inheritance from the classic Macintosh OS) and a LabVIEW boolean array was indeed a packed bit array. But especially that packed bit array turned out to be a serious performance problem, as converting between a boolean array and other types was computationally rather expensive. So for 5.0 the boolean was revised to be a byte (as most other compilers do by default too, if they support a separate boolean datatype) and an array of booleans was simplified to simply be an array of bytes as well.
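Expressed as a C sketch for illustration (this is not LabVIEW's actual implementation, just the effective behavior; the names and the example value are made up):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    uint32_t raw = 0x01000200u;   /* example packet */
    bool flags[4];

    /* Typecast flattens the U32 to 4 bytes (big endian) and maps
       each byte to one boolean: true if the byte is non-zero. */
    for (int i = 0; i < 4; i++)
    {
        uint8_t byte = (uint8_t)(raw >> (8 * (3 - i)));
        flags[i] = (byte != 0);
    }

    printf("%d %d %d %d\n", flags[0], flags[1], flags[2], flags[3]);   /* prints: 1 0 1 0 */
    return 0;
}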

 

If you need to deal with bit flags, you could use Number To Boolean Array and then index into that array, or build a boolean array and convert it back with Boolean Array To Number. BUT: especially Number To Boolean Array is performance-wise quite a hit, as it needs to create an array of bytes for something that you usually throw away right after having looked at one or two of the booleans. I always use boolean logic instead. The LabVIEW AND, OR and XOR nodes happily work on integers too. You just have to do a little more work to look at a specific bit in an integer, but the resulting code created by LabVIEW is as close to the silicon as you can get in a high-level language. It is helpful to use hex notation for the numeric constants used in these boolean operations; if you are not used to that and want to see things at bit level, you can also choose binary representation for the constants. Also, you usually want to treat these values as unsigned integers to avoid trouble with two's complement sign extension.

 

Basically (the same operations are sketched in C after this list):

- to see if a bit is set, AND the integer with the corresponding bit mask constant and test the result for != 0

- to set a bit, OR the integer with the corresponding bit mask constant

- to clear a bit, AND the integer with the bitwise complement (NOT) of the bit mask for that bit. The complement is easily computed in LabVIEW with the Not function, or, if you use the Compound Arithmetic node in AND mode, by configuring that input terminal as inverted.

- to invert a bit, XOR the integer with the corresponding bit mask constant
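
A minimal C sketch of those four operations (the helper names are illustrative; LabVIEW's AND, OR, XOR and Not nodes on a U32 correspond to the &, |, ^ and ~ operators used here):

#include <stdint.h>
#include <stdbool.h>

#define BIT(n) (1u << (n))   /* mask for bit n, e.g. BIT(3) == 0x08 */

static bool     bit_is_set(uint32_t value, uint32_t mask) { return (value & mask) != 0; }
static uint32_t bit_set   (uint32_t value, uint32_t mask) { return value | mask; }
static uint32_t bit_clear (uint32_t value, uint32_t mask) { return value & ~mask; }   /* AND with the complement */
static uint32_t bit_toggle(uint32_t value, uint32_t mask) { return value ^ mask; }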

 

Also I don't think that unions are the way to go about this in C(++). Most likely you mean bit fields:

#include <stdint.h>

typedef struct
{
    uint32_t running : 1;   /* 1 bit  */
    uint32_t alarm   : 1;   /* 1 bit  */
    uint32_t level   : 8;   /* 8 bits */
} MyBitField;
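
Reinterpreting a raw U32 as such a bit field would then look roughly like this (a sketch only; memcpy is one safe way to do it, but which bits land in which field is compiler- and endianness-dependent):

#include <string.h>

/* Sketch: map a raw 32-bit word onto the bit field above.
   memcpy avoids the strict-aliasing issues a pointer cast would have,
   but the bit-to-field assignment remains implementation-defined. */
static MyBitField unpack(uint32_t raw)
{
    MyBitField f;
    memcpy(&f, &raw, sizeof f);
    return f;
}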

 

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 2 of 10

Thank you for your reply.


You are correct; what I meant was a bit field in C, not a union.


My main goal is to simplify the code rather than optimize for performance. Since the product is still in development and the bit structure of the U32 data may change, I’d like to avoid building an entire factory just to manipulate a few bits.


I’m surprised there isn’t a more intuitive way to do this in LabVIEW.

 

If I implemented this in C and then called it from LabVIEW, would that work?

 

 

Message 3 of 10

Hi Reda,

 


@Reda-Zeghlache wrote:

I’m surprised there isn’t a more intuitive way to do this in LabVIEW.


The "most intuitive" way is the NumberToBooleanArray function IMHO.

(The overhead for handling a boolean array of 32 elements is rather small.)

To add even more overhead: you can convert that boolean array into a (typedefined) cluster of 32 boolean elements. Now you could access the bits by their element label…

 

I do prefer the approach suggested by Rolf: use boolean functions directly on your numeric data.

Checking bit #3 is the same as "(value AND 0x08) != 0"!

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 4 of 10

Thanks also for your reply. I don't see how intuitive it is to convert the full number to a boolean array. Please consider that my goal is not to convert the number to booleans (bits) but to manipulate a defined structure, as shown in the VI attached to my post.

Let me write an example.

Let's assume I receive a U32 from the device, defined as follows:

{
  Speed:      16   // 16 bits
  Angle:      10   // 10 bits
  State:       4   //  4 bits
  Device_on:   1   //  1 bit
  Error:       1   //  1 bit
}

Then tomorrow it changes to:

{
  Speed:      10   // 10 bits
  Angle:      10   // 10 bits
  State:       4   //  4 bits
  Device_on:   1   //  1 bit
  Power:       1   //  1 bit
  Error:       1   //  1 bit
  code:        3   //  3 bits
}

 

My idea was to do it like in C: create a cluster typedef and update it accordingly, without modifying the rest of the code, or only minimally.

Message 5 of 10

Well, bit fields are definitely not as portable as you might believe. They are a perfect way to turn your C code into a troubled mess if you are not careful. Basically, endianness throws all your assumptions away if you are not super careful, and there are other potential problems with bit fields too. Yes, you may say: endianness? Who cares, everything is little endian nowadays. Until you get into network communication and similar areas, where network byte order is still predominantly the standard, and that is big endian. And of course there is LabVIEW itself with its Typecast function, which always assumes big endian, for historical reasons: the Mac originally ran on a Motorola 68k CPU, which was big endian too.

 

However, if you want to contain your modifications to one or two places, it is quite feasible to define a LabVIEW cluster that can accommodate your data, make it a typedef, and write ONE VI that converts the bits from the device into this cluster. Mission accomplished.
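
A sketch of what that single conversion point could look like, written here in C (for example as a small DLL called through a Call Library Function Node, or simply as a blueprint for that one LabVIEW VI). The field names come from the example layout in message 5; the bit offsets (Speed in the lowest 16 bits, Error in the highest bit) are an assumption about how the device packs the word:

#include <stdint.h>
#include <stdbool.h>

/* Assumed packing, LSB first: bits 0..15 Speed, 16..25 Angle,
   26..29 State, 30 Device_on, 31 Error. If the device sends the
   word in network (big-endian) byte order, swap it to host order
   before extracting. */
typedef struct
{
    uint16_t speed;
    uint16_t angle;
    uint8_t  state;
    bool     device_on;
    bool     error;
} DeviceStatus;

static DeviceStatus decode_status(uint32_t raw)
{
    DeviceStatus s;
    s.speed     = (uint16_t)( raw        & 0xFFFFu);
    s.angle     = (uint16_t)((raw >> 16) & 0x03FFu);
    s.state     = (uint8_t) ((raw >> 26) & 0x0Fu);
    s.device_on = ((raw >> 30) & 1u) != 0;
    s.error     = ((raw >> 31) & 1u) != 0;
    return s;
}

When the packet layout changes, only the shifts and masks in this one function (or the corresponding single VI) and the output cluster need to be touched.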

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 6 of 10

Thanks for your engagement on this topic. However, my aim is not to turn this into a debate but rather to find the most convenient way to maintain a U32 structure that could change frequently, or a typecast conversion that preserves the bit pattern.

 

Yes, we can have a VI that does the conversion to a typedef cluster, and this is the solution I would like to avoid. It requires also modifying that VI each time.

 

 

Message 7 of 10

Hi Reda,

 


@Reda-Zeghlache wrote:

Yes, we can have a VI that does the conversion to a typedef cluster, and this is the solution I would like to avoid.


Why? It's the most straightforward option: one VI with a U32 input and a cluster output. (All the conversion is bundled in one VI/function.)

 


@Reda-Zeghlache wrote:

It requires also modifying that VI each time.


You would need to change your cluster type definition each time, too!

And changing a type definition also requires recompiling the whole app.

And you need to check for any problems/errors due to the changed type definition: when cluster elements are added or removed, the datatype of the wire changes!

 

Generic advice:

When converting code from one language to another, you should not try to replicate each specific language feature in the smallest detail.

Instead, create new code that handles the underlying data in a similar way and produces the very same output.

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 8 of 10

Thanks for your valuable input.

Message 9 of 10

[Image: Yamaeda_0-1760712311550.png]

 

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 10 of 10