LabVIEW

How to build array of bits spread out in multiple bytes.

I'm trying to build an array of bits (1s and 0s) spread over 16 bytes, where only 2 bits, specifically bits 6 and 7 of the first byte, are used. The next 126 bits of the array occupy the next 15 bytes, down to bit 0 of the last byte (byte 16). I'm not really sure how to build this array and perform the bit manipulation to get the correct result. I've attached a sample code of my attempt, and there's a rough sketch of the layout I'm after below the spec. Thanks in advance for your help.

 

Below is the spec description.

 

Byte      Bit count    Data type

1         2            Enumeration

2-16      126          Enumeration
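
 

(Not LabVIEW, but here's a rough Python sketch of the packing I'm trying to achieve, just to make the layout explicit - the function name and the sample values are made up for illustration.)

def pack_bits(state, value):
    # state: 2-bit enumeration (0..3) that lands in bits 6-7 of byte 1
    # value: 126-bit enumeration (0 .. 2**126 - 1) filling the rest, down to bit 0 of byte 16
    word = (state << 126) | value                                  # one 128-bit integer
    return [(word >> (8 * (15 - i))) & 0xFF for i in range(16)]   # byte 1 first

packed = pack_bits(0b10, 123456789)
print([format(b, "08b") for b in packed])                          # binary display, like %08b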

Message 1 of 8

Is the desired output an array of 16 U8s? Or a 128-bit scalar? Or some in-between grouping?

Are the remaining 126 bits always 0? (This is what your code shows, but it isn't obvious, at least to me, from the description whether that's the desired behaviour in all cases.)

 

If they are always 0, then you can do what you're already doing for the first byte, and then use an array of 15 (not 16) 0-valued U8s to fill out your 128 bits.

If you need to take an input with values between 0 and (2^126)-1 as your second input, you'll probably need to take the leading U8 out of your array of 16 U8s, add it to the bit-shifted input you already have to get the first byte, and then concatenate the remaining 15?
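
 

Very roughly, in text form (Python here only because I can't type a block diagram - state/value are placeholder names and sample values):

state = 0b10                                                           # 2-bit first input
value = 123456789                                                      # second input, 0 .. 2**126 - 1
value_bytes = [(value >> (8 * (15 - i))) & 0xFF for i in range(16)]    # 16 U8s, byte 1 first
first_byte = (state << 6) + value_bytes[0]                             # bit-shifted state + leading U8
result = [first_byte] + value_bytes[1:]                                # then concatenate the remaining 15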


Message 2 of 8

So is there anything wrong with your attempt? Your lower array should only have 15 elements, right?

 

Your description does not really specify whether the first byte already contains the two bits in the right place or whether you need to shift them up. From your code, I assume that bit #0 is the LSB, i.e. unity.

 

Also note that if you do a left shift by 6 on a byte, the masking (AND b11)  is not really needed because the rest of the bits will be zeroed anyway.
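
 

(A quick check outside LabVIEW, emulating the U8 wrap with AND 0xFF, just to show that masking first changes nothing:)

for x in range(256):
    assert ((x & 0b11) << 6) == ((x << 6) & 0xFF)   # same byte with or without the AND b11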

 

I would also recommend formatting your controls/indicators in binary (%08b); it's much easier to verify the result. Where do the other bytes come from?

 

What do the 16 bytes represent? Do you need to worry about byte order?

Message 3 of 8

cbutcher,

Thanks for your reply. The desired result is a 2-bit U8 value in the first byte and a 126-bit scalar spread across the remaining bytes. No, the remaining 126 bits are not always zero; they can be 1 or 0 depending on the input.

Message 4 of 8

altenbach,

Thanks for responding. Yes, bit #0 is the LSB. The first 2 bits are used to represent a state in the first byte. I assumed the 2 bits would be in the right location after bit shifting. The other 15 bytes will make up the 126-bit array, which will be a scalar. I don't think byte order is a concern.

Message 5 of 8

Maybe this is what you're looking for?

Bit_array sample code.png

 

Take care with your most-significant-bit ordering when adding here - I wrote this to concatenate in the way I think you described, but note that I had to reverse the array in my random input generation because, for Boolean Array To Number, LabVIEW assumes that the 0th element of the array is the LSB.
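
 

In text form (Python used only as pseudocode, with made-up sample bits), the reversal is because of this:

bits_msb_first = [1, 0, 1, 1, 0, 0, 0, 1]      # how the spec reads, bit 7 first
lsb_first = list(reversed(bits_msb_first))     # Boolean Array To Number wants element 0 = LSB
value = sum(b << i for i, b in enumerate(lsb_first))
print(format(value, "08b"))                    # 10110001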


Message 6 of 8

cbutcher,

Thanks, cbutcher. I think this may be what I need, but my question is: if I want to generate a specific bit array of states, where a known number value (instead of random numbers) is represented with 126 bits, how do I modify the code? Could the Number To Boolean Array function be used in the first For Loop instead?

Message 7 of 8

Sure - you can put whatever you want there! But do you really have an enum with ~8.5x10^37 values?

 

There are various useful ways to convert enums in LabVIEW to numbers. If you want the U8 array output but have a couple of larger enums that form these sections (perhaps, e.g., vastly oversized U32 enums combined in a layout like "NI" - "Digital" - "Input" - "NI-9401"), where each of these enums has a few possible values and you're just choosing the appropriate typedef based on the preceding value, you can make use of things like Join Numbers and Split Number. You should be able to wire the enums directly to these; they will be coerced to their representation type (e.g. U32, U16). A rough text sketch of that is below.
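
 

(Python just for illustration; hi/lo are made-up U32-sized enum values, and the shifts are only a text stand-in for what Join Numbers / Split Number do on the diagram.)

hi, lo = 0x00000003, 0x0000002A
joined = (hi << 32) | lo                                 # Join Numbers: two U32s -> one U64
hi_back, lo_back = joined >> 32, joined & 0xFFFFFFFF     # Split Number gets them back
assert (hi_back, lo_back) == (hi, lo)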

 

I suspect we could give a clearer answer if you tell us what format the input 126 bits are in. They can't be a (single) LabVIEW enum, because the largest allowed representation there is a U32. It looks like Fixed-Point is limited to 64 bits in LabVIEW too - I suppose a Complex Double is 128 bits, but that would be a strange format to store an enum in!

 

Do you perhaps have a cluster of enums/numeric values or something similar?


Message 8 of 8