Setting ones and zeros bit by bit to a 16 bit unsigned int?


Hello,

 

I am trying to write data to DACs to control the voltages applied to 64 pins (an 8x8 grid) on a piece of hardware. I currently have an 8x8 string array of the bit strings (ones and zeros) that need to be written to control each specific DAC. Is there any way to convert this 8x8 array of ones and zeros, stored as strings, into an 8x8 array of 16-bit unsigned integers holding those same bits? I am having an issue where I lose the leading zeros from the string when converting to unsigned int.
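For readers outside LabVIEW, the conversion being described can be sketched in Python (the 2x2 bit strings below are hypothetical stand-ins for the 8x8 array):

```python
# Hypothetical 2x2 excerpt of the 8x8 array of bit strings (assumption:
# each string is 16 characters of '0'/'1' encoding one DAC code).
bit_strings = [["0000000000001010", "0000000011110000"],
               ["1000000000000001", "0000000000000000"]]

# Parse each string as base 2; masking with 0xFFFF keeps each result
# in the unsigned 16-bit range.
codes = [[int(s, 2) & 0xFFFF for s in row] for row in bit_strings]

print(codes[0][0])  # 10 -- the numeric value is intact
```

Note that `"0000000000001010"` parses to the value 10; the leading zeros are part of the *string representation*, not the value itself.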

 

Thanks!

Message 1 of 3
Solution (accepted by topic author lmcohen2)

Show a VI that demonstrates what you are talking about.

 

An integer can't lose leading zeros. A 16-bit integer always has 16 binary digits. Perhaps you don't see the leading zeros because the indicator displaying the integer is using a binary display format that isn't showing them. Change the indicator's display format to binary with 16 digits, padded with leading zeros.
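The same point in text-language terms: the value never changes, and zero-padding is purely a formatting choice at display time. A minimal Python sketch, analogous to setting a LabVIEW indicator to binary display with 16 digits and leading-zero padding:

```python
value = int("0000000000001010", 2)  # parses to 10; no zeros are "lost"

# Pad the display back out to 16 binary digits -- a formatting
# decision, not a change to the stored value.
print(format(value, "016b"))  # 0000000000001010
```

Formatting the integer back with 16-digit zero padding reproduces the original bit string exactly.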

 

 

Message 2 of 3

That solved my problem, thanks!

Message 3 of 3