# LabVIEW

## Converting from a 16 bit integer to a 12 bit integer

Hello,

I need to convert a 16 bit integer, from a slider on the front panel, into a 12 bit integer. The DAC I will be using this data to write to requires a 12 bit voltage value and 4 trailing zeros (i.e. a 16 bit word). Is this possible?

Thanks!

Message 1 of 16
(8,034 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

Since you want trailing zeros, just use Logical Shift to shift 4 places.
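LabVIEW's Logical Shift is a block-diagram function, but the bit manipulation it performs can be sketched in a text language. A minimal Python illustration (the function name `to_dac_word` is just for this example):

```python
def to_dac_word(value_12bit: int) -> int:
    """Place a 12-bit value in the upper bits of a 16-bit word,
    leaving 4 trailing zeros, as the DAC expects."""
    value_12bit &= 0x0FFF               # keep only the 12 significant bits
    return (value_12bit << 4) & 0xFFFF  # shift left 4 places; fits in 16 bits

print(f"{to_dac_word(0x0ABC):016b}")  # prints 1010101111000000
```

The `& 0xFFFF` is redundant after masking to 12 bits, but it makes explicit that the result is a 16-bit word.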

There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 16
(8,031 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

LabVIEW does not have 12 bit integers, so apparently you want 16 bits, with only 12 bits significant and aligned the way you need. Tim already solved that for you.

Now we also need to adapt the slider control. First, change the representation to the correct type (probably U16). Next, configure the "Data Entry..." settings correctly: coerce to the valid range for 12 bits (0 to 4095). Also make sure to read the documentation regarding byte order (LabVIEW is always big endian, but your instrument might not be).
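In LabVIEW this coercion is done in the slider's "Data Entry..." settings, but the behavior amounts to a simple clamp. A Python sketch, assuming the "Coerce" response to out-of-range values:

```python
def coerce_12bit(x: int) -> int:
    """Clamp a value to the valid 12-bit range 0..4095,
    like a front-panel control with Data Entry limits set to coerce."""
    return max(0, min(x, 0x0FFF))
```

Anything the slider produces outside 0..4095 would then be pinned to the nearest limit before the shift.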

Message 3 of 16
(8,015 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

Thank you crossrulz and altenbach, I have implemented both of those things and I am getting the output I desired. However, it only appears as I want when I use an indicator and set it to display the U16 as a 16-digit binary number with leading zeros. If I don't wire the output from Logical Shift to the indicator, the data isn't transferred as a 16-digit binary U16 with leading zeros. Is there a way to get around this? I've inserted a picture of my code as well.

Message 4 of 16
(8,008 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

@lmcohen2 wrote:

However, it only appears as I want when I use an indicator and set it to display the U16 as a 16-digit binary number with leading zeros.

The data is always 16 bits.  It is just a question of whether or not you are displaying the data in a useful manner.

@lmcohen2 wrote:

If I don't wire the output from Logical Shift to the indicator, the data isn't transferred as a 16-digit binary U16 with leading zeros. Is there a way to get around this?

How are you verifying the data?  This sentence makes no sense to me.  And your image does not show where your data is supposed to be going.  Again, your integer is 16 bits, period.  It is just a matter of how you have the data being displayed.
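The point that display format and the underlying data are independent can be shown in any language. In Python, the same 16-bit value rendered two ways:

```python
v = 0x0050         # the data: one 16-bit value, always the same bits
print(v)                # 80               (decimal display)
print(f"{v:016b}")      # 0000000001010000 (binary, 16 digits, leading zeros)
```

Changing how an indicator formats the number changes neither the value nor what travels down the wire.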

Also, just change your slider to the U16 data type instead of using an I16 and then converting the data type.

Message 5 of 16
(7,998 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

Hi Imcohen,

As the wire is the variable (THINK DATAFLOW!), the value in the wire is the very same as the value in your indicator!

Why do you think it would be different?

Best regards,
GerdW

using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 6 of 16
(7,997 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

I am just confused, as the data needs to be sent to a microcontroller in exactly the form the indicator displays it (16 bits with trailing zeros). If it doesn't have the trailing zeros, the microcontroller will not understand the message. Am I just overthinking this?

Message 7 of 16
(7,993 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

How are you communicating with the controller, and how does it want the data (U16? Flattened binary string? A formatted string consisting of the characters 0 and 1? Etc.)? Do you have a driver subVI?

Message 8 of 16
(7,970 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

I am using an Aardvark to communicate through SPI. There is an Aardvark subVI for writing SPI, and it requires the data as an array of unsigned bytes (U8). So I am concerned that the data I am trying to write will be misinterpreted and result in errors.

Message 9 of 16
(7,962 Views)

## Re: Converting from a 16 bit integer to a 12 bit integer

So you need to cast your U16 to an array of two U8s. Simple as that! (Again, be careful with byte order.)
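In LabVIEW this would typically be a Type Cast or Split Number; as a text-language illustration, a Python sketch of the big-endian split (most significant byte first — swap the order if the device wants little endian):

```python
def u16_to_bytes_be(word: int) -> list:
    """Split a 16-bit word into two unsigned bytes, big endian."""
    word &= 0xFFFF
    return [(word >> 8) & 0xFF, word & 0xFF]

u16_to_bytes_be(0xABC0)  # -> [0xAB, 0xC0]
```

The resulting two-element byte array is the shape the SPI write expects.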

What does the documentation say exactly?

Message 10 of 16
(7,959 Views)