# LabVIEW


## How to input a 64-bit word???

Hi

Since I am new to LabVIEW programming, I desperately need your
suggestions.

I have to implement a cluster containing unsigned integer words
(8-bit, 16-bit, 24-bit and 64-bit). In the controls palette, I could
only find representations for 8-, 16- and 32-bit signed or unsigned
integers. What should I do if I have to implement a 24-bit or a
64-bit word?

Regards
Rajesh
rajes_kumar@yahoo.com

## Re: How to input a 64-bit word???

LabVIEW can only handle unsigned integers up to 32 bits, so you need to split the 64-bit word into a 32-bit high word and a 32-bit low word. For the 24-bit word, use a 32-bit unsigned integer and ignore the top 8 bits, or split it into an 8-bit high word and a 16-bit low word.
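LabVIEW block diagrams can't be shown as text here, but the split-and-recombine idea can be sketched in Python (the function names are just illustrative, not LabVIEW primitives):

```python
def split_u64(value):
    """Split a 64-bit unsigned value into a (hi, lo) pair of 32-bit words."""
    hi = (value >> 32) & 0xFFFFFFFF
    lo = value & 0xFFFFFFFF
    return hi, lo

def join_u64(hi, lo):
    """Recombine the two 32-bit words into the original 64-bit value."""
    return ((hi & 0xFFFFFFFF) << 32) | (lo & 0xFFFFFFFF)
```

In LabVIEW terms, the split is a shift-right-by-32 for the high word and a mask (AND with 0xFFFFFFFF) for the low word.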

If you attach a VI explaining exactly what you are trying to do, I may be able to help more.

Dave.
==============================================


## Re: How to input a 64-bit word???

There is no native LabVIEW support for 64-bit integers.

What kind of application is this? Do you just need to (a) shuffle the values around, or (b) do actual math and comparison operations on them?

Case (a) is easy: you can use anything you like for the 64 bits (an 8-character string; arrays of 8×U8, 4×U16, or 2×U32; etc.).
Case (b) is more difficult, because you need to implement your own math, comparison, and formatting operations. You could internally represent the U64 number as above, as a cluster of two U32s, or similar.
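As a sketch of what case (b) involves, here is the carry logic for adding two numbers held as (hi, lo) pairs of U32s, written in Python for illustration (in LabVIEW this would be ordinary integer arithmetic on the two halves):

```python
MASK32 = 0xFFFFFFFF

def add_u64(a, b):
    """Add two U64 values, each given as a (hi, lo) pair of U32 words,
    with wrap-around semantics like a real unsigned 64-bit add."""
    a_hi, a_lo = a
    b_hi, b_lo = b
    lo = a_lo + b_lo
    carry = lo >> 32            # 1 if the low-word addition overflowed
    lo &= MASK32
    hi = (a_hi + b_hi + carry) & MASK32
    return hi, lo
```

Subtraction, comparison, and formatting follow the same pattern: operate on the low words first, then propagate the carry or borrow into the high words.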

Infinite precision arithmetic is not hard. Look at some of the recent LabVIEW Zone coding challenges, e.g. for [factorials](http://www.ni.com/devzone/lvzone/codechallenge6_results.htm) or [Vampire numbers](http://www.ni.com/devzone/lvzone/codechallenge4_results.htm). U64 would seem even simpler.

24 bits fit in a U32, so you just need to possibly check for 24-bit overflow (in this case, e.g. "number AND b11111111000000000000000000000000" is not equal to zero).
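That mask test, shown in Python for clarity (0xFF000000 is the binary mask above, i.e. the top 8 bits of a U32):

```python
def fits_in_24_bits(value):
    """True if a U32 value has no bits set above bit 23,
    i.e. it is a valid 24-bit word (no overflow)."""
    return (value & 0xFF000000) == 0
```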


## Re: How to input a 64-bit word???

Hi,

To detail my application a bit more: I have to use 64-bit words (9 ×
64-bit words) in the FPGA to calculate the modulating frequency. The
FPGA part is not interesting here. I have a set of 64-bit patterns
which is used in the FPGA code. In the near future, I might want
to change the bit pattern frequently, and hence I want to implement a
VI where I can change the 64-bit pattern without having to change the
pattern in the FPGA. So I tried to implement a cluster where I would
input the bit pattern (the whole 64-bit word) and drive the word to
the FPGA through the microcontroller.

Hopefully I don't need any calculation to be performed in the VI.

    |--------|  from VI  |------|           |------|
    | 64 bit |---------->|  uC  |---------->| FPGA |
    |--------|           |------|           |------|
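Since no arithmetic is needed, this is case (a) above. One workable sketch, assuming the pattern is entered as two U32 controls (high word and low word) in the cluster, is to serialize the pair into 8 bytes before driving it to the microcontroller. In Python notation:

```python
import struct

def cluster_to_bytes(hi, lo):
    """Serialize a (hi, lo) pair of U32s into 8 bytes,
    most significant byte first, ready to send to the uC."""
    return struct.pack(">II", hi, lo)
```

In LabVIEW this corresponds to typecasting or flattening the two U32s to a string/byte stream; the byte order just has to match what the microcontroller and FPGA expect.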

Is it possible to find a solution that would help me?

Regards
Rajesh
