LabVIEW

How to pass a parameter of type __int64 to a DLL from LabVIEW?

I have a DLL function built in C that has a parameter of type __int64. Is it possible to create a control in LabVIEW of a type equivalent to __int64?
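
For reference, the export I am trying to call looks roughly like this minimal C sketch (the function name AddOffset and the stdcall convention are placeholders here, not the actual declaration):

__declspec(dllexport) __int64 __stdcall AddOffset(__int64 value)
{
    /* operates on the 64-bit value and returns one as well */
    return value + 1000;
}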
Message 1 of 4
(3,742 Views)
There is no such thing in LabVIEW as a 64-bit integer, of course, but you may be able to pass the value in anyway. Try putting it in as a SGL or DBL number; you just may be able to fool the DLL into receiving the data in spite of the type mismatch.

If you have the DLL source code, you could also try recompiling it to split the i64 into two I32s, and then combine the two inside the DLL.

Good luck.
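
To sketch the recompiling idea in C (the names OriginalFunc and OriginalFunc_Split below are illustrative only): add a second export that takes the value as two 32-bit halves, which LabVIEW can supply as U32 controls, and rebuild the __int64 inside the DLL:

__int64 OriginalFunc(__int64 value);   /* the existing entry point */

__declspec(dllexport) __int64 __stdcall OriginalFunc_Split(unsigned long hi,
                                                           unsigned long lo)
{
    /* reassemble the 64-bit value from its high and low 32-bit halves */
    __int64 value = ((__int64)hi << 32) | (__int64)lo;
    return OriginalFunc(value);
}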
Message 2 of 4
(3,742 Views)
I don't think a single would work, since it is only 4 bytes. A double might work because it is also 8 bytes. I think you would have to take an array of two U32s that represent the high DWORD and the low DWORD of your I64, cast them to a string of 8 bytes, and then cast that to a double. You may be able to go straight from the U32s to the double. The program may throw an exception anyway.

If it is possible, it would be better to write the DLL to let you pass in two U32s or two I32s instead.
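
For what it's worth, here is a plain C illustration of why that kind of type punning can work: only the 8-byte bit pattern crosses the call boundary, so if the two DWORDs are flattened into a DBL and the DLL reads those same bytes back as an __int64, the value survives. This assumes the 32-bit calling conventions that pass both DBL and __int64 parameters on the stack, and the names are made up for the example:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned long hi = 0x12345678UL;   /* high DWORD of the i64 */
    unsigned long lo = 0x9ABCDEF0UL;   /* low DWORD of the i64  */
    unsigned char bytes[8];
    double d;
    __int64 recovered;

    memcpy(bytes, &lo, 4);        /* little endian: low DWORD first       */
    memcpy(bytes + 4, &hi, 4);    /* then the high DWORD                  */
    memcpy(&d, bytes, 8);         /* "cast the 8-byte string to a double" */
    memcpy(&recovered, &d, 8);    /* what the DLL sees as an __int64      */

    printf("recovered = 0x%I64x\n", recovered);   /* 0x123456789abcdef0 */
    return 0;
}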

LabVIEW needs to add support for LONG_LONG_INTEGER, i64, etc. It also needs to add support for pointers and handles.

Doug De Clue
ddeclue@bellsouth.net


Message 3 of 4
(3,742 Views)

kelsie,

I would agree with the other answers, but I have a bit more information as well. LabVIEW's EXT data type is an IEEE format for an 80-bit floating-point number (1 sign bit, 15 exponent bits, 64 mantissa bits). This should therefore represent a 64-bit integer with no issues (theoretically). However, I have only ever been able to get it to display integers up to 1000000000000000000.000000 (1*10^18). I have a VI that reproduces this and I am trying to get it fixed. The numerical value in memory is correct, but LabVIEW cannot display it to you at each integer value between 10^18 and 2^64.

(Try placing an EXT control on the front panel and typing in 1000000000000000000.000000, then type in 1000000000000000001.000000. Notice that it does not display the second value. However, run the attached LabVIEW VI and notice that the two numerical values compare as not equal even though they display identically.) Another thing to note is that the increment/decrement buttons quit working after 9007199254741000.00.
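
A quick C check of that precision point, assuming a compiler whose long double is the 80-bit x87 format that EXT corresponds to (gcc on x86 behaves this way; note that Visual C++ treats long double as a plain double): a 53-bit DBL mantissa stops distinguishing consecutive integers at 2^53, which is right where the increment/decrement behavior gives out, while a 64-bit EXT mantissa keeps going.

#include <stdio.h>

int main(void)
{
    double      d = 9007199254740992.0;   /* 2^53 */
    long double e = 9007199254740992.0L;

    printf("%.0f vs %.0f\n", d, d + 1.0);      /* prints the same number twice */
    printf("%.0Lf vs %.0Lf\n", e, e + 1.0L);   /* prints two values, one apart */
    return 0;
}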

This all came about because customers wanted to get 64-bit integers from GPIB instruments. So what I did was take the string and split the 64-bit integer into two 32-bit integers. I then put them back together into an EXT data type to represent the large number. This is when I ran into the issue with the very large numbers.

So, I agree that inputting two 32-bit integers would be the best method. Then use the "Scale by Power of 2" function to multiply the high part by 2^32 and add it to the low part. Again, this works for numbers up to 10^18.
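
The combination step looks like this in C terms, again assuming an 80-bit long double so no mantissa bits are lost; ldexpl plays the role of Scale by Power of 2:

#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long hi = 0xFFFFFFFFUL;   /* high 32 bits parsed from the instrument string */
    unsigned long lo = 0xFFFFFFFFUL;   /* low 32 bits                                    */

    /* high part * 2^32 + low part, i.e. Scale by Power of 2, then add */
    long double value = ldexpl((long double)hi, 32) + (long double)lo;

    printf("combined = %.0Lf\n", value);   /* 18446744073709551615, i.e. 2^64 - 1 */
    return 0;
}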

I hope this helps.

Randy Hoskin
Applications Engineer

National Instruments

Message 4 of 4
(3,742 Views)