Necessary to make strange tweaks to my array data pointers when calling dlls

In building pointers to be passed to certain dll calls, I have noticed that I need to pay attention to some peculiar details if I want the functions to execute properly. Namely, numeric types must have their bytes reversed from the way in which LabVIEW orders them (least-significant last) before being concatenated with the rest of the pointer, and booleans must be encoded as 32-bit integers rather than as single bytes.

The particular library I am calling was written in Delphi Pascal, which uses a true-boolean, single-byte type. The functions are stdcall calls. I'm certain the library was compiled on a Windows machine, but I am not certain about the hardware architecture (e.g. Intel or AMD or whatever) - not that I necessarily think it should make a difference. I don't have the source for these libraries, as they're 3rd-party proprietary stuff.

So, does a real pro out there know why I have to reverse bytes for my numeric types, and why I have to cast booleans to 4 bytes? Does this have to do with the stdcall convention versus the C calling convention? And if so, then in general - and for future reference - if given the choice, is it easier to use the C calling convention with LabVIEW?

Your answer would really help me solve a long-running mystery.

Thanks,
Nick
"You keep using that word. I do not think it means what you think it means." - Inigo Montoya
Message 1 of 8
I'm not sure what you mean by "before being concatenated with the rest of the pointer".

An x86 machine is little endian. So the CPU expects the bytes to be in reverse order. If a function in your dll expects an int32, it will want the bytes in reverse order.

So, when LabVIEW makes calls to native DLLs etc., it will automatically make sure that the byte order is in the correct format for the platform you are executing on. (Sun and Apple machines are big endian, which is also network order.)
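
To make the little-endian layout concrete, here is a minimal C sketch of my own (not from the original posts) that prints the bytes of a uint32 as they sit in memory; on x86 the least-significant byte comes first:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x01020304;
    const unsigned char *bytes = (const unsigned char *)&value;

    /* On a little-endian x86 machine this prints: 04 03 02 01 */
    for (int i = 0; i < 4; i++)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}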

Inside of the LabVIEW environment, you will notice that most things are "big endian," including writing data to a file. This is so that when you develop a VI on a Mac it will work on a PC and vice-versa. Cool huh?

Unfortunately, there is no "standard" way of representing a boolean. It can differ from language to language and from compiler to compiler. You'll have to make the adjustment to whatever your target language expects (like you have already done).
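
As an illustration of how much boolean widths vary (a sketch of my own, assuming a Windows build so that the Win32 BOOL typedef is available):

#include <stdio.h>
#include <stdbool.h>
#include <windows.h>   /* for the Win32 BOOL typedef */

int main(void)
{
    /* C99 bool is typically 1 byte; Win32 BOOL is a 32-bit int.  */
    /* Delphi's Boolean is 1 byte, while its LongBool is 4 bytes. */
    printf("sizeof(bool) = %zu\n", sizeof(bool));   /* usually 1 */
    printf("sizeof(BOOL) = %zu\n", sizeof(BOOL));   /* 4 */
    return 0;
}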

The stdcall versus the C calling convention only affects how things are put on and removed from the stack. When making a function call, what actually happens is that you push all of the arguments onto a stack and then "jump" to that function. The function can then look at the elements on the stack to see what values were passed to it. When the function returns, someone needs to shrink the stack (pop the parameters off the stack). The stdcall/C calling convention is what determines who has this responsibility: the function, or the calling code. If you don't set the calling convention correctly, then either both sides will try to pop the parameters (so too much is removed from the stack), or neither will (because each expects the other to do it), and a crash will likely result.
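
For example (my own illustration, with a hypothetical function name), prototyping a Delphi stdcall export in C requires spelling out the convention; omitting it makes the compiler assume cdecl, and caller and callee then disagree about who pops the arguments:

/* Hypothetical prototype for an export from the Delphi DLL.     */
/* __stdcall must match what the library was compiled with;      */
/* leaving it off defaults to __cdecl and corrupts the stack.    */
__declspec(dllimport) int __stdcall DoSomething(int a, int b);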

Hope this helps,
David
Message 2 of 8
Thanks, David,

You've answered my question. If you don't mind carrying this discussion on just a little further, though, your answer has brought up one more question...

You pointed out that depending on which calling convention you use, either the function or the calling code pops parameters off the stack. Which convention does which, and does one method inherently involve less overhead than the other?

Thanks again,
Nick

P.S. - For anyone reading this discussion in the future, what I meant by "before being concatenated with the rest of the pointer" was that the dll takes a complex data type (a Delphi record type, very roughly equivalent to a LabVIEW cluster), and I have to reverse the order of the bytes in each numerical value individually before I build the pointer to that parameter. I do not want to reverse the order of the entire pointer - that would place the values within the pointer in the wrong order. For example, let's say I have some data type that consists of two integers (I32), and suppose I want to pass the values 1, 2 to my function.

I first transform the integers to 1-D byte arrays (or strings). LabVIEW flattens them big-endian, so it will spit out this:
1 --> 00 00 00 01
2 --> 00 00 00 02

As David pointed out, I must reverse each value for execution on my little-endian x86 machine:
00 00 00 01 --> 01 00 00 00
00 00 00 02 --> 02 00 00 00

Now I append them together ("concatenate" if you're thinking of strings) to build my pointer:
01 00 00 00 + 02 00 00 00 = 01 00 00 00 02 00 00 00

Notice that if I had just stuck the flattened values together and reversed the whole thing, I would have gotten:
00 00 00 01 + 00 00 00 02 = 00 00 00 01 00 00 00 02 --> 02 00 00 00 01 00 00 00

My function would interpret this as 2, 1 - not 1, 2.
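
(For comparison, here is how the same buffer would be built in C - a sketch of my own, not from the thread. Because a C compiler stores each int32 in native little-endian order on x86, a plain memcpy of each field produces exactly the byte sequence above with no manual swapping:)

#include <stdint.h>
#include <string.h>

/* Build the 8-byte buffer for the record {1, 2} described above. */
void build_record(uint8_t out[8])
{
    int32_t a = 1, b = 2;
    memcpy(out,     &a, sizeof a);  /* bytes 01 00 00 00 on x86 */
    memcpy(out + 4, &b, sizeof b);  /* bytes 02 00 00 00 on x86 */
}
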
"You keep using that word. I do not think it means what you think it means." - Inigo Montoya
Message 3 of 8
>You pointed out that depending on which calling
>convention you use, either the function or the calling
>code pops parameters off the stack. Which convention
>does which, and does one method inherently involve
> less overhead than the other?

With cdecl the caller is responsible for adjusting the stack pointer, whereas with stdcall the function itself adjusts the stack pointer. stdcall is supposed to be a tiny wee bit faster, but I'm sure you won't be able to measure the few nanoseconds. I'm not exactly sure what claims this is actually based on, but I have decided that worrying about this form of optimization is simply nonsense as long as you don't program in assembly.
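
(To illustrate on 32-bit x86 - my own sketch, from general knowledge rather than anything in this thread:)

int __cdecl   f_cdecl(int a, int b);    /* caller cleans up: after the call,
                                           the caller emits  add esp, 8     */
int __stdcall f_stdcall(int a, int b);  /* callee cleans up: the function
                                           returns with  ret 8              */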

As for what you explain about the byte order, I'm not sure what you mean; it all seems wrong to me. Of course, if you convert an int32 to a boolean array the first bit will be visually on the left side, as arrays are typically displayed in the same direction as we read text, but that is only a visual representation. In memory, a LabVIEW int32 consists of 32 bits in the native format. So unless you tell us what Delphi type you are trying to create (and hopefully my rusty Pascal knowledge is good enough to understand it), I cannot really tell you what you should do instead to get that type.

Rolf Kalbermatter
Message 4 of 8
> The particular library I am calling was written in Delphi Pascal,
> which uses a true-boolean, single-byte type. The functions are
> stdcall calls. I'm certain the library was compiled on a Windows
> machine, but I am not certain about hardware architecture (e.g. Intel
> or AMD or whatever) - not that I necessarily think it should make a
> difference. I don't have the source for these libraries, as they're
> 3rd party proprietary stuff.
>
> So, does a real pro out there know why I have to reverse bytes for my
> numeric types, and why I have to cast booleans to 4 bytes? Does this
> have to do with the stdcall convention versus the C calling
> convention? And if so, then in general - and for future reference -
> if given the choice, is it easier to use the C calling convention with
> LabVIEW?
>

As others are pointing out, there are a lot of calling conventions to
choose from on Windows. It isn't so much a matter of efficiency as of
convention. If both agree, things work. If caller and callee don't
agree, bad things happen.

As for your byte ordering, I think you must be flattening your cluster
and passing the string to Delphi. In memory, all LV data matches the
CPU byte ordering. When written to disk or flattened for transmission
over TCP, or such, LV will always reorder the bytes to be in network
order. If you pass the cluster using an adapt to type param in the Call
Lib Function, you shouldn't need to do this, but you will need to be
careful about alignment assumptions between caller and callee. If you
do the flattening, you will need to do byte swapping.
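
(If you do flatten on the LabVIEW side and want to fix things up elsewhere, the fix-up is an ordinary 32-bit byte swap; a minimal C version - my own sketch - looks like this:)

#include <stdint.h>

/* Reverse the byte order of a 32-bit value, e.g. to convert a
   big-endian (flattened/network-order) int32 to little-endian. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}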

As for the Boolean, LV uses a byte to store each Boolean. But when
passed to a DLL on the stack, everything gets promoted to be four bytes.
This is another piece of the calling convention that is even more
complicated and not selectable. The flattened Boolean will be a byte
too. You shouldn't see Booleans taking more than one byte except when
in a cluster with other larger types. A cluster of Boolean and I32 uses
only five bytes, but on some computers, pad bytes will be inserted in between so the I32 can be four-byte aligned. I don't really think this is what you are seeing, since on Windows everything is packed to one-byte alignment.
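
(The padding Greg describes is easy to demonstrate in C - my own sketch:)

#include <stdio.h>
#include <stdint.h>

struct natural { uint8_t flag; int32_t value; };   /* usually 8 bytes */
#pragma pack(push, 1)
struct packed  { uint8_t flag; int32_t value; };   /* exactly 5 bytes */
#pragma pack(pop)

int main(void)
{
    /* With default alignment the compiler inserts 3 pad bytes after  */
    /* 'flag' so 'value' lands on a 4-byte boundary; packed to 1-byte */
    /* alignment (as LabVIEW uses on Windows) no padding is inserted. */
    printf("natural: %zu, packed: %zu\n",
           sizeof(struct natural), sizeof(struct packed));
    return 0;
}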

If you still have questions, please post again and be more specific
about what you are trying to do and what led you to think Booleans take
four bytes, etc.

Greg McKaskle
Message 5 of 8
Thanks, Greg.

Yes, I've been "flattening" a cluster to pass it to the function. Sorry for not using this bit of LV nomenclature earlier. My wording must have made things confusing. Let's continue getting the record straight. (Sorry David and Rolf, perhaps this will clarify things for you guys)...

The developer who wrote the dll I'm calling was thinking ahead. He realized that most developers using his code wouldn't be using Delphi. So whenever he has Delphi-specific, user-defined data types, he has you pass them to the dll calls as pointers rather than as the actual data type. So once in a while I have to create a pointer to a user-defined Delphi "record" type that contains some mixture of other types - strings, longs, doubles, booleans, you name it.

I agree, Greg, that I should simply be able to use "adapt to type." Internally, you'd expect adapt to type to do the exact same thing that I'm doing byte by byte on my own. But, I swear to you that it doesn't work. It seems to me either LV or Delphi is dropping the ball. (My money's on Delphi. I wish NI could buy Borland and turn their headquarters into a warehouse! I loathe that company - but that's beside the point.) And I know this, because if I use adapt to type, I have to switch the booleans in my cluster to I32's. But even then everything's still wrong - the bytes of all numeric values have been reversed! I get these huge numbers going into the function, and the booleans always come out FALSE because the "true" byte is at the wrong end. Strings, on the other hand, come out just fine!

To circumvent this crap, I pass the whole thing as an array data pointer ([U8]), which I build piece-wise as described in my earlier post. How do I know to do this? A whole lot of trial and error.

So my original question was this: Why do I have to so painstakingly fiddle with my data? As you point out, everything on the stack gets promoted to 4 bytes. The answer to my question probably has something to do with that fact. But you would think that the software would take care of this kind of stuff along the way, so we humans don't have to think in 1's and 0's.

My approach to solving this problem is unconventional, a royal pain, and certainly not for everyone - I'll give you that. But for anyone reading this in the future because they're stuck debugging a Call Library Function node that takes some freaky data type - particularly if the DLL was written in Delphi - I will say that you may have to get down and dirty with your bytes.
"You keep using that word. I do not think it means what you think it means." - Inigo Montoya
Message 6 of 8
Well, watch out with adapt to type! It will pass the structures exactly as they are laid out in LabVIEW, and this can have a number of pitfalls.

1) LabVIEW uses its own format for strings and arrays, called handles. A handle is a pointer to a pointer to a block of memory containing the array data, with one int32 per dimension in front telling the size of that dimension. No other environment uses this format by default, although if you know it you can always account for it on the C/C++ or even Delphi side. But passing structures which contain strings or arrays by adapt to type will usually not work unless the DLL is specifically aware of the LabVIEW types.
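
(In C terms, a 1-D LabVIEW byte-array handle looks roughly like this - a sketch based on Rolf's description; the real declarations ship in LabVIEW's extcode.h:)

#include <stdint.h>

/* A LabVIEW 1-D uint8 array: a pointer to a pointer to a block that
   starts with one int32 length per dimension, followed by the data. */
typedef struct {
    int32_t dimSize;    /* number of elements                        */
    uint8_t data[1];    /* first element; the rest follow in memory  */
} LVU8Array, *LVU8ArrayPtr, **LVU8ArrayHdl;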

2) Byte alignment can be an issue too. LabVIEW packs structures as much as possible on Windows. The 32-bit Windows API nowadays often uses long-word alignment for longs, floats and doubles, and Delphi might do that as well by default. Your Delphi documentation should be able to tell you such details (I would hope). You can adjust for this on the LabVIEW side if necessary by adding explicit dummy filler values to the cluster.

Unless you have a Flatten or Typecast function in your wire, the byte order should always be the same as that of the CPU the VI is running on. The Flatten (and Unflatten) and Typecast functions always do a byte and word swap for the affected data types when you convert to or from anything larger than 8-bit values. So when you Typecast your int32 to a string or byte array, the order of the bytes is reversed on the other side. Basically, if you build a cluster in LabVIEW and Typecast it to a string to pass to a DLL, what you should do (only under Windows and Linux!!) is put a Swap Byte and a Swap Word function in the cluster wire before wiring it to the Typecast function. This will byteswap all the values in the cluster in a way that compensates for the byteswapping the Typecast will do. Of course, since about LabVIEW 6.0 you can alternatively set the CLN parameter to adapt to type, and then you don't need to byteswap and typecast at all.

But watch out: if the cluster is not flat (e.g. contains strings or arrays - you can see this because Typecast will refuse to connect to such a wire, as Typecast can only convert flat data types), you can't really pass such a cluster to a DLL easily, neither by flattening it nor by using Adapt to Type. When flattened, the entire cluster will be put into one single buffer, with the strings embedded in it, which is definitely not what any DLL would ever expect. When using Adapt to Type, the embedded arrays (and strings are really byte arrays too) will not be passed to the DLL in a way which a DLL can understand by default.

Last but not least, LabVIEW's Boolean is really an 8-bit integer with the LSB indicating the state. The Win32 API, and probably Delphi too, uses a 32-bit integer to represent Booleans. You can easily adjust for that by including an int32 with value 0 or 1 on the LabVIEW side.

Rolf Kalbermatter
Message 7 of 8
Thanks David, Rolf, and Greg for all the input on this rather esoteric topic. I still have issues regarding this topic, but I don't know that I can formulate them into pointed questions. In any case, the dll calls work, and that's what matters in the end.

What I DO know is that - thanks to you guys - I definitely know more about what's going on at the assembly level when a dll is called than I did when I started this thread. Gentlemen, you rock. Four stars all around.

Nick
"You keep using that word. I do not think it means what you think it means." - Inigo Montoya
Message 8 of 8