Trying to Follow White Paper - Running a LabVIEW Application on Linux Without a Graphical User Interface


wiebe@CARYA wrote:

Obviously LoadLibrary won't work on Linux.

 

Is there a Linux equivalent?


LoadLibrary:

void *dlopen(const char *file, int mode);

GetProcAddress:

void *dlsym(void *restrict handle, const char *restrict name);

FreeLibrary:

int dlclose(void *handle);
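
For illustration, a minimal sketch of that pattern in use; the library name (libexample.so) and function (my_func) are made-up placeholders, and on older glibc you also need to link with -ldl:

#include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */
#include <stdio.h>

int main(void)
{
    /* LoadLibrary equivalent */
    void *handle = dlopen("libexample.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* GetProcAddress equivalent: look the symbol up by name and cast
       the result to the proper function pointer type */
    int (*my_func)(int) = (int (*)(int))dlsym(handle, "my_func");
    if (!my_func) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("my_func(42) = %d\n", my_func(42));

    /* FreeLibrary equivalent: decrements the refcount; the library is
       unloaded once it reaches 0 */
    dlclose(handle);
    return 0;
}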

 

 

Rolf Kalbermatter
My Blog
Message 11 of 25

Does it make sense to use those as a way to make sure one and the same runtime engine is used?

Message 12 of 25

wiebe@CARYA wrote:

Does it make sense to use those as a way to make sure one and the same runtime engine is used?


Nope! Shared library initialization is a very complicated topic. Generally, libraries are resolved by name, and the dynamic loader maintains a per-process table of loaded modules together with their reference counts. After the first load, each explicit (dlopen()) or implicit (import table reference) loading of a shared library increases its refcount, and dlclose() decreases it. Once the refcount reaches 0, the module is unloaded (and any _fini() stub executed), although _init() and _fini() are deprecated and you should use the GCC

__attribute__((constructor))
__attribute__((destructor))

instead when explicitly defining load and unload initializer functions.
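
A minimal sketch of those attributes in use (the function names are arbitrary):

#include <stdio.h>

/* Runs when the shared library is loaded, via dlopen() or at program start. */
__attribute__((constructor))
static void on_load(void)
{
    fprintf(stderr, "library loaded\n");
}

/* Runs when the refcount drops to 0 and the library is unloaded. */
__attribute__((destructor))
static void on_unload(void)
{
    fprintf(stderr, "library unloaded\n");
}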

Rolf Kalbermatter
My Blog
Message 13 of 25

@rolfk wrote:

Nope! Shared library initialization is a very complicated topic. [...]

But what if you'd make sure there's only one explicit dlopen()?

 

You could call dlopen() once, and then use the pointers (from dlsym()) to call the functions? That would mean the refcount is always 1 after dlopen(), until it drops to 0 at dlclose()?

Message 14 of 25

Yes, and make sure that every single reference to any function is resolved through explicit dlsym() calls. But I still doubt that this is actually the problem: standard shared library loading should not cause multiple parallel instances of the shared library to be opened. Without an example to play with, there is little more to say, and I'm not going to build that example just out of curiosity.

 

It honestly isn't my problem and I don't see where I could use the gained insight at this point myself. 😀

And don't discount the possibility of a much simpler mistake in the OP's setup. What he is trying to do may seem straightforward, but the description so far leaves more than enough room for other mistakes somewhere.

Rolf Kalbermatter
My Blog
Message 15 of 25

Fair points.

 

I do have example code I could pack up and upload here that doesn't reference anything secret or ITAR-restricted.

 

I am using Syslog to log any generated errors, and I am not getting any explicit LabVIEW errors in my configuration.

 

It seems that when I moved from the C code calling the main LabVIEW program directly to calling a VI that dynamically loads and runs the Main VI (not the copy inside the .so, as I can't get LabVIEW to run the VI from inside the .so), the VI is no longer in the same context. This I half expected. The VI called by C is built into the .so to use the headless LabVIEW run-time engine. I'll see if I can get something packed up and uploaded here, or gather more debugging information.

 

Maybe I am going about the system all wrong.

 

The system start-up and connection monitoring got more complicated. At first I was told that xinetd starts the application, so I wrote C code that launches the LV Main and, using the Pipes VIs pointed at STDIN and STDOUT, was able to read and write on the port in question through the xinetd-mapped TCP connection. Of course, then I was told that there can be more than one client on the port, that xinetd is set up as blocking, and that we run C code to take over the port monitoring afterwards.
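
For reference, that first version boils down to this bare-bones pattern: xinetd accepts the TCP connection and hands it to the spawned process as fd 0 and fd 1, so plain read()/write() on stdin/stdout talk to the remote client (the echo loop here is only an illustration, not the actual code):

#include <unistd.h>   /* read, write */

int main(void)
{
    char buf[512];
    ssize_t n;

    /* Under xinetd, fd 0 and fd 1 are duplicates of the accepted TCP
       socket, so this loop echoes whatever the client sends. */
    while ((n = read(0, buf, sizeof buf)) > 0) {
        if (write(1, buf, (size_t)n) != n)
            break;
    }
    return 0;
}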

 

I then need the C code to be able to hand my LV program the new FD connections while the LabVIEW Main runs in parallel. I'm not great at C, but I can get help from the C developers at work, and I'll probably have the C code load the Main VI from the .so as originally planned, using a pthread so it doesn't block the execution of the simple C code I am using to test all this out.
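
Something along these lines, assuming a hypothetical run_labview_main() entry point exported by the built .so:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical entry point exported by the LabVIEW-built .so; it
   blocks for the lifetime of the Main VI. */
extern void run_labview_main(void);

static void *lv_thread(void *arg)
{
    (void)arg;
    run_labview_main();
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Run the Main VI in its own thread so the C test harness can keep
       monitoring for connections and hand over new FDs in parallel. */
    if (pthread_create(&tid, NULL, lv_thread, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* ... connection monitoring would go here ... */

    pthread_join(tid, NULL);
    return 0;
}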

 

I was hoping to use the Linux RT VIs that NI put out for inter-process communication, but copying those three libraries over to CentOS produces errors, not unexpectedly.

Ryan Vallieu CLA, CLED
Senior Systems Analyst II
NASA Ames Research Center
Message 16 of 25

My oh my, what a mess you've got yourself into! This is starting to look like Münchhausen having to pull himself out of the swamp together with his horse, blindfolded and with his arms tied behind his back. Someone really wanted to pester you on purpose!

 

NI Linux RT shared libraries will be hard to simply drag onto a standard Linux distribution, and I'm not sure the fact that it is CentOS helps anything here. Being a Red Hat Enterprise derived distribution is not a bad starting point, but still.

 

What is certainly important to realise is that NI Linux RT comes in two flavors: ARM compiled and x64 compiled. So you will absolutely need to make sure that you get the x64 compiled version of the libraries and that your CentOS installation is an x64 installation too; anything else will never work. There is no easy way to see from the outside what architecture an executable or shared library was made for, but command line tools like "readelf -h <file>", "objdump -f <file>", or even just "file <file>" will give you various amounts of information about the architecture a binary file was built for.

 

Still, I haven't yet grokked the full picture of this Münchhausen experiment that you are supposed to pull out of the hat! It certainly feels like an extremely roundabout way of doing things, and whenever I have had this feeling in the past, it turned out that I was looking at the problem from the totally wrong side.

Rolf Kalbermatter
My Blog
Message 17 of 25

This would all go away if they just let me use the LabVIEW listener functions to monitor for new connections. I've demoed that to them and it does what we want; xinetd is just a proxy for launching the system. Maybe I should push harder to move away from this, as this seems to be the only one of our many systems that uses this connection-launching style. My boss told me that the code he has written for other client-connection monitoring does not use xinetd to launch the code.

 

I suppose the ONE reason I am hearing that seems valid is that if the program crashes, they rely on xinetd's connection detection from the client side to respawn the program.

 

I'm going to see if my code can be launched by xinetd with xinetd set to blocking (so only one instance of the server is launched) and then have my listener code monitor for subsequent connections. That would still leave me needing to convert that first connection, arriving as FDs 0 and 1 for the TCP socket from xinetd, into a LabVIEW TCP connection. I am unsure whether Extern.c has functions for that. Or I could do some crazy mixed handling of the connections. Isn't this fun?

 

Re: IPC Library:

x64: all the libraries referenced by the CINs in the LabVIEW VIs were already on the CentOS x64 LabVIEW system, as they are standard C libraries; I didn't copy those over from a Linux RT system. The only thing manually installed was the LabVIEW code. The VIs all appeared to load with no missing-library notices. They run up to a certain point, and then I start to hit permission/access issues, which is probably due to the way NI Linux RT is set up to run as lvuser versus me running under my own account on CentOS. I am trying to work this out with the NI developer. They originally suggested just copying over the LabVIEW libraries and trying it out as a first attempt.

Ryan Vallieu CLA, CLED
Senior Systems Analyst II
NASA Ames Research Center
Message 18 of 25

So it basically all hinges on the requirement to use xinetd to detect, from the client side, that your app has gone down? That's pretty much insane!

 

As to turning a file descriptor into a LabVIEW network refnum, I'm not aware of any such function. The opposite does exist inside vi.lib, where you can retrieve the raw socket descriptor of the underlying system socket that a network refnum wraps. But even if such a function existed, I doubt it would work for the stdio pseudo file descriptors: LabVIEW's use of the socket functions is not limited to recv() and send(); it also always does select() on the socket and absolutely requires it to be in asynchronous mode, and I'm not sure stdio file descriptors support the full semantics of such a requirement. xinetd apparently also allows handing off the original network file descriptor to the child, which would work if there were a function to turn such a socket fd into a LabVIEW refnum, but as said, I do not believe LabVIEW exposes any external function to do so.

 

There are aspects of xinetd that may be additional benefits, such as configurable protection against port scanning and ACL-based connection wrappers, but that doesn't seem to be why it was chosen here. Things like timed availability of connections could always be implemented very easily in the application itself.

 

If service shutdown detection is really the main reason for using xinetd, I do wonder if it is really smart to require you to jump through 500 hoops to make your LabVIEW app work with it, rather than make the client side jump through one single hoop and implement a heartbeat mechanism that will tell it very quickly if the connection has been lost! All my network-based implementations usually have one message type (often enough the 0x00 type) containing a simple timestamp and a small bit pattern; the server responds simply by echoing the message with the bit pattern inverted. That serves as a simple roundtrip measurement, an are-you-alive check, and optionally synchronization, all in one. If the client does not want to use it, it is free not to; if it does, it can use it as often as it likes.
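
A minimal sketch of such a heartbeat message and the server-side echo (the exact layout is made up for illustration):

#include <stdint.h>
#include <string.h>

/* Example heartbeat message: type byte, timestamp, small bit pattern. */
typedef struct {
    uint8_t  type;      /* 0x00 = heartbeat */
    uint64_t timestamp; /* client's send time, echoed back unchanged */
    uint32_t pattern;   /* server inverts this before echoing */
} heartbeat_msg;

/* Server side: echo the message with the bit pattern inverted. The
   client gets a roundtrip measurement from the unchanged timestamp and
   knows the peer is alive because the pattern came back inverted. */
static void handle_heartbeat(const heartbeat_msg *in, heartbeat_msg *out)
{
    memcpy(out, in, sizeof *out);
    out->pattern = ~in->pattern;
}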

Rolf Kalbermatter
My Blog
Message 19 of 25

Thanks for the discussion, Rolf. I agree; I usually just run a heartbeat mechanism. It doesn't even have to be the client that recognizes the loss of function; it could be another resident daemon on the server side.

 

I can easily implement a startup routine that gets the server up, running, and waiting for connections, and also start another process that watches for heartbeat loss and relaunches the server if needed. I will have to make my case more emphatically.

 

Our system is totally disconnected from the regular networks, and we don't use xinetd for security: someone would have to get through physical security and gain access to a machine before doing any kind of port scanning or DoS attack.

 

Thanks for the further insight on the FD conversion.

Ryan Vallieu CLA, CLED
Senior Systems Analyst II
NASA Ames Research Center
Message 20 of 25