LabVIEW Idea Exchange

Wes_P

Install LabVIEW API ONLY for NI Drivers

Status: New

Problem

Much of the time, the bulk of LabVIEW development happens on computers that will never interface with hardware. A dozen engineers may be collaborating on code that will ultimately run on a dedicated, connected machine elsewhere. Yet, as things currently stand, I have to install more than I need on my development machine just to get access to the API VIs. If I am working on my laptop on an application with DAQ, RF, spectrum analyzer, etc. components, I have to either download and install all of those drivers or live with missing VIs and broken run arrows. This seems needless, since my particular machine will never actually interface with the hardware.

 

Idea

I would like to have the option to install only the LabVIEW VIs and skip the driver itself. In many, if not most, cases the LabVIEW API could be independent of the driver version. It would install very quickly, since it would just be a set of essentially no-op VIs. I don't care that the VIs would do nothing; they would just be placeholders for my development purposes. This would give me full API access to develop my code without having to carry around large driver installations that I will never actually use.
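As a rough illustration of what such no-op placeholder VIs would amount to (sketched in Python, since G code can't be shown as text; all names here are hypothetical and only loosely modeled on a DAQmx-style API, not the real one):

```python
# Hypothetical stub of a driver API: the same call signatures as the
# real driver, but every operation is a no-op. This is enough to
# develop and compile application code against, with no hardware.

class StubTask:
    """Placeholder for a hardware task; holds no real resources."""
    def __init__(self, name=""):
        self.name = name
        self.channels = []

def create_task(name=""):
    return StubTask(name)

def add_voltage_channel(task, physical_channel, v_min=-10.0, v_max=10.0):
    task.channels.append(physical_channel)  # record intent, touch no hardware
    return task

def read(task, samples):
    return [0.0] * samples  # canned data instead of a driver call
```

On the deployed, hardware-connected machine the same calls would resolve to the real driver API, so application code would not need to change.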

Wes Pierce
Principal Engineer
Pierce Controls
9 Comments
User002
Not applicable

This would be awesome - but it would be doubly awesome if it were agnostic to driver version.

 

Today, my biggest headaches as a LabVIEW developer include either maintaining a fleet of virtual machines or constantly uninstalling and reinstalling the entire NI stack because I need to move a driver version up or down.

cbutcher
Trusted Enthusiast

This would also be nice for "machines" that don't/can't install the full drivers.


See my 7-minute presentation at the GLA Summit for a discussion of this in the context of Docker instances - many drivers can't be installed there, but you can fudge things enough to get the API available and build applications/libraries.

If the API were explicitly separate from the driver backend, this would be much easier and the 'fudge' would be unnecessary. As a workaround, you could see if fudging your installations in the same way would get you a reduced installation size (but I'd guess it's more pain than it's worth on 'real' computers that can install the full driver software).


GCentral
Albert.Geven
Trusted Enthusiast

And if this could be a simulation environment into which we can plug simulations such as DAQmx modules, it would be the best development system we could wish for.

greetings from the Netherlands
rolfk
Knight of NI

A good idea, but not trivial to implement. Most LabVIEW hardware drivers link directly to an actual shared library, which in turn usually links, more or less directly, to many other shared libraries. This idea would require the immediate shared library to link to all those other resources dynamically at runtime, which, while possible, would be a major redesign of that software layer. It would, however, open the door to implementing a dynamic simulation driver instead, though that is an even bigger project than the aforementioned dynamic link layer.

Rolf Kalbermatter
My Blog
cbutcher
Trusted Enthusiast

Rolf - could this be avoided by linking by path rather than directly? If the situation you're describing is an abundance of VIs that contain little more than a Call Library Function Node, then those will give you a "working" VI even if the DLL/.so doesn't exist, provided they take a path input.

I'm not sure what the performance/functional implications of that kind of change would be - I guess this amounts to some sort of delay-loaded DLL versus linking directly in the compiled code, but perhaps you could comment?

It doesn't seem like the Idea here requires working (functional) VIs - only non-broken, compile-time-valid VIs. The alternative suggested implementation (empty VIs with fixed connector panes) would require a new set of code to be "written", but given the original driver, it sounds like it's only a Delete key away?

A set of "Conflicts" and "Supplies" clauses in the package would be enough to then handle installation etc (although this (alternate packages supplying the same package name) runs into a separate problem, which might make it an undesirable solution).


GCentral
rolfk
Knight of NI

No - shared libraries (DLLs) are ALWAYS dynamically loaded. The difference is when that happens. With a path configured in the Call Library Node, LabVIEW knows at load time of the VI which DLL to try to load and will do so. After that, no further attempts to load code are made (except for things like recompiling, where LabVIEW does a FreeLibrary() and then a LoadLibrary() again).

 

When passing the path from the diagram, things are a little more complex and there certainly is a bit of extra runtime overhead involved. First LabVIEW compares the path with the internally stored path. If it is the same it simply executes the function. If not the old path is unloaded (if not empty) and then the new path is loaded (if not empty).

 

So you have the extra path comparison, which is ALWAYS executed at every node invocation, and the potential delay of loading the DLL (hierarchy). The loading causes a noticeable (measurable) delay during the first execution; the comparison alone is for most purposes pretty much negligible (unless you are among the small group of people who get upset about 100 ns delays in their code).
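The load-on-first-call behavior Rolf describes for a path-fed Call Library Node can be sketched with Python's `ctypes` as a stand-in for what LabVIEW does internally. The caching logic is an illustration of the mechanism, not LabVIEW's actual implementation, and the `abs`-in-libc example assumes a POSIX-like system where the C library is discoverable:

```python
import ctypes
import ctypes.util

class LazyLibraryNode:
    """Mimics a Call Library Node fed a path from the diagram:
    the library is loaded on first call, and reloaded only when
    the supplied path differs from the cached one."""
    def __init__(self):
        self._path = None
        self._lib = None

    def call(self, path, func, *args):
        if path != self._path:              # the per-invocation path comparison
            self._path = path
            self._lib = ctypes.CDLL(path)   # first call pays the load cost
            # (LabVIEW would also free the previously loaded library here)
        return getattr(self._lib, func)(*args)

# Demo against the C library's abs(), whose int->int signature
# matches ctypes defaults:
libc_path = ctypes.util.find_library("c")   # e.g. "libc.so.6" on Linux
node = LazyLibraryNode()
result = node.call(libc_path, "abs", -5)    # first call: loads the library
result2 = node.call(libc_path, "abs", -7)   # same path: only the comparison
```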

 

That said, changing even a simple driver to work this way would be a major effort. Changing the VIs alone is just the first step in a long series of work items, and not even the one requiring the biggest effort. Code testing, packaging, testing again, creating an API-only installer option, testing yet again - that is a lot of work, and work that is very difficult to sell to a manager.

 

Personally I think it would be easier to do that decoupling in the typical shared library wrapper that NI drivers almost always use anyway. This limits the modification to that one DLL instead of hundreds of LabVIEW VIs that would all have to be modified. It's still a lot of work, but more centralized than trying to change things at the LabVIEW VI driver level.

 

But it's a hard sell to management either way.

Rolf Kalbermatter
My Blog
GuenterMueller
Active Participant

Excellent answer, Rolf.
Let me add to your second paragraph's final phrase "If not the old path is unloaded (if not empty) and then the new path is loaded (if not empty).": I use the method of specifying the path to the DLL on a block diagram also in reentrant VIs. Each reentrant instance might link to a different DLL. (These DLLs are actually APIs for different products.)
Rolf: You said "... unloading the path ..." actually seems to mean that just the path itself in LabVIEW gets unloaded. As to what I know, the DLL itself is not affected and remains loaded.

rolfk
Knight of NI
GuenterMueller wrote:

Rolf: you said "... unloading the path ...", which seems to suggest that just the path itself in LabVIEW gets unloaded. As far as I know, the DLL itself is not affected and remains loaded.


Of course - that was an inaccuracy in wording. Unloading a path is in itself a pretty nonsensical notion; I meant the DLL the path refers to. The Call Library Node will basically free its stored HINSTANCE for the DLL. This decrements the DLL's load count, and if nobody else holds a reference to that DLL, it WILL result in the DLL being unloaded. The catch is that if you have multiple instances of Call Library Nodes referencing the same DLL, each and every one of them must free its handle before the DLL is actually unloaded!

 

This can be cumbersome, especially when you resort to reentrant VIs with each instance in fact referencing its own copy of the (possibly same) DLL. Yet another reason to do the actual dynamic loading in the wrapper DLL rather than in the LabVIEW VI, if you ever intend to do real dynamic referencing as meant in this suggestion. Instance-specific DLL loading is a very different animal and quickly turns into a real nightmare. The only way I would consider it is by wrapping everything in classes and having each class instance manage its respective shared library name.
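The load-count bookkeeping discussed above (each Call Library Node holding its own handle, the OS unloading the DLL only when the last handle is freed) can be modeled with a small reference-count table. This is an illustration of the LoadLibrary()/FreeLibrary() semantics, not real loader code:

```python
class LibraryRegistry:
    """Models OS-level load counting: each load of a DLL increments a
    per-path count, each free decrements it, and the DLL is only
    truly unloaded when the count reaches zero."""
    def __init__(self):
        self.counts = {}

    def load(self, path):
        self.counts[path] = self.counts.get(path, 0) + 1
        return path  # stands in for an HINSTANCE

    def free(self, path):
        self.counts[path] -= 1
        if self.counts[path] == 0:
            del self.counts[path]   # the actual unload happens here
            return True
        return False                # other holders remain; DLL stays loaded

registry = LibraryRegistry()
# Two reentrant VI instances each take a handle to the same DLL:
h1 = registry.load("driver.dll")
h2 = registry.load("driver.dll")
first_freed_unloads = registry.free(h1)   # False: second handle still open
second_freed_unloads = registry.free(h2)  # True: last reference gone
```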

Rolf Kalbermatter
My Blog
User002
Not applicable

One alternative I've thought about, which someone could potentially implement entirely in G code:

 

You create a parent interface class whose API mimics DAQmx call for call. This is a little tough with things like property nodes, but most of the API could be done without much fuss. Then you write a child class. Since development-environment support for the DAQmx API is relatively stable, I don't think you'd need more than one child class. It would simply be the thinnest wrapper, an indirection layer - or really a misdirection layer, so you can pull out the actual DAQmx calls when the driver isn't there by running with the parent object instead of the child. Classic dependency injection, right?
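The parent/child indirection described above is ordinary dependency injection. In text-language terms it looks something like this (Python as a stand-in for G; the class and method names are hypothetical, only loosely mimicking a small slice of a DAQmx-like API):

```python
class SimulatedDAQ:
    """Parent/interface class: no-op stand-ins for the real driver calls."""
    def create_task(self, name):
        return name                 # stands in for a task reference
    def read(self, task, samples):
        return [0.0] * samples      # canned data when no driver is installed

class RealDAQ(SimulatedDAQ):
    """Thinnest-wrapper child; on a deployed machine this would forward
    to the actual driver API. Sketched here only as a placeholder."""
    def read(self, task, samples):
        raise NotImplementedError("would call the real driver here")

def acquire(daq, samples=8):
    """Application code depends only on the injected object, never on
    the driver directly - swap the object, not the code."""
    task = daq.create_task("Dev1/ai0")
    return daq.read(task, samples)

# Development machine: run with the parent (simulation) object.
data = acquire(SimulatedDAQ())
# Deployed machine: acquire(RealDAQ()) would hit real hardware instead.
```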

 

Pros: not needing to write a new child for each driver version (since LabVIEW is already good about moving DAQmx code between DAQmx versions).

 

Cons: you still need to swap out the actual class being run in your code. If instead of classes you wanted to go with something like Conditional Disable structures, you could - but LabVIEW would probably still bug you with a warning every time you load the project. Another con is needing this extra package at all.

 

And... after typing this all out - the major reason this won't work is that without DAQmx you can't really use any of its typedefs. You can't use the DAQmx Task reference if the typedef isn't there, and so on. You can't build a copycat higher-level API because, even if you're not trying to load the VIs, you still depend on the same typedefs.

You could work around this by encapsulating each datatype in another class, like the main API, so you only touch the real datatypes in "run real DAQmx" mode. But by the time you've wrapped every datatype that matters in its own encapsulation... it's not really practical. You end up with parent and child types for every kind of DAQmx reference and constant, and it's no longer easy to manage or swap out. And all this pain lands on the developer, in addition to losing things like DAQmx constants autodetecting your hardware for you. Not really any better than just installing the right driver and forgetting about it.

 

Am I wrong here?

 

Thanks for reading this edition of Ideas That Are Bad But I Didn't Realize It Until I Typed Them Out. I'll post it anyway so we don't leave this exercise to the reader.