
What setting stops a .NET DLL from being copied to the build folder?


@rolfk wrote:

2) The assembly is not in the GAC. LabVIEW copies the assembly into the same directory as the packed library, compiled DLL, or built executable (optionally the data directory) and stores the relative path in the compiled code; if you specify an absolute path for the assembly, it stores that absolute path instead. LabVIEW then attempts to load the assembly from that location, and if that fails it may try to load the assembly by name only, which lets Windows search the GAC or the process root directory, and that's it. After that, all standard .NET search locations have been exhausted and the broken arrow is the only remaining recourse.
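
For concreteness, the fallback order described in that quote amounts to roughly the sketch below. This is only an illustration of the described behaviour, not NI's actual code, and the function and parameter names are invented:

```cpp
#include <filesystem>
#include <optional>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical sketch of the search order described above for an assembly
// whose path was recorded at build time. nameOnlyDirs stands in for wherever
// a load by name alone would look (the GAC, the process root directory).
std::optional<fs::path> ResolveAssembly(const fs::path& recordedPath,
                                        const fs::path& buildDir,
                                        const std::vector<fs::path>& nameOnlyDirs)
{
    // 1. The path stored in the compiled code: absolute, or relative to the
    //    packed library / compiled DLL / built executable output directory.
    fs::path candidate = recordedPath.is_absolute() ? recordedPath
                                                    : buildDir / recordedPath;
    if (fs::exists(candidate))
        return candidate;

    // 2. Fall back to a load by file name only, which lets Windows probe the
    //    GAC and the process root directory.
    for (const auto& dir : nameOnlyDirs)
    {
        candidate = dir / recordedPath.filename();
        if (fs::exists(candidate))
            return candidate;
    }

    // 3. Nothing left to try: in LabVIEW this shows up as the broken arrow.
    return std::nullopt;
}
```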

 


It seems like the solution I had is this one (with the data directory replaced by whatever directory you'd like).

I guess I'm missing a key point, but why is it a problem if multiple PPLs can reference the same "installed" DLL? Even if that copy must be separate from the source DLL, you still end up with only one copy.

 

One obvious problem is that you can't move the (installed/copied) DLL, but that didn't sound like a goal for the OP.

 

Another potential problem is the one you highlighted earlier in your reply:


@rolfk wrote:

I would strongly suggest that an assembly that needs to be accessed by two or more different packed libraries, let alone 10, is a VERY strong candidate for an assembly that absolutely needs to be located in the GAC. Either the assembly is specialized, and you should be able to wrap it with a single VI library, or it is universal, and then it should be in the GAC. A situation where you have a specialized assembly that needs to be interfaced from many different places in your code, without a single VI library wrapping its functions, is an absolute disaster waiting to happen whenever you need to modify that assembly.


This seems like a perfectly good reason to avoid doing this, but it doesn't (at least to me) seem like a direct problem with doing it, more like evidence that you may/will face issues down the line when you want to update the DLL.

 

Is it not possible to make backwards-compatible changes to a .NET assembly (e.g. add a function without recompiling callers)? I can imagine that might make you hate the idea, and it would definitely suggest the wrapper (LabVIEW) library you mentioned.
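
To put the wrapper idea in textual terms: in LabVIEW it would be a single wrapper VI library around the assembly, but the shape of the design can be sketched as a plain facade. The C++ below is an analogy only, and every name in it is invented:

```cpp
#include <string>

// Stand-in for the third-party .NET assembly's API (invented for illustration).
namespace ThirdPartyAssembly {
    inline double ReadTemperatureRaw(int channel) { return channel * 0.1; }
}

// The single wrapper that all ten packed libraries call. If the assembly's
// interface changes, only this one layer has to be touched; the callers keep
// the same stable signatures.
namespace SensorWrapper {
    inline double ReadTemperature(int channel) {
        return ThirdPartyAssembly::ReadTemperatureRaw(channel);
    }
}
```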


GCentral
Message 31 of 32

@cbutcher wrote:

Is it not possible to make backwards-compatible changes to a .NET assembly (e.g. add a function without recompiling callers)? I can imagine that might make you hate the idea, and it would definitely suggest the wrapper (LabVIEW) library you mentioned.


It's possible (usually), but it requires extreme care and discipline. You can't modify any existing method or property in any way. The temptation to just change that one parameter to be an enum instead of the uInt8 it used to be is often very strong. Also, adding an optional parameter to a method, while possible in pure .NET systems, is a no-go for interfacing to LabVIEW: LabVIEW wants a strongly typed interface, and if the signature of a method changes, LabVIEW requires a recompile of the code no matter what. It's a limitation compared to standard .NET semantics, but LabVIEW doesn't handle optional parameters the way .NET does and treats them as compile-time decisions, not runtime choices.
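
C++ default arguments make a compact analogy for that last point: the default value is baked into the caller at compile time, so changing the callee's signature forces every caller to be rebuilt. This is only an illustrative sketch with invented names, not the assembly under discussion:

```cpp
#include <iostream>

// "Optional" parameter: the default 500 is substituted at each call site by
// the compiler, not looked up at run time.
int ReadSensor(int channel, int timeoutMs = 500) {
    return channel * 1000 + timeoutMs;   // stand-in for real work
}

// Backwards-compatible evolution keeps existing signatures untouched and adds
// a new entry point instead of modifying the old one.
int ReadSensorFast(int channel) {
    return ReadSensor(channel, /*timeoutMs=*/50);
}

int main() {
    // Compiled as ReadSensor(3, 500): the 500 lives in this caller, so a new
    // signature or a new default in the callee means recompiling the caller.
    std::cout << ReadSensor(3) << '\n';
    std::cout << ReadSensorFast(3) << '\n';
    return 0;
}
```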

 

That's deeply ingrained in the execution model of LabVIEW, and most likely another of the many reasons why LabVIEW NXG took so long. Taking such basic assumptions out of a system and making them work differently throughout is a major feat to implement and to verify across every single part as intended.

 

There certainly are good reasons to do it the way LabVIEW does. It's a lot simpler to leave such decisions to the compile stage rather than having to be prepared for such changes at runtime. It's also definitely more performant not to have to check for such things every time you want to call a method. That may not be much of a concern nowadays, but back when LabVIEW was invented in the late '80s and early '90s, this would most likely have crippled LabVIEW in a serious way even on the high-end systems of that time, which were very resource-constrained compared to even a small single-board computer today.

 

A high-end Windows computer for LabVIEW in 1992, when the first LabVIEW for Windows version was released, contained a 386 CPU with a math coprocessor clocked at 33 MHz and had 8 MB of RAM. 16 MB of RAM was the maximum such a system could address, and it was considered a beast and too expensive for anyone except high-speed computing requirements.

Rolf Kalbermatter
My Blog
Message 32 of 32