2020 SP1 here. I basically just clear the compile cache every time I build a Windows or RT application of any substantial size. If I don't, there's probably a 90% chance the build will run for 15 minutes and then fail, telling me something is broken that isn't. I'll then clear the cache, start over, and in about 20 minutes have a built application. Opening the full source afterwards will probably take another 5 minutes to compile all the objects in the project.
Debugging problems and making changes quickly have become impossible, and that was one of LabVIEW's main benefits. I've looked into breaking the project up into packed project libraries (PPLs), but since many of my libraries are used on both Windows and RT, I'd need two separate sets of PPLs. Avoiding cross-linking into the wrong target is a pain that will probably be more effort than it's worth.
I can totally relate. It is so frustrating to build and deploy for hours just to test a one-minute change. As you mentioned, PPLs would be a nice solution if I could build PPLs for multiple targets and LabVIEW would load the correct one for the OS, just like the Call Library Function Node, where you can use ".*" instead of ".dll" or ".so" and the compiler figures it out.
Well, there is an undocumented feature of LabVIEW that lets you load a different VI based on the target it is opened in. I haven't tested this yet, but the idea is that it could load a different PPL based on the target. Of course this means building many PPLs, all named the same and placed in just the right place for each of my targets, then linking my MNU package files to them, updating all my code, and then testing whether dynamic dispatch between classes still works, or whether anything unexpected breaks.
I guess the feature isn't completely undocumented, here is something from NI.
I also second Hooovahh. We have automated our build/release process, and part of that pipeline is clearing both user and app builder caches.
Re. the resolution of pseudopaths to achieve per-target invocation of PPLs, here's the link to the forums post going into the details. Again, I second Hooovahh's sentiment - this will take a lot of work, so we haven't looked into it ourselves.
Edit: Too slow, got distracted before hitting the "Post" button 🙂
Thank you both, I will go through it in a moment! (: Here are a few things I have noticed regarding this error:
- Sometimes clearing the object cache is not enough, but a forced hierarchy recompile (Ctrl+Shift+click the Run arrow on the top-level VI) will solve the problem (though it takes a lot of time to recompile everything).
- The compiled code generated by a forced recompile (Ctrl+click the Run arrow) is not the same as the code generated by simply saving the VI. I don't know what the difference is, but the other day I saved a VI and the binary blob measured 7.8 kB, while after a forced recompile it measured 10.2 kB.
- When clearing the object cache, LabVIEW does not clear the cache for vi.lib, user.lib, or instr.lib. You can delete it manually here: c:\Program Files (x86)\National Instruments\LabVIEW 2020\VIObjCache\20.0.0r0\
- A lot of external libraries ship with their compiled code not separated from the source (the OpenG toolkit, for example) and need to be recompiled when opened for a different target. You can use this VI to build a small tool that recursively sets the separate-compiled-code flag: C:\Program Files (x86)\National Instruments\LabVIEW 2020\vi.lib\SourceOnly\Set Remove SourceOnly Tag.vi
- If you're interested, you can view the compiled object cache using a simple SQLite database viewer.
This thread turns out to be a valuable source of knowledge.
Here's a TOTD pointing to a forums thread about the nuances of recompiling, mass compiling and force compiling.
For Separating Compiled Code from Source, our blog post might be of interest to those not so familiar with the topic. I have long wanted to add more information about the specific cache locations, etc., so thank you for reminding me and sharing that information here.
I've also run into this same problem this past week in LabVIEW 2020 SP1. In my case it was using the cRIO WFM library (which uses a typedef for the FPGA VI reference). Since I have modified the FPGA VI, I had to update the typedef. Mass recompiling did not resolve the issue; instead I found that I had to open each of the library VIs and then disable and re-enable the auto-update from typedef option to force it to propagate.
Just ran into this with LabVIEW 2018 SP1. I've been working on this project for quite some time and never really had these kinds of problems. Today, trying to load the project, I got all kinds of deployment errors. The first was a VI that used an On First Call and String to IP / IP to String to evaluate its own IP address. Replacing either of these functions had no effect; recreating the VI from scratch made the error go away, only for a deployment error to appear on another VI that was calling a libc function through a Call Library Node.
Wiped the target and reinstalled everything including the firmware. No joy. Deleted the Compile Cache. Still nothing. Force compiled everything. No change.
Recreated the libc VI, and then it started to complain about a strict typedef enum control that I use in various places inside a lvlib. Changed it to a non-typedef, only to have it complain about the next strict typedef enum control elsewhere. It's so very frustrating, especially after working on this for about 3 years without such problems.
Hmmm, now it complains about the original typedef again! Something in this LabVIEW installation definitely seems thoroughly hosed. No idea why it suddenly started!
12 years this thread has been going on.
12 years without a solution or workaround from NI.
The method in this post still works for me when this issue rears its ugly head.
From my recollection, the root cause was oftentimes an inability to properly deploy typedefs, particularly when there were typedefs embedded in typedefs (i.e., a typedef cluster containing other typedefs).