Continuous Integration


Any way to avoid closing LabVIEW.exe between jobs


My (Windows) CI pipeline kills LabVIEW.exe between each job, as seemingly the only safe way to guarantee no VIs linger in memory between jobs. It takes about 90 seconds for LabVIEW to reopen. Is there any better way to guarantee that whatever the current state of LabVIEW, I can return it to a "clean" condition?


The best I can do with the VI server operations I'm aware of:


1. Close all projects in memory from the "Project.Projects[]" property node

2. Use the "App.AllVIs" property node to detect any VIs in the default application instance (except of course the ones doing the cleaning)

3. Only kill LabVIEW.exe if unexpected VIs are in memory (is there a way to remove these from memory programmatically?)
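The decision logic in the three steps above could be sketched like this (hypothetical Python stand-in, not LabVIEW code; the cleanup VI names are made up, and in LabVIEW the list would come from the App.AllVIs property node after closing everything in Project.Projects[]):

```python
# Hypothetical sketch of the "clean or kill" decision from the steps above.
CLEANUP_VIS = {"CloseProjects.vi", "CheckMemory.vi"}  # the VIs doing the cleaning

def must_kill_labview(vis_in_memory):
    """True if any VI other than the cleanup VIs is still in memory."""
    unexpected = set(vis_in_memory) - CLEANUP_VIS
    return bool(unexpected)

# Only the cleanup VIs remain: safe to reuse the LabVIEW instance.
assert must_kill_labview(["CloseProjects.vi", "CheckMemory.vi"]) is False
# A job left Main.vi loaded: fall back to killing LabVIEW.exe.
assert must_kill_labview(["CloseProjects.vi", "Main.vi"]) is True
```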

Message 1 of 5

What do you expect from a "better" solution? As far as I'm aware, there is no real alternative to closing LabVIEW completely to make sure everything is out of memory. I do it between every CI/CD stage and job, and even then the time "lost" is small compared to the time some of the build or test jobs take to finish. I'm not sure it's worth the effort to build a workaround for an already working solution. The only thing that surprises me a bit is the time it takes to restart LabVIEW on your system; my build system is by far not the fastest, but it only takes ~15-20s.

Message 2 of 5

It depends a bit on the scripts you are running.


I do not close LabVIEW in between tasks and it's never caused me a major issue; the code is not held open. I do have issues with the cache when switching branches, but I don't think closing LabVIEW would change that.


I'm generally calling app builder or VI tester through their APIs though and not directly loading the project.


I would try without closing and identify whether a particular step is causing the problem.


I like the idea of an unload-everything script though - please share if you do get this working.

James Mc
CLA and cRIO Fanatic
Message 3 of 5
Accepted by avogadro5

Use Docker containers? 😉

If you'd like help setting this up, I believe GDevCon is still selling tickets 😛


More seriously, this is possible and you can build useful things in a Docker container, although whether the time investment setting it up exceeds the time saved using them is a harder question to answer (and might depend on how much of others' setup code you can directly reuse).


My containers (as of now) take about a minute to build a PPL, of which ~30-40s is waiting for an "agent" to be allocated, and some time is spent installing dependencies from NIPKGs (currently I've seen ~10s, but I haven't built things with vast collections of dependencies yet, so that's for only ~1-2 packages installed; hopefully install time isn't linear in the number of packages...).
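Rough arithmetic on that breakdown (the midpoint of the quoted agent-wait range is my assumption) suggests most of the minute is overhead rather than the build itself:

```python
# Back-of-envelope split of the ~1 minute per PPL build quoted above.
total_s = 60
agent_wait_s = 35       # midpoint of the quoted ~30-40s "agent" allocation wait
nipkg_install_s = 10    # quoted ~10s for ~1-2 NIPKG dependencies
build_s = total_s - agent_wait_s - nipkg_install_s
assert build_s == 15    # only ~15s of the minute is the build itself
```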

I'm currently running directly on a host OS (not a remote or virtualized server) with a new 12th-gen Intel chip, but I saw similar results on an older computer with an 8th-gen (maybe?) chip, before some of the things I'm currently working on (NIPKGs and "elastic agents", mainly).


I think that this time could be reduced by leaving some "agents" running, but then I have the same issue you describe (how to get a clean state?), so throwing the container away seems faster/easier.

There's some additional delay in that process that I don't think is directly measured: it doesn't seem like I'm building 12 libraries a minute, which I'd expect from 12 parallel containers and ~1-minute run times. So I guess either the "throwing away" part takes time that isn't counted in that minute, or the orchestrator has some other delay I haven't tuned yet.
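That throughput gap can be illustrated with a quick calculation; the observed rate below is a made-up placeholder (the post doesn't give one), purely to show how the hidden per-build overhead falls out of the numbers:

```python
# Illustrating the gap between ideal and observed container throughput.
containers = 12
run_min_per_build = 1.0                 # ~1 minute measured inside a container
ideal_builds_per_min = containers / run_min_per_build

observed_builds_per_min = 6.0           # hypothetical observed rate
# Effective wall-clock minutes per build, including unmeasured teardown
# and orchestrator time:
effective_min = containers / observed_builds_per_min
hidden_overhead_min = effective_min - run_min_per_build
assert ideal_builds_per_min == 12.0
assert hidden_overhead_min == 1.0       # a full unmeasured minute per build
```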


If you do use containers (or VMs, probably), then I think a significant gain comes from the hard drive's speed. Some vague testing I did a year or two ago saw much better Windows container start-up from an SSD (similar to the times described above), versus more than twice that time from a spinning disk.

Unfortunately there's no getting away (as far as I know/can see) from the fact that an image containing Windows and a moderately complete installation of LabVIEW is quite large - mine are something like 20GB with cRIO and 14GB without (LabVIEW 32/64-bit + DAQmx, not many other drivers I think, and only one version of LabVIEW per image, so separate images for 32-bit and 64-bit).


It might be that this could be trimmed by more careful selection of initial packages (although I don't use the --install-recommended flag, or whatever it's called, so I'm not sure how many can be removed), and perhaps by removing things after they're no longer needed (I use NI's command-line tool to set up the image and install G-CLI, but after that I only use James' (much better) G-CLI tool, so maybe I could uninstall the NI command-line tool? I doubt it saves much, though...).


I used the windows/servercore image as a base (~5GB as I read now from `docker image ls`).

I'm not sure if the nanoserver image can be used, but I feel like the (launch time) savings might be small (launching "cmd" in nanoserver takes ~1s, servercore ~2s, and powershell in my LabVIEW 32-bit + DAQmx image ~4s), and the number of (not-so-edge) cases where LabVIEW needs more might be larger for nanoserver. Quoting from MS's documentation:

Windows Server Core vs Nanoserver

Windows Server Core and Nanoserver are the most common base images to target. The key difference between these images is that Nanoserver has a significantly smaller API surface. PowerShell, WMI, and the Windows servicing stack are absent from the Nanoserver image.


so you'd want to add at least PowerShell (or PowerShell Core, pwsh.exe).

The nanoserver image is much smaller though... (262MB)

Message 4 of 5

Seems like the only "better" option is a container, which seems difficult, or at least time-consuming, to set up.


I think the fastest cleanup is the VI Server idea: do our best to verify nothing unexpected is in memory, and kill LabVIEW otherwise. If LabVIEW usually stays open between jobs, I don't see how a container could be faster.

I do know 90 seconds is a long time; my workers are very slow VMs.

Message 5 of 5