I'm an avid LabVIEW user and I'm running into some odd behavior with the Control and Simulation Toolkit. One of our projects is a heavy simulation of an electric system, with several integrators available to the end user. It's a stiff problem with long simulation times (5,000 to 86,000 seconds). The problem is how quickly the application's memory usage grows across different scenarios:
Before simulation: 75 MB in Task Manager, using Radau 5
After simulation (and plotting): 285 MB
*Now switching to Radau 9 without restarting the application*
After simulation (and plotting): 659 MB!
*Now switching to BDF*
After simulation (and plotting): 912 MB!
*Still using BDF, but reducing the number of seconds*
After simulation (and plotting): 669 MB
*Switching to Radau 9 again and killing the async VI (Abort VI and Close Reference) which holds the simulation loop*: 220 MB
*Switching to Radau 5 and reducing the number of seconds*: 110 MB
1) I'm using the same buffer to store the result data between simulations.
2) Changing DBL to SGL reduced memory usage by only 20%. Sadly, the simulation functions don't accept the SGL datatype, so everything is locked to DBL.
3) The async VI that holds the simulation loop is not reentrant, and the Open VI Reference call that launches it is configured for "Call and Collect" mode.
Is there anything else I can do to reduce the memory usage?
PS: I can't share the source code; it's a huge project.
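For scale, here is a rough back-of-envelope for how big a DBL result buffer gets at these durations. The step size (10 ms) and channel count (20) are illustrative assumptions, not numbers from the post; only the 8-byte DBL width and the 5,000 to 86,000 second range come from the original:

```python
def buffer_mb(sim_seconds, step_s=0.01, channels=20, bytes_per_sample=8):
    """Rough size in MB of a DBL history buffer for one simulation run.

    step_s and channels are assumed values for illustration only.
    """
    samples = sim_seconds / step_s          # accepted steps logged
    return samples * channels * bytes_per_sample / 1024**2

for t in (5_000, 86_000):
    print(f"{t:>6} s -> {buffer_mb(t):,.1f} MB")
```

Even at a modest 10 ms step, a single long run lands in the same hundreds-of-MB range as the Task Manager readings above, so keeping more than one such buffer alive (one per integrator, say) would explain the growth.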
I understand your simulation is very long, but what about the smallest time constant of your system? Based on that smallest time constant, do you think it is possible to increase the minimum and maximum step sizes? Usually, plants/systems that require long simulation periods have slower dynamics. Do you think this is your case? (I know electric systems usually have faster dynamics.)
Can you describe in more detail what type of electric system you have?
Simulation time isn't really the problem: all the integrators offered are suitable for a stiff problem, and that amount of time is intended. Also, only a small fraction of the signals are linear, so changing the min/max step size wouldn't help here.
To me, it looks like every integrator holds an internal buffer with the simulation results, and that buffer only gets deallocated when I restart the simulation using the same integrator or unload the VI that holds the simulation loop.
Fortunately, I managed to reduce some memory allocation by removing a few buffered data points from the simulation plots, but this annoying behavior persists.
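The retention pattern suspected above can be sketched as a toy multistep solver in Python. This is a hypothetical model of the symptom, not the actual Control and Simulation internals: a solver instance that appends every accepted step to an internal history and only frees it on re-initialization, plus a bounded variant that keeps only the few past points a multistep (BDF-like) method actually needs:

```python
from collections import deque

class ToySolver:
    """Toy solver whose history grows until the instance is reset --
    analogous to an integrator buffer that is only released on a re-run
    with the same method or on unloading the hosting VI."""

    def __init__(self):
        self.history = []              # every accepted step is retained

    def step(self, t, y):
        self.history.append((t, y))    # nothing old is ever discarded
        return y                       # dummy integration result

    def reset(self):
        self.history.clear()           # memory reclaimed only here

class BoundedSolver(ToySolver):
    """Same toy solver, but a BDF-like method of order k only needs
    roughly its last k+1 points, so a bounded deque caps the history."""

    def __init__(self, order=5):
        self.history = deque(maxlen=order + 1)

unbounded = ToySolver()
bounded = BoundedSolver(order=5)
for i in range(100_000):
    unbounded.step(i * 0.01, 0.0)
    bounded.step(i * 0.01, 0.0)
print(len(unbounded.history))   # 100000 entries still held after the run
print(len(bounded.history))     # 6 -- only the points the method needs
unbounded.reset()
print(len(unbounded.history))   # 0 after re-initialization
```

If the toolkit's integrators behave like `ToySolver` here, that would match the observation that switching methods without restarting stacks several live buffers, while aborting the async VI (unloading the instance) releases them.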