I have a rather complex application that uses 7 different cRIOs in a distributed control and measurement system: 1 main cRIO and 6 auxiliary cRIOs. Each auxiliary cRIO connects to the main cRIO via a network stream to transfer data back and forth, and the main cRIO connects to a PC to ultimately share the data from each cRIO with a server.
When the main cRIO is connected to the PC, memory leaks at a rate of about 40 MB per hour, sometimes more. I have stripped the code down to the bare minimum needed to run and transfer memory and CPU data, and I still see the leak. I have no dynamically allocated arrays, no unlimited-size queues or channels, and no other typical sources of memory leaks. I had been suspecting my own code all this time, until just this morning: I left my Real-Time application running over the weekend, but didn't connect it to my PC via network streams. Acquisition, control, system monitoring, and all other tasks were still executing, but the network stream was disconnected. I use a fixed-size, lossy queue to transfer data to the PC, so this queue just spun around and around while the network stream was in an error state.
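Since the LabVIEW diagram can't be shared, here is a minimal Python sketch of the fixed-size, lossy queue behavior described above (the depth of 25 and all names are my own assumptions, not the actual application): once the queue is full, each new write silently drops the oldest element, so a disconnected consumer cannot cause unbounded memory growth.

```python
from collections import deque

# A lossy, fixed-size queue: when full, the oldest element is
# silently dropped, so a stalled consumer cannot cause unbounded growth.
QUEUE_DEPTH = 25  # hypothetical depth; the thread mentions both 25 and 2000

queue = deque(maxlen=QUEUE_DEPTH)

# Producer keeps writing even though nothing is reading
# (i.e., the network stream is disconnected / in an error state).
for sample in range(10_000):
    queue.append(sample)  # oldest sample is discarded once the queue is full

# Memory stays bounded at QUEUE_DEPTH elements regardless of write count.
assert len(queue) == QUEUE_DEPTH
```

This matches the observed behavior: with only the lossy queue spinning and no stream attached, memory stays bounded.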
I expected to find the cRIO out of memory and its execution aborted, but I booted up my PC, connected to the cRIO via the network stream like I normally do, and voila! The cRIO was still running, and its available memory had only decreased by about 60 MB. It ran for almost 2 days without issue, whereas with the network stream connected it doesn't make it longer than 12 hours. This makes me think the issue is possibly a bug with NI Network Streams in Real-Time on the cRIO 9036 hardware (or some other specific attribute of my setup).
I have used Network Streams many times before without any issues like this, so I wanted to see if anyone else out there has had a similar problem. Unfortunately, I can't share my code, as it's highly proprietary to my company, but I can share screenshots of the memory data I collected if needed.
Thanks for your help!
P.S. - The first attached image shows the typical memory leak when the cRIO is connected. The second shows the system after running over the weekend. You may also notice the odd, highly periodic CPU usage spikes; I do not know whether these two issues are related. I am running a cRIO 9069, firmware 6.50f0, with all available software installed on this cRIO under LabVIEW 2018 SP1.
Is the data type you're putting into the stream fixed size? Do you have pre-allocate enabled for the streams? Definitely look for leaked references related to periodic activities near networking code.
I have tried it both ways, pre-allocated and allocate-as-needed. I have a cluster of some metadata and a string (binary data) that is variable length, but never longer than 1000 bytes. I have done significant testing to verify that this is in fact true: the total message length never exceeds 1024 bytes.
I have code that closes the network stream reference in the event of an error and tries to reopen the connection with a new reference. This virtually never happens, however; I've had tests run for as long as 6 hours without any disconnects, and I've never noticed one in Real-Time before either. I know that doesn't mean it never happens, but it definitely isn't frequent or periodic.
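The close-then-reopen pattern described above is worth double-checking, since failing to destroy the old endpoint reference before creating a new one is a classic slow reference leak. Here is a hedged Python sketch of the idea; the `StreamEndpoint` class and all names are hypothetical stand-ins for the network stream endpoint reference, not an NI API:

```python
class StreamEndpoint:
    """Hypothetical stand-in for a network-stream endpoint reference."""
    live = 0  # count of endpoints that were created but never destroyed

    def __init__(self):
        StreamEndpoint.live += 1
        self.closed = False

    def destroy(self):
        # Releases the reference; calling twice is harmless.
        if not self.closed:
            self.closed = True
            StreamEndpoint.live -= 1


def reconnect(old):
    # Destroy the old reference BEFORE creating a new one; skipping
    # this step on every error/retry cycle slowly leaks references.
    if old is not None:
        old.destroy()
    return StreamEndpoint()


ep = StreamEndpoint()
for _ in range(100):      # simulate 100 error/reconnect cycles
    ep = reconnect(ep)
ep.destroy()
assert StreamEndpoint.live == 0   # no leaked endpoint references
```

If the reconnect path in the real application ever skips the destroy step, each (even rare) error would strand one reference, though that alone would not explain a steady 40 MB/hour.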
For more information: I've tried both large (2000) and small (25) message buffer sizes. I've also pre-allocated memory with an initialized string of 10,000 bytes and gotten the same result. A large chunk of memory is used at the outset, but the leak still continues after the application initializes.
Has anyone ever seen anything like this related to driver or hardware issues or conflicts? I've used Network Streams many times before and never seen anything like this. Probably something stupid on my part...
I did some searching internally and I'm not seeing any documentation of memory leaks that are driver- or hardware-specific. It seems like you've done some good troubleshooting with buffer pre-allocation. It's difficult to troubleshoot internally without a reproducing case of your code, but if you're interested in diving into this further, I'd recommend opening a service request.
I agree with Lindsey; if you need to troubleshoot this more deeply, I recommend you open a Service Request. Without the code, it is very difficult to troubleshoot.
I have the same issue!
Service request submitted...
I expect the data type transmitted to be the thing to focus on, because if I only transfer the waveform, there is no leak. If I add the processed data (an array of clusters of (string + waveform + array of indicators)), the leak appears...
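One common workaround when a nested element type seems to be the trigger is to flatten the structure to a single byte string on the sender and rebuild it on the receiver, so the stream only ever carries a flat payload. A hedged Python sketch of that idea (the record layout, JSON encoding, and length-prefix framing are my own illustrative choices, not the poster's actual types):

```python
import json
import struct

# Hypothetical record mirroring "cluster of (string + waveform + array)".
record = {"name": "ch0", "waveform": [0.0, 0.5, 1.0], "flags": [1, 0, 1]}

# Sender: flatten the nested structure into one length-prefixed byte string,
# so the stream element type is plain bytes rather than a nested cluster.
payload = json.dumps(record).encode("utf-8")
frame = struct.pack(">I", len(payload)) + payload

# Receiver: strip the 4-byte big-endian length prefix, rebuild the structure.
(n,) = struct.unpack(">I", frame[:4])
restored = json.loads(frame[4:4 + n].decode("utf-8"))
assert restored == record
```

If the leak disappears when streaming the flattened bytes but returns with the nested type, that would strongly support the nested-cluster theory above.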
Hello, any news on this topic? I am facing the same issue. Funnily enough, it only happens when I connect to the RT target; when I simulate the RT side with a second LabVIEW EXE on my PC, there is no memory leak.