I have an interesting problem that I can't seem to figure out. I have a cFP running LabVIEW Real-Time, which communicates with a panel PC over TCP/IP. The application runs fine for anywhere from thirty minutes to an hour, then it starts to slow down and lose messages. There are a few different loops in the RT application that handle IO, communication, and datalogging. I send the execution time of the main loop to the GUI, and I keep track of the number of messages sent and received. At first, and for at least twenty minutes, the loop speed is exactly right and I never lose a message. Then messages start to disappear and the loop execution time starts to jitter; this increases until the loop execution time is three to four times what it should be, and almost all of the TCP messages are lost.

It seemed like a weird memory leak, so I took out all the code handling the datalogging (although I calculated that I have at least a few months' worth of datalogging space in free memory), but the problem persisted. I then thought that maybe TCP messages were stacking up, so I reset the cFP and disconnected the GUI for an hour, which leaves the cFP just listening on a given port, but as soon as I plugged the GUI back in, the cFP had already slowed down. I slowed the main loop from 200 ms to 800 ms, to see if maybe I was pushing the processor too hard, but the problem was still there. I tried installing the code on another cFP, and it had the same problem. I reloaded the firmware; same thing. Finally, I installed another application I wrote that is almost exactly the same, with the same overall architecture and identical TCP communication and datalogging, and it ran fine, so it is definitely something in my application.
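For reference, the communication loop does roughly the following. LabVIEW is graphical, so this is just a textual sketch in Python; the host, port, message format, and timing constants are placeholders, not my actual values:

```python
import socket
import time

LOOP_PERIOD_S = 0.2                  # nominal 200 ms main-loop rate (later slowed to 800 ms)
HOST, PORT = "192.168.0.10", 6340    # placeholder address/port for the panel PC

sent = lost = 0

conn = socket.create_connection((HOST, PORT), timeout=2.0)
conn.settimeout(0.05)                # keep a slow GUI from blocking the loop

while True:
    t_start = time.monotonic()

    # ... read IO, update state ...

    # push a status message to the panel PC and count it
    try:
        conn.sendall(b"STATUS,...\n")
        sent += 1
    except socket.timeout:
        lost += 1

    # measure the actual iteration time; this is the number that starts to jitter
    elapsed = time.monotonic() - t_start
    # ... elapsed and the sent/lost counters go out in the next status message ...

    time.sleep(max(0.0, LOOP_PERIOD_S - elapsed))
```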
The really strange thing is that at first the cFP seems to keep up fine, and then after about half an hour, without anything changing, over the course of ten minutes or so the whole thing slows down dramatically and communication becomes almost completely unusable. I have used cFPs and LabVIEW Real-Time many times in the past and have never encountered this type of problem. I'm also a certified developer, so I feel confident that I know how to write LabVIEW code properly, and I don't think this is due to poor programming. The only thing I'm doing that I don't normally do is using globals in my RT application instead of functional globals, but I use globals in the similar application I mentioned, and it ran without incident for days. Also, because I have to keep track of data across power cycles, I occasionally update a config file, which I don't normally do, but, again, the alternate application did the same thing and worked fine.
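In case the distinction matters for anyone reading: a functional global funnels all access to shared state through one non-reentrant VI, while a plain global is just unprotected shared data. Roughly, in Python terms (purely illustrative; the real code is LabVIEW, and the names are made up):

```python
import threading

# Plain global: shared data that any loop can read or write at any time.
setpoint = 0.0

# Functional-global equivalent: a single gatekeeper that serializes access,
# like a non-reentrant VI holding state in an uninitialized shift register.
_lock = threading.Lock()
_state = {"setpoint": 0.0}

def setpoint_fgv(action, value=None):
    """action='set' stores a new value; action='get' returns the current one.
    Calls never overlap, so read-modify-write sequences stay atomic."""
    with _lock:
        if action == "set":
            _state["setpoint"] = value
        return _state["setpoint"]
```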
I am attaching the top-level VI, but I'm afraid it doesn't reveal much. I have almost everything compartmentalized into sub-VIs, as per NI's recommendations.
Can anyone think of any reason why an RT application running on a cFP, which runs fine at a given rate without any sign of thread starvation or the like, and without writing anything to memory that I can see, would then, seemingly out of nowhere and very consistently, suddenly just stop working?