I'm in charge of my company's data acquisition system and we've finally got everything running about how we want it to. The last issue we're having is that whenever the screen updates the program slows down measurably. I've got the program running in a loop that collects data, processes data, waits for user input, sends out commands to other hardware, and then repeats. The program is configured to display only one data point out of every 100 collected (64 channels, plus 5 on a chart and three other numeric gauge indicators). The effect this has is that while the program is not displaying data the loop executes in about 2-3 ms, but that time jumps to ~60 ms when it has to update the front panel.
My question is: how can I improve the performance of my code on a Windows XP system?
Is there some kind of indicator that I should use over another kind?
Does anyone have a guide on how to disable Windows background processes (the computer the DAQ system is on isn't running much, but I'm not a very advanced user in that area)?
Any other general tips and tricks for this kind of problem?
I know the best solution would be to move to a real time operating system, but that just isn't an option right now. Any suggestions on how to improve performance on the system I've got would be appreciated.
How exactly have you designed your application? Have you used the Producer/Consumer design pattern and taken advantage of buffering and multi-threading?
A real-time OS won't solve the issue. You don't seem to need any determinism in your application, do you?
Can you please elaborate on your data acquisition setup? What kind of signals are you acquiring (analog/digital)? What is your sampling rate? How are you architecting your application to only display one data point out of every 100 collected? Are the five data points on the chart and the three on the numeric gauge indicators included in the 64 channels you mention? How are you benchmarking your loop speed to come up with the 2-3ms and ~60ms numbers? What version of LabVIEW are you running this on?
Thank you for choosing National Instruments.
Some clarifications, as requested. We're acquiring 32 analog channels and 32 thermocouple channels. The gauges and chart indicators are just a way of displaying a subset of the data in a more obvious form. All channels are also displayed in numeric indicators. The program is set up to run a loop with the following DAQ scheme: calculate a timestamp (relative to the start of the program), acquire a single scan of the analog channels, acquire a single scan of the thermocouple channels, and then do other data processing tasks. There's no set data rate; we're trying to acquire the data as fast as the program will allow, so the 2-3 ms loop time is what we see in the output data file. One data point will occur at time 0.003, and the next would get the timestamp 0.006.
Every time the screen is updated, in the post-processing part, the time to execute the whole loop increases dramatically. In the data file we'd see a point recorded at time 0.006, and then the next one would be at time 0.066 (something to that effect). Since screen updating seems to be the cause of the slowdown I'm interested in what I can do to get that part to run as fast as possible.
We're running LabVIEW 7.1. To get the screen to update only occasionally, I keep a counter that is incremented each time through the loop. When the counter reaches 100, the post-processing step sends the current batch of data to the front panel indicators and resets the counter to 0. For all other counter values the data just goes into the output file and the screen indicators are left alone.
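For reference, the counter scheme described above looks roughly like this in text form. LabVIEW 7.1 itself is graphical, so this is only an illustrative Python sketch; `read_scan`, `log_to_file`, and `update_front_panel` are hypothetical stand-ins for the real DAQ read, file write, and indicator updates.

```python
# Sketch of the counter-based display throttle described above.
# The callables are stand-ins (hypothetical names) for the real
# DAQ/file/indicator operations.

DISPLAY_EVERY = 100   # front panel refreshes once per 100 scans


def run(n_scans, read_scan, log_to_file, update_front_panel):
    counter = 0
    for _ in range(n_scans):
        data = read_scan()            # acquire one scan of all channels
        log_to_file(data)             # every scan goes to the data file
        counter += 1
        if counter == DISPLAY_EVERY:  # only this branch touches the UI
            update_front_panel(data)
            counter = 0
```

The key point is that the expensive UI update sits in a branch that runs once per 100 iterations, while logging happens every iteration.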
Your strategy does not sound very good. If you want speed you should not use single-scan acquisition. And what is the point in updating indicators every 3 ms? If you want samples with accurate timing you should use continuous sampling, and pull data from the buffer at a rate of around 50 times per second. Remember, the human eye cannot detect fast changes. If I am not wrong, the frame rate of a motion picture is about 25 frames per second.
Coq is right on. The simplest solution would be to collect the samples into a shift register and only output them to the chart display once every half second or second, using modulo and case statements keyed off the loop iteration count. Also, single-point acquisition means that everything in your loop has to finish before it goes and looks for another data point, including updating the graph on screen. It would be better to request a quarter or half second's worth of data at a time, while still using the slower graph-update method. Let us know how that works out.
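The block-read idea above can be sketched like this. This is an illustrative Python outline, not LabVIEW; `fetch_block`, `log_to_file`, and `update_chart` are hypothetical stand-ins for a buffered hardware read, the file write, and the chart indicator, and the sample rate is an assumption.

```python
# Sketch of block reads: request many samples per loop iteration
# instead of a single scan. With a buffered (continuous) acquisition,
# the driver keeps filling its buffer while this loop processes data,
# so a slow UI update no longer creates gaps in the data.

SAMPLE_RATE = 1000      # Hz, assumed acquisition rate
BLOCK_SECONDS = 0.5     # ask for half a second of data per read


def acquire(n_blocks, fetch_block, log_to_file, update_chart):
    block_size = int(SAMPLE_RATE * BLOCK_SECONDS)
    for _ in range(n_blocks):
        samples = fetch_block(block_size)  # returns block_size samples
        log_to_file(samples)               # log every sample
        update_chart(samples[-1])          # display only the latest point
```

Note the display now updates at most twice per second, but every sample still reaches the output file.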
What are you doing with the extra data that you are not displaying on your front panel? Do you need to display the data to the front panel immediately? If the display is not absolutely time sensitive, I would like to bring up something Adnan mentioned earlier: have you tried using a Producer/Consumer architecture?
Thank you for choosing National Instruments.
We have been helping people with that type of issue for years and the best thing you can do is to post your code so we can review it.
If you do not want to post your code, then the best we can do is advise you to set up your architecture so that the operations are streamlined and optimized, by playing 20 questions.
So please post your code and let the volunteers take a look and give you solid suggestions.
Use continuous acquisition into queues that are read by the display and logging functions. The hardware will keep your acquisition going, and the display (maybe with deferred front-panel updates) and logging can cope with Windows.
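The Producer/Consumer pattern suggested throughout this thread can be sketched in text like this. Again, this is an illustrative Python outline rather than LabVIEW (where the same idea uses two loops and queue VIs); `read_scan` and `log_to_file` are hypothetical stand-ins for the hardware read and the file write.

```python
import queue
import threading

# Producer/consumer sketch: the acquisition loop only reads hardware
# and enqueues data; the slow work (logging, display) runs in a
# separate consumer thread, so a slow disk or UI update never stalls
# the acquisition.


def producer(read_scan, q, n_scans):
    for _ in range(n_scans):
        q.put(read_scan())   # fast: hand data off and keep acquiring
    q.put(None)              # sentinel: acquisition finished


def consumer(q, log_to_file):
    while True:
        item = q.get()
        if item is None:
            break
        log_to_file(item)    # slow work happens off the DAQ loop


def run_daq(read_scan, log_to_file, n_scans=1000):
    q = queue.Queue()
    t = threading.Thread(target=consumer, args=(q, log_to_file))
    t.start()
    producer(read_scan, q, n_scans)
    t.join()
```

The queue decouples the two loops: the producer's iteration time stays near the raw acquisition time regardless of how long the consumer takes on any one item.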