CPU usage increases indefinitely until crash

I have an application in which 32 channels of strain gauge data are read every 5 seconds over long periods of time (up to months).  The program works, but has proved to be unstable.  Sometimes it will run for a week, sometimes two, and then it will crash.  First it shows strange graphics artifacts on the front panel, then it eventually stops taking data, and finally it crashes completely; LabVIEW will not start up again properly until the computer is restarted.  This has been an ongoing problem for a while, and I have had a difficult time pinpointing the cause.
 
In my first attempts at this program I used a waveform chart to display 50,000 or so data points.  That proved problematic for the users because sometimes the data they wanted to look at had already scrolled past.  The channels they monitor are generally constant until a certain event occurs, at which point the signal may jump to another level.  The important thing is that the users can find these events for each channel while the acquisition is running.  My new strategy was to use an XY graph and display all the data as it comes in.  I have a feeling LabVIEW might be making extra copies of this data and eventually causing problems.
 
The program is written in LabVIEW 8.0 for Windows XP.  I can't seem to duplicate the crash in the LabVIEW development environment, but the users see the problem in the executable built from this program.  Any ideas?  Thanks!
Message 1 of 6
After looking at the program for about a minute, I cannot see anything that would cause an obvious crash, but that doesn't mean there are no problems. How big do the files get? They seem to grow without limit, so you are reading larger and larger files until you run out of memory. Not sure without further analysis.
 
 
Several things stick out though:
  • I would never place long while loops inside a frame of an event structure. It seems to defeat the purpose.
  • There are much better ways to scan every N seconds than polling the time every 5 ms. Look into the timed loop.
  • Do you really need to create a new DAQ task, take a reading, and destroy the task for every reading? Over and over again? Typically you would create the task once, read inside the loop, and destroy it at program close.
  • ...
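The task-lifecycle point can be sketched in textual form. This is a hypothetical Python analogue of the pattern, not a real DAQmx API (`DAQTask` is a made-up stand-in): create once, read many times inside the loop, destroy once at shutdown.

```python
import random

class DAQTask:
    """Hypothetical stand-in for a DAQmx task (creating one is expensive)."""
    def __init__(self, n_channels):
        self.n_channels = n_channels      # e.g. 32 strain-gauge channels
        self.open = True                  # creation allocates driver resources

    def read(self):
        if not self.open:
            raise RuntimeError("task already destroyed")
        return [random.random() for _ in range(self.n_channels)]

    def close(self):
        self.open = False                 # release driver resources exactly once

task = DAQTask(32)                          # create the task once, before the loop
readings = [task.read() for _ in range(3)]  # read repeatedly inside the loop
task.close()                                # destroy once, at program close
```

The point is that the expensive create/destroy pair runs once per program, not once per 5-second reading.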
 
 
Message 2 of 6
For a moment I thought it would be a great idea to recommend LabVIEW's Profiler and have LabVIEW find the VI with the increasing memory consumption.
Regrettably, the code doesn't contain many subVIs, so it looks like hard work to identify any suspicious code.
Guenter
Message 3 of 6

From what I see, in the while loop polling every 200 ms you reopen the file every time instead of keeping the refnum. Is that because you read and write from different places in the loops? LabVIEW should handle that without a problem.

Also, since these loops are inside the event structure and the panel is not locked, it may be that events keep registering but are never cleared. Could that fill your memory over weeks?

You shouldn't create a new task every time you take a DAQ reading.

An overall quick fix might be to call Request Deallocation at the end of the loop.

On a side note: the diagram is very large (2-3 times my screen), and it is very difficult to navigate. I don't see a reason why it is so large, especially since there is not much information on it.

-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
Message 4 of 6
(2,716 Views)
Christian wrote:

  • Do you really need to create a new DAQ task, take a reading, and destroy the task for every reading? Over and over again? Typically you would create the task once, read inside the loop, and destroy it at program close.
  • ...

I think that is the first thing to change.

LabVIEW keeps those resources around even though you destroy them. It's just safer that way.  I would expect at least 1 KB for each task.

Start and stop the acquisition in the loop, but create and destroy the task as few times as possible.

If you really want to make it better, go to a producer/consumer architecture.

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion, Knight of NI.
Message 5 of 6
Thank you all for your responses.  I made some of the changes suggested, but due to time constraints (I probably don't have time to switch to a completely different architecture) and other issues, I could not implement everything suggested.
     
altenbach:
The files do indeed grow without bound, at a rate of roughly 200 MB a month.  The computer being used has 4 GB of RAM and a Core 2 Duo processor, so I thought it could handle that for up to 3 months (the longest single run).  I guess that means every 5 seconds a large file has to be read into memory to display the data on the XY graph.  Unfortunately, I don't see a simple way around this.  If the user could look at a subset of the data in the top loop while the data continues to be written in the bottom loop, it would ease the load on the processor, but I am not sure how to create a movable window into the past data.  I am also not sure whether populating the table with all that data means LabVIEW has to keep a copy of it in memory for both the table and the graph; chances are that is the case.
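A movable window over the past data doesn't require re-reading the whole file every update: if you remember the byte offset where the last read ended, each update only has to read what was appended since. A minimal Python sketch of the idea (plain text file; the file name is made up):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "strain.txt")  # made-up file name

# Simulate the logger having appended two samples already.
with open(path, "w") as f:
    f.write("sample 1\nsample 2\n")

def read_new(path, offset):
    """Return the lines appended since byte `offset`, plus the new offset."""
    with open(path, "r") as f:
        f.seek(offset)                    # skip everything already displayed
        return f.readlines(), f.tell()

first, offset = read_new(path, 0)         # initial read picks up both lines

with open(path, "a") as f:                # logger appends one more sample
    f.write("sample 3\n")

second, offset = read_new(path, offset)   # later read sees only the new line
```

The same offset bookkeeping gives you a scroll-back window for free: seek to any remembered offset and read a fixed number of bytes rather than the whole file.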
     
As for the while loop in the event structure, I am not sure of a simpler way to build a user interface where you can start and stop the acquisition or do other things while the acquisition is running.  Is that so bad?  I initially tried a producer/consumer architecture and couldn't get it to work.  I don't have much formal LabVIEW training, so most of what I do is from examples.  I just could not figure out how to control the flow of the program, and I don't fully understand the VIs used in the design pattern examples (queue handlers, semaphores, etc.).
     
Time polling was used instead of a timed loop because the users initially wanted to be able to change the acquisition rate on the fly.  Sometimes they are interested in data once every second, usually once every 5 seconds, and occasionally once per minute.  That design requirement is also why I created and destroyed the task within the loop.  I thought the timed loop required a preset rate that could not be changed while the acquisition is running.  However, in the interest of time and to try to eliminate this bug, I did pull the task creation and destruction out of the loop.
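A variable acquisition rate doesn't require polling every 5 ms: one option is to re-read the desired period each time through the wait, sleeping in short slices so a rate change takes effect quickly. A rough Python sketch of that idea (timing scaled down so it runs in a fraction of a second; the `period` dict stands in for a front-panel control):

```python
import time

period = {"s": 0.05}                      # shared "control": the UI writes it, the loop reads it

def wait_one_period(period, slice_s=0.01):
    """Sleep in short slices so a rate change takes effect mid-wait."""
    deadline = time.monotonic() + period["s"]
    while time.monotonic() < deadline:
        time.sleep(slice_s)
        # If the user shortened the period mid-wait, finish sooner.
        deadline = min(deadline, time.monotonic() + period["s"])

timestamps = []
for i in range(3):
    timestamps.append(time.monotonic())   # stand-in for taking a reading
    if i == 1:
        period["s"] = 0.02                # user changes the rate on the fly
    wait_one_period(period)
```

The acquisition loop itself never restarts; only the wait between readings changes length.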
     
     
Gabi1:
I also changed the file reading so that the file is opened only once, outside the loop, rather than every iteration.  That worked fine.  As an attempt at a quick fix, I also put a Request Deallocation at the end of the loop.  As for the large diagram, I will try to improve in future programs (more subVIs, etc.).  I have always had a hard time keeping my programs on one screen without using ugly and confusing stacked sequence structures.
     
Ben:
Do you think this could easily be converted to a producer/consumer architecture, and if so, where are some good examples?  I couldn't find good examples where data is generated in one loop and displayed in another, only a general picture of the design pattern.  I can probably devote about one day to fixing this issue, and I am certainly not a LabVIEW pro.
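For what it's worth, the producer/consumer idea is just two independent loops joined by a queue: the acquisition loop enqueues each reading and never waits on the display, while the display loop dequeues at its own pace. A minimal Python sketch of the pattern (threads and `queue.Queue` standing in for LabVIEW's parallel loops and queue VIs; a `None` sentinel signals shutdown):

```python
import queue
import threading

data_q = queue.Queue()                    # the pipe between the two loops

def producer(n_samples):
    """Acquisition loop: enqueue each reading; never blocks on the display."""
    for i in range(n_samples):
        data_q.put([float(i)] * 32)       # stand-in for one 32-channel scan
    data_q.put(None)                      # sentinel: tell the consumer to stop

def consumer(out):
    """Display/logging loop: dequeue readings at its own pace."""
    while True:
        item = data_q.get()
        if item is None:
            break
        out.append(item)                  # stand-in for updating the graph/file

received = []
acq = threading.Thread(target=producer, args=(5,))
ui = threading.Thread(target=consumer, args=(received,))
acq.start(); ui.start()
acq.join(); ui.join()
```

The payoff is that a slow graph update can never stall the acquisition; the queue simply buffers readings until the display catches up.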
     
All:

One thought I had to reduce file size and processor overhead was to write data to a .txt file and a binary file in parallel, so that LabVIEW reads only the binary file but the user still gets the benefit of the .txt file for simpler post-processing.  In MATLAB (which I am much more comfortable with) you can easily save the same data as either text or binary, but it seems more convoluted in LabVIEW, and I couldn't get it to correctly read the binary data back from the "Write to Binary File" VI.  This may be a bit of a vague question, but if I wired the same data into both the text and binary write VIs, how do I read the binary data back correctly?  Suggestions on making a "data window" so the user could scroll back through only a portion of the file would also be appreciated, so that the entire file does not have to be read on every iteration: just the most recent portion, or, if desired, a window going back in time to search for an event that occurred before the window.
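On the binary read-back: the read must mirror the write exactly, i.e. the same element type (DBL = 8 bytes), element count, and byte order; with LabVIEW's binary file VIs, the defaults of big-endian byte order and a prepended array-size header are the usual reasons a naive read-back comes out scrambled. The round-trip idea, sketched in Python with the `struct` module (my own minimal file layout, not LabVIEW's):

```python
import os
import struct
import tempfile

d = tempfile.mkdtemp()
row = [1.5, -2.25, 100.0]                 # one scan (3 channels here for brevity)

# Write the same row twice: text for humans, binary for fast re-reads.
with open(os.path.join(d, "data.txt"), "a") as ft:
    ft.write("\t".join("%g" % v for v in row) + "\n")
with open(os.path.join(d, "data.bin"), "ab") as fb:
    fb.write(struct.pack("<%dd" % len(row), *row))  # little-endian float64s, no size header

# Read the binary back: element size (8 bytes) and byte order must match the write.
with open(os.path.join(d, "data.bin"), "rb") as fb:
    raw = fb.read()
values = list(struct.unpack("<%dd" % (len(raw) // 8), raw))
```

Because every scan has a fixed byte size, seeking straight to scan N is just `seek(N * 8 * n_channels)`, which is what makes the "data window" cheap with a binary file.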
     
Thanks a lot!
Message 6 of 6