LabVIEW


VI is running very slowly

Solved!

Hi Everyone,

 

Hope you all are having a good week.  I know it's sort of a shot in the dark to post my code and ask for help, but the truth of the matter is that I'm a novice LabVIEW user, and as a community this forum might be able to help me out or point me in the right direction in way less time than it would have taken me on my own.  So enough of that and down to the fun stuff:

 

The Goal:

1.  In a broad sense, I want to read in approximately 10-20 minutes of data sampled at 30 kHz in TDMS format.  There are 5 channels in total.

1A.  Filter the data from all 5 channels with high-pass and low-pass filters.

1B.  Take the standard deviation of the first chunk from each of the five channels and divide every chunk read in by this standard deviation to normalize them.

1C.  Add the first three channels together to increase signal-to-noise and find the peak locations.

1D.  Use these peak locations to grab the peak amplitudes from all five channels and write them to a text file.

1E.  Finally, upon having the data saved away as a text file for current and future use, plot it and analyze the peak amplitude information.  (A rough text-language sketch of the per-chunk processing follows this list.)
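To make the plan concrete, here is a rough per-chunk sketch of steps 1A-1D in Python/NumPy terms (not the actual LabVIEW VIs; the filter order, cutoff frequencies, and peak threshold below are placeholder assumptions, and "chunk" is a hypothetical 5-row-by-N-sample float array):

import numpy as np
from scipy import signal

fs = 30000.0  # sample rate in Hz
# 1A: placeholder high-pass and low-pass designs -- substitute your own corners
sos_hp = signal.butter(4, 300.0, btype="highpass", fs=fs, output="sos")
sos_lp = signal.butter(4, 3000.0, btype="lowpass", fs=fs, output="sos")

def process_chunk(chunk, norm_std=None):
    # 1A: high-pass then low-pass filter every channel
    filtered = signal.sosfiltfilt(sos_hp, chunk, axis=1)
    filtered = signal.sosfiltfilt(sos_lp, filtered, axis=1)
    # 1B: normalize by the per-channel standard deviation of the first chunk
    if norm_std is None:
        norm_std = filtered.std(axis=1, keepdims=True)
    normalized = filtered / norm_std
    # 1C: sum the first three channels to raise SNR, then locate peaks
    summed = normalized[0:3].sum(axis=0)
    peak_idx, _ = signal.find_peaks(summed, height=4.0)  # placeholder threshold
    # 1D: grab the amplitudes of all five channels at those peak locations
    peak_amplitudes = normalized[:, peak_idx]
    return peak_idx, peak_amplitudes, norm_std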

The Attempt:

 

As you can see, I have attached two files: Calibration_Routine, which calls find_peaks.  The idea is that Calibration_Routine opens the TDMS file, reads in the data one chunk at a time (750,000 lines at a time before errors), and passes the data on to find_peaks, which, as the name implies, finds the peaks and returns them to Calibration_Routine so that they may be written to a text file.  (A sketch of that loop structure follows.)
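In text-language terms, the structure I'm going for looks roughly like this (a sketch only: read_tdms_chunk is a hypothetical stand-in for the TDMS Read step, and process_chunk is the per-chunk processing sketched above):

import numpy as np

CHUNK_SIZE = 750_000   # samples per read, matching the current VI
N_CHANNELS = 5

def read_tdms_chunk(offset, length):
    # Hypothetical stand-in for the TDMS Read step -- replace with the real
    # file read; here it just returns random data so the sketch runs.
    return np.random.randn(N_CHANNELS, length)

def calibration_routine(total_samples, out_path="peaks.txt"):
    offset = 0
    norm_std = None          # std of the first chunk, reused for later chunks
    with open(out_path, "w") as out:
        while offset < total_samples:
            length = min(CHUNK_SIZE, total_samples - offset)
            chunk = read_tdms_chunk(offset, length)
            peak_idx, peak_amps, norm_std = process_chunk(chunk, norm_std)
            for i, idx in enumerate(peak_idx):
                row = "\t".join(f"{a:.6f}" for a in peak_amps[:, i])
                out.write(f"{offset + idx}\t{row}\n")
            offset += length  # nothing from the previous chunk stays in memory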

 

The problem:

So far the code seems to function correctly, but once it gets up to about 4.5 million lines read it becomes incredibly slow, without producing any errors.

 

What I'm asking from you:

 

As a newbie to LabVIEW, I understand that there is probably a lot of room for optimization in my code.  My suspicion is that there is some hidden memory buffer that is filling up and clogging the whole routine.

 

1.  How do I go about investigating memory issues?

2.  In a general sense is there a better way to re-write my code?

3.  I realize this one is a stretch, but if you see any obvious problems with my code, what are they, and would you mind pointing me to some resources so that I may fix them?

 

As I post this, my goal is to work my way through this document:

 

http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/vi_execution_speed/

 

I am hoping that between the gracious help of the members here and my own slow but steady reading, I can get this thing to work.

 

I greatly appreciate you taking the time to even look at my post.  All the best,

 

-Joe

 

 

 


Number 1: You don't need to worry about investigating whether you will have memory issues -- you WILL have memory issues. 20 minutes of sampling at a 30 kHz rate is 36 million samples per channel.
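For scale (assuming the samples end up as 8-byte DBLs, which is typical once they pass through the analysis VIs):

30,000 samples/s x 60 s/min x 20 min = 36,000,000 samples per channel
36,000,000 x 5 channels = 180,000,000 samples
180,000,000 x 8 bytes ≈ 1.44 GB

and every extra in-memory copy of the full data set adds another ~1.4 GB.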

 

Number 2: Get rid of the express VIs and the evil purple wires. They are inefficient, and using them too much can make you sterile.

 

Number 3: After saving the data to disk, I would process it piecewise at the end.

 

Number 4: TDMS is optimized to get data onto disk as quickly as possible -- not necessarily for reading it back in...

 

Mike...

 

Ok, maybe not sterile, but they are very bad with large amounts of data.


Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

For help with grief and grieving.
Solution accepted by topic author Joe_Lyons_flowcytometry

Update:

 

I figured out the reason it was clogging up.  I was trying to convert my dynamic data (blue wire) to an array inside of a loop.  Once I moved this conversion outside of the loop, the VI can get through 20 minutes of data in a matter of minutes.  Not too bad IMO.  (A text-language analogy of the before/after is sketched below.)
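For anyone who hits the same thing, here is a rough Python analogy of the mistake (not the actual LabVIEW diagram): growing/converting the array on every loop iteration instead of once outside the loop.

import numpy as np

chunks = [np.random.randn(750_000) for _ in range(10)]  # stand-in chunk data

# Slow pattern (what the diagram was effectively doing): converting/growing
# the array inside the loop, so every pass re-allocates and copies.
slow = np.empty(0)
for c in chunks:
    slow = np.append(slow, c)        # total copying grows as the array grows

# Fast pattern (conversion moved outside the loop): collect the pieces,
# then convert to one array once at the end.
collected = []
for c in chunks:
    collected.append(c)
fast = np.concatenate(collected)     # single allocation and copy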

 

The downside is that it is writing the data in a way I didn't expect, but at least that part can be tested now 🙂  If there's interest, I can re-post the fixed code.


Well, I don't know if it's enough to get you over the hump, but here are a couple of things:

 

1A could be performed by a single band-pass filter.

1C could be done channel by channel: add your waveforms for the first 3 channels, find your peak locations, then pull those values out channel by channel.  This probably seems like the long way around, but it'd work faster than trying to rely on virtual memory or whatever is going on behind the scenes.  (A quick sketch of both ideas is below.)
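A quick Python/SciPy sketch of those two suggestions (the band-pass corners, filter order, and random stand-in data are placeholder assumptions, not values from the attached VIs):

import numpy as np
from scipy import signal

fs = 30000.0
# 1A as one filter: a single band-pass instead of separate high- and low-pass
sos = signal.butter(4, [300.0, 3000.0], btype="bandpass", fs=fs, output="sos")

data = np.random.randn(5, 750_000)         # stand-in for one 5-channel chunk
filtered = signal.sosfiltfilt(sos, data, axis=1)

# 1C: find peak locations once on the sum of the first 3 channels, then pull
# those same sample indices out of every channel.
summed = filtered[0:3].sum(axis=0)
peak_idx, _ = signal.find_peaks(summed)
peaks_per_channel = filtered[:, peak_idx]  # shape: (5, number_of_peaks)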

 

You can use Tools > Profile > Performance and Memory to find out which VIs are taking so long.



This avatar was scraped from an instance of good public spending: http://apod.nasa.gov

Thanks qzerror!  Those are good suggestions.
