How to increase VI run time for Data Analysis?

 
 
I am currently having trouble getting this program to run quickly. I was wondering if someone could look at this VI and tell me what is wrong. I read in a file, compare that time series with another time series, calculate a value, and save the result. This loop must be executed 999 times, and the process takes several hours to run one file, so I was wondering if there are problems in the loops I have set up. Any help will be greatly appreciated. Thanks.
Message 1 of 7

There is no way to tell where the slowdown occurs unless you also include the missing subVIs.

Where is the "other time series" coming from?

Since you don't have the path wired, it will prompt you for all files. Is that right? Can't you process an entire list of files unattended instead? It seems silly to occupy an operator for hours selecting 999 files in a row. How are you running this? Is it a subVI in a top-level program?

Overall, your code looks overly complicated, but (except for the missing subVIs) it should run pretty fast. How big are the arrays? Since you only seem to operate on the first ~100 elements, make sure you don't drag huge files into this. 😉

Please attach a full set of VIs, some example datafiles, and tell us how it is supposed to be used.

Message Edited by altenbach on 07-14-2007 12:00 PM

Message 2 of 7
There are a couple of things that you need to clean up in the code that you posted, namely the way you handle the boolean arrays.

First, you have the equal operator, whose output will be true if both inputs are true or both are false. That sounds like an exclusive NOR gate...

Second, you take the array, sort it, reverse its direction and look for the first false element. It looks like all you really need to know is the number of true bits in the array. All you have to do is send the array to the Boolean to (0,1) function and sum the results.
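The same idea in text form, as a plain-Python sketch (the function name `count_true` is mine, just for illustration): convert the booleans to 0/1 and sum, instead of sorting, reversing, and searching.

```python
# Count the True elements of a boolean array by converting each
# boolean to 0/1 and summing -- the same idea as wiring the array
# through LabVIEW's "Boolean To (0,1)" and then "Add Array Elements".
def count_true(flags):
    return sum(int(f) for f in flags)

print(count_true([True, False, True, True]))  # prints 3
```

This replaces a sort (O(n log n)) plus a search with a single O(n) pass.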

Mike...

Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

Message 3 of 7
The VI is trying to calculate Approximate Entropy, which estimates the likelihood that adjacent vectors (x,y) that are similar remain similar when extended to (x,y,z).
First this VI reads a time series of data and calculates the standard deviation of the series, then multiplies it by 0.2 to create a tolerance limit around an array subset. The ToleranceVectors.vi then makes an array equal in length to the time series in order to compare its elements with the original time series. If a vector (x,y) of the time series falls within the tolerance vectors, it counts as "similar" in value and returns a 1.
All the vectors that fell within the tolerance range are then summed to form the denominator of the ApEn ratio.
This process is repeated for the numerator of the ratio, except now it finds vectors (x,y,z) whose values fall within the tolerance limits.
The natural log of this ratio is taken and then negated to get an ApEn value for vector (x,y). Because there are 1000 points in this calculation, the loop must be iterated 999 times to get the average ApEn of the time series. That long iteration count is why my program is taking so long, and I was wondering if there is a way to reduce it somehow. Thanks for your time and consideration!
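For reference, here is the standard ApEn recipe as a plain-Python sketch of what the description above amounts to. The function names and the structure are mine, not taken from the posted VI; it assumes m=2 (pairs vs. triplets) and tolerance r = 0.2 × SD, as described:

```python
import math

def apen(u, m=2, r_factor=0.2):
    """Approximate Entropy sketch: embedding dimension m compares
    length-m windows (x,y) against length-(m+1) windows (x,y,z);
    tolerance r is r_factor times the standard deviation of u."""
    n = len(u)
    mean = sum(u) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in u) / n)
    r = r_factor * sd

    def phi(m):
        # all overlapping length-m windows of the series
        vecs = [u[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for v in vecs:
            # count windows within tolerance r of v (max-norm distance);
            # v always matches itself, so the count is at least 1
            c = sum(1 for w in vecs
                    if max(abs(a - b) for a, b in zip(v, w)) <= r)
            total += math.log(c / len(vecs))
        return total / len(vecs)

    # ApEn = phi(m) - phi(m+1): how much similarity is lost when
    # each matching pair is extended by one more point
    return phi(m) - phi(m + 1)
```

Note that each `phi` pass is O(n²) in the series length, which is exactly why a per-window subVI call repeated 999 times gets slow.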
Message 4 of 7
First, making 'ToleranceVectors.vi' reentrant will make your for loop roughly twice as fast. Second, turn the while loop inside 'ToleranceVectors.vi' into a for loop with 500 iterations. Third, do the code inside that VI in place:


That's for starters.

Ton

Message Edited by TonP on 07-15-2007 07:05 PM

Message 5 of 7

Things just don't add up!


@Schola wrote:
  The VI is trying to calculate Approximate Entropy which calculates the maximum likelihood the ratio of adjacent vector (x,y) is similar to adjacent vectors (x,y,z). 

You read ONE file, where you take adjacent pairs in the upper part (I guess these are (x,y)), while in the lower part you take triplets (presumably (x,y,z)). In "tolerance vectors", you then clone these subsets into arrays of either 1000 points (upper) or 1500 points (lower), but compare them with only a 1000-point subset. Now you only run the FOR loop once, while in the first post the FOR loop had a count of 100. Which one is correct?

You still haven't told us the size of the original array. Are "x,y" and "x,y,z" 2D and 3D coordinates, respectively?

Your "Tolerance vectors" subVI is not needed at all, because it just inflates existing data with hot air while not adding any new information. Most likely it would be much faster to get the two (three) numbers x,y,(z) individually and decimate the size-1000 subset into 2 (3) 1D arrays. Then you could compare each scalar (x, y, or z) with one of the decimated arrays. Let me know if this is not clear!
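The decimation idea, sketched in Python (the function name `within_tolerance_counts` is hypothetical; the point is comparing each scalar against an offset slice of the series instead of building inflated copies of the data):

```python
def within_tolerance_counts(series, template, r):
    """For a template (x, y) or (x, y, z), compare each scalar
    against the correspondingly offset slice of the series,
    then AND the per-element results. No replicated 1000/1500
    point copies of the data are created."""
    m = len(template)
    n = len(series) - m + 1  # number of length-m windows
    # one boolean mask per template element, each against its own slice
    masks = [[abs(t - series[j + k]) <= r for j in range(n)]
             for k, t in enumerate(template)]
    # a window matches only if every element is within tolerance
    return sum(all(col) for col in zip(*masks))

# Windows of [0,1,2,3] are (0,1), (1,2), (2,3); only (0,1) matches
print(within_tolerance_counts([0, 1, 2, 3], (0, 1), 0.1))  # prints 1
```

This keeps the memory footprint at one boolean mask per template element rather than a full cloned array per comparison.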

Also, you take the absolute value, which is definitely not needed because the sum of ones is always zero or greater, right?

Do you have a website explaining the math behind your algorithm?

As mentioned, you can definitely get rid of the "tolerance vectors" subVI entirely, but here is another alternative for your enjoyment. It is probably slightly less efficient than TonP's version, but simpler in code. 😉

 


 

Message Edited by altenbach on 07-15-2007 12:49 PM

Message 6 of 7
Sorry for not specifying the details; I have attached the journal article from which this calculation arises. It is part of the appendix ("Calculation of APEN") on the last page. The corrections everyone suggested have helped and I've got it working a lot faster. Thanks a lot.
Message 7 of 7