I would appreciate it if someone could suggest ways to speed up the attached "weighted average.vi". It takes 30 seconds to run on the attached "data.tdms" on a Core 2 Extreme CPU Q9300 @ 2.53 GHz (LabVIEW 2012, Windows 7) computer, and 20 seconds on an i7 CPU firstname.lastname@example.orgGhz (LabVIEW 2013, Windows 7) computer.
The weighted average.vi calculates a weighted average for each group, e.g. "2325E, ,", "2350E, ,". "R" stands for repeat. Number_of_Stacks is 32 for all groups (with different repeat counts) in the attached "data.tdms", and it will take different values in future data.tdms files.
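For readers without LabVIEW installed, the per-group computation described above can be sketched in Python. This is a hedged illustration only: the actual VI reads TDMS channels, and the group labels, values, and weights below are made-up placeholders, not the real file contents.

```python
# Sketch of a per-group weighted average, assuming each record carries a
# group label (e.g. "2325E"), a value, and a weight. The single pass keeps
# a running weighted sum and weight total per group.
from collections import defaultdict

def group_weighted_averages(records):
    """records: iterable of (group, value, weight) tuples."""
    weighted_sums = defaultdict(float)  # sum of weight * value per group
    weight_totals = defaultdict(float)  # sum of weights per group
    for group, value, weight in records:
        weighted_sums[group] += weight * value
        weight_totals[group] += weight
    return {g: weighted_sums[g] / weight_totals[g] for g in weighted_sums}

# Hypothetical example data, not taken from the attached data.tdms:
data = [("2325E", 10.0, 1.0), ("2325E", 20.0, 3.0), ("2350E", 5.0, 2.0)]
print(group_weighted_averages(data))  # {'2325E': 17.5, '2350E': 5.0}
```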
Thank you very much,
Well, I still don't fully understand what it does, but I was able to improve it by about 10%. Attached is my attempt.
Pattern-matching functions are external modules and are much slower for simple tasks like a search. Search/Split String is just about the fastest method according to most benchmarks, and it can be improved further by right-clicking and choosing Match Single Character where possible (one of your searches qualifies). I moved the Initialize 2D Array outside the loop; there is no need to make a new one every iteration. I also changed the data type from a waveform to a 2D array of doubles, which looks like what you wanted anyway. I also tried defragmenting the file first, but it isn't very fragmented, and the run actually took longer after adding that time.
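The same trade-off exists in most languages: a regex engine carries overhead that a plain literal search avoids. As a rough analogy (not a LabVIEW benchmark), here is a Python sketch comparing a literal substring search against a regex search on made-up data:

```python
# Compare a plain substring search (analogous to Search/Split String) with a
# regex search (analogous to Match Pattern) for a literal target string.
import re
import timeit

# Hypothetical text loosely shaped like the group labels in the question.
text = "R, 2325E, ," * 100_000

# Each call scans the whole string, since "2350E" never appears in it.
t_find = timeit.timeit(lambda: text.find("2350E"), number=100)
t_regex = timeit.timeit(lambda: re.search("2350E", text), number=100)
print(f"substring find: {t_find:.4f}s  regex search: {t_regex:.4f}s")
```

On typical CPython builds the plain `find` wins for literal targets; the exact ratio varies by machine, which mirrors why benchmarks were needed to pick the fastest LabVIEW string primitive.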
To be honest, it is a lot of data. It may just take that long to chug through it. Is it possible to do some of this while other processing is happening, to trick the user into thinking it takes less time?
There are some other improvements you might be able to get away with if you accept a lot more complexity. You may have a fast computer, but this code only uses one core. If you can figure out a way to turn on For Loop parallelism, you should see a big improvement.
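The parallelism suggestion assumes each group can be averaged independently, so the groups can be farmed out across workers. A minimal Python sketch of that idea (LabVIEW's parallel For Loop does this natively; the group data here is hypothetical, and a thread pool is used just to keep the sketch portable):

```python
# Sketch: process independent groups concurrently, one task per group.
from concurrent.futures import ThreadPoolExecutor

def weighted_average(pairs):
    """pairs: list of (value, weight) tuples for one group."""
    weighted_sum = sum(w * v for v, w in pairs)
    weight_total = sum(w for _, w in pairs)
    return weighted_sum / weight_total

# Hypothetical per-group data, not taken from the attached data.tdms:
groups = {
    "2325E": [(10.0, 1.0), (20.0, 3.0)],
    "2350E": [(5.0, 2.0), (7.0, 2.0)],
}

with ThreadPoolExecutor() as pool:
    results = dict(zip(groups, pool.map(weighted_average, groups.values())))
print(results)  # {'2325E': 17.5, '2350E': 6.0}
```

Note that Python threads mostly help with I/O-bound work because of the GIL; for CPU-bound number crunching you would reach for processes instead, whereas a LabVIEW parallel For Loop spreads iterations across cores directly.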
Edit: I also turned off automatic error handling and unchecked "Allow debugging".