LabVIEW

How to improve the execution time of my VI?

My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory list function to list all the files in a directory into a string array. Then I index this string array into a For Loop, in which each file is opened one at a time and some other sub VIs are called to do the data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to be able to know which section of my VI takes the longest. Thanks for any help.
Message 1 of 17
Bryan,

It is hard to tell from a written description where the bottleneck might be. Use the Profiler to tell which VIs run most often and for how long.

How big are your files? What file format (ASCII, LV datalog, etc.)? What kind of processing is being done? Is there enough RAM in the computer to keep all the data in memory at once or is swapping to virtual memory occurring? Does the program generate any errors? Are all the files in the chosen directory of the same type to be analyzed?

If you can post a copy of your program and a sample file, someone can probably offer suggestions.

Lynn
Message 2 of 17
Hi Lynn, thank you for your suggestions, but I don't have the code at my current location. I will try to give a more detailed description:

The VI accesses about 500 files (all .txt) in a single directory. Each file contains a 2D array (6x4000). I use the Read From Spreadsheet File VI (from LabVIEW) to open each file and pull out the 2D array, then a simple algorithm is performed on the 2D array. The loop repeats this process for every file. I am using a PC with a 1.3 GHz processor and 512 MB of memory. The VI runs smoothly without errors.
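In case it helps to see the structure spelled out, here is a rough text-language sketch of what the VI does (LabVIEW itself is graphical, so this is only an illustration; the directory path and the analyze() step are placeholders for my folder and sub VIs):

import os
import numpy as np

def analyze(table):
    # stand-in for the analysis sub VIs; returns one small result per file
    return table.mean()

data_dir = "C:/data"                          # placeholder directory
results = []
for name in sorted(os.listdir(data_dir)):     # ~500 .txt files
    # each file holds a 6 x 4000 table of whitespace-separated numbers
    table = np.loadtxt(os.path.join(data_dir, name))
    results.append(analyze(table))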

Thanks again for the suggestion.

Bryan
Message 3 of 17
Hi BryanL!

Well, it doesn't sound like there is anything that hollers out "Optimize Me!!!"... if that makes any sense :-).

There are a couple of things I can think of, though, that may help:
1) You can Open, Read, and run your algorithm on the data for each .txt file, while storing the reference to each file in an array. At the "clean-up" stage of your VI you can then close all of the files using that array of references. In total this takes the same amount of time, but it shortens the time taken to perform the algorithm on the data (if that benefits your application). I wouldn't usually recommend this, but in your situation it sounds like it could help.
2) When creating the .txt files, you can combine the data into fewer files. For example, instead of making 500 data files, make 100 files with 5 times the data each (a rough sketch of this idea follows below).
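For illustration only (this is not LabVIEW code; the folder names and the group size of 5 are placeholders), merging every 5 source files into one larger file before the analysis ever runs could look something like:

import os

src_dir = "C:/data"           # placeholder: directory holding the 500 small files
dst_dir = "C:/data_merged"    # placeholder: destination for ~100 larger files
group = 5                     # merge 5 source files into 1

os.makedirs(dst_dir, exist_ok=True)
files = sorted(os.listdir(src_dir))
for i in range(0, len(files), group):
    out_name = os.path.join(dst_dir, "merged_%03d.txt" % (i // group))
    with open(out_name, "w") as out:
        for name in files[i:i + group]:
            with open(os.path.join(src_dir, name)) as src:
                out.write(src.read())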

These are really the only 2 things that I can think of to "optimize" your code.

Hope this helps!

Travis H.
LabVIEW R&D
National Instruments
Message 4 of 17
Hello Travis,

Thank you so much for the suggestions. Indeed your method would help... though I only have two options for generating those data files (from another program which I have no control over): I can either generate 500 files, or I can generate one huge file (all the data in about 11 MB). I tried both options. The multiple-file approach spent a long time mainly in opening and reading those files. The other approach was still slow because of the large amount of memory taken by the file, even though I have 512 MB of RAM. I wonder if there is any memory management I can implement in LabVIEW that would help (the large array of data is passed in my VI to about 6-7 subVIs for analysis). Thanks again.
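As a back-of-the-envelope check (using the figures quoted above, 500 files of 6 x 4000 values, and assuming each value ends up as an 8-byte DBL once in memory), the whole data set is on the order of 100 MB, so every extra copy hurts on a 512 MB machine:

values = 500 * 6 * 4000                          # 12,000,000 values in total
print(values * 8 / 2**20, "MB as 8-byte DBL")    # ~92 MB
print(values * 4 / 2**20, "MB as 4-byte SGL")    # ~46 MB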

Bryan.
Message 5 of 17
Hi Bryan,

your approach sounds sound enough. I just want to ask something which just occurred to me: does your VI take the same amount of time for each file, or does the execution time increase with each file? If the time is increasing, make sure you're not retaining "old" data. Replace the previously read data with the new data instead of collecting it all in an array or something like that. This means: open file, read file, close file, process data, open next file, read next file...
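In text form (again just a sketch, not LabVIEW; the path and analyze() are placeholders), the pattern I mean is roughly:

import os
import numpy as np

def analyze(table):
    # stand-in for the real analysis; returns one small number per file
    return table.mean()

data_dir = "C:/data"                          # placeholder path
results = []
for name in sorted(os.listdir(data_dir)):
    with open(os.path.join(data_dir, name)) as f:   # open
        text = f.read()                              # read
    # the file is closed here, before any processing starts
    table = np.loadtxt(text.splitlines())            # process
    # keep only the small per-file result; the raw table is dropped each pass,
    # instead of raw_tables.append(table), which would hold every file in memory
    results.append(analyze(table))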

Otherwise, files of around 200 kB (judging by your 6x4000 data set) shouldn't take too long to process. There are, however, quite a few of them.

You also say the data is sent to 6-7 sub-VIs. Making sure there are no copies of the data made during this process may result in a significant speed-up. If you are sending the data to more than one sub-VI simultaneously, this will automatically create a copy of the data, thus slowing down the processing as the new memory is allocated and de-allocated.

Reading your original mail, I don't think these are likely to be your problem, but I thought I'd post anyway. You never know.

Hope this helps.

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 6 of 17
Hi Shane,

Great points! Thanks for the reply. I didn't measure whether my processing time is increasing from file to file. I did profile the VI and found most of the time is spent in the Open and Read file subVIs. I used the LabVIEW "Read From Spreadsheet File.vi" and assumed that it automatically closes each file after reading the data (but I'm not sure if this is the case), thus freeing up the memory.

Piling all of the data into one single file (instead of accessing 500 files individually) does seem to decrease my processing time significantly. Indeed, the data is passed to different subVIs, sometimes in parallel and sometimes sequentially. With a parallel arrangement I am taking advantage of the processing power of my PC without having different subVIs waiting for data. But, as you pointed out, the data copies may actually slow things down because of memory. It is hard for me to tell which route to take...?

Bryan.
Message 7 of 17
Bryan,

If "read from spreadsheet file" is the main time hog, consider dropping it! 😮 It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" :))

If the files come from a just-executed "list files", you can assume the files all exist and that you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed, and you never want it to, e.g., pop up a file dialog if a file goes missing; you simply want to skip it silently. If Open generates an error, just skip to the next file in line. Case closed.

I would do a streamlined low-level "open -> read -> close" for each file and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "Read From Spreadsheet File" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.

Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
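Purely as a text-language illustration (not LabVIEW; the file names and the assumed binary layout of 6 x 4000 little-endian DBLs are placeholders, not your actual format), the two routes compare roughly like this:

import numpy as np

# text route: one read of the whole file, then one string-to-number parse
with open("data_000.txt") as f:               # placeholder file name
    table_txt = np.loadtxt(f)                 # parses 6 x 4000 numbers from text

# binary route: the bytes on disk already ARE the numbers, so no parsing at all
with open("data_000.bin", "rb") as f:         # placeholder file name
    table_bin = np.fromfile(f, dtype="<f8").reshape(6, 4000)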
Message 8 of 17
Hello Altenbach,

Thanks for the great suggestions. I hadn't thought about the complexity of Read From Spreadsheet File.vi at all. I remember once I did open it, and there was a lot of stuff inside. With a single file (11 MB), I was able to reduce my processing time from about 20 min (reading 500 files) to less than 5 min. I will implement your suggestions and see what the results are.

Thanks again.

Bryan
Message 9 of 17
"I use the spreadsheet read vi"

There's your problem.
If you have access to the data used to write those files, consider writing it as a 2-D array of SGL (or DBL) instead of a spreadsheet. That will save considerable time, both on writing and later on reading. The conversion from a number to a string, and from a string to a number, takes time.

If you have to keep them as spreadsheet files, consider reading the file yourself. If they're not too big, read the whole file as a string (open it, get the EOF, then read that many bytes). Then use "spreadsheet string to array" to convert to numbers.

If that's not practical, then read it in chunks (of 1024 or 2048 bytes, for example), then extract the lines from that and build an array.

You want to minimize the number of FILE READ operations.

If you can do your analysis on a line-by-line basis, then don't convert to a 2-D array. Read the whole file as a string, then extract from that string a line at a time, and convert that to a 1-D array of channels.
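As a rough sketch of that line-by-line idea (not LabVIEW; the file name is a placeholder):

import numpy as np

with open("data_000.txt") as f:               # placeholder file name
    text = f.read()                           # a single FILE READ for the whole file

for line in text.splitlines():
    row = np.array(line.split(), dtype=float) # one 1-D array of channels per line
    # ... analyze the row here; no 2-D array is ever built ...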

Just some thoughts.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 10 of 17