fast search and update large arrays

Hello,
 
I am looking for a good, fast way to read a large number of spreadsheet files and load all of the data into one array for some calculations.
What kind of data I have:
- up to 400 files
- each file with up to 300 rows
- each row contains fields like serial number, PASS/FAIL, Date/Time, and TestDuration (a parsing sketch follows this list)
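For illustration, a minimal Python sketch of reading one such file into records (the tab-separated layout, the field order, the date format, and the name parse_file are all assumptions, not the actual file format):

from datetime import datetime

def parse_file(path):
    """Parse one test log into a list of records.
    Assumed columns: serial number, PASS/FAIL, date/time, duration in seconds."""
    records = []
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 4:
                continue                      # skip blank or malformed lines
            serial, status, timestamp, duration = fields[:4]
            records.append({
                "serial": serial,
                "status": status,             # "PASS" or "FAIL"
                "when": datetime.strptime(timestamp, "%d.%m.%y %H:%M:%S"),
                "duration_s": float(duration),
            })
    return records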
 
What kind of calculation I need to do:
- scan all the records for duplicate serial numbers, to see how many times each unit has been tested
- the output is per day. The same serial number may appear again some days later if the unit had to be repaired; in that case I no longer count it for PASS/FAIL, but I still have to count it for TestDuration (see the sketch after this list)
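Expressed as a rule, that could look like this (a hypothetical sketch; classify and the record layout match the illustrative parse_file above, not the real code):

def classify(record, first_seen_date):
    """Decide how a test record counts, per the rule above.
    first_seen_date: the date this serial number was first tested, or None."""
    day = record["when"].date()
    if first_seen_date is None:
        return "new"        # first test: counts for everything
    if first_seen_date == day:
        return "retest"     # same-day retry: counts toward the tries tally
    return "repaired"       # came back later: TestDuration only, no PASS/FAIL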
 
The output of the calculation should be:
- how many times each unit has been tested
- the status of each unit (PASS or FAIL)
- how long the test station was in use, obtained by summing the TestDuration times
 
What I have now:
- read all the files line by line and append everything to one big array
- then search the array for duplicate serial numbers
- if one is found, update the test-tries value and the status (PASS/FAIL) in the array
- only update units that were retested on the same day, but count all of them for the TestDuration statistics
- count per day how many units were tested once, twice, ..., and how many are NoGo's
- sum TestDuration to get the time the station was in use (TestUsage)
- save the output as a spreadsheet file to import into Excel (a sketch of this bookkeeping follows the list)
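A compact Python sketch of this bookkeeping, under the same assumptions as the parse_file sketch above (a dictionary replaces the array search, and all names are illustrative):

from collections import defaultdict
from datetime import timedelta

def daily_stats(records):
    """Aggregate parsed records into the two per-day tables shown below."""
    first_day = {}                                 # serial -> date first tested
    tries = defaultdict(lambda: defaultdict(int))  # day -> serial -> test count
    last_status = {}                               # (day, serial) -> PASS/FAIL
    runs = defaultdict(int)                        # day -> number of test runs
    usage = defaultdict(timedelta)                 # day -> summed TestDuration

    for r in sorted(records, key=lambda r: r["when"]):
        day = r["when"].date()
        runs[day] += 1
        usage[day] += timedelta(seconds=r["duration_s"])  # always counted
        if first_day.setdefault(r["serial"], day) != day:
            continue                   # repaired unit: TestDuration only
        tries[day][r["serial"]] += 1                # same-day retests accumulate
        last_status[(day, r["serial"])] = r["status"]

    table = {}                                     # day -> tally row
    for day, per_serial in tries.items():
        row = {"1 Test": 0, "2 Tests": 0, "3 Tests": 0, "4+ Tests": 0, "NoGo": 0}
        for serial, n in per_serial.items():
            if n == 1:   row["1 Test"] += 1
            elif n == 2: row["2 Tests"] += 1
            elif n == 3: row["3 Tests"] += 1
            else:        row["4+ Tests"] += 1
            if last_status[(day, serial)] == "FAIL":
                row["NoGo"] += 1                   # unit's last result was FAIL
        table[day] = row
    return table, runs, usage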
The output looks like:
Date 1 Test 2 Tests 3 Tests 4+ Tests NoGo
01.01.06 0 0 0 0 0
02.01.06 0 0 0 0 0
03.01.06 0 0 0 0 0
04.01.06 0 0 0 0 0
05.01.06 0 0 0 0 0
06.01.06 0 0 0 0 0
07.01.06 0 0 0 0 0
08.01.06 0 0 0 0 0
09.01.06 0 0 0 0 0
10.01.06 45 6 1 2 25
11.01.06 0 0 0 0 0
12.01.06 64 9 1 1 27
13.01.06 13 2 1 0 10
14.01.06 107 9 3 1 34
15.01.06 23 2 0 1 12
16.01.06 191 16 4 0 26
17.01.06 130 14 6 1 26
18.01.06 97 15 6 2 17
19.01.06 63 15 1 0 26
20.01.06 92 20 2 2 50
21.01.06 142 15 6 0 38
22.01.06 113 17 11 3 23
23.01.06 123 25 5 1 18
24.01.06 123 6 4 0 19
25.01.06 62 7 1 0 10
26.01.06 44 19 7 1 26
27.01.06 92 24 5 3 51
28.01.06 137 10 7 0 13
29.01.06 0 0 0 0 0
30.01.06 4 0 0 0 0
31.01.06 0 0 0 0 0
 
and
 
Date TestRuns TestUsage
01.01.06 0 00:00:00
02.01.06 0 00:00:00
03.01.06 0 00:00:00
04.01.06 0 00:00:00
05.01.06 0 00:00:00
06.01.06 0 00:00:00
07.01.06 0 00:00:00
08.01.06 0 00:00:00
09.01.06 0 00:00:00
10.01.06 147 05:25:42
11.01.06 0 00:00:00
12.01.06 182 07:56:00
13.01.06 63 03:14:43
14.01.06 239 10:32:42
15.01.06 103 05:16:04
16.01.06 328 16:44:33
17.01.06 292 14:27:37
18.01.06 243 11:05:12
19.01.06 231 09:56:59
20.01.06 313 13:24:36
21.01.06 314 15:34:46
22.01.06 293 12:12:07
23.01.06 261 13:12:02
24.01.06 224 10:56:10
25.01.06 150 07:18:17
26.01.06 195 07:04:21
27.01.06 297 11:08:50
28.01.06 291 14:53:52
29.01.06 0 00:00:00
30.01.06 6 00:16:10
31.01.06 0 00:00:00
 
The way I do it now is extremely slow: calculating everything for half a year (180 files containing 20,000 tests) took one hour on my PIII 667 MHz PC. Is there a faster and better way to do this?
 
Thanks for your help
Mathias
Message 1 of 2
I don't have time to look into the details at the moment, but why are you reading your files "line by line"? Just read the entire file at once and translate it into an array afterwards!
 
(Files are just a long string of bytes. In the worst case, the code would, for each single line read (!), search the entire file from the beginning, counting line feeds to locate the requested line. If you read sequential lines, you can improve this a bit by wiring the "mark after read" output to the "start of read" input via a shift register. Still, that is far too many read operations if you need the entire file anyway, right?)
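In text form, the contrast looks roughly like this (a Python sketch for illustration only; in LabVIEW the equivalent would be reading the whole file as one string and converting it with something like Spreadsheet String To Array):

def read_all_rows(path):
    """Read the whole file with a single read operation, then split in memory.
    The slow pattern issues one file-read per line (hundreds per file);
    this issues exactly one."""
    with open(path) as f:
        text = f.read()                      # one read for the entire file
    # Split into lines, then into fields (tab-separated is an assumption)
    return [line.split("\t") for line in text.splitlines() if line]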
Message 2 of 2