LabVIEW


Parsing Quicker

Solved!

Hi everyone,

 

I am fairly new to LabVIEW and here is my task: I need to import a large data file (16 MB) consisting of a long chain of 16-bit binary numbers, and parse the data into 4 columns of 16-bit numbers until the end of the file.

 

Elaboration: if the data were a counter (represented in hex) that started from 0000, 0001, ..., FFFF, then rolled over to 0000, 0001, etc., the data would look like this:

0000000100020003000400050006000700080009000A000B000C (and so on for a 16 megabyte file). 

 

I would have to turn it into this:

0000  0001  0002  0003

0004  0005  0006  0007

0008  0009  000A  000B (and so on, delimited by tabs).

 

I have attached a VI that does this, with the flexibility of parsing into any number of columns, counting and shifting accordingly.  I additionally used the for-loop iteration terminal "i" as the shift register, multiplying it by the number of columns needed for "Write to Spreadsheet File."  I then took the counter times 4 (in this case) and compared it to the byte length of the total file to detect the EOF and terminate the loop.

 

This program works.  The problem is that it takes too long to parse a 16 MB file this way.  The added tab delimiters and carriage returns grow the output file to about 30 MB, and the parse takes approximately 2 hours in total.  I made this as elegant and simple as possible, but I'm wondering if there is another method that would make it quicker (I'm aiming for less than an hour).  Is there a limitation in LabVIEW that I'm overlooking?  Or is my CPU just too slow?  Is working with arrays usually this slow?  Is the comparison of the byte length against the counter after each for-loop iteration taking up most of the time?

 

I feel my lack of experience with LabVIEW is finally taking its toll when working with such large files.

Message 1 of 6
(3,222 Views)
Solution
Accepted by topic author eclx

One thought: having your Write to Spreadsheet File inside the loop, writing continuously, is going to slow you WAY down.  Add a shift register, use it to store the data until you reach the end of the file, and then write all the information to the spreadsheet in one go.
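Since the VI itself is graphical, here is the same idea sketched in Python as an analogy (file names and row counts are made up for the example): appending to the file on every loop iteration versus accumulating in memory, as the shift register does, and writing once at the end.

```python
import time

# Sample rows of 4 tab-separated 16-bit hex values
rows = ["%04X\t%04X\t%04X\t%04X" % (i, i + 1, i + 2, i + 3)
        for i in range(0, 40000, 4)]

# Slow pattern: one open/append/close per iteration
# (like calling Write to Spreadsheet File inside the loop)
t0 = time.perf_counter()
open("slow.txt", "w").close()
for row in rows:
    with open("slow.txt", "a") as f:
        f.write(row + "\n")
slow = time.perf_counter() - t0

# Fast pattern: accumulate in memory (the "shift register"),
# then write everything once at EOF
t0 = time.perf_counter()
with open("fast.txt", "w") as f:
    f.write("\n".join(rows) + "\n")
fast = time.perf_counter() - t0

print("per-iteration writes: %.3f s, single write: %.3f s" % (slow, fast))
```

Both files end up identical; only the number of file operations differs, and that is where the time goes.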

 

 

Kenny

Message 2 of 6
(3,209 Views)

It seems to me that all you're doing is reshaping a 1D array into a 2D array with a user-selected number of columns. You do not need a loop to do this. The Reshape Array function does this.

 

Note that to get the 4 digit format you need to use %04x as the format string, not %x. 
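For anyone following along outside LabVIEW, a rough Python equivalent of the Reshape Array + %04x suggestion looks like this (variable names are illustrative, not from the VI):

```python
import struct

cols = 4
# Sample raw bytes: a big-endian 16-bit counter, as in the original post
raw = bytes.fromhex("000000010002000300040005000600070008" "0009000A000B")

# Interpret the raw bytes as big-endian unsigned 16-bit words
words = struct.unpack(">%dH" % (len(raw) // 2), raw)

# "Reshape" the 1D sequence into rows of `cols` values, formatting
# each word with %04X so the leading zeros are kept
lines = ["\t".join("%04X" % w for w in words[i:i + cols])
         for i in range(0, len(words), cols)]
print("\n".join(lines))
```

This prints the three tab-delimited rows from the original post (0000 through 000B), with no loop-carried file I/O at all.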

 

 

Message Edited by smercurio_fc on 10-08-2008 03:30 PM
Message 3 of 6
(3,191 Views)

Hi ECLX:

 

The reason your VI's performance suffers is the unnecessary processing/computation.

For instance:

1. The array size does not change after reading from the file...so technically you only need to read it once.

2. You can format the array and save it all at once...instead of saving one row at a time.
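An end-to-end sketch of these two points in Python (file names here are hypothetical; the actual VI is LabVIEW 7.1):

```python
import struct

# Create a tiny sample input so the sketch is self-contained:
# a big-endian 16-bit counter, as in the original post
with open("capture.bin", "wb") as f:
    f.write(b"".join(w.to_bytes(2, "big") for w in range(8)))

cols = 4
with open("capture.bin", "rb") as f:   # 1. read the whole file once
    raw = f.read()
words = struct.unpack(">%dH" % (len(raw) // 2), raw)
text = "\n".join("\t".join("%04X" % w for w in words[i:i + cols])
                 for i in range(0, len(words), cols))
with open("capture.txt", "w") as f:    # 2. format and save all at once
    f.write(text + "\n")
```

One read, one format pass, one write; nothing grows or gets re-checked inside a loop.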

 

Here is a simple modification. It would be great if you could attach a small version of your SCI file...so I can see if I can make it even better...

 

Best

FSCT

Message 4 of 6
(3,182 Views)

Hey guys,

 

Thanks for all your responses.  If I could tag all of you with "problem solved," I would.  The re-appending after each loop and storing one iteration at a time slowed it WAY down.  This function now takes ~10 seconds, down from 2 hours, to parse a 16 MB file!

 

Thanks, guys:

 

Kenny, thanks for pointing out the main problem.

Smer, I will try that new function out right away ~ thanks for further increasing my knowledge of the array functions.

FightOn, you pointed out the main problem as well; however, I only have LabVIEW 7.1, so I was unable to view your VI.

Message 5 of 6
(3,168 Views)

We are not here only for the Kudos...but to help fellow LV programmers.

From 7,200 seconds down to 10 seconds...now that is a LabVIEW-like improvement!

 

Best,


FightOn 

 

 

Message 6 of 6
(3,148 Views)