Is there a file size limit when using Read From Spreadsheet File?

Solved!

@ssmith490D wrote:

Camerond--

Thanks for the help. I recreated the VI you posted and unfortunately it doesn't work. It looks like the Index Array VI is only looking at index zero and not the other 26 fields in each row.

ssmith

 


Okay, I didn't post the VI.

 

However, when I open TEST2MET.txt, your fields are not separated with tabs, but with spaces. The Read From Spreadsheet File VI uses tabs as the default delimiter, so numbers with spaces in between are simply truncated after the last numeric-type character (double-click on the VI for more information). In essence, you only have one field in each row unless you either change the delimiter to (one or more) space(s) or convert those spaces to a tab.
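To illustrate the difference, here is a rough Python sketch (not LabVIEW, and the sample row is made up) of what a tab delimiter versus a space delimiter sees in a space-separated line:

```python
# Illustration only: a space-separated row "looks like" one field to a tab
# delimiter, but splits into all of its fields with a space delimiter.
line = "1 2011 06 15 12 00 00 23.4 18.1 -999 1013.2"   # made-up sample row

tab_fields = line.split("\t")    # tab delimiter (the default): 1 field
space_fields = line.split()      # one-or-more spaces as delimiter: 11 fields

print(len(tab_fields), len(space_fields))   # prints: 1 11
```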

 

Cameron

 

To err is human, but to really foul it up requires a computer.
The optimist believes we are in the best of all possible worlds - the pessimist fears this is true.
Profanity is the one language all programmers know best.
An expert is someone who has made all the possible mistakes.

To learn something about LabVIEW at no extra cost, work the online LabVIEW tutorial(s):

LabVIEW Unit 1 - Getting Started
Learn to Use LabVIEW with MyDAQ
Message 11 of 20

Altenbach--

I started out with a couple of operations that I wanted to perform on the dataset. I have since scaled it back because I don't get to use LV as much as I would like, so I'm not very experienced with it. I wanted to use LV for this because I like using it and I don't use any scripting languages. The dataset consists of 20 meteorological measurements. Sometimes erroneous points are generated or data is missing due to instrument failure, instrument calibration time, etc. After I process the data, missing or bad points are represented by a -999. The first 7 fields (columns) are record number, year, month, day, hour, minute, and second. I strip these away and then remove the first row of the data, which holds the column headers. This leaves me with just the data.

The first step is to find the percentage of the data that is good, so I want to total the number of fields that contain a -999 and then do the math to find the percent of good data. The second thing I wanted to do was find the MIN and/or MAX of certain fields and report that value along with the date and time the MIN or MAX occurred. Then, if that went well, I was going to attempt to use the date and time to do one-hour averaging on each of the columns. I could just as easily create 3 separate VIs to do each, which would be fine. However, I have not gotten past the reading-in part and the extreme memory use that is causing the VI to crash. Thanks for the help here.
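For reference, here is a rough Python/NumPy sketch (not LabVIEW) of the operations described above. The column layout and the -999 bad-data flag come from this thread; the file name and everything else are assumptions for illustration only.

```python
# Rough sketch of the intended processing: drop the header row and the 7
# timestamp columns, compute percent good data, and find a field's max
# together with the time it occurred. Assumes the whole file fits in memory.
import numpy as np

raw = np.loadtxt("TEST2MET.txt", skiprows=1)  # skip the column-header row
timestamps = raw[:, :7]     # record, year, month, day, hour, minute, second
data = raw[:, 7:]           # the 20 meteorological measurements

bad = (data == -999)
percent_good = 100.0 * (1.0 - bad.sum() / data.size)

field = np.where(bad[:, 0], np.nan, data[:, 0])   # first measurement column
i_max = np.nanargmax(field)                        # index of its maximum
print(percent_good, field[i_max], timestamps[i_max, 1:7])
```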

 

ssmith

Message 12 of 20

Cameron--

Using Read From Spreadsheet File.vi, for the delimiter I created a string constant and put in a space by hitting the spacebar. This seems to work because the fields are being separated properly... or so it seems. Thanks for the help.

 

ssmith

Message 13 of 20

I have something that works. I used some of the code Greg posted and some that Altenbach posted, although it runs a little slower than I would have thought. But then again, LV was not designed to do file manipulations. So thanks, guys.

 

 

 

Message 14 of 20

@ssmith490D wrote:

I have something that works. I used some of the code Greg posted and some that Altenbach posted, although it runs a little slower than I would have thought. But then again, LV was not designed to do file manipulations. So thanks, guys.


Your problem is that Read From Spreadsheet File opens and closes the file each time you perform a read. If you used Read From Text File, you could configure it to read X lines at a time and then convert that 1D array of strings into your actual data for comparison. Doing it this way, you open the file once before the loop and close it once after the loop. Constantly opening and closing the file is really slow.
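In text form, the pattern being described looks roughly like this (a Python sketch, not LabVIEW; the block size and file name are assumptions):

```python
# Open the file once, read a fixed number of lines per loop iteration,
# process each block, and close the file once at the end.
LINES_PER_BLOCK = 50_000               # assumed block size for illustration

bad = total = 0
with open("TEST2MET.txt") as f:        # open once, before the loop
    f.readline()                        # skip the column-header row
    while True:
        block = [ln for ln in (f.readline() for _ in range(LINES_PER_BLOCK)) if ln]
        if not block:
            break                       # end of file reached
        for ln in block:
            values = ln.split()[7:]     # drop the 7 timestamp fields
            bad += values.count("-999")
            total += len(values)
print(100.0 * (1 - bad / total), "% good data")
```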


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 15 of 20

Thanks for the response. The files are too big to read all at once; it crashes the VI due to lack of memory. So reading and counting the occurrences of "-999" has to be done in blocks to avoid the out-of-memory issues. I don't see a mechanism on the Read from Text File VI to make this happen.

 

ssmith

Message 16 of 20

@ssmith490D wrote:

Thanks for the response. The files are too big to read all at once; it crashes the VI due to lack of memory. So reading and counting the occurrences of "-999" has to be done in blocks to avoid the out-of-memory issues. I don't see a mechanism on the Read from Text File VI to make this happen.


You don't see the input that says "Number of Bytes" or "Number of Lines"?


Message 17 of 20

Yes, I see that. My file is 525600 lines long. So what I'm missing is a mechanism to start at line number 500001 after the first iteration of the loop, then at 1000001 on the second, etc. Read From Spreadsheet File has a "Start of Read Offset" input, which Read from Text File does not.

 

ssmith

Message 18 of 20
Solution
Accepted by topic author ssmith490D

@ssmith490D wrote:

Yes, I see that. My file is 525600 lines long. So what I'm missing is a mechanism to start at line number 500001 after the first iteration of the loop, then at 1000001 on the second, etc. Read From Spreadsheet File has a "Start of Read Offset" input, which Read from Text File does not.


Read From Spreadsheet File needs that offset because it opens and closes the file each time. When you open the file and read a little at a time, the file pointer is moved for you automatically. That's why I read 111 characters before the loop: it gets the file pointer to the beginning of the data you care about. So after reading 50000 lines, the file pointer will be at line 50001, and that is where the next read will start.
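A rough Python sketch (not LabVIEW) of that file-pointer behaviour, just to illustrate the point; the 111-character header skip is from the posted VI, and the block size is an assumption:

```python
# After the header is skipped once, each read continues exactly where the
# previous one stopped, so no explicit "start of read offset" is needed.
with open("TEST2MET.txt") as f:
    f.read(111)                                           # skip past the header
    first_block  = [f.readline() for _ in range(50_000)]  # data lines 1..50000
    second_block = [f.readline() for _ in range(50_000)]  # picks up at line 50001
```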


Message 19 of 20

Crossrulz--

Awesome!!! Thanks very much for your help on this. My version that uses Read From Spreadsheet File takes about 42 seconds to run through a year's worth of data. Your version that uses the Read from Text File VI runs the same data in about 16 seconds. Thanks a lot.

 

ssmith

Message 20 of 20