01-25-2018 08:06 PM
Dear LabVIEW experts,
We collect large data sets from experiments, and CSV files are preferred for communicating between people and programs... (at the moment). Is there a way to append a large (say 600 MB) CSV file to another without bumping into "Not enough memory to complete this operation"? Thank you!
01-25-2018 08:27 PM
You can open them one at a time and write them out to a single destination file.
But if combining them into one causes out-of-memory problems, do you really want them combined into one file?
01-25-2018 09:45 PM
Not much needs to be in memory at any given time.
Open the first one for writing, then keep reading chunks of the second file and append them to the first until you run out of data.
(Since everything gets copied verbatim in the end, you don't even need to care about the file's internal structure.)
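In text form, that chunked-append loop might look like the following Python sketch (the file names and the 1 MB chunk size are my own illustrative choices, not anything from the original post):

```python
CHUNK_SIZE = 1024 * 1024  # 1 MB per read; tune to taste


def append_file(dst_path, src_path, chunk_size=CHUNK_SIZE):
    """Append src to dst in fixed-size chunks, so only one chunk
    is ever held in memory regardless of how big the files are."""
    with open(dst_path, "ab") as dst, open(src_path, "rb") as src:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # an empty read means end of file
                break
            dst.write(chunk)


# Tiny demo files standing in for the real 600 MB CSVs:
with open("first.csv", "w") as f:
    f.write("a,b\n1,2\n")
with open("second.csv", "w") as f:
    f.write("3,4\n5,6\n")

append_file("first.csv", "second.csv")
```

The LabVIEW equivalent is a loop around the plain file read/write primitives; the point is just that peak memory use is bounded by the chunk size, not the file size.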
Of course, it is pure madness to use formatted text files for data this large. Everybody should agree on a simple binary format and go with that!
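For comparison, a flat binary format needs no text parsing at all and appends just as easily. A hypothetical sketch using Python's stdlib `struct` module (the little-endian float64 layout here is invented purely for illustration):

```python
import struct

# Write a block of float64 samples as raw little-endian bytes.
samples = [3.14159, 2.71828, 1.41421]
with open("data.bin", "wb") as f:
    f.write(struct.pack(f"<{len(samples)}d", *samples))

# Reading it back is a single unpack -- no string scanning needed.
with open("data.bin", "rb") as f:
    raw = f.read()
count = len(raw) // 8  # 8 bytes per float64
values = list(struct.unpack(f"<{count}d", raw))
```

Fixed-width records also mean you can seek directly to record N, which is impossible in a CSV without scanning for linefeeds.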
01-26-2018 02:02 AM
It's easier for the subsequent processing of the data.
But true, I can always edit the data-analysis code to use multiple data files instead of a combined one. I was just wondering whether there's a better/easier way to handle a large data file in LabVIEW...
01-26-2018 02:05 AM
Hmmm... okay... so take part of the data from the second file each time and append it gradually to the first file? That sounds feasible, just need a loop lol Will give it a try! Thank you! 🙂
And yea I agree about the binary format! lol
01-26-2018 03:12 AM
@alexalexalexalex wrote:
Hmmm... okay... so take part of the data from the second file each time and append it gradually to the first file? That sounds feasible, just need a loop lol Will give it a try! Thank you! 🙂
And yea I agree about the binary format! lol
If you're using e.g. Read From Spreadsheet File, all you need to do is wire the "number of rows" input and read, say, 1 million rows at a time (appending them to the other file) until the EOF output activates. 🙂
/Y
01-26-2018 05:34 AM
@Yamaeda wrote:
If you're using e.g. Read From Spreadsheet File, all you need to do is wire the "number of rows" input and read, say, 1 million rows at a time (appending them to the other file) until the EOF output activates. 🙂
Read From Spreadsheet File decodes the text into numeric data, and Write To Spreadsheet File then formats that same data back into a string. There is just no point in converting back and forth. It's much simpler to use Read From Text File and Write To Text File, since the data is already in the right format.
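To see why the decode/re-encode round trip buys nothing, here is a Python analogy of what the spreadsheet-style VIs do per line (the sample line and formatting are my own illustration):

```python
line = "1.5,0.25,100\n"

# "Read From Spreadsheet File" style: decode the text into numbers...
values = [float(x) for x in line.strip().split(",")]

# ..."Write To Spreadsheet File" style: format them straight back.
rebuilt = ",".join(str(v) for v in values) + "\n"

# The round trip turns "100" into "100.0" -- so the copy is not even
# guaranteed to be byte-identical, on top of the wasted CPU time.
```

A raw text copy skips both conversions and preserves the bytes exactly.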
01-26-2018 10:42 AM
@Yamaeda wrote:
If you're using e.g. Read From Spreadsheet File, all you need to do is wire the "number of rows" input and read, say, 1 million rows at a time (appending them to the other file) until the EOF output activates. 🙂
Wow, that's expensive! First of all, reading a certain number of rows requires searching the file for a certain number of linefeeds, which are typically unevenly spaced. Then it requires scanning the formatted data into array data (String, or even worse: numeric), just to reverse these operations a nanosecond later to basically end up where you started.
01-26-2018 11:32 AM
Just a silly question... will it add a "return" at the end of the last line of the first file? lol
Gotta give it a try and see...
01-26-2018 11:37 AM
No, it will not add anything, but depending on how the first file was written, there might already be a linefeed there. If you need to be sure how it ends, reading the last character of the first file is a simple operation.
Typically there are only LFs, though, no CRs.
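That last-character check amounts to a seek to the end of the file and a one-byte read. A Python sketch of the same idea (file names are made up for the demo; in LabVIEW you would position the file refnum near the end before reading):

```python
import os


def ends_with_newline(path):
    """True if the file's last byte is a linefeed (LF)."""
    if os.path.getsize(path) == 0:
        return False  # empty file: no last byte to inspect
    with open(path, "rb") as f:
        f.seek(-1, os.SEEK_END)  # jump straight to the last byte
        return f.read(1) == b"\n"


# Demo files standing in for the real CSVs:
with open("with_nl.csv", "w") as f:
    f.write("a,b\n1,2\n")
with open("without_nl.csv", "w") as f:
    f.write("a,b\n1,2")
```

If the check comes back False, write a single linefeed before appending the second file, so its first row doesn't get glued onto the first file's last row.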