We have some large comma-delimited files that we're trying to read in (anything up to 2 MB). There are all sorts of data in them, neatly formatted into sections:
configuration data
test1 data
test2 data
etc.
At the moment the system is designed around reading the file in as a spreadsheet string and converting it to a 2D array of strings. Each cell is then searched/scanned for its contents, providing indices into the 2D string array. The whole array is then subsetted and the data is extracted and converted into the correct data types.
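For reference, in text-language terms the current flow looks roughly like this (a Python sketch of the dataflow, not the actual VI; the "test1 data" tag, file name, and slice bounds are made-up examples):

    import csv
    import io

    def load_as_2d_array(text):
        # Equivalent of reading a spreadsheet string: whole file -> 2D string array
        return [row for row in csv.reader(io.StringIO(text))]

    def find_cell(table, tag):
        # Scan every cell until the tag is found, returning its (row, col) indices
        for r, row in enumerate(table):
            for c, cell in enumerate(row):
                if cell == tag:
                    return r, c
        raise ValueError("tag not found: " + tag)

    text = open("data.csv").read()
    table = load_as_2d_array(text)
    r, c = find_cell(table, "test1 data")                  # locate the section header
    block = [row[c:c + 4] for row in table[r + 1:r + 10]]  # subset the array
    values = [[float(x) for x in row] for row in block]    # convert to the right type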
Does this sound like an efficient way of dealing with large data sets? I'm curious because I'm assuming that LabVIEW can't work out in advance how much memory it'll need for an array of strings, as each element can be a different length (unlike a number).
Would it be more efficient to search the whole string for the correct tag and then churn out a 2D array of offsets to index the string with?
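By that I mean something along these lines (again a simplified Python sketch, with a single offset per tag rather than a full 2D offset array; the tag and row count are placeholders):

    def find_section(text, tag):
        # Single search of the raw string; return the offset just past
        # the end of the line containing the tag
        start = text.index(tag)
        return text.index("\n", start) + 1

    def read_section(text, tag, n_rows):
        # Parse only the n_rows lines following the tag, leaving the
        # rest of the file untouched as one flat string
        offset = find_section(text, tag)
        lines = text[offset:].splitlines()[:n_rows]
        return [[float(x) for x in line.split(",")] for line in lines]

    text = open("data.csv").read()
    test1 = read_section(text, "test1 data", 9)  # hypothetical tag and row count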