LabVIEW


Analysis of multiple files

Solved!

I'm trying to analyse data which is spread between multiple files over multiple TDMS groups and channels.

The files are contiguous blocks of the same sampled signals (i.e. they are only split to allow efficient storage and visualisation).

I need to analyse each of the signals individually, and so plan to concatenate each channel's data for all of the files, and then run a waveform analysis "en masse".

I generate 288 files per day, each with around 200 channels. Each daily analysis will therefore contain about 8.6 million samples.

However, due to memory restrictions, I need to concatenate each channel individually, rather than using a single continuous file.

I therefore need to use multiple reads to the same files to extract each channel's information.

 

What is the best way to read from the same files multiple times?

Is it best to Open all files into an array of TDMS references, and then perform a read for each channel, only closing the files at the end of the analysis?

Or would it be better to Open/Read/Close each file for every channel?

 

Sorry if this is either obvious or has been answered before!

 

Message 1 of 4
Generally, I would recommend opening each TDMS file once and closing it only when you no longer need it (i.e. after the analysis). Your second idea of an open/read/close cycle for every channel and file (288 x 200 times) would mean a lot of overhead.
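To make the pattern concrete, here is a minimal sketch in Python. `FakeTdmsFile` is a stand-in, not a real TDMS API; in LabVIEW this corresponds to building an array of TDMS refnums with a single Open per file, looping the channel reads over that array, and wiring the Close nodes only after the whole analysis.

```python
# Sketch of the "open once, read per channel, close at the end" pattern.
# FakeTdmsFile is an illustrative stand-in for a real TDMS reader
# (e.g. LabVIEW's TDMS Open/Read/Close, or a library such as npTDMS).

class FakeTdmsFile:
    """Minimal stand-in: each 'file' holds a dict of channel -> samples."""
    def __init__(self, channels):
        self._channels = channels
        self.is_open = True

    def read_channel(self, name):
        if not self.is_open:
            raise RuntimeError("file is closed")
        return self._channels[name]

    def close(self):
        self.is_open = False


def analyse_per_channel(files, channel_names, analyse):
    """Open all files once, process one channel at a time, close at the end."""
    handles = list(files)          # in LabVIEW: an array of TDMS refnums
    results = {}
    try:
        for name in channel_names:
            # Concatenate this channel's samples across all contiguous files...
            data = [s for f in handles for s in f.read_channel(name)]
            # ...then analyse the whole day's signal "en masse".
            results[name] = analyse(data)
            # 'data' is replaced on the next iteration, so only one
            # channel's concatenated samples are held in memory at a time.
    finally:
        for f in handles:          # close only after the entire analysis
            f.close()
    return results
```

Holding all 288 handles open costs only file descriptors and a little reader state; the per-channel concatenation is what dominates memory, and it stays bounded at one channel's worth of samples.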

Besides that, from your description it's not clear to me how big your files are (you mentioned 8.6 million samples: per file? per channel?) or why you need to generate 288 files per day (you mentioned efficient storage and visualization). 288 files per day sounds like a lot to me if this runs day after day, and may itself become difficult to handle in the future.
Sascha
Message 2 of 4

Hi Cheggers,

 

Thanks for the quick reply - I had thought that there might be less overhead if I only performed a single open/close on each file, but wasn't sure what memory impact that might have.

 

The files are 8.6 million samples per channel per day, over about 200 channels. These are acquired continuously in 5 minute time blocks to make it easier to store and manipulate the files later on (I generally only need to review "raw" data in small time blocks, based on the outcome of analysing the whole day's acquisition). Eventually, I suspect the answer will be to run the code in real time to avoid the lengthy post processing, but that is not an option at the moment!

Message 3 of 4
Solution
Accepted by topic author jhughes01

So, to come back to your question: my advice would still be to open and close each file only once, reading all the channels in between. I also recommend checking whether you can change the data type (for example, from double to single, or even to a raw data type). That would make the files much smaller and therefore improve read performance.
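A quick back-of-envelope check of that saving (a Python sketch using only the standard library; the sample and channel counts are taken from the earlier posts in this thread, and the byte widths are the standard IEEE 754 sizes that TDMS DBL/SGL channels use):

```python
# Rough daily data volume for double- vs single-precision storage.
import array

SAMPLES_PER_CHANNEL_PER_DAY = 8_600_000   # ~288 five-minute files (from this thread)
CHANNELS = 200                            # also from this thread

def bytes_per_day(bytes_per_sample):
    """Total raw bytes written per day for all channels."""
    return SAMPLES_PER_CHANNEL_PER_DAY * CHANNELS * bytes_per_sample

double_size = array.array("d").itemsize   # 8 bytes (DBL)
single_size = array.array("f").itemsize   # 4 bytes (SGL)

print(f"double: {bytes_per_day(double_size) / 1e9:.2f} GB/day")
print(f"single: {bytes_per_day(single_size) / 1e9:.2f} GB/day")
```

That works out to roughly 13.76 GB/day at double precision versus 6.88 GB/day at single, so halving the sample width also halves the amount of data every read has to pull off disk.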

 

cheggers

Sascha
Message 4 of 4