
LabVIEW


Can I make my 3D array data set a 2D array data set?

Solved!

Hi. I am currently putting my data into a 3D array, and I'm not happy with it: the 3D data set calls for a large amount of array handling whenever I want to do anything with the data, and since I have to do a lot of analysis, it's a pain. It's reliable, but it takes up block diagram space and is generally awkward. I read here (posts #6 and #7, if the link doesn't take you directly to them) a couple of posts by some well-respected LabVIEW gurus saying they saw no reason to use 3D arrays. I was hoping someone might be able to point me to a better way to store and access my data.

 

I am acquiring n channels of data (one right now, but I will add more later), which gives me a 2D array with each row being a channel and the columns being the data points.

I need to acquire multiple test runs worth of data.

I also need to average the data of each test run, average the averages, and then verify the test run averages are within a tolerance of the overall average.

I also need to be able to rerun a test run if its average doesn't fall within the tolerance of the overall average.
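The averaging and tolerance logic above can be sketched in text form (shown here in Python purely as an illustration of the math, not as LabVIEW code; the function names and the example data are made up for the sketch):

```python
# Illustrative sketch (Python, not LabVIEW): each test run is a 2D list
# of channels x data points; the list of runs plays the role of the 3D array.
from statistics import mean

def run_average(run):
    # Average every data point across all channels of one test run.
    return mean(x for channel in run for x in channel)

def runs_within_tolerance(runs, tol):
    # Average each run, average the averages, and flag any run whose
    # average falls outside +/- tol of the overall average.
    averages = [run_average(r) for r in runs]
    overall = mean(averages)
    out_of_tol = [i for i, a in enumerate(averages) if abs(a - overall) > tol]
    return overall, averages, out_of_tol

runs = [
    [[1.0, 2.0, 3.0]],    # run 0, one channel (avg 2.0)
    [[1.1, 2.1, 3.1]],    # run 1 (avg ~2.1)
    [[1.2, 2.2, 3.2]],    # run 2 (avg ~2.2)
    [[8.0, 9.0, 10.0]],   # run 3, clearly off (avg 9.0)
]
overall, averages, bad = runs_within_tolerance(runs, tol=2.0)
# bad lists the run indices that would need to be rerun.
```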

 

I'm using each page of the 3D array as a test run, so I can access whatever test run I want by selecting the corresponding page.

I'm waiting to save all the test runs at one time, until the user chooses to save the data, which means the user has run the minimum number of test runs and all of them have averages within tolerance of the overall average. I like this solely because this test will be performed many times for different UUTs, and all the data for a given UUT can be saved to a single file.

 

I'm currently saving the file as a binary file (for my backup purposes), and the user can also choose to save it to Excel, with each test run on its own sheet.

 

I've thought about saving after each run, but I wouldn't know how to append to or overwrite the data in the binary file. I'd rather not have a separate binary file for each test run, though that's not a dealbreaker.

 

The only way I see to avoid the 3D array is to save after each test run, but that would mean much more file manipulation (when runs need to be replaced, when the averages need to be analyzed, etc.).

 

Is "save after every test cycle" the approach that would typically be used? If so, it seems to me that the file manipulation would be more error-prone than the 3D array manipulation (though maybe that's just me). Can someone tell me otherwise?

 

Is there an alternate approach that I have not thought of or discovered here?

 

Thanks,

 

Scott

Message 1 of 4
Solution
Accepted by topic author doyles

I think saving after every test cycle is a safer, more reliable way to go than trying to accumulate all the data and saving it at once. That way, if the program locks up, you don't lose all the data that was still only in memory; the data collected so far is safely in a file. Why do you think file manipulation would be more error-prone?

 

If you need to append to an existing file, just use the Set File Position function to move the file pointer to the end of the file after you open it.
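In text terms, that pattern is simply open, seek to end, write. A rough Python equivalent of the same sequence (purely illustrative; file name and record layout are made up):

```python
# Illustrative Python equivalent of the LabVIEW append pattern:
# open the file, move the file pointer to the end, then write, so each
# test run's data is appended rather than overwriting earlier runs.
import os
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "runs.bin")

def append_run(path, samples):
    # "a+b" opens for binary append, creating the file if needed;
    # seek(0, os.SEEK_END) mirrors Set File Position (offset from end).
    with open(path, "a+b") as f:
        f.seek(0, os.SEEK_END)
        f.write(struct.pack(f"<{len(samples)}d", *samples))

append_run(path, [1.0, 2.0, 3.0])   # run 1
append_run(path, [4.0, 5.0, 6.0])   # run 2 lands after run 1

with open(path, "rb") as f:
    data = struct.unpack("<6d", f.read())
```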

 

You may also want to look at the TDMS file format as it has ways of sorting and organizing multiple datasets.
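TDMS organizes data as named groups containing named channels. A sketch of that hierarchy in plain Python dicts (just to show how runs and channels would nest; a real implementation would use the TDMS API, and the group/channel names here are invented):

```python
# Sketch of the TDMS group/channel hierarchy using plain Python dicts.
# This only illustrates the organization: each test run maps naturally
# to a group, and each acquired channel to a named channel in that group.
tdms_like = {}

def add_channel(store, group, channel, data):
    # Create the group on first use, then attach the channel's data.
    store.setdefault(group, {})[channel] = list(data)

add_channel(tdms_like, "Run 1", "Channel 0", [1.0, 2.0, 3.0])
add_channel(tdms_like, "Run 2", "Channel 0", [1.1, 2.1, 3.1])

run_names = sorted(tdms_like)              # one group per test run
run1_ch0 = tdms_like["Run 1"]["Channel 0"] # direct lookup by run/channel
```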

Message 2 of 4

Thanks. I already have switching to TDMS on the task list. While I'm making that change, I'll figure out how easy it would be to switch to saving after every test. I agree it's more robust, and it will certainly help keep the code simpler, which I think tends to make the code more robust in itself.

Message 3 of 4

I was looking at the TDMS stuff today, and it is much more complicated than what I'm currently doing with the binary files. The problem with the TDMS files is that I'm using a super cluster to store all of the non-data test information that I want to save in the file. My test data is in a SEQ.

 

I don't like using the binary files because any change to the type-def super cluster makes old files unreadable. The TDMS methods for handling clusters are certainly not a drop-in replacement, and I don't want a long string of unbundle/bundle operations that I have to maintain every time I change the cluster.
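One common workaround for the "cluster change breaks old files" problem is to store the non-data test info in a self-describing, versioned form rather than as a flattened binary blob, so readers can fill in defaults for fields that didn't exist yet. A hypothetical sketch in Python using JSON (field names and defaults invented for illustration):

```python
# Hypothetical sketch: store the cluster contents as versioned JSON so
# files written before new fields existed remain readable.
import json

def save_meta(meta):
    # Tag the current schema version alongside the fields.
    return json.dumps({"version": 2, **meta})

def load_meta(text):
    raw = json.loads(text)
    # Defaults for fields added after older files were written;
    # values present in the file override the defaults.
    defaults = {"operator": "unknown", "fixture_id": ""}
    return {**defaults, **raw}

# A "version 1" file written before operator/fixture_id were added:
old_file = json.dumps({"version": 1, "uut_serial": "SN-001"})
meta = load_meta(old_file)  # old field kept, new fields defaulted
```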

 

I've downloaded the library from LAVA and some other files from the forums, so when I get a chance I'll look at those and figure out how to implement them. Any other tips in the meantime will certainly be appreciated.

Message 4 of 4