
LabVIEW


Optimizing large 2-D array data sets


I have a DAQ subVI that acquires 600,000 data points per channel, reading 32,667 points per acquisition. I process each block of 32,667 points as soon as it is acquired. I need to store all 600,000 points, and I am currently using globals since I need to use this data in another VI.

I've read that using functional globals or queues would be good for optimization, but I am not sure exactly how this is done. Can someone help me out with this?

[Also: I've built a FIFO buffer, but that didn't seem to optimize my application.]

Thanks for your help

Kudos are the best way to say thanks 🙂
Message 1 of 8
You said, "I need to store these 600,000 data sets and am currently using globals since I need to be using this data in another VI."

Why not use the appropriate file I/O VIs to write the data to a file, and later read it back from that file in the other VI?
That way, you will not be filling the VI's virtual memory.

But then, this depends on your application's requirements.
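The write-in-one-VI, read-in-another idea can be sketched as follows. Since LabVIEW is graphical, this is a Python analogue of wiring Write to Binary File in the acquisition VI and Read Binary File in the analysis VI; the file name and the float32 data type are assumptions for illustration.

```python
import numpy as np

# Hypothetical file path; in LabVIEW this would be the path wired to the file I/O VIs.
PATH = "acquisition.bin"

open(PATH, "wb").close()  # start with an empty file for this demo

# "Acquisition VI" side: append each acquisition's samples to the binary file.
def write_acquisition(samples: np.ndarray) -> None:
    with open(PATH, "ab") as f:              # append mode: successive blocks accumulate
        samples.astype(np.float32).tofile(f)

# "Analysis VI" side: read the whole file back later for post-processing.
def read_all() -> np.ndarray:
    return np.fromfile(PATH, dtype=np.float32)

write_acquisition(np.arange(4, dtype=np.float32))
write_acquisition(np.arange(4, 8, dtype=np.float32))
data = read_all()
assert data.shape == (8,)
```

The key point is that only one acquisition block lives in memory at a time; the full record accumulates on disk instead of in a global variable.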
Message 2 of 8
Check out the tutorial Managing Large Data Sets in LabVIEW.  It includes sample code for a couple of methods to do what you want to do.  The examples are 1D arrays, but can be easily modified for 2D arrays.  Let us know if you need more help.
Message 3 of 8
Too early in the morning...

I just did the math and realized you are taking 20+ GBytes of data (depending on your data width).  You can easily handle one 32k set at a time.  You can handle 100 sets at a time with a bit of effort.  You will need a disk buffer if you want to save all your data.  I would recommend NI-HWS (on your driver CD under the computer-based instruments tab) for this, since it is a fast binary file format designed to stream 1D array data.
Message 4 of 8
I looked at these examples. So I would implement it like this? When I get the 32,667 sets of data for 4 channels, I collect them into a functional global for about 10 acquisitions?
I am not too clear on this.

Currently, I have allocated an empty array and am using Replace Array Subset as soon as I acquire data.

Message 5 of 8
How you implement it depends on how fast data is coming in and what you do to it before you store it.  NI-HWS is essentially hardware limited for the type of streaming you are doing.  On an old 650MHz Pentium III computer, I could get 20MBytes/sec or more.  You may be better off just going directly to disk.  However, you did say you were doing some analysis.  Please give us more details of your operation (data size, rate, and specific analysis details, at least as much as you can) and we will try to give you better help.

If you use the functional global, you need to resize the internal array to its final size before you use it.  Then use Replace Array Subset to fill it and Array Subset to read from it.  You can integrate your file save into the functional global as well.
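The functional-global pattern described above (pre-size once, then replace in place) can be sketched in Python; the class name, sizes, and float32 type here are illustrative, not from the thread, and the persistent object stands in for the uninitialized shift register of a LabVIEW action engine.

```python
import numpy as np

# Rough Python analogue of a LabVIEW functional global (action engine):
# one persistent buffer with "init", "replace subset", and "read subset" actions.
class AcquisitionBuffer:
    def __init__(self, total_points: int, channels: int):
        # Pre-size the array once, like resizing the shift-register array up
        # front; later writes replace elements in place rather than growing
        # the array (which would force reallocations and copies).
        self._buf = np.zeros((channels, total_points), dtype=np.float32)
        self._next = 0

    def replace_subset(self, block: np.ndarray) -> None:
        n = block.shape[1]
        self._buf[:, self._next:self._next + n] = block  # in-place, no reallocation
        self._next += n

    def read_subset(self, start: int, count: int) -> np.ndarray:
        return self._buf[:, start:start + count]

fg = AcquisitionBuffer(total_points=100, channels=4)
fg.replace_subset(np.ones((4, 25), dtype=np.float32))
```

Growing an array by appending each block is what tends to be slow; pre-sizing and replacing, as the post suggests, keeps the memory footprint fixed.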
Message 6 of 8
I implemented this, and it works decently so far. When I receive the data sets for an acquisition, I write them to a binary file, appending each new block. This seems reasonably fast.

Here are the details of my operation:
PCI 6110E
5MS/s
Total data points - 600,000
4 Channels, Double (but I am converting them to Single), 32,667 per acquisition

I wish to store the data during the acquisition, where it is also used for some processing. It takes me about 20-22 seconds to collect the entire sample of data, process and display the results, and store them to a binary file.

The reason for storing them is that I will be using them in another VI. I am able to read the huge file, do some processing, and compute some metrics; that takes about 2-3 seconds.

My main concern is the 20-22 seconds. I've not tried NI-HWS so far. Is it on one of the hardware driver CDs? I am not able to locate it.

Anyways, thanks a lot for your help.

Message 7 of 8
You can find NI-HWS on the DriverCD that should have come with your hardware.  Look under the Modular Instruments tab when you get to the list of elements to install.  It will be the last one in the list.  NI-HWS was added to the DriverCD about two years ago, so you may not have it if your CD is old enough.  You can call your local NI representative and get one, if you need to.

However, given that you can read your file back quickly, the file storage is probably not the issue.  My guess is that you are making a few extra copies of the data, and the memory management is what is eating your time.  The tutorial Managing Large Data Sets in LabVIEW contains a section on finding copies that should prove useful.  I would also recommend you fetch and store your data as I16 binary.  Save the scale and offset at the beginning of the file as a pair of doubles (NI-HWS makes this easy).  This will cut your data size in half, although it may make your analysis more difficult.  Use the decimation routines in the tutorial for display.  This will also save you a lot of time.  Most displays don't have 32,000 pixels across.
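The I16-with-header idea above can be sketched as follows, again in Python as a stand-in for the LabVIEW diagram. The file name, the min/max-based scaling, and the simple stride decimation are assumptions for illustration; NI-HWS handles the header bookkeeping for you.

```python
import struct
import numpy as np

def save_i16(path: str, volts: np.ndarray) -> None:
    # Choose a scale/offset that maps the voltage range onto I16 codes, then
    # store those two doubles at the start of the file, as suggested above.
    offset = float(volts.min())
    scale = (float(volts.max()) - offset) / 32767 or 1.0  # avoid 0 for flat data
    codes = np.round((volts - offset) / scale).astype(np.int16)
    with open(path, "wb") as f:
        f.write(struct.pack("<dd", scale, offset))  # 16-byte header: scale, offset
        codes.tofile(f)

def load_i16(path: str) -> np.ndarray:
    with open(path, "rb") as f:
        scale, offset = struct.unpack("<dd", f.read(16))
        return np.fromfile(f, dtype=np.int16) * scale + offset

def decimate_for_display(y: np.ndarray, width: int = 1000) -> np.ndarray:
    # Simple stride decimation: a plot only needs about as many points as pixels.
    step = max(1, len(y) // width)
    return y[::step]
```

Halving the stored width (I16 instead of single or double) halves both the disk traffic and the in-memory copies, which is where the advice above expects the time savings to come from.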

One last point: be careful in your analysis routines.  Most LabVIEW analysis VIs use doubles.  If you convert to singles and then use a LabVIEW VI, the data will be converted back to double before being analyzed.
Message 8 of 8