What you are describing is a rather trivial application of HDF5, but the code you need is not in the sfpFile set of VIs (which was designed to handle 1D waveforms). However, the sfpFile code can be easily modified to do what you want. First, however, a digression on how HDF5 handles data that will help you on your way. HDF5 is a self-describing file format: every file contains all the information you need to read the data back out, such as compression, byte order, data type, and array dimensions. When you create a dataset in a file, you have to specify the array dimensions (in HDF5 parlance, this is part of the dataspace). You have a choice of whether to make the array expandable or not. If you make it expandable, you have to specify the chunk size the array will grow by. Note that HDF5 handles multiple dimensions (32 is the limit) as easily as one.
Go to the HDF5 website for documentation on how all this works. It is very low-level, but the sfpFile code will show you working examples of 1D and 2D waveforms. For example, open H5D Create-Write 1D DBL array.vi. The first call creates a dataspace that can be expanded, with initial size equal to the data being written. The next two VI calls create the dataset creation property list and populate it with the chunk size and compression parameters. Next, the dataset itself is created. Finally, the dataset is written to disk and all the references are closed.
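Since the VIs are thin wrappers over the HDF5 C library, that same sequence looks roughly like the C sketch below. This assumes the HDF5 1.8+ API; the file name, dataset name, and chunk size are placeholders of my own, not anything from sfpFile:

    #include "hdf5.h"

    /* Create, write, and close a 1D expandable dataset of doubles,
       mirroring H5D Create-Write 1D DBL array.vi (HDF5 1.8+ C API). */
    int write_1d(const double *buf, hsize_t n)
    {
        hsize_t dims[1]    = { n };
        hsize_t maxdims[1] = { H5S_UNLIMITED };  /* expandable along dim 0  */
        hsize_t chunk[1]   = { 8192 };           /* 8192 doubles = 65,536 B */

        hid_t file  = H5Fcreate("data.h5", H5F_ACC_TRUNC,
                                H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, maxdims);  /* dataspace     */
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);       /* creation list */
        H5Pset_chunk(dcpl, 1, chunk);                      /* chunk size    */
        H5Pset_deflate(dcpl, 6);                           /* compression   */
        hid_t dset  = H5Dcreate(file, "data", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        /* Close every reference, or the file stays open. */
        H5Pclose(dcpl); H5Sclose(space); H5Dclose(dset); H5Fclose(file);
        return 0;
    }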
One important thing to note: when writing data, two dataspaces are required, the memory dataspace and the disk dataspace. They do NOT need to be the same. You can write a 2D array in memory to a section of a 3D array on disk. The full dataspace specification includes not only the size of the array, but also which portion of it you are reading or writing. That portion can be as simple as a contiguous section or as complex as a bunch of random points, with many variants in between. See the HDF5 documentation for details.
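To make the 2D-into-3D case concrete, here is a hedged C sketch of writing one plane from memory into a 3D dataset on disk. The 512 x 512 frame size and the names (dset, image, k) are assumptions for illustration only:

    #include "hdf5.h"

    /* Write one 512 x 512 image from memory into plane k of an
       existing 3D dataset on disk (HDF5 1.8+ C API). */
    herr_t write_plane(hid_t dset, const double *image, hsize_t k)
    {
        hsize_t start[3] = { k, 0, 0 };      /* where the slab begins     */
        hsize_t count[3] = { 1, 512, 512 };  /* size of the slab on disk  */
        hsize_t mdims[2] = { 512, 512 };     /* shape of the memory array */

        hid_t fspace = H5Dget_space(dset);   /* 3D dataspace on disk      */
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t mspace = H5Screate_simple(2, mdims, NULL);  /* 2D in memory */

        herr_t err = H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace,
                              H5P_DEFAULT, image);
        H5Sclose(mspace); H5Sclose(fspace);
        return err;
    }

Note the ranks differ (2D memory, 3D file); HDF5 only requires that the two selections contain the same number of elements.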
To modify this VI for a 3D data set, change the DU64 Dims input of H5Screate_simple.vi from a one-element array to a three-element array giving your initial 3D array size. The chunk size input of H5Pset_chunk.vi will also need to be changed to a three-element array. I have found that the best performance results when your chunk size is about 65,000 bytes on any Windows system; larger or smaller chunks give slower performance, sometimes dramatically so. Your disk read/write should then be essentially hardware-limited (somewhere between about 10 MBytes/s and 25 MBytes/s, depending on how new your PC is and how defragmented your drive is).
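In C terms, those two changed inputs correspond to the rank-3 arrays in the sketch below. The 512 x 512 frame size is again an assumption, and the 1 x 90 x 90 chunk is just one shape that lands near the 65,000-byte sweet spot for doubles (8,100 doubles = 64,800 bytes):

    #include "hdf5.h"

    /* Create an expandable 3D dataset of doubles (HDF5 1.8+ C API). */
    hid_t create_3d_dataset(hid_t file)
    {
        hsize_t dims[3]    = { 1, 512, 512 };              /* initial size   */
        hsize_t maxdims[3] = { H5S_UNLIMITED, 512, 512 };  /* growable dim 0 */
        hsize_t chunk[3]   = { 1, 90, 90 };   /* 64,800 bytes per chunk      */

        hid_t space = H5Screate_simple(3, dims, maxdims);  /* 3-element Dims */
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 3, chunk);                      /* 3-element chunk */
        hid_t dset  = H5Dcreate(file, "frames", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Pclose(dcpl); H5Sclose(space);
        return dset;
    }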
When you write the data using a modification of H5Dwrite xxx.vi, you will need to create two dataspaces: a 2D one for your input data (wire to mem_space_id) and a 3D one describing where in your disk data you are storing it (wire to file_space_id). See the HDF5 documentation and the sfpFile examples for details. sfpFile should contain all the HDF5 primitives you need. The VI filenames are usually exact copies of the HDF5 subroutine names, with variants for the different data types (e.g. HDF5 has only one H5Dwrite; LabVIEW has one for every supported data type, with more trivial to make). Note that sfpFile was written before the HDF5 higher-level API existed, so it contains similar higher-level functionality, but the subVIs do not correspond to the HDF5 higher-level API subroutine calls.
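Putting those pieces together, appending one frame to the growing 3D dataset might look like this C sketch. It extends the earlier write_plane example by growing the dataset first; H5Dset_extent is the HDF5 1.8+ name for that call, and it assumes the dataset was created with an unlimited first dimension as above:

    #include "hdf5.h"

    /* Append one 512 x 512 frame to a 3D dataset with an unlimited
       first dimension (HDF5 1.8+ C API; sizes are placeholders). */
    herr_t append_frame(hid_t dset, const double *frame, hsize_t nframes)
    {
        hsize_t newdims[3] = { nframes + 1, 512, 512 };
        H5Dset_extent(dset, newdims);        /* grow the disk array by one */

        hsize_t start[3] = { nframes, 0, 0 };
        hsize_t count[3] = { 1, 512, 512 };
        hid_t fspace = H5Dget_space(dset);   /* wire to file_space_id      */
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);

        hsize_t mdims[2] = { 512, 512 };
        hid_t mspace = H5Screate_simple(2, mdims, NULL);  /* mem_space_id  */

        herr_t err = H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace,
                              H5P_DEFAULT, frame);
        H5Sclose(mspace); H5Sclose(fspace);
        return err;
    }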
If you haven't already done so, you should read the tutorial Managing Large Data in LabVIEW. It will explain how sfpFile deals with 64-bit numbers.
Two more practical notes. First, be very careful with HDF5 references. You must close them when you are finished with them, or the HDF5 file will remain open even after you exit LabVIEW. You can sledgehammer the problem away by calling H5close.vi, but this shuts down the entire HDF5 runtime engine, cutting off any other users at the same time. Second, HDF5 is not thread-safe. Make sure you don't try to use it simultaneously from two locations in your code (very easy to do in LabVIEW); you will get errors and could corrupt your data.
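On the first note, one way to catch leaked references before they lock the file is the standard H5Fget_obj_count call, which reports how many identifiers are still open against a file. A small C sketch (close_file_checked is my own hypothetical helper name):

    #include <stdio.h>
    #include "hdf5.h"

    /* Warn about identifiers you forgot to close, then close the file. */
    void close_file_checked(hid_t file)
    {
        ssize_t n = H5Fget_obj_count(file, H5F_OBJ_ALL);
        if (n > 1)  /* the file identifier itself counts as one */
            fprintf(stderr, "warning: %ld HDF5 references still open\n",
                    (long)n);
        H5Fclose(file);
    }

This makes the leak visible instead of silent, which is much easier to debug than a mysteriously locked file.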
Don't expect to learn HDF5 quickly. It is a very complex and low-level API. However, it can do just about anything you would want in a binary file API, so it is definitely worth the effort. Once you figure it out, you will wonder how you ever did without it. Good luck. Let me know if you have problems. The HDF5 helpdesk is also fairly responsive (within 24 hours) and highly informative if you hit a sticky spot.