Here is a short overview of what you will need to get started. HDF5 is composed of several subsystems. The ones you will need to worry about are File, Dataset, Dataspace, Datatype, and Property.
All of these subsystems create objects that must be closed. Even though the root API is in C, it has a lot of object-oriented character: if you want a file, you create it, and you have to close it; if you want a Property, you create it and close it when you are done with it. Unclosed references are the single most frustrating part of working with HDF5. They keep the run-time running and files open. A file does not close until all references into it are closed. You can close the file reference itself, but if you still have a reference to, for example, a group in the file, the file stays open. Since the run-time is a separate process, it keeps running and the file stays open even if you exit the calling application. The cure for this is to call H5close. This shuts down everything and is invaluable during development. Don't do it in shipping code, though, as it will close the HDF5 run-time for all applications using it.
Now for a quick run-down of the subsystems you need. Property is used by all of the above to govern how things are created/used. For example, if you want compression in your data, use the property input of Dataset to do this. Properties must be created and closed when you are done with them.
File is the subsystem which governs opening and closing the "file". Properties for file include things such as buffered/unbuffered and RAM disk vs. normal. For basic use, you won't need to set any properties.
Datatype governs the strict typing of your data. There are a large number of predefined data types. These are sufficient for normal use. You don't have to close the predefined types.
Dataspace governs the dimensionality of your data. This includes both the current size and the maximum size. Since you are streaming, you will want to set the maximum size to infinity.
Dataset is the generic data object. This can be anything from a simple scalar to a 100-dimensional data set with billions of data members. To create a dataset, you will need to set its properties (chunking and compression), its dataspace in memory and on disk, and its datatype.
Note that the Intermediate VIs take care of a lot of the overhead for this.
To stream, at each iteration, you need to do the following:
- Extend the Dataset so it is large enough for your new data
- Get the dataset's updated dataspace (this gives you a new reference)
- Select the hyperslab of the dataspace you will be writing into. It is called a hyperslab because it can be multi-dimensional and non-contiguous. You don't need that power, but it is there. It is really just a subset of the data. Use this dataspace as the file dataspace when you write to the dataset.
- Create a new dataspace (or faster, resize an existing one) that corresponds to the size of the data you will be writing to the dataset.
- Now write the data. The main inputs are your data, the datatype (which you should have from when you first created the dataset), the file dataspace, and the memory dataspace.
- Close anything you opened.
This may not make lots of sense at the moment, but hang in there. You may want to work through the HDF5 tutorial using the LabVIEW VIs (some are a bit different due to LabVIEW interface differences). If you are still having trouble, let me know. The learning curve is very steep, and I may have just confused you more than helped at this point. Good luck.