DAQmx 9.3 introduces extended capability for integrated logging: splitting an acquisition into multiple files, logging non-buffered tasks, and pausing/resuming logging.
Splitting acquisition into multiple files
There are two ways to split files:
The new method StartNewFile has been added to the Task class.
The new attributes/properties LoggingPause and LoggingSamplesPerFile have been added to the DaqStream class.
It seems like this isn't accessible in SignalExpress or in the .NET wrapper. Is this feature intended primarily for LabVIEW users, or will it eventually become available in other development environments?
SignalExpress has its own suite of logging features. Since it might be confusing to have two different ways to log data in the SignalExpress environment, we decided not to show DAQmx-specific logging support there and simply rely on the environment's features.
For the .NET API, as noted by Canisius above, these features are available in the referenced classes (and they are also available in our DAQmx C API).
Nice to hear from you again.
I am glad that this feature is now implemented, and thanks for notifying me about it. Until now I have used a producer-consumer queued structure to asynchronously execute the threads responsible for data acquisition and data logging. For my sample rate and number of channels it was definitely sufficient. This asynchronous execution allows me to do other things during measurements: for example, sometimes I need to release some disk space, and I can simply move data to another location on the same hard drive to which data are being logged, without any interruption. How about this new DAQmx logging feature? Does it contain an additional buffer to protect against unwanted logging interruptions due to insufficient storage-device performance?
In DAQmx, data is already being DMA'd to an internal circular buffer. It is already a producer-consumer pattern. In your application, adding a queue would have a similar effect to just increasing that circular buffer size. Usually, there is no need to use a producer-consumer architecture with DAQmx data since it is effectively already using such an architecture. With most MI products, however, they DMA directly into a LabVIEW buffer. In that case, there is no additional buffering done, and producer-consumer can help tremendously.
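For readers less familiar with the pattern under discussion, the queued structure described above can be sketched as a bounded producer-consumer pair in plain Python. This is an illustration of the architecture only, not DAQmx code; the names, the queue size, and the sentinel convention are all made up for the sketch:

```python
import queue
import threading

def acquire(n_samples, buf):
    """Producer: stands in for the driver filling its circular buffer."""
    for i in range(n_samples):
        buf.put(i)          # in real DAQmx this is the DMA transfer
    buf.put(None)           # sentinel: acquisition finished

def log_to_disk(buf, sink):
    """Consumer: stands in for the logging loop draining the buffer."""
    while (sample := buf.get()) is not None:
        sink.append(sample)

buf = queue.Queue(maxsize=64)   # bounded, like the driver's circular buffer
data = []
producer = threading.Thread(target=acquire, args=(1000, buf))
producer.start()
log_to_disk(buf, data)
producer.join()
```

The bounded queue is the key point: as Andy notes above, DAQmx already provides an equivalent internal circular buffer, so adding your own queue on top mostly just enlarges the effective buffer.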
To address your particular scenario, it sounds like you are occasionally moving files in order to free up space. If that is your goal, you have a couple of different ways to do this with the logging feature.
1) If you specify that DAQmx should split files every so often, you can just have another loop that will run to occasionally move older files that have been closed out.
2) You can change the path to which DAQmx is writing explicitly; that is, every so often, tell DAQmx to start writing files to a new location. Of note, there is a particular feature that we added with the "SampsPerFile" logic wherein you can specify a new location while keeping the same numbering scheme and samples per file. Let's say that you told DAQmx to create a new file every 2000 samples (by using the SampsPerFile attribute) and specified c:\temp\test.tdms for your file path. It will write to c:\temp\test.tdms initially and then create c:\temp\test_0001.tdms. If at any point you want it to start writing to a new drive, you can do one of three things:
a. Call DAQmx Start New File and specify a completely new file path. In this case, it will start over with <fileName>.tdms and then <fileName>_0001.tdms. This takes effect at the next call to DAQmx Read, even if 2000 samples have not yet been written to the current file.
b. Set the FilePath attribute again to something like d:\temp\newLocation.tdms. In this case, it will start back over on the numbering, but it will wait until the current file has been "filled".
c. Set the FilePath attribute again to something like d:\temp\ (a directory path ending in a slash). When you do that, it will continue from where it left off, but in the new location: d:\temp\test_0002.tdms. It will wait until the current file has been "filled".
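The numbering behavior described in (a)-(c) can be modeled in a few lines of Python. This is an illustration only, not DAQmx code: the 4-digit suffix is inferred from the test_0001.tdms example above, and ntpath is used because the paths in question are Windows-style:

```python
import ntpath

def tdms_file_name(base_path, index):
    """Model of the SampsPerFile naming scheme: the first file keeps the
    configured name, subsequent files get a zero-padded _NNNN suffix."""
    root, ext = ntpath.splitext(base_path)
    return base_path if index == 0 else f"{root}_{index:04d}{ext}"

def redirect_to_directory(current_path, new_dir):
    """Option (c): a bare directory keeps the file name (and the running
    counter); only the location changes."""
    return ntpath.join(new_dir, ntpath.basename(current_path))
```

Options (a) and (b) would restart the index at 0 with the new base path; option (c) keeps counting with the old name, so tdms_file_name(redirect_to_directory(r"c:\temp\test.tdms", r"d:\temp"), 2) models the d:\temp\test_0002.tdms continuation described above.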
I know that this is a bit confusing and perhaps crazy, but we wanted to supply as much customization as possible. Let me know if you have any questions.
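Option 1 above (split the acquisition into files and sweep the closed ones elsewhere) can be driven by a small housekeeping loop. A minimal sketch, assuming the most recently modified file is the one DAQmx is still writing (the directory layout and that heuristic are assumptions, not part of the DAQmx API):

```python
import os
import shutil

def sweep_closed_files(src_dir, dest_dir, suffix=".tdms"):
    """Move every log file except the newest one (assumed still open)
    from src_dir to dest_dir. Returns the names of the files moved."""
    files = [f for f in os.listdir(src_dir) if f.endswith(suffix)]
    # Oldest first by modification time; leave the newest in place.
    files.sort(key=lambda f: os.path.getmtime(os.path.join(src_dir, f)))
    moved = []
    for name in files[:-1]:
        shutil.move(os.path.join(src_dir, name), os.path.join(dest_dir, name))
        moved.append(name)
    return moved
```

Running this periodically from a second loop, while DAQmx keeps creating new files via SampsPerFile, frees space on the logging drive without touching the acquisition.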
Hi Andy McRorie and Team,
My requirement is to log each one minute of data in a single TDMS file (irrespective of the sampling rate).
I do want to understand the "Logging - Samples per File" property node value.
The value which I am passing needs to be aligned to some number "X".
I am requesting "120000000 Samples per File"
But the corrected value is "120171520 Samples per File"
What is the relation between the two numbers? How can I achieve this number programmatically?
I'm assuming that you are using "Log Only" mode (as opposed to "Log and Read"). If that assumption is correct, the reason why you are seeing this coercion is that "Log Only" mode is super-optimized for streaming to disk. Therefore, every write to disk has to be sector-aligned. Sector sizes are in powers of 2, and the typical sector size is 512. When you request a certain size per file, it can't split a single write operation into multiple files; therefore, the samples per file has to be evenly divisible by the file write size. You can set the file write size attribute as desired; however, that value must be divisible by the sector size.
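The coercion in the question above is consistent with rounding the requested value up to a whole number of file-write-size blocks. A sketch of the arithmetic (the round-up direction is inferred from the two numbers quoted in the thread, not from documentation):

```python
def coerce_samples_per_file(requested, file_write_size):
    """Round the requested samples-per-file up to the nearest multiple of
    the file write size, so no single write is split across two files."""
    writes_per_file = -(-requested // file_write_size)  # ceiling division
    return writes_per_file * file_write_size

# With the default Log Only write size of 250880 samples:
print(coerce_samples_per_file(120_000_000, 250_880))  # 120171520, as observed
```

120,000,000 / 250,880 is about 478.3 writes, so DAQmx rounds up to 479 writes, and 479 x 250,880 = 120,171,520, which matches the coerced value reported above.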
If you don't care about optimal performance (that is, speeds >200 MB/s), you can use "Log and Read" mode. In Log and Read mode, you completely control the write size because you are calling DAQmx Read yourself, so you can set Samples Per File to whatever you like. You would then want each DAQmx Read to read a number of samples that divides evenly into 120000000.
Let me know if you have additional questions or need clarification.
Dear Andy McRorie,
Thank you for your reply.
Yes, I am using "Log Only" mode for TDMS data logging. Since the sampling rate is high (2 MSamples/s), I am more concerned about optimization.
The default "File Write Size" is 250880 samples. I have now tried to use "File Write Size" along with "Samples per File"; it did not work out.
[Table: Sampling Rate / File Write Size / Samples per File]
One more thing I did: I used values divisible by 2, or powers of 2; that didn't help either.
I have noticed that the values suggested by DAQmx are not powers of 2, so the "Volume Sector Size" is apparently not a power of 2!
Either I am not understanding "Volume Sector Size", or the values do not depend only on the "Volume Sector Size".
Can you please help me to sort out this challenge?
The code with which I have collected the data is in working condition, so the configuration is right.
Just out of curiosity, have you tried modifying any pre-written VIs, such as TDMS Streaming - Log and Read Data.vi found at https://decibel.ni.com/content/docs/DOC-11321, as a starting place? If you change the sample mode in that VI to continuous samples and the number of samples to 60*rate (1 minute of data), you can essentially achieve a TDMS stream at high data transfer rates without having to modify the property nodes or worry about memory-mapping issues.