Multifunction DAQ


DAQmx Logging New Features - Split files, non-buffered logging, and pause/resume

DAQmx 9.3 introduces extended capabilities for integrated logging: splitting an acquisition into multiple files, logging non-buffered tasks, and pausing/resuming logging.

 

Splitting acquisition into multiple files

There are two ways to split files:

 

  • Set the Samples per file property on the DAQmx Read property node, or configure this in the DAQ Assistant. With this option, DAQmx will automatically create a new file at the specified sample interval, using the naming convention <filename>_000n (e.g., "c:\test.tdms", "c:\test_0001.tdms", ...).  With this approach, you can also specify the next file name to use at any time during the acquisition.
  • Call DAQmx Start New File at any time to tell DAQmx to create a new file with a specified path.
Typical reasons to split an acquisition into multiple files are to (a) keep individual files at a manageable size (for things like FTP transfer) and (b) reduce the risk of corruption over an extended period of time (for example, if a power failure prevents your file system from recovering some sectors).
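As an illustration of the automatic naming convention described above, here is a small Python sketch (the helper name is mine; this is not part of the DAQmx API) that reproduces the file-name sequence:

```python
import os

def split_file_names(base_path, count):
    """Generate the file-name sequence DAQmx uses when splitting files:
    the original path first, then <name>_0001, <name>_0002, ... with the
    same extension."""
    root, ext = os.path.splitext(base_path)
    names = [base_path]
    for i in range(1, count):
        names.append(f"{root}_{i:04d}{ext}")
    return names

# First three files for a base path of "c:/test.tdms":
print(split_file_names("c:/test.tdms", 3))
# ['c:/test.tdms', 'c:/test_0001.tdms', 'c:/test_0002.tdms']
```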

 

Log non-buffered tasks
On demand and hardware-timed single point tasks can now use the integrated logging feature.

 

Pause/Resume
Set the Logging.Paused attribute on the DAQmx Read property node to specify whether to pause or resume logging.  This is useful for conditional logging applications.
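In a conditional-logging application, the pause decision itself is ordinary application logic; here is a minimal, hypothetical trigger function in Python (the name and threshold are mine), whose result would be written to the Logging.Paused property between reads:

```python
def should_pause_logging(samples, threshold):
    """Hypothetical trigger logic: pause logging while the peak absolute
    value of the latest chunk stays below a threshold, and resume once
    the signal becomes interesting again."""
    return max(abs(s) for s in samples) < threshold

quiet = [0.01, -0.02, 0.015]
burst = [0.01, 1.8, -2.3]
print(should_pause_logging(quiet, 0.5))  # True  -> pause logging
print(should_pause_logging(burst, 0.5))  # False -> keep logging
```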

 


These features have been discussed multiple times on the forums, so I wanted to make sure that people were aware of their existence.  Let us know if you have any questions or additional feature requests for logging.

 

Thanks,

Andy McRorie
NI R&D
Message 1 of 18

Hi Andy,

 

 

It seems like this isn't accessible in SignalExpress or in the .NET wrapper. Is this feature intended primarily for LabVIEW users, or will it eventually be available in other development environments?

- Regards,

Beutlich
Message 2 of 18

Hi Eric,

 

The new method StartNewFile has been added to the Task class.

 

The new attributes/properties added to the DaqStream class are LoggingPause and LoggingSamplesPerFile.

 

 

 

Thanks,

Canisius

Message 3 of 18

Signal Express has its own suite of logging features.  Since it might be confusing to have two different ways to log data in the Signal Express environment, we decided not to show DAQmx-specific logging support there and simply rely on the environment's features.

 

For the .NET API, as noted by Canisius above, these features are available in the referenced classes (and they are also available in our DAQmx C API).

Thanks,

Andy McRorie
NI R&D
Message 4 of 18

Hi Andy,

 

Nice to hear from you again.

 

I am glad that this feature is now implemented, and thanks for notifying me about it. Until now I used a producer-consumer queued structure to run the threads responsible for data acquisition and data logging asynchronously. For my sample rate and number of channels it was definitely sufficient. This asynchronous execution allows me to do other things during measurements. For example, sometimes I need to free some disk space, and I can simply move data to another location on the same hard drive where data is being logged, without any interruption. How about this new DAQmx logging feature? Does it contain an additional buffer to protect against unwanted logging interruptions caused by limited storage-device performance?

 

Thanks!

--

Łukasz Kocewiak

Message 5 of 18

Hey Łukasz,

 

In DAQmx, data is already DMA'd into an internal circular buffer, so it is already a producer-consumer pattern.  In your application, adding a queue has an effect similar to simply increasing that circular buffer's size.  Usually there is no need to use a producer-consumer architecture with DAQmx data, since it effectively already uses one.  Most MI products, however, DMA directly into a LabVIEW buffer; in that case there is no additional buffering, and producer-consumer can help tremendously.
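For readers unfamiliar with the pattern, the producer-consumer decoupling discussed above can be sketched in plain Python, with a bounded queue standing in for DAQmx's internal circular buffer and small lists standing in for acquired data chunks (all names here are illustrative):

```python
import queue
import threading

def acquire(chunks, q):
    """Producer: push acquired chunks into the queue (stands in for DAQmx Read)."""
    for chunk in chunks:
        q.put(chunk)
    q.put(None)  # sentinel: acquisition done

def log(q, logged):
    """Consumer: drain the queue and 'log' each chunk (stands in for a file write)."""
    while (chunk := q.get()) is not None:
        logged.append(chunk)

q = queue.Queue(maxsize=8)  # bounded queue ~ circular buffer size
logged = []
producer = threading.Thread(target=acquire, args=([[1, 2], [3, 4], [5, 6]], q))
consumer = threading.Thread(target=log, args=(q, logged))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(logged)  # [[1, 2], [3, 4], [5, 6]]
```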

 

To address your particular scenario, it sounds like you are occasionally moving files in order to free up space.  If that is your goal with this, you have a couple different ways to do this with the logging feature.

1) If you specify that DAQmx should split files every so often, you can just have another loop that will run to occasionally move older files that have been closed out.

2) You can change the path to which DAQmx is writing explicitly; that is, every so often, tell DAQmx to start writing files to a new location.  Of note, there is a particular feature we added with the "SampsPerFile" logic wherein you can specify a new location while keeping the same numbering scheme and samples per file.  Let's say that you told DAQmx to create a new file every 2000 samples (by using the SampsPerFile attribute) and specified c:\temp\test.tdms for your file path.  It will write to c:\temp\test.tdms initially and then create c:\temp\test_0001.tdms.  If at any point you want it to start writing to a new drive, you can do one of three things:

a. Call DAQmx Start New File and specify a completely new file path.  In this case, it will start back over with <fileName>.tdms and then <fileName>_0001.tdms.  This will take effect at the next call to DAQmx Read, even if 2000 samples have not yet been written to the current file.

b. Set the FilePath attribute again to something like d:\temp\newLocation.tdms.  In this case, it will start back over on the numbering, but it will wait until the current file has been "filled".

c. Set the FilePath attribute again to something like d:\temp/ (directory ending in a slash).  When you do that, it will continue from where it left off, but going to the new location: d:\temp\test_0002.tdms.  It will wait until the current file has been "filled".
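As a rough model of the renaming behaviors described above, here is a small Python helper (purely illustrative, not any DAQmx API; DAQmx does this internally) that computes the next file name in the sequence:

```python
import os

def next_split_file(current_base, next_index, new_path=None):
    """Model of the FilePath repointing behavior:
    - new_path is None: continue the current numbered sequence.
    - new_path ends in a path separator (case c): keep the current base
      name and numbering, but continue in the new directory.
    - new_path is a full file path (cases a/b): numbering restarts at
      that name."""
    if new_path is None:
        root, ext = os.path.splitext(current_base)
        return f"{root}_{next_index:04d}{ext}"
    if new_path.endswith(("/", "\\")):
        root, ext = os.path.splitext(os.path.basename(current_base))
        return f"{new_path}{root}_{next_index:04d}{ext}"
    return new_path  # sequence starts over at <fileName>.tdms

# After c:/temp/test.tdms and c:/temp/test_0001.tdms have been written:
print(next_split_file("c:/temp/test.tdms", 2))              # continue in place
print(next_split_file("c:/temp/test.tdms", 2, "d:/temp/"))  # case c: new drive
```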

 

I know that this is a bit confusing and perhaps crazy, but we wanted to supply as much customization as possible.  Let me know if you have any questions.

Thanks,

Andy McRorie
NI R&D
Message 6 of 18

Hi Andy McRorie and Team,

 

My requirement is to log each one minute of data in a single TDMS file (irrespective of the sampling rate).

To achieve this, I want to understand the "Logging - Samples per File" property node value.

The value I pass gets coerced (aligned) to some number "X".

For example, I requested 120000000 samples per file, but the coerced value is 120171520 samples per file.

What is the relation between these two numbers, and how can I predict the coerced value programmatically?

 

Thank you,

Yogesh Redemptor
Message 7 of 18

Hey Yogesh,

 

I'm assuming that you are using "Log Only" mode (as opposed to "Log and Read").  If that assumption is correct, the reason why you are seeing this coercion is that "Log Only" mode is super-optimized for streaming to disk.  Therefore, every write to disk has to be sector-aligned.  Sector sizes are in powers of 2, and the typical sector size is 512.  When you request a certain size per file, it can't split a single write operation into multiple files; therefore, the samples per file has to be evenly divisible by the file write size.  You can set the file write size attribute as desired; however, that value must be divisible by the sector size.
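Assuming the coercion simply rounds the requested value up to the next multiple of the file write size (my inference from the description above, not official documentation), it can be predicted in a couple of lines of Python:

```python
def aligned_samples_per_file(requested, file_write_size):
    """Round the requested samples-per-file up to the next multiple of
    the file write size, since a single disk write cannot span two files."""
    return (requested + file_write_size - 1) // file_write_size * file_write_size

# Reproduces the coercion reported in this thread
# (requested 120000000 with the default file write size of 250880):
print(aligned_samples_per_file(120_000_000, 250_880))  # 120171520
```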

 

If you don't care about optimal performance (that is, speeds >200 MB/s), you can use "Log and Read" mode.  In Log and Read mode, you completely control the write size because you are calling DAQmx Read yourself, so you can set Samples Per File to whatever you like.  In that case, you would want each DAQmx Read to request a number of samples that divides evenly into 120000000.
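For example, using the 2 MS/s rate and one-minute files discussed in this thread (the particular read size is an arbitrary choice of mine), picking an evenly dividing read size is simple arithmetic:

```python
rate = 2_000_000                             # samples/s
seconds_per_file = 60
samples_per_file = rate * seconds_per_file   # 120,000,000 samples per file

read_size = 200_000                          # must divide samples_per_file evenly
assert samples_per_file % read_size == 0
print(samples_per_file // read_size, "reads per file")  # 600 reads per file
```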

 

Let me know if you have additional questions or clarification.

Thanks,

Andy McRorie
NI R&D
Message 8 of 18

Dear Andy McRorie,

 

Thank you for your reply.

 

Yes, I am using "Log Only" mode for TDMS data logging. Since the sampling rate is high (2M Samples/s), I am concerned about optimization.

The default File Write Size is 250880 samples. I have now tried to use "File Write Size" together with "Samples per File", but it did not work out:

Sampling Rate   File Write Size   Requested          Aligned
(Samples/s)     (Samples)         Samples per File   Samples per File
2000000         250880            120000000          120171520
2000000         250880            60000000           60211200
2000000         4096              60000000           60002304
2000000         2048              60000000           60000256

One more thing I did was to use only values divisible by 2 or powers of 2; that did not help either.

I have noticed that the values suggested by DAQmx are not divisible by powers of 2, so perhaps the volume sector size is not a power of 2 either.

I do not fully understand the volume sector size, or perhaps the coerced values do not depend only on it.

Can you please help me sort out this problem?

 

Note: the code with which I collect the data is working, so the configuration is correct.

 

Thank you,

Yogesh Redemptor
Message 9 of 18

Hello Yogesh,

 

Just out of curiosity, have you tried modifying any pre-written VIs, such as the TDMS Streaming - Log and Read Data.vi found at https://decibel.ni.com/content/docs/DOC-11321, as a starting place? If you change the sample mode in this VI to continuous samples and set the number of samples to 60*rate (1 minute of data collection), you can essentially achieve TDMS streaming at high data-transfer rates without having to modify the property nodes or worry about memory-mapping issues.

 

Best,

Blayne Kettlewell

Message 10 of 18