
Why the TDMS file is larger than it should be




I'm writing data simultaneously from two analog channels to a TDMS file at a sample rate of 10 kHz, using a 12-bit NI PCI-6111 DAQ card.


A simple calculation:


10,000 samples/s × 12 bits × 2 channels = 240,000 bits/s = 30 kB/s = 1.8 MB per minute.


My recording is 766 seconds long, so it should take: 30 kB/s × 766 s ≈ 23 MB.


But my TDMS file takes 123 MB! And I have problems processing such big TDMS files; for example, I don't have enough memory for JTFA analysis. Where's the problem?
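The arithmetic in the question can be checked directly. A minimal sketch (note that it assumes samples could be packed at the card's native 12-bit resolution, which, as the replies below point out, TDMS does not actually do):

```python
# Expected recording size IF samples were stored at the card's
# native 12-bit resolution (the original poster's calculation).
sample_rate = 10_000      # samples per second, per channel
bits_per_sample = 12      # native resolution of the PCI-6111
channels = 2
duration_s = 766

bits_per_second = sample_rate * bits_per_sample * channels
bytes_total = bits_per_second * duration_s // 8

print(bits_per_second)       # 240000 bits/s
print(bytes_total)           # 22980000 bytes, i.e. ~23 MB
```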


Best regards


Message 1 of 20



Each time a data point is written to a TDMS file, header information is written along with the data.  The downside is that you can get a lot of redundant header information, resulting in a very large file.  The upside is that it lets you write to TDMS files much faster, without having to search through the file for the appropriate header each time a data point is written.  Once you have a TDMS file created, you can reduce its size with TDMS Defragment.


Here is an article that gives some more information about this topic:


Chris M 

Message 2 of 20

I think you store the data as doubles (8 bytes per value).

TDMS doesn't support 12-bit data, so the smallest storage you can use is 16 bits.


What is JTFA? Can't you do it in chunks?



Message 3 of 20

Dear Chris,


I know this article, and I followed it. My NI_MinimumBufferSize is set to 1,000,000. When I use TDMS Defragment, the file size decreases by only about 400 bytes (yes, bytes). The size of the TDMS index file is 1,487 bytes. So I don't think this is the cause...


Dear Tom,


I think you are right. My TDMS file is exactly 122,561,487 bytes. If we assume that LV writes each data point as a 64-bit number, then we have:


64 bits × 10,000 samples/s × 2 channels × 766 s ÷ 8 (bits to bytes) = 122,560,000 bytes.


And as I wrote, my header is 1,487 bytes, so 122,560,000 + 1,487 = 122,561,487, which is exactly the size of my TDMS file!
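This accounting can be verified in a couple of lines; the byte count from 64-bit (DBL) samples plus the stated header size matches the observed file size exactly:

```python
# Verify the 64-bit (DBL) accounting against the observed file size.
bits_per_sample = 64      # LabVIEW DBL is a 64-bit float
data_bytes = bits_per_sample * 10_000 * 2 * 766 // 8
header_bytes = 1_487

print(data_bytes)                  # 122560000
print(data_bytes + header_bytes)   # 122561487, the observed TDMS file size
```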


If that is the case, how do I force LV to write the data as 16-bit numbers?


I have problems with the Gabor Transform on more than 80,000 samples.




Message 4 of 20

Ton, I'm very sorry, I wrote Tom...


JTFA is Joint Time-Frequency Analysis. It is in the Advanced Signal Processing Toolkit. I cannot do it in chunks.



Message 5 of 20

Wire a 16-bit waveform into the TDMS Write VI and it will write 16-bit data.  From the LabVIEW Help, TDMS Write accepts the following:

  • Waveform or a 1D array of waveforms
  • Digital table
  • Dynamic data
  • 1D or 2D array of:
    • Signed or unsigned integers
    • Single, double, or extended precision numerics
    • Alphanumeric strings that do not contain null characters
    • Timestamps
    • Booleans
If you need compression, NI-HWS will do it for you.


Message 6 of 20
Accepted by topic author kacperek



You're right, I was writing voltage values in volts as a waveform (DBL) from DAQmx, but a DBL number is 64 bits; that's why my files were so large.


Now I'm writing unscaled data represented as I16 (16-bit integers) and everything is OK.


It's possible to further reduce the file size using a DAQmx Channel Property Node -



0 Kudos
Message 7 of 20



Have you tried to play with AI.RawDataCompressionType and AI.LossyLSBRemoval.CompressedSampleSize?


Right now I try to acquire data using DAQmx from DSA-4472. The first step was to log data as an 2D I32 instead of Wfm 1D DBL. I also consider lossy and lossless compression provided by DAQmx, but one says that this can be applied only in case of Raw data acquisition.


I am wondering if you know how to acquire data with lossless and lossy compression and save it as an unscaled I32 in TDMS file? Afterwards I would like to open files in Matlab using Dominonilibddc (nilibddc.dll).





0 Kudos
Message 8 of 20

Note that as of DAQmx 9.0, there's a built-in feature in DAQmx that will log the unscaled data to TDMS with scaling information.  That is, you just call "DAQmx Configure Logging" with a file path.  From there, DAQmx will take care of writing the data to a TDMS file for you.


When you go to open the TDMS file, it will be scaled for you; that is, DAQmx will write all of the scaling information in the header of the TDMS file and then TDMS knows out to read the scaling information and apply it in the same exact way as DAQ would have.  This means you do not need to worry about storing scaling coefficients or making sure that you are scaling correctly.  This is the fastest possible way to stream to disk and results in a small file size.


Let me know if you have any questions on this feature.


Andy McRorie
0 Kudos
Message 9 of 20

Hi Andy,


Thank you for your fast response!


I have tried "DAQmx Configure Logging". There are few delimitation that I cannot find any work around.


My measurements set-up is for long-term measurements (few weeks) and I save data in separated TDMS files every one minute. In order not to loose any samples I use Producer/Consumer structure. As far as I know DAQmx-built logging does not support such kind of feature. Am I correct? Due to this fact I use TDMS API to save one-minute files.


I cannot find how to define number representation using "DAQmx Configure Logging". Is it I32, SGL, DBL or something else? I am not really sure how the scaling meta data work in TDMS. As far as I understand from your previous post data in TDMS files saved using DAQmx 9.0 are unscaled integers which are scaled automatically by scaling coefficient when I display it using TDMS viewer. When I load TDMS files saved in a way you suggested in Matlab using nilibddc.dll there are also scaled, is it implemented in the library to scale or it works slightly different?





0 Kudos
Message 10 of 20