DataBlAppend takes long time on registered data

Greetings! I'm using DIAdem 2012 on a Win7/64-bit computer (16 GB memory and a solid-state drive). I work with one TDMS file at a time, but that file can be up to 8 GB, so I bring it into the Data Portal via the Register Data option. The file contains about 40 channels, each with about 50M data points. If it matters, the data type of each channel is U16, with appropriate scaling factors in the channel parameters.

I display one channel in VIEW, and my goal is to set the two cursors on either side of an "event" and then copy the segment of data between the cursors to a new channel in another group. Actually, there are about ten channels from which I want to copy exactly the same segment into ten new channels. This is the standard technique for programmatically "copying flagged data points", i.e. reading and using the X1/X2 cursor positions. I am using DataBlAppend to write these new channels (I have also tried DataBlCopy with identical results). My VBS script works exactly as I desire. The new channel group containing the segments will be written out as a TDMS file by another script.
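
In outline, the copy loop looks something like this (a simplified sketch, not my exact script; the group and channel names are placeholders, and CurPosX1/CurPosX2 are assumed to already hold the cursor row positions):

    ' Copy the rows between the VIEW cursors from each registered
    ' source channel into a pre-created empty target channel.
    Dim i, CpyS, CpyT
    For i = 1 To 10
      CpyS = "RawData/Channel_" & i       ' source channel (placeholder name)
      CpyT = "Segments/Channel_" & i      ' target channel (placeholder name)
      Call DataBlAppend(CpyS, CurPosX1, CurPosX2 - CurPosX1 + 1, CpyT)
    Next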

Copying out "small" segments takes a certain amount of time but copying larger segments takes an increasing amount of time, i.e. the increase is not linear.  I would like to do larger segments but I don't like waiting 20-30 minutes per segment.  The time culprit is the script line "Call DataBlAppend (CpyS, CurPosX1, CurPosX2-CurPosX1 +1, CpyT)" where CpyS and CpyT are strings containing the names of the source and target channels respectively (the empty target channels were previously created in the new group). 
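
For anyone reproducing this, a coarse way to time the call is the standard VBScript Timer function, along these lines (just a sketch):

    ' Coarse timing around the copy call. Timer is plain VBScript
    ' (seconds since midnight), which is plenty at this time scale.
    Dim t0
    t0 = Timer
    Call DataBlAppend(CpyS, CurPosX1, CurPosX2 - CurPosX1 + 1, CpyT)
    Call LogFileWrite("DataBlAppend took " & Round(Timer - t0, 1) & " s")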

My question is: is there a faster way to do this within DIAdem? The amount of data being written to the new group currently ranges from 20-160 MB, but I need to be able to write up to 250 MB. TDMS files of this size normally load and save quite quickly on this computer, so what is slowing this process down? Thanks!

Message 1 of 4

Hi Chris,

 

Your results surprise me; to my knowledge there is no faster way to copy row sections from channels loaded in the Data Portal. One alternative approach that might be faster is to use DataFileLoadRed() with the same row range you're passing to DataBlCopy().
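
The idea, sketched loosely (the exact DataFileLoadRed() argument list is in the DIAdem help, so please check it there; the variable names here are only illustrative):

    ' Derive the row range from the VIEW cursors and hand it to
    ' DataFileLoadRed so that only this block is read from disk.
    Dim StartRow, NoRows
    StartRow = CurPosX1                   ' first row of the event
    NoRows   = CurPosX2 - CurPosX1 + 1    ' rows between the cursors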

 

Brad Turpin

DIAdem Product Support Engineer

National Instruments

Message 2 of 4

Greetings, Brad!! 

 

I agree that DataBlCopy is fast when working from channels loaded in the Data Portal, but the TDMS file I am working with is only registered in the Portal. I do not know exactly why that makes a difference, except that DIAdem must go out to disk to read each channel. The function DataBlCopy (or Append) is a black box to me, so I was hoping for some insight as to why it behaves this way under these circumstances.

However, your suggestion to try DataFileLoadRed() may bear fruit! I wrote a little demo script to copy out a "large" segment from an 8GB file registered in the Portal using DataFileLoadRed, and it is much, much faster! It felt a little odd selecting "IntervalCount" as the interval type, setting the number of intervals equal to the total number of data points between my begin and end points, and choosing "eInterFirstValue" as the reduction method, but the results speak for themselves. I will need to do some thorough checking to verify that I am getting exactly the data I want, but DataFileLoadRed does look promising as an alternative. Thanks!
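
To spell out why that parameter combination should be lossless (the channel name below is a placeholder):

    ' With N points between the begin and end points and
    ' IntervalCount = N, each interval holds exactly one point, so
    ' eInterFirstValue returns every original value: the "reduction"
    ' is the identity, implemented as a fast block read from disk.
    Dim NoPoints
    NoPoints = CurPosX2 - CurPosX1 + 1
    ' Quick sanity check on one loaded channel (placeholder name):
    If ChnLength("Segments/Channel_1") <> NoPoints Then
      Call LogFileWrite("Unexpected channel length after DataFileLoadRed")
    End If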

 

Chris

Message 3 of 4

Brad,

 

I have been able to verify that copying the data using DataFileLoadRed() gives me exactly the same data as DataBlCopy or DataBlAppend. It is also much faster! As a point of comparison, I registered a 6GB file and then copied the same 600k-row segment using both methods: DataBlCopy took 140 seconds to perform the copy, while DataFileLoadRed took 0.5 seconds. This is a stunning difference, and it will dramatically change for the better how I have been dealing with these datasets.

I had never considered using DataFileLoadRed() for this purpose because it was not clear the function could be used to do this type of "reduction". When I first considered your suggestion, I admit it was not intuitively obvious how to set it up to load from an already registered data file or to load contiguous data.

Thanks, Brad!  This will make a huge difference when I start copying out 1M-3M row segments.

 

Chris

Message 4 of 4