
How do I recover datalog records written beyond the 2 GB limit?

Solved!

One of the Known Issues published by NI for LabVIEW versions 8.x, 2009, & 2010 is that datalog files are limited to 2 GB.  What is not explained is that Write Datalog will dutifully append records to a datalog file beyond the 2 GB limit without generating an error but then Read Datalog returns error code 4 (end of file encountered) when you try to access those same records.  Unfortunately, this problem went undetected in one of my applications for several weeks and I now have several hundred Production Quality Assurance records which are inaccessible.  This puts me in a very bad position with my Manufacturing clients as you can imagine.

 

Has anyone already developed a solution for recovering datalog records written beyond the 2 GB limit?

 

A strategy I have been pursuing is to split this datalog file into two parts, with each part being less than 2 GB in size, using the binary file IO VIs, which readily read past the 2 GB limit.  I can successfully extract the bytes for each variable-length record because I can recognize my sequential Record IDs in the first field of the record clusters, but I have not yet successfully reconstituted the datalog headers for the files due to my ignorance of the datalog file specification.  The first reconstituted file is readable using datalog file IO when I simply reuse the original datalog header and change byte locations 0008:000B to hold the revised number of records, although I know parts of the header are still "not quite right."  But I am receiving error 116 (Unflatten or byte stream read operation failed due to corrupt, unexpected, or truncated data) when I use this same strategy with the second file containing the latter records.  Specifically, I believe I also need to revise byte locations 000C:000F, which seem to contain information that is a function of the file size.  Hopefully I don't have to touch bytes 002C:0227, which contain the record type information.
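For concreteness, the header patch described above amounts to something like the following, with Python standing in for the block diagram.  The only offset I am relying on is the record count at 0008:000B, which I found by inspection of my own file; the header length and file names are placeholders.

    import struct

    # Sketch of the header patch (Python used only as pseudocode).
    # Assumes the original header runs through byte 0x227, as in my file.
    HEADER_LEN = 0x228

    with open("original.dat", "rb") as src:
        header = bytearray(src.read(HEADER_LEN))

    new_record_count = 1234                      # records actually placed in the split file
    struct.pack_into(">I", header, 0x08, new_record_count)

    with open("part1.dat", "r+b") as dst:
        dst.write(header)                        # overwrite the copied header in place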

 

I also thought I might create an empty datalog file (of the correct type) first and then save the recovered records using Write Datalog, after applying Unflatten From String or Type Cast to the binary data using my custom record type, but this consistently generated error 116.

 

Is there a better strategy to solve my problem?  If not, can an AE or anyone else provide some more details on the datalog file format so that I can properly reconstitute the datalog file header information?

 

Thanks in advance.

 

Larry

Message 1 of 4

Just sharing an idea (that may or may not help).

 

The file still gets written because the OS keeps the file pointer in the proper form.

 

I THINK you can just read one byte at a time and let the OS track the pointer for you.
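Something along these lines (Python standing in for G, and the file name is just a placeholder); the point is only that the OS-level reads track a 64-bit offset, so nothing special happens at 2 GB:

    # Minimal sketch: read an arbitrarily large file in chunks and let the OS
    # advance the file pointer for you.
    CHUNK = 100_000

    with open("big_datalog.dat", "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            # ... scan the block for whatever you are after ...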

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion, Knight of NI and Prepper.
Message 2 of 4

Yes, you are right Ben.  I can use binary file IO to read the datalog file (> 2 GB in size) one byte or one chunk of bytes at a time, and in fact this is what I have done to recover records as a byte stream, but my goal now is to re-incorporate these bytes into two validly formatted datalog files (each < 2 GB in size) so that the records are once again readable using the datalog file IO in my application.  My hang-up is in creating a valid datalog header without knowing the details of the datalog file format specification.

 

Or if I could convert the array of bytes I read for each record into a valid cluster matching my custom typedef, then I could simply use the LabVIEW datalog file IO routines to build the pair of datalog files.  I was dismayed when I received an incompatibility error (116?) when I flattened the byte array to a string and then tried unflattening the string into my custom cluster typedef.  When I did this, I believe I had wired FALSE constants to "prepend array or string size" and "data includes array or string size".  Maybe I should try prepending the byte array with its I32 array size and wiring TRUE constants instead.
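In case it helps anyone reading along: as I understand it, "prepend array or string size" simply controls whether a big-endian I32 length is written ahead of the data, so the settings on the Flatten and Unflatten sides have to agree.  A rough illustration of the two layouts, with Python standing in as pseudocode and made-up payload bytes:

    import struct

    payload = b"\x00\x00\x04\xD2" + b"\x11" * 8      # made-up flattened record bytes

    # "prepend array or string size" = TRUE: a big-endian I32 length is written
    # ahead of the raw bytes.
    with_size = struct.pack(">I", len(payload)) + payload

    # FALSE: the stream is just the raw bytes.
    without_size = payload

    # If Unflatten From String expects the opposite layout, the first four bytes
    # are read as data (or as a bogus length), which is one way to end up with
    # error 116.
    print(with_size.hex(" "))
    print(without_size.hex(" "))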

Message 3 of 4
Solution
Accepted by topic author Larry_Stanos

I found a solution.

 

These are the steps I took to split a large datalog file (> 2 GB) into two smaller datalog files (each < 2 GB) so that all records could once again be accessed by an application that employs datalog file IO.

 

  1. Opened the datalog file as a byte stream file using Open/Create/Replace File.
  2. Searched the file for a byte sequence matching the Record ID of the first record to extract using Read from Binary File. (I knew from my datalog cluster type that this U32 Record ID marked the beginning of a record.  For efficiency reasons, I read 100,000-byte blocks each time and kept track of the byte offset location in the file.)
  3. Searched the file for a byte sequence matching the next Record ID (or EOF if this was the last record) to determine the size of the current record.
  4. Read the current record into a byte array using Read from Binary File in conjunction with Set File Position which works properly for files up to 4 GB in size.
  5. Repeated steps 2 - 4 for the remaining subset of records targeted for one of the new datalog files.
  6. For each byte array, flattened the byte array to a string (prepend array size = False), and then unflattened the binary string (data includes array size = False) according to my datalog record type.
  7. Saved each unflattened record (cluster) to the new datalog file.

 

Using the above procedure, I was able to circumvent the need to create my own datalog file headers so I did not need an understanding of the datalog file format specifications.
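
In case it saves someone else the effort, here is a rough sketch of the scanning logic in steps 2 through 4, written in Python only because I can't paste a block diagram as text.  The big-endian U32 Record ID at the start of each record matches my file; the ID range, block size, and file name are placeholders.

    import struct

    BLOCK = 100_000                                # 100,000-byte blocks, as in step 2

    def find_record_id(f, record_id, start_offset):
        """Scan forward from start_offset for the 4-byte big-endian Record ID.
        Returns the absolute offset of the match, or None if end of file is hit."""
        pattern = struct.pack(">I", record_id)
        offset = start_offset
        tail = b""                                 # carry 3 bytes so a match can span blocks
        while True:
            f.seek(offset)
            block = f.read(BLOCK)
            if not block:
                return None
            data = tail + block
            hit = data.find(pattern)
            if hit != -1:
                return offset - len(tail) + hit
            tail = data[-3:]
            offset += len(block)

    with open("big_datalog.dat", "rb") as f:
        first_id, last_id = 1000, 1500             # placeholder range of Record IDs to extract
        pos = find_record_id(f, first_id, 0)
        assert pos is not None, "first Record ID not found"
        records = []
        for rid in range(first_id, last_id + 1):
            nxt = find_record_id(f, rid + 1, pos + 4)          # next Record ID, or None at EOF
            end = nxt if nxt is not None else f.seek(0, 2)     # next record start, or EOF offset
            f.seek(pos)
            records.append(f.read(end - pos))                  # raw bytes of one record (step 4)
            if nxt is None:
                break
            pos = nxt
    # Each entry of "records" is then unflattened to the record cluster and written
    # to the new (< 2 GB) datalog file, as in steps 6 and 7.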

Message 4 of 4