LabVIEW

What is the format of Datalog files?

Oh frack, I thought this was closed.

If you use the Open/Create/Replace Datalog function, you will always get the same header. The rest of the data can be anything and is defined in the program that does the writing. In the example, the data is a cluster that consists of a string and a 1D array of SGL. It could just as easily be a 1D SGL array, a 1D DBL array, a cluster with several strings, or any number of other options. IT IS, HOWEVER, THE PROGRAMMER WHO DECIDES WHAT DATA SHOULD BE STORED AND IN WHAT FORMAT. The record type you wire to the Open/Create/Replace Datalog function can be anything, and this is where the format of the file is defined. You can find out how NI stores a SGL, DBL, I32, I16, array, or cluster, but there is no generic definition of a datalog FILE. There never was and never will be. Please stop asking and looking for one. The way to find the definition of a binary file format is to look at the program that created it. Each program that creates a binary file can be, and probably is, different in the way it defines the data that is being stored. It is the programmer's choice!

Also note that if the original programmer had decided to use the Write Binary and not the Write Datalog, the header would be completely different and the header format would be whatever the PROGRAMMER decided it should be.

I also keep saying that doing a read is simple if you have access to the write program. Look again at the read example. It simply uses the same record type from the write and everything after that is automatic. By using that defined data type, the Read Datalog function knows how to interpret the stored binary data.

Can we move on now? You are spending too much time on this and approaching it from the wrong end. If you had simply provided the code that was used to create the files and an example data file, the read VI could have been created in less time than I used to compose this response.

Message 11 of 17
Hi Dennis,
Thanks again for taking the time to reply.  I think that it is possible that you misunderstood (or I forgot to describe) my original intent for asking about the format of the datalog file.

I want to read a datalog file with code that I write outside of LabVIEW.

I can (and will have to) write my code so that it is aware of the format of the data generated by the LabVIEW application.  In the case of the Write Datalog File Example, my code is aware that the LabVIEW application wrote bundles of data consisting of a date/timestamp string followed by an array of SGLs.
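To make that concrete, here is a rough C sketch of how such a record could be parsed once the header has been skipped. It assumes the records follow LabVIEW's usual flattened-data conventions (big-endian byte order, an I32 length in front of the string, and an I32 element count in front of the 1D SGL array); whether datalog records are really stored that way, and how long the header is, is exactly the open question here, so treat it as a hypothesis rather than a specification.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Read a big-endian unsigned 32-bit integer. Returns 1 on success, 0 at end of file. */
int read_be_u32(FILE *f, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4)
        return 0;
    *out = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    return 1;
}

/* Reinterpret a big-endian 32-bit pattern as an IEEE-754 single (a LabVIEW SGL). */
float u32_to_sgl(uint32_t u)
{
    float v;
    memcpy(&v, &u, sizeof v);
    return v;
}

/* Read one record shaped like the example's cluster: a string followed by a
 * 1D array of SGL. Assumes the file position is already past the header, at
 * the first byte of a record. Returns 1 on success, 0 at end of file. */
int read_one_record(FILE *f)
{
    uint32_t slen, n, u;

    if (!read_be_u32(f, &slen))            /* assumed I32 length prefix of the string */
        return 0;
    char *text = malloc(slen + 1);
    fread(text, 1, slen, f);
    text[slen] = '\0';

    read_be_u32(f, &n);                    /* assumed I32 element count of the array */
    printf("timestamp: %s, %u samples\n", text, (unsigned)n);
    for (uint32_t i = 0; i < n; i++) {
        read_be_u32(f, &u);
        printf("  %g\n", u32_to_sgl(u));   /* SGL values assumed big-endian on disk */
    }
    free(text);
    return 1;
}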

You claim, "If you use the Open/Create/Replace Datalog function, you will always get the same header."  Where can I find the documentation that backs that statement up?  I know that the header will change if I wire in a different record type (even changing the label attached to the "empty array" constant in the example changes the header slightly).  That's OK.  I can live with that.

You also said, "You can find out how NI stores a SGL, DBL, I32, I16, array, or cluster..."  That is probably exactly what I'm looking for.  Can you tell me where that is documented?


I'm kinda stuck here.  I don't believe that I've been able to articulate my question very well.  I keep asking one question, and you keep answering a slightly different question.  I am aware that the layout of a datalog file will change as a function of the type of data that are written to it.  There are some specifications somewhere that govern how that layout changes.  All I'm asking is, "where can I find those specifications?"  I am not asking "what is the format of the datalog file generated by this or that particular application"; I am asking "what is the recipe that describes the format as a function of the types of data written?"

As an example, the "Read Datalog Example" is aware of that recipe.  It knows that, to read a datalog with a record type of "a bundle consisting of an empty string followed by an empty array labeled 'empty array'", it must seek 582 bytes into the file to start reading it.
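Outside LabVIEW, that recipe would boil down to something like the fragment below, where 582 is the header length observed for this particular record type (not a general constant) and read_one_record is the hypothetical parser sketched above.

#include <stdio.h>

int read_one_record(FILE *f);   /* the hypothetical parser sketched earlier */

int main(void)
{
    FILE *f = fopen("datalog_example.dat", "rb");
    if (!f)
        return 1;

    /* 582 bytes is the header length reported above for THIS particular
     * record type; it is an observation about the example, not a constant
     * of the datalog format. */
    fseek(f, 582L, SEEK_SET);

    while (read_one_record(f))   /* returns 0 at end of file */
        ;
    fclose(f);
    return 0;
}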

Please don't answer with "THE PROGRAMMER DECIDES THE FORMAT".  If you feel that that is the answer, then, once again, I have failed to explain the nature of my question and, once again, that answer won't satisfy me.


Having said that, I truly appreciate the time and effort you expended to try to help me.

Thanks again.

--wpd

Message 12 of 17

For information on particular data types (as opposed to information on datalogs), look at the Data > Storage topic in the on-line help. As for the header created by the Open/Create/Replace Datalog function specifically, I don't know if or where that is documented. I know the datalogs created from the front panel have changed from version to version, and I don't think the use of the Datalog functions is nearly as widespread as the ordinary Open/Create/Replace function, which can create a binary file with or without a header; whether a header is written, and what it contains, is entirely up to the programmer. In fact, I would recommend that the ordinary file read/write functions be used, because even if you get a definition of the datalog header, it is liable to change with a new release of LabVIEW.
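To make that recommendation concrete, a self-defined header for an ordinary binary file might look like the hypothetical C definition below. Every field name and size here is invented for illustration, which is exactly the point: with the plain binary file functions, the layout is yours to define and therefore easy to document for whatever program reads the file later.

#include <stdint.h>

/* A hypothetical, programmer-defined header for an ordinary binary file
 * written with the plain binary file functions. Nothing here is an NI
 * format; every field is the writer's own choice and can be documented
 * for whatever program reads the file later. */
struct my_header {
    char     magic[4];      /* e.g. "WPD1", identifies the file type      */
    uint32_t version;       /* format revision, written big-endian        */
    uint32_t record_count;  /* number of records that follow the header   */
    uint32_t record_bytes;  /* size of one record, if records are fixed   */
};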

And I guess I did not understand that you were not going to use LabVIEW to read these files. I think that is where the biggest misunderstanding was.



Message 13 of 17

wpd,

 

Your request is completely valid! I understand what you are trying to do and YES, this should be documented (albeit it might just be internal). I don't have a solution, but I have a perspective (ha, don't we all!) and perhaps a clarification of wpd's problem.

 

As with any file format, there is a specification which describes how data is stored, whether it's a lowly binary file (big-endian or little-endian, word size, etc.) or a much more abstracted level of data such as ASCII (one char = 8 bits), TDMS (http://zone.ni.com/devzone/cda/tut/p/id/5696) or even, dare I say, Datalog (still unknown).

 

Some might argue that we can't know the datalog format, but I think what they mean is "we can't know the datalog content". For example, I KNOW the format of an ASCII file before I even open the file because the format has been specified. I just don't know the contents of the file itself. If there were no format specification, then why would we bother giving these kinds of files a unique name like "ASCII"?

 

Let's take Dennis's approach by using the Read Datalog File Example and Write Datalog File Example that LabVIEW provides. If the Read From Datalog function is given a cluster, it must convert that cluster into physical memory locations in order to read data from the datalog file. The header appears to have some kind of table of contents describing each field in a record. If this record mapping were documented, then any program could read and create datalogs (without needing LabVIEW). The same goes for the Write to Datalog File Example: the VI must know how to parse the cluster in order to write the data into a specific format, otherwise the data would be unrecoverable.

 

Others have asked similar questions regarding datalog files and AEs have alluded to the confidential nature of them (see below), but this is irrelevant if you know the format. Recall that a binary file appears pretty confidential until you are given information about what the bits actually mean (e.g. the first 512 KB is a header, followed by 64 KB records, with each record beginning with a 32-bit address pointing to the next record, etc.).

 

Title: Retrieving datalog files

http://forums.ni.com/ni/board/message?board.id=170&message.id=90990&requireLogin=False
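Craig's made-up layout is easy to turn into code, which is really his point: once a format is specified, any language can walk it. The sketch below follows exactly the hypothetical example above (512 KB header, 64 KB records, each record starting with a 32-bit offset to the next one, with 0 assumed to mark the last record) and has nothing to do with the real, undocumented datalog header.

/* Walking the purely hypothetical layout described above: a 512 KB header,
 * then 64 KB records whose first 4 bytes hold the absolute offset of the
 * next record (0 is assumed to mark the last one). This is NOT the datalog
 * format; it only illustrates "format known, content unknown". */
#include <stdio.h>
#include <stdint.h>

#define HEADER_BYTES  (512L * 1024L)
#define RECORD_BYTES  (64L * 1024L)    /* fixed record size in this made-up format */

static uint32_t read_be_u32_at(FILE *f, long offset)
{
    unsigned char b[4];
    fseek(f, offset, SEEK_SET);
    fread(b, 1, 4, f);
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(void)
{
    FILE *f = fopen("hypothetical.dat", "rb");
    if (!f)
        return 1;

    long offset = HEADER_BYTES;                    /* first record follows the header */
    while (offset != 0) {
        uint32_t next = read_be_u32_at(f, offset); /* link to the next record */
        printf("record at offset %ld, next at %lu\n", offset, (unsigned long)next);
        offset = (long)next;
    }
    fclose(f);
    return 0;
}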

 

 

Craig

Message 14 of 17

If you want to read your data outside LabVIEW, you may pack the data as a C struct. As an example, we can use the struct below:

#include <stdint.h>

typedef struct {
    char    a;            /* 1 byte                                         */
    int16_t b;            /* 2 bytes, matching the 1+2+256 arithmetic below */
    char    buffer[256];  /* 256 bytes                                      */
} mystructure;
On disk this struct is 1 + 2 + 256 = 259 bytes wide (assuming no padding). If we want to parse it, we use position and size in bytes: the a value will always be the byte at index 0, and the buffer will always start at byte index 3. In LabVIEW this can be accomplished by using Flatten To String on each value and then concatenating them all together into one string. This string can be written to a binary file as a data record. If you do not use the EXT datatype in LabVIEW, this should work fine, but outside LabVIEW you may have to do some byte swapping. As an example, LabVIEW and C use the same data format for the SGL/float type, but the internal byte order is different (big/little endian).
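As an illustration of that byte swapping, the sketch below parses one such flattened record, assuming the big-endian, tightly packed 1 + 2 + 256 byte layout described above. The fixed offsets refer to the on-disk layout, not to sizeof(mystructure), which may include compiler padding.

#include <stdint.h>
#include <string.h>

/* Parse one record laid out as described above: a 1-byte a, a 2-byte
 * big-endian b, and a 256-byte buffer, 259 bytes total with no padding on
 * disk. The in-memory struct may be padded by the compiler, so the fields
 * are copied one by one rather than with a single memcpy of the struct.
 * 'mystructure' is the typedef shown earlier in this post. */
mystructure parse_record(const unsigned char raw[259])
{
    mystructure s;
    s.a = (char)raw[0];
    s.b = (int16_t)(((uint16_t)raw[1] << 8) | raw[2]);  /* big-endian to host order */
    memcpy(s.buffer, &raw[3], sizeof s.buffer);
    return s;
}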

good luck

 

 



Message 15 of 17

Did wpd ever receive the datalog file format specifications from NI?  I also have a need for this information because I have fallen victim to the 2 GB limit of datalog files.  Let me explain.

 

One of the Known Issues published by NI for LabVIEW versions 8.x, 2009, & 2010 is that datalog files are limited to 2 GB.  What is not explained is that Write Datalog will dutifully append records to a datalog file beyond the 2 GB limit without generating an error but then Read Datalog returns error code 4 (end of file encountered) when you try to access those same records.  Unfortunately, this problem went undetected in one of my applications for several weeks and I now have several hundred Production Quality Assurance records which are inaccessible.  This puts me in a very bad position with my Manufacturing clients as you can imagine.

 

The strategy I am pursuing is to split this datalog file into two parts, with each part being less than 2 GB in size, using the binary file I/O VIs, which readily read past the 2 GB limit.  I can successfully extract the bytes for each variable-length record because I can recognize my sequential Record IDs in the first field of the record clusters, but I have not yet successfully reconstituted the datalog header for the file containing the latter records, due to my ignorance of the datalog file specification.  The first reconstituted file is readable using datalog file I/O when I simply use the original datalog header and change byte location 0008:000B to hold the revised number of records.  But I am receiving error 116 (Unflatten or byte stream read operation failed due to corrupt, unexpected, or truncated data) when I use this same strategy with the second file containing the latter records.  Specifically, I believe I also need to revise byte location 000C:000F, which seems to contain information that is a function of the file size.  Hopefully I don't have to touch bytes 002C:0227, which contain the record type information.
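Based purely on the observation above that bytes 0008:000B hold the record count, and assuming that count is a big-endian I32 like other LabVIEW-flattened integers, patching a copy of a split file from outside LabVIEW would look roughly like the sketch below. None of this comes from a published specification, and it does not touch the 000C:000F bytes that also seem to matter.

#include <stdio.h>
#include <stdint.h>

/* Overwrite the record count that the post above reports at header bytes
 * 0008:000B, assuming it is a big-endian I32 (an assumption based on
 * LabVIEW's usual byte order, not on a published datalog specification).
 * Operate on a copy of the file, never on the only original. */
static int patch_record_count(const char *path, uint32_t count)
{
    unsigned char be[4] = {
        (unsigned char)(count >> 24), (unsigned char)(count >> 16),
        (unsigned char)(count >> 8),  (unsigned char)(count)
    };
    FILE *f = fopen(path, "r+b");
    if (!f)
        return -1;
    fseek(f, 0x08, SEEK_SET);          /* byte location 0008 in the header */
    fwrite(be, 1, 4, f);
    fclose(f);
    return 0;
}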

 

I also thought I might create an empty datalog file (of the correct type) first and then save the recovered records using Write Datalog, after applying Unflatten From String or Type Cast to the binary data using my custom record type, but this consistently generated error 116.

 

Is there a better strategy to solve my problem?  If not, can an AE or anyone else provide some more details on the datalog file format so that I can properly reconstitute the datalog file header information?

 

Thanks in advance.

 

Larry

Message 16 of 17