04-23-2008 03:18 PM
04-23-2008 03:31 PM - edited 04-23-2008 03:36 PM
You are asking what would be the best format to store the data then?
You want small, but sortable?
I'd like to have my cake and eat it too 😄
You might consider programmatically breaking it up into smaller files. For safety purposes I would open, write, close (some of the write functions do that for you).
The design pattern should be a producer/consumer variant, with data being fed into a queue that another loop reads out of and writes to the file(s).
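A minimal sketch of that producer/consumer pattern, written in Python rather than LabVIEW so it fits in a post. One loop feeds samples into a queue; a second loop drains the queue and does an open, write, close around each record so a crash can't leave the file half-written. The names here (LOG_PATH, the sentinel, the sample values) are all illustrative, not anything from LabVIEW.

```python
import queue
import threading

LOG_PATH = "log.dat"   # illustrative file name
SENTINEL = None        # tells the consumer loop to stop

def producer(q, samples):
    # Acquisition loop: push each sample into the queue.
    for s in samples:
        q.put(s)
    q.put(SENTINEL)

def consumer(q):
    # Logging loop: drain the queue and write to disk.
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        # Open, write, close for each record -- for safety.
        with open(LOG_PATH, "a") as f:
            f.write(f"{item}\n")

q = queue.Queue()
t = threading.Thread(target=consumer, args=(q,))
t.start()
producer(q, [1.0, 2.5, 3.7])
t.join()
```

In LabVIEW the same shape would be two parallel while loops connected by a queue reference, which keeps slow disk writes from ever stalling the acquisition loop.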
I'm not entirely sure what would be smaller than writing the file as binary...
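To put a rough number on that point: a double written as raw binary is a fixed 8 bytes, while the same value printed as an ASCII line usually takes noticeably more. A quick illustration (the precision chosen here is arbitrary):

```python
import struct

value = 1234.56789012345

# Binary: one IEEE-754 double, little-endian, always 8 bytes.
binary = struct.pack("<d", value)

# Text: one ASCII line per sample at 11 decimal places.
text = f"{value:.11f}\n".encode()

print(len(binary), len(text))  # binary is the smaller of the two
```

So binary is already close to the floor on size; anything smaller would mean reducing precision or adding compression.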
EDIT: Add on
I have never really messed with TDMS, but it looks like it might be a good idea (again, I'd open, write, close -- for safety). Dynamically created file name, a new one for each day or something...
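One way to generate that per-day file name, sketched in Python; the base name and extension are just placeholders, and in LabVIEW you'd build the same string with Format Date/Time String:

```python
from datetime import date

def daily_log_name(base="daq_log", ext="tdms", day=None):
    """Build a file name that changes once per day, e.g. daq_log_2008-04-24.tdms."""
    day = day or date.today()
    return f"{base}_{day:%Y-%m-%d}.{ext}"

print(daily_log_name(day=date(2008, 4, 24)))  # daq_log_2008-04-24.tdms
```

The ISO date in the name also means a plain alphabetical sort of the directory lists the files in chronological order.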
04-23-2008 09:00 PM - edited 04-23-2008 09:01 PM
04-24-2008 08:00 AM
Thanks for both of these replies. I am switching files every other day at this point to keep files *relatively* small. I will try TDMS and see how it works. Jarrod, your point about being able to jump immediately to, using your example, the 100,000th point is key. I am using version 8.2.1, in case I didn't mention that.
Thanks again,
-JD
04-24-2008 08:07 AM
Logging to a database running on another machine will give you "your cake" and "let you eat it too". Provided the machine hosting the DB and the interconnecting network have adequate performance, 100 should be very doable.
Ben
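A sketch of what that database logging might look like. A real setup would point at a server database on the other machine; sqlite3 stands in here only so the example is self-contained, and the table and column names are made up. Batching the inserts into one transaction keeps per-row overhead low, and an indexed query gives the same "jump straight to point N" access mentioned earlier in the thread.

```python
import sqlite3

# In-memory stand-in for a networked server database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts REAL, value REAL)")

# Insert a batch of 100 samples in a single transaction.
rows = [(i * 0.01, i * 1.5) for i in range(100)]
with conn:
    conn.executemany("INSERT INTO samples (ts, value) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(count)  # 100
```

From LabVIEW the equivalent would go through the Database Connectivity Toolkit or an ADO/ODBC connection rather than a Python driver.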
04-24-2008 08:51 AM