10-26-2022 09:04 AM
Hello again, CatDoe, and thank you for appending Smplepracticeexcelsave.vi. Here are some (nested) comments.
Bob Schor
10-26-2022 09:12 AM
Appending might seem "inefficient", but with the proper program architecture that is irrelevant.
10-26-2022 09:19 AM
Thank you!
10-26-2022 09:25 AM
Hello RTSLVU,
1. Unfortunately, I do not know that information. I could collect a single row of data or over 30,000 rows in a day; it simply depends on the operator. The speed of individual elements is instrument-dependent and varies over each collection period.
2. At this time, data is collected at various times in my message-handling loop and stored in a cluster. Then the cluster data array is sent to the file. Data cannot be collected in a single sweep because multiple instruments and steps are involved.
3. Right now I save a data array at the end of the program run (old method) or append a single row to a file (current method). This is not ideal, as a program crash leads to data loss, which is completely unacceptable in this system's use. Therefore, I need some way to keep most of my data. Losing a single row of data is "acceptable" if absolutely necessary, but losing more than that is unforgivable for this system.
10-26-2022 09:50 AM
As promised, here is a demo I whipped up.
This starts by defining the output file, here "Text Logger.txt", in LabVIEW's Default Data Directory (usually the LabVIEW Data folder). The first For loop creates an Array of 19 Strings, "Col 1", "Col 2", etc., and writes it (without appending, so it creates a New File) as the Header Row. The second For loop performs 100 "Appends": each iteration generates 1000 Rows of 19-Column data from the Random Number generator, formats the values with three decimals of precision (so something like "0.123"), and writes this 1000 x 19 2D Array to the end of the Log file.
When I run this, I get a file of 11,231 kB, a text file I can open and see that it has 19 columns and 10,001 rows (don't forget to count the Header!). How long did this take? Would you believe 20 seconds? Nope, less than 2 seconds. Fast enough for you? That works out to roughly 20 ms per 1000-row append, which means you can save one second's worth of data from 19 channels sampled at 1 kHz in about 0.02 seconds, not too bad ...
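For anyone who wants to see the pattern without opening the VI, here is the same idea in rough Python (a sketch only; the path, file name, and formatting details are illustrative, not the actual LabVIEW code):

```python
import random
from pathlib import Path

# Stand-in for a file in LabVIEW's Default Data Directory; the path is illustrative.
log_path = Path("Text Logger.txt")

# Header row: "Col 1" ... "Col 19", tab-delimited, written to a brand-new file.
header = "\t".join(f"Col {i}" for i in range(1, 20))
log_path.write_text(header + "\n")

# 100 "Appends": each one builds a 1000-row x 19-column block of random values,
# formatted to three decimal places, and adds it to the end of the file
# (open, append, close: the same thing the Write Delimited Spreadsheet VI does).
for _ in range(100):
    block = "\n".join(
        "\t".join(f"{random.random():.3f}" for _ in range(19))
        for _ in range(1000)
    )
    with log_path.open("a") as f:
        f.write(block + "\n")
```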
Bob Schor
10-26-2022 10:11 AM
@CatDoe wrote:
Hello RTSLVU,
1. Unfortunately, I do not know that information. I could collect a single row of data or over 30,000 rows in a day; it simply depends on the operator. The speed of individual elements is instrument-dependent and varies over each collection period.
2. At this time, data is collected at various times in my message-handling loop and stored in a cluster. Then the cluster data array is sent to the file. Data cannot be collected in a single sweep because multiple instruments and steps are involved.
3. Right now I save a data array at the end of the program run (old method) or append a single row to a file (current method). This is not ideal, as a program crash leads to data loss, which is completely unacceptable in this system's use. Therefore, I need some way to keep most of my data. Losing a single row of data is "acceptable" if absolutely necessary, but losing more than that is unforgivable for this system.
1. So the shortest interval at which you can reliably collect data would be the update rate of the slowest instrument. Any idea what that is?
2. Having multiple instruments is irrelevant. If there are "test steps" to perform between measurements, then those also determine the shortest interval at which you can save data.
3. Sounds like you have an inefficient program architecture to begin with. I don't think trying to kludge in a "faster file format" is going to fix your overall issues.
We really need to see your entire program, but I think a lot of your issues can be better handled by a Channeled Message Handler. Consider each of your instruments running in its own loop, continuously collecting data as fast as it can, and using the proper Channel Wire as a "lossy Queue" out of each instrument loop. Those Channel Wires will always contain the most recent data, and you can combine and save that data in other loops at the required interval, one line at a time.
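In rough text form, the pattern looks something like this (a Python sketch of the idea, not actual LabVIEW code; the instrument names, timings, and file name are invented for illustration):

```python
import random
import threading
import time
from collections import deque

# Each instrument runs in its own loop and drops its latest reading into a
# one-element lossy buffer, a rough stand-in for a lossy Channel Wire. The
# logger loop wakes up at the required interval, grabs the most recent value
# from each buffer, and appends one line to the log file.
channels = {name: deque(maxlen=1) for name in ("volts", "temp", "pressure")}
stop = threading.Event()

def instrument_loop(channel):
    while not stop.is_set():
        channel.append(random.random())        # placeholder for a real instrument read
        time.sleep(random.uniform(0.01, 0.1))  # each instrument runs at its own pace

def logger_loop(path, interval_s=0.5):
    with open(path, "a") as f:
        while not stop.is_set():
            time.sleep(interval_s)
            latest = [ch[-1] if ch else float("nan") for ch in channels.values()]
            f.write("\t".join(f"{v:.3f}" for v in latest) + "\n")
            f.flush()                          # a crash loses at most the row in flight

threads = [threading.Thread(target=instrument_loop, args=(ch,)) for ch in channels.values()]
threads.append(threading.Thread(target=logger_loop, args=("Log.txt",)))
for t in threads:
    t.start()
time.sleep(5)                                  # let the demo run briefly
stop.set()
for t in threads:
    t.join()
```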
10-26-2022 10:53 AM
@Bob_Schor wrote:
As promised, here is a demo I whipped up.
When I run this, I get a file of 11,231 kB, a text file I can open and see that it has 19 columns and 10,001 rows (don't forget to count the Header!). How long did this take? Would you believe 20 seconds? Nope, less than 2 seconds. Fast enough for you? That works out to roughly 20 ms per 1000-row append, which means you can save one second's worth of data from 19 channels sampled at 1 kHz in about 0.02 seconds, not too bad ...
You can do a lot better by avoiding Write Delimited Spreadsheet, which opens and closes the file each time it is called. Instead, use the file I/O primitives (Open/Create/Replace File to create the file, Write Text File to write the data, and Close File to close it) so that the file is only opened and closed once. Use Array To Spreadsheet String to format the data.
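To make the difference concrete, the same logging job restructured that way looks roughly like this (an illustrative Python sketch of the open-once structure, not the LabVIEW primitives themselves; the data and file name are placeholders):

```python
import random

# "Open once, write many, close once": the names in parentheses map to the
# LabVIEW primitives mentioned above.
with open("Text Logger.txt", "w") as f:       # Open/Create/Replace File (done once)
    f.write("\t".join(f"Col {i}" for i in range(1, 20)) + "\n")
    for _ in range(100):
        # Format each 1000 x 19 block as one delimited string
        # (the role Array To Spreadsheet String plays in LabVIEW).
        block = "\n".join(
            "\t".join(f"{random.random():.3f}" for _ in range(19))
            for _ in range(1000)
        )
        f.write(block + "\n")                 # Write Text File (no reopen per block)
# Close File happens once, when the with-block exits
```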
10-26-2022 11:11 AM
Thank you for suggesting the channel handler. In this case, I am restricted to making the most of the message queues. I cannot share the whole program due to privileged information.
10-26-2022 11:23 AM
Thank you for the demo, Bob_Schor. I will use this as a starting point and see if I run into any more questions.
10-26-2022 11:23 AM
The purpose of the "Demo" was to serve as a "gentle introduction" for @CatDoe, who appeared to have a periodic need to save data of some unknown size. LabVIEW's Write Delimited Spreadsheet does pretty much what @crossrulz suggests -- after collecting 1000 "data records", they are written, all at once, to an existing file by first going to the end of the file and then writing the data. It is a feature of this demo that the file is opened, written to, and then closed; the assumption is that data are being generated at a much slower rate than the writing speed, so the loop spends roughly 2% of its time opening, appending, and closing, and the other 98% waiting with the file safely closed. Sometimes "simpler" isn't so bad ...
Bob Schor