From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
11-14-2019 04:33 PM
This little VI is at the heart of the "compilation" phase of my program. Although it can write to a queue, in the case I'm describing Is Live is false, so it doesn't; it's only writing to a file. Behold the VI, behold the performance report. Look! Shortest 0.1 ms, average 10.5 ms, longest 66.9 ms. What on earth could be causing this simple file write to take such a radically unstable amount of time??
And then there are the Display and Draw columns, which seem to be responsible for most of the time. This VI is not normally open for viewing as the program runs. Why would it spend any time on displaying and drawing at all? Incidentally, this performance has grown steadily worse over the past couple of weeks in our development cycle, with no changes actually made to this particular VI.
11-14-2019 05:07 PM
File I/O can always be a source of performance issues, and its timing can vary wildly. If performance is an issue, I would remove the file I/O from your mainline processing. Create a small consumer task that handles all of the file I/O and runs in parallel. Use a queue to post the data that needs to be written to this logging task. What we do is output to file only once per second: in our typical task we simply flush the queue of pending data, format all of it at one time, and perform a single file write with all of the new data.
With this approach your mainline processing will not be impacted, and your file I/O will also be more efficient since you are writing more data at a time (assuming you are getting data faster than once per second).
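LabVIEW diagrams don't paste well into a text post, so here is the same batching pattern sketched in Python (the names, the sentinel convention, and the file path are illustrative, not from the posted VI): the time-critical producer loop only enqueues, while a parallel consumer flushes the queue and does a single write per batch.

```python
import queue
import threading

SENTINEL = None  # posted by the producer to tell the consumer to finish

def consumer(log_queue, path):
    """Drain the queue in batches and do one file write per batch."""
    with open(path, "w") as f:
        done = False
        while not done:
            batch = [log_queue.get()]            # block until something arrives
            while True:                          # then flush the rest of the queue
                try:
                    batch.append(log_queue.get_nowait())
                except queue.Empty:
                    break
            if SENTINEL in batch:
                done = True
                batch = [item for item in batch if item is not SENTINEL]
            if batch:
                f.write("".join(batch))          # a single write call per batch

# The time-critical producer loop only enqueues; it never touches the disk.
q = queue.Queue()
worker = threading.Thread(target=consumer, args=(q, "run.log"))
worker.start()
for i in range(1000):
    q.put(f"sample {i}\n")
q.put(SENTINEL)
worker.join()
```

The inner `get_nowait` loop plays the role of LabVIEW's Flush Queue: it grabs everything currently queued so the consumer amortizes formatting and I/O over many samples.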
11-15-2019 05:47 AM
You may also want to look at the TDMS file format instead of writing to a text file. TDMS is super-fast, is made for logging data like this, and is simple to start using: http://www.ni.com/product-documentation/3727/en/
11-15-2019 09:52 AM
Windows will delay disk writes by default, use a Producer/Consumer so your program does not have to wait for the write to complete.
Also, you say this VI has been getting slower and slower the longer you use it.
Could your file be growing and growing to the point where it is so big that it consumes more and more RAM every time you open it?
11-15-2019 11:06 AM
Clearly there is something wrong with this.
11-15-2019 12:29 PM
Replying to RTSLVU and Altenbach:
There would not be an issue of the file growing continually longer, because a new file is started on each run. And there have been numerous runs over the past two weeks.
Altenbach's questions:
Tell me more about this producer/consumer concept. (I really don't want to move away from text files, because having them easily readable is so useful at this stage of development.) So I should be using Format Into String and sending a series of strings via a queue to the consumer, which occasionally writes them to disk? That's basically buffered I/O, and I thought normal write operations did that already anyway. In C, for example, output is buffered unless you do special things to the open file to turn it off.
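For what it's worth, the C intuition above is right: most runtimes buffer writes by default, and that library-level buffering is separate from Windows' disk write cache. A quick Python illustration of the two modes (not LabVIEW, just to show the distinction; the C analogue of the unbuffered case is `setvbuf(fp, NULL, _IONBF, 0)`):

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Fully buffered (the default for files): writes accumulate in memory and
# only reach the OS when the buffer fills or the file is flushed/closed.
with open(path, "w", buffering=io.DEFAULT_BUFFER_SIZE) as f:
    f.write("buffered line\n")

# Unbuffered mode is only allowed for binary files in Python; every write
# call goes straight to the OS.
with open(path, "ab", buffering=0) as f:
    f.write(b"unbuffered line\n")

with open(path) as f:
    print(f.read(), end="")
```

Library buffering amortizes the cost of small writes, but each write call still crosses the runtime's I/O layer, which is why batching in your own code (the producer/consumer approach) can still help on top of it.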
11-15-2019 01:06 PM
I am sure there are better resources to explain Producer/Consumer in detail but...
In a nutshell:
11-15-2019 01:06 PM - edited 11-15-2019 01:07 PM
@Ken_Brooks wrote:
- Did you disable write caching?
- -- I don't even know how I would do that. Is that a LabVIEW thing or a Windows thing?
It's a Windows thing, and I really don't know what depends on it. I've never tried without it.
You do it from the disk drive properties in the device manager:
11-15-2019 01:20 PM
As mentioned in a previous post, Producer/Consumer is essentially two parallel loops: one acts as the producer of the data, the other consumes it. In your particular situation, take the VI that you posted and place a while loop around it. Take your test data and create a typedef'd cluster from it. In your main code, create a queue using your typedef and pass the queue reference to the subVI you just created. Inside the loop, use a dequeue to pull the data from the queue, format it, and write it to the file.
As I mentioned earlier, I prefer to have my log process act on time and handle multiple data elements at once by flushing the queue; when you flush a queue you get an array of all the elements currently in it.
My other preference is to create a typedef consisting of a string and a variant: the string is my command and the variant is the data. This way you can pass commands to your consumer rather than just data, which lets you have commands such as Exit so you can stop your consumer loop, plus any other commands that may be relevant to your situation.
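The string-plus-variant cluster described above can be sketched in Python as a queue of `(command, data)` pairs (the command names `"log"`, `"flush"`, and `"exit"` are examples, not a fixed convention):

```python
import queue
import threading

def logging_consumer(cmd_queue, path):
    """Consumer that interprets (command, data) pairs, mirroring the
    string-plus-variant typedef: the string selects the action, the
    payload carries whatever data that action needs."""
    with open(path, "w") as f:
        while True:
            command, data = cmd_queue.get()
            if command == "log":
                f.write(f"{data}\n")     # normal data path
            elif command == "flush":
                f.flush()                # push buffered text to the OS
            elif command == "exit":
                break                    # lets the producer stop this loop

q = queue.Queue()
t = threading.Thread(target=logging_consumer, args=(q, "events.log"))
t.start()
q.put(("log", "motor started"))
q.put(("log", "motor stopped"))
q.put(("exit", None))
t.join()
```

Because commands travel through the same queue as data, they are processed in order: the Exit arrives only after every pending log entry has been written.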
11-18-2019 08:22 PM
Okay, I went and did it: I built a producer/consumer version. Results, somewhat surprising:
1. The consumer loop, including the formatting and the file writing, took a NEGLIGIBLE amount of time. Disk writing issues were not it. The producer did not run ahead of the consumer, as I thought it might.
2. Turns out that there was a considerable cost to running with the Emit subVI OPEN. Closing it cut its time down from thousands to 46.7. STILL WITH A HIGH VARIANCE, which is really surprising to me.
3. This did not radically speed up the process of the program, which is what I was after!
I still wonder why Emit is taking vastly more time than the Consumer Loop, which seems to do so much more work. Any insights? Am I mostly looking at procedure call overhead? (Since the consumer loop is not repeatedly called, but stays running?)
The new code is attached, as is a new timing report, and the Typedef we've been needing.