09-07-2007 08:07 AM
09-07-2007 08:25 AM
09-07-2007 08:35 AM - edited 09-07-2007 08:35 AM
Message Edited by DFGray on 09-07-2007 08:37 AM
09-07-2007 08:58 AM
I’m not sure if these are covered already, but I think they may be worth mentioning...
1. Every call to Obtain Queue creates a new copy of the queue reference, so every Obtain Queue must be matched by a Release Queue. If you call Obtain Queue inside a loop, or scatter calls throughout the program, and never release those references, this can obviously become a problem.
2. To my understanding, the memory allocated to a given queue in LabVIEW can grow without bound as more data is enqueued. However, I have been told that this memory only grows; it is not deallocated as the queue flushes. So if at some point during your acquisition the disk writes fall behind, the queue grows quickly, since acquisition is uninterrupted. Even if the disk catches up and the queue returns to a reasonable size, the memory LabVIEW allocated for that queue's data remains allocated.
I don’t have much experience with high-speed data streaming in LabVIEW, so forgive me if I’m pointing out the obvious.
I would think the best way to achieve maximum throughput with limited jitter would be a combination of:
- a free-space defragment
- pre-allocation of a large, sequential data file on the disk
- limiting the Write Queue's memory footprint, both automatically with 'max queue size' and programmatically by limiting the size of each block of data enqueued
- and the obvious last (or first) step: more RAM and a RAID 0 setup.
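The pre-allocation idea can be sketched in Python (an illustration under my own assumptions, not LabVIEW code). Writing real zeros, rather than seeking past the end of the file, avoids creating a sparse file on filesystems that support them, so the blocks are genuinely reserved up front:

```python
import os
import tempfile

def preallocate(path, size_bytes, chunk=1024 * 1024):
    """Reserve size_bytes on disk by writing real zeros in chunks.

    Writing zeros (rather than seeking past the end) avoids creating a
    sparse file, so the space is genuinely allocated up front."""
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(b"\0" * n)
            remaining -= n

path = os.path.join(tempfile.gettempdir(), "stream_demo.bin")
preallocate(path, 256 * 1024)                 # reserve 256 KiB up front
assert os.path.getsize(path) == 256 * 1024

# Later, stream into the reserved region: the file size never changes,
# so the filesystem has no reason to fragment the file further.
with open(path, "r+b") as f:
    f.write(b"x" * 4096)
assert os.path.getsize(path) == 256 * 1024
os.remove(path)
```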
09-11-2007 07:33 AM
Well, y'all will get a chuckle out of what I found to be the problem with my timed loop slowing down (I, however, am not amused). I discovered it by adding indicators to other loop indices and trying to right-justify them.
I posted this issue in another thread, but in a nutshell, LabVIEW apparently creates a numeric spinner type when you right-click on a loop index in a while loop and do a Create Indicator. The created indicator looks just like a normal numeric indicator. I left it left-justified. When I came in the next morning after running my loop overnight, the loop index had incremented far enough that I was not seeing the least-significant digit, which is why it had 'slowed down' by a factor of ten. I believe I would have immediately known this to be the problem if the entire field had been filled with numbers, however, because LabVIEW created a spinner indicator, there is a blank area where the spinner control would normally be. I assumed that I still had not filled the field completely, and that the loop had slowed down.
In trying to diagnose the problem, I created indicators for other parallel loops' indices and tried to right-justify them. They wouldn't right-justify all the way! It turns out that if your default control style is set to System, LabVIEW creates the spinner indicator, and the 'blank area' is reserved for the spinner, which of course will never be seen since it's an indicator. I tried the same thing with my preference set to Classic, and LabVIEW did exactly what I expected: right-justifying goes all the way to the right.
So, let this be a warning to everyone who uses System style controls - you may not get what you expect from a numeric indicator.
09-11-2007 07:41 AM
Hi wired,
Does this mean all of the observations in this thread are null and void?
If not, how would you summarize your observations regarding performance?
Trying to stay on top of this,
Ben
09-11-2007 07:55 AM
No, Ben,
That was a completely different issue that I felt needed to be addressed before getting back to the original issue of streaming performance. I'm in a bit of a multi-tasking mode right now, so I haven't been able to spend much time on the performance issue. I did write my own driver for the 3rd-party card, so I am no longer calling a DLL; everything is being done with VISA calls.
One thing I did do that was helpful for my startup problem was to pre-allocate my flattened string queue and fill it with strings larger than my max expected size. So far, that has seemed to take care of the 'slow startup' problem. I need to perform some more tests, which I am running while working on another project. I will be sure to keep this thread updated with my results.
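A pattern-level sketch of that warm-up, in Python rather than LabVIEW (the sizes here are invented, and the memory-reuse benefit itself is LabVIEW allocator behavior that Python won't reproduce; the sketch only shows the enqueue-then-drain pattern):

```python
# Pattern sketch only: LabVIEW reuses the buffers a queue has already grown
# to, so filling the queue with worst-case-sized strings at startup and then
# draining it means steady-state enqueues never trigger a fresh allocation.
from collections import deque

MAX_BLOCK = 64 * 1024           # assumed worst-case flattened-string size
DEPTH = 8                       # assumed queue depth to pre-grow

q = deque()
for _ in range(DEPTH):          # warm-up: grow the queue's storage up front
    q.append(b"\0" * MAX_BLOCK)
while q:                        # drain: the queue is empty, but (in LabVIEW)
    q.popleft()                 # its buffers stay allocated for reuse

assert len(q) == 0              # ready for real data with no startup stalls
```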
Kevin
09-11-2007 07:57 AM
"I will be sure to keep this thread updated with my results."
5-stars for the promise.
Another 5 stars are waiting for your debrief.
Ben
09-14-2007 09:32 PM
03-24-2011 04:26 PM
One thing that we've run across as we've run many of our LV-delivered applications is file-system fragmentation. We test digitally-controlled actuators in a test environment, and we can have a couple of tests running at the same time, each streaming about 800 MB/day per actuator to disk. After running these and observing various hits to the quasi-real-time performance (all that you can expect under Windows XP; lots of RAM and a good graphics card are our defenses against degradation), we noticed that the disk was becoming badly, badly fragmented. (Tool of choice at the time was Piriform's "Defraggler".) After looking up the issue online ("XP file-system fragmentation" was one search), I wondered if pre-allocating a file size and writing to the pre-allocated file would help. Certain caveats would apply.
One of the challenges we faced was that these test programs were controlled externally (by a test chamber) and could run for an arbitrary amount of time. At 800 MB/day per device, we could accumulate two days' worth of data before a file's size crept past the 2 gigabyte limit (2^31 - 1 bytes). Logic was placed in the program to adjust how often the data file would get rolled so as to keep each file under that limit. This wasn't too hard. However, with two test programs running over a period of days, file-system fragmentation would always increase. It was not uncommon to see a 1.6 GB file with over 100,000 fragments. That was worrisome.
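The roll-over arithmetic from the paragraph above can be written down directly (a sketch; the 10% safety margin is my assumption, not from the original post):

```python
FILE_LIMIT = 2**31 - 1          # bytes: the classic 2 GB file-size ceiling
RATE_PER_DAY = 800 * 10**6      # ~800 MB/day per actuator, from the post

def roll_interval_days(rate_per_day, limit=FILE_LIMIT, margin=0.9):
    """Days between file rolls, keeping a safety margin below the hard limit."""
    return (limit * margin) / rate_per_day

days = roll_interval_days(RATE_PER_DAY)
assert 2.0 < days < 2.5         # about two days of data per file, as observed
```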
Assumptions about the file-system:
Caveats to using a pre-allocated data-storage file:
Results:
One thing we did was to sort of "manually" buffer the data to be written to disk, rather than let the operating system do it. After doing some arithmetic on the data sizes, we settled on a threshold so that writes would occur at roughly 1-second intervals. We aren't sure this was the right thing to do -- second-guessing an operating system is not always wise, unless one is sure of what one is doing.
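That threshold-based "manual" buffering might look like this in Python (a hypothetical sketch, not the actual code; `BufferedWriter` and the one-second-worth threshold are illustrative):

```python
import io

class BufferedWriter:
    """Accumulate blocks in memory; hit the disk only past a byte threshold."""
    def __init__(self, f, threshold):
        self.f = f
        self.threshold = threshold   # bytes per flush, e.g. data_rate * 1 s
        self.buf = []
        self.buffered = 0

    def write(self, block):
        self.buf.append(block)
        self.buffered += len(block)
        if self.buffered >= self.threshold:
            self.flush()

    def flush(self):
        if self.buf:
            self.f.write(b"".join(self.buf))  # one large write, not many small
            self.buf = []
            self.buffered = 0

sink = io.BytesIO()
w = BufferedWriter(sink, threshold=10)
w.write(b"1234")           # below threshold: nothing reaches the sink yet
assert sink.getvalue() == b""
w.write(b"abcdefgh")       # crosses threshold: both blocks flushed together
assert sink.getvalue() == b"1234abcdefgh"
```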