I have a streaming-to-disk application that combines data from multiple PXI modules into a single streaming file (not TDMS). I'm using a PXIe-1075 rack with MXIe, a PXI-6123, and three PXIe-6124s. Streaming is done to the NI 8260 4-disk software RAID. I most recently was working with this app in LabVIEW 8.6.1, with DAQmx 8.9.
I was running a number of high-throughput tests and was finally ready to distribute the app when LabVIEW 2009 came in the mail. I figured I might as well install and upgrade, so I upgraded both LabVIEW and the device drivers, including DAQmx. Much to my surprise, I obtained a huge performance benefit in my throughput: CPU utilization dropped from the mid-80s to 50% when streaming all four cards at their maximum rates (104 MB/s aggregate). Curiously, though, the executable I had built in 8.6.1 saw a decrease in throughput, i.e. it could no longer perform the acquisition at all (100% CPU utilization).
I'm just curious if anyone else has experienced this. Although I am pleased with the performance increase, I am disconcerted by the lack of explanation for it. NI touted no performance improvements for LV2009, with the exception of the parallel For Loop, which I am not using (the only two loops in the app that could be parallelized are not time-critical).
Looking only for feedback from end users, not from NI applications engineers (unless you care to reveal that a performance-limiting bug was removed).
LV2009 allocates any array larger than 1 MB on a 4 KB page boundary. Previous versions always allocated arrays with an 80-byte offset (similar to malloc()). This new behavior is not documented. The page alignment may increase performance by reducing overhead on the memory controller.
In addition, DAQmx 9.0 can write data directly to disk without pulling it into user memory, by DMA-ing from kernel buffers straight to the file. This would further reduce CPU utilization if that is a primary concern.
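For reference, that feature is exposed in the NI-DAQmx C API through DAQmxConfigureLogging (and DAQmx Configure Logging.vi in LabVIEW). A rough sketch, assuming an already-configured AI task; note that it logs to TDMS, so it only applies if you can change file formats, and the path and group name below are placeholders:

```c
#include <NIDAQmx.h>

/* Sketch: enable DAQmx 9.0 log-only streaming on an existing AI task.
 * DAQmx_Val_Log streams samples from the kernel buffer to the TDMS file
 * without copying them into application memory; use DAQmx_Val_LogAndRead
 * instead if the app still needs to see the data. */
int32 enable_direct_logging(TaskHandle aiTask)
{
    return DAQmxConfigureLogging(aiTask,
                                 "C:\\data\\stream.tdms", /* placeholder path */
                                 DAQmx_Val_Log,           /* log only, no read */
                                 "StreamGroup",           /* placeholder group name */
                                 DAQmx_Val_OpenOrCreate);
}
```

In log-only mode you also skip calling DAQmx Read entirely, which is where most of the CPU savings comes from.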