07-27-2010 11:54 AM
2.5 (Msamples / second) * (2 bytes / sample) * 12 channels = 60 MB/s of data. Like Matt says, this would be close to the limit of most 3.5" 7200 RPM SATA hard drives. A 5400 RPM drive or a 2.5" SATA drive wouldn't have a chance with this amount of data. Keep in mind that on a conventional hard disk, write speeds drop as the disk fills up, since the slower inner sectors must be used.
Assuming you have a suitable hard drive or RAID setup, you might want to try the Integrated TDMS Logging feature introduced in DAQmx 9.0. The following is an example of how to effectively use this feature:
Continuously Log Data to TDMS File
Using this feature bypasses application memory and writes to disk directly from kernel memory, which results in lower CPU usage and higher streaming performance (assuming the disk is physically capable of writing this much data).
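For reference, a minimal sketch of the same setup in the DAQmx ANSI C API might look like the following (error handling omitted; the channel string, rate, and file path are placeholders you'd adjust for your hardware):

    #include <NIDAQmx.h>
    #include <stdio.h>

    int main(void)
    {
        TaskHandle task = 0;

        /* Placeholder channel string and rate -- adjust for your hardware. */
        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "PXI1Slot2/ai0:7", "",
                                 DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 2500000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 250000);

        /* Integrated TDMS logging: DAQmx_Val_Log streams raw device data
           straight to the .tdms file without copying it into application
           memory; no explicit DAQmx Read calls are needed. */
        DAQmxConfigureLogging(task, "C:\\data\\log.tdms", DAQmx_Val_Log,
                              "Group", DAQmx_Val_OpenOrCreate);

        DAQmxStartTask(task);
        getchar();              /* log until a key is pressed */
        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }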
Best Regards,
07-27-2010 11:55 AM
Hi Matt,
Thanks for your fast response.
I have seen this article and have also tried the automatic buffer size; the results are similar. Normally I use an input buffer half the size of the sampling rate. I have just found that when configuring the buffer size manually, a multiple of eight times the sector size of the hard disk is recommended. For instance, if your sector size is 512 bytes, your buffer size might be 4,096 samples. I will try to play with it.
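In DAQmx C-API terms, the sizing rule might look like the sketch below (the 512-byte sector size is an assumption, and task is an already-configured analog input task):

    #include <NIDAQmx.h>

    /* Sketch: pick a buffer of roughly half the per-channel sample rate,
       rounded down to a multiple of 8 x the sector size (8 * 512 bytes
       -> 4096 samples). Assumes a 512-byte volume sector size. */
    static int32 configureBuffer(TaskHandle task, float64 ratePerChan)
    {
        const uInt32 quantum = 8 * 512;                /* 4096 samples */
        uInt32 bufSamps = ((uInt32)(ratePerChan / 2) / quantum) * quantum;
        if (bufSamps < quantum)
            bufSamps = quantum;
        return DAQmxCfgInputBuffer(task, bufSamps);    /* samples per channel */
    }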
To log data I use DAQmx Configure Logging. In this case NI-DAQmx streams data directly from the device buffer to the hard disk. NI-DAQmx improves performance and reduces the disk footprint by writing raw data to the TDMS file and storing the scaling information separately for use when reading the TDMS file back. So I do not have to think about data representation. This seems to me to be the most efficient solution.
Regarding hard disk performance, I use an internal hard disk. I need to save only around one minute of data, so I would not expect files bigger than 3.3 GB (assuming I16). I also expect that DAQmx does something with the LSBs, so the file size could be even smaller given the 14-bit ADC in the PXI-6133. The PC supports SATA; even if it is revision 1.0 (1.5 Gb/s), the transfer rate should not be less than 140 MB/s and I need only around 56 MB/s, so the interface does not seem to be the problem. Previously I used an external HDD with a USB 2.0 interface, and I have to admit that bandwidth was definitely too narrow.
I did some tests and observed that the problem gets bigger when I use DAQmx multiple-board support within the same task. When I log 8 channels at 2.5 MS/s/ch from the same board, everything goes smoothly and the circular buffer is not overwritten. But everything changes when I log 4 channels from the first board and another 4 channels from the second board. This cannot be handled and I get Error -200279. The problem seems to be with the chassis controller itself.
Fortunately I have two chassis, so I will try to synchronize two PXI-6133 boards in two different PXI-1033 chassis. Right now I am trying to figure out how to do this. This should solve the problem. Do you have any other ideas on how to handle it in a simpler way?
Thanks!
--
Lukasz
07-27-2010 01:19 PM - edited 07-27-2010 01:24 PM
Hey Lukasz,
Using this feature is the fastest way to stream data to disk. We use non-buffered file I/O, asynchronous (overlapped) I/O, single buffering, and raw data to optimize performance.
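Roughly speaking (this is an illustration of the technique, not our actual source), the file handle underneath looks like this in Win32 terms:

    #include <windows.h>

    /* Illustration only: FILE_FLAG_NO_BUFFERING bypasses the OS file
       cache (so writes must be sector-aligned), and FILE_FLAG_OVERLAPPED
       lets one write complete while the next buffer is being filled. */
    HANDLE openStreamingFile(const wchar_t *path)
    {
        return CreateFileW(path, GENERIC_WRITE,
                           0,                   /* exclusive access */
                           NULL, CREATE_ALWAYS,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED,
                           NULL);
    }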
My strong suspicion in this case is that you're running right around the maximum rate of your hard drive. For example, if your drive writes at 50 MB/s, it's not going to be able to sustain this acquisition. This hypothesis is further supported by your observation that starting from an existing file seemed to help.
One sanity check would be to measure your disk write speed with a disk speed utility; I found this one with a quick Google search: http://crystalmark.info/download/index-e.html
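If you'd rather measure it yourself, a rough sequential-write test along these lines would also do (unbuffered Win32 writes; treat the printed number as a ballpark, not a spec):

    #include <windows.h>
    #include <malloc.h>
    #include <stdio.h>
    #include <string.h>

    /* Rough benchmark: write 1 GB in 1 MB sector-aligned chunks with OS
       caching disabled and report the average sequential write rate. */
    int main(void)
    {
        const DWORD chunk = 1 << 20;                 /* 1 MB per write */
        const int   count = 1024;                    /* 1 GB total */
        void *buf = _aligned_malloc(chunk, 4096);    /* NO_BUFFERING needs alignment */
        memset(buf, 0xA5, chunk);

        HANDLE h = CreateFileA("benchmark.tmp", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_FLAG_NO_BUFFERING, NULL);
        DWORD t0 = GetTickCount();
        for (int i = 0; i < count; i++) {
            DWORD written;
            WriteFile(h, buf, chunk, &written, NULL);
        }
        DWORD elapsedMs = GetTickCount() - t0;
        CloseHandle(h);
        DeleteFileA("benchmark.tmp");
        _aligned_free(buf);

        printf("~%.1f MB/s sequential write\n",
               count * 1000.0 / (elapsedMs ? elapsedMs : 1));
        return 0;
    }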
It might be just that you're not keeping up due to jitter. That is, if your disk can sustain around 61 MB/s, maybe it occasionally drops to 55 MB/s for a couple of seconds, which kills the entire log as the buffer fills up and can't recover. If that is the case, you might be able to increase the buffer size (as you noticed, to an even multiple of 8 times the volume sector size, typically 4096). Note that if it's not evenly divisible, you will get a warning in LabVIEW telling you a recommended size (based on your volume sector size).
I'd just like to reiterate that the probability is extremely low that the software is the problem in your application. That is, it's most likely a hardware limitation, either with the bus or with the hard drive. There are a couple of things you can do in software to help alleviate potential hardware problems: one is increasing the buffer size, as I mentioned above. The other two software optimizations will be available in an upcoming DAQmx release, which will let you customize two attributes:
1) File pre-allocation: You can specify a file pre-allocation size. If a file is pre-allocated, the hard drive can usually perform a little better since the space is already reserved.
2) Custom write sizes: By default, DAQmx picks a "good" write size (the block size in which data is written to disk). Some drives, however, prefer specific sizes. This attribute will allow you to play with that size and see if your hard drive likes some sizes better.
If, however, the rate is not sustainable on that hard drive, you should probably upgrade to something like a RAID. Even if you're right at the edge of the drive's performance, keep in mind that it's not a very sustainable solution. For example, as a drive starts writing toward the inner rim, performance will begin to degrade significantly. Even if increasing the buffer size seems to help, you'd definitely want to let it run for a while to make sure the rate can be maintained.
07-27-2010 01:36 PM
Hi,
Thanks for your fast replies!
You are right. This seems to be close to the HDD transfer rate. I am just wondering about this phenomenon. With one board and 8 AI channels there will be around 38 MB/s, and this can be saved easily. As I mentioned in the previous post, when I try to log the same amount of data using 2 boards (4 channels per board), I get Error -200279. What might be the reason?
I have found that a stable solution for my system is 12 channels within one task at 1.5 MS/s. This gives around 35 MB/s and will perhaps be good enough.
The best way seems to be to use two separate measurement units and log 6 channels per unit. I have tried to find out how to synchronize two chassis with the PXI-6133, TB-2709 and SMB-210, but I am not really sure it can be done. Do I need to use something like a timing and synchronization board, or could it be done using only the DAQ boards?
Best regards,
Lukasz
07-27-2010 01:46 PM
I see what you are curious about.
I can think of two possible causes for this phenomenon:
Other than those two ideas, I'm not sure why you would be experiencing that.
08-12-2010 12:51 PM
Hi,
Thank you for all responses.
I decided to use two separate measurement units with a PXI-6682 timing and synchronization board, in order to trigger measurements according to a timestamp.
Right now I save 8 channels at 2.5 MS/s/ch using DAQmx Logging and a PXI-6133. That is around 38 MB/s (assuming 16-bit samples). I have observed that sometimes it saves without any problems, but sometimes it behaves unpredictably and Error -200279 appears.
What would you recommend to improve the overall performance and make the system more reliable? I was wondering whether an SSD would solve the problem. Is there any way to optimize the OS (Windows Vista Enterprise 64-bit)? How about higher priority settings for a run-time application?
According to the benchmark software mentioned in the previous post, my HDD can write at 52 MB/s. This should be enough if I want to save 38 MB/s, but sometimes it turns out to be insufficient.
Best regards,
Lukasz
08-13-2010 01:30 PM
Hello Lukko,
Unfortunately, Windows doesn't really allow you to optimize it or set the priority of an application. Since the Windows environment is non-deterministic, the time to execute the same code will differ every time it is executed. These properties can be controlled in a Real-Time OS, where you can force determinism (your code will execute in the same amount of time every time). A Real-Time OS is designed to run applications with very precise timing and a high degree of reliability. I would review the link about the OS and see if you would be interested in this.
For your current application, I think we should monitor the CPU usage. We can see if Error -200279 occurs at the same time the CPU usage starts to max out. We can then try to stop any processes that are taking up too much of the CPU and causing your program to slow down.
08-15-2010 08:03 PM
Hey Lucasz,
First of all, an RT operating system will not help you push the limits further with high speed data logging. In fact, Pharlap OS lacks some of the file I/O features to really get to max rate (like overlapped I/O, non-buffered streaming); as well, it doesn't have RAID support. That being said, it's perfectly reasonable that an RT OS wouldn't have these features since it's designed for determinism, and file I/O is a non-deterministic operation (though, with non-buffered file I/O and overlapped I/O, the jitter is reduced and determinism is more attainable).
All that being said, there are two features in DAQmx 9.2 (which is now released) that should help you push the limits a little bit more (see the sketch after this list):
1) Custom write sizes: specify the chunk size in which data is written to disk. While the automatically selected write size is optimal in most cases, sometimes you can eke out slightly better performance for your particular hardware configuration by using a smaller value than the default. Just make sure that this value is evenly divisible by the volume sector size (typically 512). I would suggest starting with 512 and working up from there in increments of 512.
2) File pre-allocation: you can specify the file size ahead of time so the file is pre-allocated. This might gain you enough performance to get over the hump.
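In C-API terms, a sketch might look like this (assuming the DAQmx 9.2 C accessors mirror the LabVIEW Logging properties; call these after configuring logging and before starting the task):

    #include <NIDAQmx.h>

    /* Sketch of the two DAQmx 9.2 logging properties described above.
       Both values are in samples; 512 and the one-minute figure are
       starting-point assumptions, not recommendations. */
    static void tuneLogging(TaskHandle task)
    {
        /* 1) Custom write size: keep it evenly divisible by the volume
           sector size (typically 512) and work up in steps of 512. */
        DAQmxSetLoggingFileWriteSize(task, 512);

        /* 2) File pre-allocation: reserve the space up front, e.g. one
           minute at 2.5 MS/s per channel. */
        DAQmxSetLoggingFilePreallocationSize(task, 60ULL * 2500000ULL);
    }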
These two features are both properties that you can set on the DAQmx Read property node (under Logging). If these don't do the trick for you, it's time to consider other hardware (most likely hard drives). Solid state drives are definitely awesome and can reach some pretty good rates; however, there's a big drawback. A solid state drive (as they exist today) can sometimes go "out to lunch": while its rate over time is up there, occasionally there can be some "dry spells". For normal computer usage (running programs, installing, etc.) this is acceptable; for high-speed data logging, it can sometimes hose your whole acquisition. I haven't played with them enough to characterize how much or how often, but this is what I have heard.
If you find that your current hard drive isn't working for you and solid state doesn't work out, I would definitely recommend a RAID solution (even a cheaper, lower-rate one). In general, a RAID will sustain rates better and can provide data redundancy (like RAID 5), so it's OK if a drive fails.
08-27-2010 01:52 AM
Hey Andy,
Thanks for your extended reply.
Yesterday I installed the newest DAQmx 9.2 (NIDAQ920f1) and I will try to play with the new features you mentioned above. I will let you know about the results.
Also yesterday I carried out some transfer tests on both the SSD and the HDD. It turned out that the SSD performs better in almost all cases, but the most crucial metric, the sequential write speed, was lower than that of the HDD. Please take a look at the results presented below.
SSD (the left one) compared with HDD (the right one)
Below I also include both drives' specifications.
Right now I am a little confused, because an SSD (at least this one) does not seem to be appropriate for my application. I am wondering if anything can be improved in the case of the SSD. Both drives use the SATA/300 transfer mode, so the PC itself should not be a problem. The SSD was tested in a Dell Adamo and the HDD in an HP EliteBook 6930p.
Best regards,
Lukasz Kocewiak
08-30-2010 02:08 PM
Hello Lukasz,
At this point, I think we should look at the specifications for this device to see whether this is what we should expect. If it isn't, we would need to ask the makers of the device whether they have any suggestions as to why this is occurring. If it turns out that it isn't going to be fast enough, I think it would be best, as Andy said, to get a RAID so we can achieve a write speed fast enough to make this design work.