09-22-2009 03:15 PM
Hello!
I am using an NI PXI-4462 (4 analog inputs, 204.8 kS/s sampling rate).
I want to collect data from a force sensor (channel 1) and acceleration sensors (channels 2, 3, and 4).
I also want to save all of the data in a single text file.
So I made a block diagram and front panel; you can see them in the attached file.
The program works well at low sampling rates.
However, when I set the sampling rate as high as 204,800 S/s, the program gives me Error -200279.
I know what this error means, and I understand why it happens at high sampling rates.
I want to know how I can fix it.
Is there a problem in my block diagram?
Is there any way to save data at a high sampling rate?
I really want to set the sampling rate higher than 200,000 S/s.
I would really appreciate it if you could help me.
Thank you.
09-22-2009 03:47 PM
nh,
You have provided excellent documentation. What has happened is that the time it takes to execute the remaining portion of the loop allows the number of samples collected to exceed the size of the buffer you have provided (I am not sure exactly what that size is, but this will happen at high sampling rates), so samples are being overwritten. You might be best served in this case by switching to a producer-consumer architecture: keep the loop you have to acquire the data, but add a second loop that processes the data in parallel with the acquisition. The data is shipped from the producer to the consumer via a queue. One caveat: if you use an infinitely deep queue and the consumer starts to fall behind, then at the sampling rate you specify you will consume more and more memory. In that case, you will have to find a way to optimize your calculations or allow for lossy acquisition.
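In LabVIEW this pattern is two parallel while-loops joined by a queue. The same idea can be sketched in plain Python with the standard library (all names below are illustrative stand-ins, not part of any NI API; the "analysis" is just a block average):

```python
import queue
import threading

def producer(q, n_blocks, block_size):
    """Simulate a DAQ read loop: push fixed-size blocks of samples onto the queue."""
    for i in range(n_blocks):
        block = [float(i * block_size + j) for j in range(block_size)]
        q.put(block)           # blocks if the queue is full (back-pressure)
    q.put(None)                # sentinel: acquisition finished

def consumer(q, results):
    """Process/log blocks in parallel with the acquisition loop."""
    while True:
        block = q.get()
        if block is None:
            break
        results.append(sum(block) / len(block))  # stand-in for analysis/logging

q = queue.Queue(maxsize=64)    # bounded queue avoids unbounded memory growth
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, n_blocks=10, block_size=100)
t.join()
print(len(results))            # 10 blocks processed
```

Note the bounded `maxsize`: with an unbounded queue, a slow consumer silently eats memory, which is exactly the caveat above.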
Hope this helps. Matt
09-22-2009 09:40 PM
Thank you for your help : )
I manually set the input buffer size to 3,000,000 samples by calling 'DAQmx Configure Input Buffer.vi'.
You can see this in the red circle in the attached file.
Now I can save about 3 seconds of data even when the sampling rate is set as high as 204,800 S/s.
That is really good news for me!
Unfortunately, when I try to save, my computer gets really slow, and after three or four attempts it is unbearably slow.
Is there anyone who is collecting data at over 204,800 S/s with an NI device and saving it to a text file?
I would appreciate any advice. Thank you.
09-22-2009 11:22 PM
nh,
I collect and process data at similar rates and faster (250 kS/s - 2 MS/s) using the architecture I suggested above, with no missed acquisition periods; however, I reduce the data before writing it to file (in general, I write at most 80 points at 1 Hz). Writing to text files is computationally intensive - consider writing to binary or datalog files instead. You can find an example of the architecture I am talking about here. You might also want to try that example to see if you get better performance.
Cheers, Matt
09-23-2009 05:31 PM
As Matt said, converting binary data into ASCII is computationally intensive and is likely what is slowing down your loop rate. Doing this also results in much larger files, since every ASCII character costs a byte.
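A quick way to see the size difference: format the same samples as ASCII text and as packed binary doubles and compare byte counts. A minimal Python sketch (the 6-decimal text format is an arbitrary assumption, not what LabVIEW's file VIs use):

```python
import struct

samples = [0.123456 * i for i in range(1000)]

# ASCII text: one formatted value per line, 6 decimal places.
text_bytes = len("".join(f"{s:.6f}\n" for s in samples).encode("ascii"))

# Raw binary: 8 bytes per double-precision sample.
bin_bytes = len(struct.pack(f"{len(samples)}d", *samples))

print(text_bytes, bin_bytes)  # the text encoding is noticeably larger
```

And that is before counting the CPU cost of the number-to-string formatting itself, which is what hurts the loop rate.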
If you are looking to write the data to a binary file (which is what we would recommend), you should take a look at the new TDMS streaming feature introduced in DAQmx 9.0. The feature makes streaming data to disk incredibly simple (a single function added to your DAQmx task before starting it).
For some more information about the feature you can look at this forum post:
http://forums.ni.com/ni/board/message?board.id=250&message.id=52058#M52058
The driver will not be shipped until the next Device Drivers CD comes out, but you can download it from this link in the meantime. DAQmx 9.0 includes shipping examples that show how to use the TDMS streaming functionality.
Best Regards,
John
09-24-2009 11:13 AM
To reiterate what John said, your best solution is to use the TDMS feature in DAQmx 9.0.
To compare the amount of data written to disk between a text file and TDMS (with this feature): aside from the write speed, the TDMS file would be about 1/4 the size, which matters over time as disk space becomes a concern.
Of course, the disadvantage here is that TDMS is a binary file format that you can't simply open up in Notepad. That being said, we do provide a free add-in for Excel so that you can easily import the file into Excel. Similarly, you could use the TDMS API to read the file and write it into another format after the acquisition (we provide a C DLL, a LabVIEW API, a .Net interface, and even Matlab).
Now that I've said all this, you could use a text file, and you should be able to achieve the throughput that you are seeking. 10 MB/s is well within the bounds of a typical desktop hard drive, and a LabVIEW application can easily keep up with this throughput. As mentioned above, the only reason why you're having trouble is because you're trying to serialize the DAQmx Read, the analysis, and the disk writes whereas this application needs to take advantage of LabVIEW's parallel processing by configuring a producer-consumer design. In LabVIEW, if you select File>>New, under "From Template", you can find Frameworks>>Design Patterns>>Producer/Consumer Design Pattern (Data). This should provide a good starting point for how you would read from DAQ while simultaneously writing data to disk.
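The throughput figures above fall out of simple arithmetic: 4 channels at 204.8 kS/s. A back-of-the-envelope check in Python (the 12-characters-per-sample figure for formatted text is a rough assumption):

```python
rate = 204_800        # samples/s per channel
channels = 4
bytes_per_double = 8  # double-precision binary sample

raw = rate * channels * bytes_per_double
print(raw)            # raw binary bytes per second (~6.6 MB/s)

# Formatted text at roughly 12 characters per sample (digits + separator):
text = rate * channels * 12
print(text)           # ~9.8 MB/s, the "10 MB/s" ballpark mentioned above
```

Either way, the data rate itself is well within a desktop hard drive's capability; the bottleneck is serializing read, analysis, and write in one loop.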
09-25-2009 11:09 AM
I really, really want to say thank you to all of you, and especially Matt.
I learned a lot this time. I think the producer-consumer loop and TDMS are really powerful.
Thank you again. Have a great day!!!
07-27-2010 07:20 AM
Hello,
This topic seems to be the most appropriate one for my problem with high-sample-rate data logging. Let me describe it in a few sentences.
I am going to log 12 channels from 2 PXI-6133 DAQ boards placed in a PXI-1033 chassis. Everything is transferred to a portable computer (Intel Core 2 Duo T9400 @ 2.53GHz, 4GB 800MHz RAM, Windows Vista Enterprise 64-bit) over the PCI Express bus.
I would like to log data at a 2.5 MS/s/ch sampling rate, and I have observed some problems with it. Of course I get this error: "Attempted to read samples that are no longer available." At sampling rates around 1.5 MS/s/ch everything works fine; everything starts to fail around 2 MS/s/ch.
To log data I use the following structure:
- DAQmx Create Task
- DAQmx Create Virtual Channel (12 Channels, AI Voltage)
- DAQmx Timing (Continuous Samples, Buffer Size, Sampling Rate)
- DAQmx Configure Logging (Log, Open and Create)
- DAQmx Is Task Done
- DAQmx Clear Task
Do you have any ideas or suggestions on how to optimize the system to log at 2.5 MS/s/ch? I tried changing the buffer size, but that does not solve the problem. I observed that it is better to have an already-created file and just open it for writing instead of creating a new one.
Are the DAQmx task configuration and the logging method above the most efficient? By the way, is there any difference between defining the input buffer size in DAQmx Timing (Continuous Samples) and in DAQmx Configure Input Buffer?
Best regards,
Lukasz Kocewiak
DONG Energy Power A/S
07-27-2010 11:22 AM
Lukasz,
Regarding the input buffer size, you can check out this article here, which explains how the buffer is automatically sized if you do not specify a value (in most cases you shouldn't need to).
With respect to your attempt to log data: it seems you are starting to exceed the capabilities of most normal hard drives. Are all of those samples double precision, or are you using the unscaled 16-bit values? If the former, you are attempting to write about 240 MB/s - roughly 2.5x more data than the highest-performing hard drive handled in NI's benchmarking for their RAID article. Even if you are using 16-bit integers, you are still attempting to write about 60 MB/s, which puts most systems near their maximum write throughput. If it is absolutely necessary to retain all of that data (i.e. it cannot be decimated for storage), then increasing the input buffer size is unlikely to help: whatever buffer you allocate will inevitably fill, because each loop iteration leaves ever more data to write and the acquisition falls further behind. That being the case, you may want to consider using a RAID setup to save the data.
Hope all of that helps.
Peace, Matt