Save high sampling rate data


I personally have not used our RAID solutions (HDD or SSD), as I'm not in that group; I was mainly interested in exercising DAQmx TDMS logging at maximum rates.  Note that my statement was a generalization, not specific to the NI 8260.  The team that puts together our RAID solutions is pretty awesome at finding the best components for the best sustainable throughput, and I would definitely recommend working with an Applications Engineer on the issues you're running into.  Configuration seems to have some of the biggest impact on your throughput, even with the best hardware.

 

You're right that an article would be good.

 

If you're interested, the primary benchmark I have done using TDMS from DAQmx was with the PXIe-6537 (a 50 MHz digital board that streams 4 bytes per sample, for a total of 200 MB/s of throughput).  I was able to use a single PXIe-8133 controller in a PXIe-1075 chassis with 12 PXIe-6537s streaming to four HDD-8265s configured for RAID 5.  I ran all of the boards at max rate sustainably, with CPU utilization around 5%, for a total throughput on that single system of 2.4 GB/s.  As digital devices do not currently support multi-device tasks, this was done with one device per task.
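For anyone wanting to reproduce a setup along these lines, here is a minimal sketch of one-task-per-device logging using NI's nidaqmx Python bindings; the device names, file paths, and port width are placeholders, not the actual benchmark configuration:

```python
# A sketch, not the benchmark code: one DAQmx task per digital device,
# each logging straight to its own TDMS file. Device names, paths, and
# the port width are placeholders.
import nidaqmx
from nidaqmx.constants import (AcquisitionType, LineGrouping,
                               LoggingMode, LoggingOperation)

devices = ["PXI1Slot2", "PXI1Slot3", "PXI1Slot4"]  # one entry per board
tasks = []
for dev in devices:
    task = nidaqmx.Task()
    # 32 lines on port0 -> 4 bytes per sample, as on the PXIe-6537.
    task.di_channels.add_di_chan(f"{dev}/port0",
                                 line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
    task.timing.cfg_samp_clk_timing(50_000_000,  # 50 MHz sample clock
                                    sample_mode=AcquisitionType.CONTINUOUS)
    # LoggingMode.LOG streams to disk without exposing the data to reads.
    task.in_stream.configure_logging(f"D:\\data\\{dev}.tdms",
                                     LoggingMode.LOG,
                                     operation=LoggingOperation.CREATE_OR_REPLACE)
    tasks.append(task)

for task in tasks:
    task.start()
# ... acquisition runs; 50 MHz x 4 bytes = 200 MB/s per device ...
for task in tasks:
    task.stop()
    task.close()
```

With LoggingMode.LOG, DAQmx streams the raw data straight to disk without ever copying it up to the application, which is consistent with the very low CPU utilization above.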

Thanks,

Andy McRorie
NI R&D
Message 31 of 39

wired-

 

I think I can help clarify some things about the SSD discussion going on in this thread.  I have been doing validation on SSDs over the past year and personally did the validation on the NI-8260 SSD.  All of your observations about SSD behavior are correct, and that's exactly what makes qualifying a solution for streaming so difficult.  Because of this, we spent a lot of time picking the drive vendor least susceptible to this behavior, specifically to significant dips in performance.  The drive we use is the Intel X25-E 64GB, which uses SLC NAND as opposed to the MLC NAND more commonly found in consumer drives.

 

http://www.intel.com/design/flash/nand/extreme/index.htm

 

Out of all of the drives we tested, the Intel drive had the best sustained performance over time.  The drives were always tested in a 'dirty' state and written to ~100% of capacity.  Many of the Intel drives we tested had over 100 TB written to them without any degradation in performance or any need for a secure erase (which returns a drive to its factory-fresh state).  As you noted, this is not the case for many SSDs on the market.  The majority of our testing is real-world, meaning that we use LabVIEW along with data acquisition hardware to get our numbers.

 

Here is an example of the NI-8260 SSD with a PXIe-8130 in a PXIe-1082 chassis.  This specific write filled the array to ~99% of capacity.  Each point on the graph represents 128 MB of data being written.

 

[Graph: NI-8260 SSD sustained write performance across the full array]

 

As you can see, the performance is fairly consistent over the entire array.  There are dips throughout the write, but many of them can be overcome with a reasonably sized software buffer.  We evaluated a number of other drive vendors, and this was by far the best result.  With other drives, specifically MLC-based drives, those dips were closer to 100 MB/s, as opposed to 300 MB/s here.
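To put a rough number on "reasonably sized": the buffer has to absorb the difference between the incoming data rate and the drive's dip rate for as long as the dip lasts. A back-of-the-envelope sketch (only the 100 vs. 300 MB/s figures come from the comparison above; the incoming rate and dip duration are assumed examples):

```python
# Back-of-the-envelope buffer sizing for riding out an SSD throughput dip.
# Only the 100 vs. 300 MB/s dip depths come from the drive comparison
# above; the incoming rate and dip duration are assumed examples.
incoming_rate = 200e6   # B/s arriving from the DAQ hardware (example)
dip_rate      = 100e6   # B/s the drive sustains during a dip (MLC-class)
dip_duration  = 2.0     # s, assumed worst-case dip length
buffer_needed = (incoming_rate - dip_rate) * dip_duration
print(f"software buffer >= {buffer_needed / 1e6:.0f} MB")  # -> 200 MB
```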

 

Regards,


Andrew Mierau

Project Engineer - RAID & Servers

National Instruments

Message 32 of 39

Thanks to both Andrews for the information, especially for the confirmation of the SSD performance problems with many vendors' drives.  In hindsight, I wish I had spec'd an NI RAID product, but they seemed so overpriced compared to what I could build myself. Now I know why 😉

 

I think it would be a great idea for you to write a knowledgebase article about this very issue, especially detailing your experiences in testing and choosing SSDs.  With consumer SSD prices becoming more reasonable, I can foresee many NI customers investigating this technology, and a clear, easy-to-find article detailing the potential pitfalls of SSDs would be extremely helpful in preventing 'performance problems'.  Maybe you can call it "The Little Blue Pill for SSDs" 😉

Message 33 of 39

Hello!

 

I am using an NI PCI-6110 and want to collect large amounts of data (20–40 million samples).

When I tried to use the same VI as lukko (in "Save high sampling rate data", lukko, 09-02-2010 05:24 AM)

 

http://forums.ni.com/t5/image/serverpage/image-id/22994i11C32453C91A849D/image-size/original?v=mpbl-...

 

an error appeared ("Not enough memory to complete this operation"), and the maximum number of samples written to the TDMS file was 19,349,504.

 

My sample rate = 1.25 MS/s

Samples per channel = 2 MS

Logging.FileWriteSize = 2048

Logging.PreallocationSize = 20,000,000
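For reference, here is roughly the same configuration expressed through NI's nidaqmx Python bindings; the channel name and path are placeholders, and the two logging property names are my best mapping of the DAQmx Logging.* properties listed above:

```python
# A sketch of the same settings via nidaqmx (Python); channel and path
# are placeholders, and the two logging property names are my best
# mapping of the DAQmx Logging.* properties listed above.
import nidaqmx
from nidaqmx.constants import (AcquisitionType, LoggingMode,
                               LoggingOperation)

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
task.timing.cfg_samp_clk_timing(1_250_000,                    # 1.25 MS/s
                                sample_mode=AcquisitionType.CONTINUOUS,
                                samps_per_chan=2_000_000)     # 2 MS buffer
task.in_stream.configure_logging("C:\\data\\acq.tdms",
                                 LoggingMode.LOG_AND_READ,
                                 operation=LoggingOperation.CREATE_OR_REPLACE)
task.in_stream.logging_file_write_size = 2048                 # Logging.FileWriteSize
task.in_stream.logging_file_preallocation_size = 20_000_000   # Logging.PreallocationSize
```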

 

 

So, my questions are:

 

1. Is it possible to acquire more than 19 million samples using TDMS streaming?

2. With TDMS streaming in continuous mode with a while loop, do I collect all available samples, with no loss of samples between loop iterations?


Thanks.

Message 34 of 39

Hey Serge2,

 

1. It is possible to acquire more samples. Check out DAQmx TDMS Data Logging - TDMS File Splitting for a good way to acquire the number of samples you need.

 

2. The data is written to a buffer, and you read from that buffer in the loop, so you won't lose any samples as long as you read fast enough that the buffer never overflows.
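A minimal sketch of both points together using NI's nidaqmx Python bindings (names and rates are placeholders; logging_samps_per_file is, as far as I know, the property behind the file-splitting approach in the linked article):

```python
# A sketch of both points: TDMS file splitting plus a read loop that
# drains the DAQmx buffer. Names and rates are placeholders.
import nidaqmx
from nidaqmx.constants import (AcquisitionType, LoggingMode,
                               LoggingOperation, READ_ALL_AVAILABLE)

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(1_250_000,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=2_000_000)
    task.in_stream.configure_logging("C:\\data\\acq.tdms",
                                     LoggingMode.LOG_AND_READ,
                                     operation=LoggingOperation.CREATE_OR_REPLACE)
    # Start a new TDMS file every 10 M samples instead of one huge file.
    task.in_stream.logging_samps_per_file = 10_000_000

    task.start()
    total = 0
    while total < 40_000_000:   # stop after 40 M samples
        # Reading everything available keeps the buffer from overflowing;
        # logging to disk continues regardless of what we do with the data.
        data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        total += len(data)
```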

 

Hope this helps,

Message 35 of 39

Hey Eric, thanks a lot. It works perfectly. I split the data into 10-million-sample TDMS files. In fact, the memory overflow happened because I tried to visualize a large amount of data after the acquisition. The second step is to assemble the data in LabVIEW for further analysis. I have attached the part of the VI where I read the TDMS files and concatenate the data. This gives me a maximum of 80 million samples in the "TDMS data" variable; trying to read more than this gives me a memory overflow.
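As a side note, an ~80-million-sample ceiling is roughly what you'd expect in a 32-bit process: 80 M samples of DBL data is about 640 MB, and LabVIEW needs the array as one contiguous block. For comparison, here is a minimal sketch of the same read-and-concatenate step using the third-party npTDMS Python package (the file names and group/channel layout are placeholders):

```python
# A sketch of the read-and-concatenate step with npTDMS; file names and
# the group/channel layout are placeholders.
import numpy as np
from nptdms import TdmsFile   # third-party npTDMS package

files = [f"C:\\data\\acq_{i:02d}.tdms" for i in range(8)]
chunks = []
for path in files:
    tdms = TdmsFile.read(path)
    channel = tdms.groups()[0].channels()[0]  # adjust to your file's layout
    chunks.append(channel[:])                 # channel data as a NumPy array
data = np.concatenate(chunks)                 # 80e6 float64 samples ~= 640 MB
print(data.size, "samples")
```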

 

 

So, is it possible to increase some kind of internal LabVIEW memory limit so I can store more data in variables?

 

 

Serge2

Message 36 of 39

Hi Serge2,

 

I also did some measurements at a relatively high sample rate (2.5 MS/s/ch), and this sometimes resulted in TDMS files bigger than 10 GB. Of course, it would be difficult to handle such an amount of data at once due to memory limitations. That is why I used the TDMS API in LabVIEW to save specified blocks of data to external (smaller) files.

 

Here is the screenshot:

[Screenshot: Read Large Files in LabVIEW]

 

The idea is to specify how many samples you would like to read at once (Samples to read) and which sample to start from (Offset). You can also enclose this structure in a loop and scan your large files for events. That is what I did in my case; once an event was found, I just extracted it to another file.
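For completeness, the same offset/length scan can be sketched outside LabVIEW with the third-party npTDMS package (the path, the group/channel layout, and the threshold event test are all placeholders):

```python
# A sketch of the block-wise scan with npTDMS; the path, the layout,
# and the threshold event test are all placeholders.
import numpy as np
from nptdms import TdmsFile   # third-party npTDMS package

BLOCK = 1_000_000                                  # "Samples to read"
with TdmsFile.open("C:\\data\\big.tdms") as tdms:  # streaming open: the file
    channel = tdms.groups()[0].channels()[0]       # is not loaded whole
    total = len(channel)
    offset = 0                                     # "Offset"
    while offset < total:
        block = channel.read_data(offset=offset,
                                  length=min(BLOCK, total - offset))
        if np.max(np.abs(block)) > 1.0:            # placeholder event test
            np.save(f"event_at_{offset}.npy", block)  # extract to its own file
        offset += BLOCK
```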

 

Hope it solves your problem.

 

Regards,

--

Łukasz

Message 37 of 39

Hi Łukasz,

 

Thanks for the help. Your idea is really useful and makes it possible to work with data of any size; I will surely use it. I would just like to add that for the analysis I use a simultaneous fit of the data, so I will first verify the maximum data size that LabVIEW can handle.

 

Sergei

Message 38 of 39

Hi Member,

 

Can you share your VI code for high-sampling-rate data acquisition and data logging?

Thanks. 

Message 39 of 39