LabVIEW

What is "a lot" of DAQ samples in terms of RAM usage?

I'm using a USB-6001 DAQ to acquire sensor samples. This particular sensor needs to be calibrated before use, and it also needs to be polarized and "settle" before accurate measurements can be made. If it hasn't been used for a few days, the polarization/settling can take 3-4 hours; if it has been used recently and remained powered on, it's ready to go and just needs a few minutes for the calibration routine. I need to show the data for the whole time in either scenario.

 

The minimum sampling rate is 30 Hz; I'd probably prefer something like 120 Hz, but I'm concerned about the performance implications of 120 samples/sec for those times when it takes 3-4 hours to polarize. While that seems like a huge amount of data to "remember" to my own mushy brain, I have no idea whether it's a piece of cake for a modern PC. This is a single-channel analog acquisition.

Does it sound reasonable to just sample like that for 3-4 hours, keeping all the data on a shift register? Or do I need to come up with a way to display only the recent data at 30-120 Hz and either log the older data to disk or downsample it and keep it on the register? I would prefer not writing to disk, so my initial idea is to keep about 5 minutes of "high-frequency" data in memory and periodically downsample the older data, also keeping it in memory. At the end of this process I need to present the full 3-4 hours of data in a graph, but 1 sample/sec resolution is fine there.

After all this I started to wonder whether I simply have no conception of how taxing this is, and whether I could just keep things simple: keep the full 30-120 Hz data on a shift register and use it for both the live view and the final graph view. That would certainly simplify things for me. Assume a basic Core i5 CPU with 8 GB RAM, running Windows 10.
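For what it's worth, the "5 minutes at full rate, older data reduced to 1 sample/sec" idea sketched above can be expressed quite compactly. This is an illustrative sketch in Python (the real implementation would be LabVIEW, e.g. a lossy queue or circular buffer plus a Mean function); the 120 Hz rate, 5-minute window, and block-average decimation are the assumptions described in the question, not anything prescribed:

```python
from collections import deque

FS = 120                  # assumed acquisition rate, samples/s
RECENT_SECONDS = 5 * 60   # high-rate window to keep for the live view
RECENT_LEN = FS * RECENT_SECONDS

recent = deque(maxlen=RECENT_LEN)  # newest full-rate samples
history = []                       # older data, decimated to 1 sample/s
_evicted = []                      # full-rate samples aged out of the window

def add_samples(chunk):
    """Append newly acquired samples, downsampling whatever ages out."""
    for s in chunk:
        if len(recent) == RECENT_LEN:
            _evicted.append(recent[0])  # oldest sample is about to drop off
        recent.append(s)
    # Reduce each full second of evicted samples to its mean (1 sample/s).
    while len(_evicted) >= FS:
        block = _evicted[:FS]
        del _evicted[:FS]
        history.append(sum(block) / FS)
```

After a 4-hour run, `recent` holds the last 5 minutes at full rate for the live chart, and `history` holds everything older at 1 sample/sec for the summary graph.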

Message 1 of 3

Let's calculate:

  • 1 channel of analog data at a 120 Hz rate for 4 hours
  • 1 * 120 * 4 * 3600 = 1,728,000 ≈ 1.73M samples
  • Each sample is a double, which takes up 64 bits (8 bytes)
  • 1.73M samples take up only ~14MB of memory

Based on this calculation, 14MB is insignificant for a modern computer. As long as you follow best practices for array manipulation, you can minimize memory duplication and stay efficient.
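A quick sanity check of that arithmetic (here in Python just for illustration; `mem_bytes` is a made-up helper name):

```python
def mem_bytes(channels, rate_hz, hours, bytes_per_sample=8):
    """Bytes needed to keep every sample (DBL = 8 bytes) in memory."""
    samples = channels * rate_hz * hours * 3600
    return samples * bytes_per_sample

# 1 channel at 120 Hz for 4 hours:
# 1 * 120 * 4 * 3600 = 1,728,000 samples * 8 bytes ≈ 13.8 MB
print(mem_bytes(1, 120, 4) / 1e6, "MB")
```

Even at 10x that rate, you'd still only be in the hundreds of megabytes, well within 8 GB of RAM.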

 

Some articles to read,

https://www.ni.com/docs/en-US/bundle/labview/page/lvconcepts/memory_management_for_large_data_sets.h...

https://forums.ni.com/t5/Example-Code/Managing-Large-Data-Sets-in-LabVIEW/ta-p/4100668

 

Santhosh
Soliton Technologies

Message 2 of 3

As Santhosh points out, a simple pencil-and-paper calculation (or a small LabVIEW program that does the multiplication for you) answers "how many samples?": Channels × Samples/sec per channel × duration of the acquisition. If you also know how the samples are stored (i.e., bytes per sample), you can estimate the memory needed to hold all of them.

 

So if you needed to sample at 1 MHz for 4 days, that comes out to (on the order of) a million samples/sec x 86,400 sec/day x 4 days = 345.6 Giga-samples, which is almost certainly more RAM than you have on your PC.  But who says you have to keep it all in RAM?  You can spool the data to disk (a small investment gets you a terabyte or more of storage).  Once your 4 days are up, you read all of those data back in and process them "in batches" ...
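The spool-and-batch idea can be sketched in a few lines. This is a Python illustration (the LabVIEW equivalent would be writing binary/TDMS files from the acquisition loop); the function names and the batch size are arbitrary choices, not a prescribed API:

```python
import array

def spool(path, chunk):
    """Append a chunk of float64 samples to a flat binary file."""
    with open(path, "ab") as f:
        array.array("d", chunk).tofile(f)

def read_batches(path, batch_samples=1_000_000):
    """Read the spooled file back in fixed-size batches for processing."""
    with open(path, "rb") as f:
        while True:
            a = array.array("d")
            try:
                a.fromfile(f, batch_samples)
            except EOFError:
                pass  # a partial final batch is still loaded into `a`
            if not a:
                break
            yield a
```

Each acquisition-loop iteration calls `spool` with the latest DAQ read; afterwards, `read_batches` lets you process arbitrarily long recordings without ever holding more than one batch in memory.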

 

Bob Schor

Message 3 of 3