
Consumer Producer Loops

The suggestion from jwscs sounds like something definitely worth trying, especially if the main culprit is something internal to the TDMS API when used for longer time periods, as implied in the linked article.

 

The only other thought I had is that if the trigger rate is fast enough, memory usage may be growing simply because your physical disk can't keep up with the data bandwidth.  The data would then need to be cached *somewhere*, and as long as the excess data bandwidth persists, memory usage would keep growing.
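A rough back-of-the-envelope check of that bandwidth idea (a plain Python sketch rather than LabVIEW; the 2-byte sample width, channel count, and sustained SSD write rate below are assumptions, not measurements from your system):

# Does one trigger's worth of data fit through the disk before the next trigger?
SAMPLES_PER_TRIGGER = 200_000_000   # 200 MSamples per trigger (from this thread)
BYTES_PER_SAMPLE    = 2             # assumed 16-bit samples
CHANNELS            = 4             # assumed: 2 digitizers x 2 channels
SUSTAINED_MB_PER_S  = 400           # assumed sustained SATA SSD write rate

bytes_per_trigger = SAMPLES_PER_TRIGGER * BYTES_PER_SAMPLE * CHANNELS
seconds_to_flush = bytes_per_trigger / (SUSTAINED_MB_PER_S * 1e6)
print(f"{bytes_per_trigger / 1e9:.1f} GB per trigger, "
      f"~{seconds_to_flush:.0f} s to flush at {SUSTAINED_MB_PER_S} MB/s")
# If triggers arrive faster than that, the backlog has to live in RAM.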

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.
Message 11 of 16

jayantbala@gmail.com wrote:

 

  1. The sampling is based on a trigger. The time of occurrence of this trigger is random.
  2. The application requires capturing the signal for 1 sec after the trigger (= 200 MSamples @ 200 MSPS).
  3. The device acquires and stores the specified number of samples once per trigger event. The device has internal memory to store the samples.

A "Producer-Consumer" architecture seems overly complicated here (and requires lots of extra memory). You already have a "Producer" and a "Queue": your digitizer and its internal buffer. Just have a single subVI that reads the data and posts it to a TDMS file (preferably one channel at a time).
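In text form (a minimal Python sketch of that structure, since a block diagram can't be pasted here; fetch_channel and the output file are placeholders, not your driver's actual API):

import numpy as np

def fetch_channel(ch):
    """Placeholder for the digitizer driver's per-channel fetch call."""
    return np.zeros(1_000_000, dtype=np.int16)   # shortened dummy data

def acquisition_loop(n_triggers, channels=(0, 1)):
    with open("capture.bin", "wb") as f:          # or a TDMS writer
        for _ in range(n_triggers):
            # wait here until the digitizer reports a completed acquisition
            for ch in channels:
                f.write(fetch_channel(ch).tobytes())   # one channel at a time

acquisition_loop(n_triggers=2)

The digitizer's onboard buffer plays the role of the queue; nothing is held in host memory beyond the block currently being written.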

Message 12 of 16

A little bit of info on your system would've been nice.

I gather you have >14 GB of RAM, and possibly an SSD.

Depending on how your SSD is connected (could be PCIe, NVMe, SATA), you might write with my aforementioned method to multiple TDMS files in parallel and have a look at your throughput.
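If you want to see what the disk can actually sustain first, a quick probe along these lines (plain Python; file names and block size are arbitrary) writes several files in parallel and reports the aggregate rate:

import numpy as np, os, threading, time

block = np.zeros(50_000_000, dtype=np.int16)     # ~100 MB of dummy samples per file

def writer(path):
    with open(path, "wb") as f:
        f.write(block.tobytes())

paths = [f"probe_{i}.bin" for i in range(4)]
t0 = time.perf_counter()
threads = [threading.Thread(target=writer, args=(p,)) for p in paths]
for t in threads:
    t.start()
for t in threads:
    t.join()
dt = time.perf_counter() - t0
total_mb = len(paths) * block.nbytes / 1e6
print(f"{total_mb:.0f} MB in {dt:.2f} s -> {total_mb / dt:.0f} MB/s aggregate")
for p in paths:
    os.remove(p)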

 

There are also advanced TDMS VIs for asynchronous writes, but I think that would be overkill as well.

 

Try to go the simplest route with your architecture, e.g. a state machine.

Essentially you could just grab and dump in one go; no need for intermediaries like queues/globals.

It will depend a little on the minimum trigger interval and whether you can get rid of the data fast enough.

If your system is too slow to write to disk, you will eventually run out of RAM anyway.
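For what that state machine might look like in outline (a Python sketch only; state names and the dummy data are made up, and in LabVIEW this would be an enum-driven case structure inside a while loop):

from enum import Enum, auto

class State(Enum):
    WAIT_TRIGGER = auto()
    FETCH = auto()
    WRITE = auto()
    STOP = auto()

def run(max_triggers=3):
    state, block, done = State.WAIT_TRIGGER, None, 0
    while state is not State.STOP:
        if state is State.WAIT_TRIGGER:
            block = b"\x00" * 1024        # stand-in for "trigger fired, data ready"
            state = State.FETCH
        elif state is State.FETCH:
            # fetch from the digitizer's onboard memory so it can re-arm
            state = State.WRITE
        elif state is State.WRITE:
            # dump the block straight to disk, then go back to waiting
            done += 1
            state = State.STOP if done >= max_triggers else State.WAIT_TRIGGER

run()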

 


If Tetris has taught me anything, it's errors pile up and accomplishments disappear.
Message 13 of 16

@jwscs wrote:

A little bit of info on your system would've been nice.

I gather you have >14 GB of RAM, and possibly an SSD.

Depending on how your SSD is connected (could be PCIe, NVMe, SATA), you might write with my aforementioned method to multiple TDMS files in parallel and have a look at your throughput.

There are also advanced TDMS VIs for asynchronous writes, but I think that would be overkill as well.

Try to go the simplest route with your architecture, e.g. a state machine.

Essentially you could just grab and dump in one go; no need for intermediaries like queues/globals.

It will depend a little on the minimum trigger interval and whether you can get rid of the data fast enough.

If your system is too slow to write to disk, you will eventually run out of RAM anyway.

 


Hi,

 A. I am using a Keysight high-speed digitizer system consisting of:

  1. Two 2-channel M9203A digitizers, each with internal memory of 1 GSamples/channel. Each sample is 2 bytes wide. PXIe Gen2 x8 interface.
  2. An M9037A embedded controller: Intel i7 quad-core, 8 threads, 16 GB RAM, 240 GB SATA SSD, and 64-bit Embedded Windows 7. Two Gen3 x8 interfaces.
  3. An M9019A 24-slot PXIe chassis, supporting PXIe Gen3, to host the above.

B. I am using TDMS asynchronous writes.

C. An Acquire -> Fetch -> Write -> Acquire cycle is not possible in the target application, as the trigger is random. The trigger interval can be anything from under 1 sec to 30 min. The device must be ready to reacquire samples as soon as possible after each acquisition.

 

Meanwhile, I checked with a plain binary write. No memory growth.

So if I drop the idea of writing to a TDMS file and go for a direct binary write, what will I lose, given that my data set is simple raw samples?

 

jayant

 

Message 14 of 16

jayantbala@gmail.com wrote:

So if I drop the idea of writing to a TDMS file and go for a direct binary write, what will I lose, given that my data set is simple raw samples?


If you did not write any metadata or use a waveform data type, nothing really.
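If you do go raw, the one thing to keep straight yourself is how the bytes should be interpreted later. A small illustration (file and field names are made up) of pairing the raw dump with a sidecar holding what TDMS properties would otherwise have carried:

import json
import numpy as np

meta = {"dtype": "int16", "channels": 4, "sample_rate_hz": 200_000_000}

data = np.zeros((meta["channels"], 1000), dtype=np.int16)   # dummy samples
data.tofile("shot_0001.bin")                                # raw binary dump
with open("shot_0001.json", "w") as f:
    json.dump(meta, f)

# Reading back only works because the sidecar says how to interpret the bytes:
raw = np.fromfile("shot_0001.bin", dtype=meta["dtype"]).reshape(meta["channels"], -1)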


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 15 of 16

jayantbala@gmail.com wrote:


Hi,

 A. I am using a Keysight high-speed digitizer system consisting of:

  1. Two 2-channel M9203A digitizers, each with internal memory of 1 GSamples/channel. Each sample is 2 bytes wide. PXIe Gen2 x8 interface.

...

C. An Acquire -> Fetch -> Write -> Acquire cycle is not possible in the target application, as the trigger is random. The trigger interval can be anything from under 1 sec to 30 min. The device must be ready to reacquire samples as soon as possible after each acquisition.

 


With 1 GSamples/ch of onboard memory and 200 MSamples/trigger, can't your devices buffer 5 triggers' worth of data?
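The arithmetic behind that question, for completeness:

ONBOARD_SAMPLES_PER_CH = 1_000_000_000   # 1 GSamples per channel (from the post)
SAMPLES_PER_TRIGGER    = 200_000_000     # 200 MSamples per trigger
print(ONBOARD_SAMPLES_PER_CH // SAMPLES_PER_TRIGGER)   # -> 5 triggers of headroom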

Message 16 of 16